To have one’s posts removed by Facebook, Twitter, and YouTube is not, in a legal sense, a violation of one’s First Amendment rights. Facebook’s right to manage speech on its own platform is the same First Amendment right you have to manage speech on your own website.
While we want social media platforms to remove content that harms national security or incites violence, eviction from Facebook or any other platform that dominates the marketplace of ideas greatly limits the reach of one’s voice. Striking the right balance, finding the right formula for content moderation, has proven perplexing for industry and highly controversial for policymakers. The deletion of harmless content is frequently the result of artificial intelligence programs that fail to understand the nuances of human communication. Thus posts that document war crimes by terrorists in Syria are conflated with terrorism itself and removed.

In a new podcast from the Electronic Frontier Foundation, Cindy Cohn and Danny O’Brien talk with Daphne Keller of the Stanford Center for Internet and Society to explore the pitfalls of the current regime of content moderation, and how ideas for reform might make it better or worse.

In the early internet era, platforms for speech were distributed throughout society, prompting digital visionaries to wax poetic about the democratization of speech. Now internet speech is centralized within a few dominant platforms, and the perception is widespread that the national dialogue is being distorted. Why has this happened? Keller says:

“The sheer scale of moderation on a Facebook, for example, means that they have to adopt the most reductive, non-nuanced rules they can in order to communicate them to a distributed global workforce. And that distributed global workforce inevitably is going to interpret things differently and have inconsistent outcomes. And then having the central decision-maker sitting in Palo Alto or Mountain View in the U.S., subject to a lot of pressure from, say, whoever sits in the White House, or from advertisers, means that there’s both huge room for error in content moderation, and inevitably policies will be adopted that 50 percent of the population thinks are wrong policies.”

Keller, Cohn and O’Brien discuss possible solutions, including schemes to reduce internet standards of conduct to the “level of a local café” or town square. EFF’s podcast is a thoughtful discussion, one in which the speakers show great humility about the potential for suggested reforms to have serious unintended consequences.

For our part, Protect The 1st supports the Platform Accountability and Consumer Transparency (PACT) Act. This bipartisan Senate bill would require greater transparency from social media companies and some due process for consumers who’ve had content removed. The PACT Act would likely not be a comprehensive solution to the dilemma of internet content moderation. But enacting it would undoubtedly reveal paths to further improvements and refinements in how speech is moderated by a handful of companies, without compromising the First Amendment.