Conservatives are hopping mad about the perception that social media companies have trigger fingers when it comes to removing posts with right-leaning political content. Liberals loathe “hate speech” online and the posting of material they deem to be a threat to public safety, and want more of it removed. On the question of content moderation, lawmakers and federal courts are now tangled up like players in a game of Twister.
The First Amendment has long been understood to protect the right of social media companies, in the exercise of their own free speech, to make content moderation decisions without government interference. That settled principle is now being contested. A split in decisions of two federal circuit courts of appeal may lead to the U.S. Supreme Court taking the historic step of defining rules for how Facebook, Twitter, and other social platforms must moderate the stream of millions of daily posts.
Such a review became likely after Florida’s Attorney General filed a petition last week asking the Supreme Court to review a decision by the 11th Circuit Court of Appeals that struck down a Florida law prohibiting social media platforms from removing the posts of political candidates. The Republican AG was encouraged to make this move after the 5th Circuit Court of Appeals upheld a Texas social media law that bars companies from removing posts based on a poster’s political ideology.
The 5th Circuit’s decision reverses years of First Amendment law by holding that the government can restrict private speech (in this case, forcing social media companies to carry content it deems offensive) without violating the First Amendment.
Those arguing for a greater role for government in content moderation maintain that a handful of social media companies have a dominant role in the national online debate. If Amazon, for instance, decides to delist a book, that author loses access to the most robust sales platform for their speech. If Twitter removes a politician’s posts, it has meaningfully hindered that politician’s ability to respond in the national debate in real time.
Countering those arguments is the reality that alternatives to these platforms do exist. If someone no longer enjoys access to Twitter, there's always Facebook or other platforms upon which views can be disseminated. This includes the opportunity for prominent politicians to start their own social media services where they have total control over the content on their site.
Moreover, the dominance of these media platforms does not make them common carriers like providers of phone or email services. For example, unlike the phone company, social media companies under Section 230 of the Communications Decency Act are empowered to restrict access to material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The words “harassing” and “objectionable” provide a lot of room for interpretation.
Section 230 gives social media platforms of all sizes liability protection against lawsuits over items posted by users. Without this protection, thousands of commercial and non-profit sites would fold instantly, killing the business model of much of the internet.
Social media companies warn, not without reason, that being forced to post speech that goes against their written policies would not only constitute compelled speech (violating the First Amendment), but would also undermine their ability to keep their sites relatively clean. It could force U.S. social media to run Russian propaganda on Ukraine, neo-Nazi posts denying the Holocaust, and posts encouraging children to take up risky behaviors.
What does all this add up to? One thing is certain – the status quo has broken down.
“We are in a new arena, a very extensive one, for speakers and for those who would moderate their speech. None of the precedents fit seamlessly,” wrote Judge Leslie Southwick, who dissented from the 5th Circuit’s opinion. Supreme Court Justice Samuel Alito has stated that the issue “will plainly merit this court’s review.”
As much as we might criticize how social media companies moderate their content, they have an absolute right under the First Amendment to manage the speech under their purview. So how can we strike a new and better balance?
As the law evolves, we urge jurists and lawmakers to give deeper consideration to the principles behind the Platform Accountability and Consumer Transparency Act, sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD). The PACT Act would require social media companies to publish and adhere to clear standards for their content moderation decisions in exchange for receiving the liability protections of Section 230. It would also give users due process, allowing them to appeal for quick resolution of complaints.
There are more than 100 state bills currently pending that are along the lines of the Texas and Florida legislation. Instead of opening the door to the potential for government to mandate content moderation standards, we hope that the Supreme Court will reaffirm longstanding First Amendment law by allowing social media sites to make their own content moderation decisions. At the same time, however, Congress should take a harder look at modifying the terms of liability protection in exchange for clearer standards in how content is moderated.
The one set of principles that must not be modified is the First Amendment.