Lindke v. Freed

The U.S. Supreme Court is set to address several critical free-speech cases this term related to speech rights in the context of social media. One of those questions was recently settled, with the Court ruling on whether an official who blocks a member of the public from their social media account is engaging in a state action or acting as a private citizen. The answer: It depends on the context.
Writing for a unanimous Court in the case of Lindke v. Freed, Justice Amy Coney Barrett reaffirmed that members of the public can sue a public official where their actions are “attributable to the State” (consistent with 42 U.S.C. §1983). To make that determination, the Court issued a new test, holding that: “A public official who prevents someone from commenting on the official’s social-media page engages in state action under §1983 only if the official both (1) possessed actual authority to speak on the State’s behalf on a particular matter, and (2) purported to exercise that authority when speaking in the relevant social-media posts.”

This is a holistic analysis, consistent with the Protect The 1st amicus brief filed in O’Connor-Ratcliff v. Garnier. We argued that “no single factor is required to establish state action; rather, all relevant factors must be considered together to determine whether an account was operated under color of law.” That case, along with Lindke v. Freed itself, has now been vacated and remanded for new proceedings consistent with the Court’s new test.

As the Court acknowledges, when “a government official posts about job-related topics on social media, it can be difficult to tell whether the speech is official or private.” So the Court set down rules. A state actor must have the actual authority – traced back to “statute, ordinance, regulation, custom, or usage” – to speak on behalf of the state. However, should an account be clearly designated as “personal,” an official “would be entitled to a heavy (though not irrebuttable) presumption that all of the posts on [their] page were personal.” In Lindke v. 
Freed, the public official’s Facebook account was designated as neither “personal” nor “official.” Therefore, a fact-specific analysis must be undertaken “in which posts’ content and function are the most important considerations.” As the Court explains: “A post that expressly invokes state authority to make an announcement not available elsewhere is official, while a post that merely repeats or shares otherwise available information is more likely personal. Lest any official lose the right to speak about public affairs in his personal capacity, the plaintiff must show that the official purports to exercise state authority in specific posts.”

When a public official blocks a citizen from commenting on any of his posts on a “mixed-use” social media account, he risks liability for those posts that are professional in nature. Justice Barrett writes that a “public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.” It has always been good policy to keep official and private accounts separate. The public must be able to have access to government-issued information, whether through a social media account or a public notice posted on the door of a government building. Moreover, citizens should be able to speak on issues of public concern, whether through Facebook or in a public square. Officials – presidents and former presidents included – should take note.

A video depicting a recent interaction between an Oklahoma woman and three FBI agents has become a Rashomon-style meditation on the power of perception, with advocates and activists from across the ideological spectrum drawing their own object lessons from it. Review the video and you will see that the underlying issue at hand is fundamentally about the speech rights of an American citizen.
Here are the facts: Early in the morning of March 19, Rolla Abdeljawad of Stillwater, Oklahoma, answered her front door to find three FBI agents. Their purpose: To discuss some of the Egyptian-American’s Facebook posts. Abdeljawad is critical of Israel’s actions in the Gaza Strip. According to The Washington Post, she regularly refers to Israel as “Isra-hell” and calls the Israel Defense Forces “terrorist filth.” What she has not done is advocate for violence. You may find her posts unfair, but they do not rise to the level of a First Amendment exception, such as a true threat.

Abdeljawad proved herself savvy regarding her civil rights. She recorded her interaction with the FBI agents, in which they can be heard claiming that Facebook “gave us a couple of screenshots of your account.”

“So we no longer live in a free country, and we can't say what we want?” Abdeljawad responded.

“No, we totally do. That's why we're not here to arrest you or anything,” replied another agent. “We do this every day, all day long. It's just an effort to keep everybody safe and make sure nobody has any ill will.” (Emphasis added.)

The implication here is that the FBI undertakes door-knocking expeditions “every day, all day long” to grill civilians about their protected speech online so no one has “ill will.” If someone is not calling for violence, as is the case here, there is no reason for a visit from the FBI. After all, such a visit by armed agents will never be taken as a benign consultation. It can’t help but have a chilling effect on speech.

According to a report from Reason, “Meta's official policy is to hand over Facebook data to U.S. law enforcement in response to a court order, a subpoena, a search warrant, or an emergency situation involving ‘imminent harm to a child or risk of death or serious physical injury to any person.’” Clearly, judging from Abdeljawad’s encounter with the FBI, that policy can be misconstrued or ignored entirely.
Law enforcement should never be harassing rank-and-file citizens over protected speech. Abdeljawad’s lawyer, Hassan Shibly, posted the video of the interaction across platforms, along with some good advice for others who may find themselves with unwanted visitors bearing FBI badges and spurious questions.
Americans should not accept as routine government agents coming to our homes to question us about opinions they find abrasive. There is no federal bureau of civil discourse, nor should there be in a First Amendment society.

The recent House passage of a bill to force the sale of TikTok from its Chinese parent company – or suffer an outright ban – triggers obvious questions about the First Amendment. Many of our fellow civil liberties organizations have come to TikTok’s defense, making the point that if the government can silence one social media platform, it can close any media outlet, newspaper, website, or TV channel.
They point to many of TikTok’s strongest critics, who accuse it of pushing China’s line on sensitive issues and dividing Americans in what promises to be an especially heated election season. But our civil liberties allies remind us that the First Amendment protects all speech, no matter how divisive, even if it echoes foreign propaganda. That is fine as far as it goes, but there are other issues beyond the First Amendment in the TikTok debate.

Here is where we break ranks with some of our peers: We see real danger in TikTok’s accumulation of the personal data of its 150 million American users – including 67 percent of U.S. teens – and in how TikTok’s influence could harm the First Amendment by threatening the freedom of the press and the speech of users.

After reviewing results from a year-long, bipartisan investigation, the House concluded that TikTok is being used by Beijing to spy on American citizens. TikTok’s parent company, ByteDance, has a notorious relationship with the Chinese Communist Party (CCP). As we wrote last year, the Department of Justice and FBI have been investigating ByteDance over CCP access to Americans’ data. According to Emily Baker-White, a Forbes reporter who was herself surveilled by ByteDance, the department and the U.S. Attorney for the Eastern District of Virginia have hit the Chinese firm with subpoenas about its purported surveillance of U.S. journalists. The company’s data policies have led multiple states to ban the app on state employee devices.

It would be a flagrant First Amendment violation to ban a newspaper for its content. But what if a hostile power deliberately manufactured newspapers with arsenic dye, toxic to the touch? In such a case, First Amendment issues would be irrelevant. ByteDance is compelled by Chinese law to share all its data with the Beijing government, including its military and intelligence agencies.
Senators should determine whether the toxicity of the threats posed by TikTok's data practices and its relationship with the CCP necessitates action. This is not the first time the United States has forced a Chinese company to divest a social media platform. In 2020, the Committee on Foreign Investment in the United States raised the alarm about Kunlun Tech’s acquisition of Grindr, a popular LGBTQ dating app. The app already had a poor reputation for data security, but the committee was reportedly worried that the Chinese government could use personal data from the app to blackmail U.S. citizens, including government officials. The committee gave Kunlun a deadline by which it had to sell Grindr, and the app was sold back to an American owner.

Forcing a media outlet to sell or go out of business is a drastic action, not to be undertaken lightly. But as the Senate debates, we should keep in mind that there are issues at stake in the TikTok controversy that go beyond the First Amendment.

The U.S. Supreme Court heard oral arguments Monday in Murthy v. Missouri, a case addressing the government's covert efforts to influence social media content moderation during the Covid-19 pandemic. Under pressure from federal and state actors, social media companies reportedly engaged in widespread censorship of disfavored opinions, including those of medical professionals commenting within their areas of expertise.
The case arose when Missouri and Louisiana filed suit against the federal government, arguing that the Biden Administration pressured social media companies to censor certain views. The government responded that it only requested, not pressured or demanded, that social media companies comply. Brian Fletcher, U.S. Principal Deputy Solicitor General, told the Court it should “reaffirm that government speech crosses the line into coercion only if, viewed objectively, it conveys a threat of adverse government action.”

This argument seems reasonable, but a call from a federal agency or the White House is not just any request. When one is pulled over by a police officer, even if the conversation is nothing but a cordial reminder to get a car inspected, the interaction is not voluntary. Social media companies are large players, and an interaction with federal officials is enough to whip up fears of investigations, regulations, or lawsuits.

In Murthy v. Missouri, it just so happens that the calls from federal officials were not mere requests. According to Benjamin Aguiñaga, Louisiana’s Solicitor General, “as the Fifth Circuit put it, the record reveals unrelenting pressure by the government to coerce social media platforms to suppress the speech of millions of Americans. The District Court, which analyzed this record for a year, described it as arguably the most massive attack against free speech in American history, including the censorship of renowned scientists opining in their areas of expertise.”

At the heart of Murthy v. Missouri lies a fundamental question: How far can the government go in influencing social media's handling of public health misinformation without infringing on free speech? Public health is a valid interest of the government, but it can never serve as a pretext to crush our fundamental rights.
When pressure to moderate speech is exerted behind the scenes – as it was by 80 FBI agents secretly advising platforms what to remove – that can only be called censorship. Transparency is the missing link in the government's current approach. Publicly contesting misinformation, rather than quietly directing social media platforms to act, respects both the public's intelligence and the principle of free expression. The government's role should be clear and open, fostering an environment where informed decisions are made in the public arena.

Perhaps the government should take a page from Ben Franklin’s book (H/T Jeff Neal): “when Men differ in Opinion, both Sides ought equally to have the Advantage of being heard by the Publick; and that when Truth and Error have fair Play, the former is always an overmatch for the latter …” Protect The 1st looks forward to further developments in this case.

Noted fraudster and former Rep. George Santos made headlines last week when he sued television personality Jimmy Kimmel for – what else? – fraud after Kimmel broadcast a series of 14 personalized Cameo videos he had requested from the disgraced former congressman. It “may be the most preposterous lawsuit of all time,” said Kimmel. Yet, ironically, Santos’ former colleagues in Congress could soon be the ones to legitimize Santos’ claims.
Santos, expelled from Congress for a range of misdeeds including stealing from campaign donors and money laundering, is perhaps as ripe a target for satire as can be found in our astonishingly silly and self-centered era. Days after being removed from the U.S. House of Representatives, Santos joined Cameo, a video service where B-list celebrities record personalized messages for a few hundred bucks a pop ($500 in Santos’ case), providing catnip for comedians.

The No AI FRAUD Act now under consideration in Congress could make comedy a crime, or at least a tort. To be fair, the bill does address a real concern: it is intended to protect “Americans’ individual right to their likeness and voice,” providing actors, celebrities, and others the means to safeguard their image. But it sweeps far too broadly in addressing the pitfalls of artificial intelligence, an issue that has captured the minds – and anxieties – of lawmakers across the country.

The bill would create a federal right to a digital “replica,” akin to the right of publicity recognized under many state laws. It would allow a plaintiff to sue for commercial misappropriation of one’s likeness or voice. As written, the bill would “restrict a range of content wide enough to ensnare parody videos, comedic impressions, political cartoons, and much more” (H/T Reason). The bill’s sponsors, Reps. María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.), are right to be concerned about AI-generated fakes and forgeries, as we saw in the recent AI fake of President Biden’s voice during the New Hampshire primary. But their bill is overbroad, capturing all manner of media representations currently protected by the First Amendment.
Specifically, it prohibits any “replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or part using digital technology.” That means an “actual or simulated image … regardless of the means of creation, that is readily identifiable as the individual.” Read this again – any imitation of an individual created by digital technology. Such a law could cover photos, recordings, parodies – even political cartoons.

A host of organizations have spoken out on the No AI FRAUD Act, including the Motion Picture Association, which argues that the creation of a digital “replica” right would “constitute a content-based restriction on speech.” As the Motion Picture Association wrote in a letter to Congress, “the government has no compelling interest in restricting creative depictions of public figures (including performers) in stories about them or the world they inhabit.”

Congress should find ways to protect actors from having their image and career exploited without permission. This can be done without chilling effects on artistic forms of expression, especially comedy and commentary. Doing so will require a nuanced and well-articulated bill. As it stands, the bill’s prohibitions are decidedly not funny. They could encompass sketch comedy, impressions, cartoons depicting real-life personas, or depictions of historical figures. As the Electronic Frontier Foundation wrote, “there’s not much that wouldn’t fall into [the category of prohibitions]—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involved recording or portraying a human, it’s probably covered.”

We need a sewing needle – not a hammer – to develop nuanced AI prohibitions that are consistent with the First Amendment.
A bill that reads like it originated from a ChatGPT query doesn’t cut it.

The U.S. Court of Appeals for the Second Circuit recently heard oral arguments in the case of Volokh v. James. It’s another in a series of critical recent cases involving government regulation of online speech – and one the Empire State should ultimately lose.
In 2022, distinguished legal scholar and Protect The 1st Senior Legal Advisor Eugene Volokh – along with social media platforms Rumble and Locals – brought suit against the state of New York after it passed a law prohibiting “hateful” conduct (or speech) online. Specifically, the law prohibits “the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” The law also requires platforms to develop and publish a policy laying out how exactly they will respond to such forms of online expression, as well as to create a complaint process for users to report objectionable content falling within the boundaries of New York’s (vague and imprecise) prohibitions. Should they fail to comply, websites could face fines of up to $1,000 per day.

There are a number of problems with New York’s bid to regulate online speech – not least of which is that there is no hate speech exception to the First Amendment. As the Supreme Court noted in Matal v. Tam, “speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”

Moreover, the law fails to define key terms like “vilify,” “humiliate,” or “incite” – leaving their interpretation in the eye of the beholder. As Volokh explained in a piece for Reason, “it targets speech that could simply be perceived by someone, somewhere, at some point in time, to vilify or humiliate, rendering the law's scope entirely subjective.” Does an atheist’s post criticizing religion “vilify” people of faith? Does a video of John Oliver making fun of the British monarchy “humiliate” the British people?
The hypotheticals are endless because one’s subjective interpretation of another’s speech could cut a million different ways. In February 2023, a district court ruled against New York, broadly agreeing with Volokh’s arguments. As Judge Andrew L. Carter, Jr. wrote: “The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal.”

To be fair, there is a purported government interest at play here, even if it’s not compelling in the broader context of the law’s vast, unconstitutional reach. The New York law is a legislative response to a 2022 Buffalo supermarket shooting perpetrated by a white supremacist who was, by all accounts, steeped in an online, racist milieu. Every decent person wants to give extremist views no oxygen. But incitement to violence is already a well-established First Amendment exception – speech unprotected under longstanding precedent. Broadly compelling websites to create processes for addressing subjective, individualized offenses simply goes too far.

Anticipating New York’s appeal to the Second Circuit, a number of ideologically disparate organizations submitted amicus curiae briefs in solidarity with Volokh and his co-plaintiffs, who are represented by the Foundation for Individual Rights and Expression (FIRE). Those groups – which include the American Civil Liberties Union, the Electronic Frontier Foundation, the Cato Institute, and the satirical website the Babylon Bee – stand in uncommon solidarity against the proposition that government should ever be involved in private content moderation policies. As the ACLU and EFF assert, "government interjection of itself into that process in any form raises serious First Amendment, and broader human rights, concerns."
True to form, the Babylon Bee’s brief notes that “New York's Online Hate Speech Law would be laughable – if its consequences weren't so serious.” When the U.S. Supreme Court renders its opinion on the Texas and Florida social media laws, it will give legislatures a better guide to developing more precise, articulable means of addressing online content.

When does a legal reporting requirement for a social media company become a violation of the First Amendment? When it drums up public and political pressure to enforce viewpoint discrimination.
This is the conclusion of legal scholar Eugene Volokh and the Protect The First Foundation, which filed an amicus brief late Wednesday before the Ninth Circuit Court of Appeals asking it to overturn a lower court ruling that upheld a California law requiring social media companies to disclose their content moderation practices.

California Assembly Bill 587 (AB 587), signed into law by Gov. Gavin Newsom in 2022, compels social media companies to produce two such reports a year on their moderation practices and decisions, to be published on the website of the California Attorney General. This law “violates the First Amendment’s stringent prohibition on viewpoint discrimination” by “requiring social media companies to define viewpoint-based categories of speech,” declared Volokh, Senior Legal Advisor to Protect The 1st. “The law also requires these companies to report their policies as to those viewpoints, but not other viewpoints ...”

The brief supports X Corp.’s lawsuit, filed in September 2023, which also asserts that AB 587 violates the First Amendment, a guarantee that “unequivocally prohibits this kind of interference with a traditional publisher’s editorial judgment.”

Volokh and Protect The 1st cited the landmark U.S. Supreme Court case NAACP v. Alabama (1958), in which the Court overturned an Alabama law that would have compelled disclosure of the NAACP’s membership lists. The threat behind that law, the Court noted, lay in governmental and private community pressures that would result in the harassment of individuals and the discouragement of their speech.

“Generating either massive fines or public ‘pressure,’ a euphemism for public hostility, triggers the most exacting scrutiny our Constitution demands,” Volokh told the court. “California Assembly Bill 587 violates the First Amendment’s stringent prohibition on viewpoint discrimination. 
And AB 587 does so by leaning on social media companies to do the government’s dirty work, either through fear of fine or public pressure.”

The brief cites a Supreme Court opinion that states “what cannot be done directly [under the Constitution] cannot be done indirectly.” Volokh writes: “The intent behind the law is clear from its legislative history, comments by its enforcer (Attorney General Rob Bonta), and common sense. That intent is to strongarm social media companies to restrict certain viewpoints—to combine law and public pressure to do something about how platforms treat those particular viewpoints, and not other viewpoints. That confirms that the facial viewpoint classification in the statute is indeed a viewpoint-based government action aimed at suppressing speech—and that violates the First Amendment.”

Protect The 1st will continue to report on X Corp. v. Bonta as an important flashpoint in the ongoing struggle to keep speech free of official regulation.

Should we move to a post-Section 230 internet? Is liability-free content hosting coming to an end?
In Wired, Jaron Lanier and Allison Stanger argue for ending the provision of the Communications Decency Act that protects social media platforms from liability over the content of third-party posts. The two have penned a thoughtful and entertaining analysis of the problems and trajectory of a Section 230-based internet. It’s worth reading, but it takes its conclusions to an unjustifiable extreme – with unexamined consequences.

The authors argue that while Section 230 may have served us well for a time, long-running negative trends have outpaced the benefits it provides. They write that modern, 230-protected algorithms heavily influence the promotion of lies and inflammatory speech online – which they obviously do. “People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice,” Lanier and Stanger write. “If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm.”

They argue that algorithms and the “advertising” business model appeal to the most primal elements of the human brain, effectively capturing engagement by promoting the most tantalizing content. “We have learned that humans are most engaged, at least from an algorithm’s point of view, by rapid-fire emotions related to fight-or-flight responses and other high-stakes interactions.” This dynamic has had enormous downstream consequences for politics and society; Section 230 “has inadvertently rendered impossible deliberation between citizens who are supposed to be equal before the law. Perverse incentives promote cranky speech, which effectively suppresses thoughtful speech.” All this has led to a roundabout form of censorship, where arbitrary rules, doxing, and cancel culture stifle speech.
Lanier and Stanger call this iteration of the internet a “sewer of least-common-denominator content that holds human attention but does not bring out the best in us.” They offer valid criticisms of the current state of the net. It is undeniable that discourse has coarsened in connection with the rise of social media platforms and toxic algorithms. Worse, the authors are correct that algorithms provide an incentive for the spreading of lies about people and institutions. Writing that John Smith is a lying SOB who takes bribes will, to paraphrase Twain, pull in a million “likes” around the world before John Smith can tie his shoes.

So what is to be done? First, do not throw out Section 230 in toto. As we said in our brief before the U.S. Supreme Court with former Senator Rick Santorum, gutting Section 230 “would cripple the free speech and association that the internet currently fosters.” Without immunity, internet platforms could not organize content in a way that would be relevant and interesting to users. Without Section 230 protections, platforms would avoid hosting nearly any controversial content, since they could be frivolously sued anytime someone got offended.

Second, do consider modifications of Section 230 to reduce the algorithmic incentives that fling and spread libels and proven falsehoods. Lanier and Stanger make the point that the current online incentives are so abusive that the unhinged curtail the free speech of the hinged. We should explore ways to reduce the gasoline-pouring tendency of social media algorithms without impinging on speech. Further reform might be along the lines of the bipartisan Internet PACT Act, which would require platforms to maintain clear and transparent content moderation standards and provide redress for people and organizations that have been unfairly de-posted, deplatformed, or demonetized.
Lanier and Stanger are thinking hard and honestly about real problems, but the problems they would create would be much worse. A post-230 social media platform would either be curated to the point of being inane, or not curated at all. Now that would be a sewer. Still, we give Lanier and Stanger credit for stimulating thought. Everyone agrees something needs to change online to promote more constructive dialogue. Perhaps we are getting closer to realizing what that change should be.

Woman Arrested for Social Media Snark While High Court Protects Suspect’s Right to Tell Police to “Worry About a Head Shot”

A woman in Morris County, New Jersey, was arrested in December for social media posts that officials say constituted threats of terrorism, harassment, and retaliation. The last two of those “threats” seem to pertain to the authorities, who stretched merely obnoxious statements into the appearance of a “true threat.”
Monica Ciardi had been posting to Facebook for weeks about her child custody dispute. Her posts, coming by the dozens, criticized her ex-husband and the Morris County judges presiding over her case. In late December, police finally arrested Ciardi for posting “Judge Bogaard and Judge DeMarzo: If you don’t do what I want then you don’t get to see your kids. Hmm.”

Here’s the catch: Ciardi was parroting what the two judges had said to her, neglecting to use quotation marks. She had meant to post what the judges had declared in court – that if she didn’t do what they wanted, then she wouldn’t get to see her children.

Ciardi offered an insightful analogy: “This is my personal Facebook page with 50 people on it. They came to my page and then turned around and said I harassed them. That’s like if I know you don’t like me, I go to your house, I stand on your front porch, I overhear you saying bad things about me, and then I call the cops and say, ‘She’s harassing me. I know I’m on her porch, but you should just hear what she said.'” Ciardi’s attorney said the incident amounted to “the government punishing and jailing a woman for simply speaking her mind.”

Ciardi says her experience in jail was a nightmare. While there, she says she received death threats, saw several assaults, and was twice caught in the line of fire of correctional officers’ pepper spray. She says she suffered panic attacks, lost 15 pounds, and was placed in protective custody, which meant she didn’t leave her cell “for more than 45 minutes two to three times a week, max.” That’s stiff punishment for venting frustrations online.

Ciardi spent 35 days in jail until Superior Court Judge Mark Ali, who had originally ordered her detention, ordered her release. Ali cited a recent New Jersey Supreme Court ruling that raised the bar for terroristic threat charges. That case, too, is problematic – but for the opposite reason.
Even a First Amendment organization like ours is agog at that ruling. On Jan. 16, the New Jersey high court ruled in the case of a man who, during a domestic disturbance call, told police to “worry about a head shot” if they entered his property. He had also posted online that he knew where the officers lived and what cars they drove. The court ruled that prosecutors had failed to prove that such statements were credible threats that “instill fear of injury in a reasonable person in the victim’s position” and were not merely “political dissent or angry hyperbole.”

While we are pleased to see that Ciardi has been released, we disagree with the New Jersey court’s new precedent. In the case of the “head shot,” the threat was made against the putative target, the police. Unless the comment was made in an obviously sarcastic way or in some manner that indicated its insincerity, law enforcement should be able to take such claims seriously. An easy line can be drawn between arresting a woman for her Facebook posts about a pending trial and a true threat of violence against law enforcement officers. We look forward to further developments in this latest case.

In the closing days of 2023, Elon Musk and X Corp lost the first round of their bid in federal court to overturn a California law that would require social media platforms to disclose their content moderation policies. The law in question came into effect in 2022 and was advertised as a way to tamp down on hate speech, disinformation, harassment, and extremism.
The suit alleged that the law’s real purpose was to coerce social media platforms into censoring content deemed problematic by the state. While U.S. District Judge William Shubb ruled that the law does impose a substantial compliance burden, he found it does not unjustifiably infringe on First Amendment rights.

Protect The 1st believes X has a strong basis to appeal under settled precedent. For example, in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio (1985), the U.S. Supreme Court found that states can require an advertiser to disclose information without violating the advertiser's First Amendment free speech protections. But the disclosure requirements must be reasonably related to the state’s interest in preventing deception of consumers. This is not a case of selling gummies and advertising them as cures for cancer.

It is reasonable to assert that some social media companies might do themselves a favor by releasing simple, clear content moderation policies to the public. But we should never forget that these policies are confidential, proprietary information. Requiring their forced disclosure could tip the scales in favor of state-enforced censorship of social media, which at least one federal judge believes is already occurring on a mass scale. Worse, the California law violates the First Amendment by compelling speech on the part of the companies themselves. Protect The 1st expects X to appeal with good prospects to overturn this ruling.

Censorship controversies made many headlines throughout 2023. We’ve seen revelations about heavy-handed content moderation by the government and social media companies, and the looming U.S. Supreme Court decisions on Florida and Texas laws to restrict social media. Behind these policies and laws is a surprising level of public support. A Pew Research poll offers a skeleton key for understanding the trend.
According to Pew, a majority of Americans now believe that the government and technology companies should make more concerted efforts to restrict false information online. Fifty-five percent of Pew respondents support the federal government’s removal of false information, up from only 39 percent in 2018. Some 65 percent of respondents support tech companies editing the flow of false information, up from 56 percent in 2018. Most alarming of all, American adults are now more likely to value content moderation over freedom of information. In 2018, that preference was flipped, with Americans more inclined to prioritize freedom of information over restricting false information – 58 percent vs. 39 percent.

Pew doesn’t editorialize when it posts its findings. For our part, these results reveal a disturbing slide in Americans’ appreciation for First Amendment principles. Online “noise” from social media trolls is annoying, to be sure, but sacrificing freedom of information for a reduction in bad information is anathema to the very notion of a free exchange of ideas. What is needed, instead, is better media literacy – not to mention a better understanding of what actually constitutes false information, as opposed to opinions with which one may simply disagree.

Still, the poll goes a long way toward explaining some of the perplexing attitudes we’re seeing on college campuses, where polls show college students lack a basic understanding of the First Amendment and increasingly support the heckler’s veto. These poll results also speak to the increasing predilection of public officials to simply block constituents with whom they disagree. And it perhaps explains some of the push-and-pull we’re seeing between big, blue social media platforms and big, red states like Florida and Texas, where one side purports to protect free speech by infringing on the speech rights of others.
While these results are interesting from an academic perspective, the suggested remedies raise major red flags. Americans want private technology companies to be the arbiters of truth. A lesser but still significant percentage wants the federal government to serve that role. Any institution composed of human beings is bound to fail at such a task. Ultimately, if we want to protect the free exchange of information, that role must necessarily fall to each of us as discerning consumers of news. The extent to which we are unable to differentiate between factual and false information is an indictment of our educational system. And, as far as content moderation policies are concerned, they must be clear, standardized, and include some form of due process for those subjected to censorship decisions. More than anything, Americans need to relearn that if we open the door to a private or public sector “Ministry of Truth,” we will eviscerate the First Amendment as we know it. You might be on the winning side initially, but eventually we all lose.

A federal judge in Texas has upheld the state’s TikTok ban on devices used for government business. It’s the right ruling – a correct response to a precise law which undergirds the state’s legitimate interest in prohibiting the use of a potentially harmful social media app in official settings.
TikTok is a Chinese company with user data stored on servers in the PRC. It holds inordinate sway over young people in the US, with 67% of teens using the platform with some regularity, according to Pew. Yet there is now credible public evidence that China’s officials enjoy open access to personal data on the platform, using it to spy on pro-democracy protestors. An employee of ByteDance, the corporate owner of TikTok, has made that claim.

The Coalition for Independent Technology Research filed the lawsuit in July, arguing that the Texas ban compromises academic freedom. One teacher from the University of North Texas even suggested that they cannot adequately assign coursework without use of the app. Texas’ law specifically disallows the use of TikTok on state-owned, official devices. That’s in contrast to Montana’s outright ban on the app – for everyone. There, U.S. District Judge Donald Molloy asserted that Montana’s law infringed on free speech rights and exceeded the bounds of state authority. He was right, too, and his ruling was a significant affirmation of the importance of safeguarding fundamental rights in the digital age, particularly within the context of online platforms that serve as crucial arenas for expression.

This court split exemplifies the balance we must strike between protecting user freedoms and enabling a safe digital environment without compromising free expression. States have every right to prohibit use of a foreign-controlled app on government-owned phones. At the same time, blanket banning of TikTok is neither a constitutional nor reasonable response. Americans can speak freely and freely associate, even if they are unaware of the implications of doing so. State officials and employees, by contrast, are subject to different rules. But they are welcome to use TikTok on their personal phones. As Judge Robert L.
Pitman correctly asserts, state universities constitute a “non-public” forum – the touchstone of which is whether “[restrictions] are reasonable in light of the purpose which the forum at issue serves.” Here, “Texas is providing a restriction on state-owned and -managed devices, which constitute property under Texas’s governmental control….” The law is both viewpoint neutral and reasonable – which is all that is needed in such cases. Whether TikTok itself is viewpoint neutral is a question for another day.

PruneYard Shopping: Are the Speech Rights of Shopping Centers Really Like Those of Social Media? (12/19/2023)
The Cato Institute’s recent amicus brief making the case that social media laws passed by the states of Texas and Florida are unconstitutional also takes aim at a precedent from 1980, PruneYard Shopping Center v. Robins. Cato’s brief raises the question: Does it make sense to analogize the speech rights of those who own a physical property with those who own a social media company?
In PruneYard, the U.S. Supreme Court held that the California Constitution protected reasonably exercised speech on the privately owned PruneYard shopping center against the owner’s wishes. The Court noted the California Constitution has broader protections for speech than the Bill of Rights. The Court correctly reasoned that states can have greater and positive protections for speech than the negatively defined rights of the First Amendment, which forbids government censorship and curtailments of speech rights. Based on this singular insight, the Court’s opinion established that the shopping center could not prevent outsiders from protesting or soliciting for political purposes on its private property.

In its brief, Cato argues that the Supreme Court should at the proper time address this odd ruling and hold that forcing private property owners to accommodate on their premises speech they do not support is a violation of the property owners’ First Amendment rights. Cato also argues that social media platforms should similarly be protected from being forced to carry the speech of others. While Protect The 1st agrees with Cato that the Texas and Florida laws are unconstitutional, the analogy to PruneYard is flawed. Cato’s comparison with real property, however, remains useful, offering an illuminating look at what is unique about social media.

As Protect The 1st previously reported, the Florida law would prohibit social media platforms from removing the posts of political candidates, while the Texas law would bar companies from removing posts based on a poster’s political ideology. The former law was struck down by the Eleventh Circuit, while the latter was upheld by the Fifth Circuit. Both cases are now headed to what promises to be a landmark digital speech review by the Supreme Court. But does the critique of PruneYard extend to social media?
This seems inapt because property owners who allow outsiders to mount politically charged events on their premises might face liability for that speech, just as newspapers can be sued for speech contained in letters-to-the-editor. Social media is different. Section 230 of the Communications Decency Act is a government grant of immunity to social media platforms for third-party speech, while allowing some discretion for the platforms to moderate content. Despite frustrations over actual content management by social media companies, and government involvement in it, Section 230 has allowed a thriving online world to develop – along, of course, with all the attendant psychic garbage. This is utterly unlike shopping centers, which don’t enjoy any such government immunity and could be held legally accountable for the speech that occurs on their property.

The two state laws have obvious First Amendment flaws, and striking them down doesn’t require revising precedents. The authors of the Texas and Florida laws, concerned about the manipulation of online debate, would inject further government meddling into social media content moderation. This power would likely extend far beyond what these politicians imagine (and perhaps even to their specific detriment). We suggest the Supreme Court apply a more straightforward analysis to the Florida and Texas laws as it invalidates them under the First Amendment.

U.S. District Judge Donald Molloy recently blocked Montana's ban of the Chinese-owned social media platform TikTok, standing up for free speech but leaving a host of issues for policymakers to resolve. Montana’s ban, which was slated to take effect at the beginning of 2024, made it the first U.S. state to take such a measure against the popular video sharing app.
Judge Molloy asserted that Montana’s law infringed on free speech rights and exceeded the bounds of state authority. This decision is a significant affirmation of the importance of safeguarding fundamental rights in the digital age, particularly within the context of online platforms that serve as crucial arenas for expression.

While celebrating this victory for free speech, it remains essential to acknowledge legitimate concerns over national security and data privacy regarding social media platforms answerable to a malevolent foreign government. TikTok's ownership by China's ByteDance raises pertinent questions about safeguarding user data and its potential exploitation by foreign entities. So worrying were the reports that the FBI opened an investigation into ByteDance in March. The need for robust measures to protect against data scraping, digital surveillance, and misuse of personal information is a valid concern.

This case prompts reflection on the broader social welfare implications of platform regulation. TikTok's substantial user base, particularly youth, holds significant sway over American culture. Striking a balance between protecting user freedoms and privacy enables a safer digital environment without compromising free expression. Even storing Americans’ data in the United States might not be enough to lessen the danger that the regime in Beijing might override any firewalls.

A better solution could be to incentivize China's ByteDance to divest TikTok to American ownership. This move would alleviate worries about data security by placing the platform under the oversight and governance of a company within the United States, subject to American laws and regulations. Ultimately, Judge Molloy's ruling upholds the sanctity of free speech in the digital realm.
It should fuel constructive dialogues on the complex challenges TikTok poses to the United States, particularly the tension among individual liberties, national security imperatives in the face of a hostile regime, and the responsibility of digital platforms. Finding a delicate equilibrium among these facets remains an ongoing challenge that requires creative solutions, not restrictions on speech.

A recent Federalist Society debate between NYU law professor Richard Epstein and the Cato Institute’s Clark Neily offered an illuminating preview of an urgent legal question soon to be addressed by the U.S. Supreme Court: can states constitutionally regulate the content moderation policies of social media platforms like Facebook and X (Twitter)?
Florida and Texas say “yes.” A Florida law bars social media companies from banning political candidates and removing anything posted by a “journalistic enterprise” based on its content. A Texas law prohibits platforms with at least 50 million active users from downgrading, removing, or demonetizing content based on a user’s views. Both laws are a response to legislative perceptions of tech censorship against conservative speakers.

These two laws are based on the premise that states can regulate online platforms. But two federal courts came to entirely different conclusions on that point. In 2022, the U.S. Court of Appeals for the Eleventh Circuit struck down the Florida law, finding “that it is substantially likely that social-media companies – even the biggest ones – are ‘private actors’ whose rights the First Amendment protects ...” Also in 2022, the Fifth Circuit Court of Appeals ruled for Texas, allowing the state law to stand.

In the FedSoc debate, Epstein and Neily agreed about many of the problems some have with social media platforms but diverged – radically – on the remedies. Epstein argued that social media companies should be regulated like “common carriers,” fee-based public transportation businesses and entities offering communication transmission services such as phone companies. Under federal law, common carriers are required to provide their services indiscriminately; they cannot refuse service to someone based on their political views. Epstein – who himself was deplatformed from YouTube for offering contrarian views on Covid-19 policy – believes this is an appropriate requirement for social media platforms, too. Epstein cited a number of examples of what he classifies as bad behavior by social media companies (collusion with government, acquiescence to government coercion, effective defamation of the deplatformed) which, in his view, compound an underlying censorship concern.
He said: “…[I]t’s a relatively low system of intervention to apply a non-discrimination principle which is as much a part of the constitutional law of the United States as is the freedom of expression principle….”

Neily, by contrast, took the Eleventh Circuit’s perspective, arguing that social media platforms are private companies that make constitutionally protected editorial decisions in order to curate a specific experience for their users. Neily said: “Even the torrent of Richard’s erudition cannot change three immutable facts. First, social media platforms are private property. There are some countries where that doesn’t matter, and we’re not one of them. Second, these are not just any private companies. These are private companies in the business of speech – of facilitating it and of curating it. That means providing a particular kind of experience. And third, you simply cannot take the very large and very square peg of the social media industry and pound it into the very round hole of common carrier doctrine or monopoly theory or regulated utilities ….”

Protect The 1st understands Epstein’s frustration. Social media platforms routinely curate the content posted by third parties in order to ensure conformity with the platforms’ policies and terms of use. Modification of the content or refusal to publish often enrages the party who made the submission. But we remain decisively inclined towards Neily’s view. The First Amendment only prohibits repression of speech by the government. To carve out constitutional exceptions against private companies based on the disaffection of some with curation decisions would be a tremendously shortsighted error. To again quote Neily: “This is how you lose a constitution of limited government – one exception at a time.”

One of the examples of the bad behavior to which Epstein alludes is presently being litigated in Missouri v. Biden.
In that case, it is alleged that the government coerced social media platforms into downgrading or removing content that did not comport with the government’s efforts to ensure the provision of accurate information to the public regarding the Covid-19 pandemic, such as the effectiveness of vaccines. And while coercion is certainly reprehensible, we again agree with Neily as to how it should be addressed – through existing legal remedies. Said Neily: “What we should be doing instead [of regulating] is identifying the officials who engaged in this conduct and going after them with a meat axe.”

When platforms engage in aggressive content moderation practices, they risk compromising their status as mere hosts of others’ content and becoming publishers of that content. The threat of losing the liability protections of Section 230 in these cases would serve as a useful deterrent to egregious content modification. Meat axes and other hyperbole aside, what we need most is an articulable roadmap for distinguishing between coercion and legitimate government interaction with tech platforms.

Advocates of the common carrier argument tend to accurately diagnose the problem but overprescribe the solution. The preponderance of new issues that would arise if we transformed platforms into common carriers is staggering. Shareholder value would plummet, and retirement plans would suffer. And then there’s the problem of deciding which particular bureaucrats should be entrusted with overseeing these thriving, innovative, bleeding-edge technology companies, and the social media townhall. It’s unlikely the federal bench is that deep. We cannot seamlessly apply common carrier doctrine to social media platforms, nor should we nullify their constitutional rights just because of their success.
As Neily said: “The idea that somehow you begin to lose your First Amendment rights just because you create some new way of affecting public discourse or even the political process, just because you hit it big … That is utterly alien to our tradition.”

In a recent Fox News interview, presidential candidate Nikki Haley drew a lot of raspberries when she called online anonymous posting a “national security threat.” She proposed that social media platforms require identity verification for all users to stop foreign disinformation campaigns.
Nikki Haley is legitimately concerned with real online dangers. But such a requirement would chill speech, stifle the free flow of ideas and information, harm journalists and their sources, and land several American Founders in internet jail.

Anonymity serves as a shield for individuals to freely express opinions without fear of retribution or persecution. For marginalized communities, victims of abuse, or those living under oppressive regimes, online anonymity can be a lifeline, allowing them to voice their concerns and opinions without risking personal safety. Banning anonymity would silence these voices.

Furthermore, this proposal fails to acknowledge the pivotal role played by anonymous sources in investigative journalism. Banning online anonymity could erect an insurmountable barrier for journalists seeking to protect their sources, impeding the public's right to know about crucial matters of public interest. Platforms that allow anonymity often become safe spaces for open discussions on sensitive topics, mental health, or personal struggles. Removing this protective veil might discourage individuals from seeking help or sharing their experiences, ultimately stifling lifesaving conversations.

Rather than enhancing security, enforced identification online could create an environment ripe for censorship and surveillance, where individuals feel compelled to self-censor out of fear. It may also pave the way for increased government intrusion into private online spaces, eroding the very freedoms the First Amendment aims to protect.

Anonymity plays a vital role in many areas of American life, not just online speech. Since the landmark Supreme Court ruling in 1958, NAACP v. Alabama, the anonymity of donors has been recognized as critical to the protection of speech and the flourishing of the First Amendment.
Perhaps that is why civil liberties groups on both the left and right have united to challenge laws that seek to expose donors, given such laws’ history with coercion, discrimination, and surveillance. Perhaps most important of all, this great country and our freedoms might not exist if not for anonymity.

Friends of history will know that America’s Founders and pivotal figures made generous use of anonymity. Alexander Hamilton, John Jay, and James Madison wrote under the pseudonym “Publius” when they drafted The Federalist Papers. So too did their opponents, who published the Anti-Federalist Papers anonymously under multiple pseudonyms like “Brutus,” “Cato,” and “Federal Farmer.” Thomas Paine published Common Sense anonymously. Less monumental in scope, Benjamin Franklin wrote under the name “Mrs. Silence Dogood” for the New-England Courant when his brother, the founder and publisher of the newspaper, refused to publish his letters under Benjamin’s real name. Were it not for anonymity, American history would look very different.

To be sure, online anonymity has an ugly side. Social media platforms such as Facebook and LinkedIn have a First Amendment right to restrict anonymity and do so for sound business and public policy reasons. A personal attack or bombastic ideological statement without an identifiable author, however, inherently lacks credibility. We believe Americans have become savvier judges of online graffiti than many experts give them credit for. Instead of advocating for the elimination of anonymity, we should focus on promoting responsible online behavior, fostering digital literacy, and developing mechanisms that balance security concerns with the preservation of free speech rights. We’ve said it before – it would be a pointless victory to combat Russian disinformation if we become Russia.

The Ninth Circuit Court of Appeals in March issued a controversial opinion in Twitter v.
Garland that the Electronic Frontier Foundation calls “a new low in judicial deference to classification and national security, even against the nearly inviolable First Amendment right to be free of prior restraints against speech.”
X (née Twitter) is appealing this opinion before the U.S. Supreme Court. Whatever you think of X or Elon Musk, this case is an important inflection point for free speech and government surveillance accountability.

Among many under-acknowledged aspects of our national security apparatus is the regularity with which the government – through FBI national security letters and secretive FISA orders – demands customer information from online platforms like Facebook and X. In 2014, Twitter sought to publish a report documenting the number of surveillance requests it had received from the government the prior year. It was a commendable effort from a private actor to provide a limited measure of transparency in government monitoring of its customers, offering some much-needed public oversight in the process. The FBI and DOJ, of course, denied Twitter’s efforts, and over the past ten years the company has kept up the fight, continuing under its new ownership.

At issue is X’s desire to publish the total number of surveillance requests it receives, omitting any identifying details about the targets of those requests. This purpose is noble. It would provide users an important metric on surveillance trends not found in the annual Statistical Transparency Report of the Office of the Director of National Intelligence.

Nevertheless, in April 2020, a federal district court ruled against the company’s efforts at transparency. In March 2023, the Ninth Circuit upheld the lower court’s ruling, sweeping away a substantial body of prior restraint precedent in the process. Specifically, the Ninth Circuit carved out a novel exemption to long-established prior restraint limitations: “government restrictions on the disclosure of information transmitted confidentially as part of a legitimate government process.” The implications of this new category of censorable speech are incalculable. To quote the EFF amicus brief: “The consequences of the lower court’s decision are severe and far-reaching.
It carves out, for the first time, a whole category of prior restraints that receive no more scrutiny than subsequent punishments for speech—expanding officials’ power to gag virtually anyone who interacts with a government agency and wishes to speak publicly about that interaction.”

This is an existential speech issue, far beyond concerns of party or politics. If the ruling is allowed to stand, it sets up a convenient standard for the government to significantly expand its censorship of speech – whether of the left, right, or center. Again, quoting EFF, “[i]ndividuals who had interactions with law enforcement or border officials—such as someone being interviewed as a witness to a crime or someone subjected to police misconduct—could be barred from telling their family or going to the press.”

Moreover, the ruling is totally incongruous with a body of law that goes back a century. Prior restraints on speech are the most disfavored of speech restrictions because they freeze speech in its entirety rather than punishing it after the fact. As such, prior restraint is typically subject to the most exacting level of judicial scrutiny. Yet the Ninth Circuit applied mere strict scrutiny, a lower standard, while entirely ignoring the procedural protections typically afforded to plaintiffs in prior restraint cases. Consequently, the “decision enables the government to unilaterally impose prior restraints on speech about matters of public concern, while restricting recipients’ ability to meaningfully test these gag orders in court.” We stand with X and EFF in urging the Supreme Court to promptly address this alarming development.

O’Connor-Ratcliff v. Garnier

The U.S. Supreme Court on Tuesday wrestled with a question of increasing urgency: When public officials block critics on social media, are they acting in their official roles and therefore liable for violating the First Amendment?
Arguments stemmed from two cases with differing outcomes. In O’Connor-Ratcliff v. Garnier, two California school board members blocked a couple – the Garniers – who regularly posted critical messages on the board members’ Facebook pages. The Ninth Circuit ruled that blocking the couple constituted state action due to the nature and character of the board members’ accounts, which frequently featured posts about official government business. In a separate, joined case (Lindke v. Freed), a Michigan city manager blocked a constituent – Lindke – following critical comments Lindke made regarding the city’s COVID-19 policies. There, the Sixth Circuit came to the opposite conclusion, finding that the city manager’s account was predominantly personal in nature. That court held that a public official’s social media activity only constitutes state action when they are engaged in official duties.

The Court’s questioning in Tuesday’s hearing offered no clear delineation between conservative and liberal justices. All nine, however, recognized the difficulty in determining when an official is acting in a public versus private capacity. “This is all a question of how broadly do we define authority or duty,” Justice Amy Coney Barrett said.

The Biden Administration, through an amicus brief, sided with the public officials in both cases, arguing that officials have a right to block people from their social media accounts because those accounts constitute a type of private property. Chief Justice John Roberts and Justice Samuel Alito questioned the government’s position from different angles. “It doesn’t cost anything to open a Facebook page,” said Alito. “To make so much turn on who owns the Facebook page seems quite artificial.” Justice Roberts added, “In what sense is this really private property?” Lawyers representing the parties offered their own varying tests for determining what constitutes public versus private action.
Hashim Mooppan, representing the school board members who lost in the Ninth Circuit, asserted that the “only principled and workable test is to ask whether they exercised any duties or authorities of their job.” Justice Elena Kagan then asked if that would mean President Trump was acting in a private capacity when he blocked critics on his Twitter account (a lower court previously ruled that he was not). Mooppan conceded that, under his test, President Trump would be acting as a private citizen.

Representing the Garniers, attorney Pamela Karlan offered a different test: if the board members were broadly doing their jobs when they blocked the Garniers, then it should be presumed to be state action. Justice Alito, in turn, expressed concern about the breadth of Karlan’s test, noting that officials “have told me they’re always on call. They’re always doing their job. They’re always being approached by constituents.”

Representing Kevin Lindke, the Michigan resident who lost in the Sixth Circuit, attorney Allon Kedem argued that “a public official who creates a channel for communicating with constituents about conduct in office and then blocks a user from that channel must abide by the Constitution.” Justice Clarence Thomas pointed out a key distinction between the two cases: the board members in California had only a few personal posts on their Facebook pages, while the city manager in Michigan had many.

It’s unclear where the Court will land on this issue. As Justice Neil Gorsuch said, there is a “profusion of possible tests” available. In an amicus brief filed by the foundation of Protect The 1st in O’Connor-Ratcliff v. Garnier, we argued that “no single factor is required to establish state action; rather, all relevant factors must be considered together to determine whether an account was operated under color of law.” In other words, a holistic test is likely appropriate here.
More practically, “governmental bodies can and should adopt clear rules separating official accounts from private ones,” as Congress has done. Doing so would safeguard First Amendment rights for public officials and citizens alike.

UPDATE: Supreme Court to Hear Arguments on Government Influence Over Social Media Platforms
10/24/2023
In July we analyzed an order issued by Judge Terry A. Doughty of the U.S. District Court for the Western District of Louisiana that enjoined the Biden Administration and a wide range of federal agencies from “urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms.”
Last month, PT1st covered a subsequent ruling from the Fifth Circuit Court of Appeals, which significantly narrowed the scope of the district court’s injunction, reducing the district court’s ten prohibitions on government communications with social media platforms to one, and greatly limiting the agencies subject to the injunction to the White House, the FBI, the surgeon general’s office and the CDC. Now, acting on a request for review by the government, the U.S. Supreme Court has agreed to hear the case, staying the lower courts’ injunction in the meantime. At least until the High Court rules on the case, Biden Administration officials are not barred from interacting with social media platforms to combat what they view as misinformation. Justice Alito, joined by Justices Gorsuch and Thomas, dissented from granting the stay, writing: “At this time in the history of our country, what the court has done, I fear, will be seen by some as giving the government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.” Protect The 1st is not so sure. Given that the Court is now set to hear this case, executive branch officials will have good reason to be especially circumspect in the interim. Whatever happens, this case will be of great importance for the First Amendment’s application to online speech and permissible levels of government involvement in urging platforms to moderate content. And it is only one of several big cases set for consideration before our highest court in the coming months. The Supreme Court also recently agreed to hear a dispute stemming from Florida’s and Texas’ efforts to prohibit social media companies from engaging in some forms of content moderation, which the platforms have always viewed as protected by the First Amendment. 
In another case, set for argument later this month, the Court will tackle the question of whether public officials can block their critics on social media.

Regarding the present controversy, the Fifth Circuit ruled in September that the White House, the Surgeon General’s office, the FBI, and the CDC either coerced or significantly encouraged social media platforms to moderate protected speech, primarily regarding election misinformation and misinformation about the pandemic. In the stay application, Solicitor General Elizabeth B. Prelogar argued that the platforms are private entities that made independent content moderation decisions. The government’s interactions with them, in turn, constituted routine advice consistent with its duties to protect public health and safety. “A central dimension of presidential power,” wrote Prelogar, “is the use of the office’s bully pulpit to seek to persuade Americans – and American companies – to act in ways that the president believes would advance the public interest.” The attorneys general of Missouri and Louisiana, both plaintiffs in the case, responded that the bully pulpit “is not a pulpit to bully,” arguing that the administration went too far in its communications by engaging in threatening and coercive behavior. As such, they assert, the decisions to remove or downgrade certain posts and accounts constituted government action. “The government’s incessant demands to platforms,” they wrote, “were conducted against the backdrop of a steady drumbeat of threats of adverse legal consequences from the White House, senior federal officials, members of Congress and key congressional staffers — made over a period of at least five years.” If, in the end, the Supreme Court determines that the government is threatening social media platforms, that will be a consequential finding.
As the dissenting Justices write, “Government censorship of private speech is antithetical to our democratic form of government ...” At the same time, the government must be able to speak to private actors, including social media platforms, on issues of public concern. Ultimately, we need a roadmap for distinguishing between legitimate government action and coercion. A robust discussion at the national level is best suited to parse the nuances at play when it comes to social media and free speech. Congress should hold bipartisan hearings to determine the circumstances where government advice may be helpful to platforms’ content moderation decisions versus the circumstances where such advice may be coercive. We’ll be watching this case closely as it progresses.

Earlier this summer, we wrote about an opinion and order issued by Judge Terry Doughty of the U.S. District Court for the Western District of Louisiana in the case of Missouri v. Biden. The controversy stemmed from accusations of government censorship and viewpoint discrimination against speech – under both the Biden and the Trump administrations – most notably social media posts related to COVID-19.
The plaintiffs argued that the government pressured social media platforms to such a degree that it interfered with the First Amendment right of the platforms to make their own content moderation decisions. Judge Doughty agreed. The district judge’s controversial order enjoined the White House and a broad range of government agencies from engaging in a wide array of communications with social media platforms, with 10 separate provisions laying out the parameters. The administration appealed to the Fifth Circuit, which stayed the injunction. Now, a three-judge panel from the Fifth Circuit has weighed in. Broadly, they side with Judge Doughty’s finding that the White House, the Surgeon General’s office, the FBI, and the CDC either coerced or significantly encouraged social media platforms to moderate protected speech. At the same time, the court significantly reduced the scope of the injunction order, striking nine out of the 10 prohibitions for vagueness, overbreadth, or redundancy. Further, the court found that a range of enjoined parties – including former NIH Infectious Disease Director Anthony Fauci and the State Department – did not engage in impermissible conduct. What we are now left with is a much narrower new injunction with a single prohibition reading as follows: “Defendants, and their employees and agents, shall take no actions, formal or informal, directly, or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. 
That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.” Unsurprisingly, the Biden administration is appealing the ruling – this time to the highest court in the land. The U.S. Supreme Court granted the administration’s request for an administrative stay of the Fifth Circuit injunction as the administration prepares to file a petition for certiorari by Oct. 13 (which would allow the Supreme Court to hear the controversy this term). While it is at least reasonably likely that the Court will agree to hear this case, we stand by our prior position on the issue – that questions surrounding the limits of government interaction with social media companies merit a vigorous, informed public debate. We again urge Congress to hold bipartisan hearings to examine, among other questions, whether social media platforms find the communications with government to be unwelcome pressure or whether they find the information provided to be helpful.

In order to combat a tide of Covid misinformation, in 2021 the White House began closely monitoring social media companies’ health-related postings. The sense of urgency felt by federal officials soon surfaced in sometimes hyperbolic public communications, reflecting their deep concern that a flood of harmful misinformation was getting in the way of providing accurate Covid-related information to the public. In July 2021, at a White House presser, the Surgeon General accused social media companies of “enabl[ing] misinformation to poison” the public. Soon after, President Biden responded with his own comment about social media “killing people” and the White House publicly discussed legal options.
Social media companies apparently understood the message, changing internal policies and making new efforts to deplatform users like the “disinfo dozen,” a list of influencers deemed problematic by the White House. Still, the administration continued its public messaging, with the White House Press Secretary at one point expressing explicit support for Section 230 reforms so the companies can be held accountable for “the harms they cause.” Of course, the government must be able to communicate freely to the public and with private companies, especially on matters of public health and safety. The parties released from the District Court’s injunction likely exercised that right appropriately. There is danger, however, when the government works with social media silently to remove content, with no public transparency, especially if there is a hint (or more than a hint) of coercion. What is that danger, exactly? Reasonable people agree there are public health messages that are irresponsible and harmful. But secret censorship, no matter the justification, is the royal road to a censored society. Protect The 1st hopes that congressional hearings and High Court review will bring clarity on the question of government communications with social media, now America’s main public square.

In 2000, the U.S. Supreme Court in Hill v. Colorado found that restrictions on speech-related conduct outside abortion clinics were content-neutral and thus subject only to intermediate scrutiny, a lesser degree of protection. Since that time, lower courts have upheld similar state and local restrictions on speech based on this binding precedent – and despite a raft of subsequent cases that call Hill’s reasoning into question.
The recent case of Vitagliano v. County of Westchester is a perfect example of these ongoing challenges. It is now up for potential review before the Court, offering a good opportunity to overturn Hill and the unconstitutional legal trend it originated. Here are the facts of the case: Debra Vitagliano is a devout Catholic whose mission is to offer compassionate counsel to women seeking abortions at the last minute, when such counsel might be most effective. Westchester County, like many jurisdictions before it, passed a law establishing a 100-foot buffer zone around reproductive health care facilities (encompassing public sidewalks), prohibiting anyone looking to offer such assistance from getting within eight feet of another person unless they receive explicit consent. Critics of the Hill decision, including 14 states that recently filed an amicus brief, argue that Hill misapplied the legal test for determining whether a speech restriction is content-based. Specifically, they argue that the Court erroneously relied on Colorado’s references to “access” and “privacy” as justification for the statute’s purported neutrality. Since 2000, the Supreme Court has conspicuously refrained from drawing on Hill’s reasoning, and in Dobbs v. Jackson went so far as to call it a distortion of First Amendment doctrines. Whenever the government passes a speech restriction that is obviously content-based (as it is here), it must be looked at through the lens of strict scrutiny: it must be narrowly tailored to serve a compelling government interest. This means a government cannot simply abridge its citizens’ First Amendment rights because of some particular policy preference – for example, in another context, the idea that protest should not be allowed outside military recruitment facilities because it discourages young people from enlisting.
It’s clear that Hill was a policy decision, and while one may agree with its intent, it also opened the door to overstepping when it comes to restricting speech in public places. The sidewalk has long been held to be a public forum. In fact, it’s arguably the place where speech about contentious political issues most belongs. As the Supreme Court wrote in McCullen v. Coakley, sidewalk speech reflects the First Amendment’s goal to “preserve an uninhibited marketplace of ideas in which truth will ultimately prevail.” Criminalizing certain speech on public sidewalks endangers that goal. And preventing Debra Vitagliano from engaging in peaceable, non-violent conversation amounts to the kind of overbreadth that seals the deal when it comes to a law’s unconstitutionality, particularly when laws already exist prohibiting assault, trespass, and blocking clinic access. Whatever your views on abortion, Hill was a bad decision that should be overturned. To quote First Amendment scholar and Harvard professor Laurence Tribe, the case was “slam-dunk simple.” Its ruling: “slam-dunk wrong.”

In April, Protect The 1st reported on two pending cases before the Supreme Court, O’Connor-Ratcliff v. Garnier and Lindke v. Freed, addressing the question of what constitutes a public forum on Facebook. In both lawsuits, public officials blocked criticism from constituents on their social media sites; in both instances, the constituents sued.
Now, the U.S. Supreme Court is set to deliberate the urgent question: When does a personal account become public? This is the first time the Court will address the difference between public and private fora against the backdrop of the digital age. In our Protect The First Foundation amicus brief in O’Connor-Ratcliff, we write: “The state action question in this case implicates two vital First Amendment rights: that of citizens to access government fora, and that of public officials to control with whom and how they communicate when they speak in their private capacities. As this case demonstrates, those rights are in tension when it is not immediately apparent whether a government representative is operating a social media account in her public or private capacity.” The petitioners argue that they should be able to block constituents from their social media profiles, on which they discussed government business, as long as their actions aren’t affirmatively required as one of their government duties and they don’t explicitly invoke state authority. In short, they wish to summon their own First Amendment rights to silence their critics in a public forum. For many years now, Members of Congress have segregated their personal and public accounts. They are correct in doing so, and this situation shows why. The legal issue is at what point does a public official’s actions constitute “state action.” And here, the officials’ social media pages are draped in their status as public servants – even though they began as personal campaign pages. With great regularity, they post about official government business and use their accounts to facilitate their government duties. As such, they cannot then claim that when they operate those accounts they are private actors. Government officials, like everyone else, have First Amendment rights. 
But they cannot have their cake and eat it too by speaking with the authority of government while erasing the access of their critics to that speech. The fact is that we must – now – delineate the limits and boundaries of social media’s power in the context of public service. If you are a public official, you cannot – and must not – be allowed to silence your critics in a public forum under the auspices of your own First Amendment rights. Sorry. Sometimes you just have to take the heat.

Should Salesforce.com be held liable as a participant in sex trafficking because it sold customer relationship management software to the now-defunct Backpage.com?
Such a ruling would run smack into Twitter v. Taamneh, in which the U.S. Supreme Court made it clear that despite the fact that ISIS terrorists used that popular social media platform to communicate, Twitter could not be held liable as an aider and abettor of terrorism. Holding Salesforce liable for Backpage’s misdeeds would also contradict rulings with similar principles from the 9th Circuit Court of Appeals and the DC Circuit Court of Appeals. These courts, writes Mike Masnick of TechDirt, found that it “would be ridiculous to hold out every service provider for liability just because a drug trafficking, sex trafficking, or terrorist organization used those tools to improve their reach.” But the Seventh Circuit Court of Appeals found otherwise. Backpage was a classified advertising site that was shuttered and began a long saga in the courts after being hit with 100 counts involving prostitution and sex trafficking in 2018. Salesforce, according to the Seventh Circuit, should have somehow known as early as 2013 that it was involved in sex trafficking by selling operational software to this client. Every decent person deplores sex trafficking, just as every decent person condemns terrorism. But it is bad logic and morally confused to extend liability for sex trafficking from bad actors to vendors – parties who lack the investigative means and precognitive ability to foresee how the law will treat a customer years later. There are clear First Amendment implications in conflating the speech and actions of a customer with those of a vendor. We agree with Masnick – “it would be nice if the Supreme Court told the 7th Circuit to knock it off.”

Jeff Kosseff, associate professor of cybersecurity law at the U.S. Naval Academy, titled his acclaimed book about Section 230, The Twenty-Six Words that Created the Internet. Those exact words:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. Kosseff did not exaggerate. This statute, part of the Communications Decency Act of 1996, protects platforms and websites from liability for third-party posts. Section 230 not only protects Facebook or Twitter (now X) from being sued for libelous posts made by its users, it also protects myriad web-based businesses – from Angi (formerly Angie’s List), to Rate My Professors, to a thousand sites that run reviews of hotels, restaurants, and businesses of all sorts. Without Section 230, a wide swath of U.S. digital commerce would cease to exist overnight. And yet, Justice Clarence Thomas hit a nerve in 2021 when he mused in an opinion that the “right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions.” Such questions certainly seemed interesting to lawmakers in Florida and Texas. Texas passed a law that bars companies from removing posts based on a poster’s political ideology. This law was upheld last year by the Fifth Circuit. The Florida law, which would prohibit social media from removing the posts of political candidates, was struck down last year by the Eleventh Circuit. At the time, we wrote that: Cert bait doesn’t get more appealing than this. Consider: A split between federal circuits. Laws that would protect free expression in the marketplace of ideas while simultaneously curtailing the speech rights of unpopular companies. Two similar laws with differences governing the moderation of political speech. The petition for SCOTUS review of the Texas and Florida laws practically writes itself. The First Amendment is aimed only at the government.
It protects the editorial decisions of social media companies while forbidding government control of speech. But being kicked off X, Facebook, Google, and Amazon would certainly feel like being censored. And there may well be First Amendment implications whenever federal agencies are secretly involved in content management decisions. But if Section 230 is overthrown, what will replace it? In the face of the current circuit split, legal principles get tangled up like fishing lines on a tourist boat. As Kosseff notes in Wired, Americans living under the Fifth Circuit may see drastic alteration of the regulation of internet companies. In the Eleventh Circuit, Section 230 prevails as it is. The resulting confusion is why it is likely the Supreme Court will have to take up a challenge from NetChoice, which represents tech companies. If the Court doesn’t cut this Gordian knot, we could wind up with a Red State internet and a Blue State internet. While the judiciary sorts out its thinking, Congress should act. Protect The 1st continues to press policymakers to look at principles similar to those of the bipartisan Platform Accountability and Consumer Transparency Act, which would require big social media companies to offer clear standards and due process for those who post in exchange for the liability protections of Section 230.

A New York Times op-ed by two U.S. senators offers a bipartisan counter to the power of Big Tech – eliminate the legal liability protections that have been the cornerstone of the internet since 1996, while imposing “an independent, bipartisan regulator charged with licensing and policing the nation’s biggest tech companies.”
The ability to license and police is, of course, the ability to control some of America’s largest social media platforms. If enacted, this measure proposed by Sens. Elizabeth Warren (D-MA) and Lindsey Graham (R-SC) would strip minority opinions and contentious views of the ability to be heard, while subjecting speech to official, top-down policing by a regulator. The op-ed doesn’t name Section 230, the law that protects platforms that host third-party speech from legal liability. We respect the earnest desire of these two senators to improve the state of online speech, but replacing Section 230 with the vague mandate of a regulator could be profoundly dangerous for the First Amendment’s guarantee of free speech, the lifeblood of democracy. Section 230 restricts the legal liability for illegal acts to the speaker, not the website. It holds those who break the law online accountable for their actions, while holding platforms accountable for preventing serious federal crimes, like posting child sexual abuse material. It empowers minorities of all sorts, allowing controversial or unpopular opinions to have their day. Without Section 230, the internet would devolve into a highly sanitized, curated space where any controversial statement or contentious argument would be red-penciled. The elimination of Section 230 would take away the vibrant clash of opinions and replace it with endless cat videos and perhaps the regulator’s officially sanctioned views. Many believe, and we agree, that Section 230 needs reform. The bipartisan PACT Act would require platforms to give speakers a way to protest having posts removed, while respecting the First Amendment rights of both companies and speakers, with less risk of government heavy-handedness and censorship.