UPDATE: Supreme Court to Hear Arguments on Government Influence Over Social Media Platforms

10/24/2023
In July we analyzed an order issued by Judge Terry A. Doughty of the U.S. District Court for the Western District of Louisiana that enjoined the Biden Administration and a wide range of federal agencies from “urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms.”
Last month, PT1st covered a subsequent ruling from the Fifth Circuit Court of Appeals, which significantly narrowed the scope of the district court’s injunction, reducing the district court’s ten prohibitions on government communications with social media platforms to one, and greatly limiting the agencies subject to the injunction to the White House, the FBI, the Surgeon General’s office, and the CDC.

Now, acting on a request for review by the government, the U.S. Supreme Court has agreed to hear the case, staying the lower courts’ injunction in the meantime. At least until the High Court rules on the case, Biden Administration officials are not barred from interacting with social media platforms to combat what they view as misinformation.

Justice Alito, joined by Justices Gorsuch and Thomas, dissented from granting the stay, writing: “At this time in the history of our country, what the court has done, I fear, will be seen by some as giving the government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”

Protect The 1st is not so sure. Given that the Court is now set to hear this case, executive branch officials will have good reason to be especially circumspect in the interim. Whatever happens, this case will be of great importance for the First Amendment’s application to online speech and permissible levels of government involvement in urging platforms to moderate content.

And it is only one of several big cases set for consideration before our highest court in the coming months. The Supreme Court also recently agreed to hear a dispute stemming from Florida’s and Texas’ efforts to prohibit social media companies from engaging in some forms of content moderation, which the platforms have always viewed as protected by the First Amendment.
In another case, set for a hearing later this month, the Court will tackle the question of whether public officials can block their critics on social media.

Regarding the present controversy, the Fifth Circuit ruled in September that the White House, the Surgeon General’s office, the FBI, and the CDC either coerced or significantly encouraged social media platforms to moderate protected speech, primarily regarding misinformation about elections and the pandemic.

In the stay application, Solicitor General Elizabeth B. Prelogar argued that the platforms are private entities that made independent content moderation decisions. The government’s interactions with them, in turn, constituted routine advice consistent with its duties to protect public health and safety. “A central dimension of presidential power,” wrote Prelogar, “is the use of the office’s bully pulpit to seek to persuade Americans – and American companies – to act in ways that the president believes would advance the public interest.”

The attorneys general of Missouri and Louisiana, both plaintiffs in the case, responded that the bully pulpit “is not a pulpit to bully,” arguing that the administration went too far in its communications by engaging in threatening and coercive behavior. As such, they assert, the decisions to remove or downgrade certain posts and accounts constituted government action. “The government’s incessant demands to platforms,” they wrote, “were conducted against the backdrop of a steady drumbeat of threats of adverse legal consequences from the White House, senior federal officials, members of Congress and key congressional staffers — made over a period of at least five years.”

If, in the end, the Supreme Court determines that the government threatened social media platforms, that will be a consequential finding.
As the dissenting Justices write, “Government censorship of private speech is antithetical to our democratic form of government ...” At the same time, the government must be able to speak to private actors, including social media platforms, on issues of public concern. Ultimately, we need a roadmap for distinguishing between legitimate government action and coercion.

A robust discussion at the national level is best suited to parse the nuances at play when it comes to social media and free speech. Congress should hold bipartisan hearings to determine the circumstances where government advice may be helpful to platforms’ content moderation decisions versus the circumstances where such advice may be coercive. We’ll be watching this case closely as it progresses.

Earlier this summer, we wrote about an opinion and order issued by Judge Terry Doughty of the U.S. District Court for the Western District of Louisiana in the case of Missouri v. Biden. The controversy stemmed from accusations of government censorship and viewpoint discrimination against speech – under both the Biden and the Trump administrations – most notably social media posts related to COVID-19.
The plaintiffs argued that the government pressured social media platforms to such a degree that it interfered with the platforms’ First Amendment right to make their own content moderation decisions. Judge Doughty agreed. The district judge’s controversial order enjoined the White House and a broad range of government agencies from engaging in a wide array of communications with social media platforms, with 10 separate provisions laying out the parameters. The administration appealed to the Fifth Circuit, which stayed the injunction.

Now, a three-judge panel from the Fifth Circuit has weighed in. Broadly, the panel sided with Judge Doughty’s finding that the White House, the Surgeon General’s office, the FBI, and the CDC either coerced or significantly encouraged social media platforms to moderate protected speech. At the same time, the court significantly reduced the scope of the injunction, striking nine of the 10 prohibitions for vagueness, overbreadth, or redundancy. Further, the court found that a range of enjoined parties – including former NIH Infectious Disease Director Anthony Fauci and the State Department – did not engage in impermissible conduct.

What we are now left with is a much narrower injunction with a single prohibition, reading as follows: “Defendants, and their employees and agents, shall take no actions, formal or informal, directly, or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech.
That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.”

Unsurprisingly, the Biden administration is appealing the ruling – this time to the highest court in the land. The U.S. Supreme Court granted the administration’s request for an administrative stay of the Fifth Circuit injunction as the administration prepares to file a petition for certiorari by Oct. 13 (which would allow the Supreme Court to hear the controversy this term).

While it is at least reasonably likely that the Court will agree to hear this case, we stand by our prior position on the issue – that questions surrounding the limits of government interaction with social media companies merit a vigorous, informed public debate. We again urge Congress to hold bipartisan hearings to examine, among other questions, whether social media platforms find the communications with government to be unwelcome pressure or whether they find the information provided to be helpful.

To combat a tide of Covid misinformation, in 2021 the White House began closely monitoring social media companies’ health-related postings. The sense of urgency felt by federal officials was soon reflected in sometimes hyperbolic communications to the public, expressing deep concern that a flood of harmful misinformation was getting in the way of providing accurate Covid-related information. In July 2021, at a White House press conference, the Surgeon General accused social media companies of “enabl[ing] misinformation to poison” the public. Soon after, President Biden responded with his own comment about social media “killing people,” and the White House publicly discussed legal options.
Social media companies apparently understood the message, changing internal policies and making new efforts to deplatform users like the “disinfo dozen,” a list of influencers deemed problematic by the White House. Still, the administration continued its public messaging, with the White House Press Secretary at one point expressing explicit support for Section 230 reforms so the companies can be held accountable for “the harms they cause.”

Of course, the government must be able to communicate freely to the public and with private companies, especially on matters of public health and safety. The parties released from the district court’s injunction likely exercised that right appropriately. There is danger, however, when the government works silently with social media companies to remove content, with no public transparency, especially if there is a hint (or more than a hint) of coercion.

What is that danger, exactly? Reasonable people agree that some public health messages are irresponsible and harmful. But secret censorship, no matter the justification, is the royal road to a censored society. Protect The 1st hopes that congressional hearings and High Court review will bring clarity to the question of government communications with social media, now America’s main public square.

Jeff Kosseff, associate professor of cybersecurity law at the U.S. Naval Academy, titled his acclaimed book about Section 230 The Twenty-Six Words That Created the Internet. Those exact words:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Kosseff did not exaggerate. This statute, part of the Communications Decency Act of 1996, protects platforms and websites from liability for content contained in third-party posts. Section 230 not only protects Facebook or Twitter (now X) from being sued for libelous posts made by their users, it also protects myriad web-based businesses – from Angi (formerly Angie’s List), to Rate My Professors, to a thousand sites that run reviews of hotels, restaurants, and businesses of all sorts. Without Section 230, a wide swath of U.S. digital commerce would cease to exist overnight.

And yet, Justice Clarence Thomas hit a nerve in 2021 when he mused in an opinion that the “right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions.”

Such questions certainly seemed interesting to lawmakers in Florida and Texas. Texas passed a law that bars companies from removing posts based on a poster’s political ideology. That law was upheld last year by the Fifth Circuit. The Florida law, which would prohibit social media platforms from removing the posts of political candidates, was struck down last year by the Eleventh Circuit. At the time, we wrote:

Cert bait doesn’t get more appealing than this. Consider: A split between federal circuits. Laws that would protect free expression in the marketplace of ideas while simultaneously curtailing the speech rights of unpopular companies. Two similar laws with differences governing the moderation of political speech. The petition for SCOTUS review of the Texas and Florida laws practically writes itself.

The First Amendment is aimed only at the government.
It protects the editorial decisions of social media companies while forbidding government control of speech. But being kicked off X, Facebook, Google, and Amazon would certainly feel like being censored. And there may well be First Amendment implications whenever federal agencies are secretly involved in content management decisions.

But if Section 230 is overthrown, what will replace it? In the face of the current circuit split, legal principles get tangled up like fishing lines on a tourist boat. As Kosseff notes in Wired, Americans living under the Fifth Circuit may see drastic alteration of the regulation of internet companies. In the Eleventh Circuit, Section 230 prevails as it is. The resulting confusion is why the Supreme Court will likely have to take up a challenge from NetChoice, which represents tech companies. If the Court doesn’t cut this Gordian knot, we could wind up with a Red State internet and a Blue State internet.

While the judiciary sorts out its thinking, Congress should act. Protect The 1st continues to press policymakers to look at principles similar to those of the bipartisan Platform Accountability and Consumer Transparency (PACT) Act, which would require big social media companies to offer clear standards and due process for those who post, in exchange for the liability protections of Section 230.

A New York Times op-ed by two U.S. senators offers a bipartisan counter to the power of Big Tech – eliminate the legal liability protections that have been the cornerstone of the internet since 1996, while imposing “an independent, bipartisan regulator charged with licensing and policing the nation’s biggest tech companies.”
The ability to license and police is, of course, the ability to control some of America’s largest social media platforms. If enacted, this measure proposed by Sens. Elizabeth Warren (D-MA) and Lindsey Graham (R-SC) would strip away the ability of minority opinions and contentious views to be heard, while subjecting speech to official, top-down policing by a regulator.

The op-ed doesn’t name Section 230, the law that protects platforms that host third-party speech from legal liability. We respect the earnest desire of these two senators to improve the state of online speech, but replacing Section 230 with the vague mandate of a regulator could be profoundly dangerous for the First Amendment’s guarantee of free speech, the lifeblood of democracy.

Section 230 restricts legal liability for illegal acts to the speaker, not the website. It holds those who break the law online accountable for their actions, while holding platforms accountable for preventing serious federal crimes, like the posting of child sexual abuse material. It empowers minorities of all sorts, allowing controversial or unpopular opinions to have their day. Without Section 230, the internet would devolve into a highly sanitized, curated space where any controversial statement or contentious argument would be red-penciled. The elimination of Section 230 would take away the vibrant clash of opinions and replace it with endless cat videos and perhaps the regulator’s officially sanctioned views.

Many believe, and we agree, that Section 230 needs reform. The bipartisan PACT Act would require platforms to give speakers a way to protest having posts removed, while respecting the First Amendment rights of both companies and speakers, with less risk of government heavy-handedness and censorship.

In an amicus brief before the U.S.
Supreme Court earlier this year, Protect The 1st told the Court that curtailing Section 230 of the Communications Decency Act of 1996 “would cripple the free speech and association that the internet currently fosters.” Consistent with that recommendation, the Court today declined various invitations to curtail that law’s important protections for free speech.
Joining with former Sen. Rick Santorum, we demonstrated in our amicus brief that Section 230 – which offers liability protection to computer-services providers that host third-party speech – is essential to enabling focused discussions and keeping the internet from devolving into a meaningless word soup. “If platforms faced liability for merely organizing and displaying user content in a user-friendly manner, they would likely remove or block controversial – but First Amendment protected – speech from their algorithmic recommendations,” PT1st declared. We stated that a vibrant, open discussion must include a degree of protection for sponsors of internet conversations. With Congress always able to amend Section 230 if new challenges necessitate a change in policy, there is no need for the Supreme Court to rewrite that law.

The Supreme Court had shown recent interest in reexamining Section 230. That could still happen, but the two cases that were before the Court turned out to be weak vessels for that review. On Thursday, the Court declined to consider reinterpreting the law in Gonzalez v. Google and Twitter v. Taamneh, finding that the underlying complaints were weak. The Court neither expressly affirmed nor rejected our approach, leaving these issues open for another day and another case. Protect The 1st will remain vigilant against future challenges to Section 230 that could undermine the freedom of speech online.

Our policy director, Erik Jaffe, discusses the U.S. Supreme Court oral argument in Gonzalez v. Google with The Federalist Society.
Via The Federalist Society: On February 21, 2023, the U.S. Supreme Court will hear oral argument in Gonzalez v. Google. After U.S. citizen Nohemi Gonzalez was killed in a terrorist attack in Paris, France, in 2015, Gonzalez’s father filed an action against Google, Twitter, and Facebook. Mr. Gonzalez claimed that Google aided and abetted international terrorism by allowing ISIS to use YouTube for recruiting and promulgating its message. At issue is the platform’s use of algorithms that suggest additional content based on users’ viewing history. Additionally, Gonzalez claims the tech companies failed to take meaningful action to counteract ISIS’ efforts on their platforms. The district court granted Google’s motion to dismiss the claim based on Section 230(c)(1) of the Communications Decency Act, and the U.S. Court of Appeals for the Ninth Circuit affirmed.

The question now facing the Supreme Court is whether Section 230 immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits their liability when they engage in traditional editorial functions (such as deciding whether to display or withdraw content) with regard to such information.

Observers of the U.S. Supreme Court have long wondered if Justice Clarence Thomas would lead his colleagues to hold internet companies that post users’ content to the same liability standard as a publisher.
In a concurrence last year, Justice Thomas questioned Section 230 – a statute that provides immunity for internet companies that post user content. Justice Thomas noted that the “right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions.”

In the case heard today, Gonzalez v. Google, the family of a woman murdered by terrorists in Paris is suing Google not for a direct post, but for a YouTube algorithm that temporarily “recommended” ISIS material after the crime. In oral argument, Justice Thomas struck a more skeptical note. “If you call information and ask for al-Baghdadi’s number and they give it to you, I don’t see how that’s aiding and abetting,” he said. The Justices returned to precedents about lending libraries and bookstores not being held accountable for the content of their books.

Protect The 1st joined with former Sen. Rick Santorum in an amicus brief before the Court arguing that Section 230 protections are absolutely needed to sustain a thriving online marketplace of ideas. Social media companies make a good-faith effort to screen out dangerous content, but with billions of messages, perfection is impossible. Google attorney Lisa Blatt brought this point home in a colorful way, noting that a negative ruling would “either force sites to take down any content that was remotely problematic or to allow all content no matter how vile. You’d have ‘The Truman Show’ versus a horror show.”

The tone and direction of today’s oral argument suggest that the Justices appreciate the potential for an opinion that could have negative unforeseen consequences for free speech. Justice Brett M. Kavanaugh added that the court should not “crash the digital economy.” Protect The 1st looks forward to reading the Court’s opinion and seeing its reasoning.

Former U.S.
Senator Rick Santorum today joined with Protect The 1st to urge the U.S. Supreme Court to reject the petitioners’ argument in Gonzalez v. Google that the algorithmic recommendations of internet-based platforms should make them liable for users’ acts.
Santorum and Protect The 1st told the Court that curtailing Section 230 “would cripple the free speech and association that the internet currently fosters.” As a senator, Santorum had cast a vote for Section 230 to send the bill to President Bill Clinton’s desk for signature in 1996. The Protect The 1st amicus brief informed the Court:
The brief described for the Court the harm to society that would occur if the Court were to disregard Section 230’s inclusion of First Amendment-protected editorial judgments. The brief tells the Court:
And there is no need for the Supreme Court to rewrite Section 230: as amici explained, Congress can choose to amend Section 230 if new challenges necessitate a change in policy. For example, Congress recently eliminated Section 230 immunity when it conflicts with sex trafficking laws, and Congress is currently debating a variety of bills that would address specific concerns about algorithm-based recommendations. The Protect The 1st brief states: “The judiciary is never authorized to interpret statutes more narrowly than Congress wrote them, but it is especially inappropriate to do so when Congress is already considering whether and how to amend its own law.”

Background: This Protect The 1st amicus brief answers the question before the U.S. Supreme Court in Gonzalez v. Google: “Does Section 230(c)(1) of the Communications Decency Act immunize interactive computer services when they make targeted recommendations of information provided by another information content provider?”

The case pending before the Court centers on the murder of Nohemi Gonzalez, a 23-year-old American who was killed in a terrorist attack in Paris in 2015. A day after this atrocity, the ISIS foreign terrorist organization claimed responsibility by issuing a written statement and releasing a YouTube video that attempted to glorify its actions. Gonzalez’s father sued Google, Twitter, and Facebook, claiming that social media algorithms that suggest content to users based on their viewing history make these companies complicit in aiding and abetting international terrorism. No evidence has been presented that these services played an active role in the attack in which Ms. Gonzalez lost her life. A district court granted Google’s motion to dismiss the claim based on Section 230 of the Communications Decency Act, a measure that immunizes social media companies from liability for content posted by users. The U.S. Court of Appeals for the Ninth Circuit affirmed the lower court’s ruling.
The Supreme Court is scheduled to hear oral arguments Feb. 21. CLICK HERE FOR THE AMICUS BRIEF

Protect The 1st is covering the split between the Eleventh and Fifth Circuits over the social media content moderation laws of Texas and Florida, which makes it likely that the U.S. Supreme Court will resolve what decisions about political speech – if any – can be made by states.
As we reported last week, the Florida law – which would prohibit social media platforms from removing the posts of political candidates – was struck down by the Eleventh Circuit. The Texas law, which bars companies from removing posts based on a poster’s political ideology, was upheld by the Fifth Circuit. Both laws aim to address questionable content moderation decisions by Twitter, Meta, Google, and Amazon by eroding the Section 230 liability shield in the Communications Decency Act.

Cert bait doesn’t get more appealing than this. Consider: A split between federal circuits. Laws that would protect free expression in the marketplace of ideas while simultaneously curtailing the speech rights of unpopular companies. Two similar laws with differences governing the moderation of political speech. The petition for SCOTUS review of the Texas and Florida laws practically writes itself.

We were not initially surprised when we heard reports that the Supreme Court was stepping into the Section 230 fray. The Court, however, is set to examine a different set of challenges to Section 230, in a domain oblique to the central questions about political content posed by Texas and Florida. The Court will examine whether the liability protections of Section 230 immunize Alphabet’s Google, YouTube, and Twitter against apparently tangential associations in two cases involving terrorist organizations. Do the loved ones of victims of terror attacks in Paris and Istanbul have the ability to breach 230’s shield?

We don’t mean to diminish the importance of this question, especially to the victims. As far as the central questions of political content moderation and free speech are concerned, however, any decisions in these two cases will have modest impact on the rights and responsibilities of these platforms, a crucial issue at the center of the national debate.
It is our position that taking away Section 230 protections would collapse online commerce and dialogue, while violating the First Amendment rights of social media companies. Love social media companies or hate them – and millions of people are coming to hate them – if you abridge the right of one group of unpopular companies to moderate their content, you degrade the power of the First Amendment for everyone else. We continue to press policymakers to look to the principles behind the bipartisan Platform Accountability and Consumer Transparency (PACT) Act, which would compel the big social media companies to offer clear standards and due process for posters in exchange for continuing the liability protection of Section 230.