Our policy director, Erik Jaffe, discusses the U.S. Supreme Court oral argument in Gonzalez v. Google with The Federalist Society.
Via The Federalist Society:
On February 21, 2023, the U.S. Supreme Court will hear oral argument in Gonzalez v. Google.
After U.S. citizen Nohemi Gonzalez was killed in a terrorist attack in Paris, France, in 2015, Gonzalez’s father filed an action against Google, Twitter, and Facebook. Mr. Gonzalez claimed that Google aided and abetted international terrorism by allowing ISIS to use YouTube for recruiting and promulgating its message. At issue is the platform’s use of algorithms that suggest additional content based on users’ viewing history. Additionally, Gonzalez claims the tech companies failed to take meaningful action to counteract ISIS’ efforts on their platforms.
The district court granted Google’s motion to dismiss the claim based on Section 230(c)(1) of the Communications Decency Act, and the U.S. Court of Appeals for the Ninth Circuit affirmed. The question now facing the Supreme Court is whether Section 230 immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits their liability when they engage in traditional editorial functions (such as deciding whether to display or withdraw content) with regard to such information.
Observers of the U.S. Supreme Court have long wondered if Justice Clarence Thomas would lead his colleagues to hold internet companies that post users’ content to the same liability standard as a publisher.
In a concurrence last year, Justice Thomas questioned Section 230 – a statute that provides immunity for internet companies that post user content. Justice Thomas noted that the “right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions.”
In the case heard today, Gonzalez v. Google, the family of a woman murdered by terrorists in Paris is suing Google not over a direct post, but over a YouTube algorithm that “recommended” ISIS material. In oral argument, Justice Thomas struck a skeptical note.
“If you call information and ask for al-Baghdadi’s number and they give it to you, I don’t see how that’s aiding and abetting,” he said. Justices returned to precedents about lending libraries and bookstores not being held accountable for the content in their books.
Protect The 1st joined with former Sen. Rick Santorum in an amicus brief before the Court arguing that Section 230 protections are absolutely needed to sustain a thriving online marketplace of ideas. Social media companies make a good faith effort to screen out dangerous content, but with billions of messages, perfection is impossible.
Google attorney Lisa Blatt brought this point home in a colorful way, noting that a negative ruling would “either force sites to take down any content that was remotely problematic or to allow all content no matter how vile. You’d have ‘The Truman Show’ versus a horror show.”
The tone and direction of today’s oral argument suggest that the Justices appreciate the potential for an opinion that could have negative unforeseen consequences for free speech. Justice Brett M. Kavanaugh added that the Court should not “crash the digital economy.”
Protect The 1st looks forward to reading the Court’s opinion and seeing its reasoning.
Former U.S. Sen. Rick Santorum and Protect The 1st Tell Supreme Court that Curtailing Section 230 Would Harm Americans’ First Amendment Rights
Former U.S. Senator Rick Santorum today joined with Protect The 1st to urge the U.S. Supreme Court to reject the petitioners’ argument in Gonzalez v. Google that the algorithmic recommendations of internet-based platforms should make them liable for users’ acts.
Santorum and Protect The 1st told the Court that curtailing Section 230 “would cripple the free speech and association that the internet currently fosters.” As a senator, Santorum voted in 1996 to send the bill containing Section 230 to President Bill Clinton’s desk for signature.
The Protect The 1st amicus brief described for the Court the harm to society that would occur if the Court were to disregard Section 230’s inclusion of First Amendment-protected editorial judgments.
And there is no need for the Supreme Court to rewrite Section 230: As amici explained, Congress can choose to amend Section 230 if new challenges necessitate a change in policy. For example, Congress recently eliminated Section 230 immunity when it conflicts with sex trafficking laws, and Congress is currently debating a variety of bills that would address specific concerns about algorithm-based recommendations.
Protect The 1st’s brief states: “The judiciary is never authorized to interpret statutes more narrowly than Congress wrote them, but it is especially inappropriate to do so when Congress is already considering whether and how to amend its own law.”
This Protect The 1st amicus brief answers the question before the U.S. Supreme Court in Gonzalez v. Google: “Does Section 230(c)(1) of the Communications Decency Act immunize interactive computer services when they make targeted recommendations of information provided by another information content provider?”
The case pending before the Court centers on the murder of Nohemi Gonzalez, a 23-year-old American who was killed in a terrorist attack in Paris in 2015. A day after this atrocity, the ISIS foreign terrorist organization claimed responsibility by issuing a written statement and releasing a YouTube video that attempted to glorify its actions. Gonzalez’s father sued Google, Twitter, and Facebook, claiming that the social media algorithms that suggest content to users based on their viewing history make these companies complicit in aiding and abetting international terrorism.
No evidence has been presented that these services played an active role in the attack in which Ms. Gonzalez lost her life. A district court granted Google’s motion to dismiss the claim based on Section 230 of the Communications Decency Act, a measure that immunizes social media companies from liability for content posted by users. The U.S. Court of Appeals for the Ninth Circuit affirmed the lower court’s ruling.
The Supreme Court is scheduled to hear oral arguments Feb. 21.
CLICK HERE FOR THE AMICUS BRIEF
SCOTUS Edges Toward Section 230 Review
Protect The 1st is covering the split between the Eleventh and Fifth Circuits over the social media content moderation laws of Florida and Texas, which makes it likely that the U.S. Supreme Court will resolve what decisions about political speech – if any – can be made by states.
As we reported last week, the Florida law – which would prohibit social media platforms from removing the posts of political candidates – was struck down by the Eleventh Circuit. The Texas law, which bars companies from removing posts based on a poster’s political ideology, was upheld by the Fifth Circuit. Both laws aim to address questionable content moderation decisions by Twitter, Meta, Google, and Amazon by eroding the Section 230 liability shield in the Communications Decency Act.
Cert bait doesn’t get more appealing than this. Consider: a split between federal circuits; laws that would protect free expression in the marketplace of ideas while simultaneously curtailing the speech rights of unpopular companies; two similar laws with differences governing the moderation of political speech. The petition for SCOTUS review of the Texas and Florida laws practically writes itself.
So we were not surprised by reports that the Supreme Court was stepping into the Section 230 fray. The Court, however, is set to examine a different set of challenges to Section 230, in a domain oblique to the central questions about political content posed by Texas and Florida.
The Court will examine whether the liability protections of Section 230 immunize Alphabet’s Google, YouTube, and Twitter against apparently tangential associations in two cases involving terrorist organizations. Can the loved ones of victims of terror attacks in Paris and Istanbul breach 230’s shield?
We don’t mean to diminish the importance of this question, especially to the victims. As far as the central questions of political content moderation and free speech are concerned, however, any decisions in these two cases will have modest impact on the rights and responsibilities of these platforms – a crucial issue at the center of the national debate.
It is our position that taking away Section 230 protections would collapse online commerce and dialogue while violating the First Amendment rights of social media companies. Love social media companies or hate them – and millions of people are coming to hate them – if you abridge the right of one group of unpopular companies to moderate their content, you degrade the power of the First Amendment for everyone else.
We continue to press policymakers to look to the principles behind the bipartisan Platform Accountability and Transparency Act, which would compel the big social media companies to offer clear standards and due process for posters in exchange for continuing the liability protection of Section 230.