President-elect Donald Trump’s nominee for Chairman of the Federal Communications Commission promises he will “smash the censorship cartel.” A current FCC commissioner, Brendan Carr is a seasoned policymaker and scholar of communication law. He is an unabashed promoter of the free market, promising to reduce regulation and “refill America’s spectrum pipeline” to “unleash economic prosperity.” Carr authored the FCC section of Project 2025, which encapsulates what the FCC’s policy efforts are likely to encompass in the coming years.

Relevant to the First Amendment is Carr’s approach to Section 230. This is the law that grants social media companies immunity from liability for content produced by third parties, while acknowledging the companies’ right to moderate their sites. Carr believes Section 230 has been expanded and abused to censor conservative and other speech, concluding it “is hard to imagine another industry in which a greater gap exists between power and accountability.” That’s why, in his view, the “FCC should issue an order that interprets Section 230 in a way that eliminates the expansive, non-textual immunities that courts have read into the statute.” Specifically, Carr suggests that the “FCC can clarify that Section 230(c)(1) does not apply broadly to every decision that a platform makes. Rather its protections apply only when a platform does not remove information provided by someone else. In contrast, the FCC should clarify that the more limited Section 230(c)(2) protections apply to any covered platform’s decision to restrict access to material provided by someone else.”

What this means, in effect, is much less immunity for platforms under Section 230(c)(1), which courts have broadly interpreted to apply to both distribution and takedown decisions – even though Section 230(c)(2) speaks more directly to the latter. Carr’s proposal is a direct shot at the kind of censorship decisions that have so inflamed conservative circles in recent years, and it means platforms could have substantially less legal protection in such future cases. At the same time, basic publishing and editorial functions (even a hands-off editorial approach), as well as removal of lewd or violent material, would likely remain covered under this framework. (For more on the distinction between Section 230(c)(1) and Section 230(c)(2), we recommend this Congressional Research Service report.)

Carr’s writings make frequent appeals to Congress to reform and update the laws governing the internet, and he appears eager to work with lawmakers to harmonize his regulatory approach with the law. Given the role of courts in interpreting rules against the statutes they are based upon, however, it is hard to predict what this new framework will look like. There’s certainly a scenario where litigation against tech platforms could snowball in a way that harms innovation, consumer experience, and the overall speech climate. Moreover, the First Amendment upholds the right of social media companies to moderate their content. Courts should not allow any rule that compromises their rights. Still, Carr’s effort to carve out more respect for speech by reinterpreting Section 230 is a lighter touch than many legislative proposals.

Carr also suggests placing transparency rules on big social media platforms – specifically, requiring “platforms to provide greater specificity regarding their terms of service.” We would prefer social media companies to take up these rules voluntarily.
Platforms’ moderation decisions should take place in the open, providing clarity to consumers and furthering free expression and association on the handful of sites that have become the nation’s town hall. Carr also advocates for returning “to Internet users the power to control their online experiences,” perhaps through choosing “their own content filters and fact checkers, if any.” At the same time, he concedes that such policies could be seen by some as intruding “on the First Amendment rights of corporations to exclude content from their private platforms.” Carr should heed his own reservation. Protect The 1st wholeheartedly supports the speech rights of private companies and opposes external impositions on this fundamental right.

Regarding national security, Carr firmly supports a ban on TikTok, warning that it provides “Beijing with an opportunity to run a foreign influence campaign by determining the news and information that the app feeds to millions of Americans.” We support the law that requires divestment by China’s ByteDance. With a sale to a U.S. owner, there would be no need for a blanket ban on TikTok that infringes on the speech and associational rights of Americans.

Lastly, Carr seeks to renew the effort to establish wireless connectivity for all Americans by freeing up more spectrum and streamlining the permitting process for wireless builds. According to the FCC, 24 million Americans still lack high-speed Internet as of 2024, and that’s 24 million Americans who are less able to exercise their speech rights than their fellow countrymen.

Overall, Carr’s focus is to modernize the FCC and promote prosperity by turning to a “pro-growth agenda” over the heavy hand of regulatory decree. “The FCC is a New Deal-era agency,” Carr writes. “Its history of regulation tends to reflect the view that the federal government should impose heavy-handed regulation rather than relying on competition and market forces to produce optimal outcomes.” In short, Brendan Carr promises to be a bold leader at the FCC who aims to break policy logjams. Protect The 1st looks forward to evaluating his proposals when they are fleshed out in January.

Former senator and presidential candidate John Kerry said the quiet part out loud in recent comments before the World Economic Forum.
In answer to a question regarding critics of climate change, Kerry responded vigorously, saying: “You know, there’s a lot of discussion now about how you curb those entities in order to guarantee that you’re going to have some accountability on facts, etcetera. But look, if people only go to one source, and the source they go to is sick, and, you know, has an agenda, and they’re putting out disinformation, our First Amendment stands as a major block to be able to just, you know, hammer it out of existence.”

We at Protect The 1st are no critics of the climate change debate, which is an important one. But we cast a critical eye at those who would minimize First Amendment protections to silence their opposition. Kerry said, "Democracies around the world now are struggling with the absence of a sort of truth arbiter, and there’s no one who defines what facts really are." With all respect to Kerry, we’re a hard pass on a Ministry of Truth. The free exchange of ideas, even bad ideas, is essential for an informed discourse.

Earlier we compared the First Amendment records of Sen. J.D. Vance and Gov. Tim Walz, finding the two vice presidential candidates problematic with notable bright spots.
So how do the two candidates at the top of the ticket compare on defending speech? Answer: Even more problematic, but also with some bright spots.

Vice President Kamala Harris

As a U.S. Senator, Harris in 2017 co-sponsored an amendment with her fellow Californian and leading Democrat, the late Sen. Dianne Feinstein, that would have required federal agencies to obtain a probable cause warrant before the FISA Court could allow the government to review the contents of Americans’ emails. Protecting Americans from warrantless surveillance of their private communications concerning personal, political, and religious lives is one of the best ways to protect speech. As a senator, Harris also defended the First Amendment rights of social media platforms to moderate their content. This is not surprising given that she was from California and big tech is one of her best backers.

The Washington Post reports that Karen Dunn, one of Google’s top attorneys in defending against the Biden administration’s antitrust case, is a top Harris advisor. This closeness suggests a danger that a Harris administration might use friendly relations with big tech as a backdoor way to censor critics and conservative speech. Consider that Harris once called for the cancellation of former President Donald Trump’s then-Twitter account, saying: “And the bottom line is that you can’t say that you have one rule for Facebook and you have a different rule for Twitter. The same rule has to apply, which is that there has to be a responsibility that is placed on these social media sites to understand their power … They are speaking to millions of people without any level of oversight or regulation. And that has to stop.”

Why does it have to stop? Americans have spoken for two centuries without any level of oversight or regulation. You might find the speech of many to be vile, unhinged, hateful, or radical. But unless it calls for violence, or is obscene, it is protected by the First Amendment. When, exactly, did liberals lose their faith in the American people and replace it with a new faith in the regulation of speech?

Worse, as California Attorney General, Harris got the ball rolling on trying to force nonprofits to turn over their federal IRS Form 990 Schedule B, which would have given her office the identities of donors. Under Harris’s successor, this case went to the U.S. Supreme Court. Protect The 1st was proud to submit an amicus brief, joined by amici from a coalition of groups from across the ideological spectrum. We demonstrated that the likely exposure of donors’ identities would result in various forms of “cancellation,” from firings and the destruction of businesses, to actual physical threats. A Supreme Court majority agreed with us in Americans for Prosperity Foundation v. Bonta in 2021 that the same principle that defended Alabama donors to the NAACP extends to all nonprofits.

The Biden-Harris administration has also been mum on worldwide crackdowns on speech, from a Brazilian Supreme Court Justice’s cancellation of X, to hints from the French government that this U.S.-based platform might be the next target after the arrest of Telegram CEO Pavel Durov.

Former President Donald Trump

This is a harder one to judge. It’s long been said that Donald Trump wears better if you turn the sound off. On the plus side, President Trump took a notably strong approach in supporting surveillance reform.
A victim himself of illicit surveillance justified by the FBI before the FISA Court with a doctored political dossier and a forged document, President Trump was sensitive to the First Amendment implications of an overweening surveillance state. To his credit, he nixed the reauthorization of one surveillance authority – Section 215, or the so-called “business records provision.” During the pandemic, Trump issued guidance in defense of religious liberty. He said: “Some governors have deemed liquor stores and abortion clinics essential but have left out churches and houses of worship. It’s not right. So I’m correcting this injustice and calling houses of worship essential.” He backed up his defense of religious liberty by appointing three Supreme Court Justices – Neil Gorsuch, Amy Coney Barrett, and Brett Kavanaugh – who have been strong defenders of religious liberty.

But turn the sound back on and you will hear Donald Trump call the American press “the enemy of the people.” Call the media biased, corrupt, in the bag for the Democrats, whatever you like … but “enemy of the people”? Trump’s rhetoric on the media often edges toward physical hostility. As president, he mocked a CNN reporter who was hit with a rubber bullet while covering the 2020 riots in Minneapolis. “Remember that beautiful sight?” Trump asked. At a time when journalists are under threat in America and around the world, this is a decidedly un-American way to confront media bias.

Donald Trump has also called for a loosening of the libel laws to allow elected officials to more easily pursue claims against journalists without having to meet the Supreme Court’s “actual malice” standard. We agree that there is room for sharpening libel law in the age of social media amplification, but allowing wealthy politicians to sue news outlets out of business would be one effective way to gut the First Amendment.

So what should we conclude? Both Harris and Trump have mixed records. Both have taken bold stands for speech. Both have treated the opposition as so evil that they do not deserve legal protections. Both seem capable of surprising us, either by being more prone to censorship or to taking bold stands for free speech. Whatever your political leanings, urge your candidate and your party to lean on the side of the First Amendment.

We’ve already heard a lot of rowdy speech from the two vice presidential candidates, Democratic Minnesota Gov. Tim Walz and Republican U.S. Sen. J.D. Vance. Would they be as generous in applying the First Amendment to others as they do to themselves?
Start with Tim Walz, who, despite correct opinions regarding the tragedy of Warren Zevon being left out of the Rock and Roll Hall of Fame, hasn’t been as on the money when it comes to which types of speech are protected and which are not. In 2022, Walz said on MSNBC: “There's no guarantee to free speech on misinformation or hate speech, and especially around our democracy. Tell the truth, where the voting places are, who can vote, who's able to be there….”

As PT1st senior legal advisor Eugene Volokh points out in Reason: “Walz was quite wrong in saying that ‘There's no guarantee to free speech’ as to ‘hate speech.’ The Supreme Court has made clear that there is no ‘hate speech’ exception to the First Amendment (and see here for more details). The First Amendment generally protects the views that the government would label ‘hateful’ as much as it protects other views.”

Legal treatment of misinformation is more complicated. In United States v. Alvarez, the Supreme Court held that lies “about philosophy, religion, history, the social sciences, the arts, and the like” are largely constitutionally protected. Libel, generally, is not – though, in a defamation case, a public official can only succeed in their claim if they can show that a false statement was published with “actual malice” – in other words, “with knowledge that it was false or with reckless disregard of whether it was false or not.” Categories of intentional misinformation that are patently not protected include lying to government investigators and fraudulent charitable fundraising. Walz may be on firmer ground when it comes to lies about the mechanics of voting – when, where, and how to vote. Thirteen states already ban such statements. As Volokh writes, “[I]f limited to the context that Walz seemed to have been describing – in the Court's words, ‘messages intended to mislead voters about voting requirements and procedures’ – Walz may well be correct.”

On freedom of religion, Walz’s record as governor is concerning. During the pandemic lockdowns, the governor imposed particularly harsh restrictions on religious gatherings, limiting places of worship to a maximum of ten congregants, while allowing retailers to open up at 50 percent capacity. An ensuing lawsuit, which Walz lost, resulted in an agreement granting religious institutions parity with secular businesses. Walz also signed a law prohibiting colleges and universities that require a statement of faith from participating in a state program allowing high school students to earn college credits. As the bill’s sponsor conceded, the legislation was intended in part to coerce religious educational institutions into admitting students regardless of their beliefs – diluting their freedom of association. That controversy is currently being litigated in court. Little wonder the Catholic League declared that “Tim Walz is no friend of religious liberty.”

The Knights of Columbus might agree – at least as it pertains to the broader ticket. In 2018, during the federal judicial nomination hearing for Brian Buescher, then-Sen. Kamala Harris criticized the organization for its “extremist” (read: traditional) views on social issues. Harris also sponsored the “Do No Harm” Act, which would have required health care workers to perform abortions in violation of their religious beliefs.

Regarding Vance, the former Silicon Valley investor is hostile to the speech rights of private tech companies (which certainly enjoy the same First Amendment protections as any other person or group).
In March, the senator filed an amicus brief in support of the State of Ohio’s lawsuit against Google, which seeks to regulate the company as a common carrier. In his brief, Vance argues that Google’s claim that it creates bespoke, curated search results directly conflicts with its past claims of neutrality. Sen. Vance writes: “[Google’s] functions are essentially the same as any communications network: it connects people by transmitting their words and exchanging their messages. It functions just like an old telephone switchboard, but rather than connect people with cables and electromagnetic circuits, Google uses indices created through data analysis. As such, common carrier regulation is appropriate under Ohio law.”

Vance’s argument creeps in the direction of Texas and Florida laws that seek to regulate social media companies’ internal curation policies. Both laws were found wanting by the Supreme Court, which in a strongly worded remand on both laws wrote: “[I]t is no job for government to decide what counts as the right balance of private expression – to ‘un-bias’ what it thinks is biased, rather than to leave such judgments to speakers and their audiences.” Yet Vance also attempts to “un-bias” social media platforms, leaving little to no room for independent curatorial judgment.

On the plus side, Vance has cosponsored numerous bills aimed at curtailing government censorship, including the “Free Speech Protection Act,” which prohibits government officials from “directing online platforms to censor any speech that is protected by the First Amendment.” He also sponsored the PRESERVE Online Speech Act, which would force social media companies to disclose government communications urging the censoring or deplatforming of users.

As the election season progresses, we can hope for more clarity on the candidates’ positions regarding our First Amendment freedoms. It is already clear, however, that both candidates are far from purists when it comes to protecting other people’s speech.

NetChoice v. Texas, Florida

When the U.S. Supreme Court put challenges to Florida and Texas laws regulating social media content moderation on the docket, it seemed assured that this would be one of the yeastiest cases in recent memory. The Supreme Court’s majority opinion came out Monday morning. At first glance, the yeast did not rise after all. These cases were remanded to the appellate courts for a more thorough review.
But a closer look at the opinion shows the Court offering close guidance to the appellate courts, with serious rebukes of the Texas law. Anticipation was high for a more robust decision. The Court was to resolve a split between the Fifth Circuit, which upheld the Texas law prohibiting viewpoint discrimination by large social media platforms, and the Eleventh Circuit, which upheld the injunction against a Florida law regulating the deplatforming of political candidates. The Court’s ruling was expected to resolve once and for all the hot-button issue of whether Facebook and other major social media platforms can depost and deplatform. Instead, the Court found fault with the scope and precision of both the Fifth and the Eleventh Circuit opinions, vacating both of them.

The majority opinion, authored by Justice Elena Kagan, found that the lower courts failed to consider the extent to which their rulings would affect social media services other than Facebook’s News Feed, including entirely different digital animals, such as direct messages. The Supreme Court criticized the lower courts for not asking how each permutation of social media would be impacted by the Texas and Florida laws. Overall, the Supreme Court is telling the Fifth and Eleventh Circuits to drill down and spell out a more precise doctrine that will be a durable guide for First Amendment jurisprudence in social media content moderation.

But today’s opinion also contained ringing calls for stronger enforcement of First Amendment principles. The Court explicitly rebuked the Fifth Circuit for its approval of the Texas law, saying its “decision rested on a serious misunderstanding of the First Amendment precedent and principle.” It pointed to a precedent, Miami Herald Publishing Co. v. Tornillo, in which the Court held that a newspaper could not be forced to run a political candidate’s reply to critical coverage. The opinion is strewn with verbal minefields that will likely doom the efforts of Texas and Florida to enforce their content moderation laws. For example: “But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression – to ‘un-bias’ what it thinks is biased, rather than to leave such judgments to speakers and their audiences.”

The Court delved into the reality of content moderation, noting that the “prioritization of content” selected by algorithms from among billions of posts and videos in a customized news feed necessarily involves judgment. An approach without standards would turn any social media site into a spewing firehose of disorganized mush. The Court issued a brutal account of the Texas law, which prohibits blocking posts “based on viewpoint.” The Court wrote: “But if the Texas law is enforced, the platforms could not – as they in fact do now – disfavor posts because they …” – and the opinion goes on to list examples, from posts supporting Nazi ideology to posts advocating terrorism, that platforms now routinely disfavor.
So what appeared on the surface to be a punt is really the Court’s call for a more fleshed-out doctrine that respects the rights of private entities to manage their content without government interference. For a remand, this opinion is surprisingly strong – and strong in protection of the First Amendment.

Murthy v. Missouri: Supreme Court Punts on Social Media Censorship – Alito Pens Fiery Dissent
6/26/2024
The expected landmark, decision-of-the-century Supreme Court opinion on government interaction with social media content moderation – and possible official censorship of Americans’ speech – ended today not with a bang, not even with a whimper, but with a shrug.
The Justices ruled 6-3 in Murthy v. Missouri to overturn a lower court’s decision that found that the federal government likely violated the First Amendment rights of Missouri, Louisiana, and five individuals whose views were targeted by the government as “misinformation.” The Court’s reasoning, long story short, is that the two states and five individuals lacked Article III standing to bring this suit. The Court found that the individuals could not identify past injuries to their speech rights traceable to the government’s conduct. In short, a case that could have defined the limits of government involvement in speech for the central media of our time was deflected by the Court largely on procedural grounds.

Justice Samuel Alito, writing a dissent signed by Justices Clarence Thomas and Neil Gorsuch, implicitly criticized this punt, calling Murthy v. Missouri “one of the most important free speech cases to reach this Court in years.” He compared the Court’s stance in this case to the recent National Rifle Association v. Vullo, an opinion that boldly protected private speech from government coercion. The dissenters disagreed with the Court on one of the plaintiffs’ standing, finding that Jill Hines, a healthcare activist whose opinions on Covid-19 were blotted out at the request of the government, most definitely had standing to sue. Alito wrote: “If a President dislikes a particular newspaper, he (fortunately) lacks the ability to put the paper out of business. But for Facebook and many other social media platforms, the situation is fundamentally different. They are critically dependent on the protections provided by §230 of the Communications Decency Act of 1996 … For these and other reasons, internet platforms have a powerful incentive to please important federal officials …”

We have long argued that when the government wants to weigh in on “misinformation” (and “disinformation” from malicious governments), it must do so publicly. Secret communications from the government to the platforms to take down one post or another are inherently offensive to the Constitution and likely to lead us to a very un-American place. Let us hope that the Court selects a case in which it accepts the standing of the plaintiffs in order to give the government, and our society, a rule to live by.

William Schuck, writing in a letter to the editor in The Wall Street Journal:
“The world won’t end if Section 230 sunsets, but it’s better to fix it. Any of the following can be done with respect to First Amendment-protected speech, conduct and association: Require moderation to be transparent, fair (viewpoint and source neutral), consistent, and appealable; prohibit censorship and authorize a right of private action for violations; end immunity for censorship and let the legal system work out liability.

“In any case, continue immunity for moderation of other activities (defamation, incitement to violence, obscenity, criminality, etc.), and give consumers better ways to screen out information they don’t want. Uphold free speech rather than the prejudices of weak minds.”

The House Energy and Commerce Committee recently held a hearing on a bill that would sunset Section 230 of the Communications Decency Act within 18 months. This proposed legislation, introduced by Chair Cathy McMorris Rodgers and Ranking Member Frank Pallone, aims to force Big Tech to collaborate with Congress to establish a new framework for liability. This push to end Section 230 has reopened the debate about the future of online speech and the protections that underpin it.
Section 230 has been a cornerstone of internet freedom, allowing online platforms to host user-generated content without being liable for what their users post. This legal shield has enabled the growth of vibrant online communities, empowered individuals to express themselves freely, and supported small businesses and startups in the digital economy. The bill’s proponents claim that Section 230 has outlived its usefulness and is now contributing to a dangerous online environment. This perspective suggests that without the threat of liability, platforms have little incentive to protect users from predators, drug dealers, and other malicious actors.

We acknowledge the problems. But without Section 230, social media platforms would either become overly cautious, censoring a wide range of lawful content to avoid potential lawsuits, or they might avoid moderating content altogether to escape liability. This could lead to a less free and more chaotic internet, contrary to the bill’s intentions.

It is especially necessary for social media sites to reveal when they’ve been asked by agents of the FBI and other federal agencies to remove content because it constitutes “disinformation.” When the government makes a request of a highly regulated business, it is not treated by that business as a mere request. This is government censorship by another name. If the government believes a post is from a foreign troll, or foments dangerous advice, it should log its objection on a public, searchable database – a sketch of what one entry in such a log might look like appears below.

Any changes to Section 230 must carefully balance the need to protect users from harm with the imperative to uphold free speech. Sweeping changes or outright repeal would stifle innovation and silence marginalized voices. Protect The 1st looks forward to further participation in this debate.
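To make the proposal concrete, here is a minimal sketch, in Python, of what a single record in a public, searchable objection log might contain. Everything here – field names, values, and structure – is our own hypothetical illustration, not a description of any existing statute or agency system.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class TakedownObjection:
    """One public record of a government objection to an online post.

    Hypothetical structure, for illustration only.
    """
    agency: str             # e.g., "FBI" (hypothetical example)
    logged_on: date         # date the objection was publicly filed
    platform: str           # service where the content appeared
    content_reference: str  # stable identifier for the post
    claimed_basis: str      # e.g., "foreign influence operation"
    rationale: str          # the government's public explanation

# The whole point is visibility: the objection sits in a public list
# anyone can search, rather than in a private channel to the platform.
log = [
    TakedownObjection(
        agency="ExampleAgency",
        logged_on=date(2024, 5, 1),
        platform="ExamplePlatform",
        content_reference="post/12345",
        claimed_basis="foreign troll activity",
        rationale="Account traced to a state-run influence network.",
    )
]

# Serialize for publication; dates become strings for JSON.
print(json.dumps(
    [asdict(r) | {"logged_on": r.logged_on.isoformat()} for r in log],
    indent=2,
))
```

Whatever the exact format, the design choice that matters is that the government’s objection is logged in public at the moment it is made, so that platforms, users, and courts can all see who asked for what, and why.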
Can a government regulator threaten adverse consequences for banks or financial services firms that do business with a controversial advocacy group like the National Rifle Association? Can FBI agents privately jawbone social media platforms to encourage the removal of a post the government regards as “disinformation”?
As the U.S. Supreme Court considers these questions in NRA v. Vullo and Murthy v. Missouri, a FedSoc Film explores the boundary between a government that informs and one that uses public resources for propaganda or to coerce private speech. (“Nice social media company you have there. Shame if anything happened to it.”)

Posted next to this film, Jawboned, on the Federalist Society website is a podcast in which Protect The 1st’s own Erik Jaffe explores the extent to which the government, using public monies and resources, should be allowed to speak, if at all, on matters of opinion. Is the expenditure of tax dollars to push a favored government viewpoint a violation of the First Amendment rights of Americans who disagree with that view? Jaffe thinks so and argues why this is the logical conclusion of decades of First Amendment jurisprudence.

Furthermore, when the government tells a private entity subject to its power or control what the government thinks it ought to be saying (or not saying), Jaffe says, “there’s always an implied ‘or else.’” And even the government’s own public speech often has coercive consequences. As if to underscore this point, Jawboned recounts the story of how the federal Office of Price Administration during World War Two lacked the authority to order companies to reduce prices but did threaten to publicly label them and their executives as “unpatriotic.” That was a very real threat in wartime. Imagine the “or else” sway government has today over highly regulated firms like X, Meta, or Google.

In short, Jaffe argues that a line is crossed when “the power and authority of the government” is invoked to use “the power of office to coerce people.” But the government also crosses the line when it uses its resources (funded by compelled taxes and other fees) to amplify its own viewpoint on questions being debated by the public. Such compelled support for viewpoint-selective speech violates the freedom of speech of the public in the same way compelled support for private expressive groups and viewpoints does. Click here to listen to more of Erik Jaffe’s thoughts on the limits of government speech and to watch Jawboned.

The U.S. Supreme Court heard oral arguments Monday in Murthy v. Missouri, a case addressing the government's covert efforts to influence social media content moderation during the Covid-19 pandemic. Under pressure from federal and state actors, social media companies reportedly engaged in widespread censorship of disfavored opinions, including those of medical professionals commenting within their areas of expertise.
The case arose when Missouri and Louisiana filed suit against the federal government, arguing that the Biden Administration pressured social media companies to censor certain views. The government responded that it only requested, not pressured or demanded, that social media companies comply. Brian Fletcher, U.S. Principal Deputy Solicitor General, told the Court it should “reaffirm that government speech crosses the line into coercion only if, viewed objectively, it conveys a threat of adverse government action.”

This argument seems reasonable, but a call from a federal agency or the White House is not just any request. When one is pulled over by a police officer, even if the conversation is nothing but a cordial reminder to get a car inspected, the interaction is not voluntary. Social media companies are large players, and an interaction with federal officials is enough to whip up fears of investigations, regulations, or lawsuits. In Murthy v. Missouri, it just so happens that the calls from federal officials were not mere requests. According to Benjamin Aguiñaga, Louisiana’s Solicitor General, “as the Fifth Circuit put it, the record reveals unrelenting pressure by the government to coerce social media platforms to suppress the speech of millions of Americans. The District Court, which analyzed this record for a year, described it as arguably the most massive attack against free speech in American history, including the censorship of renowned scientists opining in their areas of expertise.”

At the heart of Murthy v. Missouri lies a fundamental question: How far can the government go in influencing social media's handling of public health misinformation without infringing on free speech? Public health is a valid interest of the government, but that can never serve as a pretext to crush our fundamental rights. When pressure to moderate speech is exerted behind the scenes – as it was by 80 FBI agents secretly advising platforms what to remove – that can only be called censorship.

Transparency is the missing link in the government's current approach. Publicly contesting misinformation, rather than quietly directing social media platforms to act, respects both the public's intelligence and the principle of free expression. The government's role should be clear and open, fostering an environment where informed decisions are made in the public arena. Perhaps the government should take a page from Ben Franklin’s book (H/T Jeff Neal): “when Men differ in Opinion, both Sides ought equally to have the Advantage of being heard by the Publick; and that when Truth and Error have fair Play, the former is always an overmatch for the latter …” Protect The 1st looks forward to further developments in this case.

The U.S. Court of Appeals for the Second Circuit recently heard oral arguments in the case of Volokh v. James. It’s another in a series of critical recent cases involving government regulation of online speech – and one the Empire State should ultimately lose.
In 2022, distinguished legal scholar and Protect The 1st Senior Legal Advisor Eugene Volokh – along with social media platforms Rumble and Locals – brought suit against the state of New York after it passed a law prohibiting “hateful” conduct (or speech) online. Specifically, the law prohibits “the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” The law also requires platforms to develop and publish a policy laying out how exactly they will respond to such forms of online expression, as well as to create a complaint process for users to report objectionable content falling within the boundaries of New York’s (vague and imprecise) prohibitions. Should they fail to comply, websites could face fines of up to $1,000 per day.

There are a number of problems with New York’s bid to regulate online speech – not least of which is that there is no hate speech exception to the First Amendment. As the Supreme Court noted in Matal v. Tam, “speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’” Moreover, the law fails to define key terms like “vilify,” “humiliate,” or “incite” – leaving its interpretation up to the eye of the beholder. As Volokh explained in a piece for Reason, “it targets speech that could simply be perceived by someone, somewhere, at some point in time, to vilify or humiliate, rendering the law's scope entirely subjective.” Does an atheist’s post criticizing religion “vilify” people of faith? Does a video of John Oliver making fun of the British monarchy “humiliate” the British people? The hypotheticals are endless because one’s subjective interpretation of another’s speech could cut a million different ways.

In February 2023, a district court ruled against New York, broadly agreeing with Volokh’s arguments. As Judge Andrew L. Carter, Jr. wrote: “The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal.”

To be fair, there is a purported government interest at play here, even if it’s not compelling in the broader context of the law’s vast, unconstitutional reach. The New York law is a legislative response to a 2022 Buffalo supermarket shooting perpetrated by a white supremacist who was, by all accounts, steeped in an online, racist milieu. Every decent person wants to give extremist views no oxygen. But incitement to violence is already a well-established First Amendment exception – unprotected by the law. Broadly compelling websites to create processes for addressing subjective, individualized offenses simply goes too far.

Anticipating New York’s appeal to the Second Circuit, a number of ideologically disparate organizations joined with the Foundation for Individual Rights and Expression, or FIRE (which is litigating the case), in submitting amicus curiae briefs in solidarity with Volokh and his co-plaintiffs.
Those groups – which include the American Civil Liberties Union, the Electronic Frontier Foundation, the Cato Institute, and satirical website the Babylon Bee – stand in uncommon solidarity against the proposition that government should ever be involved in private content moderation policies. As the ACLU and EFF assert, "government interjection of itself into that process in any form raises serious First Amendment, and broader human rights, concerns." True to form, the Babylon Bee’s brief notes that “New York's Online Hate Speech Law would be laughable – if its consequences weren't so serious.” When the U.S. Supreme Court renders its opinion on the Texas and Florida social media laws, it will give legislatures a better guide to developing more precise, articulable means of addressing online content.

Should we move to a post-Section 230 internet? Is liability-free content hosting coming to an end?
In Wired, Jaron Lanier and Allison Stanger argue for ending the provision of the Communications Decency Act that protects social media platforms from liability over the content of third-party posts. The two have penned a thoughtful and entertaining analysis of the problems and trajectory of a Section 230-based internet. It’s worth reading but takes its conclusions to an unjustifiable extreme – with unexamined consequences.

The authors assert that while Section 230 may have served us well for a time, long-running negative trends have outpaced the benefits it provided. They write that modern, 230-protected algorithms heavily influence the promotion of lies and inflammatory speech online, which they obviously do. “People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice,” Lanier and Stanger write. “If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm.” They argue algorithms and the “advertising” business model appeal to the most primal elements of the human brain, effectively capturing engagement by promoting the most tantalizing content. “We have learned that humans are most engaged, at least from an algorithm’s point of view, by rapid-fire emotions related to fight-or-flight responses and other high-stakes interactions.” This dynamic has had enormous downstream consequences for politics and society; Section 230 “has inadvertently rendered impossible deliberation between citizens who are supposed to be equal before the law. Perverse incentives promote cranky speech, which effectively suppresses thoughtful speech.” All this has led to a roundabout form of censorship, where arbitrary rules, doxing, and cancel culture stifle speech. Lanier and Stanger call this iteration of the internet the “sewer of least-common-denominator content that holds human attention but does not bring out the best in us.”

Lanier and Stanger offer valid criticisms of the current state of the net. It is undeniable that discourse has coarsened in connection with the rise of social media platforms and toxic algorithms. Worse, the authors are correct that algorithms provide an incentive for the spreading of lies about people and institutions. Writing that John Smith is a lying SOB who takes bribes will, to paraphrase Twain, pull in a million “likes” around the world before John Smith can tie his shoes.

So what is to be done? First, do not throw out Section 230 in toto. As we previously said in our brief before the U.S. Supreme Court with former Senator Rick Santorum, gutting Section 230 “would cripple the free speech and association that the internet currently fosters.” Without immunity, internet platforms could not organize content in a way that would be relevant and interesting to users. Without Section 230 protections, media platforms would avoid nearly any controversial content, since they could be frivolously sued anytime someone got offended.

Second, do consider modifications of Section 230 to reduce the algorithmic incentives that fling and spread libels and proven falsehoods. Lanier and Stanger make the point that the current online incentives are so abusive that the unhinged curtail the free speech of the hinged. We should explore ways to reduce the gasoline-pouring tendency of social media algorithms without impinging on speech. (The sketch below makes that incentive problem concrete.)
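Here is a minimal sketch, in Python, of the engagement-weighted ranking dynamic Lanier and Stanger describe. The weights, field names, and scoring formula are hypothetical – no real platform publishes its ranking function – but the structure shows why inflammatory content tends to rise: nothing in the objective rewards accuracy or civility.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click-through
    predicted_replies: float  # replies spike on provocative posts
    predicted_shares: float   # shares spread content the fastest

# Hypothetical weights: high-arousal reactions (replies, shares)
# count for more than quiet reads, so a post that provokes outrage
# outranks one that merely informs.
WEIGHTS = {"clicks": 1.0, "replies": 3.0, "shares": 5.0}

def engagement_score(post: Post) -> float:
    return (WEIGHTS["clicks"] * post.predicted_clicks
            + WEIGHTS["replies"] * post.predicted_replies
            + WEIGHTS["shares"] * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is sorted purely by predicted engagement. No term
    # rewards truth or penalizes inflammatory framing -- that is
    # the perverse incentive the authors identify.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Measured policy analysis", 0.30, 0.02, 0.01),
        Post("Inflammatory accusation", 0.25, 0.40, 0.35),
    ])
    for p in feed:
        print(f"{engagement_score(p):.2f}  {p.text}")
```

Run as written, the “Inflammatory accusation” tops the feed (3.20 vs. 0.41) despite drawing fewer clicks, because its predicted replies and shares dominate the score. Reform proposals aimed at “algorithmic amplification” are, in effect, proposals to change the weights in functions like this one.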
Further reform might be along the lines of the bipartisan Internet PACT Act, which would require platforms to maintain clear and transparent content moderation standards and to provide redress for people and organizations who have been unfairly deposted, deplatformed, or demonetized. Lanier and Stanger are thinking hard and honestly about real problems, but the problems they would create would be much worse. A post-230 social media platform would either be curated to the point of being inane, or not curated at all. Now that would be a sewer. Still, we give Lanier and Stanger credit for stimulating thought. Everyone agrees something needs to change online to promote more constructive dialogue. Perhaps we are getting closer to realizing what that change should be.

Censorship controversies made many headlines throughout 2023. We’ve seen revelations about heavy-handed content moderation by the government and social media companies, and we await looming U.S. Supreme Court decisions on Florida and Texas laws to restrict social media. Behind these policies and laws is a surprising level of public support. A Pew Research poll offers a skeleton key for understanding the trend.
According to Pew, a majority of Americans now believe that the government and technology companies should make more concerted efforts to restrict false information online. Fifty-five percent of Pew respondents support federal government removal of false information, up from only 39 percent in 2018. Some 65 percent of respondents support tech companies editing the flow of false information, up from 56 percent in 2018. Most alarming of all, American adults are now more likely to value content moderation over freedom of information. In 2018, that preference was flipped, with Americans more inclined to prioritize freedom of information over restricting false information – 58 percent vs. 39 percent.

Pew doesn’t editorialize when it posts its findings. For our part, these results reveal a disturbing slide in Americans’ appreciation for First Amendment principles. Online “noise” from social media trolls is annoying, to be sure, but sacrificing freedom of information for a reduction in bad information is anathema to the very notion of a free exchange of ideas. What is needed, instead, is better media literacy – not to mention a better understanding of what actually constitutes false information, as opposed to opinions with which one may simply disagree.

Still, the poll goes a long way toward explaining some of the perplexing attitudes we’re seeing on college campuses, where polls show college students lack a basic understanding of the First Amendment and increasingly support the heckler’s veto. These poll results also speak to the increasing predilection of public officials to simply block constituents with whom they disagree. And it perhaps explains some of the push-and-pull we’re seeing between big, blue social media platforms and big, red states like Florida and Texas, where one side purports to protect free speech by infringing on the speech rights of others.

While these results are interesting from an academic perspective, the suggested remedies raise major red flags. Most Americans want private technology companies to be the arbiters of truth. A lesser but still significant percentage wants the federal government to serve that role. Any institution composed of human beings is bound to fail at such a task. Ultimately, if we want to protect the free exchange of information, that role must necessarily fall to each of us as discerning consumers of news. The extent to which we are unable to differentiate between factual and false information is an indictment of our educational system. And, as far as content moderation policies are concerned, they must be clear, standardized, and include some form of due process for those subjected to censorship decisions. More than anything, Americans need to relearn that if we open the door to a private or public sector “Ministry of Truth,” we will eviscerate the First Amendment as we know it. You might be on the winning side initially, but eventually we all lose.

U.S. District Judge Donald Molloy recently blocked Montana's ban of the Chinese-owned social media platform TikTok, standing up for free speech but leaving a host of issues for policymakers to resolve. Montana’s ban, which was slated to take effect at the beginning of 2024, made it the first U.S. state to take such a measure against the popular video sharing app.
Judge Molloy asserted that Montana’s law infringed on free speech rights and exceeded the bounds of state authority. This decision is a significant affirmation of the importance of safeguarding fundamental rights in the digital age, particularly within the context of online platforms that serve as crucial arenas for expression.

While celebrating this victory for free speech, it remains essential to acknowledge legitimate concerns over national security and data privacy regarding social media platforms answerable to a malevolent foreign government. TikTok's ownership by China's ByteDance raises pertinent questions about safeguarding user data and its potential exploitation by foreign entities. So worrying were the reports that the FBI opened an investigation into ByteDance in March. The need for robust measures to protect against data scraping, digital surveillance, and misuse of personal information is a valid concern.

This case prompts reflection on the broader social welfare implications of platform regulation. TikTok's substantial user base, particularly among youth, holds significant sway over American culture. Striking a balance between protecting user freedoms and safeguarding privacy enables a safer digital environment without compromising free expression. Even storing Americans’ data in the United States might not be enough to lessen the danger that the regime in Beijing might override any firewalls. A better solution could be to incentivize China's ByteDance to divest TikTok to American ownership. This move would alleviate worries about data security by placing the platform under the oversight and governance of a company within the United States, subject to American laws and regulations.

Ultimately, Judge Molloy's ruling upholds the sanctity of free speech in the digital realm. It should fuel constructive dialogues on the complex challenges to the United States posed by TikTok, particularly the tension among individual liberties, national security imperatives in the face of a hostile regime, and the responsibilities of digital platforms. Finding a delicate equilibrium among these facets remains an ongoing challenge that requires creative solutions, not restrictions on speech.

A recent Federalist Society debate between NYU law professor Richard Epstein and the Cato Institute’s Clark Neily offered an illuminating preview of an urgent legal question soon to be addressed by the U.S. Supreme Court: can states constitutionally regulate the content moderation policies of social media platforms like Facebook and X (Twitter)?
Florida and Texas say “yes.” A Florida law bars social media companies from banning political candidates and removing anything posted by a “journalistic enterprise” based on its content. A Texas law prohibits platforms with at least 50 million active users from downgrading, removing, or demonetizing content based on a user’s views. Both laws are a response to legislative perceptions of tech censorship against conservative speakers.

These two laws are based on the premise that states can regulate online platforms. But two federal courts came to two entirely different conclusions on that point. In 2022, the U.S. Court of Appeals for the Eleventh Circuit struck down the Florida law, finding “that it is substantially likely that social-media companies – even the biggest ones – are ‘private actors’ whose rights the First Amendment protects ...” Also in 2022, the Fifth Circuit Court of Appeals ruled for Texas, allowing the state law to stand.

In the FedSoc debate, Epstein and Neily agreed about many of the problems some have with social media platforms but diverged – radically – on the remedies. Epstein argued that social media companies should be regulated like “common carriers” – businesses such as fee-based public transportation companies and providers of communications transmission services, like phone companies. Under federal law, common carriers are required to provide their services indiscriminately; they cannot refuse service to someone based on their political views. Epstein – who himself was deplatformed from YouTube for offering contrarian views on Covid-19 policy – believes this is an appropriate requirement for social media platforms, too. Epstein cited a number of examples that he classifies as bad behavior by social media companies (collusion with government, acquiescence to government coercion, effective defamation of the deplatformed) which, in his view, compound an underlying censorship concern. He said: “…[I]t’s a relatively low system of intervention to apply a non-discrimination principle which is as much a part of the constitutional law of the United States as is the freedom of expression principle….”

Neily, by contrast, took the Eleventh Circuit’s perspective, arguing that social media platforms are private companies that make constitutionally protected editorial decisions in order to curate a specific experience for their users. Neily said: “Even the torrent of Richard’s erudition cannot change three immutable facts. First, social media platforms are private property. There are some countries where that doesn’t matter, and we’re not one of them. Second, these are not just any private companies. These are private companies in the business of speech – of facilitating it and of curating it. That means providing a particular kind of experience. And third, you simply cannot take the very large and very square peg of the social media industry and pound it into the very round hole of common carrier doctrine or monopoly theory or regulated utilities ….”

Protect The 1st understands Epstein’s frustration. Social media platforms routinely curate the content posted by third parties in order to ensure conformity with the platforms’ policies and terms of use. Modification of the content or refusal to publish often enrages the party who made the submission. But we remain decisively inclined towards Neily’s view. The First Amendment only prohibits repression of speech by the government.
To carve out constitutional exceptions against private companies based on the disaffection of some with curation decisions would be a tremendously shortsighted error. To again quote Neily: “This is how you lose a constitution of limited government – one exception at a time.”

One of the examples of the bad behavior to which Epstein alludes is presently being litigated in Missouri v. Biden. In that case, it is alleged that the government coerced social media platforms into downgrading or removing content that did not comport with the government’s efforts to provide the public with accurate information about the Covid-19 pandemic, such as the effectiveness of vaccines. And while coercion is certainly reprehensible, we again agree with Neily as to how it should be addressed – through existing legal remedies. Said Neily: “What we should be doing instead [of regulating] is identifying the officials who engaged in this conduct and going after them with a meat axe.”

When platforms engage in aggressive content moderation practices, they risk compromising their status as mere hosts of others’ content to become publishers of the content. The threat of losing the liability protections of Section 230 in these cases would serve as a useful deterrent to egregious content modification. Meat axes and other hyperboles aside, what we need most is an articulable roadmap for distinguishing between coercion and legitimate government interaction with tech platforms.

Advocates of the common carrier argument tend to accurately diagnose the problem but overprescribe the solution. The preponderance of new issues that would arise if we transformed platforms into common carriers is staggering. Shareholder value would plummet, and retirement plans would suffer. And then there’s the problem of deciding which particular bureaucrats should be entrusted with overseeing these thriving, innovative, bleeding-edge technology companies – and the social media town hall. It’s unlikely the federal bench is that deep. We cannot seamlessly apply common carrier doctrine to social media platforms, nor should we nullify their constitutional rights just because of their success. As Neily said: “The idea that somehow you begin to lose your First Amendment rights just because you create some new way of affecting public discourse or even the political process, just because you hit it big … That is utterly alien to our tradition.”

UPDATE: Supreme Court to Hear Arguments on Government Influence Over Social Media Platforms
10/24/2023
In July we analyzed an order issued by Judge Terry A. Doughty of the U.S. District Court for the Western District of Louisiana that enjoined the Biden Administration and a wide range of federal agencies from “urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms.”
Last month, PT1st covered a subsequent ruling from the Fifth Circuit Court of Appeals, which significantly narrowed the scope of the district court’s injunction, reducing the district court’s ten prohibitions on government communications with social media platforms to one, and greatly limiting the agencies subject to the injunction to the White House, the FBI, the Surgeon General’s office, and the CDC.

Now, acting on a request for review by the government, the U.S. Supreme Court has agreed to hear the case, staying the lower courts’ injunction in the meantime. At least until the High Court rules on the case, Biden Administration officials are not barred from interacting with social media platforms to combat what they view as misinformation. Justice Alito, joined by Justices Gorsuch and Thomas, dissented from granting the stay, writing: “At this time in the history of our country, what the court has done, I fear, will be seen by some as giving the government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”

Protect The 1st is not so sure. Given that the Court is now set to hear this case, executive branch officials will have good reason to be especially circumspect in the interim. Whatever happens, this case will be of great importance for the First Amendment’s application to online speech and permissible levels of government involvement in urging platforms to moderate content. And it is only one of several big cases set for consideration before our highest court in the coming months. The Supreme Court also recently agreed to hear a dispute stemming from Florida’s and Texas’ efforts to prohibit social media companies from engaging in some forms of content moderation, which the platforms have always viewed as protected by the First Amendment. In another case, set for argument later this month, the Court will tackle the question of whether public officials can block their critics on social media.

Regarding the present controversy, the Fifth Circuit ruled in September that the White House, the Surgeon General’s office, the FBI, and the CDC either coerced or significantly encouraged social media platforms to moderate protected speech, primarily regarding election misinformation and misinformation about the pandemic. In the stay application, Solicitor General Elizabeth B. Prelogar argued that the platforms are private entities that made independent content moderation decisions. The government’s interactions with them, she argued, constituted routine advice consistent with its duties to protect public health and safety. “A central dimension of presidential power,” wrote Prelogar, “is the use of the office’s bully pulpit to seek to persuade Americans – and American companies – to act in ways that the president believes would advance the public interest.”

The attorneys general of Missouri and Louisiana, both plaintiffs in the case, responded that the bully pulpit “is not a pulpit to bully,” arguing that the administration went too far in its communications by engaging in threatening and coercive behavior. As such, they assert, the decisions to remove or downgrade certain posts and accounts constituted government action.
“The government’s incessant demands to platforms,” they wrote, “were conducted against the backdrop of a steady drumbeat of threats of adverse legal consequences from the White House, senior federal officials, members of Congress and key congressional staffers — made over a period of at least five years.”

If, in the end, the Supreme Court determines that the government threatened social media platforms, that will be a consequential finding. As the dissenting Justices wrote, “Government censorship of private speech is antithetical to our democratic form of government ...” At the same time, the government must be able to speak to private actors, including social media platforms, on issues of public concern. Ultimately, we need a roadmap for distinguishing legitimate government persuasion from coercion. A robust discussion at the national level is best suited to parse the nuances at play when it comes to social media and free speech. Congress should hold bipartisan hearings to determine the circumstances in which government advice may be helpful to platforms’ content moderation decisions and those in which such advice may be coercive. We’ll be watching this case closely as it progresses.

Earlier this summer, we wrote about an opinion and order issued by Judge Terry Doughty of the U.S. District Court for the Western District of Louisiana in the case of Missouri v. Biden. The controversy stemmed from accusations of government censorship and viewpoint discrimination against speech – under both the Biden and Trump administrations – most notably against social media posts related to COVID-19.
The plaintiffs argued that the government pressured social media platforms to such a degree that it interfered with the First Amendment right of the platforms to make their own content moderation decisions. Judge Doughty agreed. The district judge’s controversial order enjoined the White House and a broad range of government agencies from engaging in a wide array of communications with social media platforms, with ten separate provisions laying out the parameters. The administration appealed to the Fifth Circuit, which stayed the injunction.

Now, a three-judge panel of the Fifth Circuit has weighed in. Broadly, the panel sided with Judge Doughty’s finding that the White House, the Surgeon General’s office, the FBI, and the CDC either coerced or significantly encouraged social media platforms to moderate protected speech. At the same time, the court significantly reduced the scope of the injunction, striking nine of the ten prohibitions for vagueness, overbreadth, or redundancy. Further, the court found that a range of enjoined parties – including former National Institute of Allergy and Infectious Diseases Director Anthony Fauci and the State Department – did not engage in impermissible conduct. What we are now left with is a much narrower injunction with a single prohibition, reading as follows: “Defendants, and their employees and agents, shall take no actions, formal or informal, directly, or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.”

Unsurprisingly, the Biden administration is appealing the ruling – this time to the highest court in the land. The U.S. Supreme Court granted the administration’s request for an administrative stay of the Fifth Circuit injunction while the administration prepares to file a petition for certiorari by Oct. 13 (which would allow the Supreme Court to hear the controversy this term). While it is at least reasonably likely that the Court will agree to hear this case, we stand by our prior position on the issue – that questions surrounding the limits of government interaction with social media companies merit a vigorous, informed public debate. We again urge Congress to hold bipartisan hearings to examine, among other questions, whether social media platforms find communications from the government to be unwelcome pressure or whether they find the information provided to be helpful.

To combat a tide of Covid misinformation, in 2021 the White House began closely monitoring social media companies’ health-related postings. The urgency felt by federal officials soon showed in sometimes hyperbolic public communications reflecting their deep concern that a flood of harmful misinformation was getting in the way of providing accurate Covid-related information to the public. In July 2021, at a White House presser, the Surgeon General accused social media companies of “enabl[ing] misinformation to poison” the public. Soon after, President Biden responded with his own comment about social media “killing people,” and the White House publicly discussed legal options.
Social media companies apparently understood the message, changing internal policies and making new efforts to deplatform users like the “disinfo dozen,” a list of influencers deemed problematic by the White House. Still, the administration continued its public messaging, with the White House Press Secretary at one point expressing explicit support for Section 230 reforms so that the companies could be held accountable for “the harms they cause.”

Of course, the government must be able to communicate freely with the public and with private companies, especially on matters of public health and safety. The parties released from the district court’s injunction likely exercised that right appropriately. There is danger, however, when the government works silently with social media companies to remove content, with no public transparency, especially if there is a hint (or more than a hint) of coercion. What is that danger, exactly? Reasonable people agree that some public health messages are irresponsible and harmful. But secret censorship, no matter the justification, is the royal road to a censored society. Protect The 1st hopes that congressional hearings and High Court review will bring clarity to the question of government communications with social media, now America’s main public square.

Jeff Kosseff, associate professor of cybersecurity law at the U.S. Naval Academy, titled his acclaimed book about Section 230 The Twenty-Six Words That Created the Internet. Those exact words:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Kosseff did not exaggerate. This statute, part of the Communications Decency Act of 1996, protects platforms and websites from liability for content posted by third parties. Section 230 not only protects Facebook or Twitter (now X) from being sued for libelous posts made by their users; it also protects myriad web-based businesses – from Angi (formerly Angie’s List), to Rate My Professors, to a thousand sites that run reviews of hotels, restaurants, and businesses of all sorts. Without Section 230, a wide swath of U.S. digital commerce would cease to exist overnight.

And yet, Justice Clarence Thomas hit a nerve in 2021 when he mused in an opinion that the “right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions.” Such questions certainly seemed interesting to lawmakers in Florida and Texas. Texas passed a law that bars companies from removing posts based on a poster’s political ideology. That law was upheld last year by the Fifth Circuit. The Florida law, which would prohibit social media platforms from removing the posts of political candidates, was stricken last year by the Eleventh Circuit. At the time, we wrote: “Cert bait doesn’t get more appealing than this. Consider: A split between federal circuits. Laws that would protect free expression in the marketplace of ideas while simultaneously curtailing the speech rights of unpopular companies. Two similar laws with differences governing the moderation of political speech. The petition for SCOTUS review of the Texas and Florida laws practically writes itself.”

The First Amendment is aimed only at the government. It protects the editorial decisions of social media companies while forbidding government control of speech. But being kicked off X, Facebook, Google, and Amazon would certainly feel like being censored. And there may well be First Amendment implications whenever federal agencies are secretly involved in content management decisions.

But if Section 230 is overthrown, what will replace it? In the face of the current circuit split, legal principles get tangled up like fishing lines on a tourist boat. As Kosseff notes in Wired, Americans living under the Fifth Circuit may see a drastic alteration of the regulation of internet companies, while in the Eleventh Circuit, Section 230 prevails as it is. The resulting confusion is why the Supreme Court will likely have to take up a challenge from NetChoice, which represents tech companies. If the Court doesn’t cut this Gordian knot, we could wind up with a Red State internet and a Blue State internet. While the judiciary sorts out its thinking, Congress should act. Protect The 1st continues to press policymakers to look at principles similar to those of the bipartisan Platform Accountability and Consumer Transparency (PACT) Act, which would require big social media companies to offer clear standards and due process for those who post in exchange for the liability protections of Section 230.

A New York Times op-ed by two U.S. senators offers a bipartisan counter to the power of Big Tech – eliminate the legal liability protections that have been the cornerstone of the internet since 1996, while imposing “an independent, bipartisan regulator charged with licensing and policing the nation’s biggest tech companies.”
The ability to license and police is, of course, the ability to control some of America’s largest social media platforms. If enacted, this measure proposed by Sens. Elizabeth Warren (D-MA) and Lindsey Graham (R-SC) would prevent minority opinions and contentious views from being heard, while subjecting speech to official, top-down policing by a regulator. The op-ed doesn’t name Section 230, the law that protects platforms that host third-party speech from legal liability. We respect the earnest desire of these two senators to improve the state of online speech, but replacing Section 230 with the vague mandate of a regulator could be profoundly dangerous for the First Amendment’s guarantee of free speech, the lifeblood of democracy.

Section 230 restricts legal liability for illegal acts to the speaker, not the website. It holds those who break the law online accountable for their actions, while holding platforms accountable for preventing serious federal crimes, like the posting of child sexual abuse material. It empowers minorities of all sorts, allowing controversial or unpopular opinions to have their day. Without Section 230, the internet would devolve into a highly sanitized, curated space where any controversial statement or contentious argument would be red-penciled. The elimination of Section 230 would take away the vibrant clash of opinions and replace it with endless cat videos and perhaps the regulator’s officially sanctioned views.

Many believe, and we agree, that Section 230 needs reform. The bipartisan PACT Act would require platforms to give speakers a way to protest having posts removed, while respecting the First Amendment rights of both companies and speakers, with less risk of government heavy-handedness and censorship.

In an amicus brief before the U.S. Supreme Court earlier this year, Protect The 1st told the Court that curtailing Section 230 of the Communications Decency Act of 1996 “would cripple the free speech and association that the internet currently fosters.” Consistent with that recommendation, the Court today declined various invitations to curtail that law’s important protections for free speech.
Joining with former Sen. Rick Santorum, we demonstrated in our amicus brief that Section 230 – which offers liability protection to computer-services providers that host third-party speech – is essential to enabling focused discussions and keeping the internet from devolving into a meaningless word soup. “If platforms faced liability for merely organizing and displaying user content in a user-friendly manner, they would likely remove or block controversial – but First Amendment protected – speech from their algorithmic recommendations,” PT1st declared. We stated that a vibrant, open discussion must include a degree of protection for the sponsors of internet conversations. With Congress always able to amend Section 230 if new challenges necessitate a change in policy, there is no need for the Supreme Court to rewrite that law.

The Supreme Court had shown recent interest in reexamining Section 230. That could still happen, but the two cases that were before the Court turned out to be weak vessels for that review. On Thursday, the Court declined to consider reinterpreting this law in Gonzalez v. Google and Twitter v. Taamneh, finding that the underlying complaints were weak. The Court neither expressly affirmed nor rejected our approach, leaving these issues open for another day and another case. Protect The 1st will remain vigilant against future challenges to Section 230 that could undermine the freedom of speech online.

Our policy director, Erik Jaffe, discusses the U.S. Supreme Court oral argument in Gonzalez v. Google with The Federalist Society.
Via The Federalist Society: On February 21, 2023, the U.S. Supreme Court will hear oral argument in Gonzalez v. Google. After U.S. citizen Nohemi Gonzalez was killed in a terrorist attack in Paris, France, in 2015, Gonzalez’s father filed an action against Google, Twitter, and Facebook. Mr. Gonzalez claimed that Google aided and abetted international terrorism by allowing ISIS to use YouTube for recruiting and promulgating its message. At issue is the platform’s use of algorithms that suggest additional content based on users’ viewing history. Additionally, Gonzalez claims the tech companies failed to take meaningful action to counteract ISIS’ efforts on their platforms. The district court granted Google’s motion to dismiss the claim based on Section 230(c)(1) of the Communications Decency Act, and the U.S. Court of Appeals for the Ninth Circuit affirmed. The question now facing the Supreme Court: Does Section 230 immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or does it limit their liability only when they engage in traditional editorial functions (such as deciding whether to display or withdraw content) with regard to such information?

Observers of the U.S. Supreme Court have long wondered if Justice Clarence Thomas would lead his colleagues to hold internet companies that post users’ content to the same liability standard as publishers.
In a concurrence last year, Justice Thomas questioned Section 230 – a statute that provides immunity for internet companies that post user content. Justice Thomas noted that the “right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions.”

In the case heard today, Gonzalez v. Google, the family of a woman murdered by terrorists in Paris is suing Google not for a direct post, but for a YouTube algorithm that temporarily “recommended” ISIS material after the crime. In oral argument, Justice Thomas struck a more skeptical note. “If you call information and ask for al-Baghdadi’s number and they give it to you, I don’t see how that’s aiding and abetting,” he said. The Justices returned to precedents holding that lending libraries and bookstores are not accountable for the content of the books they carry.

Protect The 1st joined with former Sen. Rick Santorum in an amicus brief before the Court arguing that Section 230 protections are absolutely needed to sustain a thriving online marketplace of ideas. Social media companies make a good faith effort to screen out dangerous content, but with billions of messages, perfection is impossible. Google attorney Lisa Blatt brought this point home in a colorful way, noting that a negative ruling would “either force sites to take down any content that was remotely problematic or to allow all content no matter how vile. You’d have ‘The Truman Show’ versus a horror show.” The tone and direction of today’s oral argument suggest that the Justices appreciate the potential for an opinion that could have negative, unforeseen consequences for free speech. Justice Brett M. Kavanaugh added that the Court should not “crash the digital economy.” Protect The 1st looks forward to reading the Court’s opinion and seeing its reasoning.

Former U.S. Senator Rick Santorum today joined with Protect The 1st to urge the U.S. Supreme Court to reject the petitioners’ argument in Gonzalez v. Google that the algorithmic recommendations of internet-based platforms should make them liable for users’ acts.
Santorum and Protect The 1st told the Court that curtailing Section 230 “would cripple the free speech and association that the internet currently fosters.” As a senator, Santorum had cast a vote for Section 230, sending the bill to President Bill Clinton’s desk for signature in 1996. The amicus brief described for the Court the harm to society that would occur if the Court were to disregard Section 230’s inclusion of First Amendment-protected editorial judgments.
And there is no need for the Supreme Court to rewrite Section 230. As amici explained, Congress can choose to amend Section 230 if new challenges necessitate a change in policy. For example, Congress recently eliminated Section 230 immunity where it conflicts with sex trafficking laws, and Congress is currently debating a variety of bills that would address specific concerns about algorithm-based recommendations. The Protect The 1st brief states: “The judiciary is never authorized to interpret statutes more narrowly than Congress wrote them, but it is especially inappropriate to do so when Congress is already considering whether and how to amend its own law.”

Background: This Protect The 1st amicus brief answers the question before the U.S. Supreme Court in Gonzalez v. Google: “Does Section 230(c)(1) of the Communications Decency Act immunize interactive computer services when they make targeted recommendations of information provided by another information content provider?” The case pending before the Court centers on the murder of Nohemi Gonzalez, a 23-year-old American who was killed in a terrorist attack in Paris in 2015. A day after this atrocity, the ISIS foreign terrorist organization claimed responsibility, issuing a written statement and releasing a YouTube video that attempted to glorify its actions. Gonzalez’s father sued Google, Twitter, and Facebook, claiming that social media algorithms that suggest content to users based on their viewing history make these companies complicit in aiding and abetting international terrorism. No evidence has been presented that these services played an active role in the attack in which Ms. Gonzalez lost her life. A district court granted Google’s motion to dismiss the claim based on Section 230 of the Communications Decency Act, a measure that immunizes social media companies from liability for content posted by users. The U.S. Court of Appeals for the Ninth Circuit affirmed the lower court’s ruling. The Supreme Court is scheduled to hear oral arguments Feb. 21.

Protect The 1st is covering the split between the Eleventh and Fifth Circuits over the social media content moderation laws of Texas and Florida, which makes it likely that the U.S. Supreme Court will resolve what decisions about political speech – if any – can be made by states.
As we reported last week, the Florida law – which would prohibit social media platforms from removing the posts of political candidates – was stricken by the Eleventh Circuit. The Texas law, which bars companies from removing posts based on a poster’s political ideology, was upheld by the Fifth Circuit. Both laws aim to address questionable content moderation decisions by Twitter, Meta, Google, and Amazon by eroding the Section 230 liability shield in the Communications Decency Act.

Cert bait doesn’t get more appealing than this. Consider: A split between federal circuits. Laws that would protect free expression in the marketplace of ideas while simultaneously curtailing the speech rights of unpopular companies. Two similar laws with differences governing the moderation of political speech. The petition for SCOTUS review of the Texas and Florida laws practically writes itself.

We were not initially surprised when we heard reports that the Supreme Court was stepping into the Section 230 fray. The Court, however, is set to examine a different set of challenges to Section 230, in a domain oblique to the central questions about political content posed by Texas and Florida. The Court will examine whether the liability protections of Section 230 immunize Alphabet’s Google and YouTube, as well as Twitter, against apparently tangential associations in two cases involving terrorist organizations. Do the loved ones of victims of terror attacks in Paris and Istanbul have the ability to breach 230’s shield? We don’t mean to diminish the importance of this question, especially to the victims. As far as the central questions of political content moderation and free speech are concerned, however, any decisions in these two cases will have modest impact on the rights and responsibilities of the platforms – a crucial issue at the center of the national debate.

It is our position that taking away Section 230 protections would collapse online commerce and dialogue, while violating the First Amendment rights of social media companies. Love social media companies or hate them – and millions of people are coming to hate them – if you abridge the right of one group of unpopular people to moderate their content, you degrade the power of the First Amendment for everyone else. We continue to press policymakers to look to the principles behind the bipartisan Platform Accountability and Consumer Transparency Act, which would compel the big social media companies to offer clear standards and due process for posters in exchange for continuing the liability protections of Section 230.