Legal Scholar Eugene Volokh Tells Ninth Circuit: California Social Media Law Requires Companies to “Do the Government’s Dirty Work.”
When does a legal reporting requirement for a social media company become a violation of the First Amendment? When it drums up public and political pressure to enforce viewpoint discrimination.
This is the conclusion of legal scholar Eugene Volokh and Protect The First Foundation, which filed an amicus brief late Wednesday before the Ninth Circuit Court of Appeals asking it to overturn a lower court ruling that upheld a California law requiring social media companies to disclose their content moderation practices. California's AB 587, signed into law by Gov. Gavin Newsom in 2022, compels social media companies to produce two such reports a year on their moderation practices and decisions, to be published on the website of the California Attorney General.
This law “violates the First Amendment’s stringent prohibition on viewpoint discrimination” by “requiring social media companies to define viewpoint-based categories of speech,” declared Volokh, Senior Legal Advisor to Protect The 1st. “The law also requires these companies to report their policies as to those viewpoints, but not other viewpoints ...”
The brief supports X Corp.'s lawsuit, filed in September 2023, which likewise asserts that AB 587 violates the First Amendment, a guarantee that "unequivocally prohibits this kind of interference with a traditional publisher's editorial judgment."
Volokh and Protect The 1st cited the landmark U.S. Supreme Court case NAACP v. Alabama (1958), in which the Court overturned Alabama's demand that the NAACP disclose its membership lists. Compelled disclosure, the Court noted, would expose members to governmental and private community pressures, resulting in the harassment of individuals and the discouragement of their speech.
“Generating either massive fines or public ‘pressure,’ a euphemism for public hostility, triggers the most exacting scrutiny our Constitution demands,” Volokh told the court. “California Assembly Bill 587 violates the First Amendment’s stringent prohibition on viewpoint discrimination. And AB 587 does so by leaning on social media companies to do the government’s dirty work, either through fear of fine or public pressure.”
The brief cites a Supreme Court opinion that states “what cannot be done directly [under the Constitution] cannot be done indirectly.” Volokh writes:
“The intent behind the law is clear from its legislative history, comments by its enforcer (Attorney General Rob Bonta), and common sense. That intent is to strongarm social media companies to restrict certain viewpoints—to combine law and public pressure to do something about how platforms treat those particular viewpoints, and not other viewpoints. That confirms that the facial viewpoint classification in the statute is indeed a viewpoint-based government action aimed at suppressing speech—and that violates the First Amendment.”
Protect The 1st will continue to report on X Corp. v. Bonta as an important flashpoint in the continuing struggle to keep speech free of official regulation.
Should we move to a post-Section 230 internet? Is liability-free content hosting coming to an end?
In Wired, Jaron Lanier and Allison Stanger argue for ending that provision of the Communications Decency Act that protects social media platforms from liability over the content of third-party posts. The two have penned a thoughtful and entertaining analysis about the problems and trajectory of a Section 230-based internet. It’s worth reading but takes its conclusions to an unjustifiable extreme – with unexamined consequences.
The authors acknowledge that Section 230 may have served us well for a time, but they argue that long-running negative trends have outpaced the benefits it provided. They write that modern, 230-protected algorithms heavily influence the promotion of lies and inflammatory speech online, which they plainly do.
“People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice,” Lanier and Stanger write. “If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm.”
They argue algorithms and the “advertising” business model appeal to the most primal elements of the human brain, effectively capturing engagement by promoting the most tantalizing content. “We have learned that humans are most engaged, at least from an algorithm’s point of view, by rapid-fire emotions related to fight-or-flight responses and other high-stakes interactions.” This dynamic has had enormous downstream consequences for politics and society; Section 230 “has inadvertently rendered impossible deliberation between citizens who are supposed to be equal before the law. Perverse incentives promote cranky speech, which effectively suppresses thoughtful speech.”
All this has led to a roundabout form of censorship, where arbitrary rules, doxing, and cancel culture stifle speech. Lanier and Stanger call this iteration of the internet the “sewer of least-common-denominator content that holds human attention but does not bring out the best in us.”
Lanier and Stanger offer valid criticisms of the current state of the net. It is undeniable that discourse has coarsened in connection with the rise of social media platforms and toxic algorithms. Worse, the authors are correct that algorithms provide an incentive for the spreading of lies about people and institutions. Writing that John Smith is a lying SOB who takes bribes will, to paraphrase Twain, pull in a million “likes” around the world before John Smith can tie his shoes.
So what is to be done?
First, do not throw out Section 230 in toto. As we previously said in our brief before the U.S. Supreme Court with former Senator Rick Santorum, gutting Section 230 "would cripple the free speech and association that the internet currently fosters." Without immunity, internet platforms could not organize content in a way that would be relevant and interesting to users. Without Section 230 protections, media platforms would shy away from nearly any controversial content, since they could be frivolously sued anytime someone took offense.
Second, do consider modifications of Section 230 to reduce the algorithmic incentives that fling and spread libels and proven falsehoods. Lanier and Stanger make the point that the current online incentives are so abusive that the unhinged curtail the free speech of the hinged. We should explore ways to reduce the gasoline-pouring tendency of social media algorithms without impinging on speech. Further reform might follow the lines of the bipartisan Internet PACT Act, which would require platforms to maintain clear and transparent content moderation standards and to provide redress for people and organizations that have been unfairly deposted, deplatformed, or demonetized.
Lanier and Stanger are thinking hard and honestly about real problems, but the problems they would create would be much worse. A post-230 social media platform would either be curated to the point of being inane, or not curated at all. Now that would be a sewer.
Still, we give Lanier and Stanger credit for stimulating thought. Everyone agrees something needs to change online to promote more constructive dialogue. Perhaps we are getting closer to realizing what that change should be.
Woman Arrested for Social Media Snark While High Court Protects Suspect’s Right to Tell Police to “Worry About a Head Shot”
A woman in Morris County, New Jersey, was arrested in December for social media posts that officials say constituted threats of terrorism, harassment, and retaliation. The last two of those "threats" seem to pertain to the authorities, who stretched merely obnoxious statements to make them appear to be a "true threat."
Monica Ciardi had been posting to Facebook for weeks about her child custody dispute. Her posts, coming by the dozens, criticized her ex-husband and the Morris County judges presiding over her case. In late December, police finally arrested Ciardi for posting “Judge Bogaard and Judge DeMarzo: If you don’t do what I want then you don’t get to see your kids. Hmm.”
Here's the catch: Ciardi was parroting what the two judges had said to her, neglecting to use quotation marks. Ciardi had meant to post what the judges had declared in court – that if she didn't do what they wanted, then she wouldn't get to see her children.
Ciardi offered an apt analogy:
“This is my personal Facebook page with 50 people on it. They came to my page and then turned around and said I harassed them. That’s like if I know you don’t like me, I go to your house, I stand on your front porch, I overhear you saying bad things about me, and then I call the cops and say, ‘She’s harassing me. I know I’m on her porch, but you should just hear what she said.'”
Ciardi’s attorney said the incident amounted to “the government punishing and jailing a woman for simply speaking her mind.”
Ciardi claims her experience in jail was a nightmare. While there, she says that she received death threats, saw several assaults, and got caught in the line of fire of correctional officers’ pepper spray twice. She says she suffered panic attacks, lost 15 pounds, and was placed in protective custody, which meant she didn’t leave her cell “for more than 45 minutes two to three times a week, max.” That’s stiff punishment for venting frustrations online.
Ciardi spent 35 days in jail until Superior Court Judge Mark Ali, who had originally ordered her detention, ordered her release. Ali cited a recent New Jersey Supreme Court ruling that raised the bar for terroristic threat charges. That case, too, is problematic, but for the opposite reason. Even a First Amendment organization like our own is agog at that ruling.
The New Jersey high court's Jan. 16 ruling involved a man who, during a domestic disturbance call, told police to "worry about a head shot" if they entered his property. He also posted online that he knew where the officers lived and what cars they drove. The court ruled that prosecutors had failed to prove that such statements were credible threats that would "instill fear of injury in a reasonable person in the victim's position" rather than merely "political dissent or angry hyperbole."
While we are pleased to see that Ciardi has been released, we disagree with the New Jersey court’s new precedent. In the case of the “head shot,” the threat was made against the putative target, the police. Unless the comment was made in an obviously sarcastic way or in some manner that indicated its insincerity, law enforcement should be able to take such claims seriously.
There should be an easy line to draw between a woman venting on Facebook about a pending case and a true threat of violence against law enforcement officers. We look forward to further developments in this latest case.
In the closing days of 2023, Elon Musk and X Corp lost the first round of their bid in federal court to overturn a California law that would require social media platforms to disclose their content moderation policies. The law in question came into effect in 2022 and was advertised as a way to tamp down on hate speech, disinformation, harassment, and extremism.
The suit alleged that the law's real purpose was to coerce social media platforms into censoring content deemed problematic by the state. While District Judge William Shubb ruled that the law does impose a substantial compliance burden, he found it does not unjustifiably infringe on First Amendment rights.
Protect The 1st believes X has a strong basis to appeal under settled precedent.
For example, in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio (1985), the U.S. Supreme Court found that states can require an advertiser to disclose information without violating the advertiser's First Amendment free speech protections. But the disclosure requirements must be reasonably related to the state’s interest in preventing deception of consumers. This is not a case of selling gummies and advertising them as cures for cancer.
It is reasonable to assert that some social media companies might do themselves a favor by releasing simple, clear content moderation policies to the public. But we should never forget that these policies are confidential, proprietary information. Requiring their forced disclosure could tip the scales in favor of state-enforced censorship of social media, which at least one federal judge believes is already occurring on a mass scale. Worse, the California law violates the First Amendment by compelling speech on the part of the companies themselves.
Protect The 1st expects X to appeal with good prospects to overturn this ruling.
Censorship controversies made many headlines throughout 2023. We've seen revelations about heavy-handed content moderation by the government and social media companies, and we await looming U.S. Supreme Court decisions on Florida and Texas laws restricting social media. Behind these policies and laws lies a surprising level of public support. A Pew Research Center poll offers a skeleton key for understanding the trend.
According to Pew, a majority of Americans now believe that the government and technology companies should make more concerted efforts to restrict false information online. Fifty-five percent of Pew respondents support the federal government restricting false information online, up from only 39 percent in 2018. Some 65 percent of respondents support tech companies restricting the flow of false information, up from 56 percent in 2018.
Most alarming of all, American adults are now more likely to value content moderation over freedom of information. In 2018, that preference was flipped, with Americans more inclined to prioritize freedom of information over restricting false information – 58 percent vs. 39 percent.
Pew doesn't editorialize when it posts its findings. For our part, we see in these results a disturbing slide in Americans' appreciation for First Amendment principles. Online "noise" from social media trolls is annoying, to be sure, but sacrificing freedom of information for a reduction in bad information is anathema to the very notion of a free exchange of ideas. What is needed, instead, is better media literacy – not to mention a better understanding of what actually constitutes false information, as opposed to opinions with which one may simply disagree.
Still, the poll goes a long way toward explaining some of the perplexing attitudes we’re seeing on college campuses, where polls show college students lack a basic understanding of the First Amendment and increasingly support the heckler’s veto. These poll results also speak to the increasing predilection of public officials to simply block constituents with whom they disagree. And it perhaps explains some of the push-and-pull we’re seeing between big, blue social media platforms and big, red states like Florida and Texas, where one side purports to protect free speech by infringing on the speech rights of others.
While these results are interesting from an academic perspective, the suggested remedies raise major red flags. A majority of Americans want private technology companies to be the arbiters of truth. A lesser but still significant percentage wants the federal government to serve that role. Any institution composed of human beings is bound to fail at such a task.
Ultimately, if we want to protect the free exchange of information, that role must necessarily fall to each of us as discerning consumers of news. The extent to which we are unable to differentiate between factual and false information is an indictment of our educational system. And, as far as content moderation policies are concerned, they must be clear, standardized, and include some form of due process for those subjected to censorship decisions.
More than anything, Americans need to relearn that if we open the door to a private or public sector “Ministry of Truth,” we will eviscerate the First Amendment as we know it. You might be on the winning side initially, but eventually we all lose.
A federal judge in Texas has upheld the state's TikTok ban on devices used for government business. It's the right ruling – a correct response to a narrowly drawn law that serves the state's legitimate interest in prohibiting the use of a potentially harmful social media app in official settings.
TikTok is a Chinese-owned company whose user data has been stored on servers in the PRC. It holds inordinate sway over young people in the US: 67 percent of teens use the platform with some regularity, according to Pew. And there is now credible public evidence that Chinese officials enjoy open access to personal data on the platform and have used it to spy on pro-democracy protestors. An employee of ByteDance, the corporate owner of TikTok, has made that claim.
The Coalition for Independent Technology Research filed the lawsuit in July, arguing that the Texas ban compromises academic freedom. One teacher at the University of North Texas even suggested that they cannot adequately assign coursework without using the app.
Texas' law specifically disallows the use of TikTok on state-owned, official devices. That's in contrast to Montana's outright ban on the app – for everyone. There, U.S. District Judge Donald Molloy held that Montana's law infringed on free speech rights and exceeded the bounds of state authority. He was right, too: his decision was a significant affirmation of the importance of safeguarding fundamental rights in the digital age, particularly within the context of online platforms that serve as crucial arenas for expression.
This court split exemplifies the balance we must strike between protecting user freedoms and enabling a safe digital environment without compromising free expression.
States have every right to prohibit use of a foreign-controlled app on government-owned phones. At the same time, a blanket ban on TikTok is neither a constitutional nor a reasonable response. Americans can speak freely and freely associate, even if they are unaware of the implications of doing so. State officials and employees, by contrast, are subject to different rules. But they are welcome to use TikTok on their personal phones.
As Judge Robert L. Pitman correctly asserts, state universities constitute a “non-public” forum – the touchstone of which is whether “[restrictions] are reasonable in light of the purpose which the forum at issue serves.” Here, “Texas is providing a restriction on state-owned and -managed devices, which constitute property under Texas’s governmental control….” It is both viewpoint neutral and reasonable – which is all that is needed in such cases.
Whether TikTok itself is viewpoint neutral is a question for another day.
The Cato Institute’s recent amicus brief making the case that social media laws passed by the states of Texas and Florida are unconstitutional also takes aim at a precedent from 1980, PruneYard Shopping Center v. Robins. Cato’s brief raises the question: Does it make sense to analogize the speech rights of those who own a physical property with those who own a social media company?
In PruneYard, the U.S. Supreme Court held that the California Constitution protected reasonably exercised speech on the privately owned PruneYard shopping center against the owner’s wishes. The Court noted the California Constitution has broader protections for speech than the Bill of Rights. The Court correctly reasoned that states can have greater and positive protections for speech than the negatively defined rights of the First Amendment, which forbids government censorship and curtailments of speech rights.
Based on this singular insight, the Court’s opinion established that the shopping center could not prevent outsiders from protesting or soliciting for political purposes on its private property.
In its brief, Cato argues that the Supreme Court should at the proper time address this odd ruling and hold that forcing private property owners to accommodate on their premises speech they do not support violates the property owners' First Amendment rights. Cato also argues that social media platforms should similarly be protected from being forced to carry the speech of others. While Protect The 1st agrees with Cato that the Texas and Florida laws are unconstitutional, we believe the analogy between PruneYard and social media is flawed. Cato's comparison with real property, however, remains useful, offering an illuminating look at what is unique about social media.
As Protect The 1st previously reported, the Florida law would prohibit social media platforms from removing the posts of political candidates, while the Texas law would bar companies from removing posts based on a poster’s political ideology. The former law was struck down by the Eleventh Circuit, while the latter was upheld by the Fifth Circuit. Both cases are now headed to what promises to be a landmark digital speech review by the Supreme Court.
But is this extension of the critique of PruneYard applicable to social media? The analogy seems inapt because property owners who allow outsiders to mount politically charged events on their premises might face liability for that speech, just as newspapers can be sued for speech contained in letters to the editor. Social media is different. Section 230 of the Communications Decency Act is a government grant of immunity to social media platforms for third-party speech, while allowing the platforms some discretion to moderate content.
Despite frustrations over actual content management by social media companies, and government involvement in it, Section 230 has allowed a thriving online world to develop – along, of course, with all the attendant psychic garbage. This is utterly unlike shopping centers, which don’t enjoy any such government immunity and could be held legally accountable for the speech that occurs on their property.
The two state laws have obvious First Amendment flaws and striking them down doesn’t require revising precedents.
The authors of the Texas and Florida laws, concerned about manipulation of the online debate, would inject further government meddling into social media content moderation. This power would likely extend far beyond what these politicians imagine (and perhaps even to their specific detriment). We suggest the Supreme Court apply a more straightforward analysis to the Florida and Texas laws as it invalidates them under the First Amendment.
U.S. District Judge Donald Molloy recently blocked Montana's ban of the Chinese-owned social media platform TikTok, standing up for free speech but leaving a host of issues for policymakers to resolve. Montana’s ban, which was slated to take effect at the beginning of 2024, made it the first U.S. state to take such a measure against the popular video sharing app.
Judge Molloy asserted that Montana’s law infringed on free speech rights and exceeded the bounds of state authority. This decision is a significant affirmation of the importance of safeguarding fundamental rights in the digital age, particularly within the context of online platforms that serve as crucial arenas for expression.
While celebrating this victory for free speech, it remains essential to acknowledge legitimate concerns over national security and data privacy regarding social media platforms answerable to a malevolent foreign government. TikTok's ownership by China's ByteDance raises pertinent questions about safeguarding user data and its potential exploitation by foreign entities. So worrying were the reports that the FBI opened an investigation into ByteDance in March. The need for robust measures to protect against data scraping, digital surveillance, and misuse of personal information is a valid concern.
This case prompts reflection on the broader social welfare implications of platform regulation. TikTok's substantial user base, particularly youth, holds significant sway over American culture. Striking a balance between protecting user freedoms and privacy enables a safer digital environment without compromising free expression.
Even storing Americans' data in the United States might not be enough to lessen the danger that the regime in Beijing might override any firewalls. A better solution could be to incentivize China's ByteDance to divest TikTok to an American owner. This move would alleviate worries about data security by placing the platform under the oversight and governance of a company within the United States, subject to American laws and regulations.
Ultimately, Judge Molloy's ruling upholds the sanctity of free speech in the digital realm. It should fuel constructive dialogue on the complex challenges TikTok poses to the United States, particularly the tension among individual liberties, national security imperatives in the face of a hostile regime, and the responsibility of digital platforms. Finding a delicate equilibrium among these facets remains an ongoing challenge that requires creative solutions, not restrictions on speech.
A recent Federalist Society debate between NYU law professor Richard Epstein and the Cato Institute’s Clark Neily offered an illuminating preview of an urgent legal question soon to be addressed by the U.S. Supreme Court: can states constitutionally regulate the content moderation policies of social media platforms like Facebook and X (Twitter)?
Florida and Texas say “yes.” A Florida law bars social media companies from banning political candidates and removing anything posted by a “journalistic enterprise” based on its content. A Texas law prohibits platforms with at least 50 million active users from downgrading, removing, or demonetizing content based on a user’s views. Both bills are a response to legislative perceptions of tech censorship against conservative speakers.
These two laws are based on the premise that states can regulate online platforms. But two federal courts came to two entirely different conclusions on that point. In 2022, the U.S. Court of Appeals for the Eleventh Circuit struck down the Florida law, finding “that it is substantially likely that social-media companies – even the biggest ones – are ‘private actors’ whose rights the First Amendment protects ...” Also in 2022, the Fifth Circuit Court of Appeals ruled for Texas, allowing the state law to stand.
In the FedSoc debate, Epstein and Neily agreed about many of the problems some have with social media platforms but diverged – radically – on the remedies.
Epstein argued that social media companies should be regulated like “common carriers,” fee-based public transportation businesses and entities offering communication transmission services such as phone companies. Under federal law, common carriers are required to provide their services indiscriminately; they cannot refuse service to someone based on their political views. Epstein – who himself was deplatformed from YouTube for offering contrarian views on Covid-19 policy – believes this is an appropriate requirement for social media platforms, too.
Epstein cited a number of examples that he classifies as bad behavior by social media companies (collusion with government, acquiescence to government coercion, effective defamation of the deplatformed) which, in his view, compound an underlying censorship concern. He said:
“…[I]t’s a relatively low system of intervention to apply a non-discrimination principle which is as much a part of the constitutional law of the United States as is the freedom of expression principle….”
Neily, by contrast, took the Eleventh Circuit’s perspective, arguing that social media platforms are private companies that make constitutionally protected editorial decisions in order to curate a specific experience for their users. Neily said:
“Even the torrent of Richard’s erudition cannot change three immutable facts. First, social media platforms are private property. There are some countries where that doesn’t matter, and we’re not one of them. Second, these are not just any private companies. These are private companies in the business of speech – of facilitating it and of curating it. That means providing a particular kind of experience. And third, you simply cannot take the very large and very square peg of the social media industry and pound it into the very round hole of common carrier doctrine or monopoly theory or regulated utilities ….”
One of the examples of the bad behavior to which Epstein alludes is presently being litigated in Missouri v. Biden. In that case, it is alleged that the government coerced social media platforms into downgrading or removing content that did not comport with the government’s efforts to ensure the provision of accurate information to the public regarding the Covid-19 pandemic, such as the effectiveness of vaccines. And while coercion is certainly reprehensible, we again agree with Neily as to how it should be addressed – through existing legal remedies. Said Neily: “What we should be doing instead [of regulating] is identifying the officials who engaged in this conduct and going after them with a meat axe.”
When platforms engage in aggressive content moderation, they risk compromising their status as mere hosts of others' content and becoming publishers of that content. The threat of losing Section 230's liability protections in such cases would serve as a useful deterrent to egregious content manipulation.
Meat axes and other hyperbole aside, what we need most is an articulable roadmap for distinguishing between coercion and legitimate government interaction with tech platforms. Advocates of the common carrier argument tend to diagnose the problem accurately but overprescribe the solution. The host of new issues that would arise if we transformed platforms into common carriers is staggering. Shareholder value would plummet, and retirement plans would suffer.
And then there’s the problem of deciding which particular bureaucrats should be entrusted with overseeing these thriving, innovative, bleeding-edge technology companies, and the social media townhall. It’s unlikely the federal bench is that deep.
We cannot seamlessly apply common carrier doctrine to social media platforms, nor should we nullify their constitutional rights just because of their success.
As Neily said: “The idea that somehow you begin to lose your First Amendment rights just because you create some new way of affecting public discourse or even the political process, just because you hit it big … That is utterly alien to our tradition.”
In a recent Fox News interview, presidential candidate Nikki Haley drew a lot of raspberries when she called online anonymous posting a “national security threat.” She proposed that social media platforms require identity verification for all users to stop foreign disinformation campaigns.
Nikki Haley is legitimately concerned with real online dangers. But such a requirement would chill speech, stifle the free flow of ideas and information, harm journalists and their sources, and land several American Founders in internet jail.
Anonymity serves as a shield for individuals to freely express opinions without fear of retribution or persecution. For marginalized communities, victims of abuse, or those living under oppressive regimes, online anonymity can be a lifeline, allowing them to voice their concerns and opinions without risking personal safety. Banning anonymity would silence these voices.
Furthermore, this proposal fails to acknowledge the pivotal role played by anonymous sources in investigative journalism. Banning online anonymity could place an insurmountable barrier for journalists to protect their sources, impeding the public's right to know about crucial matters of public interest.
Platforms that allow anonymity often become safe spaces for open discussions on sensitive topics, mental health, or personal struggles. Removing this protective veil might discourage individuals from seeking help or sharing their experiences, ultimately stifling lifesaving conversations.
Rather than enhancing security, enforced identification online could create an environment ripe for censorship and surveillance, where individuals feel compelled to self-censor out of fear. It may also pave the way for increased government intrusion into private online spaces, eroding the very freedoms the First Amendment aims to protect.
Anonymity plays a vital role in many areas of American life, not just online speech. Since the landmark Supreme Court ruling in 1958, NAACP v. Alabama, the anonymity of donors has been recognized as critical to the protection of speech and the flourishing of the First Amendment. Perhaps that is why civil liberties groups on both the left and right have united to challenge laws that seek to expose donors, given such laws’ history with coercion, discrimination, and surveillance.
Perhaps most important of all, this great country and our freedoms might not exist if not for anonymity. Students of history will know that America's Founders and other pivotal figures made generous use of anonymity. Alexander Hamilton, John Jay, and James Madison wrote under the pseudonym "Publius" when they drafted The Federalist Papers. So too did their opponents, who published the Anti-Federalist Papers anonymously under multiple pseudonyms like "Brutus," "Cato," and "Federal Farmer." Thomas Paine published Common Sense anonymously. On a less monumental scale, Benjamin Franklin wrote under the name "Mrs. Silence Dogood" for the New-England Courant when his brother, the founder and publisher of the newspaper, refused to publish his letters under Benjamin's real name. Were it not for anonymity, American history would look very different.
To be sure, online anonymity has an ugly side. Social media platforms such as Facebook and LinkedIn have a First Amendment right to restrict anonymity and do so for sound business and public policy reasons. A personal attack or bombastic ideological statement without an identifiable author, however, inherently lacks credibility. We believe Americans have become savvier at judging such online graffiti than many experts give them credit for.
Instead of advocating for the elimination of anonymity, we should focus on promoting responsible online behavior, fostering digital literacy, and developing mechanisms that balance security concerns with the preservation of free speech rights.
We’ve said it before – it would be a pointless victory to combat Russian disinformation if we become Russia.