The 26 Words that Ended the Internet: The Bipartisan Dilemma Over Section 230

Lauded as the “26 words that created the internet,” Section 230 gave internet platforms the flexibility they needed to grow into the services we know and use today. However, bipartisan calls to reform, or simply repeal, Section 230 reflect no consensus on the future of online free speech and platform liability. Photo by Patrizia Grandicelli.

Section 230 of the Communications Decency Act of 1996 reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It has been lauded as the “26 words that created the internet.” While it gave internet service providers (ISPs) and online platforms the flexibility they needed to develop into the services we know and use today, members of Congress on both sides of the aisle now suggest that Section 230 should be reformed or repealed. However, there is no consensus on how exactly to amend it. Republicans would prefer to hold platforms liable for what they see as censorship of conservative ideology and voices. Democrats would like Section 230 to better encourage tech companies to combat hate speech, extremism, and misinformation on their respective platforms.

At its core, Section 230 gives ISPs and social media companies, such as Facebook and Twitter, immunity from lawsuits over their moderation of user-generated content, allowing them to grow. Under this framework, those companies are considered “distributors,” not “publishers.” Publishers, under common law, can be held liable for defamatory statements in the content they publish. For example, if a newspaper prints a defamatory statement, the publisher can be sued for libel along with the statement’s author. Distributors like social media companies, by contrast, do not typically exercise much editorial control over the content they distribute. This lack of editorial control means that user-generated content is not subject to a distributor’s scrutiny or censorship beyond basic legal compliance requirements, such as the agreed-upon terms of conduct found on many websites.

The makeup of the internet today is very different from when Section 230 became law nearly 30 years ago. In 1996, the social media platforms at the center of most current liability conversations, like Twitter and Facebook, did not yet exist. Independent blogs aggregated into user-curated feeds offered more freedom for open discussion of viewpoints than today’s algorithm-curated feeds, which show users whatever most aligns with their interests. Online services such as AOL and CompuServe hosted forums that functioned more like newspapers blindly publishing Letters to the Editor. Once Section 230 relieved them of the risk of libel lawsuits, Facebook, Twitter, Reddit, and other online spaces were able to develop into open arenas where billions of individuals felt at liberty to express themselves without constraint.

This newfound freedom and anonymity also made it easier to organize and access virtual communities often criticized for political radicalization and the dissemination of disinformation and falsehoods. Still, the structure of early internet forums served as a quasi-internet “public square,” a site of open discussion and civic participation without fear of government retaliation: essentially, a virtual “marketplace of ideas.” This desire to extend First Amendment free speech values to the internet was emphasized in Reno v. American Civil Liberties Union (1997), when the Supreme Court declined to impose restrictions that might curtail the distribution of protected speech rather than encourage it.

Nevertheless, companies’ freedom to take a laissez-faire approach to content moderation has generated conflict as the internet has expanded. Most recently, President Donald Trump was a vocal critic of Section 230. He claimed the provision led to undue censorship of conservative beliefs in particular and tweeted “REPEAL SECTION 230!!!” from his personal account during his presidency. The president blamed Section 230 for Twitter’s ability to put warning labels on his tweets. For example, during the 2020 election, some of President Trump’s tweets were labeled “disputed” and “misleading” after he declared his own victory in several states before the ballot counts were finalized. President Trump’s ire over internet platforms’ moderation powers may have peaked when Twitter permanently suspended him from the platform in 2021. The company cited a desire to prevent “further incitement of violence” following the January 6th insurrection at the Capitol.

Privileges granted by the First Amendment are commonly misattributed to Section 230 by both politicians and journalists. In a 2019 article titled “Why Hate Speech on the Internet is a Never-Ending Problem,” The New York Times printed the iconic 26 words and wrote that the “law shields” internet companies from being liable for hate speech in third-party content. However, the Times later had to publish a correction noting that they had “incorrectly described the law that protects hate speech on the internet. The First Amendment, not Section 230, protects it.” 

Jeff Kosseff, the Section 230 expert who coined the iconic “26 words” moniker, notes the close link between Section 230 and the First Amendment. He believes that conflating the two can lead to misconceptions about what would happen if the provision were actually changed. Kosseff writes, “If you’re saying that there should be more moderation and less harmful content, the big problem with that is that we still have the First Amendment…There’s a lot of stuff that is lawful, but awful, and the government can’t regulate that away with or without Section 230.” If the provision were repealed, as some Republicans wish, platforms would not automatically become liable for user-generated hate speech or disinformation; those types of speech are protected by the First Amendment.

President Trump is not the only Republican politician with an anti-Section 230 perspective. In 2021, Twitter banned Representative Marjorie Taylor Greene for spreading misinformation about COVID-19. Then-House Minority Leader Kevin McCarthy took to Twitter in support of Rep. Greene, referencing Section 230 and stating, “Twitter (all big tech), if you shut down constitutionally protected speech (not lewd and obscene) you should lose 230 protection. Acting as publisher and censorship regime should mean shutting down the business model you rely on today, and I will work to make that happen.” Claiming biased censorship, Republicans want to regulate platforms’ ability to moderate and remove user-generated content. The Protecting Constitutional Rights from Online Platform Censorship Act, introduced in the House by Representative Scott DesJarlais (R-TN-4) in January 2021, would have broadly prohibited platforms from taking any measures that curtail access to, or the availability of, user-generated content protected by the First Amendment.

On May 28, 2020, President Trump issued an executive order that ignored prior judicial interpretations of Section 230 by restricting internet providers’ liability protections. Trump’s proposal also broadened liability by redefining “information content provider” to include anyone who comments on, affirms, or marginally contributes to any material. Republicans appear to be coming at Section 230 from two angles: reducing liability protections and preventing platforms from moderating content on their sites. Both strategies would limit a company’s ability to function and grow with the freedom the provision intended, and both figure in Republicans’ legislative actions, fueled by their broader resentment of “big tech.”

The Center for Democracy & Technology filed a lawsuit on June 2, 2020 against President Trump’s “Executive Order on Preventing Online Censorship.” According to the suit, the executive order violated the First Amendment by “curtailing and chilling the constitutionally protected speech of online platforms and individuals.” Essentially, the plaintiffs argued that free speech would be deterred because people would be less likely to exercise it for fear of government retaliation. The “chilling effect” doctrine is used frequently in cases concerning First Amendment free speech rights. Kate Ruane, the ACLU’s senior legislative counsel, noted that “Donald Trump is a big beneficiary of Section 230. If platforms were not immune under the law, then they would not risk the legal liability that could come with hosting Donald Trump’s lies, defamation and threats.”

Based on precedent, to receive the liability protection Section 230 ensures, companies must make decisions to remove content in “good faith” and based on “objectively reasonable” beliefs. Decisions must also follow the ISP or media platform’s terms of service and come with specific reasoning for the action. Companies that do not act in “good faith” risk liability exposure.

Somewhat surprisingly, there is bipartisan support for changing Section 230. Unsurprisingly, though, each party’s reasoning is entirely distinct. President Joe Biden and Democrats are concerned that user-generated content spreads misinformation and other harmful messaging while internet companies profit from it. While on the campaign trail at the start of 2020, then-candidate Biden called for revoking the tech liability shield for companies like Facebook. Since his election, Biden’s comments about Section 230 have been folded into policy proposals targeting overarching reforms of large tech companies.

In September 2022, the White House released a plan to take on “Big Tech.” Much like the statements from the Trump administration, its actionable items were vague. President Biden’s plan calls to “[r]emove special legal protections for large tech platforms” and says that “the President has long called for fundamental reforms to Section 230.” While the ideas appear grounded in reform, there is no clear process for implementation. In a Wall Street Journal opinion piece President Biden authored to garner support for the plan, he clarified his stance, noting that he wants “Big Tech” to take responsibility for the content spread by its algorithms. Democrats want platforms to do more to moderate content for harmful speech and harassment, the opposite of why President Trump wanted to repeal Section 230.

As precedent suggests, overarching reform would be difficult to apply because of the sheer volume of content posted on platforms. Moreover, Section 230 may not be the best vehicle for the kind of change Democrats, or even Republicans, want. The Supreme Court has consistently affirmed that the First Amendment protects much of what may be considered hate speech. The less-protected categories that Democrats might wish to curtail include “fighting words,” obscenity, and speech inciting imminent lawless action.

Repetition and inaction by both political parties leave something to be desired if actual change is truly the goal, especially since the Supreme Court granted certiorari to two cases referencing Section 230 during its 2023 term. Democratic Senator Ron Wyden (D-OR), a co-author of Section 230, has criticized how tech companies have applied the freedom the provision grants. He originally intended Section 230 to be not only a defensive shield but also an offensive sword, enabling tech companies to actively combat objectionable content on their platforms. However, Wyden has expressed disappointment in the industry’s reluctance to self-moderate, cautioning that if the trend continues, there may be a push for stricter regulation and oversight. In the Senator’s own words, “You don’t use the sword, there are going to be people coming for your shield.”

Outside of rhetoric from both sides of the political spectrum, what would actually happen if Section 230 were changed or repealed? According to The Internet Association, a lobbying group that represents various internet companies, the “best of the internet would disappear.” Yet, frankly, we do not know what would happen. Currently, if a user posts a defamatory comment on Twitter, one can sue the user, not Twitter. One possible outcome of a repeal would be a widespread crackdown on all sorts of speech out of fear that it could expose companies to liability. While that would bolster Democrats’ wish to mitigate harassment, it would likely only amplify Republicans’ claims of ideological censorship.

If Section 230 were changed and online platforms lost their protections as distributors, platforms could be held liable whenever they knew or had reason to know of defamatory or otherwise unlawful user content. There seems to be no compromise between the two political parties. For critics who already think there is too much moderation, increased liability might lead platforms to block content that would otherwise be protected. For Democrats, Section 230 removes incentives for content moderation that should be happening. Perhaps the solution does not fall within the parameters of a provision designed to protect companies; platforms have already reached the point of growth Section 230 was meant to create. If President Biden truly wants to make “Big Tech” accountable to its users and to the government, it may be worth asking whether it is Section 230 granting these protections or whether the answers lie in the rights the First Amendment simply grants to the people.

Claire Schnatterbeck is a summer staff writer for CPR. She is a rising senior (CC ‘24) studying political science. When she’s not searching for an open seat in Butler Library, she can be found listening to a podcast, strolling through Riverside Park looking for dogs, or discussing her favorite Simon and Garfunkel song. She has roots in Illinois and Wyoming.