Is Section 230 necessary? Does it harm free speech? Does it enable misinformation? How should we change it, if at all?

Explore all perspectives, stances, and arguments for and against Section 230 and the regulation of online platforms with AllStances™ by AllSides.


Section 230 of the Communications Decency Act (CDA) protects online companies from liability arising from what is posted on their platforms. It allows web hosts, such as Twitter and Facebook, to let their users post practically any content without making the web host liable. It also permits the web host to edit and restrict access to content posted by users and third parties, even if the material would be constitutionally protected. As tech giants like Google, Facebook, and Twitter have risen to prominence and now choose what speech and content to allow or ban, the clause has become highly controversial.

Enacted in 1996, the Communications Decency Act was created because, at the dawn of the internet, many lawmakers were concerned about the internet’s potential to spread “filth,” such as child pornography and violence.

Senator Ron Wyden (D-OR) and Representative Chris Cox (R-CA) ensured Section 230 of the CDA was added at the last minute in 1996 “to protect the unhindered growth of the internet and free speech.” They were particularly concerned about Stratton Oakmont, Inc. v. Prodigy Services Co., a 1995 court case that found an online bulletin board operator liable as a publisher for content posted by its users. To circumvent that decision, so that online content platforms would not be required to moderate every piece of content posted by their customers and publishers, Section 230 freed online service providers from the legal responsibilities of publishers.

According to the Congressional Research Service, “Courts have interpreted Section 230 to foreclose a wide variety of lawsuits and to preempt laws that would make providers and users liable for third-party content.” Section 230 immunity allows service providers, like Twitter and Facebook, to “act in good faith” to restrict content that provokes violence, is obscene, or is overly explicit. 

More than 25 years later, Section 230 is widely viewed as one of the most impactful and influential pieces of legislation ever enacted concerning the media and the U.S. information economy.

Platforms’ largely unregulated ability to moderate content is at the crux of an accelerating debate among legislators, corporations, and, now, billionaires. Mainstream rhetoric on both sides of the aisle in Washington calls for reforming or abolishing Section 230, but for very different reasons.

Explore all the arguments, stances and perspectives around Section 230 and the regulation of online platforms. Keep in mind that stances aren't mutually exclusive — some people might have viewpoints that align with multiple stances.

The Stances

Stance #1: Reform or Revoke Section 230 to Prevent Bias Against Conservatives

Core Argument: Social media platforms have an anti-conservative bias, leading to unfair moderation practices. They ban, suspend, or otherwise limit conservative speech far more than left-wing speech, creating an imbalance in political discourse. 

  1. Google, Facebook and Twitter have vague policies that limit accountability. We should remove the immunity Big Tech companies receive under Section 230 unless they submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.
  2. Big Tech platforms like Google, Facebook and Twitter are hypocritical and operate under an “open internet for me but not for thee” principle: they advocate for an open and free internet with no restrictive gatekeepers who would block or throttle disfavored content, while also moderating certain content and speech on their platforms.
  3. Facebook, Twitter, and Google receive liability relief for the messages they carry, just like a telephone or electrical utility, but with none of the duties of nondiscrimination. Their near-monopoly power allows the leaders of these private companies to indulge their personal preferences, imposing them on the country’s political discourse.
  4. Section 230 needs to be reformed to prevent platforms from covering up or preventing the spread of important news stories that are not favored by the left-dominated press, like the Hunter Biden laptop story.
  5. Legislators must deliberately overturn the precedent set by Spy Phone Labs LLC v. Google Inc., which allows online platforms to legally engage in targeted censorship of any group.

Stance #2: Reform Section 230 to Protect Free Speech 

Core Argument: The moderation practices of online platforms are no longer consistent with the spirit of Section 230, which was designed to promote free speech. 

  1. Guaranteeing free speech and nondiscrimination on dominant internet platforms will not crush online innovation; reasonable controls will protect free speech and allow our political culture to flourish.
  2. By engaging in censorship, Big Tech companies are behaving more like publishers than like platforms protected by Section 230. When social media companies act as publishers, exercising editorial judgment about what should be seen or censored, or adding their own commentary, they should not be allowed legal protections under Section 230; they should be treated like newspapers or other publications.
  3. Legislators should replace or clarify vague terminology in subsection (c)(2) of Section 230, like “otherwise objectionable” and “good faith,” which gives platforms free rein to remove any content under blanket Section 230 protections.
  4. Internet platforms have grown so big and influential that they resemble governments, but without any checks and balances to defend the freedom of speech.
  5. Big Tech’s politicization of what information Americans can access and when they can access it must be stopped.
  6. Reform would lead to more transparency and accountability. 
  7. Those that manage the “public square” of free speech should be beholden to the same laws and stipulations as state entities under the First Amendment. 
  8. Designating web platforms as common carriers would prevent them from giving undue preference or advantage to particular political, religious, or ethnic groups. Just as cell phone service providers and airlines cannot kick people off their networks or planes on the basis of political views, neither should internet platforms.
  9. Practical reform involves narrowing Section 230 immunities so that egregious censorship once again becomes a bad choice for social media companies; we can limit social media’s power to suppress voices without growing government.

Stance #3: Leave Section 230 As Is

Core Argument: The impact of changing Section 230 is unpredictable and will have chaotic outcomes.

  1. Section 230 has allowed internet platforms to grow exponentially, contributing money and jobs to the economy and driving innovation.
  2. Weakening Section 230 would make platforms fearful of lawsuits, so they would over-moderate and limit free speech. Comment sections on many sites would close for fear of what users might post, and there would be frequent outages of services people use daily, especially social media, while companies look for ways to minimize their risk. Content moderators might become overzealous in deciding what to leave up and what to take down.
  3. Section 230 fosters free speech, and any attempt at changing it would raise First Amendment rights issues. 
  4. Revoking Section 230 would open up online platforms to an onslaught of litigation.
  5. Content moderation needs to be handled platform by platform, and online communities need to establish rules and community standards — the government will not be competent at addressing complex political questions.
  6. Revoking Section 230 would have a disproportionate impact on smaller or medium-sized platforms and apps that cannot afford to police their sites to stay compliant or bear the brunt of long, drawn-out, expensive lawsuits. 
  7. If a federal statute does not implement a replacement policy, the vacuum left by revoking Section 230 would lead to a confusing patchwork of state-level liability protections. 
  8. Concerns about platforms today are more about their scale than the nature of the platforms, and revoking Section 230 is not the answer to fix these issues. 
  9. Revoking Section 230 puts platforms in an impossible position: either allow any kind of offensive content onto their platforms or face liability for offensive content that slips through their moderation systems.
  10. Designating online platforms as “common carriers” might make offensive content worse and would prevent platforms from at least making an effort to moderate content.
  11. Claims that platforms are biased against conservatives have been debunked by multiple studies.
  12. Limiting content moderation protections could hinder platforms’ ability to limit bullying and harassment, which can create “real-world dangers” for LGBTQ people and other minorities.

Stance #4: Reform or Revoke Section 230 to Prevent Misinformation

Core Argument: Section 230 immunities have allowed platforms like Facebook and Instagram to become hotbeds of misinformation, demonstrating the need for reform. 

  1. Platforms have allowed falsehoods about the coronavirus, the 2020 election, and the war in Ukraine to spread exponentially on their sites without noticeable improvement in moderation.
  2. Beyond misinformation, the lack of platform accountability under Section 230 protections has enabled atrocities. According to the UN, Facebook misinformation played a “determining role” in the genocide of the Rohingya in Myanmar. Section 230 also allows horrific content like revenge porn to go unchecked.
  3. Misinformation gets more clicks than real news on Facebook, and teens who spend more time on platforms with Section 230 immunities are more likely to be depressed.
  4. Section 230 has emboldened online platforms to moderate content, arrange ads, and implement algorithms in ways that threaten the health and safety of their users.
  5. Section 230 allows internet companies to operate in a “regulation-free” zone. To fix this, it must be weakened to remove some immunities. 
  6. The government should subject online platforms to intense scrutiny of their algorithms, ensuring that they protect the interests of the consumer. 
  7. “Carve-outs” should be created to prevent particularly egregious behavior from occurring unchecked on these sites. In particular, a “Bad Samaritan” carve-out would prevent platforms from promoting illegal activity.

Are we missing a stance or perspective? Email us!


Writers: 

Ethan Horowitz, News Assistant (Lean Right bias)

Reviewers:

Henry A. Brechter, Managing Editor (Center bias)

Julie Mastrine, Director of Marketing and Media Bias Ratings (Lean Right bias)

Joseph Ratliff, Daily News Editor (Lean Left bias)

John Gable, CEO (Lean Right bias)