Rethinking Section 230 After Moody: Platform Editorial Speech Should Not Be Immune

The Supreme Court's landmark decision in Moody v. NetChoice, LLC, 603 U.S. 707 (2024), decided together with NetChoice, LLC v. Paxton, fundamentally reshapes how we must think about platform liability and Section 230 immunity. While much of the initial commentary has focused on the First Amendment implications for state regulation, the Court's recognition of platform content moderation as protected editorial speech has far more sweeping implications for platform immunity under Section 230.

The New Legal Landscape

In Moody, the Supreme Court explicitly recognized that when platforms like Facebook and YouTube make content moderation decisions - including algorithmic amplification, content ranking, and removal choices - they are engaging in constitutionally protected editorial speech. This is no longer the mere hosting of third-party content that Section 230 contemplated; the Court acknowledged that platforms create their own expressive products through the exercise of editorial discretion.

This new reality demands a fundamental reconsideration of Section 230's scope. If platform content moderation constitutes editorial speech (as Moody holds), then why should platforms be immune from liability when that speech causes harm? The answer is simple: they shouldn't be.

Section 230's Original Purpose vs. Modern Reality

When Section 230 was enacted in 1996, social media platforms didn't exist. The law's purpose was to protect platforms from liability for simply hosting third-party content - essentially acting as digital bulletin boards. The intent behind Section 230 was to let the technology evolve without hindering its growth, while encouraging technology companies to moderate content under their own policies. But today's platforms are far more than passive hosts. They employ sophisticated algorithmic systems to curate, amplify, and shape content in ways that the Supreme Court now recognizes as editorial speech.

The fundamental question is this: If a platform's algorithmic amplification of harmful content reflects its own editorial judgment (as Moody holds), why should Section 230 shield it from responsibility for the consequences of those editorial choices?

A New Framework for Platform Liability

The solution is straightforward. Courts should distinguish between:

1. Basic hosting of third-party content (protected by Section 230)

2. Platform editorial decisions about content amplification and moderation (not protected when they cause harm)

This framework preserves Section 230's core purpose while recognizing that platforms should be accountable for their own editorial speech. The good faith exception in Section 230(c)(2) would still provide protection when platforms can demonstrate they took reasonable steps to prevent harm through their content moderation practices.

The Human Cost of Platform Editorial Choices

The real-world consequences of platform editorial decisions are not mere abstractions - they destroy lives:

Teen Suicides from Algorithmic Amplification of Cyberbullying

The deadly impact of platform amplification of harassment was tragically demonstrated in the case of 14-year-old Molly Russell, who died by suicide after Instagram's algorithms repeatedly pushed self-harm and suicide content to her feed. An inquest concluded that the content "contributed to her death in a more than minimal way." The platform's recommendation systems had recognized her vulnerability and continued promoting harmful content nonetheless. See https://www.bloomberg.com/news/articles/2022-09-30/social-media-played-a-role-in-uk-teenager-s-death-judge-says

Similarly, in 2021, Meta faced scrutiny after whistleblower Frances Haugen revealed internal research showing that Instagram's algorithms were promoting content the company knew was worsening teenage girls' body image issues, with some girls reporting suicidal thoughts tied directly to platform-recommended content. See https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739.

Reckless Promotion of Dangerous Products

Amazon was at the heart of a case alleging that it sold "suicide kits" to vulnerable children, allegedly resulting in multiple deaths. Amazon sold sodium nitrite at 98-99% purity - a concentration with no known household use - on its platform. When individuals purchased the chemical, Amazon is alleged to have displayed recommendations for an anti-nausea medication, a suicide instruction handbook, and a scale for measuring out a lethal dose.

https://www.kgun9.com/news/local-news/suicide-kits-tucson-parents-among-several-suing-amazon-for-selling-lethal-chemical-to-their-at-risk-teens

App Features Encouraging Dangerous Behavior

Snapchat has recently come under fire over features alleged to have led to the deaths of several users. In one 2022 case, a driver was allegedly speeding recklessly with passengers in the car, seeking to hit 100 mph to capture it on Snapchat's "speed filter." The car crashed, killing an innocent passenger.

https://www.reuters.com/legal/litigation/crash-victim-gets-new-chance-prove-snap-speed-filter-caused-accident-2022-03-17/

Amplifying Terrorist Recruitment

In Gonzalez v. Google (2023), YouTube was alleged to have aided and abetted terrorism by hosting and amplifying ISIS recruitment content in the lead-up to the 2015 Paris attacks, which injured 416 people and killed 130, including Nohemi Gonzalez. ISIS claimed responsibility for the attacks in a video released on YouTube.

https://www.sir.advancedleadership.harvard.edu/articles/supreme-court-spoken-gonzalez-v-google-now-congresss-turn-section-230

App Features Used to Further Drug Crimes

Another Snapchat case alleged that the platform connected drug dealers to children through features such as "Quick Add" and geolocation, helping dealers find and target children to sell drugs to. The drugs were laced with fentanyl, and numerous children lost their lives as a result.


Murder Following Algorithmic Doxxing and Harassment

The fatal consequences of platform amplification of doxxing were horrifically illustrated in the case of Bianca Devins, who was murdered after Discord's and other platforms' algorithms helped her killer stalk her and then amplified photos of her murder to thousands of users. See https://www.nbcnews.com/news/us-news/man-custody-after-allegedly-killing-teen-posting-photos-her-body-n1030181

In another tragic case, a stalker used information amplified through social media algorithms to locate and murder Alissa Blanton, despite her having obtained a restraining order. The platforms' recommendation systems had continued promoting her personal information through "people you may know" and location-based features. See https://www.washingtonpost.com/world/2021/11/24/online-abuse-surged-during-pandemic-laws-havent-kept-up-activists-say/

Devastating Financial Scams

Meta's algorithms have systematically promoted cryptocurrency scams that have devastated victims, with the FBI reporting over $2 billion in crypto fraud losses in 2022 alone. One horrific example involved elderly victims losing millions after Facebook's algorithms aggressively promoted fraudulent investment schemes through targeted ads and recommendations. Hundreds, if not thousands, of individuals have lost their life savings to crypto scams like these. See https://go.gale.com/ps/i.do?id=GALE%7CA694520730&sid=googleScholar&v=2.1&it=r&linkaccess=abs&issn=22699740&p=AONE&sw=w

In a particularly egregious case, Meta's algorithms were found to be actively promoting scam advertisements to users who had already been victimized, with internal documents showing the company was aware its systems were amplifying fraud but continued to profit from it. See https://www.reuters.com/technology/australia-watchdog-sues-facebook-owner-meta-over-false-cryptocurrency-ads-2022-03-17/

Coordinated Harassment Through Platform Systems

A 2023 study by the Center for Countering Digital Hate found that Instagram's and TikTok's algorithms actively amplified misogynistic content to young male users, with recommendation systems promoting harassment content to millions of viewers. See https://crimsoc.hull.ac.uk/2024/10/04/tate-tiktok-and-toxic-masculinity-is-social-media-to-blame-for-this-generations-violence-against-women/

The deadly impact of algorithmic amplification of harassment was demonstrated in the case of Rana Ayyub, where Twitter's algorithms helped transform isolated threats into a massive coordinated harassment campaign that included death and rape threats. Despite court orders, the platform's recommendation systems continued promoting the harassing content to new users. See https://www.ecpmf.eu/safeguarding-women-journalists-in-the-digital-age/

Under current interpretations, platforms claim Section 230 immunity for all of these situations. But post-Moody, these should be treated as consequences of the platform's own editorial speech. When that speech causes harm, platforms should bear responsibility just like any other speaker.

Platform Responsibility for Real Harm

These aren't just tragic anecdotes - they represent systematic failures of platform editorial judgment. In each case, the harm wasn't simply from the original third-party content, but from the platforms' own editorial decisions to:

- Amplify harmful content through recommendation algorithms

- Connect abusers with victims through "similar content" or "Quick Add" features

- Prioritize engagement over user safety in content ranking and distribution

- Ignore clear patterns and reports of coordinated harassment

- Profit from the amplification of scams and harmful content

When a platform's algorithms actively promote content that leads to death, violence, or massive financial fraud, that's not just hosting third-party speech - it's the platform making editorial choices that directly enable and magnify harm. Section 230 was never meant to immunize such choices.

And we shouldn't have to wait until the intentional, reckless, and grossly negligent choices of these platforms result in catastrophic harm before taking action. Legislatures and courts must recognize that these harms are foreseeable and preventable. Either they proactively regulate these platforms, or they should stand aside and let those who seek justice for the people these platforms have harmed pursue their claims - rather than blocking them, as Section 230 currently does.

Opponents of this position will invoke the slippery-slope fallacy to justify why these social media platforms should be entitled to Section 230 protections, claiming that making platforms liable will destroy individuals' ability to express themselves freely, force the platforms to shut down, and stifle innovation. This is the same logic advanced by opponents of the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA) exceptions to Section 230. The problem is that the evidence proves otherwise: reasonable regulation and incentives for platforms to protect their consumers will NOT stop the platforms from making incredible profits. The likely outcome is that they will profit more, and innovation will continue at breakneck speed, as the seven years since the FOSTA exceptions went into effect have shown.

Benefits of This Approach

This framework would:

1. Create proper incentives for responsible content moderation

2. Provide recourse for victims of algorithmic harm

3. Better align with how modern social media works

4. Preserve protection for good faith moderation efforts

5. Hold platforms accountable for their editorial choices when those choices cause harm or violate their own policies

Moving Forward

The Supreme Court has given us the theoretical foundation - platform editorial choices are speech. Now courts must take the next logical step: recognizing that when that speech causes harm, platforms should not be able to hide behind Section 230 immunity designed for a different era and different purpose.

Moody changes everything. It's time for courts to catch up and recognize that platform editorial decisions deserve First Amendment protection AND accountability - not blanket immunity for the harms they cause.

About the Author

Gabriel Vincent Tese is a seasoned trial attorney and Founding Partner at Cyber Law Firm, specializing in technology and cyber law. With extensive litigation experience in federal and state courts, Vince has successfully handled cases spanning contract, corporate, criminal, and cyber law. A retired Army Officer with over 20 years of service in Military Intelligence and the Judge Advocate General’s Corps, he has been entrusted with high-profile prosecutions, security operations, and government contract advisement. His expertise in AI, robotics, strategic litigation, and cybercrime defense makes him a formidable force in the legal field.
