GenAI and the First Amendment

A few months ago, we, along with Jena Hwang, another AI researcher, authored a paper that emphatically and categorically rejects the notion that outputs from generative AI (like the responses ChatGPT provides to user prompts) should be protected by the First Amendment. In fact, outputs from GenAI aren’t speech at all! This means legal luminaries like Mark Lemley, Eugene Volokh, and Cass Sunstein are wrong when they argue that First Amendment protections stretch to GenAI outputs.[1]

The implications of how far the First Amendment reaches are HUGE. Our paper is available on SSRN (and here is a ~10-minute AI-generated podcast of it); a further refined version will be published in the First Amendment Law Review in the spring. As we note there, and as legal scholars Karl Manheim and Jeffrey Atik explain, the repercussions of extending First Amendment protections to GenAI outputs would be nontrivial.

Furthermore, such protections would severely limit the regulations necessary to keep democracy afloat. Perhaps you’ve heard the lies zipping around the internet following the hurricanes that devastated portions of Florida, North Carolina, and nearby states. Alongside bonkers claims that FEMA was confiscating property and that the government can control the weather, there were many AI-generated images passed off as authentic.

Our paper argues that while the use of GenAI outputs may be protected speech, the model’s generation of outputs is not speech and is not protected by the First Amendment. Therefore, our democratic system could decide to pass laws that curtail GenAI in ways that disincentivize creating technology capable of undermining democracy, and that prevent foreseeable harms. Absent such powers, the country would essentially be tying its hands behind its back, unable to defend itself.

To be clear, we are strong advocates of free speech for humans. However, not everyone thinks the Constitution’s speech rights should apply only to humans. Furthermore, the idea that the solution to harmful speech is to counter it with accurate and helpful speech is meaningless in a world where anyone can create a virtually unlimited number of outputs in an instant, for free.

Eugene Volokh, mentioned above, authored a paper in 2016, on Google’s behalf, arguing that the First Amendment protects the content Google displays. Social media platforms have since used these same arguments to fend off regulations. Raise your hand if you think social media has led to an improved exchange of ideas and a more informed public globally.

In case you’re on the fence, perhaps it’s worth recalling how social media directly contributed to the widespread belief that the safe and effective COVID vaccines were actually an attempt by the government to control the population, leading to the harassment of medical professionals and probably at least hundreds of thousands of preventable deaths. Or look at the widespread and false belief that the 2020 election was rigged. Or consider the genocide of the Rohingya in Myanmar. Social media companies aided each of these tragedies by amplifying the messages (through likes, shares, reposts, hearts, emoji reactions, and the like) and by increasing the speed and breadth of their spread. Our government was largely powerless to stop it because courts decided to grant algorithms First Amendment protections even when humans had virtually no involvement in the algorithms’ actions and no understanding of why any particular piece of content was displayed the way it was. An algorithm with no awareness of context, and with limited capability even to translate the content it promoted, was set in motion to optimize for engagement and was largely untouchable regardless of the consequences. Society picked up the tab for any destruction it wrought, not the social media companies.

Harvard law professor Lawrence Lessig authored a paper in 2021 arguing that when software can act with so little human involvement, it makes little sense to extend the full shield of the First Amendment to algorithms, precisely because doing so can undermine the US Constitution.

A year later, ChatGPT came out, allowing people to generate content that appeared human-created at a speed and scale previously inaccessible to the masses. Recall that COVID-19, the 2020 election, and the massacre of Rohingya Muslims all occurred prior to GenAI’s big debut.

Now, the world is scrambling to understand how to mitigate risks from GenAI. A huge hurdle most legislation would face is overcoming free speech arguments. If GenAI outputs are speech, and are therefore entitled to free speech protections, then regulations and legislation would likely have to survive a court’s intermediate or strict scrutiny, which is notoriously difficult in a country that values (human) speech. Efforts to mitigate harms around privacy, damage to critical infrastructure, harms to artists and authors, non-consensual deepfakes, medical misinformation, and so on would all likely face First Amendment challenges.

To be sure, some regulations and laws could be overburdensome, unjust, or unfair. In such cases, courts should strike them down. For instance, a law saying chatbots can say only wonderful things about one political party and only terrible things about another should be struck down. Importantly, courts don’t need the First Amendment to prevent such laws. The First Amendment is a protection from the government; it’s not what gives the courts the power to act judiciously. Any argument that society must allow an absolute free-for-all or risk dystopian and draconian speech prohibitions is nonsense.

These are just a handful of reasons why settling the First Amendment issue early is vitally important. It may be the single biggest legal hurdle any new regulations must overcome, and taking the unprecedented step of extending free speech protections to non-humans and non-speech could have consequences that reverberate throughout the country, the world, and history.

To learn more, please read and share our paper: Intentionally Unintentional: GenAI Exceptionalism and the First Amendment


[1] Perhaps not coincidentally, Lemley and Volokh have been paid by some of the largest AI corporations in the world to make legal arguments in favor of the companies.

The abstract from our paper:

This paper challenges the assumption that courts should grant outputs from large generative AI models, such as GPT-4 and Gemini, First Amendment protections. We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent, so there can be no speech to protect. Furthermore, if the model outputs are not speech, users cannot claim a First Amendment right to receive the outputs. We also argue that extending First Amendment rights to AI models would not serve the fundamental purposes of free speech, such as promoting a marketplace of ideas, facilitating self-governance, or fostering self-expression. In fact, granting First Amendment protections to AI models would be detrimental to society because it would hinder the government’s ability to regulate these powerful technologies effectively, potentially leading to the unchecked spread of misinformation and other harms.
