GenAI Questions Too Often Overlooked

Jacob Morrison and I wrote a relatively short law review article exploring the thorny gray areas of the law swirling around GenAI. The final version was published by the Journal of Law and Technology at Texas, a UT-Austin law school publication, last month. You can find the full article here.

The article is an excellent (is an author allowed to say that?) introduction to the topic of open legal questions. As far as I know, nobody else has collected all these open questions in a single place, so it’s a handy reference if you’re looking for lawsuit or research paper ideas. The point of the paper isn’t to say the issues currently litigated aren’t important (they are!); it’s to point out that we must consider several other equally important issues. Like, asap. The paper doesn’t even take a position on how the courts or society should decide any given question; we just believe the questions should be addressed earlier rather than later.

Anyhoo, the paper at issue identified dozens of open questions. Here’s a high-level overview. For more details and citations, please read the paper (it’s an easy breezy read).

As you can imagine, there is a lot of nuance missing when one tries to funnel a 14-page paper into a 2-page table, so please forgive me. The table is just meant to serve as an appetizer.

While plaintiffs have filed dozens of different claims (see my previous newsletter for the breakdown), few have even scratched the surface on most of the issues above. It would take only a few of those questions being decided in the plaintiffs’ favor to undermine most GenAI models. The laundering question is my personal favorite because its outcome could make or break entities like Hugging Face and the open-source movement more generally.

The point is that society has a lot of thinking to do about whether and how the law should adapt to accommodate technology that CEOs claim will perform as well as humans on most tasks in just a couple of years. Supposing it can (a humongous supposition, but the CEOs bring this on themselves by comparing GenAI to other incredibly consequential technological advances from the past, like the discovery of fire and the taming of electricity), doesn’t it feel a little weird to be like “eh, let’s just treat it like a book for tort law purposes” or “let’s not make it possible for systems with superintelligence, or their developers, to be criminally liable for their actions”?

(Also, as a side note: LLMs are as consequential as electricity? Seriously? Raise your hand if you think you’d experience the same adverse effects from just losing access to a chatbot tomorrow that you’d feel if you lost access to electricity. Nobody? That’s what I thought…)

FWIW, I’d argue that if CEOs think GenAI is comparable to fire, steam power, and electricity, all of which are regulated (or was I the only one blocked from installing an electric power plant in my backyard?), then they should absolutely be open to being at least as closely regulated as each of those technologies. But the devil is in the details. (And in Georgia, participating in a fiddle competition.)

Oren E.

HR Leader | [nice] Lawyer | Start-Ups | Data-Obsessed | I build cultures and systems that scale, leverage talent, and foster excellence

2 weeks ago

At this rate, it looks like all law is now unsettled.


More articles by David Atkinson

  • K-12 Education and GenAI Don’t Mix

    One of my least popular opinions is that the rush to cram GenAI into K-12 curricula is a bad idea. This post will lay…

  • GenAI Lawsuits: What You Need to Know (and some stuff you don’t)

    If you want to understand the legal risks of generative AI, you can’t go wrong by first understanding the ongoing…

  • GenAIuflecting

    Lately, a surprising number of people have asked my thoughts on the intersection of law and generative AI (GenAI)…

  • The Risks of Alternative Language Models

    There is something like "the enemy of my enemy is my friend" going on in the AI space, with people despising OpenAI…

  • The Surrender of Autonomy

Autonomy in the Age of AI: There are dozens, or, when atomized into their constituent parts, hundreds of risks posed by…

  • Humans and AI

Part 3 of our miniseries on how human contractors contribute to AI. Poor Working Conditions and Human Error: While tech…

  • AI and Its Human Annotators

Part 2 of our miniseries on the role of humans in creating AI. Pluralism in AI: Unlike most traditional AI, where you…

  • RLHF and Human Feedback

    Part 1 of our miniseries on RLHF and the role humans play in making AI. RLHF puts a friendly face on an alien…

  • Some Concluding Thoughts on GenAI and the Workforce

    This is Part 4 of our bite-sized series on GenAI and the workforce. The Reality: For Now, Human Labor Is Still More…

  • UBI? My, Oh, My

Part 3 of our bite-sized series on GenAI’s potential impact on the workforce. Economic Impact: If many people lose their…
