GenAI Questions Too Often Overlooked
David Atkinson
AI Legal Counsel | A.I. Ethics and Law | University Lecturer | Veteran
Jacob Morrison and I wrote a relatively short law review article exploring the thorny gray areas of the law swirling around GenAI. The final version was published last month by the Journal of Law and Technology at Texas, a UT-Austin law school publication. You can find the full article here.
The article is an excellent (is an author allowed to say that?) introduction to the topic of open legal questions. As far as I know, nobody else has collected all these open questions in a single place, so it’s a handy reference if you’re looking for lawsuit or research paper ideas. The point of the paper isn’t to say the issues currently being litigated aren’t important (they are!); it’s to point out that we must consider several other, equally important issues. Like, asap. The paper doesn’t even take a position on how the courts or society should decide any given question; we just believe the questions should be addressed sooner rather than later.
Anyhoo, the paper at issue identified dozens of open questions. Here’s a high-level overview. For more details and citations, please read the paper (it’s an easy breezy read).
As you can imagine, there is a lot of nuance missing when one tries to funnel a 14-page paper into a 2-page table, so please forgive me. The table is just meant to serve as an appetizer.
While plaintiffs have filed dozens of different claims (see my previous newsletter for the breakdown), few of those claims even scratch the surface of most of the issues above. It would take only a few of these questions being decided in the plaintiffs’ favor to undermine most GenAI models. The laundering question is my personal favorite because its outcome could make or break entities like Hugging Face and the open-source movement more generally.
The point is that society has a lot of thinking to do about whether and how the law should adapt to accommodate technology that CEOs claim will perform as well as humans on most tasks within just a couple of years. Supposing it can (a humongous supposition, but the CEOs bring this on themselves by comparing GenAI to other incredibly consequential technological advances from the past, like the discovery of fire and the taming of electricity), doesn’t it feel a little weird to be like “eh, let’s just treat it like a book for tort law purposes” or “let’s not make it possible for superintelligent systems, or their developers, to be criminally liable for their actions”?
(Also, as a side note: LLMs are as consequential as electricity? Seriously? Raise your hand if you think you’d experience the same adverse effects from losing access to a chatbot tomorrow that you’d feel from losing access to electricity. Nobody? That’s what I thought…)
FWIW, I’d argue that if CEOs think GenAI is comparable to fire, steam power, and electricity, all of which are regulated (or was I the only one blocked from installing an electric power plant in my backyard?), then they should absolutely be open to being at least as closely regulated as each of those technologies. But the devil is in the details. (And in Georgia, participating in a fiddle competition.)