IP for AI: A Tragedy of the Commons?

Current intellectual property law is surprisingly inimical to AI-generated innovation. This posture is likely to become untenable in the near future.

I. The State of the Law

A. Patent Law

In the 2022 case of Thaler v. Vidal, the Court of Appeals for the Federal Circuit (the specialist court that hears most appellate patent cases) expressly held that an AI-generated invention was ineligible for patent protection, as only a human being can qualify as an “inventor” under patent law. In February of 2024, the Patent and Trademark Office (“USPTO”) issued a notice entitled “Inventorship Guidance for AI-Assisted Inventions” (the “Notice”) that further elucidates the Thaler decision. The Notice takes pains to clarify that the mere fact that a human inventor uses AI as a tool does not, in and of itself, render the invention unpatentable, as long as the human’s contribution still rises to the level of inventorship. But the Notice also emphasizes that the human must truly contribute significant inventive inputs:

Merely recognizing a problem or having a general goal or research plan to pursue does not rise to the level of conception. A natural person who only presents a problem to an AI system may not be a proper inventor or joint inventor of an invention identified from the output of the AI system ... A natural person who merely recognizes and appreciates the output of an AI system as an invention, particularly when the properties and utility of the output are apparent to those of ordinary skill, is not necessarily an inventor ... Maintaining “intellectual domination” over an AI system does not, on its own, make a person an inventor of any inventions created through the use of the AI system. Therefore, a person simply owning or overseeing an AI system that is used in the creation of an invention, without providing a significant contribution to the conception of the invention, does not make that person an inventor.

So, an invention developed by a human who is incidentally using AI as a tool (the way he might use, say, an electron microscope), but who independently conceives of the invention, is eligible for patent protection. But an invention conceived by the AI itself is not eligible for such protection.


B. Copyright Law

The current treatment of copyrights is even more hostile to AI creators than that of patents: Whereas a human inventor can use AI as a tool, as long as he still makes a substantial inventive contribution, a human writer or artist who uses AI, even as a tool, must disclaim all portions of his ultimate work that were generated by AI. For example, an author who created a graphic novel using Midjourney was granted copyright only in the text and layout of the story, not the images themselves, despite evidence that a large portion of the author’s time had been devoted to refining Midjourney prompts to produce exactly the images she wanted. The Copyright Office memorialized this anti-AI stance in a Statement of Policy published in early 2023.

While graphic novels may be of limited interest to the readership of this post, copyright is the principal form of IP protection for software. As more software engineers use coding assistants (Copilot, Claude, etc.), it is likely that broad swaths of corporate software could become ineligible for copyright protection. This is a harsher outcome than in the patent context, where a human who uses various forms of AI support would still be eligible for patent protection, as long as the human provides significant inventive contributions.

But although patent and copyright differ in how they treat the usage of AI as a tool, both regimes seem to be largely blind to the possibility that AI might ultimately transcend the role of tool, and become a first-class creator in its own right. And it is this blindspot that is likely to become particularly problematic in the relatively near future.

II. The State of the Art

A. Studies

Due to the novelty of the technology, there have been relatively few high-quality empirical studies of AI usage in the research setting. But those that have been published are somewhat startling.

One recent study followed 1,018 materials scientists in the R&D lab of a large U.S. firm over four years, during the introduction of a deep learning model trained to propose promising molecular and structural configurations for novel materials. The study found that AI-assisted scientists discovered 44% more materials, and that “these compounds possess superior properties, revealing that the model also improves quality”. The AI model automated 57% of the idea-generation phase of the research, with the human scientists being largely relegated to assessing and testing candidate compounds. Perhaps unsurprisingly, “82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization.”

In an unrelated field, another recent paper examines the ability of AI to perform sophisticated medical diagnoses. This study had three arms:

i. Doctors with no AI support

ii. AI with no support from doctors

iii. Doctor and AI working as a team

The study authors initially hypothesized that the hybrid doctor/AI team would outperform either working alone. Instead, the AI-only arm decisively outperformed both arms with doctors in the loop: While the doctor-only arm correctly diagnosed 74% of the cases, and the hybrid arm 76%, the AI-only arm had a diagnostic accuracy of 95%. The inferior performance of the hybrid team was apparently due to the doctors over-ruling the AI in those cases where the doctors were mistaken, while the AI had proposed the correct diagnosis.

Given the lag in publication time, both of these studies used AI models that are now several generations behind the frontier (a custom Graph Neural Network trained over 3 years ago and GPT-4, respectively). To get a sense of the current state of the art, we can look to formal benchmarks, which tend to be reported quite promptly.

B. Benchmarks

Towards the end of 2024, OpenAI announced its o3 model, which arguably represents the current state of the art in LLMs. While o3 has not been extensively tested yet by third parties, OpenAI’s self-reported performance on various benchmarks was quite impressive:

• GPQA: This benchmark uses Google-proof questions to test PhD-level knowledge of biology, physics, and chemistry. o3 achieved an overall score of 88%; this compares to human PhDs, who average 70% when tested only in their respective fields.

• SWE-bench: o3 achieved 72% accuracy on this collection of real-world software engineering tasks.

• Codeforces: This is a chess-style Elo rating of competitive coders. o3 currently has an Elo score of 2727, placing it among the top 175 human competitive coders in the world.

III. The State of Play

As AI systems begin to generate an economically meaningful share of new inventions and works of authorship (collectively “inventions”, for ease of reference), the absence of a robust private property regime to protect these inventions is likely to create a tragedy of the commons: Firms will under-invest, both in creating new inventions and in developing and building out the inventions once created. The most innovative firms will divert otherwise productive resources to MacGyvering awkward work-arounds for the blindspots in IP law, and capital will flow to less innovative firms that are willing to misappropriate the work of others.

If current rates of progress in AI persist, this tragedy of the commons will impose unprecedented stresses on the legal system. Historically, the frontier labs have required roughly two years to improve their models by one order of magnitude (“OOM”) (although the rate of improvement may be accelerating – it’s difficult to tell at this stage). It is entirely possible that the portion of inventions created by AI, as a share of the total innovation economy, will shift from 0% (where it is currently, and will probably remain for at least another 1-2 years) to 50% or more in as little as 3-5 years after the first truly autonomous AI inventors hit the market. While five years is enough time for 2 OOMs of progress in AI development, it is barely enough time to try a single patent case at the District Court level. And any appeal to the Circuit Court – the level at which new law is developed – could easily require another 12-18 months after that. And the Circuit Courts rarely get it right on the first try.
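The arithmetic behind this mismatch can be made concrete with a back-of-the-envelope sketch. This assumes the roughly-two-years-per-OOM rate cited above and treats the litigation durations as rough midpoints; the function name and figures are illustrative, not drawn from any dataset:

```python
# Back-of-the-envelope sketch of the litigation-vs-AI "impedance mismatch".
# Assumptions (hedged):
#   - frontier models improve ~1 order of magnitude (OOM) every ~2 years
#   - a district-court patent trial takes ~5 years (per the text above)
#   - an appeal adds ~12-18 months; we use the 1.5-year midpoint

YEARS_PER_OOM = 2.0  # rough historical rate cited above


def ooms_elapsed(years: float, years_per_oom: float = YEARS_PER_OOM) -> float:
    """Orders of magnitude of AI improvement accrued over a span of years."""
    return years / years_per_oom


district_court_years = 5.0  # time to try a single patent case
appeal_years = 1.5          # midpoint of the 12-18 month appellate estimate

print(f"During the trial alone: {ooms_elapsed(district_court_years):.2f} OOMs")
print(f"Trial plus one appeal:  {ooms_elapsed(district_court_years + appeal_years):.2f} OOMs")
```

In other words, by the time the Circuit Court hands down its first attempt at new law, the underlying technology has improved by two to three orders of magnitude relative to the technology that triggered the dispute.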

So we have an impedance mismatch: The lag time from the moment the common law system gets its first signal that it needs to adapt to a new environment, to the earliest possible adaptation, is long enough to allow several OOMs of improvement in AI technology. Theoretically, Congress can make statutory changes faster than the common law can react, but given Congress’s recent behavior, I would characterize that as a strictly theoretical outcome. Finally, federal agencies – in this case, the USPTO or the Copyright Office – have some discretion to tweak the law at the margins. But any policy changes in this area could have huge societal impacts: Who knows, perhaps the requirement for a human-in-the-loop will be one of the last barriers to mass unemployment of white-collar workers? And given recent Supreme Court hostility toward agency discretion, it seems highly unlikely that either the USPTO or the Copyright Office will be willing to take the lead on such consequential policymaking.

IV. The State of the Future

So how will these dynamics ultimately play out? On the one hand, IP law is likely to adapt, eventually, to developments in AI: Once a non-trivial fraction of innovation is being driven by non-human actors, it simply won’t be tenable to restrict intellectual property rights to humans. On the other hand, this adaptation is almost certain to lag the actual rate of AI innovation, so we’re in for an interregnum where things could get pretty weird before they ultimately sort themselves out.

Buckle up.



Jonathan Bain, Partner

Marlborough Street Partners


About Marlborough Street Partners

Marlborough Street Partners is not a consulting firm. We are a team of senior operating executives that works with venture and PE firms on behalf of portfolio companies, and directly with senior management teams, to address the Inflection Points they face – from strategic challenges to operational dysfunction to capitalization issues. Our blend of fresh perspective and long experience Turns Inflection Points into Breakthroughs. www.MarlboroughST.com
