AI & LAW
"AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which." Stephen Hawking

Welcome to the first edition of my newsletter on the intersection of artificial intelligence (AI) and law. As AI technology rapidly evolves, often performing tasks cheaper, faster, and better than people, its integration into various aspects of society presents both opportunities and challenges.

AI stands at the cusp of creating unprecedented economic value by reshaping the future of work.

This bi-weekly newsletter aims to provide insights into the legal ramifications of artificial intelligence and to foster a discussion of its governance and societal influence. It will also offer practical tips for addressing AI-related legal issues as they arise, which I hope general counsel will find useful.

Let's start by trying to define artificial intelligence.

Defining an artificially intelligent program is challenging. To explore the concept, let’s consider the meanings of “intelligence” and “artificial program.” The word “intelligence” derives from the Latin "inter" (between) and "legere" (to choose or read), suggesting the ability to differentiate, to draw distinctions between things, to understand, to grasp one's self-awareness, and to interpret our surroundings. The notion of an “artificial program” assumes that this human ability to understand, differentiate, and draw distinctions can be replicated or surpassed by computer programs that are as good as, or even better than, we are at these things.


The EU AI Act (Article 3(1)) defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."


These programs exhibit a broad range of abilities, from simple arithmetic to complex activities like strategic gameplay or simulating advanced medical procedures.

For example, in 2017, The New York Times reported that “the world’s best player of what might be humankind’s most complicated board game was defeated by a Google computer program.” DeepMind Technologies, a subsidiary of Google, developed the AlphaGo Master program, which achieved the seemingly impossible by defeating the world champion in Go, a milestone previously believed to be at least a decade away given the game's profound complexity.

While AlphaGo's victory was an impressive technical accomplishment, it did not have a direct impact on society. But it was a signal that AI has the potential to take over a wider variety of tasks, perhaps even sooner than we thought.

Let’s be clear: Google did not invest in AlphaGo just to conquer the world of board games. The real treasure is AI's ability to recognize patterns, which extends far beyond games. If AI can spot complex patterns in Go, it can also identify medical conditions in MRI scans or pick out pedestrians on the street.

In fact, in 2018, DeepMind's AlphaFold placed first among 98 contenders in a competition to predict protein folding, a crucial task for drug discovery.

Also in 2018, DeepMind's system correctly referred patients with over 50 different eye diseases for specialist care in 94% of cases, demonstrating a level of expertise comparable to that of experienced clinicians. The next year, DeepMind predicted the onset of acute kidney injury up to 48 hours earlier than human doctors. These results show how AI can enhance early diagnosis and improve patient outcomes.

AI is poised to transform many professional sectors. It has already revolutionized physical labor, as in automobile manufacturing, and it is now reaching into mental labor, automating tasks that were once exclusive to highly trained professionals like doctors, lawyers, and scientists.

IBM's AI, Watson, became known for its Jeopardy! win in 2011 and now works in various healthcare areas. Under the Watson brand, AI systems help analyze the genetic data of cancer patients to suggest appropriate drug treatments. According to a 2017 study, what used to take a team of experts around 160 hours can now be done by Watson in just 10 minutes.

Besides Watson, many companies now claim their AI systems can surpass human doctors in specific medical domains. This is not surprising, considering that machines can store and recall every piece of medical literature ever written and draw from an extensive pool of practical experience, all without needing to sleep or take a lunch break.

In 2017, a Chinese company announced that its AI robot, Xiao Yi, had passed the national medical licensing exam required to practice medicine in China. Xiao Yi was equipped with a vast knowledge base, including medical textbooks, millions of medical records, and a vast array of articles. In a similar vein, IBM reported that Watson informally passed the equivalent U.S. exam, although it was never permitted to take the test officially.

Having said all this, just because AI has passed some tests doesn't mean it's ready to wear the white coat. It might be great at diagnosing or even assisting in surgery, but it won't replace doctors completely. What it does show is that AI can handle parts of the job: in the future we may have fewer doctors, but they will be supercharged with AI help.

How does law relate to Artificially Intelligent Programs?

Our laws are traditionally crafted to govern human behavior, not machines. While it may seem reasonable to assume that AI will fit within existing legal structures, regulations tailored to human behavior can produce unexpected and negative consequences when applied to the actions of machines.

So far, the development of AI-specific legislation has been slow, in part due to concerns that strict regulations could inhibit innovation. Nonetheless, AI is already subject to a degree of regulation by existing laws that address privacy, security, and fair competition, established well before the emergence of modern AI technologies.

A legal framework for AI does not depend on the quantity of laws, but on their quality and relevance to the unique challenges of our time. In 1925, Benjamin N. Cardozo, who would later serve on the US Supreme Court, said that “the law should evolve and adapt to the changing needs of society.” This advice matters when it comes to the regulation of AI, which requires a legal system finely tuned to its particular demands. Encouragingly, the past few years have seen a real push to set up rules and best practices for AI.

Governments, think tanks, and industry are all working to make AI trustworthy and sustainable. The OECD adopted its AI principles in May 2019, and the G20 followed with its own human-centered AI principles in June 2019, building on the OECD's work. In 2021, the European Commission proposed the EU AI Act; the final text was published in the Official Journal on July 12, 2024, and it will enter into force on August 1, 2024. This year, the Council of Europe, with the participation of non-member states including Argentina, Australia, Canada, Costa Rica, Israel, Japan, Mexico, Peru, Uruguay, and the USA, as well as the European Union, adopted the Framework Convention on Artificial Intelligence. This shows we are making progress and building rules that keep pace with AI as it grows.

Three key legal aspects of AI that warrant attention

1. Bias and Discrimination in AI Programs

One major concern is the potential for AI to reflect the biases of its creators. While human biases are inevitable, AI can amplify them on a much larger scale, because bias can lurk in the training data, the algorithms, and the output of AI systems. Unless bias is addressed properly, certain groups may be unfairly excluded from the economy and society.

In the McKinsey article "What AI can and can’t do (yet) for your business," Michael Chui, James Manyika, and Mehdi Miremadi note, “Such biases have a tendency to stay embedded because recognizing them, and taking steps to address them, requires a deep mastery of data-science techniques, as well as a more meta-understanding of existing social forces, including data collection. In all, debiasing is proving to be among the most daunting obstacles, and certainly the most socially fraught, to date.”

For instance, biased AI systems in law enforcement could lead to systematic discrimination based on ethnicity. The legal community must remain vigilant about these risks and work towards mitigating them. Understanding the discriminatory potential of AI is crucial to ensuring fair and equitable outcomes.

According to IBM, avoiding bias requires "drilling down into datasets, machine learning algorithms and other elements of AI systems to identify sources of potential bias."
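
To make the idea of "drilling down" concrete, here is a minimal sketch in Python of one check an auditing team might run: comparing a model's selection rates across demographic groups and flagging gaps under the "four-fifths rule" familiar from US employment-discrimination analysis. The data, group labels, and threshold below are invented for illustration; a real audit involves far more than a single ratio test.

    # Hypothetical illustration: compare a model's selection rates across
    # demographic groups and flag disparities under the "four-fifths rule".
    from collections import defaultdict

    # Invented (group, model_decision) pairs; 1 = favorable outcome.
    decisions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome

    # Selection rate = share of favorable outcomes per group.
    rates = {g: positives[g] / totals[g] for g in totals}
    print("Selection rates:", rates)

    # Flag any group whose rate falls below 80% of the highest group's rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * highest:
            print(f"Potential disparate impact: {group} at {rate:.0%} vs {highest:.0%}")

Even a toy check like this makes the legal point: disparities can be surfaced systematically, so "we didn't know" becomes a harder defense.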

2. Accountability and Supervision of AI

As Kaler puts it: “We talk of people being ‘accountable’ or ‘answerable’ to other people and mean nothing more by it than that the people to whom there is accountability are in a position to inflict punishment on those being held accountable should they deem them guilty of misconduct […] what might be called a ‘coercive’ or rather ‘purely coercive’ variety that can operate quite independently of the informative and has, if anything, a better claim to the title of ‘accountability’” (Kaler 2002, 329).

Determining accountability for AI decisions is complex. Unlike traditional machines, the behavior of advanced AI programs can be unpredictable and difficult to trace. "AIs are neither mere artifacts nor traditional social systems: technological properties often make the outcome of AIs opaque and unpredictable, hindering the detection of causes and reasons for unintended outcomes" (Tsamados et al. 2022).

In fields like healthcare, AI algorithms might recommend treatments without a clear basis for their decisions. Legally, this raises questions about how to supervise AI programs and who should be held responsible when errors occur. Establishing clear guidelines for accountability is essential to navigate these challenges.
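
One practical step toward such guidelines is an audit trail: every AI recommendation is logged with its inputs, model version, and the human who reviewed it, so the causes of an unintended outcome can be traced after the fact. The Python sketch below is a minimal, hypothetical illustration; the function name, fields, and triage example are my own assumptions, not any vendor's API.

    # Hypothetical illustration: append an auditable record of each AI
    # recommendation so unintended outcomes can later be traced.
    import datetime
    import json

    def log_ai_decision(model_version, inputs, recommendation, reviewer=None):
        """Record one AI recommendation with its inputs and reviewer."""
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "recommendation": recommendation,
            "human_reviewer": reviewer,  # None marks an unreviewed decision
        }
        with open("ai_audit_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    # Invented example: a treatment-recommendation system.
    log_ai_decision(
        model_version="triage-model-2.3",
        inputs={"age": 54, "creatinine": 1.9},
        recommendation="nephrology referral",
        reviewer="dr_smith",
    )

A log like this does not explain why the model decided as it did, but it establishes who knew what and when, which is usually the first question a court or regulator will ask.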

3. IP Protection of AI

To keep things simple, let's divide intellectual property (IP) issues related to AI into two categories for now: input and output. In the near future, I will devote a separate newsletter to AI and IP.

  • Input:

When AI tools are prompted by users, they occasionally produce exact excerpts from copyrighted materials, such as books or articles. This raises the question of whether AI systems can legally ingest and reproduce data owned by others.

In a number of ongoing lawsuits, copyright holders are suing the owners of AI tools for infringement. Here are some examples.

NY Times v. OpenAI and Microsoft, 2023

The New York Times sued OpenAI and Microsoft on December 27, 2023 for copyright infringement, claiming that the two companies built their AI models by “copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more” and that those models now “directly compete” with its content as a result.

A group of eight US newspapers sued OpenAI and Microsoft, 2024

A group of eight American newspapers sued OpenAI and Microsoft, claiming that the tech companies have been unlawfully appropriating millions of copyrighted news pieces without authorization or compensation for the purpose of training their AI-driven chat services. “We’ve spent billions of dollars gathering information and reporting news at our publications, and we can’t allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense,” said the executive editor for the MediaNews Group and Tribune Publishing. The lawsuit alleges that Microsoft and OpenAI's AI systems reproduce the newspapers' copyrighted material "verbatim."

Authors Guild v. OpenAI and Microsoft, 2023

The Authors Guild organized a copyright action against OpenAI and Microsoft, seeking redress for the defendants' infringement of the plaintiffs' registered copyrights in written works of fiction. The plaintiffs alleged that the defendants "copied [the works] wholesale, without permission or consideration," then "fed Plaintiffs' copyrighted works into their 'large language models' or 'LLMs,' algorithms designed to output human-seeming text responses to users' prompts and queries."


These cases underscore the importance of ensuring AI developers have the appropriate licenses and permissions to use the data they input into their algorithms, in order to avoid potential legal disputes over copyright infringement.

  • Output:

Is IP protection available for AI-generated material? Several precedents have established that non-humans cannot be recognized as authors for purposes of IP protection.

Here are two examples:

Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018)

Naruto, a macaque residing in Indonesia, allegedly captured several self-portraits, including the famous "Monkey Selfie," using David Slater's camera around 2011. People for the Ethical Treatment of Animals, Inc. (PETA) filed a lawsuit against Mr. Slater on Naruto's behalf, contending that Naruto, not Mr. Slater, should be recognized as the author and owner of the selfies. The case raised the question of whether a non-human could sue to protect IP rights or authorship under the US Copyright Act. The court held that Naruto lacked statutory standing because the Copyright Act does not expressly authorize animals to file copyright infringement suits.


"Monkey steals the camera to snap himself"


Zarya of the Dawn (Registration No. VAu001480196), 2023

The comic book Zarya of the Dawn, published by Kris Kashtanova, was illustrated entirely with Midjourney, an AI image generator. Kashtanova applied to the US Copyright Office for copyright protection without mentioning Midjourney's role. Protection was initially granted, but the Copyright Office revoked it for the images after discovering the involvement of AI.


[Image: Cover of the 2022 comic book]


The Copyright Office stated that "the images [were not] the product of human authorship" and that copyrightable works require human authorship.


These cases highlight the ongoing legal complexities surrounding AI and IP.


Looking Ahead

I will try to explore various aspects of AI, from its technical foundations to its societal impacts, and provide insights into how our existing legal frameworks can adapt to these advancements.

In the next issue, I will talk about the hardware on which AI operates. Understanding the technical infrastructure supporting AI is essential for understanding its full implications and ensuring robust regulatory frameworks.

Thank you for joining me on this exploration of AI and law. Stay tuned for more in-depth analyses and discussions in my upcoming newsletters. Let's navigate this exciting and challenging landscape together.

Connect with me

I welcome your thoughts and feedback on this newsletter. Connect with me on LinkedIn to continue the conversation and stay updated on the latest developments in AI and law.

Disclaimer

The views and opinions expressed in this newsletter are solely my own and do not reflect the official policy or position of Cognizant. This newsletter is an independent publication and has no affiliation with Cognizant.

Kim McConville

Legal Counsel at Cognizant

4mo

Great update, Laura - look forward to reading these!

Charlene Brownlee

Director - Commercial Privacy Lead at Cognizant

4mo

Excellent initiative! Let me know if you need any guest contributors - I’m currently looking at managing service provider risk in updating MSA terms to address unique challenges in AI.

Ibrahim ElOraby

Senior Curation Performance Executive - Content Operations @ Shahid | Personalization | Data | AI

4mo

Subscribed, and ready for your input. If I can be a bit naive here: do you think it's as simple as feeding the machines the standard regulatory legislation that they must abide by?

Jason Holmers

eDiscoveryAI.com

4mo

Impressive

