What Are the Legal Challenges in Harnessing Generative Artificial Intelligence?

Generative artificial intelligence, with pioneers like ChatGPT at the helm, is reshaping our daily lives and work dynamics. As this AI revolution surges, exploring its legal challenges has never been more crucial.

The growth of generative artificial intelligence and its legal issues

In recent months, everyone has been talking about GPT-4, created by OpenAI, and its application ChatGPT, which can generate natural-sounding text, enabling the development of more realistic chatbots and other language-based AI applications.

GPT-4 relies on so-called generative artificial intelligence (AI), a type of artificial intelligence that can generate new content, such as images, videos, and text, without direct human input; large language models and Generative Adversarial Networks (GANs) are among the best-known techniques in this family. Generative AI works by training on a large dataset of examples and learning to create new, similar examples that differ from the ones it has seen before. It can be used in any field to create content, to improve the efficiency of businesses and industries such as healthcare, finance, and manufacturing, and to create synthetic data to train other AI models and improve their performance.

From the intellectual property rights of AI-generated works to the liability of generative AI, several legal issues need to be navigated. But with these challenges comes a world of possibilities: revolutionizing industries and improving the efficiency of businesses. The key is to strike a balance between embracing the potential of generative AI and addressing the legal issues that arise.

This article offers some snippets of the potential legal issues of generative AI, which will be addressed in more detail in future articles.

Intellectual property rights of AI-generated works: Who owns the rights created by artificial intelligence systems?

One of the crucial legal issues surrounding generative AI is the ownership of intellectual property rights. The ability of AI systems to create new works without human input raises questions about who owns the rights to these creations and how to protect them from infringement. Some argue that AI-generated works should be considered “orphan works” and not subject to copyright protection, since no human creator can be identified. Conversely, in a few jurisdictions there have been court cases holding that an AI could be named as the inventor under patent law.

This scenario raises concerns about the lack of protection for such works and the potential for their misuse. One potential solution is to create a new category of intellectual property rights specifically for AI-generated works. But that would add a further layer of intellectual property rights that needs to be attributed to someone and might limit the exploitation of artificial intelligence.

The same applies to synthetic data, which is artificially generated by AI systems rather than collected from real-world sources. For example, if an AI system generates synthetic images based on real-world images, who owns the rights to the synthetic images? This raises questions about the rights of the creators of the original data, as well as the rights of the creators of the synthetic data. And the matter is already leading to disputes.

On this topic, you may find the article “Can generative artificial intelligence rely on the copyright text and data mining (TDM) exception for its training?“ interesting.

Legal liability of artificial intelligence systems: Who is responsible when AI-generated decisions are wrong?

As AI systems become more advanced, they will increasingly make decisions with significant consequences, such as diagnosing medical conditions, approving loans, and even driving our vehicles. Some regulatory restrictions already apply: in the healthcare sector, for instance, an AI system is not “licensed” to provide medical treatment and can therefore only support practitioners. Likewise, road traffic regulations are framed to ensure that drivers always remain in control of their vehicles, even though some accidents in recent years show that drivers sometimes abuse the potential of self-driving cars.

Privacy regulations also provide that individuals have the right not to be subject to automated decisions and, in any case, to seek the review of such decisions by a human being, a right that has been amplified in Italy for AI systems used to monitor workers by the so-called Transparency Decree.

When mistakes occur due to errors in the AI system, questions arise about who is responsible. Some argue that the creators of the AI system should be held liable, while others suggest that the AI system itself should be held accountable.

A specific liability regime for artificial intelligence is pivotal to fostering its exploitation. Consequently, the European Commission is tackling the matter with a proposal for an EU directive. The question is whether such proposals strike the right balance between making companies accountable for mistakes caused by artificial intelligence and avoiding burdensome obligations that might hinder the growth of AI.

On this topic, you may find the article “What civil liability regime for artificial intelligence?“ interesting.

Transparency in AI systems: how to limit potential biases in AI models?

As generative AI systems become more advanced, they become harder to understand and explain, making it difficult to determine why a particular decision was made. This can make it challenging to hold anyone accountable if something goes wrong. Likewise, bias can be incorporated into AI models, leading to unfair treatment of certain groups. This can occur when the data used to train the AI model is itself biased, leading the AI system to make decisions that discriminate against specific individuals or groups.

Biases in generative AI can lead to legal and ethical issues because of the potential risk of discrimination.

The European Commission is addressing this aspect of artificial intelligence with the AI Act, which has now been adopted by the European Council and is meant to introduce different levels of certification and regulatory regimes for different types of AI, depending on the potential risks they pose.

On this topic, you may find the article “Artificial intelligence: the risk of a potential bias and the legal issues of potential discriminations“ interesting.

Privacy and cybersecurity issues of generative AI: How to protect individuals?

Given the ability of AI systems to generate realistic images and videos, there are concerns about how this technology could be used to invade individuals’ privacy and security. Additionally, the use of generative AI in deepfake content, which can be used to create fake videos and images, raises further concerns about trust in information and media.

Deepfake content created with generative artificial intelligence can be used to spread misinformation and to harass or blackmail individuals, leading to major legal issues. The technology can also create realistic images of people for malicious purposes such as catfishing and identity theft. Additionally, the use of generative AI to create synthetic data for training other AI models raises concerns about the privacy of the individuals who may be represented in that data.

As a further point of concern, generative AI systems can be used to identify bugs in code, facilitating potential cyber-attacks, including ransomware attacks. Likewise, generative AI can reproduce the voice and/or image of CEOs and other C-level executives to perform so-called fake CEO cyber-attacks.

On this topic, you may find the article “The Italian case on ChatGPT benchmarks generative AI’s privacy compliance“ interesting.

We will undoubtedly discuss the legal issues of generative artificial intelligence much more in the future. In the meantime, please read the blog posts at the links above and follow us for other articles on the topic.

On a similar issue, you can read the article “What the vote of the EU Parliament on the AI Act means?“.




