Legal Issues Gen AI: Creation Tool or Generic Output
Mariia Shcherbakova
AI and IPR | Event Support Consultant @ ITU | MS in AI | Master of Law | Alumni of WIPO-UNIGE S’21
Artificial Intelligence (AI) is not just a buzzword today, but a fundamental tool that is revolutionising almost every aspect of our daily lives and work. From simple tasks like filtering email spam to complex tasks like diagnosing diseases and autonomous driving, AI is showing the potential to completely change the way we interact with the world around us.
(Skip if you are familiar with the Russian doll-like structure from AI to LLMs)
The first steps in AI were made with expert systems - programmes capable of simulating decisions and performing tasks that would otherwise require human intervention (Chuvikov D. A., 2017). Since then, AI has undergone huge changes, especially with the development of neural networks, which mimic the human brain and are trained to recognise patterns and perform tasks without being explicitly programmed for each one.
Neural networks are algorithms inspired by the structure and function of the biological neural networks of the human brain. They are a system of interconnected nodes, or "neurons", that work together to process information, similar to the way neurons in the brain communicate through synapses (Ksenofontov V. V., 2020).
Each artificial neuron in the network receives input data, performs some processing and passes its output to the next neuron. The strength of each connection between neurons, its 'weight', is adjusted during training, either through 'supervised learning', where the network learns from provided examples, or through 'unsupervised learning', where the network finds structure in the data on its own.
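To make the weighted-sum picture concrete, here is a minimal sketch of a single artificial neuron in Python; the three inputs, the hand-picked weights and bias, and the sigmoid activation are illustrative assumptions rather than any particular system's design.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs plus a bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example call: three inputs with hand-picked weights. In a real network
# these weights are exactly what the training process adjusts.
print(neuron([0.5, -1.0, 2.0], weights=[0.4, 0.7, -0.2], bias=0.1))
```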
Neural networks are capable of learning and can adapt to new data without reprogramming. This is achieved through a process known as 'error backpropagation', in which the system adjusts its weights based on the errors in its predictions.
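As a hedged sketch of this weight-adjustment idea, the following Python trains the single neuron above by gradient descent on a toy task (learning logical AND); the learning rate, epoch count and squared-error loss are assumptions chosen for illustration, and a full network would apply the same chain-rule update layer by layer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, weights, bias, lr=0.5, epochs=5000):
    """Adjust one sigmoid neuron's weights from its prediction errors,
    the same principle backpropagation applies across many layers."""
    for _ in range(epochs):
        for inputs, target in samples:
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            out = sigmoid(z)
            # Error signal: slope of the squared error times slope of the sigmoid.
            delta = (out - target) * out * (1.0 - out)
            # Nudge each weight against the error, in proportion to its input.
            weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
            bias -= lr * delta
    return weights, bias

# Toy training set: the logical AND of two inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data, weights=[0.0, 0.0], bias=0.0)
print(round(sigmoid(w[0] * 1 + w[1] * 1 + b), 2))  # close to 1 for input (1, 1)
```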
One of the key advantages of neural networks is their ability to recognise complex patterns and perform 'deep learning', where they use multi-layer architectures to explore data at different levels of abstraction. This allows neural networks to identify complex patterns in large amounts of data, making them particularly useful in areas such as image recognition, natural language processing and predictive analytics (Galanov, Selyukova, 2019).
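The sketch below is a minimal, hedged illustration of such a multi-layer architecture: two fully connected layers in numpy, with layer sizes and random weights chosen arbitrarily, purely to show how each layer re-represents the previous layer's output at a new level of abstraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One fully connected layer: a linear map followed by a ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Arbitrary sizes for illustration: 4 input features -> 8 hidden units -> 3 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))   # one example with 4 raw features
hidden = layer(x, w1, b1)     # first layer: an intermediate representation
output = hidden @ w2 + b2     # second layer: combines the hidden features
print(output.shape)           # (1, 3)
```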
Generative Artificial Intelligence (AI) and large language models such as OpenAI's GPT-4 have brought a real revolution to business and professional fields, transforming traditional approaches to work and process management.
Generative AI refers to a type of algorithm that can create new content - text, images, music, code and even design suggestions - based on learning from large datasets. These systems use an understanding of context and structures to synthesise new material that did not previously exist, often with a level of persuasiveness and creativity comparable to what a human can create (Konstantinova L. V., 2023).
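Real generative models are vastly more capable than this, but a toy character-level Markov chain already illustrates the core idea of learning a distribution from data and then sampling genuinely new material from it; the miniature corpus and order-2 contexts below are purely illustrative assumptions.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Record which character tends to follow each length-`order` context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, order=2, length=60):
    """Sample new text one character at a time from the learned counts."""
    out = seed
    for _ in range(length):
        candidates = model.get(out[-order:])
        if not candidates:
            break
        out += random.choice(candidates)
    return out

corpus = "generative models learn patterns from data and generate new text from those patterns "
model = train(corpus)
print(generate(model, "ge"))
```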
Large language models, such as OpenAI's GPT-4 and Google's Gemini, are powerful tools for natural language processing and generation. They can write articles, answer questions, generate programming code, translate texts, and perform many other language tasks. Trained on giant text corpora, these models understand and generate language by analysing the context and nuances of human content.
(until here)
These technologies offer businesses the tools to significantly improve and enhance workflows, increase efficiency and innovation, but it is also important to consider the potential ethical and legal issues they may raise.
The emergence of generative artificial intelligence systems has raised a host of legal issues, particularly with respect to the data they consume and the results they produce. As these systems learn to create content that reflects human creativity, they often rely on vast amounts of data that may contain proprietary, personal or confidential information. This reliance poses serious challenges in terms of intellectual property rights, privacy laws and data protection.
Generative AI & Copyright
Current legal frameworks have not been designed with AI in mind, leaving a grey area of ownership and accountability. Moreover, AI's ability to replicate and distribute data-derived content at scale increases the potential for legal violations, making it necessary to reconsider the boundaries of use and liability. As stakeholders address these challenges, the call for a new legal framework that balances innovation with the protection of individual and corporate rights is growing louder.
Legal Cases
There are serious legal challenges to the development of generative AI systems. For example, in early 2023, Getty Images sued Stability AI for training without consent on millions of its images (Korn J., 2023). Artists and illustrators have raised similar claims against the same company. In the summer of 2023, it was discovered that around 190,000 works, including 170,000 books, had been used in the Books3 dataset to train artificial intelligence models without permission (Heath N., 2023). A few months ago, writers Sarah Silverman, Richard Kadrey and Christopher Golden filed a lawsuit in California alleging that Meta had violated copyright law by using their works to train the LLaMA AI model (Devis W., 2023). Hollywood went on strike last summer, fighting against, among other things, AI intrusion into its members' work. Other organisations have taken a similar line: the Grammys, siding with the position of the federal courts, decided that only 'human' musicians are eligible for awards (Frank J., 2023). New cases appear every month.
These challenges highlight the need for clear rules and standards governing the use of data to train generative AI systems to ensure compliance with intellectual property, privacy and data protection laws.
The study of the legal aspects of the use of AI for content creation is a new and dynamic field in jurisprudence. On the one hand, legal scholars seek to determine how traditional intellectual property concepts apply to works created using AI and how these rights can be protected or transferred. On the other hand, there is a need to adapt or completely rewrite existing law to reflect the unique aspects of such content (Filipova I. A., 2022).
Legal Aspects
The study of the legal aspects of content creation using AI, and of its protection as an object of copyright, touches upon several key topics: the originality of the output, the attribution of authorship, the presence of an individual creative contribution, and the degree of human involvement in the generative process.
These issues emphasise the complexity and multidimensionality of the problem of protection of the results of intellectual activity created with the use of AI and require a comprehensive approach in their solution.
The example of Kristina Kashtanova's Midjourney-assisted comic book Zarya of the Dawn can be cited in this context. The U.S. Copyright Office cancelled its earlier registration of the images in the comic, which were created using the Midjourney image generator (Lawler R., 2023). Although Kashtanova is recognised as the author of the text and the arrangement of the work's elements, the images are not considered the product of human authorship. This decision contrasts with the approach taken to photography, where the photographer's creative choices are protected, and points to an emerging legal distinction between AI-generated works and photographs or other human-created illustrations made with new technologies.
The first requirement for output to be recognised as protected subject matter is originality. This criterion traditionally requires the presence of a "share of creativity" and a material form of expression. In the context of works created by AI, the question arises as to who, or what, is the author. The law generally recognises human authors, so the challenge is to determine whether the result of the AI's work reflects sufficient human intervention to be considered a creative work.
Second, the question of whether content created by AI can reflect personality and creative individuality is crucial to legal protection. Individuality, expressed in the author's choices and creative flair, is the cornerstone of copyright. In the case of AI, the creative process is different in that it is driven by algorithms and may lack a "human touch". However, if the human operator makes significant creative choices, such as selecting topics, inputting data, and directing the AI, it could be argued that AI is merely a tool used to express the personality of the human operator (Zolkhoeva, 2023).
The use of such tools opens up new frontiers of creativity. "The Frost", a short film created by Waymark, is one of the first films to be generated entirely by artificial intelligence (Heaven W. D., 2023). Each frame was created using OpenAI's DALL-E 2 image generation model, and the AI tool D-ID was used to add motion and animate static scenes. This approach not only opens new frontiers in film production, offering an innovative and cost-effective method of content creation, but also raises legal questions, since different users could generate the same or very similar images from similar prompts. Beyond whether the film is a complex work and whether the AI user is the rightholder of the audiovisual work, it is necessary to analyse the degree of human involvement in the creative process. If humans made significant creative decisions in the making of the film, such as writing the script, directing scenes or editing, then, despite the involvement of AI, the human contribution may be sufficient for copyright protection.
Generative AI & Inventions
In the emerging field of artificial intelligence, tools for creating new inventions are changing traditional approaches to research and development. AI is not just an assistant but a collaborator in the inventive process, capable of analysing vast amounts of data to identify patterns and opportunities that may not be accessible to humans.
Use Cases
One of the most prominent examples of AI's capabilities is drug development. AI platforms like the one used at Insilico Medicine can predict the biological activity of compounds and model disease pathways to help identify potential new drugs. Insilico Medicine's development of the compound INS018-055 demonstrates the ability of AI to generate hypotheses for new molecules with therapeutic potential, significantly accelerating the pace of drug discovery (Insilico).
AI's involvement is not limited to hypothesis generation. Tools like AI-Descartes bring a new dimension to problem solving. Named after philosopher René Descartes, a proponent of the scientific method, AI-Descartes embodies a systematic approach to invention. It can review scientific literature, identify gaps in current knowledge and technology, suggest experimental designs, and even anticipate potential problems during the design phase.
In the broader context of invention, AI tools are being used in industries ranging from materials science to environmental technology. They are being used to design new alloys, create more efficient solar cells and develop sustainable manufacturing processes. By integrating AI into the inventive process, companies and researchers can move from incremental improvements to disruptive innovations (Morhat P. M., 2020).
Patent Issues
The issue of patent ownership in the context of inventions created by artificial intelligence is a complex and evolving legal problem. The essence of the patent system is to recognise the inventor, traditionally considered to be a human being. However, with the increasing role of artificial intelligence in the process of creating inventions, the boundaries between human and machine contributions are blurring.
In many jurisdictions, the patentee is the person or entity that employed the inventor or to whom the inventor has assigned the rights. If the AI plays an important role in the development of the invention, the exclusive right can logically pass to the person who supervised the AI, or to the organisation responsible for the AI's operation, provided they can prove a tangible contribution to the inventive process.
However, this interpretation is not without controversy. Some argue that if AI autonomously generates inventions, recognising a person as an inventor solely on the basis of owning or operating the AI may call into question the principle that patents reward a person's inventive contribution. The European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) have recently addressed this issue, and both offices have taken the position that only a natural person can be named as the inventor.
Where AI plays a significant role in the process of making an invention, the patent application may need to articulate the human inventor's contribution in order to distinguish it from the computational processes of the AI. This may mean detailing the steps in solving a problem, formulating a hypothesis, or selecting variables and parameters that led to the invention.
Whether a human is the author of the work will depend on the level of human involvement in its creation: does the human use the software as technical assistance in achieving a specified goal, or does the human merely supply training data and outline the field of activity, while the artificial intelligence, following its own algorithm, creates the work?
Legal Background
So, when we talk about the creation of works with the help of artificial intelligence, it may seem obvious that artificial intelligence, as a special kind of computer programme, is an instrument and that the author is a human being. Can artificial intelligence be compared to a guitar on which a person writes music, a brush with which a person paints a picture, or a camera with which a person creates an artistic photograph?
For example, in 1884, in Burrow-Giles Lithographic Co. v. Sarony, the U.S. Supreme Court recognised copyright protection for a photograph for the first time (Supreme Court, 1884); the camera served as an auxiliary tool in the creation of the work. But when it comes to works created by artificial intelligence, things are not so clear-cut.
So who can be an author under the current legislation?
The Berne Convention for the Protection of Literary and Artistic Works (hereinafter referred to as the Berne Convention) and the Universal Copyright Convention do not state directly that only a natural person can be an author, but this can be inferred by interpreting their provisions; moreover, neither instrument defines the concept or criteria of creative work (Universal Copyright Convention, 1971).
Under Article 1228 of the Civil Code of the Russian Federation, the author of an object of intellectual property rights is the citizen whose creative labour created the work (Civil Code of the Russian Federation (Part Four), 2006).
According to the U.S. Code, a legal entity may also be treated as the author of a work: in the case of a work made for hire, the employer is considered the author and owns the rights (17 U.S. Code § 201).
As can be seen, in the continental system of law, to which Russia belongs, only a natural person can be recognised as an author (while the Anglo-Saxon system also allows a legal entity to be recognised as the author of a work made for hire). But in the case we are considering, we cannot simply say that the author is a human being, because another essential requirement for authorship is a creative contribution. When a work is created by artificial intelligence and the results of its operation are unpredictable, it is hard to say that the work was created by human creative labour. Moreover, even if we accept that the author in such cases is a natural person, we face the problem of identifying that person. Is it the developer of the artificial intelligence, the owner of the system, the user, or someone else?
There is also the question of whether artificial intelligence itself could be recognised as an author; answering it would require determining the legal status of artificial intelligence in general and deciding whether the output of an artificial intelligence system, given the nature examined above, can be called a creative work.
Legal Challenges
Given the ongoing legal debate, it is likely that legislative and judicial clarification regarding the role of AI in inventions will emerge in the future. Until then, the safest approach for organisations using AI to facilitate invention is to ensure significant human input into the inventive process, which is consistent with existing patent law that emphasises human inventiveness.
The integration of AI as a tool in various industries raises serious problems related to liability and to the under-regulation of how such systems are created and used. The main problem is that AI systems, although capable of performing tasks beyond human capabilities, do not have legal personality and therefore cannot be held liable in the traditional sense.
One of the main problems with AI liability arises in the context of malfunctions or errors that cause harm. For example, if a car driven by artificial intelligence is involved in an accident, it is unclear whether the designer, the manufacturer, the owner or the AI system itself should be held liable. The existing legal framework is ill-equipped to deal with such situations, as it generally requires the responsible party to be a person or entity capable of bearing legal liability and sanctions.
In addition, artificial intelligence systems can evolve and learn in ways that are unpredictable and not fully understood by their creators. This adaptive behaviour, a consequence of machine learning, complicates the allocation of responsibility. If an artificial intelligence system deviates from its original programming during the learning process and causes harm, determining who is at fault is a complex issue. Creators may argue that they could not have foreseen the autonomous development of the AI, while victims will seek redress for the harm caused.
Another challenge is the regulation of the creation and use of AI, especially with respect to the data on which these systems are trained. Big data is a critical component of AI development because machine learning algorithms require large data sets for effective learning. However, the use of big data is fraught with privacy concerns, intellectual property rights, and the possibility of biased or discriminatory results if the data is not representative or contains biased information.
Regulatory issues also arise from the global nature of AI development and deployment. AI systems can be developed in one country, trained on data from different jurisdictions and deployed around the world. This raises questions about which country's laws apply, how to ensure compliance with varying regulatory requirements, and how to protect against misuse of AI across borders.
In response to these challenges, some jurisdictions are considering creating a legal status for AI systems, which may facilitate the allocation of liability. However, this approach is controversial and raises philosophical questions about the nature of personality and liability. In addition, international cooperation and harmonisation of legislation are needed to effectively regulate the use of AI and protect individual rights while fostering innovation.
The debate about the liability and regulation of AI continues, with academics, policymakers and industry representatives exploring different models for addressing these issues. As AI takes root in society, the need for a clear and consistent legal framework to address these issues is likely to increase.
Conclusion
The exploration of artificial intelligence in the realm of content creation and its legal implications has unveiled a complex and dynamic landscape. This paper has underscored the transformative impact of AI technologies, such as neural networks and large language models, in reshaping traditional paradigms of creativity, authorship, and intellectual property.
While AI continues to redefine the boundaries of creativity and innovation, it is imperative for legal and ethical frameworks to evolve in tandem. Addressing the multifaceted challenges presented by AI in content creation and invention requires a holistic approach, encompassing legal, ethical, and societal dimensions. As AI's integration into various industries deepens, the call for a clear, consistent, and forward-thinking legal framework becomes increasingly vital. The journey towards achieving this balance between fostering innovation and protecting rights will be pivotal in shaping the future of AI in our society.