WHEN AI MEETS LAW: THE LEGAL IMPLICATIONS OF THE USE OF AI SYSTEMS
– TIJESUNI IJITONA ESQ. & OLUWATOMISIN WAREKUROMO


Introduction

Since the launch of ChatGPT by OpenAI in 2022, artificial intelligence (AI) has become a subject of mainstream use, commentary, and criticism. From the use of Siri or Alexa on your smartphone to scrolling through social media pages, AI has significantly impacted, and continues to impact, our everyday lives in various forms. AI is not as novel as many people think, although it continually undergoes diverse stages of evolution.

Artificial intelligence refers to a machine's capacity to execute cognitive functions typically associated with human minds, such as providing solutions to problems. The term ‘Artificial Intelligence’ was coined by John McCarthy in 1955 and was defined by him as “the science and engineering of making intelligent machines”. The World Intellectual Property Organization has defined AI as “a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence, with limited or no human intervention.”

AI in the Legal Field

AI's prevalence in everyday life is consistently increasing, and though it may go unnoticed, it plays a major role in many aspects of human life, including communication, transportation, education, and many more.

Just as AI is revolutionizing various sectors, its impact is also profoundly felt in the legal field. From enhancing legal research and case analysis to streamlining contract review and automating mundane tasks, AI is reshaping how legal professionals deliver services and interact with their clients.

AI-powered legal research platforms have become essential tools for lawyers. By efficiently analyzing vast volumes of legal documents and case law, AI offers comprehensive insights, helping lawyers access critical precedents and make well-informed decisions.

In corporate law practice, AI tools can now be deployed to streamline contract review and due diligence processes. Through AI-driven contract review tools, lawyers can quickly scan and analyze contracts, highlighting potential risks and essential information with remarkable accuracy.

Moreover, AI is also being used for the automation of repetitive tasks, such as document generation and administrative duties, freeing up time for legal professionals to focus on higher-value legal work.

Additionally, AI's predictive capabilities aid lawyers in assessing case strengths and weaknesses, empowering them to offer strategic counsel to clients. By analyzing historical case data and legal outcomes, AI algorithms predict the likelihood of success in a given legal scenario.
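
For illustration, the sketch below shows, in the simplest possible terms, how such a prediction might be produced: a basic classifier is fitted to hypothetical historical case features and outcomes, and then asked for the probability of success in a new matter. The feature names, figures, and library choice (Python with scikit-learn) are assumptions made purely for illustration and do not describe any particular legal-tech product.

```python
# Minimal illustrative sketch only: estimating the likelihood of success in a
# legal matter from hypothetical historical case data. All features and figures
# below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past cases: [claim amount (millions), years of precedent,
# number of supporting authorities], and whether the claimant succeeded (1) or not (0).
X_train = np.array([
    [5.0, 12, 8],
    [0.5, 2, 1],
    [3.2, 7, 4],
    [1.1, 1, 0],
    [8.0, 15, 10],
    [0.8, 3, 2],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Estimate the probability of success for a new matter with similar features.
new_case = np.array([[2.5, 6, 3]])
print(f"Estimated probability of success: {model.predict_proba(new_case)[0, 1]:.2f}")
```

In practice, legal-analytics providers would use far richer features and models, but the underlying idea of estimating outcome probabilities from historical case data is the same.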

Legal Framework for the Regulation of AI in Nigeria

Section 18(2) of the 1999 Constitution of the Federal Republic of Nigeria (as altered) provides that the Government shall direct its policy towards the promotion of science and technology. Currently, there is no direct legislation in Nigeria specifically addressing AI. Nevertheless, the National Information Technology Development Agency (NITDA) has the responsibility under Section 6 of the NITDA Act 2007 to develop a framework and guidelines for the IT sector in Nigeria.

In compliance with its statutory mandate, NITDA has invited stakeholders and the public to contribute to the development of the National Artificial Intelligence Policy (NAIP). Interestingly, NITDA is also working with the National Centre for Artificial Intelligence and Robotics (NCAIR) for the promotion of research and development on emerging technologies and their practical application in areas of Nigerian national interest.

Implications of the Use of AI Systems in Nigeria

In the absence of an elaborate framework for the regulation of AI in Nigeria, the use of AI systems raises the following implications:

1. Data Privacy: AI relies heavily on data for training and operation, and the handling of this data may raise major privacy and security concerns. The collection, storage, processing, and control of personal information must necessarily comply with data protection regulations.

On 14 June 2023, President Bola Tinubu signed the Nigeria Data Protection Act 2023 (the "Act") into law; the objectives of the Act include safeguarding the rights and freedoms of data subjects as guaranteed by the Constitution. Privacy, in simple terms, is the right not to be observed. Key data privacy risks in the use of AI systems include, but are not limited to, unauthorized collection of data, data breaches, cross-domain data sharing, and lack of proper data storage. To mitigate these risks, the provisions of the Act become applicable and relevant.

The Act prohibits the unlawful processing of personal information, which consists of the personal data and sensitive personal data of natural persons. For the purposes of the Act, "personal data" means any information relating directly or indirectly to an identified or identifiable individual, by reference to an identifier such as a name, an identification number, location data, an online identifier, or one or more factors specific to the physical, physiological, genetic, psychological, cultural, social, or economic identity of that individual. The Act also defines "sensitive personal data" as personal data relating to an individual's genetic and biometric data, for the purpose of uniquely identifying a natural person; race or ethnic origin; religious or similar beliefs; health status; sex life; political opinions or affiliations; trade union memberships; and other information which may be prescribed by the Commission as sensitive personal data.

Therefore, businesses and organisations that collect or process data for AI training, or utilize AI systems in their operations, have a duty of care to reasonably protect such data within the ambit of the law.

2. Intellectual Property: AI-generated works, whether art, music, or written content, raise questions about ownership and copyright, and determining their legal status and protection under intellectual property laws can be complex. Similar questions arise for AI-generated inventions and innovations, particularly as to who owns, and who may protect, the resulting creations and patents.

In today's world of advanced technology, generative AI has far broader abilities than earlier, narrowly task-specific AI. For instance, AI can now generate creative works such as art, music, and written content. While a completely autonomous generative AI is yet to be created, we must nevertheless consider who owns the "intellectual" property rights in works created by AI.

Indeed, intellectual property laws were made for the protection of human creativity, and it can be said that lawmakers did not envisage that AI and machines would make inventions; hence, most jurisdictions do not have legislation that addresses the ownership of intellectual property rights in AI-generated works.

Another notable intellectual property implication of AI arises where generative AI creates a derivative work. Traditionally, the appropriate course would be to seek the consent of the owner of the original creative work before creating the derivative work. However, because many AI systems draw their training data from the near-limitless information readily available on the surface internet, this consent requirement is largely dispensed with. Recently, original owners have taken legal action against AI companies; in one of the most widely reported instances, author and comedian Sarah Silverman, along with two other authors, sued OpenAI and Meta for alleged copyright infringement, claiming that the companies' AI models used their work as training data without permission.[1]

These intellectual property implications are largely traceable to the absence, in many jurisdictions, of legislation capable of addressing the ownership of intellectual property in AI-generated works. A clear solution is therefore more robust legislation that not only addresses these issues but also classifies AI, regardless of its real or perceived "intelligence", as either a tool or an entity to which legal personality may be ascribed for the purpose of ownership of creative works. For now, in the absence of any legislation stating otherwise, can we safely conclude that AI-generated works belong to the user of the AI and not the AI itself?

3. Liability and Responsibility: With partially autonomous AI systems such as self-driving cars, which can make decisions and take actions based on real-time data and analytics, determining liability becomes challenging where these systems cause harm, accidents occur, or incorrect decisions are made. Traditional legal frameworks are designed for human accountability and may not sufficiently address situations involving AI use. This raises the question of whether responsibility should lie with the AI developers, the operators/users, or the AI itself. Once again, a legal framework may be necessary to allocate accountability in such scenarios.

4. Transparency and Explainability: AI models are often seen as "black boxes" because their decision-making processes are not easily explainable. In other words, it can be difficult to understand why an AI system made a particular decision. This lack of transparency can raise concerns about accountability and the right to know the reasons behind certain decisions, particularly in critical applications like healthcare or finance where the stakes are usually high. For example, if an AI system is used to make decisions about patient care, it is important for doctors and patients to be able to understand why the system made a particular recommendation. Similarly, if an AI system is used to make decisions about financial transactions, it is important for investors and other stakeholders to be able to understand how the system works.

There are a number of different ways to make AI systems more transparent and explainable. One approach is to use interpretable machine learning models, which are models that can be explained in a way that is understandable to humans. Another approach is to use post-hoc explainability techniques, which are methods for explaining the decisions of black box models after they have been made.
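
As a rough illustration of those two approaches, the sketch below first fits an inherently interpretable model whose coefficients can be read directly, and then applies a post-hoc technique (permutation importance) to a "black box" model after training. The language (Python), library (scikit-learn), and data are assumptions made purely for illustration.

```python
# Illustrative sketch only: two ways of making a model's decisions more explainable,
# using invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three hypothetical input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven mainly by feature 0

# (1) Interpretable model: its coefficients show how each feature influences the decision.
interpretable = LogisticRegression().fit(X, y)
print("Coefficients:", interpretable.coef_[0])

# (2) Post-hoc explanation of a black-box model: permutation importance measures how
# much shuffling each feature degrades accuracy, revealing which features matter most.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean)
```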

As AI systems become more sophisticated, it is increasingly important to be able to understand how they work and why they make the decisions they do. This will help to ensure that AI systems are used safely and responsibly and that they are accountable for their actions.

Additionally, transparency and explainability in AI systems play a vital role in fostering trust. When people comprehend the inner workings of an AI system, the fear of AI “taking over the world” can be easily dispelled, and they are more inclined to trust it and adopt its usage. Furthermore, this transparency aids in recognizing and addressing bias within AI systems, as individuals can readily identify biases when they understand the system's operations. Finally, transparency and explainability contribute to the enhancement of AI system performance, as developers can make informed adjustments to optimize its capabilities.

5. Bias and Fairness: Bias refers to the systematic and unfair favouritism or discrimination towards certain individuals or groups, based on characteristics such as race, gender, age, or socioeconomic status. Fairness, on the other hand, refers to the absence of bias or discrimination in decision-making.

Section 42 of the 1999 Constitution of the Federal Republic of Nigeria (as altered) guarantees freedom from discrimination as a fundamental right. However, since AI requires a huge amount of data for its training, AI algorithms may perpetuate biases present in the data used to train them. In other words, if the data is biased, the AI system will learn to be biased as well, and this can lead to discriminatory outcomes in areas such as hiring, lending, and more. For example, an AI system trained on data showing that men are more likely to be hired for certain jobs may be more likely to recommend men for those jobs, even if women are equally qualified. Addressing bias and ensuring fairness in AI decision-making is crucial, and legal measures may be needed to enforce accountability and promote fairness, whether by requiring AI systems to be transparent and explainable or by prohibiting the use of biased data in training AI systems altogether.
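
By way of illustration, the following minimal sketch (in Python, with entirely invented recommendation data) shows one simple fairness check of the kind an auditor or regulator might run: comparing the selection rates an AI hiring tool produces for two groups, a gap often described as a demographic-parity gap.

```python
# Illustrative sketch only: a simple demographic-parity check on hypothetical hiring
# recommendations produced by a model. All data below is invented for illustration.
import numpy as np

# 1 = model recommends hiring, 0 = does not; "group" is a protected attribute.
recommended = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group       = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = recommended[group == "A"].mean()
rate_b = recommended[group == "B"].mean()

print(f"Selection rate, group A: {rate_a:.2f}")
print(f"Selection rate, group B: {rate_b:.2f}")
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")
# A large gap suggests the model's recommendations may reproduce bias present in the
# data it was trained on and warrants further review.
```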

6. Consumer Protection: AI-driven products and services can have significant impacts on consumers, including their decision-making. For example, AI systems can be used to personalize advertising, recommend products and services, and even make financial decisions on behalf of consumers. Ensuring that AI systems are safe, reliable, and transparent is essential for protecting consumers from harm and misleading information.

7. Employment and Labour Laws: The adoption of AI in the workplace can lead to job displacement and changes in employment dynamics. This means that some jobs may be lost as AI systems become able to perform them more efficiently or effectively, while other jobs may be created as new AI-enabled industries and businesses emerge.

Labour laws may need to be updated to address issues related to job loss, retraining of workers, and the impact of automation on labour rights. For example, laws may need to be updated to ensure that workers who are displaced by AI are able to receive retraining or other assistance to help them find new jobs. Additionally, laws may need to be updated to protect the labour rights of workers whose work is directed or managed by AI systems.

8. Competition and Antitrust: The use of AI in business practices could lead to competition and antitrust issues, particularly if AI systems are used to collude or manipulate markets. For example, AI systems could be used to share pricing information or to coordinate production decisions, which could lead to higher prices or less innovation for consumers.

Antitrust laws are designed to protect competition and prevent businesses from engaging in unfair trade and anti-competitive practices such as price collusion, market manipulation, and predatory pricing, amongst others. These laws may need to be amended to address the specific challenges posed by AI. For example, it may be difficult to prove that AI systems are colluding if they are not able to communicate with each other directly.

It is important to consider the potential impact of AI on competition and antitrust in order to ensure that these laws are able to protect consumers and ensure that the benefits of AI are shared equitably.

9. Regulatory Compliance: AI applications frequently operate within regulated sectors like healthcare and finance, presenting a formidable challenge in maintaining industry-specific compliance. The continuous evolution of AI systems amplifies this complexity, compounded by the fact that existing regulations may lack specificity to address emerging AI-related concerns. The diverse applications of AI could necessitate adherence to distinct industry regulations – for instance, healthcare (for medical diagnosis) or finance (in algorithmic trading), among others. The dynamic landscape of AI compels a vigilant approach to compliance that harmonizes technological innovation with sector-specific obligations.

10. Regulation and Governance: The rapid pace of AI development may outpace the establishment of appropriate regulations and governance frameworks. This could lead to a number of risks, including the misuse of AI for harmful purposes, the spread of bias and discrimination, and the erosion of privacy.

Ensuring that AI technologies are developed and used responsibly requires collaboration among policymakers, researchers, and industry stakeholders. Policymakers need to develop regulations that are clear, comprehensive, and enforceable; researchers need to develop AI systems that are transparent, explainable, and fair; and industry stakeholders need to adopt responsible practices for the development and use of AI.

Conclusion

The world is intricately interwoven with the thread of technology, encompassing its myriad advancements, risks, benefits, and more. The emergence of AI stands as a pivotal testament to this technological journey. As the landscape of innovation continues to evolve, so must the framework that guides and safeguards it – the law. The essence of law lies in its adaptability, perpetually mirroring the rapid evolution of society.

Presently, AI remains largely unregulated in numerous jurisdictions, with debates centred around the careful balance between regulation and innovation. However, the dynamism of technology prompts an ongoing dialogue on the best course of action. Addressing the legal implications of AI necessitates a multifaceted approach encompassing well-defined legal frameworks, industry standards, ethical guidelines, and persistent collaboration between legal experts, technologists, and policymakers.

As AI journeys further into uncharted realms, the watchful eye on its legal ramifications becomes paramount. Navigating these waters requires a commitment to shaping a future where AI's integration is not only responsible but also holds the promise of benefiting society as a whole.

#FunmiRobertsLP #FRCArticle Tijesuni Ijitona Tomisin Warekuromo

Follow FRC on Social Media: LinkedIn | Facebook | Twitter | Instagram

Send FRC a mail via: [email protected]

Call: Lagos: +234-902-079-0815 || Ibadan: +234-803-806-3543

[1] Sarah Silverman sues OpenAI and Meta claiming AI training infringed copyright, https://www.theguardian.com/technology/2023/jul/10/sarah-silverman-sues-openai-meta-copyright-infringement accessed 07 August 2023.


