LEGAL LIABILITY OF ARTIFICIAL INTELLIGENCE (AI) IN NIGERIA
Morohunfola Aisha



The rapid development and immense growth of AI around the world have tested the limits of the law, creating legal uncertainties. Artificial Intelligence (AI), commonly implemented in computer systems or robots through software programs and algorithms, is concerned with replicating intelligence in machines. AI was founded on the premise that human intelligence can be described so precisely that a machine can be made to simulate it. The growth of AI is visible in many sectors of the Nigerian economy, such as healthcare, banking, and industry, among others. Among the advantages of using AI are increased efficiency, improved workflows, and lower rates of human error.

The website Inside Tech Law describes the defining characteristic of AI as its capacity to act and make decisions not merely automatically but autonomously [1]. The question that arises is: if AI possesses the ability to execute human tasks, can a mistake by an AI that harms a person lead to the AI itself being held responsible for the damage caused, with redress sought against it in court? Can an AI be sued, or be held vicariously liable for any harm it causes?

While recognizing the importance of AI, this article focuses on the legal liability of artificial intelligence in Nigeria and, in particular, on whether an AI can sue or be sued in Nigeria.

What is Artificial Intelligence?

Although there is no universal definition of AI, Artificial Intelligence (AI) can simply be defined as the ability of a digital computer, computer-controlled machine, or robot to perform tasks ordinarily done by intelligent beings such as humans.

As seen in recent times, an organization may develop AI itself or acquire an AI licence from a third party, which often results in a purchase, lease, or licence agreement. When AI is a key component of a transaction, certain parts of the agreement may raise negotiation issues, including risk allocation, indemnification, limitation of liability, and use of data [2].

AI exists in different forms and functions in different ways. Like humans, AI has its flaws and imperfections, which may result from defects in design, programming, or manufacturing, among others, and which can cause damage to third parties.

Given these flaws, countries around the world have rapidly commenced efforts to enact legislation and policies to regulate AI generally and specific AI technologies in particular.

Liability of AI in Nigeria

Here's a hypothetical situation to illustrate some of the legal uncertainties:

Para Tech Hospital, an early adopter of technology in solving medical problems, uses AI instead of radiologists to interpret X-ray images. In the course of a scan, the AI misses an obvious case of pneumonia, and the patient dies. Who, then, can be held responsible and sued for the patient's death?

As a general rule, established in Fawehinmi v Nigerian Bar Association (1989) 2 NWLR at 595, only natural persons and juristic persons are competent to sue or be sued in Nigeria [3]. A person who commences an action in court must be a person known to the law.

Thus, the categories of juristic persons who may sue or be sued under Nigerian laws include:

  • Natural persons, i.e., human beings;
  • Companies incorporated under the Companies and Allied Matters Act;
  • Unincorporated associations such as registered trade unions, partnerships, and sole proprietorships [4].

In Management Enterprises Ltd v. Otusanya (1987) 2 NWLR 179, it was held that both the plaintiff and the defendant must be juristic or natural persons in existence, or alive, at the time the action is instituted. The named plaintiff or defendant must be a living person or, in the case of a non-natural legal entity, its corporate name [5].

From the foregoing, it is clear that AI currently cannot incur civil or criminal liability, as it is regarded as a product that has no legal personality and cannot be personally responsible for its actions. Therefore, Para Tech Hospital or the company that manufactured the AI may be liable under the law of tort or under principles of corporate liability. For instance, section 20 of the Law Reform (Torts) Law of Lagos State provides for liability for damage caused by defective products [6].

AI's autonomous decision-making may nonetheless give rise to civil liability, which covers a wide spectrum of non-criminal liability. Civil liability may include strict liability; liability in tort for negligence; liability in tort for breach of statutory duty or strict liability for the "escape" of something harmful; and liability in contract. However, while the AI itself cannot be held personally liable, the manufacturer or the person who acquired the AI may be held liable.

Likewise, AI cannot be sued in Nigeria for criminal liability. This is because Nigerian criminal law requires two elements to be proved or established before a person may be liable for a crime: the actus reus (the guilty act) and the mens rea (the guilty mind).

The rule is that once one of these elements is missing, there can be no criminal liability, as there must be concurrence between the mens rea and the actus reus. For instance, if an AI were to be prosecuted under the Nigeria Criminal Law Act, the mens rea, described as "a state of mind that is statutorily required to convict a defendant of a particular crime," may be difficult to prove in the case of an AI [7]. The question will arise of how the prosecutor is to prove the culpable state of mind with which the AI caused an accident or harm, given that it is not human, and the prosecutor may be unable to do so, as AI systems are opaque.

According to Oraegbunam and Uguru in their article "Artificial Intelligence Entities and Criminal Liability," it is generally believed that machines are excluded from criminal liability because they lack the mental capacity to know that the nature of their act would amount to an offence (since the act stems from the maker's programming or the user's command, not from the machine itself) and cannot form general or specific intent (since, it can be argued, machines do not know good from bad beyond the commands given to them). Hence, AI currently cannot be held criminally or civilly liable in Nigeria, as there is as yet no law that regulates or criminalizes its conduct [8].

Similarly, in a 2001 judgment of the German Federal Court of Justice (BGH, VIII ZR 13/01), it was held that machines and software are unable to act as agents because they lack the necessary legal capacity. Instead, the person responsible for the machine or software is deemed to act with a general intent to make declarations, and that intent is then attributed to the actions performed by the machine or software [9].

As AI grows more sophisticated and ubiquitous, warnings against its current and future pitfalls grow louder. For example, in a February 2018 paper titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," 26 researchers from 14 institutions (spanning academia, civil society, and industry) enumerated a host of dangers that could cause serious harm within five years. Malicious use of AI could threaten digital security, physical security, and political security.

Yet, as more businesses in Nigeria invest in AI, the technology continues to evolve without any legal restrictions or regulation.

As for the state of AI in Nigeria today, in terms of regulation, the closest applicable laws are the Nigeria Data Protection Regulation (NDPR) and the Nigeria Data Protection Act (NDPA), which govern the use of data. The NDPR and NDPA do not apply to AI generally; they regulate only the use of data, without addressing issues such as programming errors, testing protocols, and much more.

There is currently no statute or judicial precedent regulating AI in Nigeria, which calls for the enactment of laws and regulations dealing with emerging technologies, as most Nigerian laws are archaic.

There should be a policy that expressly covers advancing and emerging technologies, as seen in jurisdictions such as Brazil, the UK, the US, Canada, and the EU, among others.

As stated in an article published on simplylaw.com, surviving the future depends on bringing technologists and policymakers together, because AI is one of the technologies that need policy oversight [10].

Citations:

[1] Inside Tech Law, "What is Artificial Intelligence (AI)?" https://www.insidetechlaw.com/artificial-intelligence

[2] Ibid.

[3] Fawehinmi v Nigerian Bar Association (1989) 2 NWLR at 595

[4] Management Enterprises Ltd v. Otusanya (1987) 2 NWLR 179

[5] Ibid.

[6] Law Reform (Torts) Law of Lagos State, section 20

[7] The Nigeria Criminal Law Act

[8] Oraegbunam & Uguru, "Artificial Intelligence Entities and Criminal Liability"

[9] BGH, judgment of 2001, VIII ZR 13/01 (VIII Civil Senate)

[10] "The Future of AI Regulation: Bringing Tech and Policy Together," https://accesspartnership.com/tech-policy-trends-2024-eu-act-future-of-ai-regulation/



ABOUT THE AUTHOR

Aisha Morohunfola, a Nigerian legal practitioner, brings extensive expertise and insightful analysis to her work as an author in the fields of technology and energy law. Her legal background has honed her skills in navigating the complexities of these legal frameworks.

Morohunfola's dedication to the field is evident in her active engagement with legal matters, ensuring she remains abreast of the latest developments and trends shaping the technological landscape.



Ofonime Enoh

Partner at Lincoln Associates LLP

2 months ago

Very insightful article. Useful as reference

Toheeb Jamiu

Maître Jammy

3 months ago

With the acknowledged risks posed by AI systems, one of the ways to mitigate them, according to the High-Level Expert Group on AI, is to ensure human oversight and control in the operation of an AI system. Again, AI should not only be about pecuniary gains for Africans, and Nigerians in particular; stakeholders should consider its implications for human rights and enact regional legislation to address the risks. The EU currently has the most comprehensive regulation on AI, though it is unnecessarily verbose.

Karolyne Hahn

AI Strategist | AI & Automation | Consulting - Workshops - Courses | Free Community

6 months ago

AI's unpredictable nature raises liability concerns - clear regulations needed. Aisha Morohunfola

AI's rapid progress raises complex questions about accountability and oversight. A proactive legal framework regulating AI ethics and liability is crucial. Aisha Morohunfola
