The Extraterritoriality of the EU AI Act: How the EU AI Act and its liability regimes apply to the UAE and KSA
By: Delara Emami
1.1. Introduction.
The rapid development of artificial intelligence (AI) has emerged as one of the most transformative technological advancements of the twenty-first (21st) century, reshaping industries and redefining the way we interact with the world. From healthcare to finance, AI-based goods and services (AI systems)[1] are being increasingly integrated into everyday operations, enhancing efficiency, accuracy, and decision-making. As organizations harness the power of machine learning and data analytics, the potential for AI to drive innovation and economic growth becomes increasingly evident. Globally, governments and businesses are investing heavily in AI research and development, recognizing its capacity to address complex challenges, such as climate change, public health crises, and economic inequality.
As AI continues to evolve, its importance is underscored not only by its economic and pragmatic implications, but also by ethical considerations surrounding data privacy, accountability, and the future of work. It is imperative to note that a key characteristic distinguishing an AI system from general software under the EU AI Act (defined below) is 'an AI system’s capability to infer'.
An AI system is defined as 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments'[2]. The capability of an AI system to infer refers to the process of obtaining outputs, such as predictions, content, recommendations, or decisions that can influence physical and virtual environments, and to its ability to derive models or algorithms, or both, from inputs or data. The adaptiveness that an AI system may exhibit after its deployment refers to its self-learning capabilities, allowing the system to change with use.[3]
Similarly, the UAE’s AI Ethics Principles and Guidelines[4] (referenced and discussed further below) define an AI system as, “a product, service, process, or decision-making methodology whose operation or outcome is materially influenced by artificially intelligent functions”[5]. In parallel with the EU AI Act’s definition of an AI system, the key feature of an AI system highlighted under the UAE AI Ethics Principles and Guidelines is its ability to learn behaviour and rules for which it is not explicitly programmed. The KSA AI Ethics Principles (also referenced and discussed in further detail below) define AI as “a collection of technologies that can enable a machine or system to sense, comprehend, act, and learn”[6].
All three regulatory frameworks emphasize that an AI system operates as a transformative digital tool possessing self-learning capabilities and the ability to make inferences. The EU AI Act and the UAE AI Ethics Principles and Guidelines also recognize an AI system as a product. This recognition will be important when determining damages associated with AI systems.
1.2 Purpose.
This article seeks to explore the significance of the EU AI Act (defined below) as a legal framework shaping and influencing the continually and rapidly evolving regulations governing artificial intelligence in the United Arab Emirates (UAE) and the Kingdom of Saudi Arabia (KSA), including, as it concerns the issue of liability resulting from the use of AI systems.
Both the UAE and KSA governments have made substantial investments to attract AI advancements and position themselves as leading global centers for AI development and deployment, which this article will seek to highlight.
2. General Overview of the EU AI Act and its AI Risk-Based Classification System.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts, including Regulation (EC) No 300/2008 (the EU AI Act)[7], provides a comprehensive legal framework for the regulation of artificial intelligence within the European Union (EU). The EU AI Act has been in force since 1 August 2024 and will be fully applicable from 2 August 2026. Certain prohibitions contained in the EU AI Act (together with its general provisions) apply from 2 February 2025, while the obligations set forth in the EU AI Act for ‘providers’[8] of general-purpose AI models[9] apply from 2 August 2025. The purpose of the EU AI Act, as provided in recitals (3) and (7) of its preamble, is to establish a set of common, uniform, and non-discriminatory rules to (A) ensure a consistent and high level of protection of public interests concerning health, safety, and fundamental rights throughout the EU, consistent with the values enshrined and upheld in the EU Charter of Fundamental Rights (the EU Charter), including democracy, the rule of law, and environmental protection, and (B) achieve human-centric and trustworthy AI, while also protecting against the harmful effects of AI systems and against potential regulatory divergences that could hamper the free circulation, innovation, deployment, and uptake of AI systems and related products and services within the internal market.
The EU AI Act thus aims to strike a balance between mitigating the risks AI systems pose to affected persons[10] located in the EU and promoting the free, cross-border movement and use of such systems. To that end, it establishes rules applicable to AI system providers (whether established or located within the EU or in a third country, and to the authorized representatives[11] of providers not established in the EU), operators[12], deployers (whether established in the EU or in a third country, where the output produced by the AI system is used in the EU), importers[13], distributors[14], and product manufacturers (placing an AI system on the market together with their product under their own name or trademark). The rules cover, (i) ‘placing on the market’ (i.e. the first making available of an AI system or a general-purpose AI model on the EU market); (ii) ‘making available on the market’ (i.e. the supply of an AI system or a general-purpose AI model for distribution or use on the EU market in the course of a commercial activity); and (iii) ‘putting into service’ (i.e. the supply of an AI system for first use directly to the ‘deployer’[15] or for own use in the EU for its intended purpose).
The EU AI Act, however, does not apply to AI systems that are marketed, put into service, or used for military, defense, or national security purposes, regardless of whether the entity involved is public or private. It also excludes AI systems or models, including their outputs, that are specifically developed and deployed solely for scientific research and development.[16]
To effectively regulate AI systems and provide guidance on their governance while determining applicable compliance requirements, the EU AI Act introduces a four-tiered, risk-based classification system based on the level of risk these systems pose to the health, safety, and fundamental rights of persons in the EU.
The EU AI Act defines ‘risk’ as the combination of the probability of an occurrence of harm and the severity of that harm. The risk-based classification system is summarized as follows: Unacceptable Risk AI Systems are strictly prohibited under the EU AI Act because they pose a clear threat to safety, livelihoods, or rights, and are therefore deemed unacceptable. Examples include AI systems and practices that, (i) deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision; (ii) exploit the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm; (iii) evaluate or classify natural persons or groups of persons over a certain period of time based on their social behaviour (i.e. social scoring), leading to detrimental or unfavourable treatment of certain natural persons or groups that is unjustified or disproportionate to their social behaviour or its gravity; (iv) make risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of such natural persons or on an assessment of their personality traits and characteristics (excluding AI systems used to support the human assessment of a person’s involvement in a criminal activity, where that assessment is already based on objective and verifiable facts directly linked to a criminal activity); (v) create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; (vi) infer the emotions of a natural person in the areas of the workplace and educational institutions, except where used for medical or safety reasons; (vii) categorize natural persons based on their biometric data for the purpose of deducing or inferring their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation; and (viii) use real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, unless strictly necessary to, (a) conduct targeted searches for specific victims of abduction, human trafficking, or exploitation, or search for missing persons; (b) prevent a substantial and imminent threat to the life or physical safety of natural persons; or (c) identify a person suspected of a criminal offence or conduct a criminal investigation in respect of an offence punishable by a custodial sentence or detention order.
The EU AI Act imposes regulatory compliance and conformity verification duties on providers, importers, and distributors of high-risk AI systems and general-purpose AI models. For instance, importers may be held liable if an imported AI system does not meet the applicable regulatory standards, as well as for damages resulting from its use. Distributors may be liable for damages if they distribute non-compliant AI systems or fail to act upon knowledge of non-compliance. Providers must monitor the performance and safety of their AI systems after deployment and report any serious incidents or malfunctions.
The EU AI Act provides that, while its risk-based approach serves as the basis for a proportionate and effective set of binding rules, and without prejudice to the legally binding requirements contained in the EU AI Act, due weight should be given to the principles outlined in the Ethics Guidelines for Trustworthy AI (prepared by the High-Level Expert Group on AI (AI HLEG), established as an independent expert group by the European Commission in June 2018), namely, (i) human agency and oversight (i.e. AI systems should be developed and used as a tool that serves people, respects human dignity and personal autonomy, and functions in a way that can be appropriately controlled and overseen by humans); (ii) technical robustness and safety (i.e. AI systems should be developed and used in a way that ensures robustness in the event of problems, so as to minimize unintended harm, and resilience against attempts to alter the use or performance of the AI system, including for unlawful purposes by third parties); (iii) privacy and data governance (i.e. AI systems should be developed and used in accordance with privacy and data protection rules, ensuring the integrity and privacy of processed data); (iv) transparency (i.e. AI systems should be developed and used in a manner that promotes traceability, including by imposing obligations on deployers to inform affected persons that they are interacting with such systems and to make them aware of their rights); (v) diversity, non-discrimination, and fairness (i.e. AI systems should be developed in an inclusive manner that promotes equality and cultural and gender diversity, and should avoid discriminatory practices and unfair bias prohibited under applicable laws); and (vi) societal and environmental well-being and accountability (i.e. AI systems should be developed and used in ways that promote sustainability and environmental responsibility).
The EU AI Act further stipulates that, whenever feasible, the foregoing principles should be integrated into the design and implementation of AI systems and models, and should serve as a guiding tool for the drafting of codes of conduct and best practices in respect of the deployment and use of AI systems. Consequently, and consistent with the principles set forth in the Ethics Guidelines for Trustworthy AI, the EU AI Act’s rules are designed to align with the EU General Data Protection Regulation (EU) 2016/679 (GDPR), Council Directive 85/374/EEC (the Product Liability Directive), EU Regulation No 765/2008, and Decision No 768/2008/EC, which together aim to ensure consumer protection for products in the EU market by establishing a uniform system of accreditation, market surveillance, and mutual acceptance of conformity assessments and transparency. Additionally, these rules are consistent with Regulation (EU) 2023/988, which replaces Directive 2001/95/EC (the General Product Safety Regulation).
3. The EU AI Liability Framework: Acknowledging AI Systems as Products and Defining Liability, Damages, and Remedies.
The EU Commission has acknowledged that existing product liability regimes within the EU inadequately provide claimants with means to seek remedies for damages caused by AI systems. To address this, the European Parliament approved a new liability regime in March 2024, amending the existing Product Liability Directive[18]. This initiative includes the adoption of the EU Commission’s 2022 proposal for a directive on Liability for Defective Products (2022/0302(COD))[19] (the New Product Liability Directive) to replace the current directive, alongside the proposed EU AI Liability Directive (the AI Liability Directive).
Approved by the European Parliament in March 2024, the New Product Liability Directive aims to ensure that the EU's product liability framework provides a no-fault liability system for defective products. Under this directive, ‘product’ includes "all movables, including software, including when they are integrated into other movables or installed into immovables”[20], subject to the exclusion of free and open-source software developed or supplied outside the course of a commercial activity. As such, AI system providers (including manufacturers, authorised representatives of manufacturers, and importers) may be held liable for ‘damages’ caused by any defective AI products or goods placed on the market or put into service (i.e. the supply of a product for distribution, consumption, or use in the EU in the course of a commercial activity, whether for profit or free of charge).
While this new directive excludes open-source software, manufacturers that integrate open-source software into, or interconnect it with, their products may be held liable for any resulting defects in such AI systems. The New Product Liability Directive expands the criteria of ‘defectiveness’ to include AI systems. According to Article 6 of the New Product Liability Directive, “a product is defective if it does not provide the safety that a person is entitled to expect or that is required under applicable EU or otherwise national laws”[21].
In assessing the defectiveness of a product, certain circumstances will be taken into consideration, such as, (i) the characteristics of the product (including its labelling, design, technical features, packaging, and instructions for assembly, installation, use, and maintenance); (ii) the product’s reasonably foreseeable use and its effect on other products that can be used together with it, including by means of interconnectedness; (iii) the effect on the product of any ability to continue to learn or acquire new features after it is placed on the market or put into service (i.e. AI products); (iv) the moment in time when the product was placed on the market or put into service; (v) relevant product safety requirements, including safety-relevant cybersecurity requirements; (vi) any recall of the product; and (vii) any failure of the product to fulfil its intended purpose, where such product is used to prevent damage. Products may be considered ‘defective’ where vulnerabilities arise from cybersecurity failures. The New Product Liability Directive provides for the recovery of material losses resulting from a ‘defective’ product, whilst maintaining that non-material losses may be claimed under the laws of member states.
The AI Liability Directive seeks to complement the New Product Liability Directive, addressing the lack of coherence between existing liability rules and the EU AI Act. The AI Liability Directive aims to:
(i) establish legal certainty for both claimants and businesses concerning product liability for new technologies (inclusive of AI);
(ii) address the rights of claimants to damages related to such new technologies, offering claimants a level of protection irrespective of the technology involved, and at the same time,
(iii) promote the circulation, deployment, and use of such new technologies[22].
Claims under the AI Liability Directive are limited to claims where the damage is either caused by an AI system or caused by the failure of an AI system to produce a specific output. The New Product Liability Directive and the AI Liability Directive are founded on fundamentally different principles. The New Product Liability Directive establishes a strict liability framework for manufacturers (and, where the manufacturer is established outside of the EU, shifts liability to the importer and the authorised representative of the manufacturer), holding them accountable for defective products regardless of fault. This no-fault liability mandates that defendants disclose relevant evidence upon request; if a defendant fails to provide such evidence, a presumption of product defectiveness arises.
Additionally, if it is shown that a product is defective and the resulting damage is consistent with that defect, the causal relationship is also presumed, placing the burden on the defendant to rebut these presumptions. Conversely, the AI Liability Directive aims to reform national fault-based liability systems by altering the burden of proof. In the existing framework, claimants must establish fault, damage, and causation.
The AI Liability Directive, however, empowers claimants to request court-ordered disclosure of information related to high-risk AI systems. If the defendant does not comply with such an order, it is presumed that the defendant failed to meet the relevant duty of care. Furthermore, the AI Liability Directive introduces a rebuttable presumption of a causal link between the defendant's fault and the AI system's output (or failure to produce an output). Nonetheless, claimants are still required to demonstrate that the AI system's output, or its failure to produce an output, caused the damage.
The New Product Liability Directive and the AI Liability Directive differ significantly regarding the types of damages claimants can pursue. Under the New Product Liability Directive, claimants can seek damages from manufacturers if a defective product causes death, personal injury (including recognized psychological harm), property damage, or data loss. In contrast, the AI Liability Directive does not impose such limitations on the types of harm that can be claimed. Instead, it allows claimants to seek damages for any form of harm, provided it is recognized under relevant national law. This could encompass damages related to discrimination or violations of fundamental rights, such as privacy.
The New Product Liability Directive specifically addresses liability concerning defective products, defining them as items that fail to meet the safety expectations of consumers or the standards set by EU or national law, including the safety requirements outlined in the EU AI Act. The AI Liability Directive, on the other hand, centers on breaches of a duty of care without requiring that such breaches result in a defective product. Where a court finds it excessively difficult for a claimant to establish a causal link, the AI Liability Directive allows for a presumption of causation between the defendant's fault and the output produced by the AI system (or its failure to produce an output). Despite their differences, the European Commission, through the New Product Liability Directive and the AI Liability Directive, aims to establish a harmonious set of rules in line with the EU AI Act: strict liability for defective products on the one hand, and non-contractual, fault-based civil liability claims on the other. What implications does this have for the applicability of the EU AI Act and its liability regime to providers, operators, deployers, importers, manufacturers, and suppliers of AI systems in the UAE and KSA? We address this question below.
4. Overview of AI Regulatory Developments in the UAE and KSA.
As MENA’s most active tech hubs, the UAE and the KSA have made major investments and policy commitments to position themselves as leaders in AI innovation. According to Magnitt’s 2023 Saudi Arabia Venture Capital Report[23], the UAE and KSA together account for an estimated 80% of venture capital financing in the MENA region. An estimated 75 AI-driven startups operate in the UAE and 50 in the KSA. In recent years, the UAE government has introduced several initiatives and programs to foster the growth of AI, including the UAE AI Strategy and the establishment of specialized AI research centers. The Dubai International Financial Centre (DIFC) Innovation Hub[24] recently launched a largely subsidized AI commercial licensing regime (AI License), with the aim of providing developers and entrepreneurs looking to launch their businesses in the MENA region with advanced growth opportunities from its state-of-the-art AI Campus, a fully immersive and globally connected AI hub estimated to attract over USD 300mn in collective funds and 500+ global AI start-ups, and to create 3,000+ jobs by 2025[25]. Additionally, other major tech hubs like Dubai Internet City and the Abu Dhabi Global Market (ADGM) have attracted a growing number of AI-focused startups. The UAE is also home to several other innovation hubs, including ADGM’s Hub71, Abu Dhabi’s G42-focused AI hub, and a number of other AI-centric accelerators. As of this publication, neither the UAE nor the KSA has established a dedicated legal or regulatory framework for AI governance.
To meet the increasing societal demands for AI technology, the UAE government has focused on developing sector-specific AI guidelines across transportation, healthcare, space, renewable energy, water, technology, education, and the environment, as outlined in its National Strategy for Artificial Intelligence 2031 (UAE AI Strategy). Launched in 2017, this strategy aims to create a comprehensive AI system that enhances the UAE's socio-economic framework, promotes responsible AI practices, addresses ethical challenges, and establishes a robust data-sharing infrastructure for AI testing.
In 2023, the UAE introduced the AI Adoption Guideline in Government Services, which highlights AI use cases and opportunities for integration within the public sector while promoting ethical considerations and transparency. More recently, in July 2024, the UAE published the UAE Charter for the Development and Use of Artificial Intelligence, outlining twelve fundamental principles that emphasize human well-being, safety, responsible development, inclusivity, and respect for individual differences. The UAE Charter also underscores the importance of data privacy, human oversight, governance, and accountability, while reaffirming the UAE's commitment to international treaties and local laws. Although the EU AI Act did not directly inform the UAE AI Charter, notable parallels exist, particularly in their shared emphasis on equality, diversity, human-centric design, and the protection of fundamental rights through transparency and accountability. In December 2022, the UAE Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications Office published the AI Ethics Principles and Guidelines[26].
Much like the UAE AI Charter, the UAE AI Ethics Principles and Guidelines set out ethics guidelines to ensure the responsible and ethical deployment of artificial intelligence across various sectors.
Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (UAE Data Protection Law) is also key to the UAE’s approach to regulating AI. Data privacy legislation is particularly relevant to AI systems, as it sets guidelines for the collection, processing, and storage of personal data, ensuring that AI technologies operate in compliance with privacy standards. By emphasizing data protection principles, the law aims to enhance trust in AI applications while promoting responsible innovation in the digital landscape. The decree also outlines the rights of individuals regarding their personal data, which is crucial for fostering ethical AI development that respects users' privacy and autonomy.
The DIFC Data Protection Law No. 5 of 2020 was amended in September 2023 to include language governing the collection and processing of personal data through “autonomous and semi-autonomous systems, including AI and generative machine-learning technology” (as amended, the DIFC DPL)[28], where “system” refers to any machine-based system that operates autonomously or semi-autonomously and is capable of processing personal data for purposes defined by humans, by the system itself, or by both, with such processing resulting in the generation of output. These amendments were reportedly made to the DIFC DPL to accommodate increasing deployment of, and dependency on, such systems, as well as governance initiatives such as the new EU AI Act, and it is clear from the accompanying guidance that the definitions contained in the DIFC DPL have been influenced by that legislation. Additionally, Article 38 of the DIFC DPL grants data subjects the right to contest any decision made solely through automated processing, including profiling, that produces legal effects or other significant impacts on them, and entitles them to be notified that their data is being processed by an AI system (Regulation 10)[29].
Among other protections, the DIFC DPL prohibits the use of an AI system unless the purpose of processing a data subject’s personal information is solely and strictly defined and approved by an individual, and requires that an AI system produce unbiased algorithmic decisions and uphold the principles of fairness, transparency, security, and accountability[30].
The UAE has also enacted two key pieces of legislation drawing parallels to EU legislation as it concerns AI systems, (i) Federal Law No. 15/2020 on Consumer Protection (UAE Consumer Protection Law), which, similar to consumer protection regulations in the EU, sets out penalties for the display, offering, promotion, or advertising of goods or services injurious to consumers, inclusive of AI systems, and (ii) the UAE Federal Law Combating Discrimination and Hatred (Anti-Discrimination Law). The Anti-Discrimination Law addresses discrimination and promotes tolerance within society, and has clear implications for AI systems, particularly in how they are designed and deployed. AI technologies must be developed and used in compliance with this law, ensuring that they do not perpetuate biases or discriminatory practices. By mandating respect for individual rights and promoting inclusivity, the Anti-Discrimination Law encourages the responsible use of AI, fostering an environment where technology serves to enhance social harmony and equality. These principles closely align with the values embodied in the EU AI Act and the EU Charter.
Various other authorities in the UAE have also published guidance on AI.
In 2020, the KSA established the Saudi Data and AI Authority (SDAIA) and mandated it to develop the national strategy for data and AI, together with compliance policies, procedures, standards, and governance controls, to achieve the Kingdom’s overarching AI objectives. In September 2022, following a public consultation, and in accordance with the Council of Ministers’ Resolution No. (292), the KSA announced the first version of its artificial intelligence ethics framework (AI Ethics Principles), applicable to all public, private, and non-profit entities engaged in the design, development, deployment, implementation, or use of AI systems in the KSA, or affected by such systems[31].
The AI Ethics Principles aim to, (a) support the KSA’s national strategy concerning the adoption and deployment of AI technology and encourage research and innovation; (b) establish governance policies concerning data and AI models to limit any negative sociological, political, and economic implications of AI systems, including but not limited to the protection and privacy of data subjects and their rights concerning the collection and processing of their data; and (c) support and assist entities in adopting standards and ethics when building and developing AI-based solutions to ensure their responsible use[32]. AI is defined under KSA directives as “systems that employ methods that can gather data and use it to predict, suggest, or make decisions with varying degrees of autonomy and select the best course of action to accomplish particular objectives”[33].
In October 2023, a second iteration of the AI Ethics Principles was published. The first iteration provided a limited waiver/exception to the applicability of certain parts of the framework; this waiver/exception was removed in the second iteration, signaling the fundamental importance the SDAIA attributes to ethical standards and practices and the non-exceptional compliance requirements to which all public, private, and non-profit AI stakeholders must adhere in the use and deployment of AI in the KSA.
By also establishing a tiered risk categorisation system for the levels of risk associated with the development and/or use of AI (from little to no risk, through limited risk and high risk, to unacceptable risk), the AI Ethics Principles closely align with the EU AI Act’s risk-based classification system, which likewise categorizes uses of AI based on the level of risk they pose to users and the general public, (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) minimal risk. AI systems that pose minimal or no risk are not required to adhere to the AI Ethics Principles. In contrast, AI systems classified as “high risk” or “limited risk” pose risks to basic rights, and as such must undergo pre-conformity and post-conformity assessments and must adhere to the provisions thereunder, as well as to applicable KSA statutory requirements. Finally, AI systems classified as “unacceptable risk” pose an unacceptable risk to people’s safety, livelihoods, and rights, such as those relating to social profiling, exploitation of children, or distortion of behaviour, and are therefore prohibited from being developed.
Similarly, the EU AI Act provides that “high-risk” AI includes systems used in critical infrastructure, education, employment, essential private and public services, law enforcement, and migration, which are subject to the strictest regulatory and statutory requirements. Providers of these systems will need to comply with extensive obligations around transparency, data quality, human oversight, and robustness, or face steep fines of up to EUR 15 million or 3% of global annual turnover (rising to EUR 35 million or 7% of global annual turnover for engaging in prohibited AI practices).
Much like the UAE AI Ethics Guidelines, the KSA GenAI Guidelines hold designers, vendors, purchasers, developers, owners, and evaluators of Generative AI (GenAI) systems ethically responsible and liable for any decisions or actions that could harm individuals or communities. Implementing GenAI systems may have legal and ethical consequences that require careful consideration, including risks related to intellectual property infringement, data privacy issues, and potential violations of human rights. Therefore, developers and users should: (i) ensure data is properly acquired, classified, processed, and made accessible to facilitate human intervention and control when necessary; (ii) conduct data quality checks, clean the data, and validate its integrity to ensure accurate outcomes; (iii) build and validate models responsibly to meet intended goals; (iv) comply with relevant laws, such as personal data protection and intellectual property laws, to safeguard user rights; and (v) consult legal experts to identify and mitigate risks associated with the implementation of GenAI systems.[34]
5. Extraterritoriality of the EU AI Act, the EU AI Liability Framework, and Implications for Local AI-Driven Companies.
By now, it should be apparent that the local principles, guidance, rules, and ordinances (binding and non-binding) promulgated in the UAE and KSA closely mirror the binding provisions of the EU AI Act, the Ethics Guidelines for Trustworthy AI, the New Product Liability Directive, and the AI Liability Directive. While the UAE and KSA do not have a specific or singular legally binding regulatory framework for AI, the numerous similarities between local laws and directives and the EU AI Act evidence the significant influence of the EU’s AI framework on local practices.
Indeed, the provisions of the EU AI Act extend its reach to other jurisdictions and may impose liability on AI system creators, providers, importers, and distributors, irrespective of whether they are established outside the EU or EEA, as well as on deployers using the output of an AI system in the EU or EEA. So long as the AI system generates, involves, or is linked with any system, process, input, or output concerning or connected to the EU or EEA, the EU AI Act may apply. We have seen how the GDPR has changed the manner in which non-EU businesses collect, use, and process data to comply with European privacy standards and expectations; the same is shaping up to be true for AI-driven businesses, irrespective of the fact that they may operate outside the EU. For example, the EU AI Act requires that, before offering or making a high-risk or general-purpose AI system or model available in the EU market, providers based outside the EU appoint an authorized representative, a natural or legal person, who is established within the EU. This requirement ensures that authorities can maintain contact with the provider to obtain all information needed to verify compliance with regulatory obligations.
As many AI applications being developed in the UAE and KSA, from autonomous vehicles to healthcare diagnostics, would likely be classified as “high-risk” under the EU AI Act, UAE and KSA AI-enabled businesses should, in addition to complying with local guidelines, laws, and liability regimes, also ensure that any AI system they provide or deploy that may (directly or indirectly) affect or interact with EU and EEA persons complies with the legal and regulatory requirements set forth in the EU AI Act. Failing this, liability for non-compliance may be assigned (and, where claims are made for damages, such claims may be awarded), notwithstanding their originating or operating jurisdiction.
Additionally, local technology businesses with AI capabilities should be aware not only of the extraterritorial application of the EU’s liability framework (including that set out under the GDPR) but also of local statutes and laws that may assign liability for defects and damages caused by AI systems. By way of example, the New Product Liability Directive (EU) now recognizes that no-fault liability for defective products applies to all movables, including software (inclusive of AI systems). Federal Law No. 5 of 1985 on the Civil Transactions Law of the United Arab Emirates (UAE Civil Transactions Law) recognizes that the object of a contract can be property, movable or immovable, corporeal or incorporeal, a specific act or service, or any other thing that is not prohibited by law and does not violate public policy or morals.[36] Certain provisions of the UAE Civil Transactions Law, discussed briefly below, are likely applicable to AI systems in the UAE and could serve as a basis for a cause of action for parties harmed by them.
Under the UAE Civil Transactions Law, a party may establish contractual liability if it is demonstrated that the other party breached its contractual obligations, a loss was sustained, and the loss arose from the breach (i.e., causation). Tortious liability arises from Article 282 of the UAE Civil Transactions Law, which provides that “any harm done to another shall render the actor, even though not a person of discretion, liable to make good the harm”, where ‘harm’ is either inflicted on a person or can equate to damage to property.[37] Tortious liability may be established where a party has breached its obligation under the law, a loss was sustained, and the breach resulted in the loss.
The New Product Liability Directive stipulates that consumer protection necessitates holding any manufacturer involved in the production process accountable if their product or a supplied component is defective. To ensure that injured persons have an enforceable claim for compensation where a manufacturer is established outside the EU, it should be possible to hold the importer of the product and the authorised representative of the manufacturer liable.[38]
The New Product Liability Directive further recognizes that some supply chains include economic operators whose structure or purpose does not easily align with traditional legal frameworks. This is particularly true for fulfilment service providers, which carry out many functions akin to those of importers but may not always fit the established definition of an importer under the Directive. As such, these providers should be held liable; however, this liability should apply only when there is no importer or authorized representative located within the EU.
Article 316 of the UAE Civil Transactions Law further provides that “any person who has things under his control which require special care in order to prevent their causing damage or mechanical equipment, shall be liable for any harm done by such things or equipment, save to the extent that damage could not have been averted. The above is without prejudice to any special provisions laid down in this regard”.[39] The UAE Civil Transactions Law thus makes clear that while harm may be caused by “things”, liability will be attributed to the persons (whether corporate entities or individuals) controlling those “things”. While not yet tested in the UAE courts, harmful acts caused by “things” can arguably include those of AI systems.
Given that the EU AI Act and the UAE’s guidelines and directives emphasize the importance of AI systems being controlled by humans, claims for ‘harm’ or damages in the UAE resulting or arising from an AI system will likely be attributed to the ‘operator’ (i.e., the individual or entity in control of the AI system, a term defined in and borrowed from the EU AI Act).
In terms of damages, the UAE Civil Transactions Law recognizes the concepts of direct damages, loss of profits, loss of opportunity, consequential damages, interest, and moral damages. The quantum of damages is fixed by the courts.[40] It will be interesting to see whether the concept of moral damages under the UAE Civil Transactions Law may extend to damages caused by AI systems, since moral damages are defined as “an infringement of the liberty, dignity, honour, reputation, social standing, or financial condition of another”.[41]
It would not be far-fetched to argue that a high-risk AI system used in a discriminatory manner for profiling purposes could, where proven by the claimant, lead to an infringement of the claimant’s reputation or liberty. This would align with damages that may similarly be attributed to, or result from, non-compliant AI systems under the EU AI Act. Similarly, Article 120 of the Saudi Civil Transactions Code (the KSA Civil Code)[42] delineates the framework for compensation regarding tortious liability for damages, stating that “any fault that causes damage to others shall be compensated by the person who committed it.”
For a party seeking damages due to breach of contract, personal injury, or loss of or damage to property, it is essential to establish that the other party was at fault, that the claimant suffered harm, and that this harm was directly caused by the defendant. Notably, the emphasis lies on the person who has control over the entity or situation that caused the harm. Further, Article 127[43] stipulates that if multiple parties are responsible for an act resulting in damage, all such parties will be jointly liable for compensating the injured party. Article 129 introduces the principle of vicarious liability; while not yet tested, it could be argued that vicarious liability arises where an AI system under a person’s control causes damage.
In addition, reflecting similarities with the UAE Civil Transactions Law, Article 138 of the KSA Civil Code allows claimants to seek moral damages for psychological harm resulting from an offense to a person's body, freedom, honor, reputation, or social standing. The arguments concerning moral damages arising from discriminatory uses of AI under the UAE law may also find relevance in the context of the KSA Civil Code, particularly where AI systems are utilized in ways that infringe upon an individual's liberty, freedom, or reputation. Finally, Article 139 outlines that the assessment of damages will be based on proportionality to the losses suffered by the injured party due to the wrongful act. Courts are generally tasked with determining the appropriate quantum of damages as mandated by the KSA Civil Code.
_________________
[1] ‘AI System’ as the term is defined in Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (EU AI Act): https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.
[2] EU AI Act, Article 3 (Definitions).
[3] Ibid, Preamble, recital (12).
[4] Minister of State for Artificial Intelligence, Digital Economy and Remote Work Application Office (UAE), AI Ethics Principles and Guidelines (December 2022): https://ai.gov.ae/wp-content/uploads/2023/05/MOCAI-AI-Ethics-EN.pdf (AI Ethics Principles and Guidelines).
[5] Ibid, AI Ethics Principles and Guidelines, Clause 2.4.
[6] Saudi Data & AI Authority, AI Ethics Principles, September 2023: https://sdaia.gov.sa/en/SDAIA/about/Documents/ai-principles.pdf.
[7] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act): https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.
[8] A ‘provider’ is defined as a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose model developed and places it on the market or puts the AI system into service under its own name or trademark whether for payment or free of charge (EU AI Act, Article 3).
[9] ‘General-purpose AI models’ are models that are versatile AI systems that can be adapted for a wide range of applications across various sectors, including large language models (LLMs), image recognition systems, and recommendation engines).
[10] EU AI Act, Article 2 (Scope).
[11] An ‘authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf of the obligations and procedures established by this Regulation.
[12] An ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer, or distributor.
[13] An ‘importer’ means a natural or legal person located or established in the EU that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
[14] A ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market.
[15] A ‘deployer’ is a natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of personal non-professional activity (EU AI Act, Article 3).
[16] EU AI Act (2024), Article 3.
[17] EU AI Act (2024), Preamble, recital (27).
[18] Proposal for a Directive of the European Parliament and of the Council on Liability for Defective Products, 28 September 2022, COM/2022/495 (final), 2022/0302 (COD): https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0495.
[19] Ibid.
[20] Provisional Agreement Resulting from Interinstitutional Negotiations; Proposal for a Directive of the European Parliament and of the Council on Liability for Defective Products (COM(2022)0495 – C9-0322/2022 – 2022/0302(COD)), Clause (6): https://view.officeapps.live.com/op/view.aspx?src=https%3A%2F%2Fwww.europarl.europa.eu%2FRegData%2Fcommissions%2Fimco%2Finag%2F2024%2F01-24%2FCJ24_AG(2024)758731_EN.docx&wdOrigin=BROWSELINK.
[21] Ibid.
[24] Minister of State for Artificial Intelligence, Digital Economy and Remote Work Application Office (UAE), AI Ethics Principles and Guidelines (December 2022): https://ai.gov.ae/wp-content/uploads/2023/05/MOCAI-AI-Ethics-EN.pdf.
[25] https://ai.gov.ae/wp-content/uploads/2023/03/MOCAI-AI-Ethics-EN-1.pdf.
[26] Regulation 10 on Personal Data Processed Through Autonomous and Semi-Autonomous Systems, Commissioner of Privacy, Dubai International Financial Centre: https://edge.sitecorecloud.io/dubaiintern0078-difcexperie96c5-production-3253/media/project/difcexperiences/difc/difcwebsite/documents/data-protection-pages/guidance-and-handbooks/lawful-processing/difc-dp-gl-23_rev_01_regulation_10.pdf.
[27] Ibid.
[28] https://sdaia.gov.sa/en/SDAIA/about/Documents/ai-principles.pdf.
[29] AI Ethics Principles, September 2023, Saudi Data and AI Authority (SDAIA): https://sdaia.gov.sa/en/SDAIA/about/Documents/ai-principles.pdf.
[30] EU AI Act (2024), Article 22.
[31] New Product Liability Directive, recital (27) of the preamble.