Navigating the EU AI Act – implications for different industries
In our previous article concerning the EU Artificial Intelligence Act (the “AIA”), we provided a general outlook on the proposed regulation, explaining what the AIA is, the types of requirements it contains and the classification model on which it is based. If you have read that article, you will know that the AIA is a horizontal piece of legislation: it applies to all AI applications across all sectors and industries. As such, it does not in itself contain any sector-specific rules.
For this reason, you may wonder what the exact implications of the AIA are for your specific field of industry. Are there any particular matters or details that you should take into account? In the pages that follow, we will explore the implications and effects of the AIA on some key industries from a practical standpoint. Considering that the EU aims at boosting AI innovation within smaller businesses, we will also take a brief glimpse at how the proposed regulation takes small and medium-sized enterprises (SMEs) and start-ups into consideration.
If you happened to miss out on the previous article, do not worry: check it out here. We highly recommend diving into it first to gain a basic understanding of the AIA before reading this article.*
* Please keep in mind that after our first EU AIA article was published, the pre-final version of the EU AIA was leaked online on January 22, 2024. This 892-page document showcased a comparative analysis of various provisions of the EU AI Act and was soon followed by a 258-page document providing a consolidated version of the text. The pre-final version maintains the risk-based approach as well as the majority of the other clauses set out in previous versions of the act. However, it also includes various modifications and provides new details on, for instance, the addressees of the EU AIA and their obligations. These modifications and new approaches are taken into account in this article.
Healthcare industry
AI technologies have already been penetrating the healthcare industry for years. It is clear that the potential benefits of AI and robotics are numerous in the industry: at its best, AI may be used to improve diagnostic accuracy, help to enhance treatment and improve communication between patients and healthcare professionals. At the same time, the use of AI in medical devices and patient care involves various challenges and ethical concerns relating to patient safety, data security, privacy, misdiagnoses, and biased outcomes.
Firstly, it is important to highlight that healthcare is one of the most regulated sectors in the EU and is regulated by a wide range of sector-specific rules. Thus, legislation such as the Medical Devices Regulation (MDR) and the In-Vitro Diagnostics Regulation (IVDR), must also be carefully complied with alongside the AIA.
How will the AIA affect the requirements set for medical devices, then? This depends on how the device is classified. Due to their nature and the risks that they bring, many medical devices are classified as high-risk. To understand this, let us revisit the conditions that the regulation outlines for classifying a system as high-risk in Article 6(1). According to this article, an AI system is classified as high-risk if:
1. the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed in Annex II of the AIA; and
2. the product in question is required to undergo a third-party conformity assessment under that legislation before being placed on the market or put into service.
Regarding the first condition, the MDR and IVDR are both expressly included in Annex II of the AIA. With regard to the second condition, it must be first understood that the MDR also contains a risk classification system. Under the MDR, all devices in its risk class IIa or higher must undergo conformity assessment by a third party prior to placing them on the market. Only a small number of AI software as medical devices fall into a risk class that is lower than class IIa.
The above indicates that the majority of AI-integrated medical systems are likely to be designated as high-risk under the EU AIA. It is worth mentioning that the pre-final version of the AIA also outlines instances in which systems that would otherwise be considered high-risk are exempt from this classification due to the absence of a significant risk associated with their operation. For instance, if a medical device is designed only to perform narrow procedural tasks, it is likely not considered high-risk. Therefore, a classification assessment should always be carried out on a case-by-case basis.
Should an AI system be designated as high-risk, its providers, deployers, importers, distributors, and manufacturers must adhere to a comprehensive set of obligations. These responsibilities encompass risk management, thorough documentation, transparency, human oversight, and data and data governance, with the bulk of the obligations resting primarily on the system's provider. Furthermore, the AIA mandates that AI applications undergo a conformity assessment in alignment with sector-specific regulations. For medical devices, this means that notified bodies will evaluate the system's compliance with the AIA in conjunction with other safety and performance criteria, ensuring a holistic assessment of conformity.
Because of the ethical concerns involved in using AI in healthcare, the AIA emphasizes the responsible use of AI in the sector. Alongside mandatory legislation, ethical guidelines and soft law should also be followed. Soft law includes, for instance, the Ethics Guidelines for Trustworthy Artificial Intelligence presented by the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group set up by the EU Commission.
Media and entertainment
It is evident that AI has already had a significant effect on the media and entertainment industries: AI technology is widely utilized in areas such as game development, film production, and advertising, and it is revolutionizing creative workflows across these sectors.
When discussing the application of AI in these sectors, the initial thought often revolves around content creation using generative AI tools. Large generative AI tools that allow for flexible content generation and can accommodate a wide range of distinct tasks will fall under the category of general-purpose AI systems (GPAIs).
The pre-final version of the regulation contains a detailed section dedicated specifically to GPAIs. This segment establishes specific obligations for providers that apply irrespective of which class the AI model would otherwise fall into. Obligations include providing information to AI system providers who intend to use the GPAI model, cooperating with the Commission and competent authorities, as well as respecting national copyright laws.
In addition to generative AI tools, there is also a wide range of other types of models used by businesses in the music, film, gaming and advertising industries. These include algorithms used for creating personalized experiences and enhancing consumer experience, as well as applications used for marketing and audience engagement. Most of these tools will fall under the categories of limited or minimal risk.
Lastly, businesses operating within media and entertainment must be especially vigilant in addressing potential biases in AI systems. This is because the outputs of such operators may significantly impact societal perceptions. Bias and fairness in AI algorithms have been significant concerns in the EU, and the proposed AIA underscores the importance of mitigating biases to ensure fair outcomes.
Financial services
In the financial services sector, the transformative effect of AI is also evident. AI is employed for tasks such as fraud detection, customer service through chatbots, and personalized financial recommendations. Given the vast range of ways in which AI may be utilized, as well as the intricate nature of the financial sector, reliance on AI is anticipated to grow significantly. Notably, processes and models integral to creditworthiness assessments or risk premium evaluations are foreseen to fall within the high-risk category of the AIA's classification system.
Additionally, AI systems employed in critical financial infrastructure operations, biometric identification, natural person categorization, as well as employment and employee management, are also slated for high-risk classification. It is worth noting that certain AI applications, such as those dedicated to enhancing customer experience, fraud detection, customer lifetime value predictions, and pattern analysis (without direct impact on individual decisions) are likely to contain only limited or minimal risks.
Transport
AI applications have also been gradually integrated into the mobility and transport sectors over the past decades. Autonomous vehicles, which have gone through various stages of development over the years, are evidently one of the most significant examples of AI use in the transport sector. Other examples include road traffic management systems, systems for mobility-on-demand services, public transportation planning systems and safety services in machines. Many of the AI models used in the transport and mobility sectors are classified as high-risk under the AIA because of their inherent potential to cause severe physical harm or death, as well as property damage.
As the AIA adopts a broad definition of AI, many applications used in the transport and mobility sectors are covered. However, as surprising as it sounds, the AIA does not apply directly to motor vehicles, including their equipment and components. This means that autonomous vehicles (also “AVs”) are not directly within the scope of application of the AIA, despite the automotive industry being the main beneficiary of AI technology in the transport sector. This exclusion is a result of the Commission heeding the calls from industry associations which advocated for a “sectoral and light-touch approach” to AI before the introduction of the AIA.
Nonetheless, this does not mean that the use of AI in AVs will be completely unregulated. The automotive sector is governed by Regulation (EU) 2018/858 (hereafter the “Type-Approval Framework Regulation”), which requires a comprehensive type-approval process demonstrating that vehicles comply with certain requirements before they are placed on the market. The current form of the AIA requires the Commission to consider specific provisions of the AIA when adopting delegated acts under the Type-Approval Framework Regulation.
In other words, exempting AVs from the AIA helps to avoid frictions between the AIA and sector-specific regulation. Future delegated acts of the Commission are likely to contain rules regarding data governance, risk management and human oversight.
It is evident that apart from the motor vehicle industry, AI applications are deployed in a wide range of other products within the transport industry. The AIA will be applicable to products such as image recognition techniques used in public transport, road management, and AI powered software driving elevators, cable-ways and watercraft. Many of these will be categorized as high-risk, meaning that the relevant requirements must be complied with. In addition, ethical considerations regarding safety, privacy, and well-being in the transport sector are also underlined by the EU Commission.
SMEs and small businesses
The EU’s aim is to ensure that AI-related harms are minimized without stifling innovative AI development and use. In order to foster innovation among emerging entities, the AIA contains specific rules concerning SMEs and small businesses.
The proposed Act takes SMEs into consideration in several different ways. Firstly, Article 55 of the AIA outlines resources intended to assist SMEs in complying with the legislation, offering advice, financial support, and representation. In addition, the AIA provides an exemption for free and open-source AI components, as long as they are not integrated into high-risk systems. SMEs are also exempt from impact assessment consultations, although encouraged to carry them out where feasible. The AIA also eases documentation requirements for SMEs and start-ups, making compliance less burdensome while maintaining the same standards.
The AIA also states that member states may establish regulatory sandboxes which provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time. Member states may offer free priority access into sandboxes to eligible SMEs. The AIA also aims to ensure proportionate compliance costs for SMEs, requiring regular assessment and transparent consultations to align fees with the enterprise's size and market share.
While the AIA signifies a crucial step toward safer and fairer AI in the EU, it imposes notable considerations and costs. Because of this, the EU strives to take SMEs into consideration with specific provisions and support mechanisms, emphasizing the importance of proactive engagement in the compliance journey.
Conclusions
In conclusion, the AIA’s effect on a specific industry depends largely on the types of AI systems commonly employed in that industry. Above, we have highlighted some key elements and implications to take into account when developing or deploying AI systems in the healthcare, transport, media and entertainment, and financial services sectors. We have also taken a brief look at how the AIA will affect SMEs and start-ups.
It is important to note that the AIA is a horizontal law applying to all sectors and industries, as long as AI elements are involved. As the use of AI increases in all business sectors and areas of society, it is imperative to recognize its consequences. Although the AIA is not designed to become effective until 2026, it is crucial that businesses and other entities prepare for its implementation well in advance. It may not be an exaggeration to say that you should prepare now, or else you may have to pay later.
Author: Sofia Wang
This article is a segment of an article series focused on the so-called Big 5 acts which have been a key part of the European Data Strategy in recent years. The next article in the series will explore the details of the Data Act. Keep an eye out for more!
This article does not constitute legal advice. If we caught your interest and you would like to know more, do not hesitate to contact us for further information.
Partner Daniel Stranius, +358 44 333 0535
Associate Sofia Wang, +358 45 230 8389