The Artificial Intelligence Act demystified
Background
Artificial Intelligence (AI) is an emerging general-purpose technology: a highly powerful family of computer programming techniques. The uptake of AI systems has strong potential to bring societal benefits and economic growth and to enhance EU innovation and global competitiveness. In certain cases, however, the use of AI systems can create problems. The specific characteristics of certain AI systems may create new risks related to (1) safety and security and (2) fundamental rights, and may increase the probability or intensity of existing risks. AI systems also (3) make it hard for enforcement authorities to verify compliance with and enforce existing rules. This set of issues in turn leads to (4) legal uncertainty for companies, (5) potentially slower uptake of AI technologies by businesses and citizens due to a lack of trust, and (6) regulatory responses by national authorities to mitigate possible externalities, risking fragmentation of the internal market.
The main objective of the EU Artificial Intelligence Act (AIA) is to ensure that AI systems within the EU are safe and comply with existing law on fundamental rights, norms and values. The AIA defines AI systems broadly, including logic- or rule-based information processing (such as expert systems) as well as probabilistic approaches (such as machine learning). Like the GDPR, it applies to all firms wishing to operate AI systems within the EU, irrespective of whether they are based in the EU or not. The AIA adopts a risk-based approach to regulating AI systems: depending on their perceived risk, some AI systems are banned outright, while others are not regulated at all.
PRACTICAL IMPLICATIONS:
If you are developing or using software built with one or more of these techniques (e.g. logic- or rule-based systems or machine learning), you might be in scope of the AIA.
Categories of AI systems
First, there are ‘prohibited AI practices’, which are banned outright. This includes a very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes).
Second, there are ‘high-risk AI systems’. In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used, such as:
Biometric identification and categorisation of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, workers management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Administration of justice and democratic processes
High-risk AI systems also include safety components of products covered by sectoral Union legislation. These are always considered high-risk when they are subject to third-party conformity assessment under that sectoral legislation.
Third, there are ‘limited-risk AI systems’. AI systems in this category are subject to transparency obligations to allow individuals interacting with the system to make informed decisions. This is the case for a chatbot, where transparency means letting the user know they are interacting with an AI system rather than a human. Further examples may include spam filters, AI-enabled video and computer games, inventory-management systems, or customer- and market-segmentation systems. Providers need to ensure that natural persons are informed that they are interacting with an AI system (unless this is obvious from the circumstances and the context of use).
Fourth, there are ‘low-risk AI systems’; they are low-risk because they neither use personal data nor make any predictions that influence human beings. According to the European Commission, most AI systems will fall into this category. A typical example is industrial applications in process control or predictive maintenance. Here there is little to no perceived risk, and as such no formal requirements are stipulated by the AIA.
It is important to note that the requirements stipulated in the AIA apply to all high-risk AI systems. However, the need to conduct conformity assessments only applies to ‘standalone’ AI systems. For algorithms embedded in products where sector regulations apply, such as medical devices, the requirements stipulated in the AIA will simply be incorporated into existing sectoral testing and certification procedures.
PRACTICAL IMPLICATION:
It is important to determine at an early stage which risk category your AI system falls into, as the classification determines the applicable legal obligations.
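As a first orientation, the simplified sketch below shows how such a triage could be structured in code. It is purely illustrative: the function and the area and use-case lists are hypothetical simplifications paraphrased from the draft, they are not exhaustive, and they are no substitute for a proper legal assessment.

```python
from enum import Enum

# Illustrative first-pass triage of the AIA risk categories described above.
# The lists below are simplified paraphrases of the draft and NOT exhaustive.

class RiskCategory(Enum):
    PROHIBITED = "prohibited AI practice"
    HIGH = "high-risk AI system"
    LIMITED = "limited-risk AI system (transparency obligations)"
    LOW = "low-risk AI system (no specific AIA obligations)"

PROHIBITED_PRACTICES = {
    "social scoring by public authorities",
    "exploitation of vulnerabilities of children",
    "subliminal manipulation",
}

HIGH_RISK_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "administration of justice",
}

TRANSPARENCY_USE_CASES = {
    "chatbot", "emotion recognition", "biometric categorisation", "deepfake",
}

def classify(intended_purpose: str, area_of_use: str) -> RiskCategory:
    """Return a first-pass AIA risk category for an AI system."""
    if intended_purpose in PROHIBITED_PRACTICES:
        return RiskCategory.PROHIBITED
    if area_of_use in HIGH_RISK_AREAS:
        return RiskCategory.HIGH
    if intended_purpose in TRANSPARENCY_USE_CASES:
        return RiskCategory.LIMITED
    return RiskCategory.LOW

print(classify("chatbot", "customer service"))   # RiskCategory.LIMITED
print(classify("CV screening", "employment"))    # RiskCategory.HIGH
```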
Who needs to act?
The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU. It can concern both providers (e.g. a developer of a CV-screening tool) and users of high-risk AI systems (e.g. a bank buying such a CV-screening tool). It does not apply to private, non-professional uses.
In general, the AIA distinguishes between the following roles: provider, user, importer and distributor.
It is important to note that these roles are not fixed. There are situations in which an importer, distributor or any other third party may be considered a provider, meaning that this party then also has to follow the obligations for providers. Such a change in role takes place, for example, when the party places a high-risk AI system on the market under its own name or trademark, modifies the intended purpose of a high-risk AI system already on the market, or makes a substantial modification to such a system.
PRACTICAL IMPLICATIONS:
First, you need to check whether your AI system affects people located in the EU.
Second, you need to check whether you are considered a provider, importer, distributor or merely a user of the AI system.
What is the risk for companies domiciled in third countries such as Switzerland?
The AI Act will likely have a significant impact on Swiss companies that provide or use AI systems, even if they do not have a legal presence in the EU. In fact, similar to the EU General Data Protection Regulation (‘GDPR’), the draft AI Act has an extraterritorial effect and thus also applies to organisations outside the EU, essentially to:
1. providers placing on the market or putting into service AI systems in the EU, irrespective of whether these providers are located within the EU or in a third country (e.g. Switzerland);
2. users of AI systems who are located in the EU; and
3. providers and users of AI systems who are located in a third country (e.g. Switzerland), where the output produced by the AI system is used in the EU.
Consequently, the AI Act in principle applies if an AI system or its output is used within the EU. As an example, a Swiss bank using a chatbot to answer credit enquiries from individuals based in the EU, or using an AI system to check the creditworthiness of individuals in the EU, would likely trigger the application of the AI Act.
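That territorial-scope test can be summarised in a short check. The sketch below is illustrative only; the function and parameter names are hypothetical, and the three conditions simply mirror the list above.

```python
# Hedged sketch of the draft AIA's territorial-scope test (see the three
# conditions listed above). Function and parameter names are illustrative.

def aia_applies(provider_places_on_eu_market: bool,
                user_located_in_eu: bool,
                output_used_in_eu: bool) -> bool:
    """True if any of the three scope conditions is met."""
    return (provider_places_on_eu_market
            or user_located_in_eu
            or output_used_in_eu)

# Example: a Swiss bank whose chatbot output is used by customers in the EU.
print(aia_applies(provider_places_on_eu_market=False,
                  user_located_in_eu=False,
                  output_used_in_eu=True))   # -> True
```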
Conformity assessments: what needs to be done?
The AIA requires providers of high-risk AI systems to conduct conformity assessments before placing their product or service on the European market. A conformity assessment is a process carried out to demonstrate whether specific consumer protection and product integrity requirements are fulfilled and, if not, which remedial measures, if any, can be implemented to satisfy them. In some cases, such conformity assessments must be performed with the involvement of an independent third-party body. For most AI systems, however, conformity assessments based on ‘internal control’ will be sufficient. Yet while the AIA stipulates a wide range of procedural requirements for conformity assessments based on internal control, it does not provide any detailed guidance on how these requirements should be implemented in practice.
If an AI system falls under the AIA, then the actions needed are determined by the level of risk embedded in the respective system. The initial question for providers is therefore to determine that risk level in light of the types and categories set out in the AIA.
While AI systems embedded in products covered by sectoral legislation are assessed within the existing sectoral procedures, ‘standalone’ high-risk AI systems have to undergo an AI-specific conformity assessment before they can be placed on the EU market.
How to conduct a conformity assessment
There are two ways to conduct such conformity assessments: a conformity assessment based on internal control, and, in some cases, a conformity assessment of the quality management system and technical documentation conducted by a third party, referred to as a ‘notified body’. These are two fundamentally different conformity assessment procedures. The type of procedure required for a specific AI system depends on the use case, in other words, the purpose for which it is employed.
In short, providers of high-risk AI systems used for biometric identification and categorisation of natural persons must conduct a third-party conformity assessment. For most high-risk AI systems, however, a conformity assessment based on internal control will be sufficient.
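The following helper captures that simplified rule of thumb. It is only a sketch: the function name is hypothetical, and the actual choice of procedure under the draft AIA can depend on further factors, such as the availability of harmonised standards.

```python
# Illustrative decision helper reflecting the rule of thumb stated above;
# not a complete restatement of the draft AIA's conformity assessment rules.

def assessment_route(is_high_risk: bool,
                     uses_biometric_identification: bool) -> str:
    if not is_high_risk:
        return "no AIA conformity assessment required"
    if uses_biometric_identification:
        return "third-party assessment by a notified body"
    return "conformity assessment based on internal control"

print(assessment_route(is_high_risk=True, uses_biometric_identification=False))
print(assessment_route(is_high_risk=True, uses_biometric_identification=True))
```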
The AIA itself does not specifically stipulate how to execute a conformity assessment based on internal control; it only sets out high-level procedural requirements.
Providers of AI systems that interact directly with humans – chatbots, emotion recognition, biometric categorisation and content-generating (‘deepfake’) systems – are subject to further transparency obligations. In these cases, the AIA requires providers to make it clear to users that they are interacting with an AI system and/or are being provided with artificially generated content. The purpose of this additional requirement is to allow users to make an informed choice as to whether or not to interact with an AI system and the content it may generate.
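To make this obligation tangible, the minimal sketch below shows one way a provider might surface such a disclosure before a chatbot's first answer. The class and message are purely hypothetical examples, not a prescribed implementation.

```python
# Minimal illustrative sketch: a chatbot wrapper that informs the user once,
# at the start of the conversation, that they are talking to an AI system.

AI_DISCLOSURE = ("Please note: you are chatting with an automated AI "
                 "assistant, not a human agent.")

class DisclosingChatbot:
    def __init__(self, answer_fn):
        self.answer_fn = answer_fn   # underlying model or rule engine
        self.disclosed = False       # has the user been informed yet?

    def reply(self, user_message: str) -> str:
        answer = self.answer_fn(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n{answer}"
        return answer

# Example usage with a stubbed answer function:
bot = DisclosingChatbot(lambda msg: f"Echo: {msg}")
print(bot.reply("What are your opening hours?"))
```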
PRACTICAL IMPLICATION:
You need to check whether a conformity assessment based on internal control is sufficient for your AI system or whether you need to involve an independent third party.
What are the penalties for non-conformance?
The penalties set out in the AIA for non-conformance are very similar to those set out in the GDPR. The main thrust is for penalties to be effective, proportionate and dissuasive. The sanctions cover three main levels:
Fines of up to EUR 30 million or 6% of total worldwide annual turnover for the use of prohibited AI practices or non-compliance with the data governance requirements
Fines of up to EUR 20 million or 4% of total worldwide annual turnover for non-compliance with any other requirement or obligation of the AIA
Fines of up to EUR 10 million or 2% of total worldwide annual turnover for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities
It should be noted that the enforcement of the AIA sits with the competent national authorities. Individuals adversely affected by an AI system may have direct rights of action, for example concerning privacy violations or discrimination.
What’s next?
It is not yet clear when the AIA will enter into force and become applicable. However, the political discussions are already quite advanced. On 20 April 2022, the Draft Report for the Artificial Intelligence Act was published. The lead committees have been the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE).
The political discussions around the AIA are likely to be finalised by Q3/Q4 2022.