The Artificial Intelligence Act demystified

Background

Artificial Intelligence (AI) is an emerging general-purpose technology: a highly powerful family of computer programming techniques. The uptake of AI systems has strong potential to deliver societal benefits and economic growth and to enhance EU innovation and global competitiveness. In certain cases, however, the use of AI systems can create problems. The specific characteristics of certain AI systems may create new risks related to (1) safety and security and (2) fundamental rights, and may increase the probability or intensity of existing risks. AI systems also (3) make it hard for enforcement authorities to verify compliance with and enforce existing rules. This in turn leads to (4) legal uncertainty for companies, (5) potentially slower uptake of AI technologies by businesses and citizens due to a lack of trust, and (6) regulatory responses by national authorities to mitigate possible externalities, which risk fragmenting the internal market.

The main objective of the EU Artificial Intelligence Act (AIA) is to ensure that AI systems within the EU are safe and comply with existing law on fundamental rights, norms and values. The AIA defines AI systems broadly by including logic- or rule-based information processing (such as expert systems), as well as probabilistic algorithms (such as machine learning). Like the GDPR, it applies to all firms wishing to operate AI systems within the EU, irrespective of whether they are based in the EU or not. The AIA adopts a risk-based approach to regulating AI systems. In terms of their perceived risk, some AI systems are banned outright, while others are not regulated at all.

PRACTICAL IMPLICATIONS:

If you are developing or using software that is built with one or more of the following techniques, you might be in scope of the AIA (a rough scope check is sketched after the list):

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems
  • Statistical approaches, Bayesian estimation, search and optimisation methods
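As a rough illustration of this scope question, it can be framed as a simple check. The technique family labels and the function below are assumptions made purely for the sketch; they are not terms or tooling defined in the AIA.

```python
# Illustrative only: a crude scope check against the technique families listed
# in the draft AIA. Labels and function name are hypothetical.

AIA_TECHNIQUE_FAMILIES = {
    "machine_learning",       # supervised, unsupervised, reinforcement learning, deep learning
    "logic_knowledge_based",  # expert systems, inference and deductive engines, symbolic reasoning
    "statistical",            # Bayesian estimation, search and optimisation methods
}

def might_be_in_scope(techniques_used: set[str]) -> bool:
    """Return True if the software uses any technique family listed in the draft AIA."""
    return bool(techniques_used & AIA_TECHNIQUE_FAMILIES)

# Example: a credit-scoring tool built with gradient boosting (machine learning)
print(might_be_in_scope({"machine_learning"}))  # True
```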

Categories of AI systems

First, there are ‘prohibited AI practices’, which are banned outright. This includes a very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes).

Second, there are ‘high-risk AI systems’. In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used, such as:

Biometric identification and categorisation of natural persons

  • AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons

Management and operation of critical infrastructure

  • AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity

Education and vocational training

  • AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions
  • AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions

Employment, workers management and access to self-employment

  • AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests
  • AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating the performance and behaviour of persons in such relationships

Access to and enjoyment of essential private services and public services and benefits

  • AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke or reclaim such benefits and services
  • AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use
  • AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid

Law enforcement

  • AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences
  • AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person
  • AI systems intended to be used by law enforcement authorities to detect deep fakes
  • AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences
  • AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups
  • AI systems intended to be used by law enforcement authorities for profiling of natural persons in the course of detection, investigation or prosecution of criminal offences
  • AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data

Migration, asylum and border control management

  • AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person
  • AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State
  • AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features
  • AI systems intended to assist competent public authorities in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status

Administration of justice and democratic processes

  • AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts

High-risk AI systems also include safety components of products covered by sectoral Union legislation. Such systems will always be considered high-risk when they are subject to third-party conformity assessment under that sectoral legislation.

Third, there are ‘limited-risk AI systems’. AI systems under this category are subject to transparency obligations to allow individuals interacting with the system to make informed decisions. This is the case for a chatbot, where transparency means letting the user know they are speaking to an AI-empowered machine. Further examples may include spam filters, AI-enabled video and computer games, inventory management systems, and customer and market segmentation systems. Providers need to ensure that natural persons are informed that they are interacting with an AI system (unless this is obvious from the circumstances and the context of use).

Fourth, there are ‘low-risk AI systems’; they are low-risk because they neither use personal data nor make any predictions that influence human beings. According to the European Commission, most AI systems will fall into this category. A typical example is industrial applications in process control or predictive maintenance. Here there is little to no perceived risk, and as such no formal requirements are stipulated by the AIA.

It is important to note that the requirements stipulated in the AIA apply to all high-risk AI systems. However, the need to conduct conformity assessments only applies to ‘standalone’ AI systems. For algorithms embedded in products where sector regulations apply, such as medical devices, the requirements stipulated in the AIA will simply be incorporated into existing sectoral testing and certification procedures.

PRACTICAL IMPLICATION:

It is important to determine at an early stage what risk categories your AI System falls into. Depending on the classification there are different legal implications.
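To illustrate how the classification drives what follows, the broad mapping from risk category to obligations can be sketched as below. The category labels and obligation summaries are simplifications made for this sketch, not legal definitions.

```python
# Hypothetical mapping of AIA risk categories to the broad obligations each
# category triggers, as summarised in this article. Not legal advice.

OBLIGATIONS_BY_CATEGORY = {
    "prohibited":   "Banned outright; may not be placed on the EU market.",
    "high_risk":    "Permitted subject to mandatory requirements and an ex-ante conformity assessment.",
    "limited_risk": "Permitted subject to transparency obligations towards the persons interacting with the system.",
    "low_risk":     "No formal requirements under the AIA.",
}

def obligations_for(category: str) -> str:
    """Return the broad obligation summary for a given risk category."""
    if category not in OBLIGATIONS_BY_CATEGORY:
        raise ValueError(f"Unknown AIA risk category: {category!r}")
    return OBLIGATIONS_BY_CATEGORY[category]

# Example: a CV-screening tool falls under the employment use cases -> high risk
print(obligations_for("high_risk"))
```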

Who needs to act?

The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU. It can concern both providers (e.g. a developer of a CV-screening tool) and users of high-risk AI systems (e.g. a bank buying this CV-screening tool). It does not apply to private, non-professional uses.

In general, the AIA distinguishes between the following roles:

  • Providers: any person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark
  • Importers: any person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union
  • Distributors: any person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties
  • Users: any person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity

It is important to note that these roles are not fixed. There are situations in which an importer, distributor or any other third party is considered a provider, which means that this party then also has to follow the obligations for providers (see the sketch after the list below). Such a change in role takes place when:

  • they place on the market or put into service a high-risk AI system under their name or trademark
  • they modify the intended purpose of a high-risk AI system already placed on the market or put into service
  • they make a substantial modification to the high-risk AI system
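A minimal sketch of this role-change logic, assuming simple boolean flags for the three triggering conditions, could look like the following; the function and parameter names are illustrative only.

```python
# Illustrative only: when an importer, distributor or other third party is
# treated as a provider under the draft AIA. Flag names are assumptions made
# to keep the sketch self-contained.

def becomes_provider(
    markets_under_own_name: bool,          # places the high-risk system on the market under its own name or trademark
    modifies_intended_purpose: bool,       # modifies the intended purpose of a system already placed on the market
    makes_substantial_modification: bool,  # makes a substantial modification to the high-risk system
) -> bool:
    """Return True if the party takes on the obligations of a provider."""
    return (
        markets_under_own_name
        or modifies_intended_purpose
        or makes_substantial_modification
    )

# Example: a distributor rebrands a high-risk system under its own trademark
print(becomes_provider(True, False, False))  # True -> provider obligations apply
```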

PRACTICAL IMPLICATIONS:

First, you need to check whether your AI system affects people located in the EU.

Second, you need to check whether you are considered a provider, importer, distributor or merely a user of the AI system.

What is the risk for companies domiciled in third countries such as Switzerland?

The AI Act will likely have a significant impact on Swiss companies that provide or use AI systems, even if they do not have a legal presence in the EU. In fact, similar to the EU General Data Protection Regulation (‘GDPR’), the draft AI Act has an extraterritorial effect and thus also applies to organisations outside the EU, essentially to:

1. providers placing on the market or putting into service AI systems in the EU, irrespective of whether these providers are located within the EU or in a third country (e.g. Switzerland)

2. users of AI systems who are located in the EU, and

3. providers and users of AI systems who are located in a third country (e.g. Switzerland), where the output produced by the AI system is used in the EU.

Consequently, the AI Act in principle applies if an AI system or its output is used within the EU. As an example, a Swiss bank using a chatbot to answer credit enquiries from EU-based individuals, or using AI systems to check the creditworthiness of individuals in the EU, would likely trigger the application of the AI Act.
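The territorial-scope test outlined above can be summarised in a small sketch. The three conditions mirror the list for providers, users and third-country actors; the parameter names are assumptions made for illustration.

```python
# Hypothetical sketch of the AIA's territorial scope as described above.

def aia_applies(
    places_on_eu_market: bool,  # a provider places or puts into service an AI system in the EU
    user_located_in_eu: bool,   # the user of the AI system is located in the EU
    output_used_in_eu: bool,    # the output produced by the AI system is used in the EU
) -> bool:
    """Return True if the draft AI Act would, in principle, apply."""
    return places_on_eu_market or user_located_in_eu or output_used_in_eu

# Example: a Swiss bank's chatbot answering credit enquiries from EU-based
# individuals (the output is used in the EU)
print(aia_applies(False, False, True))  # True
```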

Conformity assessments: what needs to be done?

The AIA requires providers of high-risk AI systems to conduct conformity assessments before placing their product or service on the European market. A conformity assessment is a process carried out to demonstrate whether specific consumer protection and product integrity requirements are fulfilled and, if not, which remedial measures can be implemented to satisfy those requirements. In some cases, such conformity assessments need to be performed with the involvement of an independent third-party body, but for most AI systems, conformity assessments based on ‘internal control’ will be sufficient. However, while the AIA stipulates a wide range of procedural requirements for conformity assessments based on internal control, it does not provide any detailed guidance on how these requirements should be implemented in practice.

If an AI system falls under the AIA, then the actions needed are determined by the level of risk embedded in the respective system. The initial question for providers is therefore to determine that risk level in light of the types and categories set out in the AIA.

‘Standalone’ high-risk AI systems, in contrast to those embedded in products covered by sectoral legislation, have to undergo an AI-specific conformity assessment before they can be placed on the EU market.

How to conduct a conformity assessment

There are two ways to conduct such conformity assessments: conformity assessment based on internal controls, and in some cases, a conformity assessment of the quality management system and technical documentation conducted by a third party, referred to as a ‘notified body’. These are two fundamentally different conformity assessment procedures. The type of procedure required for a specific AI system depends on the use case, in other words the purpose for which it is employed.

In short, providers of high-risk AI systems used for biometric identification and categorisation of natural persons must have a third-party conformity assessment carried out. For most high-risk AI systems, however, a conformity assessment based on internal control will be sufficient.

The AIA itself does not specifically stipulate how to execute a conformity assessment based on internal control. Only the following is stated:

  • The provider verifies whether the established quality management system is in compliance with the AIA
  • The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements of the AIA
  • The provider also verifies that the design and development process of the AI system and its post-market monitoring is consistent with the technical documentation.
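The three verification steps above can be captured as a simple checklist. The data structure and function below are assumptions made for this sketch; the AIA does not prescribe any particular tooling.

```python
# Illustrative checklist for a conformity assessment based on internal control,
# following the three verification steps stated in the AIA.

INTERNAL_CONTROL_STEPS = [
    "Quality management system complies with the AIA",
    "Technical documentation demonstrates compliance with the essential requirements",
    "Design, development and post-market monitoring are consistent with the technical documentation",
]

def internal_control_passed(step_results: dict[str, bool]) -> bool:
    """Return True only if every verification step has been completed successfully."""
    return all(step_results.get(step, False) for step in INTERNAL_CONTROL_STEPS)

# Example usage with all three checks completed
results = {step: True for step in INTERNAL_CONTROL_STEPS}
print(internal_control_passed(results))  # True
```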

Providers of AI systems that interact directly with humans – chatbots, emotional recognition, biometric categorisation and content-generating (‘deepfake’) systems – are subject to further transparency obligations. In these cases, the AIA requires providers to make it clear to the users that they are interacting with an AI system and/or are being provided with artificially generated content. The purpose of this additional requirement is to allow users to make an informed choice as to whether or not to interact with an AI system and the content it may generate.

PRACTICAL IMPLICATION:

You need to check whether a conformity assessment based on internal control is sufficient for your AI system or whether you need to involve an independent third party.

What are the penalties for non-conformance?

The penalties set out in the AIA for non-conformance are very similar to those set out in the GDPR. The main thrust is for penalties to be effective, proportionate and dissuasive. The sanctions cover three main levels:

  • Non-compliance with regard to prohibited AI practices, and/or the data and data governance obligations set out for high-risk AI systems can incur a penalty of up to EUR 30 m, or 6% of total worldwide turnover in the preceding financial year (whichever is higher).
  • Non-compliance of an AI system with any other requirement under the AIA than stated above can incur a penalty of up to EUR 20 m, or 4% of total worldwide turnover in the preceding financial year (whichever is higher).
  • Supply of incomplete, incorrect or false information to notified bodies and national authorities in response to a request can incur a penalty of up to EUR 10 m, or 2% of total worldwide turnover in the preceding financial year (whichever is higher).
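The three tiers above follow the same ‘whichever is higher’ logic as the GDPR fines. A minimal sketch of that calculation, with a hypothetical turnover figure, could look like this.

```python
# Minimal sketch of the 'whichever is higher' penalty logic described above.
# Tier caps reflect the draft AIA; the example turnover figure is hypothetical.

PENALTY_TIERS = {
    "prohibited_or_data_governance": (30_000_000, 0.06),  # EUR 30 m or 6% of worldwide turnover
    "other_requirements":            (20_000_000, 0.04),  # EUR 20 m or 4%
    "incorrect_information":         (10_000_000, 0.02),  # EUR 10 m or 2%
}

def max_penalty(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: the fixed cap or the percentage of
    the preceding year's worldwide turnover, whichever is higher."""
    fixed_cap, pct = PENALTY_TIERS[tier]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# Example: a company with EUR 1 bn turnover breaching the data governance rules
print(max_penalty("prohibited_or_data_governance", 1_000_000_000))  # 60000000.0
```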

It should be noted that the enforcement of the AIA sits with the competent national authorities. Individuals adversely affected by an AI system may have direct rights of action, for example concerning privacy violations or discrimination.

What’s next?

It is not yet clear when the AIA will enter into force and become applicable. However, the political discussions are already quite advanced. On 20 April 2022, the Draft Report for the Artificial Intelligence Act was published. The lead committees have been the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE).

The political discussions around the AIA are likely to be finalised by Q3/Q4 2022.
