Users of High-Risk AI Systems: What will be your obligations under the EU AI Act?

The AI Act is a landmark EU proposal to regulate Artificial Intelligence based on its potential to cause harm. The European Parliament is set to finalize its position on the file by May to quickly enter into negotiations with the EU Council and Commission in the so-called trilogues.

The EU AI Act's regulatory burden falls mainly on the "providers" of AI systems (above all high-risk systems); however, there is another group that attracts a lot of attention: users of high-risk AI systems.

In this blog post, I will provide a short analysis and summary of how entities that deploy high-risk systems (users) will be regulated (or not) by the EU AI Act.

Firstly, who is a user in the EU AI Act?

Article 3 of the proposed AI Act, dated 21 April 2021, defines who qualifies as a "user" of an AI system as follows:

"user means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity".

It is important to note at this stage that, because AI development is complex and demanding, users of high-risk systems will far outnumber those who produce and provide high-risk AI systems.

Therefore, your organization, your future employer, your bank and even your supermarket can qualify as users under the upcoming EU AI Act if they choose to deploy high-risk AI systems.

Little attention has been paid to their responsibilities so far, although they will play a big part in monitoring compliance and preventing harm.

What is a high-risk AI system in the EU AI Act?

Article 6 of the proposed EU AI Act sets out the methodology for classifying high-risk AI systems. High-risk AI systems are subject to a detailed certification regime, but are not deemed so fundamentally objectionable that they should be banned.

These include:

  1. In Annex II-A, AI systems intended to be used as a safety component of a product, or which are themselves a product, already regulated under the New Legislative Framework (NLF) (e.g. machinery, toys, medical devices) and, in Annex II-B, other categories of harmonised EU law (e.g. boats, rail, motor vehicles, aircraft, etc.).
  2. In Annex III, an exhaustive list of eight ‘new’ high-risk areas, comprising:
     - Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk
     - Biometric identification systems
     - Educational and vocational training, which may determine access to education and the professional course of someone’s life (e.g. automated scoring of exams)
     - Employment, workers’ management and access to self-employment (e.g. automated hiring and CV-triage software)
     - Essential private and public services (e.g. automated welfare benefit systems; private-sector credit scoring systems)
     - Law enforcement systems that may interfere with people’s fundamental rights (e.g. automated risk scoring for bail; ‘deepfake’ detection software for law enforcement; ‘pre-crime’ detection)
     - Migration, asylum and border control management (e.g. verification of the authenticity of travel documents; visa processing)
     - Administration of justice and democratic processes (e.g. ‘robo-justice’; automated sentencing assistance).

The Commission can add new sub-areas to Annex III by delegated act if they pose an equivalent or greater risk than systems already covered, but cannot add entirely new top-level categories.

Note: The list is not definitive and is undergoing changes as the legislative procedure continues.

It is important to note that the classification scheme in the EU AI Act creates considerable confusion in practice.

According to a study conducted by appliedAI Initiative GmbH, among 100 selected AI systems only 18% clearly qualify as high-risk and nearly 40% remain undefined or subject to interpretation. However, this is a topic for a separate blog post.

"AI Act: Risk Classification of AI Systems from a Practical Perspective", appliedAI

Most of the high-risk AI systems in the proposed EU AI Act fall under HR, Accounting and Finance, Customer Service and Support, IT Security and Legal categories.

"AI Act: Risk Classification of AI Systems from a Practical Perspective", appliedAI

Users: What do they have to comply with?

The initial version of the EU AI Act set out the obligations of users deploying a high-risk system mainly in Article 29. According to Article 29 of the proposed EU AI Act, users would have to:

  1. Use the high-risk AI system in accordance with the instructions of use issued by the provider,
  2. Comply with any applicable sectoral legislation when deploying a high-risk AI system (e.g. a bank may have to obtain additional approvals or apply extra controls over an AI system supporting loan decisions),
  3. Ensure that input data is relevant in view of the intended purpose of the high-risk AI system, to the extent the user exercises control over the input data,
  4. Monitor the operation of the high-risk AI system on the basis of the instructions of use and suspend or report its use when they identify any serious incident or any malfunctioning within the meaning of Article 62,
  5. Keep the logs generated by the AI system in an automatic and documented manner (a minimal logging sketch follows this list),
  6. Conduct a Data Protection Impact Assessment for the use of the relevant system.
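
To make items 3-5 above more concrete, here is a minimal sketch in Python of how a deploying organization might log every call to a high-risk AI system in an automatic and documented manner. This is only an illustration under my own assumptions: the AI Act does not prescribe any log format, and the predict() interface, file name and fields shown here are hypothetical.

import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: the AI Act does not prescribe a log format, and
# "model" stands in for whatever high-risk AI system a user actually deploys.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)


def logged_prediction(model, input_record: dict):
    """Run one prediction and keep an automatic, documented log entry."""
    output = model.predict(input_record)  # hypothetical predict() interface
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": getattr(model, "name", "unknown-high-risk-system"),
        # Truncate inputs so the audit trail stays readable; a real deployment
        # would also need to respect data protection rules here.
        "input_summary": {k: str(v)[:100] for k, v in input_record.items()},
        "output": str(output),
    }
    logging.info(json.dumps(entry))
    return output

A wrapper like this also gives the organization a natural place to hook in the monitoring and incident-reporting duties from item 4, since every interaction with the system passes through one audited function.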

Similarly, under Article 52, users are required to comply with additional transparency measures for certain AI systems:

Article 52/2: Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
Article 52/3: Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.

However, experience with AI in practice has shown that preventing harm is only possible when both the providers/developers and the users of AI systems work to keep the AI safe. Despite this, the current version of the EU AI Act imposes only limited obligations on the users of high-risk AI systems. There are several reasons for that:

  1. Innovation and investment incentives always come into play when legal obligations are discussed, especially given the increasing regulatory load arriving with the EU Strategy for Data,
  2. Brussels is a melting pot of the world's lobbying scene, and the EU AI Act has felt the effects of this. A report published by Corporate Europe Observatory on 23 February 2023 shows how several tech companies have engaged with European legislators to promote self-assessments and a lighter regulatory load around AI systems.

On the other side of the spectrum, European Digital Rights (EDRi) has recently published a call urging legislators to impose more obligations on the user side when a high-risk AI system is deployed.

According to EDRi:

The AIA imposes minimal obligations on users of high-risk AI systems. Article 29 outlines the duties of users of high-risk AI: to use the system in conjunction with the providers’ ‘instructions of use’, ensuring relevant data, and monitoring of the system. However, the user is not obligated to undertake any further measures to analyse the potential impact on fundamental rights, equality, accessibility, public interest or the environment, to consult with affected groups, nor take active steps to mitigate potential harms.

EDRi consequently calls for the following to be added in later versions of the EU AI Act:

1. Obligation on users of high-risk AI to define affected persons
2. Include an obligation on users of high-risk AI to conduct and publish a fundamental rights impact assessment, detailing information specific to the context of use of that system, including: the intended purpose; the geographic and temporal scope; an assessment of the legality and fundamental rights impacts of the system; compatibility with accessibility legislation; the likely direct and indirect impact on fundamental rights; any specific risk of harm likely to affect marginalised persons or those at risk of discrimination; the foreseeable impact of the use of the system on the environment; any other negative impact on the public interest; and clear steps as to how the identified harms will be mitigated, and how effective this mitigation is likely to be (a structured sketch of such an assessment follows this list).
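
As an illustration of what EDRi's second demand could look like in practice, below is a hedged sketch of a data structure an organization might use to capture the elements of a fundamental rights impact assessment. All field names and the example values are my own assumptions; neither the AI Act nor EDRi prescribes this structure or the CV-triage scenario.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the fields mirror the elements EDRi lists, but no
# legal text mandates this structure or these names.
@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    intended_purpose: str
    geographic_scope: str
    temporal_scope: str
    affected_persons: List[str] = field(default_factory=list)
    fundamental_rights_impacts: List[str] = field(default_factory=list)
    accessibility_compliance: str = ""
    risks_to_marginalised_groups: List[str] = field(default_factory=list)
    environmental_impact: str = ""
    other_public_interest_impacts: List[str] = field(default_factory=list)
    mitigation_steps: List[str] = field(default_factory=list)
    expected_mitigation_effectiveness: str = ""

# Hypothetical example: an automated CV-triage tool, a high-risk use case under
# the "employment" area of Annex III.
fria = FundamentalRightsImpactAssessment(
    system_name="automated-cv-triage",
    intended_purpose="Pre-screening of job applications",
    geographic_scope="EU-wide",
    temporal_scope="2024-2026",
    affected_persons=["job applicants"],
    fundamental_rights_impacts=["non-discrimination", "data protection"],
    mitigation_steps=["bias testing before deployment", "human review of all rejections"],
)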

Where do we stand today for users' obligations?

According to a news report by EURACTIV on 30 March 2023, significant developments have occurred for users of high-risk AI systems.

The latest European Parliament version of the EU AI Act will require users of high-risk systems to conduct a fundamental rights impact assessment (FRIA) to consider the system's potential impact on the fundamental rights of affected persons.

Similarly, some changes are expected in risk management and reporting obligations, transparency on the original purpose of the training datasets, stricter record-keeping, and transparency on the model's energy consumption. However, for these obligations it remains to be seen whether users will carry more responsibility than initially proposed.

It is not yet known whether the FRIA will have a standardized template or will have to be reported to any authority. It is equally unclear what happens when a high-risk AI system poses a serious danger to the fundamental rights of the individuals who will interact with it and therefore fails the FRIA.

We are at a turning point for algorithmic transparency and safety in Europe. It is historically important to set the scene right with the EU AI Act, which will govern the safety and trustworthiness of AI systems in the EU for decades to come.

European legislators have a landmark duty to take the immense development speed of AI systems into account and to anticipate the harms these systems may cause in the future. The FRIA is a good step forward and should be kept in the adopted version of the EU AI Act.

For more information on FRIA and EU AI Act: Recommendations for the EU AI Act - Mandating human rights impact assessments (HRIAs).

Joerg Wicik, MBA

Co-Founder & CFO @KI-4-Mittelstand. Kompetenzzentrum → Automatisierung von Finanz-, P2P- & ERP-Prozessen ?? | Connecting CFOs & CPOs with Innovators ?? | Keynote Speaker ?? |


Great presentation. In my opinion, we should consider the EU AI Act + EU Data Act + DSGVO + GDPR + … as an integrated approach from a governance, risk and efficiency enterprise perspective, ensuring the safe and ethical implementation of AI for all. All of these would be consolidated in the report of the EU AI Act conformity assessment, which is pretty challenging….
