What is the AI risk management system required by the EU AI Act?
Giulio Coraggio
Solving Legal Challenges of the Future | Head of Intellectual Property & Technology | Partner @ DLA Piper | IT, AI, Privacy, Cyber & Gaming Lawyer
A backbone of the requirements provided by the EU AI Act for high-risk AI systems is the establishment, implementation, and documentation of an AI risk management system.
The ICO's current case against Snap's generative AI chatbot
In the last few days, the UK Information Commissioner's Office (ICO), the UK data protection authority, announced its decision to issue a preliminary enforcement notice to Snap over a potential failure to properly assess the privacy risks posed by its generative AI chatbot named 'My AI'.
The ICO's investigation provisionally found that the risk assessment Snap conducted before launching 'My AI' did not adequately assess the data protection risks posed by the generative AI technology, particularly to children. The assessment of data protection risk is critical in a context that involves innovative technology and the processing of personal data of 13 to 17-year-old children.
This type of challenge is not new: a similar one was raised in February 2023 by the Italian data protection authority against the chatbot named "Replika". You can read more on the topic in this article, "Artificial intelligence powered chatbot banned by the Italian privacy authority".
What is the difference between the privacy risk assessment and the AI risk management system under the AI Act?
The ICO's case made me think about a question raised by one of my clients during a presentation I ran last week on the regulatory obligations arising from the use of artificial intelligence, particularly the impact of the AI Act on businesses intending to exploit AI. The client asked about the difference between a data protection impact assessment (DPIA) and an AI risk management system under the upcoming EU AI Act.
The DPIA includes a privacy risk assessment, but it has a broader scope. Article 35 of the GDPR refers to it as "an assessment of the impact of the envisaged processing operations on the protection of personal data." It is a purely privacy-related assessment that focuses only on the potential impact on personal data. You can read more in the following article, "When and how shall a data protection impact assessment be run?".
By contrast, the AI risk management system has an even broader scope. Article 9 of the current version of the AI Act provides the following:
A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. The risk management system can be integrated into, or a part of, already existing risk management procedures relating to the relevant Union sectoral law insofar as it fulfils the requirements of this article.
Based on the above, the AI risk assessment (like the DPIA) shall not be performed only at the launch of a product: the company shall have a procedure regulating its performance periodically and, in any case, whenever a major update of the system is released. The system must be documented since, like the GDPR, the EU AI Act is drafted with the accountability principle in mind; therefore, a company needs to be able to prove its compliance. In particular, since artificial intelligence systems evolve rapidly, it is necessary to regularly "review and update the risk management process, to ensure its continuing effectiveness, and documentation of any significant decisions and actions taken."
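To make the accountability point concrete, here is a minimal sketch, in Python, of what a documented, lifecycle-long review procedure could look like. Every name and the update trigger below are hypothetical illustrations of mine; the AI Act prescribes the obligation, not any tooling or data model.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record of one documented risk assessment. The field names are
# illustrative only; the AI Act does not mandate any specific data model.
@dataclass
class AssessmentRecord:
    system_version: str
    performed_on: date
    findings: list[str]
    decisions: list[str]  # significant decisions and actions taken

@dataclass
class RiskManagementLog:
    records: list[AssessmentRecord] = field(default_factory=list)

    def document(self, record: AssessmentRecord) -> None:
        # Accountability: keep every assessment and its decisions on file,
        # so compliance can be proven later.
        self.records.append(record)

    def reassessment_due(self, current_version: str) -> bool:
        # Naive trigger: a version change stands in for a "major update"
        # that should prompt a fresh assessment of the system.
        return not self.records or self.records[-1].system_version != current_version
```

In practice the trigger would be richer than a version string (model retraining, new data sources, a changed intended purpose), but the point stands: what Article 9 requires is a maintained procedure, not a one-off document.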
Besides, the same article provides that the AI risk management system can be integrated into a broader procedure. Indeed, there might be a (partial) overlap between a DPIA and an AI risk management system, and businesses should avoid needless duplications that also lead to inconsistent results.
As to the contents of the AI risk management system, the same article of the EU AI Act requires that it include the following steps:
a) identification, estimation and evaluation of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health or safety of natural persons, their fundamental rights including equal access and opportunities, democracy and rule of law or the environment when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;
b) [deleted]
c) evaluation of emerging significant risks as described in point (a) and identified based on the analysis of data gathered from the post-market monitoring system referred to in Article 61 [i.e., the system collecting data provided by deployers or other sources on the performance of AI systems to monitor their continuous compliance];
d) adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to points (a) and (b) of this paragraph in accordance with the provisions of the following paragraphs.
Based on the above, establishing and implementing an AI risk management system is a much more complicated task than a DPIA. However, both assessments are aimed at ensuring that the "relevant residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse" and that mitigation actions are adopted to address potential risks.
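Translated into operational terms, the cycle in points (a) to (d) could be sketched as follows. The scoring scheme, the acceptance threshold and all identifiers below are assumptions of mine made purely for illustration; the AI Act prescribes the steps, not any scoring methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int      # assumed 1 (negligible) .. 5 (critical) scale
    likelihood: int    # assumed 1 (rare) .. 5 (almost certain) scale
    mitigations: list[str] = field(default_factory=list)

    def residual_score(self) -> int:
        # Deliberately simplistic: each adopted measure shaves one point off
        # the combined score, floored at 1. Real methodologies differ.
        return max(1, self.severity * self.likelihood - len(self.mitigations))

ACCEPTABLE_RESIDUAL = 6  # hypothetical acceptance threshold

def risk_cycle(identified: list[Risk], post_market_signals: list[str]) -> list[Risk]:
    # Point (a): known and reasonably foreseeable risks arrive in `identified`.
    # Point (c): emerging risks surfaced by post-market monitoring are added
    # with placeholder scores pending proper evaluation.
    risks = identified + [Risk(s, severity=3, likelihood=3) for s in post_market_signals]
    # Point (d): return the risks whose residual score is still above the
    # threshold, i.e. those needing further targeted measures.
    return [r for r in risks if r.residual_score() > ACCEPTABLE_RESIDUAL]
```

Running this cycle repeatedly over a documented register, rather than once at launch, is what turns a risk assessment into a risk management system.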
How do you run an AI risk assessment?
Article 9 of the EU AI Act refers to the testing of the AI system as part of an AI risk assessment "for the purposes of identifying the most appropriate and targeted risk management measures and weighing any such measures against the potential benefits and intended goals of the system." There is no single prescribed methodology to be followed. However, an algorithmic impact assessment is a pivotal component of an AI risk management system.
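As an illustration of what such testing could look like, the toy harness below runs an AI system against predefined scenarios and records whether it behaves as expected. The model stub, the scenarios and the pass/fail criterion are all hypothetical; a real algorithmic impact assessment relies on domain-specific metrics (accuracy, robustness, bias measures) rather than this simple check.

```python
from typing import Callable

def assess(predict: Callable[[dict], str], scenarios: list[dict]) -> dict[str, bool]:
    # Run each predefined scenario through the system and record whether the
    # observed behaviour matches the expected one.
    return {s["name"]: predict(s["input"]) == s["expected"] for s in scenarios}

def toy_model(payload: dict) -> str:
    # Trivial stand-in for a real AI system, only here to make the sketch runnable.
    if payload.get("user_age", 99) < 18 and payload.get("topic") == "adult content":
        return "refuse"
    return "serve"

# Hypothetical scenarios, e.g. checking that minors are consistently protected.
scenarios = [
    {"name": "age_gate", "input": {"user_age": 15, "topic": "adult content"}, "expected": "refuse"},
    {"name": "adult_ok", "input": {"user_age": 30, "topic": "adult content"}, "expected": "serve"},
]

print(assess(toy_model, scenarios))  # {'age_gate': True, 'adult_ok': True}
```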
The relevance of this requirement is why my law firm, DLA Piper, is enriching its legal tech compliance assessment tool, PRISCA AI Compliance, with an algorithmic impact assessment module so that clients will have an even better turn-key, all-in-one solution to assess the maturity of their AI systems. You can read more on the topic HERE and contact us.
Also, on the same topic, you may find the following article interesting: "What Cybersecurity Obligations under the AI Act".
Legal Tech Tools and Offerings
Prisca AI Compliance
Prisca AI Compliance is a turn-key solution to assess the maturity of artificial intelligence systems against the main regulations and technical standards, providing a compliance score and identifying corrective actions to be undertaken. Read more
Transfer - DLA Piper legal tech solution to support Transfer Impact Assessments
This presentation shows DLA Piper's legal tech tool named "Transfer", which supports our clients in performing a transfer impact assessment after the Schrems II case. Read more
DLA Piper Turnkey solution on NFT and Metaverse projects
You can have a look at DLA Piper's capabilities and areas of expertise for NFT and Metaverse projects. Read more