EU AI Act: Getting the Basics Straight
Aleksandr Tiulkanov
Last month, I ran a webinar debunking some key misconceptions that are very prevalent among people posting about the EU AI Act and AI governance on LinkedIn. Since the webinar streamed during the summer, many of you missed it, so here is the recap, with some further points added just for this newsletter.
1. "Limited risk" doesn't mean what many think it means
Many publications say that "limited risk" AI systems are those referred to in Chapter IV (Transparency) of the EU AI Act: chatbots, deepfake generators, and the like. This is simply not true!
The only place in the law where you will find "limited risk" mentioned is recital 53, which explains the rationale for the provisions of article 6(3). These provisions allow the (potentially high-risk) Annex III AI use cases to be exempted from the otherwise mandatory requirements if the risk assessment shows that the level of risk to health, safety and fundamental rights is limited (more on this below). This will be the case where the AI system in question only performs certain tasks of a narrow and limited nature, expressly listed in article 6(3). This is what is considered "limited risk" AI in the published version of the law. Any other use of the expression is misleading.
2. AI systems are not divided into isolated categories by level of risk
Many AI governance thought leaders posting on LinkedIn about the EU AI Act are missing a simple, basic point: a particular AI system may fall under both the Chapter III high-risk provisions (formerly Title III) and the Chapter IV transparency provisions (formerly Title IV) at the same time.
The error is glaring: if you look at Annex III (referred to in Chapter III), where the potentially high-risk AI use cases are listed, you will find biometric categorisation and emotion recognition systems. But you will also find them referred to in Chapter IV.
These are examples of AI use cases that are subject to both high-risk and transparency requirements under the two above-mentioned Chapters.
And, I beg you, do not add insult to injury: do not, ever again, call the Chapter IV systems "limited risk" (see part 1 above).
3. Not all Annex III-listed systems are subject to "high-risk" requirements
So, by now, I have at least twice said "potentially" when referring to high-risk AI. Some of you may be raising eyebrows: why am I making this strange qualification? Aren't all AI systems classified as high-risk under the EU AI Act just that, high-risk? Why the "potential" part? The devil is in the details. Really small details.
Annex III, which lists most high-risk AI use cases, has been there from the very first draft. Because of this, many people still live with the outdated idea that all AI systems listed in Annex III are subject to high-risk requirements. However, due to an amendment that made it to the final version, this idea has become a misleading half-truth.
More precisely, paragraph 3 of article 6 and the elements that follow describe an exemption under which a provider may claim that its Annex III-listed (and therefore potentially high-risk) AI system does not pose a significant risk of harm to health, safety and fundamental rights. This is only possible if the use cases for the AI system are limited to one or a combination of the following purposes, listed in the same paragraph (with examples provided in recital 53):
- performing a narrow procedural task;
- improving the result of a previously completed human activity;
- detecting decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing a previously completed human assessment without proper human review;
- performing a preparatory task to an assessment relevant for the purposes of the Annex III use cases.
4. You cannot just claim your AI system is not high-risk, though
This can only be possible if the provider, before placing the system on the market or putting it into service, documents the risk assessment showing that the exemption applies (article 6(4)) and registers the system in the EU database (article 49(2)). A minimal sketch of this decision logic follows below.
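For readers who prefer to see the logic spelled out, here is a minimal, purely illustrative sketch of the classification flow described above. The function and field names are my own simplification, not anything defined in the law, and the sketch obviously does not replace the actual legal assessment. (One detail from the final text worth keeping in mind: article 6(3) also provides that the exemption never applies where the AI system performs profiling of natural persons.)

```python
from dataclasses import dataclass

# The narrow purposes listed in article 6(3) EU AI Act, paraphrased.
NARROW_PURPOSES = {
    "narrow procedural task",
    "improve the result of a previously completed human activity",
    "detect decision-making patterns or deviations (with proper human review)",
    "preparatory task to an Annex III-relevant assessment",
}

@dataclass
class AnnexIIISystem:
    purposes: set[str]           # what the system is intended to do
    performs_profiling: bool     # profiling of natural persons?
    assessment_documented: bool  # assessment documented before market placement (art 6(4))
    registered_in_eu_db: bool    # registered in the EU database (art 49(2))

def exemption_may_apply(system: AnnexIIISystem) -> bool:
    """Very rough sketch of the article 6(3)/6(4) exemption conditions."""
    if system.performs_profiling:
        return False  # profiling systems remain high-risk in any case
    if not system.purposes or not system.purposes <= NARROW_PURPOSES:
        return False  # every intended purpose must be one of the narrow art 6(3) tasks
    # Even then, the provider must document the assessment and register the system.
    return system.assessment_documented and system.registered_in_eu_db

# Hypothetical example: a system that only prepares material for a human assessor.
example = AnnexIIISystem(
    purposes={"preparatory task to an Annex III-relevant assessment"},
    performs_profiling=False,
    assessment_documented=True,
    registered_in_eu_db=True,
)
print(exemption_may_apply(example))  # True
```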
The EU AI Office, in consultation with the EU AI Board, is yet to provide guidance on the modalities of such risk assessment and further examples of AI use cases falling under the exemption.
What is already clear from the law itself, though, is that the information in the relevant EU database will be publicly available (EU AI Act article 71, except for biometrics, law enforcement, and migration and border control use cases), and that the national competent authorities are authorised to request and verify the provider's documentation supporting the claim that the exemption applies to its AI system (EU AI Act article 6(4)).
The authorities may make such requests on their own initiative or when prompted by others, e.g. civil society and human rights activists, who find any particular provider's claim for exemption in the EU database questionable.
If the national competent authority then reveals that the provider supplied an incorrect, incomplete or misleading risk assessment, contrary to which the AI system in question does, in fact, pose a significant risk to health, safety or fundamental rights, the provider may be fined up to EUR 7 500 000 or, if the offender is an undertaking, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher (article 99(5) EU AI Act).
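To make the "whichever is higher" rule concrete, here is a minimal sketch of the article 99(5) ceiling; the turnover figure is purely hypothetical and the snippet is, of course, an illustration rather than legal advice.

```python
def max_fine_art_99_5(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine under article 99(5) EU AI Act:
    EUR 7 500 000 or 1 % of total worldwide annual turnover for the preceding
    financial year, whichever is higher (illustrative sketch only)."""
    return max(7_500_000.0, 0.01 * worldwide_annual_turnover_eur)

# Hypothetical undertaking with EUR 2 billion worldwide annual turnover:
# 1 % of turnover is EUR 20 million, which exceeds the EUR 7.5 million floor.
print(f"{max_fine_art_99_5(2_000_000_000):,.0f}")  # 20,000,000
```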
If you like this article, and want to improve your EU AI Act knowledge further, check out my new AI Governance course (making sure to use the NEWSLETTER coupon for a time-limited 25% off).
I have carefully curated all teaching materials and have already successfully piloted their elements as part of two AI Governance courses with teaching companies this year. You may see the reviews via the link above. The new course has revised and improved content, including practical steps to start your AI Governance journey from scratch.
Among other benefits, the course more than covers what is required under EU AI Act article 4 on AI literacy, which becomes a mandatory requirement on 2 February 2025 for all organisations that develop or deploy AI, regardless of the risk level.
The course starts in October, and as I am aiming for a small group for a better experience, the places are limited and registration may be closed early. So if that is something of interest to you, the best time to review the agenda and book your place is now.