Navigating the EU AI Act’s Classification for AI Systems: How to Determine Your Obligations
Validaitor
Safety and Trust for Artificial Intelligence | "Top 100 Deeptech Startups of Europe"
By Anıl Tahmisoğlu · 03/02/2025
The EU Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework for AI regulation, entered into force on August 1, 2024. The EU AI Act aims to safeguard fundamental rights and minimize harm by regulating AI usage within the European Union. Moreover, its reach extends beyond EU-based organizations, applying to any entity using AI in interactions with EU residents due to its extraterritorial scope.
The Act categorizes AI systems by risk level and assigns obligations accordingly. Therefore, it’s essential for everyone involved in an AI system’s lifecycle to understand the classification of their system to determine their obligations.
Key parties with obligations under the Act include providers, deployers, importers, distributors, and authorized representatives for non-EU providers. Providers—those responsible for developing, training, or marketing AI systems—bear the most extensive obligations under the regulation.
The compliance process begins with creating an inventory of AI assets, which helps organizations determine whether they are dealing with an AI system or an AI model. This distinction sets the stage for two separate assessment processes: one for AI systems and another for general-purpose AI (GPAI) models.
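As an illustration, a minimal inventory entry might look like the sketch below (Python). The field names, roles, and example assets are hypothetical and are not prescribed by the Act; the point is simply to record, per asset, whether it is an AI system or a GPAI model and which role your organization plays.

```python
from dataclasses import dataclass
from enum import Enum

class AssetType(Enum):
    AI_SYSTEM = "ai_system"    # assessed through the two evaluations described below
    GPAI_MODEL = "gpai_model"  # assessed through the separate GPAI model obligations

@dataclass
class AIAsset:
    """One entry in an organization's AI inventory (illustrative fields only)."""
    name: str
    owner: str            # internal team accountable for the asset
    asset_type: AssetType
    role: str             # e.g. "provider", "deployer", "importer", "distributor"
    intended_purpose: str

inventory = [
    AIAsset("CV screening tool", "HR", AssetType.AI_SYSTEM, "deployer",
            "Rank incoming job applications before human review"),
    AIAsset("In-house foundation model", "Platform", AssetType.GPAI_MODEL, "provider",
            "General-purpose model fine-tuned for internal assistants"),
]

# Only AI systems proceed to the risk and transparency evaluations covered in this guide.
systems_to_assess = [a for a in inventory if a.asset_type is AssetType.AI_SYSTEM]
```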
In this guide we will dive deep into AI systems.
1. Understanding the Two Key Evaluations for AI Systems
Under the EU AI Act, two essential evaluations are required to understand the obligations that apply to an AI system. These evaluations serve to assess both the risk categorization and the transparency requirements of the system.
A common misconception about the EU AI Act is the idea of a “four-level risk pyramid.” Many people use the term “limited risk” to describe the category of AI systems subject to transparency requirements. However, the Act never uses the term “limited risk”: it does not appear in the framework for categorizing risk levels, and the transparency obligations of Article 50 are defined separately from the risk categories.
Risk Category Evaluation
The first evaluation focuses on categorizing the AI system’s risk level. The obligations under the Act are primarily aimed at high-risk systems, which are subject to the most stringent requirements.
Transparency Evaluation
The second evaluation is a binary check that determines whether the AI system qualifies as a “transparency-requiring system.” These transparency requirements apply to AI systems independently of their risk level.
2. First Evaluation: Risk Categories
Under the EU AI Act, AI systems are divided into three risk categories: prohibited, high-risk, and low (or minimal) risk.
Each AI system is assigned to exactly one of these categories.
2.1 Prohibited Systems
The first step in assessing the risk level of an AI system is to check if it falls into the prohibited categories outlined in Article 5 of the EU AI Act. These prohibitions are designed to protect privacy, fundamental rights, and public safety, with some narrowly defined exceptions.
Here are the key prohibitions:
- AI that deploys subliminal or purposefully manipulative or deceptive techniques that materially distort behavior and cause significant harm;
- AI that exploits vulnerabilities related to age, disability, or social or economic situation;
- Social scoring that leads to detrimental or unfavorable treatment in unrelated contexts;
- Assessing the risk of a person committing a criminal offence based solely on profiling or personality traits;
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases;
- Emotion recognition in workplaces and educational institutions, except for medical or safety reasons;
- Biometric categorization that infers sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation;
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrowly defined exceptions.
These restrictions are intended to safeguard individual autonomy, prevent discrimination, and uphold ethical AI use. If your system falls into any of these categories without meeting an exception, it is prohibited under the Act.
2.2 High Risk Systems
If your AI system isn’t prohibited, the next step is to assess whether it qualifies as high-risk under Article 6 of the EU AI Act. High-risk systems are subject to stricter requirements to safeguard public safety, privacy, and fundamental rights.
The EU AI Act identifies specific sectors in Annex III where AI systems are classified as high-risk due to their potential impact on public safety and fundamental rights.
2.3 What are the High-Risk Sectors?
Biometric and Emotion Recognition
AI systems used for remote biometric identification, biometric categorization, and emotion recognition of natural persons are considered high-risk. These systems are subject to strict requirements to ensure accuracy and prevent misuse.
Critical Infrastructure
AI systems managing utilities and transportation networks, such as electricity grids and railway systems, are classified as high-risk. Their failure or manipulation could have significant consequences for public safety.
Education
AI applications in education, including those used for admissions, grading, and monitoring student behavior, are considered high-risk. These systems must operate transparently and fairly to maintain trust in educational institutions.
Employment
AI systems involved in hiring processes, performance evaluations, and task allocation are classified as high-risk. Ensuring these systems are free from bias and operate transparently is crucial for protecting workers’ rights.
Public Services
AI applications in public services, such as welfare distribution, credit scoring, and emergency response, are considered high-risk. These systems must be reliable and fair to serve the public effectively.
Law Enforcement
AI systems used for criminal profiling, evidence analysis, and predictive policing are classified as high-risk. Strict regulations are in place to prevent misuse and protect individuals’ rights.
Migration and Border Control
AI applications in migration and border control, including risk assessments and document verification tools, are considered high-risk. These systems must operate transparently and fairly to respect individuals’ rights.
Judicial Processes
AI systems supporting evidence evaluation, legal interpretation, and dispute resolution are classified as high-risk. Ensuring these systems are accurate and transparent is essential for maintaining trust in the judicial system.
Sectoral Harmonization Legislation
In addition to the Annex III areas, Article 6 classifies further AI systems as high-risk.
AI systems covered by the EU harmonization legislation listed in Annex I and requiring a conformity assessment fall into this category. Examples include medical devices, lifts, machinery, and toys. These systems are considered high-risk because of their critical role in product safety.
2.4 Flexibility and Exceptions in High-Risk Classification
The Act allows providers of AI systems in high-risk sectors to contest the classification. They can demonstrate that their system does not pose significant risks to health, safety, or fundamental rights, potentially exempting them from high-risk obligations.
Article 6(3) provides exceptions to this classification for AI systems listed in Annex III, allowing them to be considered non-high risk under specific conditions.
An AI system listed in Annex III may be exempt from high-risk classification if it meets one of the following conditions:
- it performs only a narrow procedural task;
- it improves the result of a previously completed human activity;
- it detects decision-making patterns or deviations from prior decision-making patterns without replacing or influencing a previously completed human assessment absent proper human review; or
- it performs a preparatory task for an assessment relevant to the Annex III use cases.
It is important to note that, even if the provider claims one of the aforementioned uses, an AI system that performs profiling of natural persons cannot be exempted. This carve-out underscores the importance of safeguarding individuals’ privacy and data protection rights.
2.5 Low (or Minimal) Risk
If your AI system does not fit into the prohibited or high-risk categories under the EU AI Act, it is classified as low-risk (minimal-risk). Such systems are not subject to mandatory compliance requirements under the Act, beyond any transparency obligations discussed in the next section.
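Putting the pieces of this first evaluation together, the decision order can be summarized with the simplified, non-authoritative sketch below. Each boolean flag (the parameter names are hypothetical) stands in for a legal assessment that in practice requires careful, documented analysis.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "low / minimal risk"

def classify_risk(uses_prohibited_practice: bool,
                  annex_i_product_needing_conformity_assessment: bool,
                  in_annex_iii_area: bool,
                  meets_article_6_3_exception: bool,
                  performs_profiling: bool) -> RiskCategory:
    """Simplified ordering of the first evaluation (illustrative, not legal advice)."""
    if uses_prohibited_practice:
        # Article 5: the system may not be placed on the market or used at all
        return RiskCategory.PROHIBITED
    if annex_i_product_needing_conformity_assessment:
        # High-risk via the Annex I harmonization legislation route
        return RiskCategory.HIGH_RISK
    if in_annex_iii_area:
        # Article 6(3): the exception never applies when natural persons are profiled
        if meets_article_6_3_exception and not performs_profiling:
            return RiskCategory.MINIMAL_RISK
        return RiskCategory.HIGH_RISK
    return RiskCategory.MINIMAL_RISK
```

For example, an Annex III recruitment system that only performs a narrow preparatory task and does no profiling would fall out of the high-risk category under this ordering, while the same system with profiling would remain high-risk.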
3. The Second Evaluation: Specific Transparency Obligations
The EU AI Act introduces a unique set of obligations for AI systems often called “limited risk AI systems.” These transparency requirements cut across all risk levels. Even if your system is classified as low risk, you’ll still need to meet these obligations if your AI interacts with users in ways covered by Article 50.
These systems are identified primarily by their potential to affect users directly, which prompts the need for additional transparency to avoid the risk of deception or manipulation.
3.1 What are the use cases for the transparency obligations?
AI That Interacts Directly with Users
If your system communicates with users, it must clearly state it’s AI-driven. This rule doesn’t apply if it’s already obvious or disclosure is waived for scenarios like criminal investigations.
Synthetic Content Creators
AI-generated content—whether audio, video, images, or text—must come with a label marking it as artificial. This label should be machine-readable, though exceptions exist (e.g., editorial tools or law enforcement use).
Emotion Recognition and Biometric Categorization
Systems analyzing emotions or categorizing individuals based on biometrics must notify people within their range. However, law enforcement operations may bypass this requirement under specific conditions.
AI-Generated Human-Like Content
Creations resembling human outputs, like deepfakes, need clear disclosures about their AI origins—unless they’re part of artistic, satirical, or law enforcement activities.
Publicly Shared AI-Generated Text
Any AI-generated text made available to the public must be identified as such, except when reviewed by humans or under specific legal exemptions.
As this shows, biometric categorization systems, for example, can be both high-risk and transparency-requiring. This demonstrates that transparency requirements are not a distinct risk category but an overlapping obligation that cuts across risk levels.
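A minimal sketch of the second evaluation, using the same simplified flag style (the parameter names are hypothetical), shows how the applicable Article 50 duties are collected independently of the risk category determined above.

```python
def transparency_obligations(interacts_with_users: bool,
                             generates_synthetic_content: bool,
                             recognizes_emotions_or_biometric_categories: bool,
                             produces_deepfakes: bool,
                             publishes_ai_generated_text: bool) -> list[str]:
    """Collect the Article 50 duties that apply, regardless of risk level (illustrative)."""
    duties = []
    if interacts_with_users:
        duties.append("Inform users that they are interacting with an AI system")
    if generates_synthetic_content:
        duties.append("Mark synthetic audio, image, video, or text in a machine-readable way")
    if recognizes_emotions_or_biometric_categories:
        duties.append("Notify the persons exposed that the system is in operation")
    if produces_deepfakes:
        duties.append("Disclose that the content has been artificially generated or manipulated")
    if publishes_ai_generated_text:
        duties.append("Label AI-generated text made available to the public")
    return duties

# Example: a high-risk biometric categorization system still picks up an Article 50 duty.
print(transparency_obligations(False, False, True, False, False))
```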
What Comes Next?
Understanding the classification of your AI systems is the first crucial step toward navigating the EU AI Act’s obligations. With this clarity, your organization can confidently identify compliance requirements, mitigate risks, and ensure your AI systems align with the highest standards of ethical and transparent use.
But compliance isn’t just about ticking boxes—it’s about embracing responsible AI to build trust, enhance user experiences, and unlock new opportunities. That’s where Validaitor comes in.
With Validaitor, you’re not just meeting regulatory demands; you’re future-proofing your AI strategy. Our platform equips you with the insights and tools to confidently navigate compliance while driving innovation. Step into the future of AI governance with us—because responsible AI isn’t just a requirement; it’s the way forward.