The EU AI Act - the status quo and the February Deadline

With the EU AI Act's first compliance deadline approaching in February 2025, organizations need to act now to meet the following two requirements:

  1. Prohibited AI systems need to be taken off the market (Article 5 of the EU AI Act), and
  2. Organizations need to ensure AI literacy (Article 4 of the EU AI Act).

In this article, we therefore give you a brief overview of the EU AI Act and explain what you need to know about prohibited AI systems and AI literacy.

The EU AI Act

The EU has introduced the EU AI Act to ensure AI systems are developed and used safely, reliably, and transparently. This regulation classifies AI systems by risk levels, impacting organizations that use AI within the EU. Here’s a summary of the Act and ways to prepare.

The Act’s definition of AI aligns with the OECD’s, viewing AI as systems that generate outputs (e.g., predictions, recommendations) influencing virtual or physical environments, with varying levels of autonomy. The EU AI Act aims to protect citizens from AI-related harm and to ensure non-discriminatory, unbiased AI applications. Organizations using AI must be able to explain outcomes, which may require restructured, transparent development processes.

Proposed in 2021, the EU AI Act is the first legal framework aiming to regulate AI use across the EU, prioritizing safety and transparency. The AI Act uses a risk-based approach:

  • Unacceptable Risk (e.g., social scoring, subliminal manipulation)
  • High Risk (e.g., AI in recruitment, credit scoring, and medical devices)
  • Limited Risk (e.g., chatbots, deepfakes)
  • Low Risk (e.g., spam filters)

Depending on the risk classification, AI systems may face restrictions, specific requirements, or mandatory user notifications.
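To make the taxonomy concrete, here is a minimal sketch of how an internal AI inventory might tag systems by risk tier. This is an illustrative assumption in Python, not terminology or tooling mandated by the Act; the system names are made up.

    from enum import Enum

    class RiskTier(Enum):
        """Risk tiers of the EU AI Act's risk-based approach."""
        UNACCEPTABLE = "unacceptable"  # prohibited under Article 5
        HIGH = "high"                  # e.g., recruitment, credit scoring
        LIMITED = "limited"            # e.g., chatbots, deepfakes
        LOW = "low"                    # e.g., spam filters

    # Hypothetical internal inventory mapping each system to its assessed tier.
    ai_inventory = {
        "social-scoring-engine": RiskTier.UNACCEPTABLE,
        "cv-screening-model": RiskTier.HIGH,
        "support-chatbot": RiskTier.LIMITED,
        "spam-filter": RiskTier.LOW,
    }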


Are you uncertain which EU AI Act risk classification might apply to your AI systems? Check out our free Compliance Checker here.

Status and Timeline

The EU AI Act entered into force in August 2024. Its provisions take effect in phases:

  • By February 2025: AI systems in the unacceptable risk category are prohibited, and organizations need to ensure proper AI literacy.
  • By August 2026: Those making use of high-risk AI systems need to fulfil all requirements.
  • By August 2027: Those making use of AI in regulated products (e.g., medical devices) need to comply with the AI Act.


Article 5: Prohibited AI Systems need to be taken off the market

The first requirement due in February 2025 implicitly requires all organizations to classify their AI systems. Once this is done, the requirement is straightforward: take any AI systems posing an unacceptable risk off the market.

Unacceptable risk covers:

  • AI systems that manipulate people’s decisions or exploit their vulnerabilities,
  • AI systems that evaluate or classify people based on their social behaviour or personal traits,
  • AI systems that predict a person’s risk of committing a crime,
  • AI systems that scrape facial images from the internet or CCTV footage,
  • AI systems that infer emotions in the workplace or educational institutions, and
  • AI systems that categorize people based on their biometric data.

These systems need to be taken off the market, with a few exceptions for the search for missing people or terrorism prevention. Read the full text of Article 5 here.
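Continuing the hypothetical inventory sketch from above, flagging the systems that must come off the market is then a simple filter over the assessed tiers; ai_inventory and RiskTier are the made-up names introduced earlier.

    # Systems in the unacceptable risk tier must be withdrawn
    # by the February 2025 deadline.
    prohibited = [name for name, tier in ai_inventory.items()
                  if tier is RiskTier.UNACCEPTABLE]
    print("Must be taken off the market:", prohibited)
    # -> Must be taken off the market: ['social-scoring-engine']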

Article 4: AI Literacy needs to be ensured

While the first requirement of Article 5 is fairly straightforward, the second one is trickier to meet, as it is framed more vaguely. Article 4 states:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

Hence, companies need to ensure the correct training and use of their AI systems among both their staff and the users of those systems. In practice, this includes educational content, potential training certificates for employees, and transparent information about the systems in use. Read the full text of Article 4 here.

Are you looking for the right AI training to meet the requirements of Article 4? Check out our free AI Training Marketplace here and find the program that fits your needs and budget!

Fines and Consequences

Non-compliance with Article 5 may lead to fines of up to €35 million or 7% of global annual revenue, whichever is higher. Non-compliance with the obligations of providers, importers, distributors, and deployers may lead to fines of up to €15 million or 3% of global annual revenue. Supplying incorrect, incomplete, or misleading information to authorities can result in an administrative fine of up to €7.5 million or 1% of global annual revenue.
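To illustrate how these caps scale with company size, here is a small worked sketch; fine_cap is a hypothetical helper and the revenue figure is made up. For large companies the percentage-based cap dominates, since the higher of the two amounts applies.

    def fine_cap(flat_cap_eur: float, pct: float, revenue_eur: float) -> float:
        # Upper bound of a fine tier: the flat cap or the percentage
        # of global annual revenue, whichever is higher.
        return max(flat_cap_eur, pct * revenue_eur)

    # Hypothetical company with EUR 1 billion in global annual revenue:
    revenue = 1_000_000_000
    print(f"Article 5 breach cap: EUR {fine_cap(35_000_000, 0.07, revenue):,.0f}")
    # -> Article 5 breach cap: EUR 70,000,000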

How to Prepare

Though compliance can be challenging, there is enough time to adapt if you start preparing now. This will help you avoid operational or financial setbacks once the authorities begin enforcing the AI Act. Additionally, practising and communicating responsible, ethical AI use helps you secure a competitive advantage. The key? Establishing the right governance processes and tools specifically designed to reflect the EU’s transparency focus and to foster collaboration within development teams.

If you want more details about the EU AI Act, visit our blog where we have published information and even a White Paper about the EU AI Act and its implications.
