The European AI Act and Responsible AI
Max Stepanov
Product Design Lead. UXD, UXR, and HCI specialist. Experience in Product Design and Development, Design Management, and Digital Communication
The European Artificial Intelligence Act officially came into force on August 1, 2024.
This is a significant milestone in regulating AI technologies across the European Union, and other regions and countries will likely follow the EU approach sooner or later. The legislation aims to ensure the safe, ethical, and transparent development and deployment of AI systems.
The European Commission proposed the AI Act in April 2021 as part of the EU’s broader digital strategy. The primary objective of the AI Act is to create a regulatory framework that allows innovation while ensuring fundamental rights and safety — core European values.
Objectives
First, the EU aims to protect individuals’ safety, health, and fundamental rights from the potential risks associated with AI products; safeguarding these rights is the Act’s foundation.
Secondly, to foster public trust and acceptance, the EU aims to establish a framework for trustworthy AI.
Finally, innovation: the EU encourages the development and deployment of AI technologies within a well-defined regulatory environment.
Approach
One of the distinguishing features of the AI Act is its risk-based approach to regulation. The Act categorizes AI systems into four risk levels.
(1) Unacceptable Risk
AI systems that clearly threaten safety or fundamental rights are prohibited. Examples include government social scoring and real-time biometric identification in public spaces for law enforcement purposes.
(2) High Risk
AI systems with significant implications for individuals or society, such as those used in critical infrastructures, education, employment, and law enforcement, must comply with stringent requirements. These include robust risk management, high-quality datasets, and clear documentation.
(3) Limited Risk
AI systems with limited risk must adhere to transparency obligations. Users should be aware that they are interacting with an AI system.
(4) Minimal Risk
AI systems with minimal risk, such as spam filters or AI in video games, are subject to minimal regulation.
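The four-tier scheme above can be sketched as a simple lookup. This is an illustrative toy only: the tier names come from the Act, but the specific use-case assignments below are simplified assumptions for illustration, not legal guidance.

```python
# Toy mapping of example AI use cases to the AI Act's four risk tiers.
# The tier names come from the Act; the example assignments are
# simplified assumptions, not legal classifications.
RISK_TIERS = {
    "unacceptable": {"government social scoring", "real-time public biometric identification"},
    "high": {"credit scoring", "employment screening", "critical infrastructure control"},
    "limited": {"customer service chatbot", "deepfake image generator"},
    "minimal": {"spam filter", "video game ai"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unclassified'."""
    normalized = use_case.strip().lower()
    for tier, examples in RISK_TIERS.items():
        if normalized in examples:
            return tier
    return "unclassified"
```

In practice, classification under the Act depends on the system's intended purpose and context of use, so any real assessment needs legal review rather than a lookup table.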
Compliance and Enforcement
To ensure compliance, the AI Act outlines several key requirements for high-risk AI systems, including risk management, data governance and high-quality datasets, technical documentation, transparency towards users, human oversight, and accuracy, robustness, and cybersecurity.
Enforcement of the AI Act will be carried out by national competent authorities designated by EU member states. The Act also establishes new bodies to oversee its implementation, notably the European AI Office within the Commission and the European Artificial Intelligence Board.
Non-compliance can result in significant fines: for the most serious violations (prohibited practices), up to €35 million or 7% of a company’s worldwide annual turnover, whichever is higher.
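As a back-of-the-envelope illustration of that penalty ceiling (using the €35 million / 7% figures for prohibited practices from the final text of the Act; not legal advice):

```python
# Toy calculation of the maximum penalty ceiling for prohibited-practice
# violations: the higher of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_turnover_eur)
```

For a company with €2 billion in turnover, the ceiling works out to €140 million, while smaller companies bottom out at the €35 million floor.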
Implications
The AI Act presents businesses with both challenges and opportunities. On one hand, compliance with the new regulations may require substantial investment in updating AI systems and processes. On the other hand, the Act provides a clear regulatory framework that can drive innovation by establishing trust and reducing uncertainties.
Opinions are mixed on how the AI Act will impact innovation. Some industry experts argue that stringent regulations might stifle innovation initially. However, historical precedents, such as the automobile industry’s safety regulations, suggest that well-implemented rules can foster innovation by building trust and ensuring safety.
Businesses operating in the AI space will need to develop compliance strategies to meet the AI Act’s requirements, emphasize ethical AI development and deployment to build public trust and align with European values, and leverage the regulatory framework to innovate responsibly and gain a competitive edge in the European market.
Implementation
The Act’s provisions will be enforced in stages over the next 36 months. Bans on prohibited (unacceptable-risk) practices take effect within six months, while obligations for general-purpose AI models, such as transparency about training data, apply after 12 months. Most remaining provisions apply within two years, and full compliance for certain high-risk systems is expected within three years.
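Counting from the entry into force on August 1, 2024, the staged timeline above can be summarized as data (a simplified summary of the transition provisions, not a complete schedule):

```python
from datetime import date

# Key application milestones under the AI Act's transition provisions
# (simplified summary; not a complete schedule).
MILESTONES = [
    (date(2025, 2, 2), "bans on prohibited (unacceptable-risk) practices apply"),
    (date(2025, 8, 2), "obligations for general-purpose AI models apply"),
    (date(2026, 8, 2), "most remaining provisions apply"),
    (date(2027, 8, 2), "extended transition ends for certain high-risk systems"),
]

def in_application(as_of: date) -> list[str]:
    """Return the milestones already in application on a given date."""
    return [label for d, label in MILESTONES if d <= as_of]
```

For example, as of March 2025 only the prohibitions are in application; by late 2027, all four milestones have passed.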
Impact
The AI Act is likely to have a ripple effect beyond the EU. As one of the first comprehensive regulatory frameworks for AI, it sets a precedent that other regions may follow. Companies operating globally may adopt EU standards to streamline operations and ensure compliance across multiple jurisdictions.
Furthermore, the AI Act could influence international discussions on AI governance, contributing to the development of global standards and best practices.
The European Commission has also launched a consultation on a Code of Practice for general-purpose AI models, focusing on areas like transparency, copyright, and risk management. This initiative seeks input from various stakeholders to shape future regulatory practices and ensure the responsible use of AI across diverse applications.
Afterword
The enactment of the European AI Act marks a pivotal moment in the global AI regulatory landscape. By setting clear standards and promoting ethical AI development, the EU is paving the way for a safer and more innovative future. As the AI Act takes effect, businesses, developers, and policymakers will need to collaborate closely to navigate its requirements and harness the full potential of AI technologies.