World’s First Major AI Law 'EU AI Act' Enters into Force — Here’s What It Means for Tech Giants and Other Stakeholders
The European Union (EU) has taken a major step towards regulating artificial intelligence (AI) by introducing the EU AI Act. This groundbreaking legislation aims to establish the EU as a leader in trustworthy, human-centric AI. Below is a comprehensive overview of the EU AI Act and its implications.
AI Regulations
The EU AI Act serves as the primary legislative framework for regulating AI within the EU. In addition, the AI Liability Directive has been proposed to ensure that liability rules are appropriately applied to AI-related claims.
Status of the AI Regulations
Published in the EU Official Journal on July 12, 2024, the EU AI Act is the first comprehensive horizontal legal framework for AI regulation across the EU. It enters into force on August 1, 2024, and most of its provisions will apply from August 2, 2026, under the staggered timetable set out in Article 113 (for example, the prohibitions apply from February 2, 2025, and the obligations for general-purpose AI models from August 2, 2025).
The AI Liability Directive, still in draft form, awaits consideration by the European Parliament and the Council of the EU, with no fixed timeline for its finalization.
Related Laws Affecting AI
Several existing EU laws may impact AI development and use, including the General Data Protection Regulation (GDPR), the Digital Services Act, the Data Act, and EU product safety and product liability legislation.
Definition of AI
The EU AI Act defines an "AI system" as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. It separately defines "general-purpose AI models" as models that display significant generality and can competently perform a wide range of distinct tasks.
The AI Liability Directive is expected to adopt the same definitions as the EU AI Act.
Territorial Scope
The EU AI Act applies extraterritorially to providers placing AI systems or general-purpose AI models on the EU market, irrespective of where they are established; to deployers of AI systems located within the EU; and to providers and deployers established outside the EU where the output produced by the AI system is used in the EU.
The AI Liability Directive applies to non-contractual fault-based civil law claims within the EU.
Sectoral Scope
The EU AI Act is not sector-specific; it applies horizontally across all sectors. The AI Liability Directive is likewise cross-sectoral, covering non-contractual fault-based civil law claims brought before national courts.
Compliance Roles
The EU AI Act defines specific roles, including providers, deployers, importers, distributors, and authorized representatives, and attaches compliance obligations to each. The heaviest obligations fall on providers of high-risk AI systems, while deployers carry duties tied to how such systems are used in practice.
The AI Liability Directive, for its part, would make it easier to bring successful claims against developers of AI systems or against users relying on AI outputs, chiefly by easing the claimant's burden of proof.
Core Issues Addressed by the AI Regulations
The EU AI Act aims to promote human-centric, trustworthy AI while ensuring high levels of protection for health, safety, fundamental rights, democracy, and the rule of law. It also supports innovation and the internal market's functioning.
The AI Liability Directive ensures that individuals harmed by AI systems receive the same protection as those harmed by other technologies, addressing the complexities of fault-based liability for AI-enabled products and services.
Risk Categorization
The EU AI Act classifies AI systems into four risk levels: unacceptable risk (practices prohibited outright, such as social scoring by public authorities and certain manipulative techniques), high risk (systems used in sensitive areas such as employment, education, essential services, and law enforcement, or as safety components of regulated products), limited risk (systems subject to transparency obligations, such as chatbots), and minimal risk (the remainder, which faces no new mandatory requirements).
General-purpose AI models are further classified by their systemic risk potential.
Key Compliance Requirements
Compliance obligations are determined by the AI system's risk level. Prohibited practices may not be placed on the market or used at all. High-risk systems must satisfy requirements covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity, and must undergo conformity assessment before being placed on the market. Limited-risk systems are mainly subject to transparency duties, such as disclosing that a user is interacting with an AI system or that content is AI-generated. Minimal-risk systems face no new mandatory requirements, though voluntary codes of conduct are encouraged.
General-purpose AI models must meet technical documentation and transparency obligations and cooperate with the Commission and national authorities; models designated as posing systemic risk face additional duties, such as model evaluation, adversarial testing, serious-incident reporting, and cybersecurity measures.
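For engineering and compliance teams that want to see the tiered structure at a glance, the minimal sketch below shows one way to encode the four risk tiers and simplified obligation summaries in Python. The tier names, the OBLIGATIONS mapping, and the obligation wording are illustrative assumptions for internal triage tooling only; they are not an official taxonomy and not legal advice.

```python
# Illustrative sketch: a simplified mapping of the EU AI Act's four risk tiers
# to example obligations. The summaries are paraphrased and non-exhaustive.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # high-risk systems
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new mandatory requirements


# Hypothetical, simplified obligation summaries keyed by tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Prohibited: may not be placed on the EU market or used"],
    RiskTier.HIGH: [
        "Risk management system",
        "Data governance and technical documentation",
        "Human oversight, accuracy, robustness, cybersecurity",
        "Conformity assessment before market placement",
    ],
    RiskTier.LIMITED: ["Transparency: disclose AI interaction / AI-generated content"],
    RiskTier.MINIMAL: ["No mandatory requirements; voluntary codes of conduct"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print("-", item)
```

A real compliance workflow would of course key off the Act's actual annexes and qualified legal advice rather than a hard-coded mapping; the sketch only illustrates that obligations scale with the assessed risk tier.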
Regulators and Enforcement
Enforcement involves a combination of national and EU authorities. EU Member States will designate national competent authorities, including notifying and market surveillance authorities. An AI Office within the Commission and an AI Board with Member States' representatives will support enforcement and ensure consistent application.
Penalties for non-compliance are substantial: fines can reach €35 million or 7% of worldwide annual turnover for prohibited practices, €15 million or 3% for most other infringements, and €7.5 million or 1% for supplying incorrect information to authorities, alongside possible restrictions on market access. The AI Liability Directive would introduce a rebuttable presumption of causality and empower national courts to order disclosure of evidence relating to high-risk AI systems.
Conclusion
The EU AI Act represents a landmark effort in AI regulation, aiming to position the EU as a leader in human-centric, trustworthy AI. By setting comprehensive compliance requirements, risk categorization, and robust enforcement mechanisms, the Act provides a blueprint for global AI governance. As the regulatory landscape evolves, the EU AI Act will serve as a critical reference point for jurisdictions worldwide in crafting their AI regulations.