The EU's AI Act: Mixed reaction to regulations aimed at balancing innovation with rigorous oversight
Emma Linaker
Fractional CMO/CCO | Growth Strategist | Ex-Google & Ogilvy | 25+ Years in Marketing | Digital Transformation Expert | Middle East & Asia Specialist | Crisis & Reputation Expert | Speaker
The European Union’s Artificial Intelligence (AI) Act, which came into force recently, marks a significant milestone in the regulation of AI technologies within the EU, aiming to ensure that AI is developed and deployed responsibly.
In summary, the Act classifies AI systems into four categories based on their risk level and sets out clear obligations for providers, with the intention of protecting citizens while encouraging innovation within the EU.
Reaction from stakeholders and industry has been mixed. The International Association of Privacy Professionals reports that industry players are welcoming the rules while weighing their compliance and regulatory options, whereas civil society groups have been more critical, with some saying the AI Act does not do enough to protect human rights and will harm both citizens and their intellectual property.
In a Euronews Next article, Max von Thun, Europe director of the Open Markets Institute, said the Act leaves significant loopholes for public authorities and imposes relatively weak regulation on the largest foundation models, which pose the greatest harm. His biggest concern is tech monopolies.
"The AI Act is incapable of addressing the number one threat AI currently poses: its role in increasing and entrenching the extreme power a few dominant tech firms already have in our personal lives, our economies, and our democracies," he said.
Cecilia Bonefeld-Dahl, director general of industry association Digital Europe, expressed her concerns in a reaction published after the vote.
"The AI Act, if implemented smoothly, can be a positive force for AI uptake and innovation in Europe. But being so horizontal, the AI Act touches upon so many sectors and their existing legislation (like medical devices, machinery or toy safety). This is on top of the unprecedented number of digital laws we've seen this term, such as the Cyber Resilience Act and the Data Act. It's like a regulatory spaghetti bowl and a lot to digest - the next Commission will have to focus on untangling it.”
A KPMG report titled ‘Decoding the EU AI Act’ notes that the Act is widely considered a significant step toward regulating AI, promoting safe, transparent, and ethically sound AI practices, and that it aims to balance innovation with the protection of fundamental rights.
“Concerns remain, however, about the potential for overregulation, particularly for startups and SMEs, which could face high compliance costs. As a result, while the Act sets a global standard for AI governance, its long-term impact will depend on how effectively it can promote innovation without stifling competition or technological advancement,” the report states.
The report also notes that critics consider the Act's classifications too rigid, with the potential to misclassify or overburden certain AI applications.
Understanding the EU’s New AI Act
The Act defines four tiers of risk: unacceptable, high, limited, and minimal. Practices deemed to pose an unacceptable risk are banned outright. These include:
- Manipulative AI that distorts behaviour or decision-making through subliminal, deceptive techniques.
- Biometric categorisation systems that infer sensitive attributes like race, religion, or political beliefs from biometric data.
- Social scoring systems that evaluate or classify individuals based on social behaviour or personal traits.
- Emotion recognition in sensitive environments such as workplaces or educational institutions, unless for medical or safety reasons.
Provider obligations
Providers of high-risk AI systems, whether based within the EU or outside, must adhere to several key obligations:
- Implementing a risk management system throughout the AI system’s lifecycle.
- Ensuring that training, validation, and testing datasets are relevant, representative, and free of errors.
- Providing detailed technical documentation to demonstrate compliance and facilitate assessments by authorities.
- Designing AI systems to allow human oversight, ensuring that the technology does not operate unchecked.
- Implementing measures to protect against cybersecurity threats and maintain the system's accuracy and robustness.
General Purpose AI (GPAI)
General Purpose AI models, capable of performing a wide range of tasks, are also regulated under the AI Act. Providers must document the training and testing processes, evaluation results, and any systemic risks associated with the model, and must publish a summary of the content used to train it. If a model poses systemic risks, providers must also perform adversarial testing, ensure cybersecurity protections, and report serious incidents.
Governance and implementation
The AI Act will be overseen by a newly established AI Office within the European Commission, responsible for monitoring compliance and investigating systemic risks. The Act's provisions will be rolled out gradually, with different deadlines for various categories of AI systems.
The prohibitions on banned AI systems take effect within 6 months of entry into force, the rules for General Purpose AI within 12 months, and those for high-risk AI systems within 24 to 36 months, depending on the category.
Over the coming months and years, as the Act's various provisions come into effect, businesses and developers will need to follow these regulations closely to ensure compliance and continue operating within the EU market.