Setting the Standard: How Harmonised EU Guidelines Will Shape the Future of AI Safety and Accountability
Massimo Buonomo
Global Expert & Futurist | Keynote Speaker & Influencer in AI, Web3, Metaverse, and CBDCs | AI Board Member for Corporates & International Organizations
In August 2024, the European Union's AI Act entered into force, a landmark regulation governing high-risk AI systems whose provisions will apply after transition periods of roughly two to three years.
The AI Act is designed to ensure that high-risk AI applications in the EU adhere to strict health, safety, and fundamental rights protections. To streamline compliance, AI systems developed according to harmonised standards published in the Official Journal of the EU will receive a legal presumption of conformity with the AI Act.
European standardisation organisations, primarily CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization), are leading the charge in drafting these critical standards under the guidance of the European Commission.
Here we outline the main characteristics expected from the upcoming harmonised standards, focusing on risk management to support effective AI Act implementation.
Key Standardisation Deliverables Requested by the European Commission
1. Establishing a Robust Risk Management System
2. Addressing Technical and Non-Technical Aspects of AI Risk
3. Comprehensive Risk Identification and Documentation
4. Testing and Evaluation as Key Risk Mitigation Measures
5. Clear Requirements for Processes and Outcomes
Implications for AI Providers and Developers
The upcoming standards will demand more rigorous assessments of AI systems’ impacts on users and society, focusing on mitigating risks that AI systems pose to fundamental rights. Developers and providers will need to adopt a proactive approach to risk management, addressing ethical and societal implications throughout the AI lifecycle. Testing and evaluation will play a prominent role, not only in ensuring compliance but also in strengthening public trust in AI technologies.
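As an illustration of what such a documented, test-driven risk management workflow might look like in practice, here is a minimal Python sketch of a risk register with a pre-deployment release gate. The class names, risk categories, metrics, and thresholds are assumptions made for illustration only; they are not taken from the AI Act or the draft CEN/CENELEC standards, which define process requirements rather than code.

```python
# Hypothetical sketch of a risk register and evaluation gate for a high-risk AI system.
# Class names, risk categories, metrics, and thresholds are illustrative assumptions,
# not requirements drawn from the AI Act or the draft harmonised standards.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class RiskCategory(Enum):
    HEALTH_AND_SAFETY = "health_and_safety"
    FUNDAMENTAL_RIGHTS = "fundamental_rights"
    TECHNICAL_ROBUSTNESS = "technical_robustness"


@dataclass
class RiskEntry:
    """One identified risk, its mitigation, and the evidence that it was tested."""
    identifier: str
    category: RiskCategory
    description: str
    mitigation: str
    test_metric: str                 # e.g. "max_group_error_gap"
    acceptance_threshold: float
    measured_value: Optional[float] = None

    def is_mitigated(self) -> bool:
        # A risk counts as mitigated only if it has been measured and meets its threshold.
        return self.measured_value is not None and self.measured_value <= self.acceptance_threshold


@dataclass
class RiskRegister:
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

    def record_test_result(self, identifier: str, value: float) -> None:
        # Attach a measured evaluation result to a documented risk.
        for entry in self.entries:
            if entry.identifier == identifier:
                entry.measured_value = value
                return
        raise KeyError(f"No risk entry with id {identifier!r}")

    def release_gate(self) -> bool:
        """Pre-deployment check: every documented risk must be tested and within threshold."""
        return all(entry.is_mitigated() for entry in self.entries)


if __name__ == "__main__":
    register = RiskRegister(
        system_name="credit-scoring-model",
        entries=[
            RiskEntry(
                identifier="R-001",
                category=RiskCategory.FUNDAMENTAL_RIGHTS,
                description="Disparate error rates across protected groups",
                mitigation="Re-weighted training data; per-group evaluation",
                test_metric="max_group_error_gap",
                acceptance_threshold=0.05,
            ),
        ],
    )
    register.record_test_result("R-001", 0.03)
    print("Release approved:", register.release_gate())  # Release approved: True
```

The point of the sketch is the shape of the process, not the specific code: each risk is identified, documented with a mitigation, tied to a measurable test, and deployment is blocked until every documented risk has passing evidence, which mirrors the lifecycle approach the standards are expected to require.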
Conclusion
The harmonised standards developed by CEN and CENELEC will play a pivotal role in supporting the AI Act’s mission to make AI in Europe safe, ethical, and accountable. By embedding risk management systems that address both technical and ethical considerations, these standards will lay a robust foundation for high-risk AI systems, safeguarding individuals and aligning with the EU’s values on fundamental rights and public safety. As the standardisation process unfolds, these deliverables will enable a smoother transition to full AI Act compliance, reinforcing Europe’s leadership in responsible AI governance.
Interested in learning how AI can transform your organization? Schedule a call with our experts today! Please send an email to [email protected]