Setting the Standard: How Harmonised EU Guidelines Will Shape the Future of AI Safety and Accountability

In August 2024, the European Union's landmark AI Act entered into force, a regulation set to govern high-risk AI systems with provisions that will apply after a transition period of two to three years.

The AI Act is designed to ensure that high-risk AI applications in the EU adhere to strict health, safety, and fundamental rights protections. To streamline compliance, AI systems developed according to harmonised standards published in the Official Journal of the EU will receive a legal presumption of conformity with the AI Act.

European standardisation organisations, primarily CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization), are leading the charge in drafting these critical standards under a standardisation request from the European Commission.

Here we outline the main characteristics expected from the upcoming harmonised standards, focusing on risk management to support effective AI Act implementation.


Key Standardisation Deliverables Requested by the European Commission

1. Establishing a Robust Risk Management System

  • Harmonised standards for the AI Act will prioritise a systematic approach to managing the risks associated with AI products and services.
  • These standards will shift from ISO/IEC’s traditional focus on organisational practices to product-specific requirements: the AI Act’s risk management centres on potential impacts on health, safety, and fundamental rights.
  • The goal is to ensure that AI systems covered by the Act undergo rigorous processes to identify and mitigate risks across their lifecycle; a minimal sketch of what such a lifecycle risk register might look like follows below.
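
To make the lifecycle idea concrete, here is a minimal, illustrative Python sketch of a risk register. Everything in it is an assumption for illustration: the class names, lifecycle phases, and severity scale are invented, and neither the AI Act nor the draft harmonised standards prescribe any such schema.

```python
# Illustrative sketch only: a toy risk register for tracking identified
# risks across an AI system's lifecycle. All names and scales here are
# hypothetical; the harmonised standards do not prescribe this schema.
from dataclasses import dataclass, field
from enum import Enum


class LifecyclePhase(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "post-market monitoring"


@dataclass
class Risk:
    identifier: str
    description: str
    phase: LifecyclePhase
    impact_areas: list[str]   # e.g. "health", "safety", "fundamental rights"
    severity: int             # assumed 1 (low) to 5 (critical) scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def mitigated(self) -> bool:
        return bool(self.mitigations)


@dataclass
class RiskRegister:
    """Holds all identified risks so each one stays traceable to a mitigation."""
    risks: list[Risk] = field(default_factory=list)

    def unmitigated(self) -> list[Risk]:
        """Risks identified but not yet addressed by any mitigation measure."""
        return [r for r in self.risks if not r.mitigated]
```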

2. Addressing Technical and Non-Technical Aspects of AI Risk

  • Standards will be designed to consider both technical elements, such as the software lifecycle and continuous monitoring, and non-technical elements, such as impacts on fundamental rights and other ethical concerns.
  • AI-specific characteristics, such as adaptive algorithms and decision-making opacity, will be accounted for to address potential risks effectively.

3. Comprehensive Risk Identification and Documentation

  • Standards must require thorough identification of all reasonably foreseeable risks associated with AI systems, ensuring that safety, compliance, and accountability are prioritised.
  • Risk identification processes must be documented, providing evidence that all known risks have been addressed, including through risk mitigation strategies suited to the unique nature of each AI system; a sketch of what such an evidence record might look like follows below.
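
Building on the register sketch above, the following is a hypothetical illustration of how an identified risk set could be serialised into an audit-friendly evidence record. The JSON layout and field names are assumptions for illustration, not a format mandated by the Act or the draft standards.

```python
# Illustrative sketch only: serialising the risk register into a
# documentation record evidencing which risks were identified and how
# each was addressed. The JSON layout is assumed, not mandated.
import json
from datetime import datetime, timezone


def document_risks(register: RiskRegister) -> str:
    """Render an audit-friendly JSON record of the risk identification process."""
    record = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "risks": [
            {
                "id": r.identifier,
                "description": r.description,
                "phase": r.phase.value,
                "impact_areas": r.impact_areas,
                "severity": r.severity,
                "mitigations": r.mitigations,
                "mitigated": r.mitigated,
            }
            for r in register.risks
        ],
        "open_risk_ids": [r.identifier for r in register.unmitigated()],
    }
    return json.dumps(record, indent=2)
```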

4. Testing and Evaluation as Key Risk Mitigation Measures

  • Testing and evaluation of mitigation measures are critical to ensuring the safety and compliance of AI systems. The AI Act emphasises this process, particularly in Article 9, which requires that risk management measures be validated through testing against predefined metrics and probabilistic thresholds.
  • Standardisation will encourage the use of metrics and thresholds to evaluate the effectiveness of risk mitigation steps, ensuring that systems meet the stringent requirements outlined in the Act; a toy example of threshold-based evaluation follows below.
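
As a toy illustration of threshold-based evaluation, the sketch below checks measured metrics against declared acceptance thresholds. The metric names and threshold values are invented: Article 9 requires testing against predefined metrics but does not prescribe which metrics or limits a provider must use.

```python
# Illustrative sketch only: checking measured metrics against declared
# acceptance thresholds. Metric names and limits are invented; the Act
# requires testing against predefined metrics but does not specify them.
ACCEPTANCE_THRESHOLDS = {
    "false_positive_rate": 0.02,      # hypothetical upper limit
    "demographic_parity_gap": 0.05,   # hypothetical upper limit
}


def evaluate_mitigation(measured: dict[str, float]) -> dict[str, bool]:
    """A metric passes if its measured value stays at or below its threshold."""
    return {
        metric: measured.get(metric, float("inf")) <= limit
        for metric, limit in ACCEPTANCE_THRESHOLDS.items()
    }


print(evaluate_mitigation(
    {"false_positive_rate": 0.015, "demographic_parity_gap": 0.07}
))
# -> {'false_positive_rate': True, 'demographic_parity_gap': False}
```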

5. Clear Requirements for Processes and Outcomes

  • Instead of mandating specific risk treatment measures for every AI system, the standards will outline essential requirements that providers must meet, particularly focusing on process-based objectives and measurable outcomes.
  • Key criteria, such as safety thresholds, assessment protocols, and priorities in testing risk mitigation effectiveness, will be defined to offer guidance without being overly prescriptive.


Implications for AI Providers and Developers

The upcoming standards will demand more rigorous assessments of AI systems’ impacts on users and society, focusing on mitigating risks that AI systems pose to fundamental rights. Developers and providers will need to adopt a proactive approach to risk management, addressing ethical and societal implications throughout the AI lifecycle. Testing and evaluation will play a prominent role, not only in ensuring compliance but also in strengthening public trust in AI technologies.


Conclusion


The harmonised standards developed by CEN and CENELEC will play a pivotal role in supporting the AI Act’s mission to make AI in Europe safe, ethical, and accountable. By embedding risk management systems that address both technical and ethical considerations, these standards will lay a robust foundation for high-risk AI systems, safeguarding individuals and aligning with the EU’s values on fundamental rights and public safety. As the standardisation process unfolds, these deliverables will enable a smoother transition to full AI Act compliance, reinforcing Europe’s leadership in responsible AI governance.


Interested in learning how AI can transform your organisation? Schedule a call with our experts today! Please send an email to [email protected]
