- This European Commission paper outlines the harmonized standards needed to implement the European Union’s AI Act, which entered into force on August 1, 2024. The Act establishes a legal framework for artificial intelligence in the EU, focused on protecting the health, safety, and fundamental rights of individuals, particularly with respect to high-risk AI systems. Compliance for such systems becomes mandatory beginning in August 2026, after a transition period of two to three years depending on the system type.
- The European standardization organizations CEN and CENELEC are drafting the necessary standards at the European Commission’s request. Once their references are published in the Official Journal of the EU, AI systems that comply with these standards will benefit from a legal presumption of conformity, simplifying regulatory compliance for developers.
- Key requirements for high-risk AI systems include robust risk management, data quality and governance, transparency, human oversight, accuracy, and cybersecurity. The standards are expected to provide clear, prescriptive criteria for each of these requirements, ensuring consistent and verifiable compliance across the EU.
- The standardization process aims to align international standardization efforts with the specific requirements of the EU’s AI regulatory framework. It involves a range of stakeholders, including SMEs, to ensure broad applicability and support innovation. The standards must cover the entire lifecycle of AI systems, from development to post-market monitoring, and be applicable across different sectors while remaining flexible enough for specific use cases.