The EU AI Act entered into force on August 1, 2024, and becomes fully applicable on August 2, 2026 (with some provisions, such as the prohibitions, taking effect earlier): https://artificialintelligenceact.eu/
The AI Act affects a wide range of stakeholders, including AI developers, users, third-party vendors, importers, and public authorities, both within and outside the EU. Its primary focus is on ensuring compliance for high-risk AI systems used in sectors such as healthcare, critical infrastructure, law enforcement, and education. The Act's full text is available here: https://artificialintelligenceact.eu/ai-act-explorer/
So, I know what you're asking: how do we comply? Here I have put together some simple pointers to help you put in place a straightforward and effective governance programme, evidencing that you have robust compliance measures in place.
You will need several supporting documents. These documents will provide the structure and guidance for implementing processes, policies, and controls to ensure compliance. Here’s a list of key supporting documents from my perspective and what they should cover:
1. AI Inventory and Classification Template
- Purpose: To track all AI systems in use within the organization and classify them based on the AI Act’s risk categories (high-risk, low-risk, prohibited).
- Contents:
  - AI system name and description
  - Purpose and use case
  - Source (in-house, third-party)
  - Risk level (high-risk, low-risk, prohibited)
  - Date of assessment
- Output: This document will help map out which AI systems are subject to specific compliance requirements under the Act.
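As a minimal sketch of such an inventory entry (field names are illustrative, not mandated by the Act):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's broad risk categories, as used in the template above."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LOW = "low-risk"

@dataclass
class AISystemRecord:
    """One row in the AI inventory; fields mirror the template above."""
    name: str
    description: str
    purpose: str        # use case
    source: str         # "in-house" or "third-party"
    risk_level: RiskLevel
    assessed_on: date

# Example entry: recruitment tools are a high-risk use case under Annex III.
record = AISystemRecord(
    name="CV screening model",
    description="Ranks incoming job applications",
    purpose="Recruitment shortlisting",
    source="third-party",
    risk_level=RiskLevel.HIGH,
    assessed_on=date(2025, 1, 15),
)
```

Keeping the inventory in a structured form like this makes it easy to filter for the high-risk systems that carry the bulk of the compliance obligations.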
2. AI Risk Assessment Procedure
- Purpose: To define how AI systems will be evaluated for risks, especially high-risk systems.
- Contents:
  - Methodology for assessing risks (impact on fundamental rights, ethics, bias, data quality, etc.)
  - Risk scoring criteria
  - Frequency of assessment (initial, periodic re-assessments)
  - Responsible personnel (AI compliance officer, IT security, etc.)
- Output: An assessment procedure ensures that risks are regularly evaluated and managed in alignment with the AI Act.
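One simple way to operationalise the risk scoring criteria is a weighted sum over per-criterion ratings. The criteria, weights, and threshold below are assumptions for illustration, not values taken from the AI Act:

```python
# Illustrative scoring criteria and weights (assumptions, not from the Act).
CRITERIA_WEIGHTS = {
    "fundamental_rights_impact": 3,
    "bias_potential": 2,
    "data_quality_concerns": 2,
    "safety_impact": 3,
}

def risk_score(ratings: dict[str, int]) -> int:
    """Weighted sum of per-criterion ratings (each rated 0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

def needs_priority_review(score: int, threshold: int = 30) -> bool:
    """Flag systems scoring above an (assumed) threshold for earlier re-assessment."""
    return score > threshold

ratings = {
    "fundamental_rights_impact": 4,
    "bias_potential": 3,
    "data_quality_concerns": 2,
    "safety_impact": 4,
}
score = risk_score(ratings)  # 3*4 + 2*3 + 2*2 + 3*4 = 34
```

Whatever scheme you adopt, the key point is that scores are produced the same way every cycle, so re-assessments are comparable over time.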
3. Data Governance Policy
- Purpose: To outline data handling practices for AI, ensuring data quality, integrity, and fairness.
- Contents:
  - Data collection, storage, and processing protocols
  - Measures to prevent and mitigate bias in AI training data
  - Data anonymization and privacy-preserving techniques
  - Review and audit process for data sets used in AI systems
- Output: A comprehensive data governance policy ensures that AI operates on trustworthy, non-discriminatory data and complies with the AI Act’s transparency and fairness mandates.
4. AI Conformity Assessment Checklist
- Purpose: To ensure high-risk AI systems meet the required conformity assessment processes.
- Contents:
  - Required technical documentation (design, algorithms, data sets)
  - Testing and validation steps (security, accuracy, performance)
  - Certification requirements (internal and external approvals)
  - Record-keeping requirements
- Output: A checklist that provides a structured approach to ensure high-risk AI systems are tested and certified in line with the AI Act.
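A lightweight sketch of tracking such a checklist programmatically; the item names paraphrase the documentation areas above and are not an official conformity checklist:

```python
# Illustrative checklist items (paraphrased from the contents list above).
CHECKLIST = [
    "technical documentation (design, algorithms, data sets)",
    "testing and validation (security, accuracy, performance)",
    "certification (internal and external approvals)",
    "record-keeping requirements",
]

def outstanding_items(completed: set[str]) -> list[str]:
    """Return checklist items for which no evidence has been recorded yet."""
    return [item for item in CHECKLIST if item not in completed]

done = {"technical documentation (design, algorithms, data sets)"}
remaining = outstanding_items(done)  # three items still open
```

Even a simple tracker like this gives auditors a clear, dated view of which conformity steps are evidenced and which are still open.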
5. Human Oversight and Intervention Plan
- Purpose: To ensure that appropriate human oversight mechanisms are in place for AI systems, particularly for high-risk AI.
- Contents:
  - Detailed process for human monitoring and intervention
  - Roles and responsibilities for overseeing AI
  - Clear “human-in-the-loop” mechanisms for decision-making
  - Escalation processes for intervening in case of AI malfunction or risk
- Output: A plan to ensure human operators can intervene in AI decisions, especially in critical scenarios.
6. AI Transparency Policy
- Purpose: To outline how the organization will ensure transparency of AI systems, especially for high-risk systems.
- Contents:
  - Mechanisms for explaining AI system logic, purpose, and decision-making
  - User communication processes (how end-users will be informed)
  - Internal transparency (how stakeholders and regulators will be informed)
  - Documentation of all algorithms and model development processes
- Output: A policy that ensures users and stakeholders understand how AI systems work and can trust their outputs.
7. Ethical AI Guidelines
- Purpose: To establish ethical standards for developing and deploying AI systems.
- Contents:
  - Guidelines on fairness, bias prevention, and non-discrimination
  - Prohibited use cases (social scoring, manipulation, exploitation)
  - Framework for assessing ethical risks
  - Responsibility assignment for enforcing ethical AI standards
- Output: A set of guidelines that ensures AI systems align with the ethical principles outlined in the AI Act.
8. Incident Response Plan for AI Systems
- Purpose: To prepare the organization to handle incidents related to AI system failures, security breaches, or non-compliance.
- Contents:
  - Procedures for identifying and reporting AI incidents (especially for high-risk systems)
  - Steps for investigating and resolving incidents
  - Notification processes for regulators and affected parties
  - Post-incident review and improvement measures
- Output: A plan that enables swift response to any AI-related incident and ensures compliance with the AI Act’s reporting requirements.
9. Vendor Compliance Questionnaire
- Purpose: To ensure AI solutions sourced from third-party vendors comply with the AI Act.
- Contents:
  - Vendor information (name, contact, legal agreements)
  - Vendor’s AI risk classification and compliance processes
  - Certifications and conformity assessment evidence
  - Data security, privacy, and fairness measures implemented by the vendor
- Output: A questionnaire that helps assess whether third-party AI systems meet the necessary standards, protecting your organization from non-compliance.
10. Employee Training Program on AI Compliance
- Purpose: To educate employees, especially those working directly with AI, about the AI Act’s requirements and compliance processes.
- Contents:
  - Overview of the AI Act and risk categories
  - Ethical considerations in AI use
  - Procedures for monitoring and intervening in AI decisions
  - Data governance and bias prevention training
  - Reporting and incident management processes
- Output: A training program that builds organizational awareness and capacity to operate AI in compliance with legal standards.
Steps to Implement the Framework:
- Start with a gap analysis to determine what areas of compliance the organization already meets and where it needs improvement.
- Develop the documents using best practices and align them with your organization’s existing processes.
- Assign responsibilities across departments (IT, HR, Legal, Risk, Compliance) to implement the framework.
- Set timelines and milestones to ensure the framework is fully operational by the time the AI Act becomes fully applicable in August 2026.
- Monitor and review on an ongoing basis to adapt to any changes in AI Act regulations or internal needs.
This framework ensures your organization not only complies with the AI Act but also builds a culture of responsible AI use. To check whether your AI system is in compliance, you can use the EU AI Act Compliance Checker: https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/
Hope this helps! If you have any questions or comments, please don't hesitate to reach out. Keep growing!