Newsletter #12: High-Risk AI
Leif Rasmussen
Passionate about using data to create cutting-edge solutions and Cloud technologies to successfully drive Data & AI initiatives.
What Does the Law Require?
Author: Henrik Engel
Introduction
High-risk AI is a key focus of the AI Act, which places specific requirements on systems that have a potential impact on citizens' rights, safety or freedoms. But what does this mean in practice? This article provides an overview of the requirements for high-risk AI, how to identify a high-risk system, and how to ensure proper documentation, monitoring and risk management.
How do you identify high-risk AI?
The AI Act defines high-risk AI as systems that:
Are used in critical sectors such as health, transport, justice and employment.
Have a significant impact on individuals, such as facial recognition in public spaces or algorithms used for recruitment.
Support decision-making processes with direct consequences for individuals' rights, such as credit scoring.
To assess whether an AI system is high-risk, the organisation should:
Conduct a risk assessment that considers the potential consequences for affected parties.
Map the scope and analyse whether the system falls within the AI Act's defined high-risk categories.
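The two assessment steps above can be sketched as a simple first-pass screening. This is a minimal illustration, not a legal test: the `SystemProfile` fields and the sector list are assumptions for this example, loosely inspired by the AI Act's high-risk areas, and a positive result only means a full assessment is warranted.

```python
# Illustrative first-pass screening of whether an AI system may be high-risk.
# The category names and fields are assumptions for this sketch, not legal
# definitions; a True result means "do a detailed assessment", nothing more.
from dataclasses import dataclass

# Sectors broadly corresponding to the AI Act's high-risk areas.
HIGH_RISK_SECTORS = {"health", "transport", "justice", "employment",
                     "biometric identification", "credit scoring"}

@dataclass
class SystemProfile:
    name: str
    sector: str                      # domain the system operates in
    affects_individual_rights: bool  # e.g. recruitment or credit decisions

def may_be_high_risk(profile: SystemProfile) -> bool:
    """Return True if the system warrants a full high-risk assessment."""
    return (profile.sector.lower() in HIGH_RISK_SECTORS
            or profile.affects_individual_rights)

# Example: a CV-screening tool used in recruitment.
tool = SystemProfile("cv-screener", "employment", affects_individual_rights=True)
print(may_be_high_risk(tool))  # True -> proceed to a detailed risk assessment
```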
Documentation requirements for high-risk AI
Organisations that develop or use high-risk AI must ensure comprehensive documentation, including:
Technical documentation - Description of the AI system's functions, algorithms and data sources. Documentation of test procedures and safety controls.
Risk assessments - Identification of possible risks to users and affected groups. Documentation of measures to minimise risks.
Instructions for use - Clear guidelines for the correct use of the system. Warnings about potential limitations or sources of error.
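One practical way to keep the three documentation pillars above in sync is to treat them as a structured record with a basic completeness check. A minimal sketch follows; the field names are assumptions for this example, since the AI Act prescribes the content of the documentation, not any particular data format.

```python
# Illustrative record structure for the three documentation pillars:
# technical documentation, risk assessments, and instructions for use.
# Field names are assumptions for this sketch.
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    system_description: str       # functions, algorithms
    data_sources: list[str]
    test_procedures: list[str]    # incl. safety controls

@dataclass
class RiskAssessment:
    identified_risks: list[str]
    mitigation_measures: list[str]

@dataclass
class HighRiskAIDossier:
    technical: TechnicalDocumentation
    risks: RiskAssessment
    instructions_for_use: str
    known_limitations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Basic completeness check before internal review."""
        return bool(self.technical.system_description
                    and self.risks.identified_risks
                    and self.instructions_for_use)
```

A dossier like this can then gate release: a system with an empty risk list or missing usage instructions simply fails `is_complete()` and is not shipped.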
Monitoring and compliance
To ensure that high-risk AI remains safe and in compliance with the AI Act, organisations must:
Implement monitoring procedures - Continuous monitoring of the system's performance and key risks. Identification and repair of errors or unforeseen consequences.
Ensure traceability - Documentation must be kept so that supervisory authorities can verify compliance. Logging of decision-making processes to be able to explain the system's actions.
Audit and control - Regular internal and external audits to ensure compliance with legal requirements.
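The traceability requirement above boils down to logging every automated decision so that a supervisory authority can later reconstruct it. A minimal sketch using only the Python standard library is shown here; the log fields are assumptions for this example, as the AI Act requires that decisions be explainable and verifiable, not this exact format.

```python
# Illustrative decision logging for traceability, standard library only.
# The log fields (system_id, model_version, inputs, output) are assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_decision(system_id: str, inputs: dict, output: str,
                 model_version: str) -> dict:
    """Record one automated decision so auditors can later reconstruct it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # what the decision was based on
        "output": output,                # the decision itself
    }
    audit_log.info(json.dumps(entry))    # in practice: a tamper-evident store
    return entry

entry = log_decision("credit-scoring-v2", {"income": 42000},
                     "approved", "2.1.0")
```

In production the JSON lines would go to an append-only, access-controlled store rather than a plain logger, so that records cannot be silently altered.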
Risk management for high-risk AI
The AI Act requires a systematic approach to risk management, which includes:
Identification of risks - Working through potential scenarios where the system may fail or cause harm.
Addressing risks - Implementing technical and organisational measures, such as encryption, fail-safe mechanisms and robust testing.
Updating risk assessments - Regular reviews based on new applications or user feedback.
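The identify-address-update cycle above is often operationalised as a risk register. The sketch below uses a likelihood-times-impact score on a 1-5 scale, a common convention from ISO 31000-style practice rather than anything mandated by the AI Act; the threshold value is likewise an assumption for this example.

```python
# Illustrative risk register: identify risks, score them, and flag those
# needing attention. The 1-5 likelihood x impact scale and the threshold
# are assumptions drawn from common risk-management practice.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def needs_attention(register: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return risks at or above the review threshold, highest score first."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)

register = [
    Risk("Biased outcomes in recruitment scoring", likelihood=4, impact=4,
         mitigations=["bias testing", "human review of rejections"]),
    Risk("Service outage during inference", likelihood=2, impact=3,
         mitigations=["fail-safe fallback"]),
]
flagged = needs_attention(register)  # only the bias risk (score 16) is flagged
```

Re-running the scoring after each review, with likelihoods adjusted for new applications or user feedback, implements the "updating risk assessments" step directly.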
Conclusion
High-risk AI places significant demands on organisations, but meeting them is a necessary investment to protect users' rights and ensure trust in AI systems. By following the AI Act's guidelines, organisations can minimise risks while responsibly utilising AI's potential.
Questions for the reader: How does your organisation work with high-risk AI? Have you experienced challenges in identifying or meeting the requirements? Please share your thoughts!
Sources and inspiration:
The AI Act's official documentation.
ISO 31000: Risk Management
ISO 42001: AI Governance
Article 35, GDPR: Data Protection Impact Assessment (DPIA)