Newsletter #12; High-Risk AI

What Does the Law Require?

Author: Henrik Engel

Introduction

High-risk AI is a key focus of the AI Act, which places specific requirements on systems that have a potential impact on citizens' rights, safety or freedoms. But what does this mean in practice? This article provides an overview of the requirements for high-risk AI, how to identify a high-risk system, and how to ensure proper documentation, monitoring and risk management.


How do you identify high-risk AI?

The AI Act defines high-risk AI as systems that:

Are used in critical sectors such as health, transport, justice and employment.

Have a significant impact on individuals, such as facial recognition in public spaces or algorithms used for recruitment.

Support decision-making processes with direct consequences for individuals' rights, such as credit scoring.

To assess whether an AI system is high-risk, the organisation should:

Conduct a risk assessment that considers the potential consequences for affected parties.

Map the scope and analyse whether the system falls within the AI Act's defined high-risk categories.

Documentation requirements for high-risk AI

Organisations that develop or use high-risk AI must ensure comprehensive documentation, including:

Technical documentation - Description of the AI system's functions, algorithms and data sources. Documentation of test procedures and safety controls.

Risk assessments - Identification of possible risks to users and affected groups. Documentation of measures to minimise risks.

Instructions for use - Clear guidelines for the correct use of the system. Warnings about potential limitations or sources of error.

Monitoring and compliance

To ensure that high-risk AI remains safe and in compliance with the AI Act, organisations must:

Implement monitoring procedures - Continuous monitoring of the system's performance and key risks. Identification and repair of errors or unforeseen consequences.

Ensure traceability - Documentation must be kept so that supervisory authorities can verify compliance. Logging of decision-making processes to be able to explain the system's actions.

Audit and control - Regular internal and external audits to ensure compliance with legal requirements.

Risk management for high-risk AI

The AI Act requires a systematic approach to risk management, which includes:

Identification of risks - Working through potential scenarios where the system may fail or cause harm.

Addressing risks - Implementing technical and organisational measures, such as encryption, fail-safe mechanisms and robust testing.

Updating risk assessments - Regular reviews based on new applications or user feedback.


Conclusion

High-risk AI places significant demands on organisations, but meeting them is a necessary investment to protect users' rights and ensure trust in AI systems. By following the AI Act's guidelines, organisations can minimise risks while responsibly harnessing AI's potential.

Questions for the reader: How does your organisation work with high-risk AI? Have you experienced challenges in identifying or meeting the requirements? Please share your thoughts!


Sources and inspiration:

The AI Act's official documentation.

ISO 31000: Risk Management

ISO 42001: AI Governance

Article 35, GDPR: Data Protection Impact Assessment (DPIA)
