Understanding the EU Guidelines on Prohibited AI Practices under the AI Act
Parul Kaul-Green, CFA
Insurance & Asset Management Transformation Leader | Delivering Strategic Growth, AI Innovation & Operational Excellence | Leadership at AXA, Aviva, Citi, Liberty Mutual
Executive Summary
The European Commission has published guidelines clarifying prohibited artificial intelligence (AI) practices as defined in the EU's Artificial Intelligence Act (AI Act).
These guidelines aim to ensure consistent application and understanding of the Act across the European Union (EU), safeguarding innovation while prioritising health, safety, and fundamental rights.
Though non-binding, they provide critical insights into prohibited AI activities, such as harmful manipulation and social scoring, offering practical examples for stakeholders.
This blog explores the implications of these guidelines for the insurance and financial services sectors, alongside recommendations for effective risk management.
Prohibited AI Practices and Their Rationale
The AI Act identifies specific AI practices that are outright prohibited due to their potential to harm individuals or society. These include:

- Harmful manipulation or deception that materially distorts a person's behaviour
- Exploitation of vulnerabilities linked to age, disability or socio-economic situation
- Social scoring that leads to unjustified or disproportionate detrimental treatment
- Predicting the risk of an individual committing a criminal offence based solely on profiling or personality traits
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition in workplaces and educational institutions, outside medical or safety uses
- Biometric categorisation to infer sensitive characteristics such as race, political opinions or sexual orientation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions
These activities are banned to protect fundamental rights, prevent discrimination, and mitigate risks to personal autonomy and democratic values.
Scope of Application and Sectoral Implications
The guidelines apply across all EU member states, ensuring uniformity in enforcement. For life insurance and pension products, this means heightened scrutiny over any AI systems used for customer profiling or risk assessment to ensure they do not involve prohibited practices like social scoring. For non-life insurance and health insurance sectors, companies must ensure compliance when deploying AI tools for claims processing or fraud detection, avoiding manipulative or discriminatory algorithms.
Implications of Non-Compliance and Timeline
Breaching these prohibitions could result in significant penalties under the AI Act, with fines of up to €35 million or 7% of global annual turnover, whichever is higher. The prohibitions themselves have applied since 2 February 2025. While the guidelines are currently in draft form, companies should prepare for their formal adoption, expected later this year; early alignment with these principles will minimise compliance risk.
Recommendations for Risk Management
To navigate these requirements effectively, firms should review any AI systems used for customer profiling, risk assessment, claims processing and fraud detection against the prohibited practices, and embed this screening into existing risk and compliance frameworks ahead of the guidelines' formal adoption.
For further information or tailored advice on managing these changes, please contact us at [email protected] or book an appointment to discuss.