The AI Regulatory Journey Has Officially Begun

On 1 August 2024, the EU’s Artificial Intelligence Act (AI Act) came into force. The regulation is the first of its kind: a comprehensive, sector-agnostic framework governing the use of AI systems and models by manufacturers, developers, deployers, and users in the EU.

At its core, the AI Act aims to establish trust, transparency and accountability, laying the foundation for the safe adoption of AI in a way that addresses concerns about its ethical use.

The AI Act categorises AI systems into four risk tiers (a simple sketch of this taxonomy follows the list):

  1. Unacceptable Risk - AI systems in this category are prohibited entirely. Unacceptable-risk systems are defined as those that have significant potential for manipulation or that exploit the vulnerabilities of specific groups. An example is the use of real-time remote biometric identification in public spaces. These systems are generally out of scope for the financial services industry.
  2. High-Risk - High-risk AI systems must meet various requirements, including risk management, data governance, monitoring and record-keeping practices, human oversight obligations, and standards for accuracy, robustness and cybersecurity. High-risk AI systems must also be registered in an EU-wide public database. Credit scoring of banking customers can fall into this category.
  3. Limited Risk - Includes systems such as chatbots. These must comply with transparency and disclosure requirements, ensuring users are aware that they are interacting with an AI system.
  4. Minimal Risk - Includes applications that are already widely available, such as spam filters, which will remain largely unregulated.
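
For firms building an internal inventory, these four tiers map naturally onto a simple taxonomy. The Python sketch below is a minimal illustration only: the tier names come from the Act, but the classify_use_case helper and its example mappings (drawn from the use cases above) are hypothetical and not part of any official tooling.

    from enum import Enum

    class AIRiskTier(Enum):
        """The four risk tiers defined by the EU AI Act."""
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # permitted, subject to strict obligations
        LIMITED = "limited"            # transparency and disclosure duties
        MINIMAL = "minimal"            # largely unregulated

    # Hypothetical mapping of the example use cases discussed above.
    EXAMPLE_CLASSIFICATIONS = {
        "real-time remote biometric identification": AIRiskTier.UNACCEPTABLE,
        "credit scoring of banking customers": AIRiskTier.HIGH,
        "customer-facing chatbot": AIRiskTier.LIMITED,
        "spam filter": AIRiskTier.MINIMAL,
    }

    def classify_use_case(use_case: str) -> AIRiskTier:
        """Look up a use case in the example table, defaulting to HIGH so that
        unknown systems are escalated for review rather than waved through."""
        return EXAMPLE_CLASSIFICATIONS.get(use_case.lower(), AIRiskTier.HIGH)

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: it forces a human review instead of silently treating an unclassified system as minimal risk.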

General purpose (GP) AI models (including generative AI models, such as ChatGPT) are treated separately from the above risk categorisation. All GP AI model providers must comply with EU copyright law and make technical information available to downstream users who intend to integrate the model into their own AI systems. Providers of models deemed to pose systemic risk face additional obligations, including assessing and mitigating those risks and conducting adversarial testing of the model.

Of course, the financial services industry relies heavily on AI, with key use cases including trading, fraud detection, and risk management. Financial services firms in the EU will have to adapt their processes and systems in order to stay aligned with the regulation. In particular, high-risk applications will require financial institutions to prioritise the following (a sketch of how these obligations might be tracked appears after the list):

  • Continuous Risk Management – Address ethics, safety, and fundamental rights throughout the AI lifecycle, including regular updates, documentation, and stakeholder engagement.
  • Comprehensible Documentation – Maintain clear and up-to-date technical documentation for high-risk systems, including properties, algorithms, data processes, and risk management plans.
  • Human Oversight and Transparency – Maintain human oversight throughout the AI lifecycle and ensure clear and understandable explanations of AI-based decisions.
  • Rigorous Governance – Implement robust governance practices to prevent discrimination and ensure compliance with data protection laws.
  • Fundamental Rights Impact Assessment – Conduct thorough assessments to identify and mitigate potential risks to fundamental rights.
  • Data Quality and Bias Detection – Ensure training and testing datasets are representative, accurate, and free of bias to prevent adverse impacts.
  • System Performance and Security – Ensure consistent performance, accuracy, and cybersecurity throughout the lifecycle of high-risk AI systems.
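
One way to operationalise these obligations is to track them per system as an explicit compliance record. The dataclass below is a minimal sketch, assuming a firm simply wants a per-system checklist; the field names mirror the bullet points above (plus the EU database registration noted earlier) and are not prescribed by the Act.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class HighRiskComplianceRecord:
        """Checklist for one high-risk AI system, mirroring the obligations above."""
        system_name: str
        registered_in_eu_database: bool = False     # EU-wide public database entry
        last_risk_review: Optional[date] = None     # continuous risk management
        technical_docs_current: bool = False        # comprehensible documentation
        human_oversight_defined: bool = False       # oversight and explainability owners
        governance_controls_in_place: bool = False  # anti-discrimination, data protection
        fria_completed: bool = False                # fundamental rights impact assessment
        bias_testing_passed: bool = False           # data quality and bias detection
        security_testing_passed: bool = False       # performance, robustness, cybersecurity
        open_findings: list = field(default_factory=list)

        def is_compliant(self) -> bool:
            """True only when every obligation is met and no findings remain open."""
            return all([
                self.registered_in_eu_database,
                self.last_risk_review is not None,
                self.technical_docs_current,
                self.human_oversight_defined,
                self.governance_controls_in_place,
                self.fria_completed,
                self.bias_testing_passed,
                self.security_testing_passed,
            ]) and not self.open_findings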

To align with the AI Act, in-scope firms should take a structured approach: develop a comprehensive compliance framework to manage AI risks, ensure adherence to the Act, and implement risk mitigation strategies. Firms also need to take an inventory of existing AI assets, such as models, tools, and systems, classifying each into the four risk categories outlined by the Act, with strict governance in place to oversee the transition. Although most applications are currently considered minimal or no risk and fall outside the scope of the regulation entirely, firms should remain vigilant about the AI technologies they deploy in order to maintain compliance.
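
In practice, the inventory-and-classify step reduces to a simple triage loop over the firm's AI estate. The sketch below is illustrative only; the asset names are hypothetical, and the routing messages paraphrase the obligations discussed above.

    # Hypothetical inventory: asset name -> assigned risk tier (see the taxonomy above).
    inventory = {
        "fraud-detection-model": "high",
        "client-support-chatbot": "limited",
        "email-spam-filter": "minimal",
    }

    def triage(assets: dict) -> None:
        """Route each asset to the appropriate compliance workflow by risk tier."""
        for asset, tier in assets.items():
            if tier == "unacceptable":
                print(f"{asset}: prohibited - decommission before the ban applies")
            elif tier == "high":
                print(f"{asset}: open a compliance record and register in the EU database")
            elif tier == "limited":
                print(f"{asset}: disclose to users that they are interacting with AI")
            else:
                print(f"{asset}: minimal risk - monitor, largely out of scope today")

    triage(inventory)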

Following entry into force, the AI Act's provisions apply on a staggered timeline: the prohibitions on unacceptable-risk AI systems apply from 2 February 2025 (six months after entry into force), the obligations on GP AI models from 2 August 2025 (twelve months), and most of the remaining provisions for AI developers, including those governing high-risk AI systems, from 2 August 2026 (twenty-four months). Firms can face penalties of up to seven per cent of global annual turnover for violations involving prohibited AI practices.

For more information, take a look at our newsletter on Substack.


About us

GreySpark Partners is a global business and technology consultancy, specialising in mission-critical areas of the capital markets.

For decades, we have been a lynchpin for the world's most critical financial firms, drawing on our deep expertise to help them adapt to changing regulatory and technological environments.

GreySpark has offices in London, New York, and Hong Kong.

