The AI Regulatory Journey Has Officially Begun
GreySpark Partners
Business & Technology Consulting, Specialising in Mission-critical Areas of the Capital Markets Industry
On 1 August 2024, the EU's Artificial Intelligence Act (AI Act) came into force. The regulation is the first of its kind: a comprehensive, sector-agnostic framework governing the use of AI systems and models by manufacturers, developers, deployers, and users in the EU.
At its core, the AI Act aims to achieve trust, transparency, and accountability, providing the foundation for the safe adoption of AI in a way that addresses concerns about its ethical use.
The AI Act categorises AI systems into four risk tiers:
- Unacceptable risk – practices that are prohibited outright, such as social scoring by public authorities
- High risk – systems subject to strict obligations before they can be placed on the market or put into service
- Limited risk – systems subject to transparency obligations, such as chatbots disclosing that users are interacting with AI
- Minimal or no risk – systems that fall largely outside the scope of the regulation
General purpose (GP) AI models (including generative AI models, such as ChatGPT) are treated separately from the risk categorisations above. All GP AI providers must comply with existing EU copyright and cybersecurity laws and, in particular, make information available to downstream users who intend to integrate the GP AI model into their own AI systems. Providers of GP AI models deemed to pose systemic risk face additional obligations, including assessing and mitigating those risks and conducting adversarial testing of the model.
Of course, the financial services industry relies heavily on AI, with key use cases including trading, fraud detection, and risk management. Financial services firms in the EU will have to adapt processes and systems in order to stay aligned with the regulation. In particular, high-risk applications will require financial institutions to prioritise the following:
- Robust risk management and data governance across the AI lifecycle
- Technical documentation and record-keeping sufficient to demonstrate compliance
- Transparency towards users and effective human oversight of AI-driven decisions
- Accuracy, robustness, and cybersecurity of deployed systems
To align with the AI Act, in-scope firms should take a structured approach. They should develop a comprehensive compliance framework to manage AI risks, ensure adherence to the Act, and implement risk mitigation strategies. In addition, firms need to take inventory of existing AI assets like models, tools, and systems, classifying each into the four risk categories outlined by the Act, with strict governance in place to oversee the transition. Although most applications are currently considered low/no-risk and won’t be in scope of the regulation at all, firms should still remain vigilant of the AI technologies that they are deploying in order to maintain compliance.
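For illustration only, the inventory-and-classification step described above could be sketched as a simple register of AI assets tagged with the Act's four risk tiers. The asset names, use cases, and tier assignments below are hypothetical assumptions, not legal classifications; actual classification requires case-by-case analysis against the Act.

```python
# A minimal, hypothetical sketch of an AI-asset inventory keyed to the
# AI Act's four risk tiers. Tier assignments here are illustrative only.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely out of scope


@dataclass
class AIAsset:
    name: str
    use_case: str
    tier: RiskTier


# Hypothetical inventory entries for a financial services firm.
inventory = [
    AIAsset("fraud-detector-v2", "fraud detection", RiskTier.HIGH),
    AIAsset("doc-summariser", "internal productivity", RiskTier.MINIMAL),
    AIAsset("client-chatbot", "customer service", RiskTier.LIMITED),
]

# Surface the assets that would need the Act's high-risk controls first.
needs_controls = [a.name for a in inventory if a.tier is RiskTier.HIGH]
print(needs_controls)  # ['fraud-detector-v2']
```

Keeping the register as structured data rather than a spreadsheet makes it straightforward to re-run the triage as classifications are reviewed or as new assets are added.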
With the Act now in force, its provisions will apply on a staggered timeline. The prohibitions on unacceptable-risk AI systems apply from 2 February 2025, six months after entry into force. The obligations relating to GP AI models apply from 2 August 2025, and most of the remaining provisions, including those governing high-risk AI systems, apply from 2 August 2026. Firms face penalties of up to seven per cent of global annual turnover for deploying prohibited AI practices.
For more information, take a look at our Substack newsletter platform.
About us
GreySpark Partners is a global business and technology consultancy, specialising in mission-critical areas of the capital markets.
For decades, we have been a linchpin for the world's most critical financial firms, drawing on our deep expertise to help them adapt to changing regulatory and technological environments.
GreySpark has offices in London, New York, and Hong Kong.