Navigating the Urgent and Significant New Regulatory Landscape of Digital Operational Resilience and AI Governance

Introduction

The financial services industry has been transformed by new technologies, deepening its reliance on IT systems and introducing new categories of risk. In response, the European Union has introduced the Digital Operational Resilience Act (DORA) and the AI Act, two regulations designed to strengthen the resilience and integrity of digital and AI systems within financial services.

DORA and the AI Act: An Overview

DORA and the AI Act establish new requirements for digital operational resilience and the ethical use of AI in the financial sector. Both demand robust governance structures and comprehensive risk management frameworks, and both oblige financial institutions to manage ICT risks, incidents, and dependencies on third-party ICT and AI providers. Prompt incident reporting is required under both regulations to ensure transparency and timely mitigation.

Critical Regulatory Articles, Lex Specialis, and NSA Designations

The AI Act contains several critical clauses that directly affect DORA’s implementation and highlight the interplay between the two frameworks. These articles, together with the concept of Lex Specialis and the designation of national supervisory authorities (NSAs), form the cornerstone of the new regulatory landscape.

The AI Act will serve as Lex Specialis, taking precedence over DORA in specific scenarios where AI-specific governance and risk management requirements are more detailed. The AI Act’s provisions for high-risk AI systems and AI-specific incident reporting will prevail over DORA’s general regulations. This ensures that the most pertinent and rigorous standards are applied to the governance and management of high-risk AI systems.

The articles below collectively ensure that the AI Act’s stringent standards are integrated into the broader ICT governance frameworks established by DORA. This integration is vital for maintaining a cohesive and resilient digital operational landscape within the financial sector.

  • Article 9 - Risk Management System: Requires providers of high-risk AI systems to establish a risk management system integrated into DORA’s ICT risk management frameworks.
  • Article 10 - Data and Data Governance: Mandates data governance standards aligning with DORA’s requirements.
  • Article 11 - Technical Documentation: Obligates maintaining technical documentation for AI systems, incorporated into DORA’s ICT governance documentation.
  • Article 12 - Record-Keeping: Requires maintaining logs for AI systems, aligning with DORA’s ICT log requirements (see the illustrative sketch after this list).
  • Article 13 - Transparency: Imposes transparency requirements integrated into DORA’s governance frameworks.
  • Article 14 - Human Oversight: Ensures human oversight of AI systems, complementing DORA’s emphasis on governance and accountability.
  • Article 15 - Accuracy, Robustness and Cybersecurity: Mandates appropriate levels of accuracy, robustness, and cybersecurity for AI systems, aligning with DORA’s resilience requirements.
  • Article 17(4): Allows compliance with specific internal governance requirements under Union financial services law to be deemed sufficient for the quality management system.
  • Article 18(3): Requires maintaining technical documentation as part of the documentation kept under Union financial services law.
  • Article 19(2): Obligates maintaining logs generated by high-risk AI systems as part of the documentation under financial services law.
  • Article 26(5): Mandates monitoring high-risk AI systems and reporting significant incidents to relevant authorities.
  • Article 26(6): Requires maintaining logs of high-risk AI systems as part of financial service documentation.
  • Article 54 - National Supervisory Authorities: Each Member State must designate a national supervisory authority (NSA) responsible for monitoring the application and implementation of the AI Act, including in the financial services sector.
  • Article 72(4): Deems the post-market monitoring obligations for high-risk AI systems placed on the market by financial institutions to be fulfilled through compliance with the internal governance rules of Union financial services law.
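
To make the record-keeping overlap concrete, the following sketch shows one way an institution might model a single log entry that serves both the AI Act’s record-keeping duties (Articles 12, 19(2) and 26(6)) and the ICT documentation DORA expects. It is a minimal illustration in Python; every class, field, and severity label is a hypothetical design choice, not something either regulation prescribes.

from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    # Hypothetical severity scale; neither act prescribes these labels.
    ROUTINE = "routine"
    DEGRADED = "degraded"
    SERIOUS = "serious"   # candidate for AI Act serious-incident reporting
    MAJOR = "major"       # candidate for DORA major-incident reporting


@dataclass
class AISystemLogEntry:
    # One log record for a high-risk AI system, retained under both regimes:
    # AI Act Articles 12/19(2)/26(6) (automatically generated logs kept as
    # part of financial services documentation) and DORA (ICT logs feeding
    # incident classification and reporting).
    system_id: str                        # internal identifier of the AI system
    timestamp: datetime
    event: str                            # description of what occurred
    severity: Severity
    input_reference: str | None = None    # pointer to the input data, not the data itself
    human_reviewer: str | None = None     # supports an Article 14 human-oversight audit trail

    def requires_escalation(self) -> bool:
        # Whether this entry should enter the incident-reporting workflow.
        # The mapping from severity to reporting duty is for the institution's
        # compliance function to define; this rule is a placeholder.
        return self.severity in (Severity.SERIOUS, Severity.MAJOR)


# Example: record a degraded-performance event and check whether to escalate.
entry = AISystemLogEntry(
    system_id="credit-scoring-v3",
    timestamp=datetime.now(timezone.utc),
    event="Model confidence fell below the monitoring threshold",
    severity=Severity.SERIOUS,
    human_reviewer="risk.officer@example.com",
)
if entry.requires_escalation():
    print(f"Escalate {entry.system_id}: {entry.event}")

The point is not the data model itself but the design principle: a single, well-governed log store can feed both audit trails, provided the mapping from severity to each regime’s reporting thresholds is documented.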

Overlaps and Practical Considerations

  1. Testing and Auditing: Both acts emphasise continuous monitoring through testing and auditing. DORA mandates digital operational resilience testing, while the AI Act requires regular AI system validation.
  2. Accountability and Liability: Both acts impose organisational accountability for digital and AI systems. DORA assigns responsibility for ICT resilience to senior management, while the AI Act requires documentation and accountability for AI system development.
  3. Supervisory Frameworks: Supervisory frameworks under both acts ensure effective compliance monitoring. DORA empowers the European Supervisory Authorities (ESAs) to oversee critical third-party ICT providers, while the AI Act designates national authorities to supervise AI compliance.

Conclusion

Financial institutions stand at a pivotal juncture: aligning with DORA and the AI Act is not just a regulatory requirement but a strategic imperative for long-term resilience and competitiveness. By conducting comprehensive risk assessments that cover both ICT and AI-related threats, institutions can preempt potential disruptions and safeguard their operations. Incident response plans tailored to ICT and AI incidents enable swift, effective responses to breaches, minimising their impact, while governance frameworks that blend ICT resilience with AI ethical standards provide a sound foundation for navigating the complexities of digital transformation. This strategic alignment will protect against diverse risks, enhance stakeholder trust, and support sustainable growth in the digital age.

Article by Dr Ian Gauci

Dr Gauci is the Managing Partner of GTG, a technology-focused corporate and commercial law firm that has been at the forefront of major developments in fintech, cybersecurity, telecommunications, and technology-related legislation.

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.
