Qatar Central Bank's AI guidelines for the financial sector

Qatar Central Bank's AI guidelines for the financial sector

In this edition, we explore the latest guidelines issued by the Qatar Central Bank (QCB) to ensure the ethical use of AI in financial services offered in Qatar. These guidelines mandate that QCB-regulated financial firms and entities adhere to the Artificial Intelligence Guideline , whether they develop and implement AI in-house, purchase AI systems, or outsource AI-dependent processes or functions.

Financial firms or entities are required to develop a clear AI strategy based on their needs and risk appetite. To ensure consistency with the entity's other strategic reviews, this AI strategy must be periodically reviewed. In terms of operational risk management, entities need to provide QCB a business case and address information and communication technology requirements, information security, and business continuity, disaster recovery, and resilience. AI strategies should include an architectural roadmap and implementation plan that covers the IT environment, the transition from the current environment to the target environment, the operating model, and any organizational changes and additional skills. Additionally, entities need to allocate adequate resources to manage their AI projects and meet ongoing business requirements.

Interestingly, the guidelines specify that the board of directors and senior management of financial firms remain accountable for the outcomes and decisions of their AI systems, including those that make decisions on behalf of the financial firms. Based on the requirements listed in the guidelines for the board of directors and senior management it is clear that moving forward, individuals in these roles must be technologically proficient and possess a deep understanding of AI and technology risks to effectively manage and oversee AI systems.

Key responsibilities of the Board of Directors (BOD)

BOD will approve AI exposure levels within the risk framework, evaluating the suitability of governance structures, assigning clear accountability, and ensuring adequate human resources for AI functions.

Key responsibilities of the senior management

Senior management must include members knowledgeable in technology risks, particularly AI. Senior managers are responsible for assessing, understanding, and monitoring the firms' AI reliance. They must ensure appropriate personnel oversee AI portfolio deployment and provide the BOD with clear, consistent, timely, and detailed information for effective oversight and management challenge.

Ethical concerns

Under the guidelines, firms must either establish a dedicated function to oversee AI or delegate this responsibility to an existing function within the entity. The function responsible for AI oversight must create or utilize appropriate committees to evaluate AI use cases before implementation. Firms must manage AI-related risks within their enterprise risk management frameworks and ensure their AI systems promote fair treatment, produce objective, consistent, ethical outcomes, and align with the firm's ethical standards, values, and codes of conduct.

Firms may appoint an ethics oversight body or ethics-focused resources within their corporate structure. They should develop monitoring controls to measure the fairness of AI models and establish policies for initiating remedial actions. Firms may define fairness for AI models by system area or use case and assess the impact of AI models on individuals or groups to ensure no systematic disadvantage occurs without clear, documented justification. Precautions must be taken to minimize unintentional or undeclared bias.

AI Governance Policy

Firms are required to establish clear AI governance policies to ensure proper management of AI-related functions and develop a risk management system to evaluate and mitigate AI-related risks, preventing potential harms. They must identify 'high-risk' AI systems. These high-risk systems are subject to stricter scrutiny, higher disclosure, and more rigorous risk management requirements.

AI Governance

Entities must ensure that AI governance functions are aware of their roles, properly trained, and adequately resourced. They need to allocate key roles and responsibilities for managing the AI portfolio effectively. Regular audits should be conducted to cover regulatory compliance, governance, customer interaction, risk management, and control evaluation. It is essential to maintain, monitor, document, and review deployed AI models consistently. Roles must be clearly allocated between model owners, developers, and approvers. Additionally, staff should be trained to interpret AI outputs, manage bias, and understand the benefits, risks, and limitations of AI.

Risk Management

Entities must evaluate the risks associated with AI deployment and assess its use in critical organizational processes. AI risk levels should be determined through risk assessments of processes and functions related to AI, aligned with overall process and function risk assessments. AI risk scores should also include additional factors, such as the risk of AI systems without human oversight and the risk of AI-assisted outcomes determined by humans. Vendor reputation and information access for third-party AI systems should also be considered.

Risk Classification

Entities must determine if an AI system is high risk. An AI system is considered high risk if it meets the following criteria:

  • potential harm to individuals,
  • material impact on employee decisions, or
  • the processing of sensitive personal information.

Risk Management

Entities must develop a risk management system tailored to handle AI-related risks for each AI system profile.


High-Risk AI Management Framework

Entities that use or provide high-risk AI systems must establish a comprehensive framework. This framework must be documented systematically through written policies, procedures, and instructions, covering the following aspects:

  1. Regulatory Compliance: Ensure adherence to Central Bank guidelines.
  2. Management Procedures: Outline how to manage or modify high-risk AI systems.
  3. Design and Development: Include techniques, procedures, and systematic actions for design control, design verification, development, quality control, and quality assurance of high-risk AI systems.
  4. Testing and Validation: Conduct examination, testing, and validation before, during, and after development, with specified review frequencies.
  5. Data Management: Implement systems and procedures for data collection, analysis, labeling, storage, filtration, mining, aggregation, retention, and other data operations related to high-risk AI systems.
  6. Monitoring Systems: Set up, implement, and maintain monitoring systems.
  7. Record Keeping: Maintain systems and procedures for keeping all relevant documentation and information.
  8. Resource Management: Ensure security of supply-related measures.
  9. Accountability Framework: Define responsibilities of management and staff.

The implementation of these aspects should be proportionate to the size of the provider's organization and the use of high-risk AI.

AI Register

Entities must maintain an updated register of all AI system arrangements. They should disclose the criteria for high-risk classification and provide a high-level risk and impact assessment to the central bank. Additionally, the full register must be disclosed to the central bank annually and upon request.

Entities must include the classification of AI systems as high risk or not, specify the entity's role (user or provider), the category reflecting the AI system's function, and the human oversight protocol. For purchased, licensed, or outsourced AI systems, they must disclose the provider's details and conduct a third-party supplier assessment. Additionally, they must maintain a detailed description of high-risk AI systems, including lifecycle development, data use, testing, tracking, and risk assessment or audit dates with summaries of results or next planned assessments.

QCB AI Approval

Entities must obtain official central bank approval before launching a new AI system or making material modifications to an existing one. Additionally, they must secure central bank approval before signing any high-risk AI purchase, licensing, or outsourcing agreement. The central bank may require further evaluation of a particular AI system in a sandbox environment before granting approval.

Outsourcing Requirements

Entities must conduct regular due diligence on outsourcing service providers, including reviewing their identity, legal status, activities, and financial position, and obtain prior consent from QCB. Before selecting an outsourcing provider, entities must assess their capabilities and expertise and conduct a risk assessment, considering data location and any third-party vendors involved. Periodic reviews of the provider's suitability and performance are required.

Board approval is necessary for outsourcing any AI-related function, and this must be documented. Entities must ensure confidentiality and security of information accessed by the provider and establish reporting and monitoring mechanisms to maintain the integrity and quality of the provider's work. Both external and internal auditors must be able to review the provider's accounting records and internal controls.

Entities must have a contingency plan for sudden termination of the outsourcing arrangement and a comprehensive service agreement with the provider. This agreement should include clauses on monitoring compliance with laws and standards, professional ethics, defined roles and responsibilities, confidentiality and security procedures, business continuity management, termination rights, data wiping upon termination, and QCB's right to audit the provider's accounts. Entities are responsible for all acts and omissions of their outsourcing service providers.

Human Oversight of AI Systems

Entities must establish a human oversight protocol for all AI systems. High-risk AI systems should be designed with appropriate human-machine interface tools to allow effective oversight by natural persons during their use. A competent and trained supervisor with the necessary authority must be assigned to oversee the AI system. The entity must ensure the supervisor has the tools and authority to:

  1. Understand the capacities and limitations of the high-risk AI system and monitor its operation.
  2. Correctly interpret the AI system's output using available interpretation tools and methods.
  3. Decide when to disregard, override, or reverse the AI system's output.
  4. Intervene in the AI system's operation or stop it using a stop button or similar procedure.

Requirements for Fully Autonomous AI Systems

Non-High Risk AI Systems

If an AI system operates without human oversight and full control over activities, the entity must provide detailed information to QCB to support its use, even if the system is considered low or no risk.

High-Risk AI Systems

Entities planning to use or provide a high-risk AI system without human oversight must obtain prior approval from QCB before launching. Once the approval is granted, they must ensure the AI system has built-in guardrails or specific limits that cannot be overridden by the AI. They must regularly review these guardrails and limits on a set schedule or in response to external factors.

Requirements for Volatility Spikes

Entities must establish built-in limits linked to warning levels or automatic closure routines. Supervisors must have the ability to shut down the system if its outputs or related data appear abnormal.

Human Exception Monitoring

Entities using AI systems that require human oversight must ensure several key measures.

  • They need to provide human-machine interface tools that allow supervisors to take control of the AI system.
  • Human monitoring should offer information that supervisors can respond to promptly, enabling them to adjust algorithm parameters during operation.
  • The design and development of AI systems should facilitate oversight and timely decision-making by supervisors.
  • Before using the AI system, appropriate human oversight measures must be in place.
  • The AI system should have built-in operational constraints that it cannot override and must be responsive to the supervisor. Additionally, operators should be trained to monitor AI-generated output and use it for decision-making.

AI System Approvals, Training, and Testing

Entities must submit the full results of training and validation testing for any new high-risk AI systems to QCB for approval. Users of AI systems must also submit the provider's instructions, technical results, and training outcomes to QCB. Users need to incorporate these details into their procedures and provide their own testing results, data sources, and human oversight plans. If a user entity makes any material changes to an AI model and rebrands it under their own name or trademark, they will be considered a provider and must comply with the guidelines applicable to providers.

AI Data Governance Requirements

Entities must manage their AI systems throughout their lifecycle, either directly or through contractual agreements with providers. Providers are responsible for supplying all relevant data from development, testing, and performance evaluations. Entities must ensure this data is available to QCB.

Entities should use distinct data sets for training, validation, and testing AI systems, ensuring internal data quality controls. Models must be reviewed to identify any false causal relationships, with validation potentially conducted by an independent function or external organization. Testing data must be used to determine AI system accuracy and check for systematic bias across different demographic groups.

In order to ensure accurate, complete, consistent, secure, and timely data, a data governance framework is required. This framework should document data quality requirements, identify gaps, and outline steps to address them. High-risk systems should use training, validation, and testing data sets, with appropriate design choices and data governance practices.

Data collection, preparation, processing, and assumptions must be suitable for AI system development. Entities should assess data availability, quantity, and suitability, identify biases, and address data gaps. Testing data sets must be relevant, representative, and statistically appropriate, sourced from reputable vendors.

To ensure accuracy and reliability, AI models need rigorous, independent validation and testing. In order to develop AI systems, entities must ensure high-quality data, prevent discrimination, and ensure safety. Privacy protection measures, such as pseudonymization or encryption, may be used by providers of high-risk AI systems.

AI Security Requirements

Entities must comply with sector-specific security regulations when deploying AI solutions and develop a program for AI trust, risk, and security management (AI TRiSM). This program should include tools for content anomaly detection, data protection, third-party system security, acceptable use policies, and processes for assessing privacy, fairness, and bias. Providers of AI models must abide by these programs, using defined AI model management processes and security controls.

Entities must examine AI models for vulnerabilities before deployment and address any security findings. They must protect AI models from integrity-related attacks, query attacks, and prompt injections. Data loss prevention tools should be deployed to protect sensitive data, and tools for content anomaly detection should also be used.

The design of AI models must be documented, including input data sources, data quality checks, model design choices, methodologies, expected outcomes, evaluation metrics, model use, validation, monitoring, and review. Entities must support monitoring systems for high-risk AI systems, ensuring compliance with provider obligations and evaluating AI system performance throughout their lifecycle.

Entities must maintain auditable records of their AI system experiences, including audit logs, decision traceability, design documentation, versioning, and original data sets. In case of errors or failures, entities should have processes to review and rectify issues, and report serious incidents to QCB.

The use of AI in customer interactions must be transparent, with clear, understandable disclosures about AI products and services, associated risks, and limitations. Customers should be informed about updates to AI systems and have access to instructions and feedback channels. AI must be used with customer consent, and personal information must be kept up-to-date. Disclosure of intellectual property and security details is not required, except for fraud and identity theft detection.

Customer Rights and Recourse

  • Mechanism for Inquiries and Reviews: Entities must provide a way for customers to raise inquiries about AI decisions. Customers can request reviews of AI decisions made without human intervention.
  • Two-Choice Process for Customers: Customers can either supply data to alter the AI system and resubmit it. Alternatively, they can request a review of a negative AI decision by a qualified human decision maker. Complaints must be handled through standard customer complaint processes.
  • Option to Opt-Out: Entities should consider offering customers the option to opt-out of AI products or services. This option can be provided by default or upon request, based on factors like risk, decision reversibility, and technical feasibility. If opting out is not provided, other recourse options must be available, such as channels for reviewing decisions.


Looking Ahead

In the next issue, we will discuss the current state of AI regulation in Saudi Arabia.

Thank you for joining me on this exploration of AI and law. Stay tuned for more in-depth analyses and discussions in my upcoming newsletters. Let's navigate this exciting and challenging landscape together.

Connect with me

I welcome your thoughts and feedback on this newsletter. Connect with me on LinkedIn to continue the conversation and stay updated on the latest developments in AI and law.

Disclaimer

The views and opinions expressed in this newsletter are solely my own and do not reflect the official policy or position of my employer, Cognizant Technology Solutions. This newsletter is an independent publication and has no affiliation with #Cognizant.




























































































































































































































































































































































































































要查看或添加评论,请登录