Fifth Third Bank's Chief Model Risk Officer Presents a Framework for Managing Generative AI in Retail Banking

In last week's QuantUniversity guest lecture series, Rafic Fahs, Chief Model Risk Officer at Fifth Third Bank, discussed the challenges of, and solutions for, managing generative AI in retail banking. The discussion emphasized balancing innovation with robust risk management to ensure the secure and efficient use of this transformative technology.

Key Risks and Challenges

Generative AI presents unique risks that traditional model risk management frameworks might not fully address. Some of the key challenges highlighted by Fahs include:

  • Confabulation: Generative AI models, particularly large language models (LLMs), can generate outputs that sound plausible but are factually incorrect.
  • Harmful Recommendations: The models might provide recommendations that could lead to negative consequences if acted upon without proper human oversight.
  • Mishandling Sensitive Data: Generative AI can inadvertently reveal or misuse sensitive information, leading to privacy breaches and regulatory violations.
  • Runaway Models: The widespread accessibility of generative AI tools like ChatGPT can result in models being developed and deployed outside the purview of established risk management practices.
  • Zero-Shot Applications: Using LLMs without appropriate training on banking-specific data can lead to inaccurate and unreliable results.
  • Lack of Initial Controls: Existing control protocols may not adequately address the novel modeling scenarios presented by generative AI. This necessitates establishing robust controls early in the model development lifecycle.
  • Scope Criteria: Determining the scope of models subject to rigorous risk management is crucial, as the inherent risk of generative AI applications varies greatly. For example, customer-facing applications like chatbots require more stringent assessment than internal task automation tools.
  • Validation and Change Management: Traditional model validation techniques might not be sufficient for generative AI. There's a need to incorporate assessments of prompt creativity, contextual understanding, performance of subjective outputs, and synthetic data quality.

Addressing the Challenges: A Comprehensive Framework

To mitigate these risks, Fahs proposes a comprehensive framework consisting of several key components:

  • AI Governance Working Group: Establishing a diverse working group comprising experts from Model Risk, Compliance, Enterprise Risk, IT, Enterprise Data Management, Third-Party Risk Management, Legal, and business units is crucial. This group facilitates proactive identification and management of runaway models, addresses initial control gaps, manages access levels, monitors for ethical usage, ensures human oversight, and drives overall progress in generative AI adoption.
  • Exemption Criteria: A well-defined set of exemption criteria can help streamline risk management processes. Generative AI systems may be exempt from the full model risk management framework if they meet specific criteria, such as not directly impacting critical business decisions, handling sensitive data, having regulatory relevance, producing customer-facing outputs, or posing high ethical, legal, or security risks.
  • Generative AI Model Lifecycle Management Framework: For use cases involving material risk, a robust lifecycle management framework is essential. This framework should cover all stages of a generative AI model's lifecycle, from identification and tiering to development, validation, implementation, monitoring, and change management.
  • Customized Risk Tiering: Risk tiering should be tailored specifically for generative AI, recognizing that these models present unique risks compared to traditional models. The tiering system should consider factors like confabulation risk, potential for harmful recommendations, data privacy risks, and the significance, reliance, and uncertainty associated with the model's outputs.
  • Enhanced Validation Templates: Standardizing validation processes for generative AI is key to ensuring consistency and accuracy in risk assessments. This includes publishing specific documentation and validation templates that outline expectations and capture key risks associated with these models.
  • Training: Comprehensive training programs are crucial to educate model owners and other stakeholders on the intricacies of the model risk management lifecycle for generative AI models. (Check out the comprehensive six-month ML and AI Certificate program QuantUniversity is offering in partnership with PRMIA, the Professional Risk Managers' International Association: https://prmia.org/Public/Shared_Content/Events/PRMIA_Event_Display.aspx?EventKey=8864)
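To make the "Customized Risk Tiering" idea concrete, here is a minimal sketch of how a weighted tiering rubric for generative AI could be encoded. The factor names, weights, and thresholds below are illustrative assumptions, not Fifth Third Bank's actual criteria.

```python
# Illustrative generative-AI risk-tiering rubric.
# Factors, weights, and cutoffs are hypothetical examples.

FACTORS = {
    "confabulation_risk": 3,      # weight: plausible-but-wrong outputs
    "harmful_recommendation": 3,  # weight: bad advice acted on downstream
    "data_privacy_risk": 2,       # weight: exposure of sensitive data
    "output_reliance": 2,         # weight: how much decisions depend on output
}

def risk_tier(scores: dict[str, int]) -> str:
    """Map per-factor scores (0-5 each) to a tier via a weighted sum."""
    total = sum(FACTORS[name] * scores.get(name, 0) for name in FACTORS)
    ratio = total / (5 * sum(FACTORS.values()))
    if ratio >= 0.6:
        return "Tier 1 (full validation)"
    if ratio >= 0.3:
        return "Tier 2 (targeted validation)"
    return "Tier 3 (light-touch review)"

# A customer-facing chatbot scores high on most factors:
chatbot = {"confabulation_risk": 5, "harmful_recommendation": 4,
           "data_privacy_risk": 4, "output_reliance": 4}
print(risk_tier(chatbot))  # lands in the highest-risk tier
```

In practice the factor list and cutoffs would come from the AI governance working group and be documented alongside the tiering policy.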

Scope Criteria and Override Controls

Determining the scope of models subject to the framework involves assessing the inherent risk associated with various use cases. This assessment considers the user group, usage category, and specific task performed using generative AI.

For example, a technical expert using generative AI for a non-generative classification task might be considered out of scope. In contrast, a customer service representative using open text generation to interact with customers would likely be considered in-scope, particularly in the absence of appropriate controls.
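The scoping decision described above can be sketched as a simple rule over the three assessment dimensions (user group, usage category, task). The group names and the default-to-in-scope rule are illustrative assumptions, not the bank's actual policy.

```python
# Hypothetical sketch of the in-scope / out-of-scope decision.
# User groups and usage categories are illustrative labels.

def in_scope(user_group: str, usage: str, customer_facing: bool) -> bool:
    """Return True if the use case should enter the full MRM framework."""
    # Open text generation aimed at customers is treated as in-scope.
    if usage == "open_text_generation" and customer_facing:
        return True
    # Technical experts using the model for a non-generative task
    # (e.g., classification) may fall outside the framework.
    if user_group == "technical_expert" and usage == "classification":
        return False
    # Default conservatively to in-scope for everything else.
    return True
```

Defaulting to in-scope when no exemption rule fires mirrors the conservative posture a model risk function would typically take with a novel technology.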

Override controls can help mitigate risks and potentially reclassify certain in-scope use cases to a lower risk tier. Some of the override controls discussed include:

  • Comprehensive Human-in-the-Loop (HITL) oversight
  • Use of proven, low-risk, proprietary algorithms
  • Strict data access and role-based controls
  • Sandbox environments for non-critical models
  • Automated output with mandatory human review
  • Restricting certain models to non-public facing use
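One way to operationalize the override controls listed above is to let each recognized control move a use case down the risk scale, with a floor at the lowest tier. The one-tier-per-control rule here is an illustrative assumption; an actual policy would weigh controls individually.

```python
# Sketch of tier reclassification via override controls.
# Control names mirror the list above; the adjustment rule is hypothetical.

OVERRIDE_CONTROLS = {
    "human_in_the_loop",
    "proprietary_low_risk_algorithm",
    "role_based_data_access",
    "sandbox_environment",
    "mandatory_human_review",
    "non_public_facing_only",
}

def adjusted_tier(base_tier: int, controls: set[str]) -> int:
    """Lower risk (tier 1 = highest) by one tier per recognized
    override control, never going below tier 3."""
    return min(3, base_tier + len(controls & OVERRIDE_CONTROLS))
```

For example, a tier-1 chatbot use case with mandatory human review could be reassessed as tier 2 under this rule.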

Validation Considerations for Generative AI

Validating generative AI models requires a nuanced approach, going beyond traditional performance metrics. Fahs recommends incorporating specific considerations into validation templates, including:

  • Clear Metrics: Defining clear metrics for both quality (coherence, fluency, relevance) and safety (detection of inappropriate, biased, or harmful content).
  • Contextual Evaluation: Assessing outputs within the context of their intended use.
  • Quantitative & Qualitative Methods: Combining quantitative metrics like F1 score, BLEU score, and ROUGE score with qualitative methods like expert reviews and user feedback.
  • Synthetic Data Quality: Ensuring the representativeness and lack of bias in synthetic data used for training and validation.
  • Prompt Creativity & Contextual Understanding: Evaluating the model's ability to handle diverse prompts and generate creative outputs within the appropriate context.
  • Documentation: Maintaining thorough documentation throughout the model lifecycle, including data preparation, training process, prompt templates, inference parameters, testing procedures, and bias testing results.
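To illustrate one of the quantitative metrics named above, here is a ROUGE-1-style unigram recall between a model answer and a reference answer, written in plain Python. A real validation pipeline would use a maintained metrics library and score against multiple references; this sketch only shows the idea.

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate
    (clipped counts, as in ROUGE-1 recall)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

ref = "the fee is waived for balances above five thousand dollars"
ans = "fees are waived when the balance exceeds five thousand dollars"
print(round(rouge1_recall(ans, ref), 2))
```

Note how the metric misses near-matches ("fee" vs. "fees", "balances" vs. "balance"), which is exactly why Fahs recommends pairing such quantitative scores with qualitative expert review.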

Fahs' talk underscores the evolving landscape of model risk management in the age of generative AI. Financial institutions must proactively adapt their frameworks to address the unique challenges posed by these powerful models. By implementing robust controls, establishing comprehensive lifecycle management processes, and tailoring risk assessments to specific use cases, institutions can harness the innovative potential of generative AI while effectively managing associated risks.


Slides and Video of the workshop

The slides and video from yesterday's workshop are available here:

If you don't have a www.qu.academy account, register using the code "QUFallSchool24" to get access to the video and slides. If you already have an account, just log in and you will see this and all the other lectures from the QuantUniversity AI Fall School!


Join 5,000+ subscribers to QuantUniversity's weekly AI & Risk Management newsletter to get valuable insights from academics, industry professionals, and thought leaders. You will also be alerted about the guest lecture series I host every week!

Yours truly

Sri Krishnamurthy, CFA, CAP

QuantUniversity

