A Practical AI Governance Framework

Introduction

In today's fast-paced AI landscape, organizations need a pragmatic approach to AI governance that balances innovation with risk management. Traditional frameworks often hinder progress with overly broad definitions and blanket restrictions. Our new framework, the AI Opportunity and Risk Assessment (AORA) Matrix, focuses on specific risks for actual use cases, enabling AI adoption while protecting organizations. By narrowing the focus to relevant risks and opportunities, this approach allows companies to move forward with beneficial AI projects while maintaining a realistic view of potential pitfalls.

Framework Overview

Our AORA framework provides a structured approach to identifying, assessing, and managing AI-related risks within an organization. It begins by identifying specific AI use cases across different business units and assessing their potential impact and benefits. This crucial step helps prioritize efforts on the most promising AI applications. The framework then focuses on selecting the right technology solution and build options for each use case, as these choices determine which risk categories are relevant and likely. A comprehensive risk evaluation process follows, assessing each use case against the applicable risk categories. This approach allows for a nuanced understanding of potential risks, their impacts, and likelihood of occurrence. The framework also considers contextual factors and emphasizes adaptability to different business needs and changing environments.

Below are the key components of the framework, which can be used as resources throughout the risk assessment process.

AI Use Case Identification and Impact Assessment (Opportunity)

Unlike most AI governance frameworks that begin with broad explanations of AI technologies, our approach starts with identifying specific use cases. This practical method allows organizations to focus on what they want to achieve with AI, why they want to pursue these goals, and the potential impact of each initiative. By starting with use cases, we ground the discussion in tangible objectives and outcomes, making it easier to assess risks and benefits in a meaningful context.

To help guide the identification process, we've outlined several general categories of AI use cases. These categories are not exhaustive but serve as a starting point to inspire thinking about potential AI applications within your organization. Feel free to adapt these categories or add your own based on your specific industry and organizational needs.

General Categories of AI Use Cases

  1. Internal Efficiency
  2. Decision Support and Analytics
  3. Enhanced Customer Experience
  4. Revenue Growth and Market Share Expansion

Assessing Impact and Benefits

Once you have selected potential use cases you would like to pursue, it's crucial to assess the potential benefits in two primary dimensions: dollar value and strategic value. This assessment helps prioritize initiatives and balance against the risk incurred to get to a final decision.

  1. Dollar Value
  2. Strategic Value

Recognizing that precise value calculations can be challenging, especially for strategic benefits or long-term impacts, we recommend using simplified sizing methods:

  • T-shirt Sizing: Categorize use cases into sizes like Small, Medium, Large, and Extra Large based on their potential impact.
  • Relative Sizing: Compare use cases against each other, ranking them in order of potential benefit or impact.

These methods allow for quick, intuitive comparisons between use cases without the need for exact figures. They are particularly useful in the early stages of assessment when detailed data may not be available.
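T-shirt sizing reduces ranking to simple ordinal comparison. A minimal sketch, assuming a four-step size scale and illustrative use-case names (both assumptions, not from the framework):

```python
# Map T-shirt sizes to ordinal scores so use cases can be ranked
# without exact dollar figures.
TSHIRT_SCORES = {"S": 1, "M": 2, "L": 3, "XL": 4}


def rank_use_cases(sizings):
    """Return use-case names ordered from highest to lowest estimated impact.

    sizings: dict mapping use-case name -> T-shirt size ("S"/"M"/"L"/"XL").
    """
    return sorted(sizings, key=lambda name: TSHIRT_SCORES[sizings[name]], reverse=True)


ranked = rank_use_cases({
    "Internal chatbot": "M",
    "Churn prediction": "L",
    "Invoice automation": "S",
})
# Highest estimated impact comes first: ["Churn prediction", "Internal chatbot", "Invoice automation"]
```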

AI Technology Scope

Once you have selected potential use cases for AI, you need to narrow down the scope of the AI systems and build options involved. This step is not about designing the detailed technical solution for your use cases. Instead, it focuses on categorizing the type of AI used for each use case and determining your build option. The goal is to provide just enough information to narrow down the relevant risk categories that need to be assessed in the governance framework.

To simplify the technical side of this step, we have abstracted the full spectrum of AI systems into a handful of Types-of-AI categories based on their fundamental nature. With these categories and their high-level descriptions, you should be able to select and assess quickly, without needing a graduate degree in AI.

Types-of-AI Categories:

  1. Generative AI: Systems that can create new content (text, images, audio, video)
  2. Predictive AI: Systems that forecast future outcomes based on historical data
  3. Classification and Content Understanding: AI systems that categorize, interpret, and comprehend various inputs (e.g., images, text, speech)
  4. Automation: AI systems that automate tasks and processes
  5. Expert Systems: AI that emulates the decision-making ability of a human expert
  6. Self-Learning Systems: AI that improves its performance through experience and interaction with an environment
  7. Autonomous Systems: AI capable of operating independently in complex environments (e.g., self-driving cars)
  8. Recommendation Systems: AI that suggests items or content based on user preferences and behavior

Each of your use cases leverages at least one of these Types-of-AI, and may use several.

Build Options:

In addition to specifying the categories, you will also need to specify your build option: buy, build, or customize. Buy means implementing a third-party vendor product for the use case. Build means building the full AI solution in-house, including the model. Customize means writing custom code against an external model or platform. Take a chatbot, for example: if you roll out ChatGPT directly in-house, that's Buy. If you build your own chatbot on your own LLM, that's Build. If you create a web app that calls the OpenAI GPT model through an API, that's Customize.
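The technology scope for a use case can be captured as a simple record. The sketch below is an illustrative assumption of how to represent it in code; the class and field names are hypothetical, not part of the framework itself.

```python
# Hypothetical data model for recording a use case's technology scope:
# one or more Types-of-AI categories plus a single build option.
from dataclasses import dataclass
from enum import Enum


class BuildOption(Enum):
    BUY = "buy"              # deploy a third-party product as-is (e.g., ChatGPT)
    BUILD = "build"          # build the full solution in-house, including the model
    CUSTOMIZE = "customize"  # custom code against an external model or platform


@dataclass
class TechnologyScope:
    use_case: str
    ai_types: list          # one or more Types-of-AI category names
    build_option: BuildOption


# The chatbot example from the text, built as a web app on an external LLM API:
scope = TechnologyScope(
    use_case="Customer support chatbot",
    ai_types=["Generative AI"],
    build_option=BuildOption.CUSTOMIZE,
)
```

Keeping the scope this coarse is deliberate: it carries just enough information to decide which risk categories apply, without committing to a detailed technical design.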

Risk Scope and Probability (Risk)

After categorizing AI use cases and determining build options, the next crucial step is to narrow down the risk scope. This step of the process is tailored to focus on the most relevant risk categories based on the AI technology and implementation approach chosen for each use case. Not all risk categories are applicable to all types of AI. You will need an AI technical subject matter expert to help you indicate which ones are relevant for your use cases and technology scope.

Risk Categories:

The following risk categories should be considered for each AI use case:

  • Data Privacy and Security
  • Legal, Regulatory Compliance, and Intellectual Property
  • Ethical and Bias Issues
  • Operational Risks
  • Reputational Risks
  • Cybersecurity
  • AI Performance and Reliability
  • Vendor and Third-party Risks

Once you have identified the relevant risk categories for each use case, you need to assess the impact and probability of the risk occurring. Use Low, Medium, and High for both.

Risk Assessment Process

For each AI use case, follow these steps:

  1. Identify Relevant Risks: Based on the AI technology category and build option, determine which risk categories are most applicable.
  2. Assess Likelihood: Evaluate the probability of each identified risk occurring. This can be done using a simple scale (e.g., Low, Medium, High) or a more detailed numerical scale.
  3. Assess Impact: Determine the potential consequences if the risk were to materialize. Consider both quantitative (e.g., financial losses) and qualitative (e.g., reputational damage) impacts.
  4. Calculate Risk Score: Combine the likelihood and impact assessments to derive an overall risk score for each identified risk.
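Step 4 can be sketched as a tiny function. The framework only says to combine likelihood and impact; multiplying the two ordinal levels is one common convention and an assumption here, not the prescribed rule:

```python
# Combine Low/Medium/High likelihood and impact into an ordinal risk score.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}


def risk_score(likelihood, impact):
    """Return a score from 1 (lowest risk) to 9 (highest risk)."""
    return LEVELS[likelihood] * LEVELS[impact]


score = risk_score("High", "Medium")  # → 6
```

An additive rule (sum instead of product) works too; the product simply spreads high-likelihood, high-impact risks further from the rest, which helps when prioritizing mitigation.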

AI Opportunity and Risk Assessment (AORA) Matrix

Now that you have all the components of the framework, you're ready to build your AI Opportunity and Risk Assessment (AORA) Matrix. This is a grid of your AI use cases, their benefits and impact, technology scope and build option, risk categories, and risk scores. Use it to select your top-priority use cases to move forward with solution building and risk mitigation planning.
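The matrix itself can be as simple as a list of rows sorted by benefit and risk. A hedged sketch, where the column names, example rows, and the "largest benefit first, then lowest risk" prioritization rule are all illustrative assumptions:

```python
# One row per use case: benefit (T-shirt size), build option, and risk score.
SIZE = {"S": 1, "M": 2, "L": 3, "XL": 4}

rows = [
    {"use_case": "Churn prediction",   "benefit": "L", "build": "build",     "risk": 6},
    {"use_case": "Support chatbot",    "benefit": "L", "build": "customize", "risk": 4},
    {"use_case": "Invoice automation", "benefit": "M", "build": "buy",       "risk": 2},
]

# Prioritize: biggest benefit first, and within a tie, the lower-risk option wins.
prioritized = sorted(rows, key=lambda r: (-SIZE[r["benefit"]], r["risk"]))
top = prioritized[0]["use_case"]  # "Support chatbot": large benefit, lower risk
```

In practice this grid usually lives in a spreadsheet shared with the working group; the point is that once benefits and risks are on comparable ordinal scales, prioritization becomes a mechanical sort plus a human sanity check.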

For the risk assessment to be truly relevant to your organization, consider these additional factors:

Risk Allocation: Distinguish between risks associated with:

  a. The company
  b. The vendor (if applicable)
  c. The AI technology itself

Contextual Factors:

  • Consider organization and business unit characteristics
  • Account for data sensitivity and IP concerns
  • Factor in employee demographics and technology literacy

Adaptability:

  • Allow for different risk profiles and tolerances across business units
  • Conduct regular reviews and updates to the risk assessment grid

Conclusion

This is the current working model, based on 15 months of practical experience across every variation in a highly regulated industry. It has allowed both AI adoption and AI innovation to move forward without exposing the firm to needless risk, even in an ever-evolving legal and regulatory landscape. Many practical lessons were learned along the way: focus on a smaller working group, always benchmark against the current risk levels of existing technology, and allow plenty of time for education and iterative understanding. This is a very large topic, and I will continue in the next newsletter with example assessments, lessons learned, and how to actually create and run an AORA working group.


If you like this article, check out my others at https://yuying.substack.com. Subscribe for permanent access to my full library.

Also, catch my monthly podcast AI Afterhours for the monthly AI news round up, the big questions around AI answered, and cutting through the BS in AI.
