AI Governance Reality Check
Part 2 of AORA Framework: Examples and Lessons Learned
There are so many areas of risk in AI governance that, if you don't narrow down to the specific use cases you're considering, you can easily conclude that the letters A and I should never be put next to each other in your organization. This is why I created the AI Opportunity and Risk Assessment (AORA) Framework, based on my practical experience driving AI initiatives. In the last post, you got an overview of this practical tool for responsible AI governance. Now, let's dive into the nitty-gritty of how it actually works in the real world. We'll start with two example assessments for different organizations and likely use cases, so you can see what an actual assessment can look like and what next steps might follow from the results. I'll also share lessons learned as Reality Checks, drawn from actually running AI governance committees and processes, so you can save yourself some detours along the way.
Whether you're just dipping your toes into AI governance or looking to fine-tune your existing approach, these insights will give you a clear roadmap. Let's get down to business and show you how to make responsible AI a reality in your organization.
Example AI Opportunity and Risk Assessment (AORA) Matrix: Small Marketing Firm
A small marketing firm is considering the implementation of AI in three distinct areas of their business operations.
The first use case is Content Creation, which aims to generate marketing campaign packages including ads, articles, email content, and short videos for various social media platforms. This project is expected to yield medium efficiency gains and some cost savings.
The second use case focuses on Sales Process Automation, leveraging AI to automatically route leads, handle follow-ups, and manage outbound communications to improve efficiency and potentially increase revenue, with an expected medium to high revenue impact.
The third use case involves Employee Onboarding, using a chatbot to help new employees get up to speed on the organization, its specialized in-house know-how, and day-to-day processes. Since it's a small organization and new hires are infrequent, the anticipated efficiency benefit is small. All three projects would leverage third-party generative AI solutions, classified under the "buy" build option for the technology.
Risk Assessment
Prioritization Analysis
1. Sales Process Automation: This use case offers the highest potential benefit (Medium to High) with a moderate overall risk score of 4. The main concern is how accurately the AI decides on responses and follow-ups, which can be mitigated by adding a human review step before communication goes out, or by well-thought-out messaging that works even if there's a misstep in classification.
Reality Check: My vendor outreach to a chatbot platform company was once classified as an investor inquiry because my company is categorized as an investor. I ended up on a call with the CEO, who couldn't answer my technical questions about the platform.
2. Content Creation: While offering medium benefits, this use case has the highest overall risk score of 4.25. The main risks are ethical/bias issues when generating images and the reputational risk of clients perceiving AI-generated marketing as lower value. Without careful mitigation, these risks could outweigh the efficiency gains.
Reality Check: Reputational risk is very specific to your clients; you have to ask to know. For some clients, using AI might signal innovation and speed, but watch out if expectations of lower prices accompany anything produced with AI.
3. Employee Onboarding: This use case offers the smallest benefit but also has the lowest overall risk score of 0.5. It could be a good starting point for AI implementation, especially considering the potential reputational benefit: your reputation as an employer improves when employees are exposed to AI tools and can pick up AI-related skills.
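The prioritization above boils down to a simple rule: higher expected benefit first, and among comparable benefits, lower overall risk first. Here is a minimal sketch of that step in Python, using the benefit levels and risk scores from this assessment. The numeric mapping of benefit levels is an illustrative assumption for sorting purposes, not part of the AORA Framework itself.

```python
# Illustrative mapping of qualitative benefit levels to numbers (an assumption,
# used only so we can sort; the AORA matrix itself keeps these qualitative).
BENEFIT_LEVELS = {"Small": 1, "Medium": 2, "Medium to High": 2.5, "High": 3}

# (use case, expected benefit, overall risk score from the assessment)
use_cases = [
    ("Content Creation", "Medium", 4.25),
    ("Sales Process Automation", "Medium to High", 4.0),
    ("Employee Onboarding", "Small", 0.5),
]

def prioritize(cases):
    # Sort by higher benefit first; break ties with the lower risk score.
    return sorted(cases, key=lambda c: (-BENEFIT_LEVELS[c[1]], c[2]))

for rank, (name, benefit, risk) in enumerate(prioritize(use_cases), start=1):
    print(f"{rank}. {name} (benefit: {benefit}, risk: {risk})")
```

Running this reproduces the ordering discussed above: Sales Process Automation, then Content Creation, then Employee Onboarding. In practice the ranking is a conversation starter for the committee, not a mechanical decision.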
Recommended Next Steps
1. Begin with the Sales Process Automation use case, as it offers the highest potential return with a manageable risk score.
2. Next, run surveys or focus groups with clients on their perception of AI in the Content Creation process. This will help you gauge the true reputational risk and decide whether to proceed with mitigation or kill the use case.
3. Optional: Consider implementing the Employee Onboarding use case as a low-risk pilot project to gain experience with AI systems and potentially boost the company's reputation as an innovative employer. I recommend picking one platform that can implement all three use cases and using this one as the pilot to prove out the vendor.
4. Create an AI working committee focused on moving these use cases and their risk mitigation forward. See one use case through to implementation or decision to stop before you add on new use cases. This keeps your AI governance and your working committee and processes focused, practical and manageable.
Reality Check: In this case, you need procurement on your committee, plus a generative AI technical SME to help answer questions. You don't want that SME to come from the vendor; find someone independent. You don't need much legal and compliance presence for these use cases, but it might be good for them to sit in just to learn.
Example AI Opportunity and Risk Assessment (AORA) Matrix: Healthcare Nonprofit
A nonprofit community health services provider wants to use AI to improve efficiency and reduce workload for overworked clinical and operational staff. They’re considering 3 use cases:
1. Patient Session Notes: Use a customized AI to help clinical staff write better, more consistent notes faster, and more often. This can improve patient follow-ups without adding to the workload.
2. Grant Applications: Use customized AI to research, match, and help write more and better grant applications. The goal is to increase funding to support more people.
3. Predicting Relapse Risk: Patient follow-ups can be very inconsistent. If we can prioritize outreach to those with the highest risk, we can better leverage our resources for those who need help the most.
Risk Assessment
Prioritization Analysis
1. Patient Session Notes: This use case offers high benefits with a high overall risk score of 15. The main concerns are in data privacy, legal compliance, AI performance, and vendor risks. These will require careful management but could potentially be outweighed by the significant efficiency and quality gains in day-to-day operations.
Reality Check: The true benefit of this use case comes from seamless integration into the clinical staff's existing workflow, without a lot of training on a new tool. Simply having them record voice notes that are later used to generate the written notes with AI might be a great way to implement this for maximum benefit.
2. Grant Applications: This use case offers medium benefits with a moderate overall risk score of 8. The benefits are medium because there’s no guarantee that the AI use case actually results in more successful grants, only an efficiency in the process is guaranteed. The main risks are around AI generating content that’s inaccurate, which can be easily mitigated with human review. Its lower risk profile makes it a good candidate for early implementation.
3. Addiction Relapse Prediction: While this use case offers high benefits, particularly in terms of reputational impact, it also carries the highest overall risk score of 18.5. The main concerns are in data privacy, legal compliance, ethical issues, AI performance, and reputational risks. Given its complexity and high-risk profile, this use case is prioritized last to allow for more careful planning and risk mitigation strategies.
Reality Check: A very cool use case, but because you need to build your own in-house model, do you have enough data to both train and test a model?
Recommended Next Steps
1. Begin with the Patient Session Notes use case. Focus on:
   - Developing robust privacy and security measures.
   - Ensuring regulatory compliance.
   - Establishing clear processes for AI-generated content review.
   - Training staff on the new system and its integration into their workflow.
2. Next, experiment with using AI just to research net-new Grant Applications. The results will quickly tell you whether there are a lot of untapped grant opportunities that AI-assisted application writing could help you reach. If there aren't many, you don't need to pursue this use case. If there are 5-10 net new, you might want to prioritize it earlier for implementation.
3. For the Addiction Relapse Prediction model, start by talking with other organizations that have taken on this type of build before and learn from them. You will quickly find out whether this use case is even feasible for your organization.
Reality Check: You might even find that a respected peer organization has one you can try to use if you agree to contribute your own data.
4. Build your AI Governance working committee focused on the Patient Session Notes use case. The right regulatory and compliance expertise will need to be present to ensure the risk mitigations meet the bar. You also need a technical expert familiar with generative text AI, voice AI, and PII anonymization; that expert should be independent of the vendor. As the committee works through the implementation of this use case, it will be a great starting point for new use cases that leverage AI in operational and clinical settings.
Reality Check: Ask any vendor you’re selecting to speak to their regulatory and privacy expert. If they don’t have one, probably not the right vendor.
Conclusion
So, what have we learned? We've walked through two real-world examples, each tackling three AI use cases with the AORA Framework, with Reality Checks thrown in along the way to show potential pitfalls when theory meets practice.
Here's the bottom line: AI is a game-changer for how we work. It's powerful stuff, but we need a way to tap into that power without being reckless or getting bogged down in red tape. That's where the AORA Framework shines. It's not about holding you back – it's about helping you move forward with confidence. You get to seize those AI opportunities while keeping risks in check. No paralysis by analysis, and no flying blind either. It's time to make AI work for you – responsibly and effectively.