Best Practices in AI Risk Assessments

The most common question I get when we talk about AI Risk Assessments is whether there is a template to follow. I wish I could answer:

“Yes, here you go!”

If you have been reading the newsletters over the last two weeks, you will now appreciate the various aspects and dimensions you need to think about when structuring AI Risk Assessments.

It is always fascinating to talk with practitioners who are in the weeds of working with data-driven models. You have to factor in the domain, the context, and the use cases where the models will be used. You have to align with internal policies and applicable regulations. You have to adhere to data governance policies as well as IT and software deployment policies. If you are building applications for safety-critical use cases, the margin for error is extremely low. If you are in a dynamic environment where the data and the environment aren't stationary, you have to incorporate processes to monitor those changes and factor them into your modeling. And not all models should be treated equally: you have to tailor your processes to how each model is used, and the sophistication of the model's users will warrant specific design criteria. The list goes on and on!

So which template can factor in all of these criteria? None!

While a prescriptive approach to AI risk isn’t practical, some universal best practices can be used across the board. I will share a few here:

Data is everybody's problem:

Data-driven models require high-quality, accurate, current datasets to ensure the decisions are actionable. Modelers at times assume that modeling is their job and data is someone else's problem. A comprehensive data strategy with vetted policies and clear responsibilities is required for the deployment of a successful AI program! AI Risk Assessments must incorporate data reviews!
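
As a concrete illustration, a data review can start with simple automated checks. The sketch below is plain Python; the field names and thresholds are hypothetical, and real thresholds should come from your own data governance policy:

```python
from collections import Counter

def data_quality_report(rows, required_fields, max_missing_frac=0.05):
    """Summarize basic data-quality signals for an AI risk review.

    rows: list of dicts, one per record. The missing-value threshold is
    illustrative; set it according to your data governance policy.
    """
    n = len(rows)
    # Count records where a required field is absent or None.
    missing = Counter()
    for row in rows:
        for field in required_fields:
            if row.get(field) is None:
                missing[field] += 1
    flagged = [f for f in required_fields if missing[f] / n > max_missing_frac]
    # Exact duplicate records are a common ingestion error.
    counts = Counter(tuple(sorted(r.items())) for r in rows)
    duplicates = sum(c - 1 for c in counts.values())
    return {
        "rows": n,
        "duplicate_rows": duplicates,
        "fields_over_missing_threshold": flagged,
        "passed": duplicates == 0 and not flagged,
    }
```

Checks like these don't replace a data strategy, but they make the data review in your risk assessment repeatable instead of ad hoc.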

Defining what your risk is:

Your risk is unique to your use case! You need to contextualize and decide whether the model and decision strategy work for you. Engage the requisite stakeholders and ensure you have vetted the model prior to deployment. When you delegate decision-making power to models, you should be clear about the expectations and risks! AI Risk Assessments must contextualize the use of the models for specific use cases.

Someone needs to look under the hood:

I have seen a lot of systems and products in the marketplace that abstract away the complexity and provide APIs and black-box solutions. I understand: not every company can hire PhDs to build models from scratch. But you are responsible for the models and for the decisions you make based on them. Your team must have the right skills to vet solutions before adoption. Remember, only you understand your context and use cases, and you need to be comfortable with the decisions! Delegating decisions to a model doesn't make the risk someone else's. You are still responsible! AI Risk Assessments must have a plan to assess vendor/supplier models too!
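
One way to look under the hood of a black-box solution without rebuilding it is to benchmark it on your own labeled, in-domain data. A minimal sketch (the wrapper name and the acceptance floor are assumptions, not a standard):

```python
def vet_vendor_model(predict, labeled_examples, min_accuracy=0.85):
    """Score a black-box model on your own labeled, in-domain examples.

    predict: a callable wrapping the vendor's API (input -> predicted label).
    min_accuracy: an illustrative acceptance floor; set it per use case.
    """
    correct = sum(1 for x, y in labeled_examples if predict(x) == y)
    accuracy = correct / len(labeled_examples)
    return {"accuracy": accuracy, "approved": accuracy >= min_accuracy}
```

The key point is that the benchmark data reflects your context and use cases, which a vendor's published metrics cannot.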

Test, test, test throughout the lifecycle:

Testing is usually thought of as a gating process. Managing risk comprehensively means you identify, analyze, and evaluate risks throughout the lifecycle and ensure your solution is robust at every step. It is impossible to test everything; that is where a formal test plan helps you assess how much testing was done, and how much is still needed, to have confidence in your solutions. AI Risk Assessments must incorporate a formal test plan and an ongoing test plan!
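
To make "test throughout the lifecycle" concrete, a formal test plan can be expressed as named checks per lifecycle stage, run on every release and on a schedule in production. The stage names, checks, and thresholds below are purely illustrative:

```python
def run_test_plan(plan, context):
    """Run every named check in every stage; return the failures per stage."""
    failures = {}
    for stage, checks in plan.items():
        failed = [name for name, check in checks.items() if not check(context)]
        if failed:
            failures[stage] = failed
    return failures

# Illustrative plan: each check is a callable over a context of current metrics.
plan = {
    "pre-deployment": {
        "accuracy_above_floor": lambda ctx: ctx["holdout_accuracy"] >= 0.90,
        "bias_metrics_reviewed": lambda ctx: ctx["bias_review_done"],
    },
    "in-production": {
        "drift_within_bounds": lambda ctx: ctx["drift_score"] < 0.2,
    },
}
```

Encoding the plan as data makes "how much testing was done" auditable: the plan itself documents what was checked at each stage.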

Assess for strong contingency plans:

I had a friend who once told me jokingly:

“Newton’s law may fail but Murphy’s law never fails!”

As models get more complicated, it will be hard to ensure they perform as planned for all use cases. There will be edge cases you may never have factored in. That's why a strong contingency plan (including a human-in-the-loop/kill-switch) is required. Firefighting is hard, especially if you are learning how to fight fires during a fire! Plan early! AI Risk Assessments must evaluate risk controls and mitigation plans to see if they are adequate for the use case.
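
As a sketch of what such a contingency path could look like in code (all names and the confidence threshold are illustrative assumptions): a decision router that falls back to a human reviewer when the model is unsure, with an operator kill switch that overrides everything:

```python
def decide_with_fallback(model_predict, features,
                         confidence_floor=0.8, kill_switch_on=False):
    """Route a decision to the model when confident, to a human otherwise."""
    # The kill switch overrides everything: no automated decisions at all.
    if kill_switch_on:
        return {"decision": None, "route": "human", "reason": "kill switch engaged"}
    label, confidence = model_predict(features)
    # Low-confidence (likely edge-case) inputs go to a human reviewer.
    if confidence < confidence_floor:
        return {"decision": None, "route": "human", "reason": "low confidence"}
    return {"decision": label, "route": "model", "reason": "confident prediction"}
```

The design choice worth noting is that the fallback path exists from day one, before the first fire, rather than being bolted on during an incident.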

In the next few weeks, we will pick examples and illustrate how AI Risk Management best practices could be incorporated, but again, these will be illustrations. You have to contextualize and customize them to your own needs! Ultimately,

You are responsible for the systems you build!

Keep on learning!

Want to learn more? Join the AI Risk Management Certificate program developed in partnership with PRMIA -> https://lnkd.in/eVEhyNSQ

Many of these topics will be elaborated in the AI Risk Management book published by Wiley. Check for updates here -> https://lnkd.in/gAcUPf_m

Subscribe to this newsletter and share it with your network -> https://www.dhirubhai.net/newsletters/ai-risk-management-newsletter-6951868127286636544/

I am constantly learning too :) Please share your feedback and reach out if you have any interesting product news, updates, or requests so we can add them to our pipeline.

Sri Krishnamurthy

QuantUniversity

#machinelearning #airiskmgt #ai
