Risk Assessment for AI projects

Risk assessment is an integral part of any project an organization takes on. So here we are going to talk about how to assess the risks involved in implementing data science or machine learning projects.

Machine learning and artificial intelligence are technologies that will shape the future, but they also expose the organizations taking up such projects to a fast-changing universe of risks and ethical pitfalls that regulators have signaled they will be watching for.

So it is important to identify, assess, and manage the risks associated with ML/AI projects as well.

Organizations must put legal and risk-management teams alongside the data science team at the center of the AI/ML development process, and initiate risk analysis at the start of AI model design, including the data collection and governance process. Involving the legal, risk, and tech teams from the start enables them to function as a "tech trust team" that ensures the models conform to social norms and legal requirements while still delivering maximum business value. The team should also apply an informed risk-prioritization plan as the initial step in an effective, dynamically updated risk-management approach.

Such a prioritization plan helps in designing a catalog of the organization's AI/ML-specific risks, defining the harms you seek to avoid, and then following a clear approach to estimating those risks for mitigation.

Now to the major point: how to identify AI/ML risks. We can use a six-by-six framework to map six risk categories against six possible business contexts.

First, we can go through the six types of risk possible in an AI context:

  1. Privacy: Data is the lifeline of any AI model. Privacy laws around the world govern how companies can use data, while consumers set normative standards. Violating those laws and norms can result in significant liability, as well as harm to consumers. And disregarding consumer trust, even if the data use was technically lawful, can put the organization's reputation at risk and decrease customer loyalty.
  2. Security: New AI models are complex and hence vulnerable to both new and known risks. Model extraction and the introduction of bad data into training sets pose new challenges to long-standing security approaches. Meeting the minimum security standards in the existing legal framework can help minimize the risk.
  3. Fairness: AI models can easily produce biased output because of the data fed into them. Bias in a model that potentially harms protected classes and groups can expose the company to fairness risks and liabilities.
  4. Transparency and interpretability: A lack of transparency around a model (such as how the data sets feeding into it were combined), or an inability to interpret how the model arrived at a specific result, can lead to problems, not the least of which is potentially running afoul of legal mandates. For example, if a consumer opens an inquiry into how his or her data were used, the organization using the data must know into which models the data were fed.
  5. Safety and performance: AI applications, if not implemented and tested properly, can suffer performance issues that breach contractual guarantees and, in extreme cases, pose threats to personal safety. Suppose a model is used to ensure timely alerts when drugs are about to go out of stock in hospitals or pharmacies; a failure of this model could constitute negligence under a contract and/or leave patients without the drugs they need.
  6. Third-party risks: The process of building an AI model often involves third parties. For example, organizations may outsource data collection, model selection, or deployment environments. The organization engaging third parties must acknowledge and understand the risk-mitigation and governance standards applied by each third party, and should independently test and audit all high-stakes inputs.

Now that we know these risk types, pinpointing the contexts in which they can occur helps us direct mitigation measures to where they are needed. The six contexts are listed next, followed by a minimal sketch of the full six-by-six map:

  1. Data: Risks can turn up through the way data are captured and collected, and through the feature engineering applied to them.
  2. Model selection and training: Models should be evaluated, selected, and trained based on various criteria, involving choices that can themselves be grounds for risk. For example, some models are more transparent than others: while a relatively black-box model might offer better performance, a legal requirement for openness could compel the use of a different model.
  3. Deployment and infrastructure: Models are deployed when ready for real-world use. This process, and the underlying infrastructure supporting it, present risks. For example, the model might fail to perform in the real world as well as it did in a lab environment.
  4. Contracts and insurance: Contractual and insurance promises often explicitly address some AI risks. For example, some services may include in their service-level agreements parameters around model performance. Insurance providers might assign liability for incidents such as security or privacy breaches.
  5. Legal and regulatory: Different industries, sectors, and regions around the world have differing standards and laws concerning privacy, fairness, and the other risks presented in this framework. Therefore, it's important to be informed of the applicable laws and regulations based on where, how, and in what sector the model will be deployed.
  6. Organization and culture: Broader efforts, such as training, resource provision, and interdisciplinary partnership among cross-functional teams, play key roles in mitigating risk. To anticipate the types and likelihood of risks that might arise, it's important to know whether these supports exist, and at what level.
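
To make the six-by-six framework concrete, here is a minimal sketch in Python of how a team might record identified risks against both dimensions. The cell names, the example entry, and the log_risk helper are illustrative assumptions of mine, not part of the source framework.

```python
# A minimal sketch of the six-by-six risk map described above.
# The example entry and the log_risk helper are illustrative assumptions.

RISK_TYPES = [
    "privacy", "security", "fairness",
    "transparency", "safety_performance", "third_party",
]

CONTEXTS = [
    "data", "model_selection_training", "deployment_infrastructure",
    "contracts_insurance", "legal_regulatory", "organization_culture",
]

# risk_map[risk_type][context] holds free-text notes on identified risks.
risk_map = {r: {c: [] for c in CONTEXTS} for r in RISK_TYPES}

def log_risk(risk_type: str, context: str, note: str) -> None:
    """Record an identified risk in the matching cell of the matrix."""
    if risk_type not in RISK_TYPES or context not in CONTEXTS:
        raise ValueError(f"Unknown cell: ({risk_type}, {context})")
    risk_map[risk_type][context].append(note)

# Hypothetical example: biased labels discovered during data collection.
log_risk("fairness", "data",
         "Historical labels under-represent one customer group")
```

Reviewing which cells remain empty is useful in itself: it shows where the team has not yet looked for risk.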

Now that we have the risks clearly defined, the next step is to assess the catalog for the most significant risks and sequence them for mitigation. Since it is impossible to mitigate every AI risk, prioritizing the risks most likely to generate harm lets projects prevent AI liabilities from arising, and mitigate them quickly if they do.

A helpful way to prioritize is to calculate the expected loss of each risk: multiply the probability of the harmful event by the loss the event would generate.
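
As a minimal sketch of that calculation, assuming purely illustrative probabilities and loss figures (not taken from the source article):

```python
# Expected-loss prioritization: expected loss = probability x loss.
# All names and numbers below are hypothetical examples.

risks = [
    # (name, probability of occurrence per year, estimated loss if it occurs)
    ("privacy breach via third-party data vendor", 0.05, 2_000_000),
    ("biased output harming a protected group",    0.10,   500_000),
    ("model performance failure in production",    0.20,   150_000),
]

# Sort by expected loss, highest first, to set the mitigation order.
prioritized = sorted(
    ((name, p * loss) for name, p, loss in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, expected_loss in prioritized:
    print(f"{name}: expected loss ${expected_loss:,.0f}")
```

Note how the ranking can differ from intuition: the rarest event (the privacy breach, at $100,000 expected loss) tops the list because its impact dwarfs the more frequent performance failure.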

Some key foundations, or best practices, for managing risk in any AI project:

  • Standard practices: Standard policies for each step in the development life cycle (recording data provenance, managing metadata, mapping data, creating model inventories, and more) ensure that development teams follow the same sound, approved processes.
  • Consistent model documentation: If individual data scientists document their models differently at varying phases of the AI life cycle, it is harder to do an apples-to-apples comparison of AI models across the organization. Documenting every model the same way lets risk standards and reviews be applied more easily and consistently (a minimal sketch of such a record follows this list).
  • Independent review: Existing and proposed AI regulations call for different types of audits to demonstrate compliance. Some regulators, such as the FTC, explicitly recommend the use of independent expertise to evaluate model fairness, performance, and transparency. Regular internal and external audits can contribute to a solid compliance program for an organization’s AI.
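
As referenced above, here is a minimal sketch of a consistent model-documentation record, assuming a simple dataclass schema; the field names and example values are hypothetical illustrations, not an established standard.

```python
# A hypothetical, uniform documentation record for every model in the
# organization's inventory; field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str
    data_provenance: list[str]           # sources feeding the training data
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)
    last_fairness_review: str = "never"  # date of the most recent audit

# Every team fills in the same fields, enabling apples-to-apples comparison.
record = ModelRecord(
    name="stockout-predictor",
    version="1.3.0",
    owner="pharmacy-analytics",
    data_provenance=["internal inventory logs", "vendor delivery feed"],
    intended_use="Flag drugs at risk of stock-out within 7 days",
    known_limitations=["Not validated for cold-chain items"],
    last_fairness_review="2024-01-15",
)
```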


Source: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/getting-to-know-and-manage-your-biggest-ai-risks
