What, How, and Why of Artificial Intelligence Risk Management

The Tenets of Artificial Intelligence

All forms of artificial intelligence (AI) can be risky, especially when newly implemented. This article addresses the risks and rewards of AI, and explains how to protect against risks. Not all risks are preventable, but they can be managed.

AI has the potential to transform society for the better and improve lives in many ways. At the same time, however, it introduces risks that can harm not only business enterprises but also their workers and the end users of business products. Security risks from AI differ from those posed by traditional software or information-based systems, because the complexity and relative novelty of AI systems make risks difficult to detect and to respond to appropriately. We must all learn to anticipate AI risk and to deal with it effectively when it arises, and we must learn to design AI that aligns with the goals and values of specific businesses.

Responsible use of AI means a sharp focus on the human beings who run the business, on the community for whom the business is run, and on the sustainability of the business for the foreseeable future. A human focus necessitates a critical weighing of the potential positive and negative impacts of AI systems.

First and foremost are the individuals who may be affected. Next is the social responsibility to one’s community and its prosperity. All the while, there is an urgency to consider, and not endanger, future generations.

Responsible use of AI results in outcomes that are equitable for all. So, when designing, developing, and deploying AI systems, please keep goals and values in mind.

A Risk Management Framework

An AI risk management framework (a “Framework”) aims to minimize the negative impacts of AI systems and maximize their positive impacts. This is done through awareness of risks, screening for risks, meticulously documenting risks, and effectively managing them when they emerge. A Framework considers the likelihood of a risky event occurring and measures its negative impact, direct and indirect, both qualitatively and quantitatively.
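
As an illustration of what such likelihood-and-impact scoring might look like in practice, here is a minimal Python sketch of a risk register. The scales, the scoring rule, and the example risks are illustrative assumptions, not part of any mandated framework.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        """One entry in an AI risk register (scales are assumed for illustration)."""
        name: str
        likelihood: int  # assumed scale: 1 (rare) to 5 (almost certain)
        impact: int      # assumed scale: 1 (negligible) to 5 (severe)

        def score(self) -> int:
            # Classic qualitative risk matrix: likelihood times impact.
            return self.likelihood * self.impact

    def prioritize(risks: list) -> list:
        """Sort risks from highest to lowest score."""
        return sorted(risks, key=lambda r: r.score(), reverse=True)

    register = [
        Risk("Biased training data", likelihood=4, impact=4),
        Risk("Model drift after deployment", likelihood=3, impact=3),
        Risk("Leak of personal data", likelihood=2, impact=5),
    ]
    for risk in prioritize(register):
        print(f"{risk.score():>2}  {risk.name}")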

Building a risk management Framework takes time but is imperative because, if carried out properly, it protects against negative consequences for the firm, for the people working in the firm, and for the community the firm serves. It may also protect future generations. The human aspect is important because humans are fallible: they make mistakes, are forgetful, are sometimes lazy, and often don’t follow rules. To be trustworthy, AI systems have to take human frailty into account.

Various other factors complicate AI risks, including the use of third-party software, hardware, and data; unexpected emergencies; the scarcity of reliable metrics; differences in risk assessment at different stages of the AI lifecycle; real-world deployment scenarios; and the difficulty of recognizing early signs of risk.

Third-party data or systems can facilitate the development and deployment of AI systems, but they can also pose new risks that are difficult to measure. Emergent risks need to be identified and tracked, which points to the need for impact-assessment approaches. Unfortunately, there is currently no consensus on robust and verifiable measurement methods for trustworthy AI systems. Moreover, real-world behavior often differs from results obtained in a laboratory or controlled environment. The effects of an AI-enabled cyberattack, for instance, can be very difficult to anticipate and recognize. And in AI systems intended to augment or replace human activity, it is difficult to compare a human response to an AI response because the tasks are different.

Risk Tolerance

Risk tolerance refers to the severity of risk one is willing to bear in order to achieve one’s objectives. This tolerance is influenced by legal and regulatory requirements and by policies and norms established by AI system owners and policy makers, and it can change over time as AI policies evolve. A Framework can prioritize risk, but it cannot dictate risk tolerance. Different organizations have different degrees of risk tolerance based on their specific priorities and resource considerations. A Framework is meant to complement, not replace, existing risk practices. In sectors where established guidelines for risk tolerance do not exist, organizations must define a risk tolerance that is reasonable for them, use a Framework to manage risks, and document their risk management processes.

Eliminating all risk is unrealistic and can be counterproductive, so it is important to recognize that not all AI risks are equal. A risk management culture can help organizations allocate resources appropriately. When applying a Framework, the level of risk and potential impact of each AI system should be assessed, and resources should be prioritized accordingly. Higher priority may be given to AI systems that interact directly with humans or involve sensitive data; lower priority can be given to AI systems that only interact with computational systems and access non-sensitive data. Regular reassessment of priorities is important, even for non-human-facing AI systems that may have downstream impacts. Residual risk, the risk that remains after risk treatment, must be documented in order to inform end users about potential future negative consequences.
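
To make that triage rule concrete, here is a small hypothetical Python sketch; the tier names and the mapping from attributes to tiers are my own assumptions for demonstration.

    def review_tier(human_facing: bool, sensitive_data: bool) -> str:
        """Assign an assessment tier from the two attributes discussed above."""
        if human_facing and sensitive_data:
            return "high"    # direct human impact plus sensitive data
        if human_facing or sensitive_data:
            return "medium"  # one of the two risk amplifiers is present
        return "low"         # machine-to-machine, non-sensitive data

    systems = [
        ("loan-approval-model", True, True),
        ("support-chatbot", True, False),
        ("warehouse-routing-optimizer", False, False),
    ]
    for name, human, sensitive in systems:
        print(f"{name}: {review_tier(human, sensitive)}")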

AI risk management cannot be done in isolation and must be incorporated into a broader enterprise risk management strategy. A Framework should be used in conjunction with other relevant guidelines for managing AI risks, including privacy, cybersecurity, energy and environmental impact, and the overall security of AI systems. There should be clear accountability mechanisms, roles, and responsibilities for effective risk management, which requires commitment from senior management. This may require a cultural change within the organization or within the industry as a whole. The challenges small to medium-sized organizations face in implementing a Framework may differ from those of large organizations because of differences in resources and capabilities.

The following are characteristics of good AI risk management systems:

Trustworthiness

Trustworthiness is essential if employees are to accept AI systems and use them effectively. A trustworthy AI system is valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair, with harmful bias managed. Achieving trustworthiness is not just about fulfilling each of these characteristics but also about balancing them against one another. Trade-offs may arise between characteristics, and the choice of which to prioritize depends on the context and on the values most important to the organization.

Involving multiple perspectives and parties in the AI lifecycle helps ensure that decisions are informed and applicable to a specific context. Ultimately, trustworthiness depends on the decisions and actions of those involved and requires transparency and justification.

The trustworthiness characteristics of AI systems are interrelated, and their deployment must weigh the costs, benefits, impacts, and risks. In this context:

  • Validation is the process of confirming that the AI system has met the requirements for its intended use.
  • Reliability refers to the ability of the system to perform as required without failure over a given time interval.
  • Accuracy is the closeness of results to the true values.
  • Robustness refers to the ability of the system to maintain performance under different circumstances.

Validity, reliability, accuracy, and robustness should all be assessed through routine testing and monitoring to minimize potential negative impacts. In cases where the AI system cannot detect or correct its own errors, human intervention may be necessary.
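
As a minimal sketch of what such routine testing could look like, the snippet below measures accuracy on held-out data and probes robustness by perturbing inputs with small random noise. The stand-in model, the noise range, and the test set are all illustrative assumptions.

    import random

    def model(x: float) -> int:
        """Stand-in classifier: predicts 1 when the input exceeds a threshold."""
        return 1 if x > 0.5 else 0

    def accuracy(data: list) -> float:
        """Fraction of examples where the prediction matches the true label."""
        return sum(model(x) == y for x, y in data) / len(data)

    def robustness(data: list, noise: float = 0.05, trials: int = 100) -> float:
        """Fraction of noisy predictions that agree with the clean prediction."""
        stable, total = 0, 0
        for x, _ in data:
            base = model(x)
            for _ in range(trials):
                total += 1
                stable += model(x + random.uniform(-noise, noise)) == base
        return stable / total

    test_set = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
    print(f"accuracy:   {accuracy(test_set):.2f}")
    print(f"robustness: {robustness(test_set):.2f}")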

Transparency

Transparency refers to the extent to which information about an AI system and its outputs is accessible to those who interact with it. Transparency enhances confidence in AI systems by promoting understanding of their potential and their limitations: the more transparency, the less uproar about negative impacts caused by wrong outputs. Transparency also helps keep responsible parties accountable for severe consequences, such as those involving life and liberty. It can be enhanced by documenting and maintaining training data, knowing who is responsible for which decisions, and repeatedly testing transparency tools with AI deployers. Not everything can be shared; proprietary information must, of course, be kept confidential.
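
One lightweight way to document a system and its training data, loosely in the spirit of the paragraph above, is a machine-readable record like the sketch below. The fields and values are illustrative assumptions, not a prescribed schema.

    import json

    # Illustrative "system card"; every field here is a hypothetical example.
    system_card = {
        "system_name": "example-credit-scorer",
        "intended_use": "pre-screening of loan applications",
        "training_data": {
            "source": "internal applications, 2018-2022",
            "known_gaps": ["few applicants under age 21"],
        },
        "decision_owner": "Head of Credit Risk",  # who answers for outputs
        "last_reviewed": "2024-01-15",
    }

    print(json.dumps(system_card, indent=2))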

Risk to Human Life, Health, Property, or Environment

AI systems should not pose any risk to human life, health, property, or the environment. Safe operation is essential. Different types of safety risk may call for tailored risk management approaches, with the highest priority given to risks of serious injury or death. To prevent dangerous conditions, AI safety considerations should be incorporated into the system as early as possible, starting in the planning and design phase. This may involve rigorous simulation and testing, real-time monitoring, and the ability to shut down, modify, or intervene in a system that deviates from its intended behavior. AI safety risk management should align with existing sector- or application-specific guidelines or standards and take cues from safety efforts in other fields.
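
A toy sketch of the real-time monitoring and shutdown idea, assuming a hypothetical numeric output and an assumed safe operating envelope:

    SAFE_RANGE = (0.0, 100.0)  # assumed operating envelope for the output

    def monitor(outputs) -> bool:
        """Stop the (hypothetical) system when an output leaves the envelope."""
        low, high = SAFE_RANGE
        for step, value in enumerate(outputs):
            if not low <= value <= high:
                print(f"step {step}: output {value} out of range; shutting down")
                return False  # signal the caller to halt the system
        return True

    monitor([12.0, 45.5, 250.0, 30.0])  # trips on the third reading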

Resilience

AI systems are considered resilient if they can withstand unexpected changes and still continue to function. They should be designed to degrade safely in case of failure. Security is critically important, and systems should have protection mechanisms in place to maintain confidentiality, integrity, and availability. Security and resilience are related but distinct: security includes resilience and also involves avoiding and protecting against attacks, while resilience refers to the ability of the system to return to normal function after an unexpected event and covers robustness against unexpected or adversarial use of data.
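
A minimal sketch of "degrade safely", assuming a hypothetical model service: if the model call fails, log the failure and return a conservative default instead of crashing. The function names and the fallback policy are assumptions for illustration.

    import logging

    logger = logging.getLogger("resilience-demo")

    def model_predict(x: float) -> float:
        """Hypothetical model call; here it fails on out-of-range input."""
        if not 0.0 <= x <= 1.0:
            raise ValueError("input outside trained range")
        return x * 2.0

    def predict_with_fallback(x: float, default: float = 0.0) -> float:
        """Degrade safely: on any model failure, log it and return a
        conservative default rather than propagating the error."""
        try:
            return model_predict(x)
        except Exception as exc:
            logger.warning("model failed (%s); using safe default", exc)
            return default

    print(predict_with_fallback(0.4))  # normal path
    print(predict_with_fallback(7.0))  # degraded path returns the default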

Explainability

Explainability and interpretability are important characteristics of AI systems that support both their effective functioning and their trustworthiness. Explainability refers to how the mechanisms behind an AI system operate, while interpretability refers to the meaning of AI outputs in relation to their designed purposes. Together, explainability and interpretability help users and those in charge of AI systems understand a system’s potential impact. A lack of explainability and interpretability can lead to misgivings and to failure to use the systems appropriately. Both can be improved by providing easy-to-understand descriptions of AI functions and by clearly communicating why decisions about their use were made. Transparency, explainability, and interpretability are distinct but interconnected concepts, with transparency answering “what”, explainability answering “how”, and interpretability answering “why”.
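
As one simple illustration of an easy-to-understand description of model behavior, consider a linear scorer whose output decomposes exactly into per-feature contributions. The model, feature names, and weights below are invented for demonstration.

    # Illustrative linear model; the weights and features are invented.
    weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

    def score_with_explanation(applicant: dict):
        """Return the score plus each feature's exact contribution to it.
        For a linear model, contribution = weight * value, so the
        explanation is exact rather than approximate."""
        contributions = {k: weights[k] * applicant[k] for k in weights}
        return sum(contributions.values()), contributions

    total, parts = score_with_explanation(
        {"income": 4.0, "debt": 2.5, "years_employed": 6.0}
    )
    print(f"score = {total:+.2f}")
    for feature, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")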

Privacy

Privacy refers to the norms and practices that help protect individual autonomy, identity, and dignity by limiting surveillance and intrusion and by preventing the disclosure of personal information. Privacy values should guide the design, development, and deployment of all AI systems. Because AI systems can pose new risks to privacy, privacy-enhancing technologies and data-minimizing methods such as de-identification and aggregation are a design must.
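
To show what de-identification and aggregation can look like at the simplest level, here is a sketch; the record fields and hashing scheme are illustrative assumptions, and real deployments need far more care (salting, k-anonymity checks, and so on).

    import hashlib
    from collections import Counter

    records = [
        {"name": "Ana Diaz", "age": 34, "city": "Lisbon"},
        {"name": "Bo Chen", "age": 36, "city": "Lisbon"},
        {"name": "Cleo Roy", "age": 52, "city": "Porto"},
    ]

    def de_identify(record: dict) -> dict:
        """Replace the direct identifier with a one-way hash and coarsen age."""
        pseudonym = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
        age_band = f"{(record['age'] // 10) * 10}s"  # e.g. 34 -> "30s"
        return {"id": pseudonym, "age_band": age_band, "city": record["city"]}

    for row in map(de_identify, records):
        print(row)

    # Aggregation: report only group counts, never individual rows.
    print(dict(Counter(r["city"] for r in records)))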

Fairness

Fairness in AI refers to addressing issues of equality and avoiding all forms of bias and discrimination. The concept of fairness is culture-specific and varies in different parts of the world. Risk management in AI is enhanced by recognizing and considering these cultural differences.

There are three major categories of AI bias that organizations must consider and manage: systemic, computational and statistical, and human-cognitive. Systemic biases can be present in datasets, in organizational norms and practices, and, of course, in the broader society from which AI not only gleans its data but by which AI systems are used. Computational and statistical biases can occur in AI datasets and algorithms, often due to sourcing data from non-representative samples. Human-cognitive biases relate to the way individuals or groups perceive AI system information and make decisions, or how they think about the purposes and functions of AI. Simple examples are assumptions that all users are right-handed, have normal sight and hearing, and can nimbly operate keyboards.
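
As a toy illustration of checking for the non-representative-sample problem just mentioned, the sketch below compares group shares in a dataset against reference population shares; the group labels, shares, and tolerance are illustrative assumptions.

    from collections import Counter

    def representation_gaps(samples: list, reference: dict, tol: float = 0.05) -> dict:
        """Return groups whose share of the data deviates from the
        reference population share by more than the assumed tolerance."""
        counts = Counter(samples)
        total = len(samples)
        gaps = {}
        for group, expected in reference.items():
            observed = counts.get(group, 0) / total
            if abs(observed - expected) > tol:
                gaps[group] = (observed, expected)
        return gaps

    # Hypothetical training labels versus assumed population shares.
    training_groups = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
    population_shares = {"A": 0.50, "B": 0.30, "C": 0.20}

    for g, (obs, exp) in representation_gaps(training_groups, population_shares).items():
        print(f"group {g}: observed {obs:.0%}, expected {exp:.0%}")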

Bias can exist in many forms, including bias unconsciously ingrained in early designs. Organizations need to consider the needs of all their employees and of all the community users of their products.

Conclusion

AI must be based on principles of privacy, fairness, and freedom from bias. It must be transparent, explainable, and easily interpretable. It must be resilient, trustworthy, reliable, valid, accurate, and robust. It must pose no risk to human life, and only minimal, preventable, or manageable risk to health, property, or the earth’s environment. While AI has great potential to transform society for the better, we must keep in mind the many issues discussed in this article to ensure that risk, while never possible to eradicate totally, is sufficiently mitigated to allow AI to bring significant improvement to people’s lives.
