Exploring the Importance of Explainability in AI

How to explain AI

As AI continues to advance, we increasingly entrust it with complex tasks, including those involving significant responsibilities, such as making decisions that can even impact human life. We often appreciate AI for its ability to recommend products tailored to our needs or suggest friends on social networks whom we actually know. If it occasionally makes a wrong recommendation, we tend to forgive it easily, considering it a minor incident. However, there are instances where AI decisions can have profound effects on human health, life, career prospects, or legal outcomes. In such critical situations, it becomes crucial for us to comprehend the underlying basis on which AI arrives at its decisions.

Interpretability vs Explainability

When delving into explainable AI (XAI), it is essential to differentiate between interpretability and explainability. Interpretability refers to understanding a model's inner workings: how it reaches its decisions and which factors guide them. However, not all models are interpretable. In fact, when selecting the most suitable machine learning algorithm for a specific problem, one must consider the trade-off between accuracy and interpretability. The diagram below visualizes this concept, highlighting the delicate balance that needs to be struck.

[Figure: the trade-off between model accuracy and interpretability]


The figure illustrates a crucial relationship: as models attain higher accuracy, their interpretability tends to decrease. Ideally, we would land in the region where we achieve both high accuracy and an effective explanation of the model's decisions. This is where explainability comes into play. Explainability entails visualizing the outcomes generated by AI, for example by highlighting the most activated neurons in a network. In other words, it lets us discern the information that drives the model's decision-making. It helps answer questions such as: which image features played a significant role in the model's decision? Are there images lacking identifiable features for the model to analyze? Does the model base its judgments on reliable cues? It is important to emphasize that explainability does not modify existing models; instead, it builds on top of them. By providing a deeper understanding of the model's decisions, explainability ensures that accuracy is not compromised.
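
To make this idea concrete, here is a minimal sketch (our own illustration, not code from any specific system discussed here) of layering an explanation on top of an existing model: an accurate but hard-to-interpret gradient-boosting classifier is trained on a toy dataset, and permutation importance from scikit-learn is then used to measure which features actually drive its predictions, without modifying the model itself.

```python
# A minimal sketch of post-hoc explainability, assuming scikit-learn is available.
# The dataset is a bundled toy example; the point is that the explanation is
# computed on top of an already-trained model without changing it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_val, y_train, y_val = train_test_split(
    data.data, data.target, random_state=0
)

# An accurate but not directly interpretable model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure how much accuracy drops.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]:25s} {result.importances_mean[idx]:.3f}")
```

Shuffling a feature and watching the validation score drop is only one of many post-hoc techniques, but it captures the essence of explainability described above: the model stays exactly as accurate as it was, and we simply gain insight into what it relies on.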

When and Why Should We Understand Models?

Considering that we already acknowledge the possibility of understanding models, the question arises as to why we should pursue this understanding. Let's envision a scenario where a customer applies for a loan at a bank, and an AI system assesses their creditworthiness, resulting in a loan denial. Naturally, the customer would desire insights into the criteria that influenced this decision. This is precisely where explainable AI (XAI) comes into play, providing all the necessary answers and guidance on what the customer can improve to increase their chances of obtaining a loan.
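
As a hedged illustration of what such an explanation could look like, the sketch below applies the open-source LIME library (the same technique as the paper cited later in this article) to a purely synthetic credit-scoring model; the feature names, data, and decision rule are invented for the example and do not describe any real lending system.

```python
# A minimal sketch, assuming the open-source `lime` and scikit-learn packages.
# The credit data, feature names, and model are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years", "late_payments"]
X = rng.normal(size=(1000, 4))
# Toy rule: income and credit history help, debt and late payments hurt.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["denied", "approved"],
    mode="classification",
)

# Explain a single applicant's decision: which features pushed it which way?
applicant = X[0]
explanation = explainer.explain_instance(applicant, model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature:35s} {weight:+.3f}")
```

The signed weights show which attributes pushed this particular applicant toward approval or denial, which is exactly the kind of actionable feedback a rejected customer would want.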

Another example can be found in the collaboration between doctors and AI. Assuming the model possesses knowledge of a patient's medical history, current symptoms, and test results, it can determine whether the patient has the flu or not. However, it is undeniable that the doctor would prefer to understand the underlying reasons that led the AI to such a conclusion. With this understanding, the doctor can then make an informed judgment on whether to agree with the AI's diagnosis or not, based on their own expertise and assessment.

In both cases, XAI serves as a vital tool in empowering individuals to comprehend the decision-making process of AI systems, enabling them to make informed decisions, improve outcomes, and foster collaborative decision-making between humans and machines.

[Figure] Source: https://www.youtube.com/watch?v=_DYQdP_F-LA


Embracing this collaborative approach between humans and AI undeniably brings a heightened sense of comfort to decision-makers, while also serving as a powerful defense against the numerous biases that have been exposed within AI systems.

One notable case that shed light on the prevalence of such biases was the scandal surrounding Amazon's experimental AI recruiting tool, which came to light in 2018. The system was found to penalize resumes that included the word "women's". The bias stemmed from the model being trained on resumes submitted to the company over a ten-year period, which came predominantly from male applicants. As a result, the system perpetuated gender-based discrimination.

Instances like these serve as stark reminders of the pressing need to address biases in AI. By embracing explainable AI and striving for transparency, we can actively protect against such biases and work towards a fairer and more inclusive future. This collaborative partnership between humans and AI not only provides decision-makers with greater confidence but also acts as a vital safeguard against discriminatory practices, fostering an environment that upholds fairness, equality, and social progress. [https://www.bbc.com/news/technology-45809919]

Can we find bias in models?

Another crucial question that arises when utilizing explainable AI (XAI) is whether a model can be deemed trustworthy. A notable example shedding light on this issue is presented in the LIME paper, "Why Should I Trust You?": Explaining the Predictions of Any Classifier (https://arxiv.org/pdf/1602.04938.pdf). The study discusses a model designed to differentiate between wolves and huskies which, as it turned out, did not rely on the animal's appearance when making decisions.

In this intriguing experiment, researchers intentionally trained the network using a dataset where wolves were exclusively depicted in snowy environments, while huskies were not. Consequently, the network yielded accurate results when presented with images of wolves on snowy or light backgrounds, but produced incorrect outcomes when the background differed from what it had been exposed to during training.

The researchers then presented the model's predictions to a group of students with prior machine-learning experience. The students were asked how much they trusted the network's predictions and what they thought the model might be focusing on. They were subsequently shown the same photos with the key pixels behind each decision highlighted, revealing the model's decision-making process.

[Figure: example predictions with the pixels that drove the model's decision highlighted]
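
For readers who want to see how such highlighting is produced, the sketch below uses LIME's image explainer, the technique from the cited paper. The classifier here (predict_fn) is a deliberately simplistic, hypothetical stand-in that keys on overall brightness, much like the flawed wolf/husky model keyed on snow, so the code demonstrates the mechanics rather than reproducing the original experiment.

```python
# A minimal sketch using the open-source `lime` and scikit-image packages.
# `predict_fn` is a hypothetical stand-in for the wolf/husky classifier: like the
# flawed model in the paper, it keys on bright (snow-like) pixels, not the animal.
import numpy as np
from lime import lime_image
from skimage.data import astronaut


def predict_fn(images):
    # Return [p(husky), p(wolf)] for a batch of images, based only on brightness.
    brightness = images.reshape(len(images), -1).mean(axis=1) / 255.0
    return np.stack([1.0 - brightness, brightness], axis=1)


image = astronaut()  # any RGB image works for the demonstration

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=300
)

# Keep only the superpixels that most supported the top predicted class;
# these are the regions LIME would highlight for a human reviewer.
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
```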


The results revealed that, before seeing the explanations, roughly a third of the students trusted the model, and fewer than half suspected that the snow in the background was the decisive factor; once the highlighted pixels were shown, trust dropped sharply and nearly all of them identified the snow (further details in the table below). This experiment is compelling evidence of how badly explanations are needed to judge whether a model can be trusted, and it highlights the limits of human perception when it comes to discerning the subtle cues that govern a model's decision-making process.

[Table: how many students trusted the model and how many identified snow as a feature, before and after seeing the explanations]


The aforementioned cases serve as vivid illustrations of the need for vigilant oversight when employing AI. By exercising caution and thorough monitoring, we can mitigate the risks of potential situations where human rights might be violated. Simultaneously, the example of distinguishing between animals offers compelling evidence that human perception often struggles to detect biases that may have infiltrated algorithms.

Responsible utilization of AI necessitates a deep comprehension of the parameters that influence decision-making, particularly as AI increasingly pervades critical aspects of our lives. By understanding the factors that shape AI decisions, we can ensure that its deployment aligns with ethical standards and safeguards against potential biases and discriminatory outcomes. This heightened awareness and commitment to responsible AI usage empower us to create a future where the potential of AI is harnessed while preserving human rights and fairness in decision-making processes.


Written by Adam Wisniewski & Jakub Łukaszewicz

Małgorzata Górska

PR for Chief Digital and AI Officers. Podcast host

1y

Great reading, explainability is key to build trust that we use reliable and safe AI solutions. Here are some thoughts from our Chief AI Officer on this topic: https://blog.se.com/digital-transformation/artificial-intelligence/2023/05/28/nobody-should-blindly-trust-ai-heres-what-we-can-do-instead/

Wojciech Iwanicki

Technology Senior Consultant at BCG Platinion

1y

No 1 on the list of important AI topics not being sufficiently discussed right now.
