AI Explainability Explained | Making AI Models More Transparent

One of the biggest challenges with artificial intelligence (AI) might be a very human one. It comes down to the question: how can we trust an algorithm that ingests vast amounts of data, crunches it in ways most of us don’t understand, and then spits out a result we can’t reconstruct? It can feel a bit mysterious, especially for the curious mind that needs to understand the why and how behind the output.

In Excel, we can at least check the formulas and make sure all the cells are linked correctly, and, while cumbersome, we can go through a sheet line by line and understand what’s happening. But AI? That’s a bit more complex.

“How can we make AI more transparent?” is a question we have given much thought to at Intelligencia AI, and certainly one that can’t be answered entirely in a single post, but we can start to scratch the surface.

In this article, we’ll provide a brief overview of how we use explainable AI to improve transparency and help our customers better understand what drives the probability of technical and regulatory success (PTRS) predictions in our self-serve platform, Intelligencia Portfolio Optimizer, which influence so many business and clinical decisions.

What Is Explainable AI, and Why Do We Need It?

First, let’s level-set on definitions. Carnegie Mellon University offers an excellent description of explainable artificial intelligence:

Explainable artificial intelligence (XAI) is a powerful tool for answering critical How? and Why? questions about AI systems and can be used to address rising ethical and legal concerns. As a result, AI researchers have identified XAI as a necessary feature of trustworthy AI, and explainability has experienced a recent surge in attention.

Explainable AI addresses two essential issues. The first is trust: if we have no idea how the algorithm got from input to output, it’s human nature to question the result.

The other is that, in some cases, we need to know what drove that result in order to support learning and knowledge sharing.

Examples of Explainable AI in Action

Let’s look at two examples to illustrate explainable AI. If we use AI to perform quality control on the paintwork of car doors on an assembly line, we don’t care how the algorithm detects a scratch as long as it does so reliably. However, if we use AI to assess the probability of technical and regulatory success (PTRS) of a drug candidate, we very much need to understand what drives that probability up or down and how those drivers interact. Is it the drug’s mechanism of action, the choice of endpoints in the clinical trial, or is it based primarily on positive initial data such as objective response rate? Or perhaps it is something else entirely.

This is where AI explainability comes in. While it doesn’t explain every part of the process, explainability methods can tease apart how much each input feature contributes to an AI model’s output.

"We have introduced explainability for our predictions to provide our customers a better understanding of how our models interpret the available data to produce our PTRS. AI explainability adds much-needed transparency to a process that otherwise may feel like a black box," shared Maria Georganaki, Director of Product, Intelligencia AI.

Looking Under the Hood of AI

There are many AI explainability tools, but SHapley Additive exPlanations, better known as SHAP, is the method of choice for many machine learning models, including those driving our PTRS predictions.

SHAP values quantify the contribution of each input feature to a prediction, pinpointing the most influential features and the direction of their effect.

Here’s a bit of SHAP history. SHAP builds on Shapley values, which were originally developed in game theory to fairly distribute a game’s payout among its players. In the context of AI, the input features are the “players,” and the “payout” is the model’s prediction. SHAP values ensure that each feature/player receives its fair share of the payout/prediction.
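To make this concrete, here is a minimal sketch of how SHAP values can be computed with the open-source shap Python library on a toy classifier. The feature names, synthetic data, and model below are hypothetical stand-ins for illustration only, not Intelligencia AI’s PTRS models or data.

# Minimal sketch: computing SHAP values for a toy classifier.
# All feature names and data are hypothetical illustrations.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical drug-program features (purely illustrative).
X = pd.DataFrame({
    "breakthrough_designation": rng.integers(0, 2, 500),
    "phase2_objective_response_rate": rng.random(500),
    "novel_mechanism_of_action": rng.integers(0, 2, 500),
    "endpoint_precedent_score": rng.random(500),
})
# Synthetic "approval" labels loosely tied to the first two features.
y = (0.6 * X["breakthrough_designation"]
     + 0.8 * X["phase2_objective_response_rate"]
     + 0.2 * rng.standard_normal(500) > 0.7).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
explanation = explainer(X)

# Each row of explanation.values breaks one prediction down into signed
# per-feature contributions (each feature's "share of the payout").
mean_abs_contribution = np.abs(explanation.values).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs_contribution), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

The same idea applies to a real model: averaging the absolute SHAP values across many predictions highlights which features matter most overall, while the per-prediction values explain any single score.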

Returning to our PTRS example, AI explainability based on SHAP values can address that second challenge. The life science industry runs on precision, confidence, and data, and it’s imperative to know what drives the PTRS of a specific drug development program and, conversely, which factors play an insignificant role or none at all.

Access to this knowledge, especially when displayed in a visually dynamic and user-friendly way, can support real-life decision-making while instilling a sense of trust in the user. For example, learning that a drug program’s above-average PTRS was driven largely by the FDA granting it breakthrough therapy designation makes intuitive sense and helps us trust the prediction.
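As a sketch of what such a display might look like, the snippet below reuses the toy model and data from the earlier example and draws shap’s built-in waterfall plot, showing how each feature pushed one hypothetical program’s score above or below the model’s average output.

# Sketch only: visualizing the drivers of a single (hypothetical) prediction,
# reusing the `model` and `X` from the toy example above.
import shap

explanation = shap.TreeExplainer(model)(X)
shap.plots.waterfall(explanation[0])  # drivers of the first program's score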

AI explainability is an invaluable tool that conveys deep insights over and above the prediction provided by the algorithm.


To see explainable AI in action, get in touch with us today at [email protected]
