Why Explainability and Trusted AI Matter

If you cannot explain how you arrived at a decision, your business is at risk. One of the cornerstones of causal AI is to help businesses make better decisions by understanding a problem through the relationships between its variables and the supporting data. It is therefore imperative to be able to explain how you arrived at the answers that drive the business forward.

The goal of explainable AI is to justify the conclusions and recommendations that come from models. Being unable to explain what is happening inside an AI model has significant ramifications. Explainability requires an understanding of the data, the variables, and the algorithms. While traditional AI such as deep learning is powerful, it causes problems when an organization cannot defend the decisions and answers these solutions provide. Such applications are built by first selecting data sources and then using them to train a complex model. If no one can understand the underlying model, how can business leaders defend its outcomes? One of the key issues is that if a data scientist operates in isolation from the business goals, failure is inevitable.

The bottom line is that explainability must meet the requirements of the people who are paying the bills for the projects being developed. Therefore, the AI system must be able to demonstrate how the underlying model behaves and how it arrived at its conclusions. If the system cannot explain its conclusions, it will undermine the trust, transparency, and confidence of the business. There are financial consequences when businesses leverage AI solutions to make business decisions. When an individual who has been denied a loan sues the lender, can that lender adequately defend the decision? Can a customer trust that its vendor will protect its privacy? Can an AI solution be trusted to diagnose a medical condition? With all the excitement around generative AI, it will be critical to ensure that the recommendations and suggestions offered are accurate and trustworthy.

Why a black box model isn’t explainable

The primary AI technique used by data scientists is the process of creating models from data. In situations where the use case is well defined and the data is well vetted, a black box model can be effective. Black box models can capture patterns and relationships found in large data sets, and deep learning is integral to how they do so. These models are very effective in computer vision and speech recognition applications. However, because they are created from data alone, they lack the transparency to help management understand their conclusions. This lack of transparency raises concerns about accountability and bias that will cause major headaches for management.
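To illustrate why a black box resists explanation, the short Python sketch below (using scikit-learn and synthetic, purely illustrative data) trains a small neural network and then inspects everything it learned: layers of weight matrices with no business meaning attached. This is a minimal sketch, not a claim about any specific production system.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # stand-ins for, e.g., price, ad spend, seasonality
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)

# Fit an opaque model; it may predict well, but its only "explanation"
# is a stack of learned weight matrices.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weight matrix, shape {weights.shape}")  # no business meaning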

In support of causal AI

Unlike traditional deep learning solutions, causal models contain a transparent qualitative component that describes the cause-and-effect relationships in the data, and it is this component that supports trust and explainability. The benefit of causal AI is that it combines machine learning techniques with graphical modeling and subject matter expertise. The combination of these approaches helps organizations leverage sophisticated AI techniques that are both trustworthy and explainable.
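To make these two ingredients tangible, here is a minimal Python sketch that pairs an expert-specified causal graph (the transparent qualitative component) with simple data-driven estimates of each link's strength. The variable names, graph, and synthetic data are assumptions chosen for illustration, not any vendor's actual implementation.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Qualitative component: subject matter experts declare what causes what.
graph = nx.DiGraph([
    ("rainfall", "soil_moisture"),
    ("soil_moisture", "crop_yield"),
    ("fertilizer", "crop_yield"),
])
assert nx.is_directed_acyclic_graph(graph)  # a causal graph must be a DAG

# Synthetic data consistent with the graph (a stand-in for real observations).
n = 1_000
rainfall = rng.normal(size=n)
soil_moisture = 0.8 * rainfall + rng.normal(scale=0.3, size=n)
fertilizer = rng.normal(size=n)
crop_yield = 0.5 * soil_moisture + 1.2 * fertilizer + rng.normal(scale=0.3, size=n)
data = {"rainfall": rainfall, "soil_moisture": soil_moisture,
        "fertilizer": fertilizer, "crop_yield": crop_yield}

# Quantitative component: estimate each edge's strength by regressing every
# node on its parents. The coefficients are the "strength of relationship"
# that management can inspect edge by edge.
for node in nx.topological_sort(graph):
    parents = list(graph.predecessors(node))
    if not parents:
        continue
    X = np.column_stack([data[p] for p in parents])
    coef, *_ = np.linalg.lstsq(X, data[node], rcond=None)
    for parent, c in zip(parents, coef):
        print(f"{parent} -> {node}: strength ≈ {c:.2f}")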
What makes a causal AI model explainable?

The goal of explainability in AI is to justify the conclusions and recommendations from models. Achieving explainability begins by answering the following questions:
· Is the decision well understood, or are there shades of gray?
· Is the information included in the model up to date, and are all the relevant circumstances and issues accounted for?
· Is the model designed to incorporate the nuances of business processes and human situations that are critical to success?
· What is the source of the data? Is there enough data to help you make good decisions?

The core of explainability in causal AI is the creation of a graphical model designed through a collaboration between the data science team and subject matter experts. A graphical model has the advantage of showing non-technical management which elements are included in the model, how those variables are related to each other, and how strong those relationships are. If the model (and the underlying Python libraries) indicates a very strong relationship between two variables, that relationship may well point to the cause of an issue.

For example, suppose crop yield fell in the previous year. What is the cause of this problem? Was the issue a lack of fertilizer or the quality of the soil? Was there a change in temperature? Once enough data has been ingested and the mathematical libraries applied, the causal AI approach helps management make well-informed decisions about the best next actions available to solve the problem. The answer in our crop example may be that a different variety of fertilizer was used than in previous years, and that this change caused the problem. Because the causal model indicates the strengths of the relationships, management can then ask questions such as: what would the results be if we returned to the type of fertilizer that had been used in the previous two years? Would this have solved the problem, or are there other, more important factors that need to be understood?
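As a concrete illustration, the Python sketch below poses that "what would happen if we returned to the old fertilizer" question as an intervention on a small, hand-written structural causal model. The variables, equations, and coefficients are illustrative assumptions for this sketch, not estimates from real agronomic data or any particular causal AI product.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def simulate(fertilizer_type=None):
    """Simulate average yield. With fertilizer_type=None the fertilizer is
    drawn naturally; otherwise we apply do(fertilizer_type=value) by
    overriding the natural assignment."""
    soil_quality = rng.normal(size=n)
    temperature = rng.normal(size=n)
    if fertilizer_type is None:
        fertilizer_type = rng.integers(0, 2, size=n)  # 0 = old variety, 1 = new
    # Assumed structural equation: the new variety lowers yield by 0.7 units.
    crop_yield = (1.0 * soil_quality + 0.4 * temperature
                  - 0.7 * fertilizer_type + rng.normal(scale=0.5, size=n))
    return crop_yield.mean()

observed = simulate()                       # last year's observed mix
back_to_old = simulate(fertilizer_type=0)   # do(fertilizer = old variety)
print(f"expected yield under observed policy:    {observed:.2f}")
print(f"expected yield under do(old fertilizer): {back_to_old:.2f}")

Comparing the two simulated averages is the causal analogue of management's question: it predicts the expected yield under a deliberate change of policy rather than merely describing what happened last year.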

Because a causal AI approach abstracts the complexity of AI, it enables the platform to be explainable to the constituents who matter most: business decision makers.