Explainable Artificial Intelligence
Timothy Goebel
AI Solutions Architect | Computer Vision & Edge AI Visionary | Building Next-Gen Tech with GENAI | Strategic Leader | Public Speaker
Posted on December 7, 2021 | by Tim Goebel
Machine learning, a method of data analysis that automates analytical model building, holds great potential for improving products, processes, and research. However, computers cannot explain the resulting predictions, which creates barriers to organizational adoption of machine learning. Explainable artificial intelligence can solve this challenge.
What is Explainable Artificial Intelligence?
Explainable artificial intelligence (XAI) is a collection of operations and techniques that allows us to comprehend and trust the results and outputs created by machine learning algorithms.
Explainable AI describes an AI model, its expected impact, and its potential biases, and it helps improve performance. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making.
Explainable AI is crucial for organizations in building trust and confidence when putting AI models into production. AI explainability also supports organizations’ ability to adopt a responsible approach to AI development.
Illustrated in the Unexplainable AI example below, this process starts with Training Data.
Training data is used to train the model through a specific learning algorithm.
The Learning Process results in the Learned Function.
This function can be fed Input Data, and its output reaches the end user through a Prediction Interface that they see and interact with.
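The pipeline above can be sketched in a few lines of code. This is a hypothetical, minimal illustration in pure Python: the "learning process" fits a one-feature linear model by least squares, and the "learned function" is returned as an opaque closure, mirroring the black-box problem the article describes. The data and function names are illustrative, not from the article.

```python
# Training Data -> Learning Process -> Learned Function -> Prediction.

def learning_process(training_data):
    """Fit y = w*x + b by ordinary least squares (the 'learning process')."""
    n = len(training_data)
    mean_x = sum(x for x, _ in training_data) / n
    mean_y = sum(y for _, y in training_data) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in training_data)
    var = sum((x - mean_x) ** 2 for x, _ in training_data)
    w = cov / var
    b = mean_y - w * mean_x
    # The 'learned function' is returned as a closure: callers see only
    # its predictions, not why it makes them -- the black-box problem.
    return lambda x: w * x + b

training_data = [(1, 2.1), (2, 3.9), (3, 6.0), (4, 8.1)]  # toy (x, y) pairs
learned_function = learning_process(training_data)
prediction = learned_function(5)  # input data -> prediction the user sees
print(round(prediction, 1))
```

Note that the caller receives only a number: nothing in the interface explains which training examples or relationships produced it.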
The problem with an unexplainable model, however, is that the prediction arrives without justification. The lack of justification leaves the end user with doubts: Why did the model make this prediction? When can its output be trusted? When will it fail?
Questions like these illustrate why explainable AI is desirable and necessary for organizational adoption.
In the Explainable AI example below, the red box illustrates how the implementation of an explainable model in the Learned Function changes the picture and eliminates the questions.
In this example, the end users get interpretability of the data, resulting in more information and an understanding of the "why" behind the prediction output. By interpreting the input data, organizations can see how each feature affects the model, its expected impact, and its potential biases.
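One simple way to surface the "why" behind a prediction is to break a linear model's output into per-feature contributions (weight times feature value). The sketch below is a hypothetical illustration in pure Python; the feature names and weights are made up, not taken from any model in the article.

```python
# A linear model whose prediction decomposes into per-feature contributions.
weights = {"income": 0.5, "age": -0.2, "tenure_years": 1.0}
bias = 2.0

def predict(features):
    """The prediction the end user sees."""
    return bias + sum(weights[name] * value for name, value in features.items())

def explain(features):
    """The 'why': each feature's additive contribution to the prediction."""
    return {name: weights[name] * value for name, value in features.items()}

customer = {"income": 4.0, "age": 3.0, "tenure_years": 1.5}
print(predict(customer))   # the raw prediction
print(explain(customer))   # which features pushed it up or down
```

Because the model is additive, each contribution answers "how much did this feature move the prediction?", which is exactly the kind of justification an unexplainable model withholds.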
It is important to understand that using too many features, or highly engineered features, can reduce the model's explainability.
Why is Explainable AI Important?
When we train AI models with many parameters and apply transformations to them, the preprocessing and model-building process can produce a "black box" model that is extremely hard to interpret.
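Even when a model is a black box, some explanation techniques need only its prediction function. Permutation importance is one such technique: shuffle one feature's column and measure how much the error grows. The sketch below, a hand-rolled illustration in pure Python with a synthetic stand-in model, is not the article's method, just one common way to probe a black box.

```python
import random

def black_box_predict(row):
    # Stand-in for an opaque pipeline; in reality only feature 0 matters.
    return 3.0 * row[0] + 0.0 * row[1]

def mse(rows, targets):
    """Mean squared error of the black box on the given data."""
    return sum((black_box_predict(r) - t) ** 2
               for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature_idx, rng):
    """Error increase when one feature's column is shuffled."""
    base = mse(rows, targets)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, shuffled_col):
        r[feature_idx] = v
    return mse(permuted, targets) - base

rng = random.Random(0)
rows = [[float(i), float(i % 3)] for i in range(30)]
targets = [black_box_predict(r) for r in rows]
print(permutation_importance(rows, targets, 0, rng))  # large: feature 0 matters
print(permutation_importance(rows, targets, 1, rng))  # zero: feature 1 is ignored
```

The key point is that `permutation_importance` never looks inside `black_box_predict`; it treats the model purely as a prediction interface, which is why this family of techniques works on otherwise uninterpretable models.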
In today's landscape, the focus is slowly turning toward accuracy, transparency, fairness, and outcomes in AI decision making. Stakeholders, developers, users, and regulators want to know why models produce the predictions they provide. How does explainable AI appear in the real world? From insights and analytics to customer engagement and cross-selling, XAI can be implemented across a variety of industries and organizations.
Banking industry example: Explainable AI can be used in fraud detection, payment exceptions, collections, enhancing robo-advisors, and cross-selling, empowering banks to provide a seamless customer experience that drives loyalty and profitability.
Insurance industry example: Explainable AI can positively impact the customer and help the business. When customers don't receive adequate reasoning for their claims, the result is a poor experience; explaining the results can help improve customer satisfaction.
Insurance pricing can be complex and depends on multiple factors. With XAI, customers can better understand pricing changes and make informed decisions, resulting in a sense of confidence in their insurance provider.
XAI can be used to predict customer turnover as well as their reasons for leaving. Insurance companies can then use that information to make changes to enhance the customer experience, in turn saving the business money.
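A churn prediction that also reports its reasons can be sketched with a simple logistic model whose per-feature terms double as reason codes. The feature names and coefficients below are hypothetical, chosen only to illustrate the idea of pairing a churn score with the factors driving it.

```python
import math

# Hypothetical churn model: positive coefficients push churn risk up.
coef = {"claims_denied": 1.2, "price_increase": 0.8, "tenure_years": -0.3}
intercept = -1.0

def churn_probability(customer):
    """Logistic model: squash the weighted sum into a 0-1 churn risk."""
    z = intercept + sum(coef[k] * v for k, v in customer.items())
    return 1.0 / (1.0 + math.exp(-z))

def top_reasons(customer, n=2):
    """The features pushing churn risk up the most, as reason codes."""
    contributions = {k: coef[k] * v for k, v in customer.items()}
    return sorted(contributions, key=contributions.get, reverse=True)[:n]

customer = {"claims_denied": 2.0, "price_increase": 1.0, "tenure_years": 4.0}
print(round(churn_probability(customer), 2))  # 0.73
print(top_reasons(customer))                  # ['claims_denied', 'price_increase']
```

Here the business gets not just a risk score but actionable reasons, which is exactly what lets an insurer target fixes (for example, reviewing its claims process) rather than guessing.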
Additional examples: Explainable AI can be used for customer predictions, personalization, insights, data analytics, and anomaly, pattern, and trend detection.
When we can provide that transparency into the “why” behind our models, we can reap a variety of benefits that impact processes and organizations.
Benefits of Explainable AI:
As businesses shift toward explainable AI, questions arise around what tools and resources exist to support XAI. The table below provides examples of cloud platforms that support explainable AI, along with their tools.
Conclusion
While there are instances when there is a trade-off between a system with higher accuracy and one that is more explainable, I believe it’s important to implement explainable AI whenever possible.
XAI is the optimal way to understand the “how” and “why” of our models, leading to increased consistency, reliability, and fairness in the resulting outcomes.
When companies gain transparency in their models and predictions, not only does it benefit the business, but it can also positively impact their customers' experience.
About the Author
Tim Goebel is the Data & Artificial Intelligence Principal Consultant at SafeNet Consulting. Tim is an IT engineering leader with a track record of success bringing projects from conception through completion to enhance efficacy, control costs, and propel business growth. In early 2021, Tim achieved his Microsoft Certified: Azure AI Fundamentals certification, which verifies that he has demonstrated foundational knowledge of machine learning (ML) and artificial intelligence (AI) concepts and related Microsoft Azure services.
Tim also has certifications and critical technical knowledge in Visual Studio, C#, scikit-learn, Keras, TensorFlow, Python, NLTK, tokenization, pandas, Docker, MathWorks, LabVIEW, OpenCV, RNNs, CNNs, and more to meet business needs.