Explainable AI
Lakshminarasimhan S.
Top XAI frameworks to open up your AI models and improve your productivity.
Explainability goals are threefold:
1. Pre-model explainability
2. Model explainability
3. Post-model explainability
Here are the various Explainability Frameworks.
Explainerdashboard
explainerdashboard is a library for quickly building interactive dashboards that analyze and explain the predictions and workings of (scikit-learn compatible) machine learning models, including xgboost, catboost and lightgbm. It makes your model transparent and explainable with just two lines of code.
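A minimal sketch of those two lines, assuming a fitted scikit-learn-compatible classifier clf and a held-out test set X_test, y_test:

from explainerdashboard import ClassifierExplainer, ExplainerDashboard

# Wrap the fitted model and test data, then serve an interactive dashboard locally
explainer = ClassifierExplainer(clf, X_test, y_test)
ExplainerDashboard(explainer).run()   # opens a local web dashboard (typically on port 8050)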
SHAP
SHAP (SHapley Additive exPlanations) is a widely used explainable AI framework. It is a unified framework that brings together explanation methods found in several other approaches.
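A minimal sketch using SHAP's unified Explainer API, assuming a fitted tree-based binary classifier (e.g. XGBoost or scikit-learn) and pandas data:

import shap

# The unified Explainer picks an appropriate algorithm (e.g. TreeExplainer for tree models)
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)

# Global view: which features drive predictions across the test set
shap.plots.beeswarm(shap_values)

# Local view: explain a single prediction
shap.plots.waterfall(shap_values[0])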
LIME
LIME is short for Local Interpretable Model-agnostic Explanations. Each part of the name reflects something we desire in explanations. Local refers to local fidelity, i.e. we want the explanation to really reflect the behaviour of the classifier "around" the instance being predicted. The explanation is useless unless it is interpretable, that is, unless a human can make sense of it. LIME is able to explain any model without needing to 'peek' into it, so it is model-agnostic. For more details, see the original LIME paper; a minimal usage sketch follows.
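A minimal sketch for tabular data, assuming a fitted classifier clf with predict_proba, numpy training/test arrays, and hypothetical feature_names and class_names lists:

from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,   # assumed list of column names
    class_names=class_names,       # assumed list of class labels
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance and fits a local linear surrogate
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
print(exp.as_list())          # top feature contributions for this instance
# exp.show_in_notebook()      # interactive view inside a Jupyter notebook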
Shapash
Shapash is a Python library dedicated to the interpretability of data science models. It provides several types of visualization with explicit labels that everyone can understand. Data scientists can more easily understand their models, share their results, and document their projects in an HTML report. End users can understand the suggestions proposed by a model through a summary of the most influential criteria.
Sample Shapash reports:
https://shapash.readthedocs.io/en/latest/report.html
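A minimal sketch, assuming a fitted scikit-learn estimator and a pandas test set (note: in older Shapash versions the import path is shapash.explainer.smart_explainer and the model is passed to compile rather than the constructor):

from shapash import SmartExplainer

xpl = SmartExplainer(model=model)           # model assumed to be a fitted estimator
xpl.compile(x=X_test)                       # X_test assumed to be a pandas DataFrame

xpl.plot.features_importance()              # global feature importance with readable labels
summary_df = xpl.to_pandas(max_contrib=3)   # per-row summary of the most influential criteria

# xpl.run_app()             # launches the interactive web app
# xpl.generate_report(...)  # produces a standalone HTML report like the samples linked above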
ELI5
ELI5 is a Python library that lets you visualize and debug various machine learning models through a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models.
ELI5 helps debug machine learning classifiers and explain their predictions, with built-in support for the most common machine learning frameworks and packages.
ELI5 also implements several algorithms for inspecting black-box models (see Inspecting Black-Box Estimators in the documentation).
Explanation and formatting are separated; you can get a text-based explanation to display in a console, an HTML version embeddable in an IPython notebook or web dashboard, a JSON version that allows custom rendering and formatting on the client, and explanations converted to pandas DataFrame objects.
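A minimal sketch, assuming a fitted scikit-learn classifier clf trained on named features (feature_names is a hypothetical list of column names):

import eli5

# In a Jupyter notebook this renders an HTML table of feature weights
eli5.show_weights(clf, feature_names=feature_names)

# Explain a single prediction
eli5.show_prediction(clf, X_test[0], feature_names=feature_names)

# Outside a notebook, the same explanation can be converted to a pandas DataFrame
weights_df = eli5.explain_weights_df(clf, feature_names=feature_names)
print(weights_df.head())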
InterpretML (managed by Microsoft)
InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability – glassbox models, which are machine learning models designed for interpretability (ex: linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (ex: Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API, and by having a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models.
It features the EBM (Explainable Boosting Machine), an interpretable model developed at Microsoft Research. It uses modern machine learning techniques like bagging, gradient boosting, and automatic interaction detection to breathe new life into traditional GAMs (Generalized Additive Models). This makes EBMs as accurate as state-of-the-art techniques like random forests and gradient boosted trees. However, unlike these blackbox models, EBMs produce exact explanations and are editable by domain experts.
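A minimal sketch of training an EBM and inspecting it, assuming pandas/numpy training and test data for a binary classification task:

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: the learned shape function for each feature and interaction
show(ebm.explain_global())

# Local explanation: per-feature contributions for individual test predictions
show(ebm.explain_local(X_test, y_test))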
OmniXAI
OmniXAI (short for Omni eXplainable AI) is a Python machine-learning library for explainable AI (XAI), offering omni-way explainable AI and interpretable machine learning capabilities to address many pain points in explaining decisions made by machine learning models in practice. OmniXAI aims to be a one-stop comprehensive library that makes explainable AI easy for data scientists, ML researchers and practitioners who need explanations for various types of data, models and explanation methods at different stages of the ML process.
Alibi Explain
Alibi Explain is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
Anchors
The Anchors method explains individual predictions of any black-box classification model by finding a decision rule that "anchors" the prediction sufficiently. A rule anchors a prediction if changes in other feature values do not affect the prediction. Anchors utilizes reinforcement learning techniques in combination with a graph search algorithm to reduce the number of model calls (and hence the required runtime) to a minimum while still being able to recover from local optima. Ribeiro, Singh, and Guestrin proposed the algorithm in 2018, the same researchers who introduced the LIME algorithm.
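Alibi Explain ships an Anchors implementation for tabular data; a minimal sketch, assuming a fitted classifier clf, numpy training/test arrays and a hypothetical feature_names list:

from alibi.explainers import AnchorTabular

explainer = AnchorTabular(predictor=clf.predict, feature_names=feature_names)
explainer.fit(X_train)

# Find a rule that "anchors" the prediction for one instance
explanation = explainer.explain(X_test[0], threshold=0.95)
print("Anchor:   ", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)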
XAI
XAI is a machine learning library designed with AI explainability at its core. XAI contains various tools that enable analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning.
You can find the documentation at https://ethicalml.github.io/xai/index.html.
Aequitas
Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive tools.
A model fairness and bias audit tool like this adds another valuable dimension to the explainability of models for the community.
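A minimal sketch of the classic Aequitas audit flow, assuming a pandas DataFrame df with a binary score column, a label_value column, and group attribute columns such as race and sex (the reference group names below are assumptions for illustration):

from aequitas.group import Group
from aequitas.bias import Bias

# Cross-tabulate confusion-matrix metrics (FPR, FNR, etc.) per group
g = Group()
xtab, _ = g.get_crosstabs(df)

# Compute disparities relative to chosen reference groups
b = Bias()
bias_df = b.get_disparity_predefined_group(
    xtab, original_df=df, ref_groups_dict={"race": "white", "sex": "male"}
)
print(bias_df[["attribute_name", "attribute_value", "fpr_disparity", "fnr_disparity"]])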
AIX360
This extensible open source toolkit can help you comprehend how machine learning models predict labels by various means throughout the AI application lifecycle. We invite you to use it and improve it.
The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.
The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, the project provides guidance material and a chart that can be consulted.
https://aix360.mybluemix.net/
breakDown
The breakDown package is a model-agnostic tool for decomposing predictions from black boxes. The Break Down table shows the contribution of every variable to the final prediction, and the Break Down plot presents variable contributions in a concise graphical way. The package works for binary classifiers and general regression models.
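breakDown itself is an R package; in Python, the same Break Down decomposition is available through the related dalex library. A minimal sketch, assuming a fitted scikit-learn classifier and pandas data:

import dalex as dx

explainer = dx.Explainer(clf, X_train, y_train)

# Decompose one prediction into per-variable contributions (Break Down table)
bd = explainer.predict_parts(X_test.iloc[[0]], type="break_down")
print(bd.result)   # contribution of every variable to the final prediction
bd.plot()          # Break Down plot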
DeepLIFT
Following the work of Sebastian Bach et al. on LRP/Taylor decomposition, Avanti Shrikumar, Peyton Greenside and Anshul Kundaje proposed the DeepLIFT method in their paper Learning Important Features Through Propagating Activation Differences (ICML 2017). DeepLIFT (Deep Learning Important FeaTures) uses a reference image along with an input image to explain the input pixels (similar to LRP). While LRP followed the conservation axiom, there was no clear way to distribute the net relevance among the pixels. DeepLIFT fixes this problem by enforcing an additional axiom on how to propagate the relevance down.
The two axioms followed by DeepLIFT are:
Axiom 1. Conservation of total relevance: the sum of the relevance of all inputs must equal the difference between the score of the input image and the baseline image, at every neuron. This axiom is the same as in LRP.
https://towardsdatascience.com/explainable-neural-networks-recent-advancements-part-3-6a838d15f2fb
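The original deeplift package exposes a fairly low-level API; a common way to get DeepLIFT-style attributions in practice is SHAP's DeepExplainer, which builds on DeepLIFT. A minimal sketch, assuming a trained Keras model and numpy image arrays:

import shap

# A background (reference) set plays the role of DeepLIFT's baseline image(s)
background = X_train[:100]

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X_test[:5])

# Visualize per-pixel attributions for the explained images
shap.image_plot(shap_values, X_test[:5])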
Why is explainability so important?
Adding explainability to ML models improves their overall performance and saves a significant amount of the cost of iteratively training and monitoring them.