Explainable AI

Top XAI frameworks to look inside your models and improve your productivity.

Explainability goals are threefold:

1. Pre-model explainability.

2. Model explainability.

3. Post-model explainability.



Here are the various Explainability Frameworks.

Explainerdashboard

explainerdashboard is a library for quickly building interactive dashboards to analyze and explain the predictions and workings of (scikit-learn compatible) machine learning models, including xgboost, catboost and lightgbm. It makes your model transparent and explainable with just two lines of code.

https://lnkd.in/edGJjbJ
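A minimal sketch of those "two lines" in practice (the dataset and model below are placeholders, not requirements of the library):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

# Placeholder data and model; any scikit-learn compatible classifier works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# The "two lines": wrap the model in an explainer, then serve the dashboard.
explainer = ClassifierExplainer(model, X_test, y_test)
ExplainerDashboard(explainer).run()  # opens an interactive dashboard on localhost
```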

SHAP

SHAP is a widely used explainable AI framework. It is a unified framework that connects methods from several earlier approaches:

  1. LIME: Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
  2. Shapley sampling values: Strumbelj, Erik, and Igor Kononenko. "Explaining prediction models and individual predictions with feature contributions." Knowledge and Information Systems 41.3 (2014): 647-665.
  3. DeepLIFT: Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. "Learning important features through propagating activation differences." arXiv preprint arXiv:1704.02685 (2017).
  4. QII: Datta, Anupam, Shayak Sen, and Yair Zick. "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems." Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.
  5. Layer-wise relevance propagation: Bach, Sebastian, et al. "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation." PLoS ONE 10.7 (2015): e0130140.
  6. Shapley regression values: Lipovetsky, Stan, and Michael Conklin. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry 17.4 (2001): 319-330.
  7. Tree interpreter: Saabas, Ando. Interpreting random forests. https://blog.datadive.net/interpreting-random-forests/


https://lnkd.in/e9_Hvmja
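A minimal SHAP sketch for a tree-based model (the XGBoost classifier and toy dataset here are placeholders; other model types use KernelExplainer, DeepExplainer, etc.):

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgboost.XGBClassifier().fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive the model's predictions overall.
shap.summary_plot(shap_values, X_test)
```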

LIME

LIME is short for Local Interpretable Model-agnostic Explanations. Each part of the name reflects something desirable in an explanation. Local refers to local fidelity: the explanation should really reflect the behaviour of the classifier "around" the instance being predicted. The explanation is useless unless it is interpretable, that is, unless a human can make sense of it. LIME can explain any model without needing to "peek" inside it, so it is model-agnostic. For more details, see the LIME paper cited above.

https://lnkd.in/dJbEiEv
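A minimal LIME sketch for tabular data (the classifier and dataset below are placeholders):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction with a local, interpretable surrogate model.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs valid around this instance
```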


Shapash

Shapash is a Python library dedicated to the interpretability of data science models. It provides several types of visualization with explicit labels that everyone can understand. Data scientists can more easily understand their models, share their results and document their projects in an HTML report. End users can understand a model's suggestion through a summary of the most influential criteria. A minimal usage sketch follows the feature list below.

https://lnkd.in/gmMuNM8

Features

  • Compatible with Shap, Lime and ACV
  • Uses a SHAP backend to display results in a few lines of code
  • Uses encoder objects and feature dictionaries for clear results
  • Compatible with category_encoders & sklearn ColumnTransformer
  • Visualizations of global and local explainability
  • Webapp to easily navigate from global to local explanations
  • Summarizes local explanations
  • Offers several parameters to summarize in the way most suitable for your use case
  • Exports your local summaries to a Pandas DataFrame
  • Usable for regression, binary classification or multiclass classification
  • Compatible with most sklearn, lightgbm, catboost and xgboost models
  • Relevant for exploration and for deployment (through an API or in batch mode)
  • Freezes different aspects of a data science project as the basis of an audit report
  • Regroups features that share common properties
  • Explainability quality metrics to increase confidence in explainability methods

Sample Shapash reports:

https://shapash.readthedocs.io/en/latest/report.html
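A minimal Shapash sketch, assuming a fitted scikit-learn regressor and a test DataFrame (the dataset and model are placeholders, and the import path changed between Shapash 1.x and 2.x, so treat this as a 2.x-style example):

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from shapash import SmartExplainer  # Shapash 1.x: from shapash.explainer.smart_explainer import SmartExplainer

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50).fit(X_train, y_train)

xpl = SmartExplainer(model=model)
xpl.compile(x=X_test)                         # computes contributions (SHAP backend by default)
xpl.plot.features_importance()                # global explainability
xpl.plot.local_plot(index=X_test.index[0])    # local explanation for one observation
xpl.run_app(title_story="California housing demo")  # interactive webapp
```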


ELI5

ELI5 is a Python library that lets you visualize and debug various machine learning models through a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models.

Features

ELI5 is a Python package that helps debug machine learning classifiers and explain their predictions. It supports the following machine learning frameworks and packages:

  • scikit-learn. Currently ELI5 can explain weights and predictions of scikit-learn linear classifiers and regressors, print decision trees as text or as SVG, show feature importances, and explain predictions of decision trees and tree-based ensembles.
  • Pipeline and FeatureUnion are supported.
  • ELI5 understands text processing utilities from scikit-learn and can highlight text data accordingly. It can also debug scikit-learn pipelines that contain HashingVectorizer, by undoing the hashing.
  • Keras - explain predictions of image classifiers via Grad-CAM visualizations.
  • XGBoost - show feature importances and explain predictions of XGBClassifier, XGBRegressor and xgboost.Booster.
  • LightGBM - show feature importances and explain predictions of LGBMClassifier and LGBMRegressor.
  • CatBoost - show feature importances of CatBoostClassifier and CatBoostRegressor.
  • lightning - explain weights and predictions of lightning classifiers and regressors.
  • sklearn-crfsuite. ELI5 allows you to check the weights of sklearn_crfsuite.CRF models.

ELI5 also implements several algorithms for inspecting black-box models (see Inspecting Black-Box Estimators):

  • TextExplainer allows you to explain predictions of any text classifier using the LIME algorithm (Ribeiro et al., 2016). There are utilities for using LIME with non-text data and arbitrary black-box classifiers as well, but this feature is currently experimental.
  • The Permutation Importance method can be used to compute feature importances for black-box estimators.

Explanation and formatting are separated: you can get a text-based explanation to display in the console, an HTML version embeddable in an IPython notebook or web dashboard, or a JSON version that allows custom rendering and formatting on the client, and you can convert explanations to pandas DataFrame objects.

https://lnkd.in/gjE7nERi
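A minimal ELI5 sketch combining the two ideas above: permutation importance for a held-out set and a single-prediction explanation (the dataset and model are placeholders; the show_* helpers render HTML in a notebook):

```python
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
clf = RandomForestClassifier().fit(X_train, y_train)

# Model-agnostic feature importances computed on held-out data.
perm = PermutationImportance(clf, random_state=0).fit(X_test, y_test)
eli5.show_weights(perm, feature_names=list(data.feature_names))

# Explain one prediction of the tree ensemble.
eli5.show_prediction(clf, X_test[0], feature_names=list(data.feature_names))
```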

InterpretML (managed by Microsoft)

InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability – glassbox models, which are machine learning models designed for interpretability (ex: linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (ex: Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API, and by having a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models.

It features the Explainable Boosting Machine (EBM), an interpretable model developed at Microsoft Research. EBM uses modern machine learning techniques like bagging, gradient boosting, and automatic interaction detection to breathe new life into traditional GAMs (Generalized Additive Models). This makes EBMs as accurate as state-of-the-art techniques like random forests and gradient boosted trees. However, unlike these blackbox models, EBMs produce exact explanations and are editable by domain experts.


https://interpret.ml/
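A minimal EBM sketch using InterpretML's unified API (the dataset below is a placeholder):

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()        # glassbox GAM with automatic interaction detection
ebm.fit(X_train, y_train)

show(ebm.explain_global())                   # per-feature shape functions and importances
show(ebm.explain_local(X_test[:5], y_test[:5]))  # exact explanations for individual predictions
```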

OmniXAI

OmniXAI (short for Omni eXplainable AI) is a Python machine-learning library for explainable AI (XAI), offering omni-way explainable AI and interpretable machine learning capabilities to address many pain points in explaining decisions made by machine learning models in practice. OmniXAI aims to be a one-stop comprehensive library that makes explainable AI easy for data scientists, ML researchers and practitioners who need explanations for various types of data, models and explanation methods at different stages of the ML process.


https://lnkd.in/dXda3Mhe

Alibi Explain

Alibi Explain is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

https://lnkd.in/dwKNZME

Anchors

The anchors method explains individual predictions of any black-box classification model by finding a decision rule that "anchors" the prediction sufficiently. A rule anchors a prediction if changes in the other feature values do not affect the prediction. Anchors uses reinforcement learning techniques in combination with a graph search algorithm to reduce the number of model calls (and hence the required runtime) to a minimum while still being able to recover from local optima. Ribeiro, Singh, and Guestrin, the same researchers who introduced the LIME algorithm, proposed the anchors algorithm in 2018.

https://lnkd.in/gA4hFR_g
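Alibi Explain (linked above) ships an implementation of this method. A minimal sketch, assuming a tabular classifier (the dataset and precision threshold are placeholders):

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
clf = RandomForestClassifier().fit(X_train, y_train)

explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(X_train)                              # learns feature distributions for perturbation
explanation = explainer.explain(X_test[0], threshold=0.95)

print("Anchor:   ", " AND ".join(explanation.anchor))  # the decision rule
print("Precision:", explanation.precision)             # how often the rule holds
print("Coverage: ", explanation.coverage)              # how much of the data it applies to
```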

XAI

XAI is a machine learning library designed with AI explainability at its core. XAI contains various tools that enable the analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning.

You can find the documentation at https://ethicalml.github.io/xai/index.html.

https://lnkd.in/gDfvSat7

Aequitas

Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive tools.

A model fairness and bias audit tool of this kind helps the community add another dimension to model explainability.
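A minimal Aequitas sketch, assuming a DataFrame with the columns the toolkit expects: a binary score column (the model's decision), a label_value column (the ground truth) and one column per protected attribute. The tiny DataFrame below is made-up illustration data:

```python
import pandas as pd
from aequitas.group import Group

# Made-up audit data; in practice this comes from your model's predictions.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 0, 1, 0, 1, 1, 0],
    "gender":      ["male", "female", "female", "male",
                    "female", "male", "female", "male"],
})

g = Group()
xtab, _ = g.get_crosstabs(df)   # per-group confusion-matrix counts and rates

# Compare error rates across groups, e.g. false positive / false negative rates.
print(xtab[["attribute_name", "attribute_value", "fpr", "fnr"]])
```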


AIX360 (AI Explainability 360)

This extensible open source toolkit can help you comprehend how machine learning models predict labels by various means throughout the AI application lifecycle. We invite you to use it and improve it.

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.

The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, the project provides some guidance material and a chart that can be consulted.

https://aix360.mybluemix.net/

https://lnkd.in/gX-7dzfV

breakDown

The breakDown package is a model-agnostic tool for decomposing predictions from black boxes. The Break Down table shows the contribution of every variable to a final prediction, and the Break Down plot presents variable contributions in a concise graphical way. The package works for binary classifiers and general regression models.

https://lnkd.in/gJYBUswg

DeepLIFT

Following the work of Sebastian Bach et al. on LRP/Taylor decomposition, Avanti Shrikumar, Peyton Greenside and Anshul Kundaje proposed the DeepLIFT method in Learning Important Features Through Propagating Activation Differences (ICML 2017). DeepLIFT (Deep Learning Important FeaTures) uses a reference image along with an input image to explain the input pixels (similar to LRP). While LRP followed the conservation axiom, there was no clear way to distribute the net relevance among the pixels. DeepLIFT fixes this problem by enforcing an additional axiom on how to propagate the relevance down.

The two axioms followed by DeepLIFT are:

Axiom 1. Conservation of Total Relevance: the sum of the relevance of all inputs must equal the difference between the score of the input image and that of the baseline image, at every neuron. This axiom is the same as the one in LRP.
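Written as a formula (in the notation of the DeepLIFT paper, where t is the target neuron's activation on the actual input, t0 its activation on the reference input, and C_Δxi Δt the contribution assigned to input x_i), Axiom 1 is the "summation-to-delta" property:

```latex
\sum_{i=1}^{n} C_{\Delta x_i \Delta t} = \Delta t, \qquad \Delta t = t - t_0
```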

Axiom 2. Back-propagation of Relevance via the Chain Rule: relevance is propagated from the output back to the inputs layer by layer using a chain rule defined on "multipliers", analogous to the chain rule used for gradients, which fixes exactly how the net relevance is distributed among the inputs.

https://towardsdatascience.com/explainable-neural-networks-recent-advancements-part-3-6a838d15f2fb

https://lnkd.in/gADTcwep

Why is explainability important?

Adding explainability to ML models improves the overall performance of the models and saves a significant amount of the cost of iteratively training and monitoring them.

