Explainable AI (XAI): Unravelling the "Black Box"

(Images are sourced from the Internet.)

The Internet and open-source communities have truly democratized IT technology and its evolution. With the right knowledge, anyone can access state-of-the-art tools within seconds of their release. Computational infrastructure, easy-to-use code implementations, and large amounts of data are readily available today, leading to an exponential acceleration in the R&D of emerging technologies.

Deep learning is a prime example of a technology that has evolved at an unprecedented rate: ground-breaking research published today can become widely used standard practice in a matter of days.

Algorithms today are becoming increasingly accurate and sophisticated, but these developments have also made deep learning models more and more complex, harder to understand and interpret, and a "black box" darker than ever.

[Fig 1: Trade-off between Explainability & Accuracy]


[Fig 2: Why XAI?]

XAI: Where and Why?

Adoption of AI and deep learning technologies is challenged by the black-box nature of the more accurate systems. With XAI techniques, we can make AI systems:

More Accountable: Industries like banking, finance, and legal are bound to be compliant and accountable for every decision they make, and the black-box nature of deep learning systems makes the adoption of AI difficult in such domains. Several use cases address these accountability issues with XAI techniques.

More Understandable: AI models can predict before-hand whether a business goal is likely to be missed, but the challenge remains to understand the reasons behind that prediction and to articulate them properly. XAI can provide us with 'actionable insights' to optimize our processes, for example in the marketing space.

More Transparent and Trustworthy: ML/AI systems face severe roadblocks when it comes to trust and social acceptance, and the use of AI in defence, healthcare, and other sensitive areas remains debatable. Building trust is the key, and several use cases apply XAI techniques for exactly this purpose.

Within the open-source community, there are multiple well-packaged implementations ready to plug and play:

LIME (Local Interpretable Model-Agnostic Explanations):

Paper Link: https://arxiv.org/pdf/1602.04938v1.pdf

  • It is a model-agnostic method that can be used to explain any ML model. LIME identifies an interpretable model that is locally faithful to the classifier/regressor being explained.
  • It perturbs the input around its neighborhood and observes how the model's predictions behave. It then weights these perturbed data points by their proximity to the original example and learns an interpretable model on them and the associated predictions (a minimal usage sketch follows below).
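To make this concrete, here is a minimal sketch of explaining a single tabular prediction with the lime package. The scikit-learn breast-cancer dataset, random-forest model, and instance index are illustrative assumptions, not choices from this article.

    # Minimal LIME sketch: explain one prediction of a tabular classifier.
    # Dataset, model, and instance index are placeholder choices.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        training_data=data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    # Fit a local interpretable (linear) model on perturbed neighbours of
    # the chosen instance and report the top feature contributions.
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    print(explanation.as_list())

The printed list pairs each feature condition with its weight in the local surrogate model, which is exactly the "locally faithful" explanation described above.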

SHapley Additive exPlanations (SHAP):

Paper Link: https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf

  • The Shapley value is a solution concept from cooperative game theory (1953), originally conceived to distribute a group reward fairly amongst individual contributors. SHAP (May 2017) uses a similar approach to find the contribution of each input feature to the prediction for a particular case.
  • SHAP is considered one of the most unified and holistic approaches, although it is computationally expensive to evaluate the effect of all subsets of input features.
  • It can be combined with other methods to improve computational performance; for example, it is implemented with DeepLIFT as Deep SHAP (DeepLIFT + Shapley values), which leverages extra knowledge about the compositional nature of deep networks to speed up the computation (see the sketch below).
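As an illustration, the sketch below uses the shap package's TreeExplainer, which exploits tree structure to make Shapley values tractable; the random-forest regressor, dataset, and sample size are placeholder assumptions (for deep networks, Deep SHAP is exposed as shap.DeepExplainer).

    # Minimal SHAP sketch on a tree ensemble; model and data are placeholders.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes per-feature Shapley contributions efficiently
    # by exploiting the structure of the trees.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])   # shape: (samples, features)

    # Global view: which features drive predictions, and in which direction.
    shap.summary_plot(shap_values, X[:100])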

The ELI5 Library:

ELI5 is not a particular method but a well-maintained interpretable-AI wrapper package. It provides tools for multiple techniques, such as permutation-based feature importance and linear approximations, to explain predictions. It also provides efficient and articulate mechanisms to explain text features and image features.
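For instance, here is a minimal sketch of ELI5's permutation-importance workflow on a scikit-learn estimator; the logistic-regression model and dataset are illustrative assumptions.

    # Minimal ELI5 sketch: permutation importance for a scikit-learn model.
    # Model and dataset are placeholder choices.
    import eli5
    from eli5.sklearn import PermutationImportance
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    data = load_breast_cancer()
    model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

    # Shuffle each feature in turn and measure how much the score drops.
    perm = PermutationImportance(model, random_state=0).fit(data.data, data.target)

    # eli5.show_weights renders an HTML table in a notebook; here we print text.
    print(eli5.format_as_text(
        eli5.explain_weights(perm, feature_names=list(data.feature_names))
    ))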

Propagation-based Approaches:

There is a plethora of methods for interpreting ANNs that rely on either perturbation or back-propagation (2015) to estimate the contribution of individual inputs.

  • Layer-wise Relevance Propagation: It operates by propagating the prediction backward through the neural network, using a set of purposely designed propagation rules (essentially decomposing the result).
  • Back-propagation-based methods like Grad-CAM and Guided Grad-CAM (Jan 2017) are quite efficient for CNN-based image classification. DeepLIFT (April 2017) is another well-packaged and more recent solution that computes feature contributions by propagating activation differences (see the sketch after this list).
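As a hedged illustration, the sketch below computes Grad-CAM and DeepLIFT attributions through the Captum library rather than the papers' reference implementations; the tiny CNN, target layer, class index, and random input are all placeholder assumptions.

    # Minimal propagation-based attribution sketch via Captum.
    # The model, layer choice, and input below are placeholders.
    import torch
    import torch.nn as nn
    from captum.attr import LayerGradCam, LayerAttribution, DeepLift

    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    ).eval()
    x = torch.randn(1, 3, 64, 64)   # placeholder image batch
    target_class = 3                # class index to explain

    # Grad-CAM: gradients of the target class w.r.t. a late conv layer,
    # pooled into a coarse heatmap and upsampled to the input resolution.
    gradcam = LayerGradCam(model, model[2])
    cam = gradcam.attribute(x, target=target_class)
    heatmap = LayerAttribution.interpolate(cam, (64, 64))

    # DeepLIFT: per-pixel contributions from propagating activation
    # differences against a reference input (all-zeros baseline here).
    attributions = DeepLift(model).attribute(
        x, target=target_class, baselines=torch.zeros_like(x)
    )
    print(heatmap.shape, attributions.shape)

Libraries like Captum bundle several of these propagation-based attribution methods behind a common interface, which makes it easy to compare them on the same model.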