Is Your AI Model Explainable?

If a machine learning model performs well, why don't we just trust it and accept the decisions it makes?

As AI systems increasingly proliferate in high-stakes domains such as healthcare, finance, aviation, automated driving, manufacturing, and law, it becomes ever more crucial that these systems can explain their decisions to diverse end users in a comprehensible manner.

Tech giants such as Google, Facebook, and Amazon collect and analyze ever more personal data through smartphones, personal assistants such as Siri and Alexa, and social media, allowing them to model and predict individuals better than other people can. There is also growing demand for explainable, accountable, and transparent AI systems, as tasks with higher sensitivity and social impact are increasingly entrusted to AI services.

Many current AI systems are non-transparent with respect to their working mechanism, which is why they are called black-box models. This black-box character creates serious problems in fields such as the health sciences, finance, and criminal justice, and it drives the demand for explainable AI.
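To make the black-box problem concrete, here is a minimal, hypothetical sketch: `black_box_predict` stands in for an opaque model that we can only query, never inspect, and a simple occlusion test perturbs one feature at a time to see which inputs actually drive the decision. Both function names and the scoring rule are illustrative, not taken from any particular library.

```python
def black_box_predict(x):
    # Stand-in for an opaque model: from the outside we only see
    # inputs going in and a decision coming out.
    return 1.0 if (2.0 * x[0] - 0.5 * x[1] + x[0] * x[2]) > 1.0 else 0.0

def occlusion_sensitivity(predict, x, baseline=0.0):
    """Replace each feature with a baseline value and record how much
    the model's output changes: a crude, query-only explanation."""
    base = predict(x)
    deltas = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        deltas.append(abs(predict(perturbed) - base))
    return deltas

x = [1.0, 0.2, 0.5]
print(occlusion_sensitivity(black_box_predict, x))  # → [1.0, 0.0, 0.0]
```

Here only the first feature flips the decision when occluded, which is the kind of clue perturbation-based explanation methods generalize; note that such probing still never reveals the model's internal mechanism.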

Explainable AI aims to:

  1. Produce more explainable models while maintaining a high level of learning performance (e.g., prediction accuracy), and
  2. Enable humans to understand, trust, and effectively manage the emerging generation of artificially intelligent partners.


Goals of Explainable AI:


In general, humans are reticent to adopt techniques that are not directly interpretable, tractable, and trustworthy. The danger lies in creating and using decisions that are not justifiable or legitimate, or that cannot be explained. Explanations supporting a model's output are crucial, e.g., in precision medicine, where experts and end users require far more detailed information from the model than a simple binary prediction to support their diagnosis.

There is a trade-off between the performance of a model and its transparency. However, improved understanding and explainability of a system also make it easier to identify and correct its deficiencies. If a system is not opaque and one can understand how inputs are mathematically mapped to outputs, then the system is interpretable; this also implies model transparency.
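As a minimal illustration of such a transparent mapping, consider a hand-written linear model (the weights and inputs below are made up for the example): each prediction decomposes exactly into per-feature contributions that a human can read off directly.

```python
def linear_predict(weights, bias, x):
    """A fully transparent model: the output is just the bias plus
    per-feature contributions weight_i * x_i, each inspectable."""
    contributions = [w * xi for w, xi in zip(weights, x)]
    return bias + sum(contributions), contributions

weights = [0.8, -1.2, 0.3]   # hypothetical learned coefficients
bias = 0.1
x = [2.0, 1.0, 4.0]          # one example input

prediction, contributions = linear_predict(weights, bias, x)
# prediction ≈ 1.7, built from contributions ≈ [1.6, -1.2, 1.2]
```

This is the transparent end of the trade-off: every part of the input-to-output mapping is visible, at the cost of limited expressive power compared with deep black-box models.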

Recent surveys and theoretical frameworks of explainability focus on five main aspects:

  1. What is an explanation?
  2. What are the purposes and goals of an explanation?
  3. What information does an explanation contain?
  4. What types of explanation can a system give?
  5. How can we evaluate the quality of an explanation?


Current theoretical approaches to explainable AI also pay too little attention to what we believe is a key component: who are the explanations targeted at?

It has been argued that explanations cannot be monolithic: each stakeholder looks for explanations with different objectives, different expectations, different backgrounds, and, of course, different needs. How we approach explainability is the starting point for creating explainable models, and it allows us to set out the following three pillars on which an explanation is built:

  1. Goals of an explanation,
  2. Content of an explanation, and
  3. Types of explanation.


We will discuss these three pillars of explainability in detail in my next article. Thank you for reading; I hope this information has been useful. I would love to hear your feedback and learnings.

Happy Learning !!

Rupa Singh

Founder and CEO, AI-Beehive

www.ai-beehive.com



References:

Christoph Molnar. 2018. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.

Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning.

Erico Tjoa and Cuntai Guan. 2019. A survey on explainable artificial intelligence (XAI): Towards medical XAI. arXiv preprint arXiv:1907.07374.

Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 279–288. ACM.

Andreas Welsch

AI Advisor | Author: “AI Leadership Handbook” | Host: “What’s the BUZZ?” | Keynote Speaker

2y

Rupa — you're mentioning a key aspect that I see is often brushed over in explainable AI discussions, at least the high-level ones. The fact that different personas have different requirements of an explainable AI model is key. While a subject matter expert in finance might want or need to understand why a decision is being proposed, auditors might have a different set of questions about the same business situation.

Kaushik Chaudhuri

Associate Professor at SoME, Shiv Nadar Inst of Eminence deemed to be University, member think tank Global AI Ethics Ins

2y

Thanks for sharing!
