Is Your AI Model Explainable?
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of "AI ETHICS with BUDDHIST PERSPECTIVE" | Top 20 Global AI Ethics Leader | Thought Leader | Expert Member at Global AI Ethics Institute
If a machine learning model performs well, why don't we just trust it and accept its decisions?
As AI systems increasingly proliferate in high-stakes domains such as healthcare, finance, aviation, automated driving, manufacturing, and law, it becomes ever more crucial that these systems be able to explain their decisions to diverse end-users in a comprehensible manner.
Tech giants like Google, Facebook, and Amazon are collecting and analyzing ever more personal data through smartphones, personal assistants such as Siri and Alexa, and social media, allowing them to model and predict individuals better than other people can. There is also a growing demand for explainable, accountable, and transparent AI systems, as tasks with higher sensitivity and social impact are increasingly entrusted to AI services.
Currently, many such AI systems are non-transparent with respect to their working mechanisms, which is why they are called black-box models. This black-box character poses severe problems in a number of fields, including the health sciences, finance, and criminal justice, and fuels the demand for explainable AI.
Goals of Explainable AI:
In general, humans are reluctant to adopt techniques that are not directly interpretable, tractable, and trustworthy. The danger lies in making and acting on decisions that are not justifiable, legitimate, or explainable. Explanations supporting the output of a model are crucial, e.g., in precision medicine, where experts and end-users require far more detailed information from the model than a simple binary prediction to support their diagnosis.
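To illustrate what "more than a binary prediction" can look like in practice, here is a minimal sketch. It assumes Python with scikit-learn, and the dataset and model are illustrative choices, not a prescription: for a linear model, each coefficient-times-input term is that feature's contribution to the log-odds, so a prediction can be accompanied by the inputs that pushed it up or down.

# Minimal sketch: turn a bare binary prediction into per-feature
# contributions. Assumes scikit-learn; dataset/model are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

x = X[0]                                   # one patient-like record
print("prediction:", model.predict([x])[0])

# For a linear model, the log-odds are intercept + sum(coef * x),
# so each coef * x term is that feature's contribution.
contrib = model.coef_[0] * x
top = np.argsort(np.abs(contrib))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {contrib[i]:+.3f}")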
There is a trade-off between the performance of a model and its transparency. However, improved understanding and explainability of a system also make it possible to identify and correct its deficiencies. If a system is not opaque, and one can understand how inputs are mathematically mapped to outputs, then the system is interpretable; this also implies model transparency.
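To make the trade-off concrete, here is a minimal sketch (again assuming Python with scikit-learn and its built-in breast-cancer dataset, both illustrative assumptions). It fits a shallow decision tree, whose mapping from inputs to outputs can be printed and audited rule by rule, alongside a random forest, which typically scores higher but offers no comparably direct explanation of any single decision.

# Minimal sketch of the performance/transparency trade-off.
# Assumes scikit-learn; dataset and models are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Transparent model: a shallow tree whose decision rules
# can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))
print("tree accuracy:", tree.score(X_test, y_test))

# Black-box model: usually more accurate, but its 100 trees give
# no single human-readable rule set behind any one prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))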
Recent surveys and theoretical frameworks of explainability focus on five main aspects.
Current theoretical approaches to explainable AI also reveal that not enough attention is paid to what we believe is a key component: to whom are the explanations targeted?
It has been argued that explanations cannot be monolithic: each stakeholder looks for explanations with different objectives, different expectations, different backgrounds, and, of course, different needs. How we approach explainability is the starting point for creating explainable models, and it allows us to set the following three pillars on which an explanation is built:
• Goals of an explanation,
• Content of an explanation, and
• Types of explanation.
We will discuss these three pillars of explainability in detail in my next article. Thank you for reading. I hope this information has been useful, and I would love to hear your feedback and learnings.
Happy Learning!!
Rupa Singh
Founder and CEO (AI-Beehive)
www.ai-beehive.com