5 Steps to Approach AI Explainability
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of "AI ETHICS with BUDDHIST PERSPECTIVE" | Top 20 Global AI Ethics Leader | Thought Leader | Expert Member at Global AI Ethics Institute
The concept of explainability in AI is often related to transparency, interpretability, trust, fairness, and accountability.
Interpretability and explainability are sometimes used synonymously. However, researchers draw a distinction between the two.
A model is said to be interpretable if:
• It can summarize the reasons for its behavior.
• It can gain the trust of users.
• It can produce insights about the causes of its decisions.
A model is said to be explainable if:
• It has the capacity to defend its actions.
• It can provide relevant responses to questions.
• It can be audited.
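To make the distinction concrete, here is a minimal sketch in Python (assuming scikit-learn is available; the toy dataset and shallow tree are purely illustrative). The printed tree rules summarize the reasons for the model's behavior, while the post-hoc permutation importances provide an artifact against which its decisions can be questioned and audited.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

data = load_iris()
X, y = data.data, data.target

# Interpretable: a shallow tree summarizes the reasons for its behavior
# as human-readable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Explainable: a post-hoc artifact (permutation importance) that lets the
# model's decisions be questioned and audited.
result = permutation_importance(tree, X, y, n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```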
Defining an explanation is the starting point for creating explainable models, and the key aspect to consider when defining an explanation is the subjects involved in it.
Who the explanation is targeted at is one of the most important considerations when creating AI with the capability of explainability.
An explanation is built primarily on three pillars:
• The goals of an explanation.
• The content of an explanation.
• The types of explanations.
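As a minimal sketch, these three pillars could be captured in a simple data structure; the class and field names below are illustrative assumptions, not drawn from any particular library.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    goal: str     # why it is given, e.g. "build trust" or "enable audit"
    content: str  # what is actually conveyed to the explainee
    kind: str     # the type, e.g. "feature-based" or "example-based"

# Hypothetical example of one explanation instance.
exp = Explanation(
    goal="build trust",
    content="The application was declined mainly due to a short credit history.",
    kind="feature-based",
)
print(exp)
```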
What can we do better?
Key aspects for creating better explanations:
• Providing more than one explanation, each targeting a different user group.
• Making explanations simple and understandable, following the principles of human conversation.
Based on their goals, background, and relationship with the product, explainees can be categorized into three main groups.
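As one possible sketch of such audience targeting, the same underlying attribution scores can be rendered differently for each group. The three audiences below (lay users, domain experts, and developers) are a common split in the XAI literature and are assumed here purely for illustration.

```python
# The three audiences are an illustrative assumption, not a fixed taxonomy.
def explain_for(audience: str, attributions: list[tuple[str, float]]) -> str:
    if audience == "lay_user":
        # Simple and conversational: one main reason.
        name, _ = attributions[0]
        return f"The decision was driven mainly by your {name}."
    if audience == "domain_expert":
        # A few ranked factors with their weights.
        ranked = ", ".join(f"{n} ({w:+.2f})" for n, w in attributions[:3])
        return f"Top contributing factors: {ranked}."
    if audience == "developer":
        # Full, auditable detail.
        return "\n".join(f"{n}\t{w:+.4f}" for n, w in attributions)
    raise ValueError(f"unknown audience: {audience}")

# Scores sorted by absolute impact; values are made up for the example.
scores = [("income", 0.42), ("credit_history", -0.31), ("age", 0.07)]
print(explain_for("lay_user", scores))
print(explain_for("domain_expert", scores))
```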
According to Miller, an explanation must follow the four maxims of conversation:
1. Quality: Providing information that is true and well-founded.
a) Avoid saying things you believe to be false.
b) Avoid saying things for which you do not have sufficient evidence.
2. Quantity: Providing the right quantity of information, delivering the right amount of data at the right level of abstraction.
a) Make your contribution as informative as needed.
b) Do not make it more informative than needed.
3. Relation: Providing information that is related to the conversation.
a) Be relevant to the information needed.
b) Be relevant to each stakeholder.
4. Manner: Relates to how information is provided rather than what is provided.
a) Avoid obscurity of expression.
b) Avoid ambiguity.
c) Be brief.
d) Provide information in an orderly way.
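As a rough sketch of how these maxims might shape a generated explanation, the hypothetical helper below enforces quantity with a top-k cut-off and manner through ordering and brevity; quality is assumed upstream, since only attributions that were actually measured should be reported. The attribution scores are assumed inputs, not computed here.

```python
# Hypothetical helper shaping a feature-attribution explanation per the maxims.
def render_explanation(attributions: dict[str, float], k: int = 3) -> str:
    # Quantity: as informative as needed, but not more -- keep only top-k factors.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    # Manner: orderly (sorted by impact), brief, and unambiguous wording.
    lines = [
        f"{i}. {name} {'raised' if w > 0 else 'lowered'} the score by {abs(w):.2f}"
        for i, (name, w) in enumerate(ranked, start=1)
    ]
    return "\n".join(lines)

# Made-up scores for illustration.
print(render_explanation({"income": 0.42, "credit_history": -0.31,
                          "age": 0.07, "zip_code": 0.02}))
```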
Conclusion: It is difficult to approach explainable AI in a way that satisfies all the expected requirements at the same time; hence, different explanations for each need and user profile should be considered. Because explanations are multifaceted, explainability cannot be achieved with one single, static explanation. This calls for a system that targets explanations to different types of users, considers their different goals and objectives, and provides them with relevant, customized information.
When explanations are user-centric, explainability becomes easier to approach than when we try to create explainable systems that fulfill all the requirements of a general explanation.
Thank you for reading this article. I hope it was helpful.
I would love to hear your feedback and learnings.
Happy learning!
Rupa Singh
Founder and CEO (AI-Beehive)
www.ai-beehive.com