5 Steps to Approach AI Explainability

The concept of explainability in AI is often related to transparency, interpretability, trust, fairness, and accountability.

Interpretability and explainability are sometimes used synonymously. However, researchers draw a distinction between the two.

A model is said to be interpretable if:

• It can summarize the reasons for its behavior.

• It can gain the trust of users.

• It can produce insights about the causes of its decisions.

A model is said to be explainable if:

• It has the capacity to defend its actions.

• It can provide relevant responses to questions.

• It can be audited.

Defining an explanation is the starting point for creating explainable models, and the key aspect to consider when defining one is the subjects involved in any explanation:

  1. The Explainer (the system): The one who provides the explanation.
  2. The Explainee (the human): The one who receives the explanation.

Who the explanation is targeted at is one of the most important considerations when creating AI with the capability of explainability.

An explanation is built primarily on three pillars:

• Goals of an explanation.

• Content of an explanation.

• Types of explanations.



What can we do better?

Key aspects to create better explanations:

• Providing more than one explanation, targeting different user groups.

• Making explanations simple and understandable, following the principles of human conversation.

Based on their goals, background, and relationship with the product, explainees can be categorized into three main groups (a brief sketch of audience-specific explanations follows the list below):


  1. Software developers, data analysts, investigators in AI.
  2. Specialists in the domain such as physicists or lawyers.
  3. The final recipient of the decision, e.g., a person whose loan has been approved or rejected.
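
For illustration only (this sketch is not from the article), the snippet below shows one way the same underlying model output could be rendered differently for each of these three groups. The feature names, contribution values, and wording are hypothetical assumptions:

```python
# Minimal sketch: one underlying result, three audience-specific explanations.
# Feature names and contribution values below are hypothetical illustrations.

def explain(contributions: dict[str, float], decision: str, audience: str) -> str:
    """Format the same model output differently for each explainee group."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

    if audience == "developer":
        # Full technical detail: every feature with its signed contribution.
        lines = [f"{name}: {weight:+.3f}" for name, weight in ranked]
        return f"decision={decision}\n" + "\n".join(lines)

    if audience == "domain_expert":
        # Domain-level reasons: only the strongest factors, phrased as reasons.
        top = ranked[:3]
        reasons = ", ".join(f"{name} ({'supports' if w > 0 else 'opposes'} approval)"
                            for name, w in top)
        return f"Decision '{decision}' was driven mainly by: {reasons}."

    # End user (default): brief, plain-language statement of the main reason.
    name, _ = ranked[0]
    return (f"Your application was {decision}. "
            f"The most important factor was your {name.replace('_', ' ')}.")


# Hypothetical loan-decision example.
contribs = {"credit_history": -0.42, "income": +0.18, "loan_amount": -0.07}
for group in ("developer", "domain_expert", "end_user"):
    print(explain(contribs, "rejected", group), "\n")
```

In a real system the contributions would come from an established attribution technique; the point of the sketch is only that the same result is presented with different content and language per audience.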



According to Miller, who builds on Grice's maxims of conversation, an explanation must follow four maxims (a small sketch applying them follows this list):

  1. Quality: Making sure that the content of the explanation is of high quality.

a) Avoid saying things you believe to be false.

b) Avoid saying things for which you do not have sufficient evidence.

  2. Quantity: Providing the right quantity of information, at the right level of detail and abstraction.

a) Make your contribution as informative as needed.

b) Do not make it more informative than needed.

  3. Relation: Providing information that is relevant to the conversation.

a) Be relevant to the information needed.

b) Be relevant to each stakeholder.

  4. Manner: Relates to how information is provided rather than what is provided.

a) Avoid obscurity of expression.

b) Avoid ambiguity.

c) Be brief.

d) Provide information in an orderly manner.
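
As a rough illustration (again, not part of the original article), the sketch below applies the Quality, Quantity, and Manner maxims to a feature-based explanation. The threshold, feature names, and values are assumptions made for the example:

```python
# Minimal sketch: applying the maxims to a feature-based explanation.
# All thresholds, feature names, and values are hypothetical.

def concise_explanation(contributions: dict[str, float],
                        top_k: int = 3,
                        min_weight: float = 0.05) -> str:
    """Return a short, ordered explanation.

    Quality:  drop features whose contribution is too small to support a claim.
    Quantity: report at most `top_k` features, no more than needed.
    Manner:   order by importance and keep the wording brief and unambiguous.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    kept = [(name, w) for name, w in ranked if abs(w) >= min_weight][:top_k]
    parts = [f"{name} ({'+' if w > 0 else '-'})" for name, w in kept]
    return "Main factors, in order of importance: " + ", ".join(parts) + "."


# Hypothetical example.
print(concise_explanation(
    {"credit_history": -0.42, "income": +0.18, "loan_amount": -0.07, "age": 0.01}))
# -> Main factors, in order of importance: credit_history (-), income (+), loan_amount (-).
```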


Conclusion: It is difficult to approach explainable AI in a way that fits all the expected requirements at the same time; hence, different explanations for each need and user profile should be considered. Because explanations are multifaceted, explainability cannot be achieved with one single, static explanation. This calls for a system that targets explanations to different types of users, considering their different goals and objectives, and providing them with relevant, customized information.

When explanations are user-centric, explainability becomes easier to approach than when we try to create explainable systems that fulfil all the requirements of a general explanation.


Thank you for reading this article. I hope it was helpful.

I would love to hear your feedback and learnings.

Happy Learning !!


Rupa Singh

Founder and CEO, AI-Beehive

www.ai-beehive.com
