Explainable Artificial Intelligence (XAI)

In the last decade, there has been an unprecedented rise in the adoption of Machine Learning to address challenges that were earlier seen as impossible for a machine to solve. This has been made possible by rapid developments in parallel computing using hardware accelerators such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). These accelerators can efficiently, in both time and space, perform the complex mathematical calculations these problems require. Machine Learning is the field of Artificial Intelligence that applies various techniques and algorithms to learn from a large pool of data by identifying patterns, and then applies that learning to make inferences on previously unseen data.

Deep Learning is a subfield of Machine Learning that relies on large data sets to learn patterns using artificial neural networks. These networks are usually multi-layered (often hundreds of layers with millions of learnable parameters), with each layer automatically learning different patterns or features of the data set during training.

Deep Learning has opened up a wide range of Machine Learning applications in Computer Vision (object detection, image classification), Natural Language Processing (speech recognition), and beyond. It has been adopted with great success in e-commerce, healthcare, industrial robotics, customer service, banking, automotive, and other industries.

Even with the high training/validation accuracy exhibited by state-of-the-art DNN models, there have been multiple recent instances of AI failures involving human life, which have forced researchers to examine what a DNN model actually learns internally. The inherent nature of Deep Neural Networks (DNNs), with their many cascaded layers and large number of learnable parameters, makes a DNN a "black box": developers struggle to understand which features of the data it is learning, and end-users cannot comprehend its decision-making.

Motivated by these needs, the US Defense Advanced Research Projects Agency (DARPA) initiated a program to develop tools that can explain the learning and decision-making process of an AI model. This new field of AI is called eXplainable Artificial Intelligence (XAI). XAI aims to develop methods and techniques that help present the results produced by AI models in a human-understandable format.

Artificial Intelligence is the broader field, of which Machine Learning is a part, in which a system learns, comprehends, and interacts with the real-world environment; in doing so, the system makes predictions and takes decisions. Explainable AI is the area of Machine Learning that explains the reasoning behind the predictions and decisions made by an AI system. It belongs to the broader field of Interpretability, which aims to explain an AI system's decisions in a human-understandable format, thus helping to improve the transparency, fairness, trustworthiness, and accountability of AI systems.


Explainability approaches are being employed in a wide range of artificial intelligence applications, including natural language processing (NLP), computer vision, medical imaging, health informatics, and others.
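One widely used, model-agnostic explainability approach is permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows. The sketch below is a minimal illustration with a made-up toy "black box" (the `model` function and the synthetic data are assumptions for the example, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends only on feature 0.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

def model(X):
    # Stand-in for a trained black-box predictor we want to explain.
    return 3.0 * X[:, 0]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(model, X, y, n_repeats=10):
    """Importance of feature j = average error increase when column j is shuffled."""
    baseline = mse(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            scores.append(mse(y, model(Xp)) - baseline)
        importances.append(float(np.mean(scores)))
    return importances

imp = permutation_importance(model, X, y)
# imp[0] is large (feature 0 drives predictions); imp[1] is ~0 (ignored feature).
```

A large importance score tells a human reviewer which inputs the model actually relies on, which is exactly the kind of human-understandable explanation XAI aims for.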

Benefits of XAI:

· Minimizing Risk of Errors: Certain AI applications are life-critical, so any wrong decision is a direct threat to human life. Understanding and correcting erroneous results helps improve the model and thereby reduces that risk.

· Lowering Model Bias: AI models are susceptible to bias, for example with respect to gender, race, or environment. XAI helps identify such bias so the model can be improved to reduce it.

· Compliance Regulation: XAI gives developers confidence that a model abides by the strict regulations of compliance bodies before it is deployed to a production environment.
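As a concrete illustration of the bias point above, one simple audit is to compare a model's approval rates across demographic groups (a demographic-parity check). The data below is invented for the example; in practice the decisions would come from the model under audit:

```python
def selection_rate(decisions):
    """Fraction of positive (e.g., 'approved') decisions in a group."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

# A large gap between the groups' selection rates flags potential bias
# worth investigating with XAI tools.
gap = abs(selection_rate(group_a) - selection_rate(group_b))
```

Such a check does not by itself explain *why* the model is biased, but it tells developers where to point explanation methods next.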

Written by: Prof. Shakti Kinger ([email protected]) & Prof. Rashmi Rane ([email protected])
