Is explainable Deep Learning important?
Ahmad Haj Mosa, PhD
Director @ PwC | Head of AI CoE | Co-founder of Digicust | PhD, Machine Learning, GenAI, Driving Innovation | TEDAI Panelist
Recently, artificial intelligence (AI) has become one of the fastest-emerging technologies. It is hard to predict how complex and advanced AI will be in the coming decades, but it is easier to identify what will help AI integrate faster into our daily activities in business, government, and society. According to a recent study from PwC [1], building explainable, transparent, and responsible AI is one of the eight most important AI trends in 2018.
Deep learning (DL) is one of the fastest-growing fields in artificial intelligence. Its importance comes from its capability to learn high-level features that provide a higher level of abstraction over the raw attributes. Deep learning has been very successful in computer vision, and one main reason for this success is that visual systems are easy to analyze: the learned deep features can be visualized, which makes vision-based deep learning models relatively white-box.
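To make the point about visualization concrete, here is a minimal sketch (my illustration, not part of the original argument) of how the first-layer filters of a pretrained CNN can be rendered directly as small images. It assumes PyTorch, torchvision (0.13 or later), and matplotlib are installed and that the ImageNet weights can be downloaded.

```python
# Hedged sketch: render the first-layer convolution filters of ResNet-18.
# These filters typically look like edge and colour detectors, which is
# one reason vision models are comparatively easy to inspect.
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()            # shape: (64, 3, 7, 7)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    f = (f - f.min()) / (f.max() - f.min())      # rescale to [0, 1] for display
    ax.imshow(f.permute(1, 2, 0).numpy())
    ax.axis("off")
fig.suptitle("First-layer filters of ResNet-18")
plt.show()
```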
However, this is not the case for non-visual machine learning problems such as pricing, value added tax (VAT), or payroll cases, where it is hard to visualize and track the hidden rules a deep learning model has learned. But is it important to open the black box?
Despite the remarkable results of DL models, there is always a risk that they produce delusional and unrealistic outputs, for reasons such as underfitting, overfitting, or incomplete training data [2]. One example is the famous Move 78 played by the professional Go player Lee Sedol, which triggered delusional behavior in AlphaGo [3]. A more serious example is the erroneous behavior that DeepXplore [2] found in the Nvidia DAVE-2 self-driving car platform, where the system made two different decisions for the same input image when the only difference was the brightness level. This is of course extremely dangerous and would break the trust between users/investors and AI.
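As a hedged illustration of the kind of check described above (not the DeepXplore implementation itself), the sketch below tests whether a classifier's decision survives a simple brightness change; `model` and `image` are placeholders for any PyTorch classifier and input tensor with values in [0, 1].

```python
# Hypothetical consistency check: the same image at two brightness levels
# should not flip the model's decision.
import torch

def brightness_consistent(model, image, factor=1.3):
    """Return True if the predicted class is unchanged after brightening."""
    model.eval()
    with torch.no_grad():
        original = model(image.unsqueeze(0)).argmax(dim=1)
        brighter = model((image * factor).clamp(0, 1).unsqueeze(0)).argmax(dim=1)
    return bool((original == brighter).all())
```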
The bottom line: AI researchers now face a trade-off between developing a super-powerful AI with a low level of explainability, which would keep it encapsulated inside research labs, and developing a highly explainable and controllable AI that society can trust, and that would therefore integrate faster into our daily life activities.