Busting AI Myths - #3
Kimmo Kauhanen
Head of Supervisory Technologies division @ Finanssivalvonta - Finnish Financial Supervisory Authority
The next myth is a very interesting one since it involves the human mind:
AI Myth #3: All black-box AI needs to be explained.
The third myth requires some background. An AI system uses a model (an algorithm) to make decisions, predictions, or analyses, and an organization must find and select the model best suited to its problem. (Here you can find a good overview of the 10 most popular AI models: https://dzone.com/articles/top-10-most-popular-ai-models)
Some models, such as Naïve Bayes, logistic regression, and decision trees, are high on explainability but lower on accuracy. These can be called interpretable models. Other models, such as bagging, random forests, and neural networks, are very accurate but hard or impossible to explain. Models that are impossible (or at least very difficult) to explain are called black boxes.
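To make "interpretable" concrete, here is a minimal sketch (assuming Python with scikit-learn and its bundled breast-cancer dataset; both are illustrative choices, not part of the original article) that fits a logistic regression and reads off one coefficient per feature, exactly the kind of compact summary a black box cannot offer:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

# Scale the features, then fit a plain logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Interpretability in practice: one weight per feature. The sign and
# magnitude show how each input pushes the prediction toward a class.
# A random forest of hundreds of trees has no comparably compact summary.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda p: -abs(p[1]))[:5]
for name, weight in top:
    print(f"{name:25s} {weight:+.2f}")
```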
For many people, black-box AI models are psychologically difficult to accept because they are used to understanding the reasoning and logic behind a system and its results. With a black-box model, it is impossible to fully explain the reasoning behind an individual prediction.
How can you tackle this situation? You need to build trust within the organization with these actions:
- Determine the need for transparency,
- Give stakeholders visibility into training data, so they see what kind of data is used to train AI, and
- Empower the business with a choice between an explainable and an accurate model (see the sketch after this list).
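To ground that third action, here is a hedged sketch (again assuming scikit-learn; the dataset and model settings are illustrative assumptions) that cross-validates an interpretable model and a black-box model side by side, so stakeholders can see the accuracy trade-off they are being asked to weigh:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Two candidates: one the business can read, one it cannot.
candidates = {
    "logistic regression (interpretable)":
        make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest (black box)":
        RandomForestClassifier(n_estimators=300, random_state=0),
}

# 5-fold cross-validated accuracy gives a like-for-like comparison.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

If the accuracy gap is small, the interpretable model may be the easier choice to defend; if it is large, that is exactly the trade-off the business should decide on.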
Any thoughts?
Coming up in the next article: Myth #4: AI can be free of bias.
Here are quick links to previous myths:
#ai #digitalization #digitaltransformation #digitalleadership #digital #machinelearning #deeplearning #deepneuralnetworks #aistrategy #strategy #aimodels