COVID-19: Why Continuous Intelligence is needed now!
This article describes the challenges and solutions of using AI to make accurate predictions in a fast-changing environment like COVID-19, where data is highly dynamic and there is no history to learn from.
AI challenges of a highly dynamic environment
COVID-19 poses great challenges to classical approaches to AI when precise predictions must adapt quickly to new patterns. During the pandemic data grows rapidly every day, models have to be continuously re-trained, there is no data history, and correlations are largely unknown. New predictions always raise the question of "why" in order to understand the insights gained by the AI, and those predictions must always be validated. COVID-19 is representative of many similar application areas of AI in which, without an automated learning and validation process, one cannot keep up with the dynamic development and ends up making predictions based on outdated models; this decay has to be detected early with suitable methods.

To enable Continuous Intelligence, AI systems need to learn continuously from new data. The entire machine learning process (feature engineering, training, validation, deployment, monitoring) must be passed through again and again, and the automation of all these steps, called operationalization, is a prerequisite for an efficient AI system.

Monitoring the predictions with regard to their expected quality in the operational application is essential in two respects. On the one hand, the dynamics of the data must be constantly checked to see whether the data or prediction patterns have changed in such a way that the previous model must be replaced by a new, better one. On the other hand, such deviations must be detected quickly, and here too operationalization plays a decisive role.
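As a minimal sketch of what such drift monitoring could look like in practice (the function names, feature names, and threshold below are illustrative assumptions, not taken from any specific platform), a two-sample statistical test can compare live data against the training-time distribution and flag features whose behavior has changed:

```python
# Minimal drift-monitoring sketch: compare incoming feature values
# against the training-time distribution with a two-sample KS test.
# All names and thresholds here are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # below this, treat the feature as drifted

def detect_drift(train_col: np.ndarray, live_col: np.ndarray) -> bool:
    """Return True if the live data no longer matches the training data."""
    statistic, p_value = ks_2samp(train_col, live_col)
    return p_value < DRIFT_P_VALUE

def monitor(train: dict[str, np.ndarray], live: dict[str, np.ndarray]) -> list[str]:
    """Check every feature; return those whose distribution drifted."""
    return [name for name in train if detect_drift(train[name], live[name])]

# Illustrative usage with synthetic data:
rng = np.random.default_rng(0)
train = {"new_cases": rng.normal(100, 10, 1000)}
live = {"new_cases": rng.normal(150, 10, 200)}  # shifted distribution
print(monitor(train, live))                      # ['new_cases']
```

A drifted feature is then the trigger for the operationalized loop described above: automated re-training, validation, and redeployment.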
Learning from the insights of AI
AI is able to generate very good prediction models based on a large number of parameters. But especially with topics like COVID-19, the question of "why" a prediction was made is always asked at the same time, so that the appropriate measures can be initiated. An explanation component must therefore be part of an AI system; it makes it possible for humans to learn from the AI. A distinction is made between local and global explainability: global explainability is about interpreting the overall behavior of a model, while local explainability provides information about why a certain prediction was made in a certain case.
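As a hedged illustration of this distinction (the article does not prescribe a tool; SHAP is one common choice, and the model and data below are stand-ins), the same attribution values can serve both purposes: aggregated over many cases they describe the model globally, while a single row explains one individual prediction:

```python
# Sketch of local vs. global explainability with SHAP.
# The model, data, and target relationship are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                     # stand-in feature matrix
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=500)  # stand-in target

model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # shape: (n_samples, n_features)

# Global explainability: which features drive the model overall?
global_importance = np.abs(shap_values).mean(axis=0)

# Local explainability: why did the model predict this value for case 0?
local_contributions = shap_values[0]
```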
Governing AI-based measures
If measures are to be initiated automatically on the basis of the AI's predictions, another aspect comes into play: AI governance. This is not about blindly trusting the results of the AI, but about setting clear rules for the automated implementation of measures. Business Rules Management is an ideal method for implementing effective governance of AI predictions and must be addressed when implementing AI-based Continuous Intelligence.
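A minimal sketch of such a governance layer, assuming hypothetical rule thresholds and prediction fields (none of these come from the article), would gate automated execution behind explicit, auditable rules and route everything else to a human:

```python
# Sketch of a business-rules layer gating automated actions on AI output.
# Rule thresholds and the Prediction fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "fraud" / "no_fraud"
    confidence: float  # model's confidence in the label
    amount: float      # business context for the decision

def decide(pred: Prediction) -> str:
    """Apply explicit, auditable rules before any automated measure."""
    if pred.confidence < 0.90:
        return "route_to_human_review"      # rule 1: low confidence
    if pred.label == "fraud" and pred.amount > 10_000:
        return "route_to_human_review"      # rule 2: high-stakes case
    return "execute_automatically"          # within governed limits

print(decide(Prediction("fraud", 0.95, 500.0)))    # execute_automatically
print(decide(Prediction("fraud", 0.95, 50_000)))   # route_to_human_review
```

The point of the rules is that they are explicit and versioned, so the scope of automation can be widened or narrowed without retraining the model.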
Operationalizing AI is key to adopting Continuous Intelligence
The COVID-19 example shows, representative of many application areas such as payment fraud, multi-channel pricing, and credit decisioning, that implementing an efficient AI system involves many different interdependent requirements that need to be operationalized. These are best handled on an integrated decision automation platform that combines human knowledge and machine intelligence to enable a robust, agile, and scalable solution to these problems.