What does a Black Swan Event like Covid-19 do to your AI solution?

When you look at AI solutions that predict market dynamics in order to optimize your pricing, improve your logistics, or ensure your warehouse is stocked with the right equipment at the right time, the underlying algorithms leverage large amounts of historical data. With sufficient data, in both quantity and quality, good data scientists can build models that take seasonality, competitive actions, demand-generation activities, and more into account, to either make good decisions directly or enable you to make them, increasing revenue or lowering cost.


But what happens to your data, and the decisions built on it, when a Black Swan event like Covid-19 occurs? Your historical data can obviously no longer support adequate predictions. Yet many decision processes are automated on top of AI model output and will continue operating business as usual. The models may even reinforce a harmful dynamic, something you can already observe without the impact of a Black Swan event. An often-quoted example is trading algorithms on a stock market accelerating a downturn because they all pump out sell orders at the same time. It also reminds me of a post by Michael Mederer on Negative Digital Amplification, which highlights how the algorithms that decide which articles you are presented with when reading news online amplify certain thoughts, behaviors, and dynamics.

 

Enough digression though; let's come back to the impact of a Black Swan event on your data and AI models. You are facing three major algorithmic crises: (1) current decisions are wack, (2) the data history you are now starting to build is wack, and (3) ramping back to normal operations is a nightmare. The model will keep making decisions unless a manual intervention stops it (i.e., someone 'remembers' that there is an algorithm making decisions in an environment where it is probably driving wrong decisions now), or unless you include a non-trivial second layer of AI that monitors whether the system is still operating within its rules and parameters. A manual intervention followed by manual decision making would clearly be the opposite of what you wanted to achieve with AI in the first place.

While a temporary move to alternative decision-making models probably cannot be avoided, you can do something about the first step. AI solutions work and deliver good results when operating within the rule book in a normal environment. A second layer of AI is necessary, though, to ensure the model is not drifting. It includes elements that define how to recognize a Black Swan event, how to build an "alternative history", and finally how to ramp your AI model down and back up in such situations.
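To make the "second layer" idea concrete, here is a minimal sketch of such a guardrail in Python. It assumes a single numeric input signal (say, daily demand) and flags when recent values drift far outside the range seen in the model's training history. The function names and the three-sigma threshold are illustrative assumptions, not a production design; a real monitor would track many features and use more robust statistics.

```python
import statistics

def drift_score(baseline, recent):
    """Crude drift signal: how many baseline standard deviations
    the recent mean has moved away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma

def monitor(baseline, recent, threshold=3.0):
    """Return 'halt' when the inputs have drifted far outside the
    range the model was trained on, else 'ok'. In a real system,
    'halt' would pause automated decisions and alert a human."""
    return "halt" if drift_score(baseline, recent) > threshold else "ok"

# Normal week: demand fluctuates around the historical mean.
history = [100, 102, 98, 101, 99, 103, 97, 100]
print(monitor(history, [101, 99, 102]))   # prints "ok"

# Black-swan week: demand collapses, the guardrail trips.
print(monitor(history, [40, 35, 42]))     # prints "halt"
```

The point of the sketch is the control flow, not the statistics: the second layer sits outside the predictive model, watches its inputs (or outputs) against the history it was trained on, and stops business-as-usual automation when the world no longer looks like that history.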
