How the novel coronavirus can infect your AI!

We have all heard of (and some of us may even have burned our hands on) viruses that attack computers and software systems and can bring them to their knees. The more insidious of them could even extract a ransom from you in bitcoins! BUT all of these viruses are basically pieces of code, not fragments of DNA!

The novel coronavirus that is causing the Covid-19 pandemic all over the world seems to be affecting, in addition to humans, at least some kinds of AI systems too!

Don't believe me?

Take a look at this (https://www.technologyreview.com/2020/05/11/1001563/covid-pandemic-broken-ai-machine-learning-amazon-retail-fraud-humans-in-the-loop/)

The novel coronavirus has caused us to change our behavior dramatically, and as a result, a host of AI-driven systems that fed on data created by our behavior to support a wide range of decisions are faltering or reaching absurd conclusions.

The effects have been dramatic - causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that "normal" has changed, and many of these models can no longer account for that change.

The problem is twofold: (a) in the first place, it is a mistake to assume you can set up an AI system, walk away, and expect it to work in perpetuity; but more importantly, (b) this has also exposed the brittleness of the "understanding(?)" or "knowledge(?)" that machine learning techniques have been able to build into AI systems.

For example, a supplier of condiments and sauces found that the dramatic change in its customers' behaviour broke the predictive algorithms at the heart of its automated inventory management system. The sales forecasts the company relied on to reorder stock no longer matched what was actually selling.

Another firm, which uses an AI to assess the sentiment of news articles and provide daily investment recommendations based on the results, had to fall back on manual editing: with the news at the moment being gloomier than usual, the AI could no longer generate balanced investment advice.

The problem is that such AIs are generally trained on historical data, with the expectation that the more extensive the data set - the more years of history you go back - the more "intelligent" your AI. However, nobody trains their AIs on freak events like the Great Depression of the 1930s, the Black Monday stock market crash of 1987, or the 2007-2008 financial crisis. And even if we consciously include such historical catastrophes in our training data, we can't prepare for everything. In general, if a machine-learning system doesn't see what it's expecting to see, you will have problems!
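To make the idea concrete, here is a minimal sketch (plain Python, with made-up numbers and an illustrative threshold - none of this comes from any real production system) of how one might notice that live data has drifted away from the data a model was trained on:

```python
import statistics

def drift_score(training_sample, live_sample):
    """Rough z-score of the live mean against the training distribution.

    A large score suggests the incoming data no longer looks like the data
    the model was trained on. (Illustrative sketch only: real drift
    monitors compare whole distributions, not just means.)
    """
    mu = statistics.mean(training_sample)
    sigma = statistics.stdev(training_sample)
    live_mu = statistics.mean(live_sample)
    return abs(live_mu - mu) / sigma

# Daily unit sales of some staple product: a stable history,
# then a hypothetical panic-buying week.
history = [100, 98, 103, 97, 101, 99, 102, 100, 96, 104]
panic_week = [250, 310, 290, 400, 380, 360, 420]

score = drift_score(history, panic_week)
if score > 3:  # illustrative threshold
    print(f"Drift score {score:.1f}: inputs no longer match training data")
```

A system that keeps retraining or forecasting without a check like this will happily extrapolate from a world that no longer exists.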

However, even that inability to train AIs to respond correctly to inputs sufficiently different from the training data is not the real problem. Human beings are not trained to deal with Black Swan events like Covid-19 either, nor can they predict when such a catastrophe will happen, or even recognize it when it hits but starts small. BUT when it snowballs into something so big that it can no longer be mistaken for business as usual, humans do figure out that their usual decision mechanisms are not working and that they need to engage in fire-fighting! AIs don't - at least at the present state of the technology. They can't figure out that the output they are generating makes no sense in such unusual circumstances, and therefore they can't reorient themselves into a fire-fighting mode!
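A crude way to give an automated system even a hint of that fire-fighting instinct is a guardrail that compares its predictions against reality and hands control back to humans when the gap becomes absurd. A minimal sketch, with hypothetical names and a made-up tolerance:

```python
def sanity_check(predicted, actual, tolerance=0.5):
    """Flag a forecast whose error has blown past a tolerance band.

    Illustrative only: a real system would track error over a window
    of observations and use a statistically grounded threshold.
    """
    error = abs(actual - predicted) / max(abs(actual), 1)
    return "ok" if error <= tolerance else "escalate-to-human"

# Normal week: the model is close, so the automation keeps running.
print(sanity_check(predicted=100, actual=105))   # ok
# Covid week: demand quadruples, the model has no idea - hand over to a human.
print(sanity_check(predicted=100, actual=400))   # escalate-to-human
```

This doesn't make the AI any smarter about Black Swans; it only makes sure that when its output stops making sense, a human gets pulled into the loop instead of the nonsense being acted on automatically.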

This brittleness of AIs is of course much written about, and as yet no general solution has emerged. Some interesting approaches have been proposed to address it; one of them is called curriculum learning. But more about that in my next article. For now, do take a look at the link I referred to earlier to see the interesting and amusing ways in which AI applications can fail when Covid-19 forces a significant behavioral change on human society.



