With a Hammer in Hand Everything Looks Like a Nail: Story of AI

I have no doubt that AI is making huge strides and has been occupying some of the best minds of this century. However, the hype around AI is making equally huge strides. Some of the hype, one could argue, is inevitable, but one needs to be a little watchful because it is entering our everyday lives. Many of us are in positions, in both our professional and personal lives, where we need to evaluate the genuineness of claims to using AI, and if you can’t separate the hype from the truth, you will end up spending money on fake products and services. They say that when it comes to capital markets, you are in the midst of a bubble when your driver begins to make stock recommendations to you. We can see the equivalent happening in AI now.

Let me first take a couple of examples of genuine AI products.

Some years back, one of the co-founders of ‘Liv.ai’, a Bengaluru-based AI start-up, met me and demonstrated their product, which used NLP to convert speech to text in multiple Indian languages. I had always known that text to speech was easy, but converting speech to text in multiple languages was a hard problem to solve. I was a bit skeptical, more so because this was an Indian company. But when I saw the product, I was quite blown away. Before I could even seriously think of recommending it to someone who would see this start-up as a great investment opportunity, Flipkart acquired it and built their shopping assistant, ‘Saathi’, with text and voice interfaces to support shoppers in the smaller towns of India.

Facial recognition is another problem that has been solved and already has wide applications that touch everyday lives, including unlocking one’s smartphone. Work is in progress on image recognition applications in other fields, including horticulture.

Let me now take some examples of what I would call fake products riding on the AI wave.

A vendor once approached us claiming they had a test that could predict criminal tendency in an individual with an accuracy of 60%, and suggested we consider this tool to evaluate our delivery boys! Very tempting for a starry-eyed, gullible user sold on AI.

Let’s dig a little deeper and subject this claim to a rigorous test. A 60% accuracy means a 40% error rate, i.e. a 40% probability that the test would classify someone with no criminal tendency as having one. Let’s now assume that the prevalence of criminal tendency in society is 2%. In a population of 100, you would have 98 people without a criminal tendency and 2 with it. The test, however, would flag about 41 people (40% of 98 is roughly 39 false positives; add the 2 real cases and you have 41). So 39 of the 41 people flagged, about 95%, are flagged wrongly; once you account for the base rate, only about 5% of the flags are correct! Do you need anything else to decide whether you should pay this vendor and run all your new hires through a test like this?
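To make this base-rate arithmetic concrete, here is a minimal sketch in Python. The numbers are purely the illustrative ones from the paragraph above: 2% prevalence, a 40% false-positive rate, and the generous assumption that every genuine case is caught.

```python
# Minimal sketch of the base-rate arithmetic above (illustrative numbers only).
population = 100              # people screened
prevalence = 0.02             # assume 2% genuinely have the "tendency"
false_positive_rate = 0.40    # the 40% error rate on people with no such tendency
true_positive_rate = 1.0      # generously assume every real case is caught

real_cases = population * prevalence                # 2
innocent = population - real_cases                  # 98
false_alarms = innocent * false_positive_rate       # ~39
caught = real_cases * true_positive_rate            # 2

flagged = false_alarms + caught                     # ~41
precision = caught / flagged                        # ~0.05

print(f"Flagged: {flagged:.0f}, of which false alarms: {false_alarms:.0f}")
print(f"Share of flags that are correct: {precision:.0%}")  # roughly 5%
```

The same calculation is why any screening test for a rare trait needs a far lower false-positive rate than its headline ‘accuracy’ suggests.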

Sometime in April 2019, in an interview with CNBC, Ginni Rometty claimed that IBM’s artificial intelligence can predict, with 95% accuracy, which workers are about to quit their jobs. Rometty would not explain the “secret sauce” that allowed the AI to work so effectively in identifying workers about to jump; she would only say that its success comes from analyzing many data points. IBM is offering this tool to external clients! In my opinion this claim is totally untrue and is driven by a vested interest!

Those familiar with ‘chaos theory’ and the ‘butterfly effect’ would understand this well. Some systems are simply not amenable to prediction, for the simple reason that even the minutest variation in the initial conditions can result in a huge variation in the end results. In other words, a very small variation in the initial conditions does not produce a correspondingly small variation in the outcome! Weather is such a system, and hence one cannot forecast it accurately for more than a few days, irrespective of how much data you gather and feed into a supercomputer. This phenomenon is more commonly referred to as the ‘butterfly effect’, where the metaphorical flutter of a butterfly’s wings in the Amazon rainforest could cause a cyclone in the Bay of Bengal. Anyone who claims they can connect the flutter of the butterfly to the cyclone, and can actually predict it, is talking rubbish.
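For readers who want to see this sensitivity for themselves, here is an illustrative Python sketch. It uses the logistic map, a textbook example of deterministic chaos (not any actual weather model): two starting points that differ by one part in a million diverge completely within a few dozen steps.

```python
# Illustrative sketch: sensitive dependence on initial conditions in the
# logistic map x -> r * x * (1 - x), a standard example of deterministic chaos.
r = 3.9                          # parameter value in the chaotic regime
x_a, x_b = 0.500000, 0.500001    # two initial conditions differing by 1e-6

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")

# After 40-50 steps the two trajectories bear no resemblance to each other,
# even though the system is fully deterministic and the inputs were almost identical.
```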

An AI vendor once confidently bragged to us that their tool could look at a job description, evaluate 100 CVs and pick the 5 best suited for the job. When we probed a little deeper into how their algorithm worked, they resorted to the standard shenanigan of ‘we use a deep learning algorithm’. Probe a little further into what deep learning really means, and they are left gasping. When we got this tool to actually look at a hundred-odd CVs and shortlist the best 5, we found zero overlap with what a good recruiter and hiring manager shortlisted!

Professor Arvind Narayanan of Princeton has written: “Much of what’s being sold as ‘AI’ today is snake oil — it does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?”

He has classified AI into three broad buckets:

  1. Areas where AI is genuine and making rapid progress like face recognition, medical diagnosis from scans, speech to text, reverse image search etc.
  2. Areas that are imperfect but improving like detection of spam, hate speech, copyright violation etc.
  3. Fundamentally dubious areas like predicting job success, recidivism, at-risk kids etc.

The last category, which is really about predicting social outcomes, is essentially the snake oil being sold to gullible users and used as a pretext for collecting large amounts of data. Users are made to believe that magical insights can somehow be extracted from large amounts of data, and that the more the data, the better the insights! Professor Narayanan observes that there has been no real improvement in the third category, no matter how much data you throw at it; he further shows that for predicting social outcomes, AI does no better than manual scoring using just a few features. I have no doubt that the inherent limitation in this area is imposed by chaos theory.

In conclusion

‘Data rich’ is a phrase that has been misused of late to create false expectations. The protagonists of predicting social outcomes will no doubt claim that it is only a matter of time before the predictions get better. This is untrue. Some things get better with time, but some ideas have inherent limitations. If small differences in initial conditions, such as those due to rounding errors in numerical computation, can yield widely diverging outcomes even for deterministic systems, where the approximate present cannot determine the approximate future, imagine how much more indeterminate the predictions would be for inherently non-deterministic systems like social behaviors and outcomes! Just as Heisenberg’s uncertainty principle places fundamental limits at the atomic level, chaos theory places a similar limit on areas like predicting social outcomes.

Krishna K.

Product Development | Business Head | IT Sales | Technology Leadership | Value Innovation | Business Analysis | Venture Capital Outreach | Technology Consulting

3y

A wonderful read. An eye opener for the starry-eyed, gullible buyer. This definitely sheds light on the “AI is going to take away all jobs” line which we tend to hear more these days.

Raghu Kaimal

HR Technology | HCM | Employee Experience Tech | People Analytics | Workforce Analytics | Future of HR | Future of Work

4y

Abhishek Kaushik, an interesting read by Hari T.N. This resonates with your argument about using tech jargon carefully, keeping things simple, and trying to solve a genuine problem of our customers in an effective way. I am sure you are able to achieve that at WeCP (We Create Problems).

Raghu Kaimal

HR Technology | HCM | Employee Experience Tech | People Analytics | Workforce Analytics | Future of HR | Future of Work

4y

Very thoughtful post, Hari T.N. Loved it.

Thanks for sharing this thought provoking article, Hari. Even positive examples (such as facial recognition) have a slippery slope. I call this bias one of ‘racial recognition’ (which extends to gender). People of different cultures are more likely to be inaccurately identified by AI algorithms. The notorious outcome, i.e. the 76.5% prediction that Oprah Winfrey is male, is just one such example: https://time.com/5520558/artificial-intelligence-racial-gender-bias/

Ankur Pandey

Founder at Intro AI (AI agents for high intent prospecting), LongShot AI (AI CoPilot for content teams)

4y

So true, sir. Ironically, the ill-posed / non-deterministic type of problems are the ones which are most interesting. The art is in how to break them down into solvable chunks.
