Why we should be narrow-minded about AI
Artificial Intelligence is making great inroads into business, providing companies with valuable new insight into the way they run their operations. And for the individual, AI is simply continuing to do what it has been doing for years: making our lives easier.
But many people today are confused, and even fearful, about Artificial Intelligence. They read stories that make bold claims about the threatening capabilities of AI and how these systems are going to take over the world.
I think this stems from a conflation of general-purpose AI (usually called Artificial General Intelligence, or AGI) and special-purpose AI (usually called Artificial Narrow Intelligence, or ANI).
There is a huge difference between these two forms of AI. Everything that exists today in the world of AI is ANI - narrow in the sense that each application is very good at doing one single thing, such as image recognition or natural language processing. So a system built for speech recognition, for example, would not be able to recognise images.
But it gets even narrower than that - an image recognition system that was trained to recognise pictures of dogs couldn't suddenly switch to recognising cats; it would need to be trained from scratch to do that. Even AlphaGo, DeepMind's system that beat the world's best Go players, wouldn't be able to beat you at noughts and crosses. Programmers would have to wipe its memory and 'retrain' the AI.
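To make that concrete, here is a toy sketch of why a narrow model stays narrow. It uses scikit-learn and purely synthetic numbers standing in for real image features - the breeds, labels and data are all invented for illustration.

```python
# A minimal sketch of narrowness: a classifier trained on one label set can
# only ever answer with those labels, so a new task means a brand-new model.
# The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these feature vectors come from pictures of two dog breeds.
dog_features = rng.normal(size=(200, 8))
dog_labels = rng.integers(0, 2, size=200)          # 0 = labrador, 1 = poodle

dog_model = LogisticRegression().fit(dog_features, dog_labels)

# Whatever we feed this model, it can only answer "labrador" or "poodle" -
# it has no concept of "cat". Recognising cats means collecting cat data and
# fitting a completely new model; nothing learned above carries over.
cat_features = rng.normal(size=(200, 8))
cat_labels = rng.integers(0, 2, size=200)          # 0 = tabby, 1 = siamese

cat_model = LogisticRegression().fit(cat_features, cat_labels)
```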
Our brains, by contrast, can pull together all our thoughts (images, sounds, feelings, smells and so on) to create new thoughts and concepts, which we use to make decisions and create things. This is what we usually call intelligence, but it is more accurately called 'general intelligence'.
Now, it's true that there have been some very recent developments from companies like DeepMind, whose systems have started to transfer what they learn in one computer game to others, but it is still very early days.
And this is just one, very specific environment. The challenge of taking that out into 'the wild', i.e. real life, is of a whole different order of magnitude. It's likely that at some point there will be some very limited examples of AGI, but I am very sceptical that we will ever recreate the full capabilities of our brain.
There are already many examples of ANI being used in business, and by people, on a day-to-day basis. Email spam filters use Natural Language Processing (NLP) and Prediction capabilities to differentiate between normal and spam emails. Voice assistants, such as Siri and Alexa, use Speech Recognition and NLP to understand what we want them to do. Satnavs use Problem Solving capabilities to find the optimum route. Many people don't realise they're actually using AI in these situations because it has become so commonplace.
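To give a feel for how that first example works, here is a minimal sketch of a spam filter in the naive Bayes style. The handful of invented 'emails' stand in for a real training set - real filters learn from millions of messages and far richer NLP features, but the principle is the same.

```python
# Turn each email into word counts, then learn which words signal spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",          # spam
    "claim your free money today",   # spam
    "meeting agenda for tomorrow",   # normal ("ham")
    "can we reschedule our call",    # normal ("ham")
]
labels = ["spam", "spam", "ham", "ham"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(emails, labels)

print(filter_model.predict(["free prize waiting for you"]))   # -> ['spam']
print(filter_model.predict(["agenda for our next meeting"]))  # -> ['ham']
```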
Virgin Trains uses Celaton's AI software to 'read' all of its incoming emails and determine who in the organisation is best placed to deal with each one; Deutsche Bank uses Speech Recognition AI capabilities to 'listen' to all of its dealers' client calls for hints of non-compliance or fraud; PayPal uses AI Prediction capabilities to identify fraudulent transactions almost as they are happening; and Google used AI Optimisation capabilities to reduce the cooling bill in its data centres by 40%.
So, AI has the ability to transform the way that companies do business, but they need to understand what it is capable of and how to implement it. Otherwise, there is a risk of inflated expectations that become unsustainable, leading to an 'AI Winter'. These have happened before: the first ran from roughly 1974 to 1980, and the second from 1987 to 1993, chiefly because 'expert systems' failed to live up to the expectations of the mid-1980s, when corporations were spending billions of dollars on the technology.
In 2018, I believe that four big drivers, which have enabled and accelerated the use of Machine Learning, will ensure that we avoid another AI Winter.
The first driver is the huge increase in the amount of data consumed by people and businesses. It is generally agreed that the amount of data generated across the globe is doubling every two years - on that trajectory, by 2020 around 44 zettabytes (or 44 trillion gigabytes) of data will be created or copied every year. This is all good news for AI, which relies on data to deliver most of its value - without data to train it, most current AI applications would be useless.
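As a rough sanity check on those numbers, here is a quick back-of-the-envelope calculation. The 4.4-zettabyte starting point for 2013 is an assumption on my part (it is the figure usually quoted alongside the 44-zettabyte projection), not something stated above.

```python
# What "doubling every two years" implies, starting from an assumed
# ~4.4 zettabytes in 2013.
base_year, base_zettabytes = 2013, 4.4
doubling_period_years = 2

for year in range(2013, 2021):
    zb = base_zettabytes * 2 ** ((year - base_year) / doubling_period_years)
    print(f"{year}: ~{zb:.1f} ZB")

# 2020 comes out at ~50 ZB - the same ballpark as the 44 ZB figure, which
# itself corresponds to doubling roughly every 25 months.
```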
The second driver is the dramatic drop in the price of computer storage, which we need to hold all of that data. In 1980, one gigabyte of storage cost, on average, $437,500; by 2016, the cost stood at just under 2 cents per gigabyte.
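Those two data points are enough to work out the implied rate of decline - a quick sketch, taking "just under 2 cents" as $0.019:

```python
# Implied annual price decline between the two quoted data points.
cost_1980, cost_2016 = 437_500.0, 0.019
years = 2016 - 1980

annual_factor = (cost_2016 / cost_1980) ** (1 / years)
print(f"Price multiplier per year: {annual_factor:.3f}")        # ~0.62, i.e. ~38% cheaper each year
print(f"Total drop: ~{cost_1980 / cost_2016:,.0f}x over {years} years")
```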
Thirdly, computer processing has become much, much faster. Most people have heard of Moore's Law: Gordon Moore, the co-founder of Intel, predicted in 1965 that the number of transistors that could fit onto a chip would double every year. In 1975, he revised this to doubling every two years. The most recent iteration, coined by Intel executive David House, is that chip performance (as an outcome of better transistors) doubles every 18 months. We are just starting to see this level off slightly, but the advances so far are clearly beneficial to the heavy number crunching that AI systems have to do.
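To put those doubling periods in perspective, here is a short sketch of the multiplier each one implies over a decade:

```python
# Growth multipliers implied by the three doubling periods mentioned above.
def growth_over(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for label, period in [("1965 prediction: double every year", 1.0),
                      ("1975 revision: double every two years", 2.0),
                      ("House: performance doubles every 18 months", 1.5)]:
    print(f"{label}: ~{growth_over(10, period):,.0f}x in 10 years")
```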
Finally, there is our ubiquitous connectivity. Clearly the internet has had a huge enabling effect on the exploitation of data, but it is only in the last few years that the networks (both broadband and 4G) have become fast enough to allow large amounts of data to be distributed between servers and devices. For AI, this means that the bulk of the intensive real-time processing of data can be carried out on servers in data centres, with the user devices merely acting as a front end. That has enabled completely new ways for AI to be delivered.
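As a rough illustration of that pattern, here is a minimal sketch of the 'thin device, heavy server' setup. Flask, the /transcribe endpoint and the predict() placeholder are all assumptions made for illustration - they stand in for whatever framework and model a real service would use.

```python
# The device just POSTs raw data to an endpoint; the compute-hungry model
# runs on a server in the data centre.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(audio_bytes: bytes) -> str:
    # Placeholder for the real speech-recognition model running server-side.
    return "transcription goes here"

@app.route("/transcribe", methods=["POST"])
def transcribe():
    transcript = predict(request.get_data())
    return jsonify({"transcript": transcript})

if __name__ == "__main__":
    app.run()  # the phone or smart speaker only needs to send HTTP requests
```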
So, I'm confident that the forces in play will ensure that the excessive hype around AI will not contribute to its downfall. There is now enough momentum, and there are enough factors in AI's favour, to ensure its ongoing success.