What I Talk about When I Talk about Artificial Intelligence
I studied Computer Science in the 1980s. I was fascinated by what computers could do. I found hardware all a bit boring (and separate – my tutor once said to me that the design principles of software applied whether the infrastructure was silicon chips and electronic networks or paper cups with string between them), but software fascinated me. Programs, algorithms, data. It was like I had found my intellectual home. Making programs that did something interesting, fun, beautiful and/or valuable was like heaven to me. Where an essay or a poem felt somewhat impotent to me, a computer program, viewed in the right way, could be both beautiful and powerful – because it could do something. I remember almost crying when I first saw other people having fun playing a computer game that was entirely of my creation. I found the field of Artificial Intelligence (AI) particularly fascinating.
The height of my achievement in terms of AI was, in 1988, making a program that translated commentaries on chess games between English, French and Dutch using an interlingua approach, and a ‘wait and see’ parser, in the language Lisp using a weird Unix workstation called a Whitechapel MG-1. And it did it pretty badly.
Hence I feel a bit intimidated talking about AI, because the digerati are now spending squillions on it, charismatic digital leaders like Elon Musk are making pronouncements on it, high budget, high concept movies are based on it, hundreds of books are being published on it, and the psychological, ethical, legal, employment and other macro implications of it are all being pored over by the great and the good.
However, I still kind of love AI, have a lot of chats inside and outside work about it, and had my passion reignited on a study tour of Silicon Valley and Seattle, so I am going to talk about it a bit here. I make no claims of completeness, currency or even 100% accuracy (in other words: caveat lector), but I hope you find these reflections interesting and useful:
What is Artificial Intelligence?
AI actually refers to two closely related things, but with slightly different goals.
- Creating Clever Machines: Computers that do intelligent things, such as recognize a face from a photo, win a game of chess, or work out how to get customers to buy more or pay more.
- Creating Human-like Machines: Computers that behave like humans. This second meaning of AI breaks down into (a) emulating human thought processes (this is where neural networks came from) and (b) producing the same outcomes as humans would. The Turing test, named after the awesome Alan Turing, is a test of the latter. (A computer system is deemed intelligent by the Turing Test if a human talking to it cannot work out if she is talking to another human or a computer.)
Of course these two are related, but they lead to different emphases. If your goal is purely #1 – creating clever machines – you may settle on a method that suits the machine: one that works better than, and is nothing like, the human equivalent.
#2 is a much more specialized goal, and leads to more touchy-feely discussions about whether machines can have emotions, legal rights, etc. #1 is much more widely applicable, and functional, and interests me more. So, for the rest of this article, I will talk about #1.
Perhaps most importantly, we should be clear that definition #1 is unavoidably ambiguous. If we define AI as making machines clever, we are really just deferring the definition of AI to the definition of the word clever. (cf. Jacques Derrida’s notion of definition as an endlessly deferred web of difference.) For example, if we were talking about a person being intelligent, we would almost certainly not be talking about their ability to see or hear things, or talk, but computer vision and natural language processing normally fall under the definition of AI – at least for the moment – because they are hard things for a computer to do. This ambiguity isn’t necessarily a huge issue, but (a) we must recognize it is there, so there is no perfect, agreed-upon definition of what’s in and what’s out and (b) we must realize that as a consequence, AI is “a bag of monkeys”, i.e. a group of very dissimilar things that are only connected by the fact that they are algorithmically hard for a computer to do.
It is also worth noting that AI has multiple names, like Machine Intelligence (which the company I work for, Leading Edge Forum, prefers), Cognitive Computing and many others. And there are many subcategories of AI, like machine learning and deep learning. It is also completely unimportant, and irrelevant, to note here that when I started working on AI in the 1980s, the acronym was much more associated with Artificial Insemination – endless hours of laughter for an immature undergraduate.
And we should also note that there is a big overlap between AI and robotics. AI is kind of the brain of the thinking machine, robotics is the body. They can do powerful things together. But a lot of the work in robotics is about materials science, electronics, etc. – so it is a separate, but highly connected domain. Unfortunately the world has started using the word robot for software too. I find this confusing and unhelpful, and come the revolution, people doing this will be the first to feel my wrath.
Finally, in this section, I wanted to mention crowdsourced intelligence. If our goal is to get computers to do clever things, why should we not make use of humans in doing so? We are seeing the emergence of crowdsourced microtask marketplaces – i.e. places where someone will do something very small for you for $1 or less. More generally, we can harness the power of a few billion people to come up with very intelligent solutions to any problem you might have, given the mechanism to find them, incentivise them economically, and validate their answers. Clearly this bumps up against the notion of the intelligence being ‘artificial’ or ‘machine’, but maybe that’s OK?
What does AI allow us to do?
I believe AI broadly allows us to do three things:
Interact with humans using natural interfaces. My generation have learned to talk the language of computers – i.e. sitting at a computer, using a keyboard and mouse, staring at a screen, using a printer, using specific key combinations and programming languages. More and more, computers interface with us in our world. Think Alexa, Siri, Cortana. Speaking to and listening to us, watching us and being informed by our context, gestures, physiological state and brain patterns. In a pure sense, I think this is not really AI, just hard human-computer interface work. But at least for now it is called AI. Having said that, it is also worth noting that these natural interfaces are often inextricably linked with other aspects of AI. Intelligent Agents that interpret and act on your intentions are a good example. (Think of an agent that can hear and act on: “Book me a flight to see mum and dad in my Christmas holidays.”)
Further automate things. More and more sophisticated algorithms connected to more and more types of data are able to control more and more stuff using the Internet of Things, and make ‘intelligent’ decisions without human intervention. Depending on the domain, this can look very different. Three examples:
- Program trading in financial services, allowing algorithms to exploit market imperfections at very high speed to create arbitrage value.
- Factories, cars, trains, and cities that can sense problems and heal themselves.
- Drug delivery systems that can sense physiological issues and adapt (e.g. artificial pancreases controlling insulin dosages).
But essentially, this is about doing what we could do already, cheaper, faster or in some other way better through the use of machines. It is reminiscent of that dystopian joke: “In the future, each factory will be run by one man and one dog. The man’s job will be to feed the dog. The dog’s job will be to keep the man away from the equipment.”
Infer things and generate new insights. This category refers to doing things we could not do before by making inferences based on artificial intelligence. Finding patterns we never knew existed, understanding motivations we had never identified, etc. This isn’t perfectly separate from the previous category, but feels a bit different in flavour, and when we come to the ‘challenges’ section later, this difference is important. Imagine a combination of data and intelligence that worked out from your buying patterns that you were pregnant, and suggested you attend a meeting with other pregnant women who might also appreciate some of the goods you have been buying. Or intelligence that facilitated the equivalent of precrime (as described in the story and film Minority Report), where risk-reducing action could be taken against those deemed likely to commit a crime.
How does AI work?
Because the definition of AI is inescapably ambiguous (even when we limit it to the Clever Machines version), there is also no clear delineation of which tools and methods are considered AI as opposed to, say, simple analytics. Nevertheless, there are a couple of techniques that are really important.
I. Neural networks. The thought processes of a human brain seem to work as a network of neurons. There is a layer of inputs that are connected, through a bunch of intermediate layers, ultimately to an output layer. For example, if the task is machine vision, the inputs might be all the different dots of colour that hit the back of the retina/camera, and the outputs might be a term describing the subject of the picture, e.g. ‘fruit bowl’. Training a neural network consists of repeatedly presenting it with a large number of (input, output) pairs (in this case pictures and descriptions), and letting the network adjust the weights of its connections until they produce the right outputs. Neural network performance has improved dramatically because more powerful computers can handle much larger networks.
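To make the training loop concrete, here is a minimal sketch in Python. Rather than pictures of fruit bowls, it uses a toy problem (learning the XOR function) so it fits in a few lines; the layer sizes, learning rate and iteration count are my own illustrative choices, not anything canonical. The idea is the same as in the text: show the network (input, output) pairs over and over, and nudge the connection weights towards producing the right outputs.

```python
import numpy as np

# A tiny two-layer network. The (input, output) pairs below play the role
# of the (picture, description) training pairs described in the text.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

W1 = rng.normal(size=(2, 8))   # weights: input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # weights: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: push the inputs through both layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: adjust each weight in proportion to its share of the error.
    err_out = (y - out) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ err_out
    W1 += 0.5 * X.T @ err_h

# After training, the outputs should sit close to the targets 0, 1, 1, 0.
print(np.round(out, 2).ravel())
```

The "adjusting the weights" step here is backpropagation with gradient descent, which is the standard way the training process described above is realized in practice.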
II. Genetic algorithms/evolutionary algorithms. These are AI strategies based on the Darwinian idea of natural selection. Here we create the structure of an algorithm to complete an intelligent task, such as predicting the outcome of a football match from information about the two teams. For example, Team1LikelihoodOfWinning = A*Team1LeaguePosition + B*Team2LeaguePosition + C*%Team1PlayersFit + D*%Team2PlayersFit. Then the parameters of the equation (A,B,C,D) become the DNA of each algorithm. We create a population of, for example, a few hundred such algorithms with different values for A, B, C and D. We test their predictive ability against known results (training data), then allow the best performing algorithms to mate (swap As, Bs etc.) and mutate (have random changes in As, Bs etc.) We repeat this process for many, many generations, hopefully evolving to a very fit population of high-performing algorithms.
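The football example above can be sketched in a few dozen lines of Python. Everything concrete here is an illustrative assumption: I invent a hidden "true" formula to generate synthetic training data, and the population size, number of survivors, mutation rate and generation count are arbitrary choices, not tuned values. The point is just to show the mate/mutate/select loop the text describes.

```python
import random

random.seed(42)

# Each algorithm's "DNA" is its four parameters (A, B, C, D) from the formula:
#   score = A*Team1LeaguePosition + B*Team2LeaguePosition
#         + C*%Team1PlayersFit   + D*%Team2PlayersFit
TRUE = (-0.5, 0.5, 1.0, -1.0)  # hypothetical ground truth used to fake training data

def predict(dna, match):
    return sum(w * x for w, x in zip(dna, match))

# Synthetic training matches: (pos1, pos2, fit1, fit2) with known outcomes.
matches = [(random.randint(1, 20), random.randint(1, 20),
            random.random(), random.random()) for _ in range(100)]
targets = [predict(TRUE, m) for m in matches]

def fitness(dna):
    # Lower total squared prediction error on the training data = fitter algorithm.
    return -sum((predict(dna, m) - t) ** 2 for m, t in zip(matches, targets))

def mate(p1, p2):
    # Crossover: each parameter (A, B, C or D) comes from one parent or the other...
    child = [random.choice(pair) for pair in zip(p1, p2)]
    # ...and occasionally mutates by a small random amount.
    if random.random() < 0.3:
        i = random.randrange(4)
        child[i] += random.gauss(0, 0.1)
    return tuple(child)

# A population of random algorithms, i.e. random (A, B, C, D) values.
population = [tuple(random.uniform(-2, 2) for _ in range(4)) for _ in range(100)]

for generation in range(150):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]  # only the best performers get to breed
    population = parents + [mate(random.choice(parents), random.choice(parents))
                            for _ in range(80)]

best = max(population, key=fitness)
print([round(w, 2) for w in best])  # hopefully drifts towards the hidden TRUE values
```

Because this particular fitness landscape is simple (a linear model with a quadratic error), evolution finds good parameters quickly; the same loop structure applies unchanged to much nastier problems where no direct solution exists.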
There are other techniques that are typically categorized as AI, and more are sure to emerge, but the above two ‘families’ of approach are important ones.
What challenges does AI bring up?
AI clearly has some amazingly positive outcomes, potentially making us more productive, happier, richer, safer, etc. But if we put our ‘glass half empty’ hat on for a second, what are the scary bits? Here are a few:
1. You don’t know when you’re using a computer. Computers are embedded in everything we do, constantly talking away even when we are not consciously using them, and this creates a problem: the computer may be doing something we aren’t aware of and don’t want, such as tracking where we are or what we are saying. In addition, if computers can behave as humans, we may think we are interacting with a human when in fact we are interacting with a computer. This seems rather undesirable.
2. Democratizing AI also has a dark side. Although many people have concerns about the government tracking them and having intelligence to understand a lot about their lives, how much worse is it if that kind of capability is available to all companies, large or small, your neighbours, your friends and criminals? AI delivered over the cloud by the digerati means that the power of AI is increasingly in the hands of everyone who wants it.
3. Society depends on ignorance. We might argue that society to some extent depends on us not knowing certain things. For example, we might argue that the insurance industry somewhat depends on people not knowing their exact risk profile compared to others. Who would want to pool risks with others who are more risky? In a more and more transparent world, more and more data is available to everyone. In a world of widely available AI, not only is more data available, but more insight. This may make for a more precarious society.
4. Can we trust the output of AI? An issue that fascinates me is that even if you know an artificially intelligent system seems to make good decisions, you may not want to trust or rely on it if you don’t know why it makes those decisions. Neural network-based systems typically suffer from a lack of inspectability. In other words, it is very hard to work out why a neural network makes the decisions it does, because it is made up of a very large number of neurons. A purely genetic algorithm-based solution, on the other hand, may go through a very complex evolutionary process, but may end up with a rather simple equation that is more inspectable. Of course, a purely genetic algorithm-based solution is unlikely to have the predictive power of a massive neural network solution. There are initiatives to help with the inspectability of neural networks, but the problem still remains.
In summary, the AI journey continues apace, and is very exciting, both from an intellectual point of view and from a practical one. We need to have an educated, nuanced view of the types of AI being applied, and of the (intended and unintended) outcomes of applying them. I am looking forward to watching and participating as AI evolves.