Artificial Intelligence: Doomsday Machine or Chess Grandmaster?
I need to keep myself up to date on trends in technology. I mean, I’m aware I’m a massive geek, but it’s also part of my job. I read a lot about issues that are trending in my industry, and recently I have been coming across quite a few white papers discussing Artificial Intelligence and the doomsday scenarios that advanced AI and Machine Learning are meant to bring about. Some quite prominent names have recently been speaking openly about their concerns over the direction the technology is going. They seem to be concerned, fundamentally, that Artificial Intelligence will overtake Human Intelligence.
I have a friend, Cameron, a writer who has been working with me on these articles. He is a huge fan of the Terminator movies. The current rash of doomsday scenarios around AI technology is really paying dividends for him. Every chance he gets he starts referencing Skynet, the massively advanced AI program that the US government put in charge of their defence systems. All the problems occurred when the program became self-aware and decided it didn’t need to take orders anymore. It then, obviously, created the Terminators, invented time travel, took over the world and wiped out humanity.
Whether Cameron is actually convinced the machines are going to rise up and crush humanity beneath their metal boots or not, those are the discussions we often end up having. I suspect he doesn’t really believe that is going to happen, but he likes playing the role of the pessimist. I have another opinion, though, and believe AI, automation and robotics are going to lead to a world far more like WALL-E, where self-aware robots will fall in love while dutifully serving our every need. Maybe I’m an eternal optimist, or maybe I’m spending too much time with Benjamin, one of my bosses, and I’m starting to pick up some of his personality traits.
Either way, Cameron and I are often able to represent the two different sides of the argument on AI: Cameron’s pessimistic view that the world is going to end, and my far more realistic view that these advances in technology are going to end up being of enormous benefit to us. They are going to make our lives better in ways even I can’t imagine just yet.
The Current State of AI
There are technology publications out there that are taking the slow, steady progress the industry is making around AI and extrapolating it rather aggressively. These magazines are predicting that Artificial Intelligence will eventually render humans redundant. It will enable advanced automation and robotics, and leave us behind.
But I just can’t bring myself to subscribe to that line of thinking. There is no doubt that the technology around artificial intelligence is advancing. Those working in that space have mastered the art of getting machines to repeat procedural, routine tasks. Benjamin takes a selfie on his smartphone (often), and there are AI algorithms in place that will detect faces and prompt him to tag his friends on social media. He can then ask Siri or Cortana for directions. When I got to work this morning there was a notification on my phone that seemed to suggest it knew I was parked at work, and that in 8 or 9 hours I was going to want to go home.
Now, I’m not going to deny that these services are pretty clever. The fact my phone can make the distinction between being at an address in Paris and being ‘at work’ is pretty cool. I can ask my phone to play some thinking music, and it knows I want to listen to Metallica. I get back in my car at the end of the day and it knows I’m going home, so it plans my trip, including traffic warnings.
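To show how simple that kind of cleverness can be under the hood, here is a toy sketch of a pattern-based guess. It is purely my own illustration with made-up data, and it is not how any real phone assistant works; the point is just that a history of past behaviour is enough to produce a plausible-looking prediction.

```python
# Toy sketch (my own illustration, not any real assistant's logic):
# guess when I'll want to head home by averaging past departure times.
from datetime import time
from statistics import mean

# Hypothetical history of departure times, stored as minutes past midnight.
past_departures = [17 * 60 + 5, 17 * 60 + 20, 16 * 60 + 55, 17 * 60 + 10]

def predicted_departure(history_minutes):
    """Predict today's departure as the average of past departures."""
    avg = round(mean(history_minutes))
    return time(hour=avg // 60, minute=avg % 60)

print(f"Suggested 'time to head home' reminder: {predicted_departure(past_departures)}")
```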
Decisions?
Those examples really do seem to indicate that there is a level of decision-making intelligence at work (and I’m not even talking about the machine learning that is advancing Tesla’s Autopilot program). And if that’s the case, then can’t that intelligence make more, and better-informed, decisions over time? And if we’re going to accept that those things are possible, isn’t it logical to expect that this intelligence, or some other AI even more advanced than Siri (ha, as if that’s possible), will eventually learn enough to supplant us?
Well, ultimately I don’t think that is a logical progression at all. I can understand where the anxiety comes from, but it’s not actually based on anything real (much like a lot of anxieties, I believe). There’s a concept called confirmation bias, which describes the situation where someone (even an intelligent and articulate person) will pick out the facts that support their argument and ignore those that don’t. When someone (I’m looking at you, Cameron) suggests that AI could even develop self-awareness, they are ignoring the simple fact that computers, even sophisticated ones, can only really do what we tell them to do.
Human Intelligence vs Artificial Intelligence
You see, there is a distinct difference between ‘Human Intelligence’ and ‘Artificial Intelligence’. It’s the reason for the label ‘Artificial’. Let’s examine that idea a little, and see if it doesn’t satisfy you, and Cameron, that Judgement Day isn’t coming and the machines are not going to supplant us and take over the Earth.
We’ll start by looking at the concept of chatbots. Chatbots are disrupting the way businesses work and communicate. Customer support, e-commerce transactions and responding to feedback are all functions that are being supported by chatbots. A chatbot is an example of AI: a program that conducts a conversation with a human, usually via text, chat or messaging services. Chatbots respond to questions, and can be taught to interpret conversational cues to understand the subtleties and subtexts of human speech. The keys to this concept, however, are the phrases ‘respond’ and ‘can be taught’.
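To show just how literally ‘taught’ should be read, here is a deliberately tiny, rule-based chatbot sketch. It is my own illustration, not any particular chatbot product or framework, and real chatbots are far more sophisticated, but the principle holds: every reply it can give is one a human wrote for it in advance.

```python
# A deliberately tiny, rule-based chatbot sketch (my own illustration,
# not any particular framework). It can only ever answer with replies
# a human has written for it ahead of time.
taught_replies = {
    "opening hours": "We're open 9am to 5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def respond(message):
    """Return a canned reply if the message contains a known keyword."""
    for keyword, reply in taught_replies.items():
        if keyword in message.lower():
            return reply
    # Anything it wasn't taught about falls straight through.
    return "Sorry, I don't know how to help with that yet."

print(respond("What are your opening hours?"))
print(respond("Can you write me a poem?"))  # it can't; no one taught it to
```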
Because artificial intelligence can’t create anything. No matter how sophisticated the programming, no matter how advanced the machine learning algorithms are, AI responds to what it is taught.
Here’s an example that will clarify my point.
Programmers in Paris have recently taught a machine to paint in the style of Rembrandt. They conducted substantial analysis of Rembrandt’s works, including his choice of colours, the manner in which he mixed his paints, the style and pattern of his brushstrokes, the measurements between the physical features of his subjects, and even the depth of the paint that lay on the canvas. Then they fed all the results of their analysis into a very advanced AI computer (running software they had developed themselves), linked it to a robot and let it paint something. At the end of the exercise there was a new painting, painted in the same style as Rembrandt. The programmers in charge of the project determined that the machine had created something new: from nothing, it had painted a unique ‘Rembrandt’, and any expert in his art would be hard pressed to identify it as anything other than that.
Except that, in making that declaration, they have completely discounted the amount of work, experimentation and measurement they completed in preparation for the project. They seem to have forgotten that a human intelligence designed the machine, built the parts that mixed the paint and the arm that held the brush. All the machine really did was take the information they had taught it, which was exhaustive, and respond to the directions they had given it, to create a stylistic copy of something that a human intelligence had once created independently of anyone else.
I’m not completely blind to history. I know someone once taught Rembrandt how to mix paint, how to hold a brush, how to prepare a canvas. But no one taught him how to paint like that, how to choose his subjects, how to create the life in his paintings that came from him experimenting and developing his skill.
Now, I came home from work one day this week and my children had found some paints and decided to experiment and develop their artistic skills on the wall of my office. No one taught them how to do that either. And yes, that was infuriating, but it supports the point I am trying to make. Human beings consciously create new things. We invent ideas, concepts and models. We explore our environment, our ideas and our consciousness. But artificial intelligence doesn’t do that.
The Limits of AI
An artificial intelligence is limited to doing things that are based on the data and instructions that are programmed into it. It can work through complex algorithms and crunch enormous numbers. It can even make decisions based on historical patterns (a concept Google, eBay and Amazon are using at a very high level). But it can’t make conscious choices without the external direction initially provided by its programmers.
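Here is what a ‘decision based on historical patterns’ can boil down to, stripped to the bone. This is purely my own toy illustration with made-up shopping baskets, not how Google, eBay or Amazon actually do it, but it makes the limitation obvious: ask it about something that never appeared in its history and it has nothing to say.

```python
# A minimal sketch of 'decisions based on historical patterns' (my own
# illustration, not any real recommender system): suggest whatever item
# was bought alongside a product most often in the past.
from collections import Counter

# Hypothetical purchase history: each entry is one customer's basket.
past_baskets = [
    {"guitar", "strings"},
    {"guitar", "strings", "tuner"},
    {"guitar", "capo"},
]

def recommend(item, baskets):
    """Suggest the item most often bought together with `item`."""
    co_purchases = Counter()
    for basket in baskets:
        if item in basket:
            co_purchases.update(basket - {item})
    # No history involving this item means no recommendation at all.
    most_common = co_purchases.most_common(1)
    return most_common[0][0] if most_common else None

print(recommend("guitar", past_baskets))    # 'strings' - it has seen this pattern
print(recommend("drum kit", past_baskets))  # None - it was never taught about drums
```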
My children decided they were going to paint on my wall. My laptop didn’t. It is this simple but subtle understanding of how AI works that gives me confidence in the concept. I am not afraid of AI, and I never will be, because of this idea. Yes, an AI can work on complex ideas, and manipulate data to uncover trends in all sorts of fields of mathematics. However, it can only make a decision about matters that are familiar to it. It can only react to information that has been provided ahead of time.
So yes, my PlayStation can get the best of me when I’m trying to beat some of the more difficult platforming sections in Assassin’s Creed, but at the end of the day only I can open my window and throw the machine out of it when I get fed up.