Most AI isn't really that smart
The easiest place to begin our discussion of AI is by exploring the world of robots, and perhaps the most obvious starting point is the depiction of robots through the years by our friends in film and TV. In 1927, Fritz Lang's silent classic Metropolis rocked our world with a very sexy female(-looking) robot named "Hel." The plot has the inventor transform the robot into human form -- a beautiful woman. This robot-as-human or humanoid-robot theme continues to pervade movies and TV to this day, with Blade Runner (1982), The Terminator (1984), The Matrix (1999), Star Trek: The Next Generation (1987), RoboCop (1987), A.I. (2001), Bicentennial Man (1999), Westworld (the 1973 version), The Stepford Wives (1975), etc. Then we have the all-knowing but invisible intelligences, like HAL in 2001: A Space Odyssey (1968), Eagle Eye (2008) or Her (2013). Next come the fun robots: Robby in Forbidden Planet (1956), C-3PO and R2-D2 in Star Wars (1977), WALL-E (2008) and Chappie (2015). And last but not least, the truly frightening robots, like the indestructible and all-powerful Gort in The Day the Earth Stood Still (1951).
Today, on the trivial side, we have the feline nemesis Roomba to sweep our floors, Domino's is testing a pizza delivery robot, and there is even a new security robot being tested in San Francisco to patrol the streets. On the more practical side, we have cars that park themselves in tight spots, stop themselves faster than humans can react, correct our wayward steering should we drift out of our lane, and, very soon, will drive themselves. Automobiles, of the self-driving and traditional variety alike, are built using robots; Amazon orders are filled using robots; and bombs are defused using robots. The list seems endless and ever-expanding. But not every robot uses artificial intelligence.
So let's define AI. True artificial intelligence is more correctly known as machine learning. Most of us believe that AI comes from a team of really smart developers who tell a machine how to behave in one specific situation, then hundreds of situations, then thousands of situations -- and indeed, that's pretty much what is happening today with autonomous vehicles. Situation after situation is fed into the computer driving the car, and solutions to those situations are programmed in. Over time, more situations will occur and be solved, with more solutions being fed in. After all, there should be a finite number of driving situations. But in this example, is the machine actually learning on its own, or is a team of human developers doing the learning and then providing the knowledge back to the machine? The latter, I believe. This type of AI is what powers most robots in use today: a finite number of situations with finite, pre-programmed responses. While this may seem like magic to most of us, it is NOT machine learning. The people are doing the learning and then feeding new knowledge into the machine.
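To make the distinction concrete, here is a minimal sketch in Python of the kind of pre-programmed, rule-based system described above. The situations and responses are hypothetical examples, not anyone's actual driving software: every answer was learned by humans and typed in ahead of time, and nothing here learns.

```python
# A rule-based "AI": humans did the learning, then wrote the answers down.
# The situations and responses below are hypothetical illustrations.
DRIVING_RULES = {
    "pedestrian_in_crosswalk": "brake_to_full_stop",
    "traffic_light_red": "brake_to_full_stop",
    "traffic_light_green": "proceed",
    "vehicle_braking_ahead": "slow_and_increase_gap",
}

def respond(situation: str) -> str:
    # The machine only "knows" what was pre-programmed. A situation the
    # developers never anticipated falls through to a safe default.
    return DRIVING_RULES.get(situation, "alert_human_driver")

print(respond("traffic_light_red"))      # brake_to_full_stop
print(respond("deer_crossing_at_dusk"))  # alert_human_driver -- never programmed
```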
So what is real machine learning? Put quite simply, machine learning occurs when the machine uses all the existing sources of knowledge at its disposal and draws its own conclusion. Just as with people, that conclusion may change over time as the machine learns more and evaluates the correctness of its "opinion" based on empirical data.
Let's think about this a bit. We create a machine/computer/software/neural network with hundreds of layers; we teach it to read certain kinds of inputs or specific types of data (like millions of medical records or billions of stock trades); we give it access to all that data plus general data from the world at large; then we give it one situation and ask it, "What do you think?" It turns out that the machine's answers (or guesses, or diagnoses) are often BETTER than the human experts'. And (here comes the good part) the machine keeps getting better and better, learning more and more over time, with access to more and more data. And (perhaps most importantly) the machine will very dispassionately assess its own shortcomings and make adjustments and improvements in its own logic… every time! Well, hot dog! What are we waiting for? This is THE answer! This could save mankind! Let's get moving -- build these suckers as fast as possible. Nellie, bar the door! … Not so fast.
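Mechanically, that "dispassionate self-assessment" is just a training loop: the machine measures its own error against the data and nudges its internal parameters to reduce it, over and over. Here is a toy sketch in plain Python -- the data points and learning rate are invented for illustration, and real systems do this across millions of parameters rather than one:

```python
# Toy machine learning: fit y = w * x by repeatedly measuring error
# and adjusting. The data and learning rate are illustrative only.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # roughly y = 2x

w = 0.0              # the machine's initial "opinion"
learning_rate = 0.01

for step in range(1000):
    # Assess its own shortcomings: the gradient of the mean squared error.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Make an adjustment in its own logic: move w to reduce the error.
    w -= learning_rate * grad

print(round(w, 2))  # ~1.99 -- a conclusion drawn from data, not programmed in
```

No human fed in the answer; the machine converged on it by repeatedly grading itself.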
Most IT types are familiar with something called a black box. It's not an actual box; it's software. A developer creates a piece of software that has rules about how data is passed into it (the box) and, if the rules are followed, how transformed data comes out of it (the box). For the most part, no one but the creator actually knows what happens inside the box, so we all simply call it a black box. Then the creator retires or quits or moves to Fiji or whatever. Now absolutely NO ONE in the organization knows what is actually happening inside the box, but we continue to use it because it continues to fulfill its original intended purpose. And, secretly, the IT folks say a silent prayer each day that "the box" will continue to work.
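In code, the situation looks something like the sketch below -- a hypothetical example, where the function, its record format, and its scoring logic are all invented. All the organization really retains is the interface contract: what goes in and what comes out.

```python
# A "black box": the organization knows the contract, not the contents.
# This function, its inputs, and its logic are hypothetical illustrations.
def the_box(record: dict) -> float:
    """Contract: a customer record goes in; a score in [0, 1] comes out.
    What actually happens inside was only ever understood by its author."""
    # ...inscrutable transformations the original developer left behind...
    h = sum(hash(str(value)) % 97 for value in record.values())
    return (h % 100) / 100.0

# We keep calling it because it keeps working -- and we pray it continues.
print(the_box({"customer": "A. Smith", "balance": 1200}))
```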
Here's the issue: the computer scientists who have designed and created these miraculous machine learning devices don't actually know how they work. Yes, you got it… just like the black box. They don't know how the machine learns or makes decisions or provides recommendations. And it turns out the machine can't really tell us either.
"We’ve never before built machines that operate in ways their creators don't understand. How well can we expect to communicate -- and get along with-- intelligent machines that could be unpredictable and inscrutable?" notes writer Will Knight in his recent article on the black box problem in the MIT Technology Review.
Think about us. Humans. Can you explain the exact process that you go through when making a decision? Sometimes, perhaps, but often we use "gut instinct" or act on a "hunch" or use the ubiquitous "I just know" when describing how we make decisions. Believe it or not, the same is true for these human-made machines. There is nothing in these systems, at the moment, that explains how they reached a given decision. They just did.
It turns out that this lack of "explainability" may be a showstopper. With millions of dollars or someone's life on the line, humans are going to demand an explanation. If a machine turns down someone for a credit card, a car loan or a mortgage, isn't that person entitled to know why? Machine learning systems, as they are commercialized, will not remain the exclusive purview of computer scientists, and as these systems begin to affect our everyday lives, lawmakers will have to weigh in on whether an explanation is legally required. Let's up the stakes: what happens if a machine turns down your child for a life-saving organ transplant? Wouldn't you be screaming at the top of your lungs to know why? And we haven't even mentioned the applications of AI to the military. If at some time in the future (cue the Hollywood scriptwriters) a machine is going to make a key military decision, all of us, especially our leaders, must have a very clear explanation of who, what, where, when, why and how that decision was reached.
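To see why explainability is tractable for some models and hopeless for others, consider a deliberately simple sketch -- the feature names, weights, and threshold below are invented for illustration. A transparent linear credit model can emit its "reasons" directly from its own structure; a deep network with millions of entangled weights offers no such direct readout, which is exactly the black box problem above.

```python
# Reason codes from a transparent (linear) credit model.
# Feature names, weights, and the threshold are invented for illustration.
WEIGHTS = {"income": 0.4, "years_at_job": 0.2, "late_payments": -0.6}
THRESHOLD = 0.5

def decide(applicant: dict) -> tuple[bool, list[str]]:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    # The explanation falls straight out of the model's structure:
    # the two features that pushed the score down the hardest.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return approved, reasons

ok, why = decide({"income": 0.9, "years_at_job": 0.5, "late_payments": 1.0})
print(ok, why)  # False ['late_payments', 'years_at_job'] -- a concrete "why"
```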
Ok. Take a breath. Let's lower the blood pressure a bit. What will you expect from "Siri" in 2020? Restaurant recommendations? A scolding for not moving enough? Blood sugar readings? Oxygen saturation readings? Health warnings for 54 key biomarkers? And, based on certain health feedback, will Siri call 911 all on her own, without asking us? No question, these various opt-in personal services could improve the quality of our daily lives without proving too disruptive.
But, make no mistake, computers have already become exponentially better at understanding the world in ways that will disrupt us and disrupt whole industries. This year, a computer (Google DeepMind's AlphaGo) beat the best Go player in the world, roughly 10 years earlier than experts expected. Facebook now has pattern recognition software that it claims can recognize faces better than humans can. And some futurists predict that by 2030, computers will be more intelligent than humans.
In the near term, look no further than our century-long relationship with the car to see how disruptive machine learning will be. The scattered demonstrations of self-driving cars we saw in 2017 will multiply in 2018. A mere two years after that, pundits predict, the entire auto industry will be disrupted. Many car companies could go bankrupt. At some point, we won't own cars but will instead summon them with our phones -- except it won't be an Uber driver showing up at our location but a driverless Uber car. Teenagers won't get driver's licenses. Vehicular fatalities will shrink to 200,000 a year from the 1.2 million lives claimed annually today. Car insurance companies will slowly disappear. City parking lots will become parks. Can you imagine this new world? As soon as you start playing out what machine learning might do, the world becomes a very different place.
Machine learning is here right now. Today. It has unprecedented potential to improve our lives immeasurably. But that potential will never be fully realized unless humans can learn to actually trust a machine with their lives. Can you do that? Will you?