Why it's so hard to think straight about AI - Part 1
Disclaimer: The opinions expressed in this article are those of the author and do not represent the views of his employer or any other organization, institution or individual.
Until recently, the word "singularity" didn't mean much to regular folks. To science geeks it is the point at the heart of a black hole - a region of spacetime where the known laws of physics break down. But during the past decade or so, "singularity" has come to mean something quite different: it is the point in the future when Artificial Intelligence (AI) becomes smarter than human beings. In this post, I will be talking about the possibility and implications of a singularity, but also about why it's so hard to wrap our heads around the idea.
Weak AI versus Strong AI
We'll start with some definitions. Weak AI refers to AI deployed on a narrow task at which it excels, e.g. a chess- or Go-playing AI. More useful applications include detecting glaucoma and recognizing early markers of breast cancer, tasks at which deep learning already comes close to matching the leading specialists.
Strong AI, a.k.a. Artificial General Intelligence (AGI), would be an artificial mind that is impossible to distinguish from a human mind - in other words, an AI that can pass the Turing Test. Predictions about the AI singularity refer to Strong AI, because Weak AI is something we already have.
Another useful distinction is the one between Unsupervised and Supervised AI. An unsupervised AI can detect previously unknown patterns in data by grouping (or "clustering") data points without any labels or guidance. This goes against a common intuition some of us have, which is that an AI is only capable of working towards goals it has been explicitly programmed with.
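To make this concrete, here is a minimal sketch of unsupervised clustering, assuming scikit-learn is available. The data points are synthetic and invented purely for illustration; nothing here comes from a real application.

```python
# A minimal sketch of unsupervised learning: k-means clustering.
# The algorithm is never told what the groups are - it discovers them
# from the geometry of the data alone.
import numpy as np
from sklearn.cluster import KMeans

# Two fuzzy blobs of 2-D points, with no labels anywhere.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_[:10])        # cluster assignment discovered for each point
print(kmeans.cluster_centers_)    # the two group centres it found
```

The point of the sketch is simply that the algorithm recovers the two groups without ever being told they exist.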
Harbingers of doom?
Futurists differ in their prophecies about when humanity can expect the technological singularity to hit, but considering the uncertainties involved, the predictions are surprisingly consistent. For instance, Ray Kurzweil, Google's Director of Engineering and perhaps the best-known futurist, puts the date at 2045. Many are skeptical, asserting that AGI is a fantasy or that we should be looking at a horizon not of decades but of centuries.
Experts also have very divergent attitudes towards the arrival of machine super-intelligence. Elon Musk and Stephen Hawking have warned of a dystopian future in which AIs eliminate or enslave the human species in blind pursuit of whatever goals they happen to be programmed with. After all, we should not expect AI, unlike Asimov's benevolent robots, to be subservient to human needs unless it is hard-wired to be that way. Other experts, like Michio Kaku, welcome a new episode in the evolution of humankind in which we merge with machines to supersede the limitations imposed by our biology, effectively becoming amortal.
Industrial Revolution v3.0
While some of the projections about an AI singularity may be unrealistic for the time being, the prospect of losing jobs to AI is a very real one. What types of jobs will be the first to be replaced by AI? It does not depend on whether the job is physical or "brain work", or even on whether it involves interacting with humans. It has more to do with how well-defined the goal is and how predictable the tasks are. Drivers, accountants, legal assistants, investment analysts, customer-service representatives and language translators are expected to be among the first to be replaced. Of course, as with every previous wave of technology substituting for labour, new jobs are expected to be created this time as well - jobs that simply didn't exist in the old economy (how about "AI Developer"?).
But there are those who claim that this time it will be different. An economy dominated by AI may end up creating a society divided into haves (people who have the know-how to build AI) and have-nots (people who have to work for them). We can already see precursors of such an economy in the armies of delivery agents serving hordes of e-commerce customers each day. Governments must start thinking about how workers whose skills will become obsolete in the near future can get opportunities to re-skill, and how profits from AI can be channeled into building a temporary safety net for them.
How should I know... it's a black box!
On its way to beating Lee Sedol, then the world's second-ranked Go player, AlphaGo played move 37 in the second game of the match - a move that was something of an enigma. It was described as a move that no human would ever make. Live commentators on the game, themselves champion Go players, were captivated by its beauty.
Some people assume that the intelligence of an AI derives from logical rules coded by its creators, but this is a naive conception of what a modern AI is. The creators of Deep Blue and AlphaGo were club-level chess and Go players at best. How then could their creations go on to beat the world's top players?
The fundamental power of AI lies in its ability to learn. You could think of a chess-playing AI as a machine for creating a better and better chess player rather than as a machine for playing chess. The more data an AI is "trained" on, and the more cases it encounters, the better it gets at whatever task it is programmed to learn. Which is why people are surprised to learn that no one, not even the programmers who built a deep-learning system, is in a position to inspect the rules it has come up with at the end of all this learning.
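To see what that opacity looks like in practice, here is a toy sketch of my own (assuming scikit-learn): train a small neural network on the bundled handwritten-digits dataset, then look at what it has "learned" - which is nothing more than matrices of numbers, not human-readable rules.

```python
# A toy illustration of the "black box" point: after training, the model's
# knowledge lives entirely in arrays of learned weights.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)              # 8x8 images of handwritten digits
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, y)

print("accuracy on the training data:", round(model.score(X, y), 3))
for i, w in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")   # just numbers, no rules
```

Nowhere in those weight matrices is there anything like an if-then rule that a human could read off.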
In other words, an AI is so powerful not despite being a "black box" but precisely for this reason. And an AI that is capable of rewriting its own code would get smarter at an exponential rate, because each time it would get better not only at the original task but also at rewriting its own code!
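One back-of-the-envelope way to see the exponential claim (a toy model of my own, not a result from the AI literature): suppose each round of self-rewriting multiplies the system's capability by some factor, and that the factor never shrinks because the system is also getting better at rewriting itself. Then

```latex
c_{n+1} = r_n \, c_n, \qquad r_{n+1} \ge r_n > 1
\quad\Longrightarrow\quad
c_n \ge c_0 \, r_0^{\,n}
```

i.e. capability after n rewrites grows at least geometrically.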
The Uncanny Valley
The first pedestrian fatality caused by a self-driving car (probably the most high-profile application of AI today) happened in March 2018, when an experimental Uber vehicle struck and killed a pedestrian in Tempe, Arizona. Without downplaying the human tragedy, this and other accidents could bring to the surface something that is already subliminally present - an instinctive distrust of intelligent machines.
The title of this section refers to an effect studied by neuroscientists: the likeability of an artificial agent increases the more human-like it becomes, but only up to a point - beyond that point, people seem not to like it when the robot or computer graphic becomes too human-like. In other words, overly human-like AI is treated with fear and suspicion.
Would you trust a self-driving car to avoid you every time you cross the street? Would you trust the life of your child to a diagnosis made by an AI? Would you trust an AI to grade your term paper? Before you say no, no and no, think about whether you may be committing a base-rate-neglect fallacy. The safety of self-driving cars should be compared against the average rate of accidents caused by human drivers, who could be drunk or just plain reckless. The rate of false negatives produced by an AI tasked with identifying malignant tissue should be compared against that of the best human specialists. The point being: an AI doesn't need to be perfect, just better than most humans at the same task.
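As a toy illustration of the base-rate point, here is a sketch with numbers that are entirely made up (purely hypothetical, chosen only to show the comparison):

```python
# Hypothetical comparison: the question is not "is the AI perfect?" but
# "is it better than the human alternative on the same task?"
# Both figures below are invented for illustration.
human_fatalities_per_million_miles = 0.012
ai_fatalities_per_million_miles = 0.004

if ai_fatalities_per_million_miles < human_fatalities_per_million_miles:
    improvement = 1 - ai_fatalities_per_million_miles / human_fatalities_per_million_miles
    print(f"On these made-up numbers, the AI driver is {improvement:.0%} safer per mile.")
else:
    print("On these made-up numbers, the human driver is still safer.")
```

The comparison, not the absolute error rate, is what should drive the decision.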
In fact, once an AI crosses an accuracy threshold in certain medical diagnosis tasks, how would a doctor or an institution justify choosing a human specialist over it? Given that the goal is to save patients, would not the end justify the means?
AI in Hollywood
Vaguely humanoid robots captured the imagination of movie audiences during the '80s and '90s, with portrayals ranging from cute to menacing (remember Arnold Schwarzenegger in Terminator 2?). More recently, stories about AI have received refreshingly nuanced and sensitive treatment in Hollywood and on Netflix.
In Her (2013), a lonely writer falls in love with an AI-based operating system, OS1. The interesting part for me is when it is revealed that OS1 is simultaneously "in a relationship" with thousands of people. An AI that can ingest gigabytes of knowledge must end up smarter than any individual can be; by analogy, an AI that can interact continuously with thousands of people must end up with a higher "emotional intelligence" than any human.
Ex Machina (2014) is one of the best films ever made on an AI-related topic, but an exploration of its central theme (what would it take for an AI to become sentient?) is beyond the scope of this article.
Black Mirror (2011) on Netflix employs a recurring theme in which a real person's disembodied mind (complete with all their memories) gets trapped inside some sort of device, like a toy. Nightmarish to contemplate!
If you enjoyed this article, please share it and stay tuned for Part 2, where I delve into lesser-known aspects of human intelligence and explore the link with AI.