Artificial Intelligence - Part One: Three things we think machines can never do, but they very well might
Amane Dannouni
Managing Director & Partner at Boston Consulting Group (BCG) | Technology, Media and Telecommunications
Have you ever felt that everywhere you go, you seem to be hearing about the same thing, as if the magical fabric of the universe is trying to send you a message? This was the case for me in the last couple of weeks: artificial intelligence was part of every conversation I had, every conference I attended, every article I read.
Maybe it’s just the fact that everyone is talking about it, or maybe it’s my confirmation bias, but since I am writing my first article in a while, I'm going with the more dramatic and mystical explanation. I am now compelled by higher forces to share with you what those conversations were about and, more importantly, to try to put into perspective the reality and fiction of what artificial intelligence can and cannot do.
What is artificial intelligence, really?
Before going deeper, I want to clear up the main confusion that made many of these conversations difficult. What do we have in mind when we talk about artificial intelligence? The best definition I have heard is also the most confusing: it is the ability of machines to complete tasks we thought unique to humans. This is confusing because it is time- and knowledge-dependent. Today we don't consider playing chess “uniquely human,” but it was considered so before 1997, when IBM’s Deep Blue beat Grandmaster Garry Kasparov.
The technological artifacts that allowed us time after time to redefine the boundary of what is uniquely human ended up redefining artificial intelligence. In the last century it was all about algorithms in which experts distilled their knowledge in a codified manner (as in IBM’s Deep Blue). Today we have moved to algorithms with the ability to learn implicit rules from huge amounts of data — so-called “machine learning.”
Machine learning algorithms (more precisely, a subset of them called neural networks) have pushed new frontiers in the last ten years: recognizing objects in images, translating text, detecting cancer, recognizing and generating speech, and more. How do they do that? If you look at the widely used family of supervised learning algorithms, they do it by combining three ingredients: algorithms, data, and computing power.
There have been attempts to generate “intelligence” in other ways, but the state of the art still relies on some variation of learning algorithms applied to massive amounts of data. In the last few years, however, improvements were mainly driven by the third ingredient: adding more computing power to crack new and more complex use cases. To oversimplify: the more chips you can connect your machine to, the more intelligent it is likely to become. On the surface, this might seem underwhelming. How far can we stretch the field’s achievements on such foundations?
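To make the three ingredients concrete, here is a deliberately tiny sketch of supervised learning. Everything in it is illustrative and hypothetical (the data points, the learning rate, the linear model); the point is only to show the pattern: an algorithm (gradient descent), data (example input–output pairs), and compute (the training loop) combine to learn a rule that was never explicitly coded.

```python
# A minimal, hypothetical sketch of supervised learning.
# Ingredient 1 — algorithm: gradient descent on a linear model y = w*x + b.
# Ingredient 2 — data: made-up (x, y) pairs that roughly follow y ≈ 2x.
# Ingredient 3 — compute: the training loop; more iterations = more compute.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # illustrative pairs

w, b = 0.0, 0.0          # model parameters, to be learned from data
learning_rate = 0.01

for epoch in range(2000):                 # more compute lets the fit improve
    for x, y in data:
        pred = w * x + b                  # the model's current guess
        error = pred - y                  # how wrong the guess is
        w -= learning_rate * error * x    # nudge parameters to shrink the error
        b -= learning_rate * error

print(w, b)  # w ends up close to 2: the implicit rule was learned, not hand-coded
```

Nobody told the program that "multiply by two" was the rule; it approximated that rule from examples, which is exactly the shift from last century's expert-coded systems to today's learning systems.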
How far can machines go?
Machines can do many things, and over the years, they have gradually taken over human tasks. Firms that were able to create the right enablers have gained measurably from automating not only simple repetitive labor (an old story) but more and more critical processes and decisions. They are using machines as exploration tools for vaccine production (Moderna), as product recommendation engines (Netflix), as workforce allocation optimizers (Uber) and, on shakier ground, as campaign management support for the hyper-personalization of electoral messages (Cambridge Analytica).
The list is growing, and many have raised concerns that this might lead to a dystopian future where machines develop abilities beyond what humans are able to control (I, Robot, anyone?). Some suggest this might happen in the 2040s (Ray Kurzweil in his book The Singularity Is Near).
This is plausible if you assume we will continue with the recent supra-linear progress of our machines' computing power. But I would like to challenge this assumption, not because I believe that some human abilities are fundamentally non-computable, although they might be, but because there are many balancing forces that some forecasters fail to consider.
Limits of an anthropomorphic imagination
First, let’s put to rest the idea that humans are structurally unique in certain abilities. The ones I hear about most are: managing complexity, thinking creatively, and demonstrating empathy.
I have worked with many companies that struggle to help their salesforce tackle the increasing complexity of product and client portfolios (more product variations to sell to more differentiated client segments). They turn to machine learning algorithms to support the sales process and generate 5% to 15% productivity improvements along the way. Humans are not great at managing complexity. We have some helpful heuristics and we, of course, understand our world better, but machines can gradually approximate this intuition by relying on more contextual data about what we buy and where we go: more cameras and microphones, more sensors we install around us or attach to our bodies.
Similarly, for creativity and empathy, I don’t think there is a conceptual barrier to what machines can do. Why would ideas or emotions be non-computable? We might want to believe they are uniquely human, but that's only because the question itself is misleading. It's like asking, “Can submarines swim?” They can’t, but they have a different way of delivering the same outcome of moving underwater from one point to another. Strategically speaking, in the context of a specific objective, humans and machines can achieve equivalent answers to what Clayton Christensen calls the “job to be done.” A convolutional neural network doesn’t “see” but can, for any visual stimulus, generate an accurate text description. Is that so different?
The “job to be done” by empathy can be achieved by other means. Just as submarines don’t “swim”, a humanoid home assistant might not “be empathetic,” but can achieve the goal of making humans feel they are heard and seen by using the right word sequence and the right mimicking of facial expressions. Is that so different?
The same applies to creativity. Today, generative adversarial networks (see, for example, Karras et al. 2018) can use a random seed of numbers or words to create paintings and even realistic pictures of human faces they have never seen before. How is that different from inspired art?
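The adversarial idea behind these networks is simple enough to sketch in a few lines. Below is a toy, assumption-laden illustration, not anything from Karras et al.: a one-dimensional "generator" turns random seeds into numbers, a "discriminator" tries to tell them apart from made-up "real" data, and each improves against the other until the generated samples resemble data the generator has never seen.

```python
import math
import random

# Toy sketch of adversarial training (the idea behind GANs).
# All numbers and model choices here are illustrative, not from any real system.

random.seed(0)

def sigmoid(s):
    s = max(-30.0, min(30.0, s))          # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-s))

a, b = 1.0, 0.0   # generator: turns a random seed z into a sample a*z + b
u, v = 0.0, 0.0   # discriminator: d(x) = sigmoid(u*x + v), "real or fake?"
lr = 0.05

for step in range(5000):
    x_real = random.gauss(4.0, 0.5)       # "real" data: draws around 4
    z = random.gauss(0.0, 1.0)            # random seed for the generator
    x_fake = a * z + b

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(u * x_real + v)
    d_fake = sigmoid(u * x_fake + v)
    u += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    v += lr * ((1 - d_real) - d_fake)

    # Generator update: adjust a, b so the discriminator is fooled.
    d_fake = sigmoid(u * x_fake + v)
    grad = -(1 - d_fake) * u              # non-saturating generator gradient
    a -= lr * grad * z
    b -= lr * grad

samples = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean = sum(samples) / len(samples)
print(mean)  # the generated samples drift toward where the "real" data lives
```

The generator never observes the real data directly; it only learns from the discriminator's feedback, which is what lets such systems produce outputs that were in no one's training set.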
Back to 1950
In a superb article published in October 1950, Alan Turing asked the question: can machines think? He challenged the ambiguity of its terms, and shared descriptions and rebuttals of nine objections (heard at the time) to his intuition that machines would ultimately be able to produce human-like intelligence. He also shared a prediction:
“I believe that in about fifty years’ time it will be possible to programme computers … to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”
We were not there by the end of the twentieth century, we are not yet there today for any form of general artificial intelligence, and the road ahead faces strong headwinds linked not only to our anthropomorphic judgment but also to structural choices in how artificial intelligence systems are being designed today…
To explore the balancing forces, see you in Part II!