The Next Great Decoupling: AI Takes Control
Shelly Palmer
Professor of Advanced Media in Residence at S.I. Newhouse School of Public Communications at Syracuse University
Last night I binge-watched the latest three episodes of Star Trek: Discovery, which set up the season 2 finale – spoiler alert – a battle royale between “Control” (an AI that is doing all it can to achieve consciousness so it can wipe out all sentient life in the galaxy) and the crew of the U.S.S. Discovery, who will try to save the galaxy using only their wits, two starships, and a time travel suit. While I have been ignoring the laws of physics (and computer science) and suspending my disbelief to improve the quality of my “Trekkie” enjoyment since 1966, there was something thought-provoking about this science fantasy threat.
In his book Homo Deus: A Brief History of Tomorrow (Harper Perennial, 2017), author Yuval Noah Harari predicts that a new version of the Great Decoupling is upon us. This time, instead of the economists’ version – in which productivity and GDP growth decoupled from wages and jobs – Harari suggests we are on the verge of a new Great Decoupling: the separation of intelligence (AI) from consciousness (human). Harari is certainly not the first person to think of this, but I really like the way he writes. (He also wrote Sapiens: A Brief History of Humankind. Both books are great reads!)
In Homo Deus, Harari posits that if we successfully decouple intelligence from consciousness:
- Humans will lose their economic and military usefulness, hence the economic and political system will stop attaching much value to them.
- The system will still find value in humans collectively, but not in unique individuals.
- The system will still find value in some unique individuals, but these will be a new elite of upgraded superhumans rather than the mass of the population.
Harari builds the case for these three apocalyptic prophecies by offering as axiomatic that “organisms are algorithms,” and (to paraphrase) that the algorithms are in control.
Intelligence vs. Control
Certainly, when an app such as Waze tells us where to go, it must “think” about how many vehicles it sends on any particular route. It was created to reduce travel time for vehicles on the road. It does a very good job, which is why people use it. In practice, the more people use it, the better it gets. It “learns.” Harari’s thesis treats the totality of Waze as one giant algorithm. That is not strictly accurate, but let’s go with it for the sake of argument.
Is Waze in control? It depends on your point of view. Waze is telling you how to go. But it is not telling you why. It is not forcing you to take the suggested route (although you or your autonomous vehicle might decide Waze knows best). It is suggesting a route that has the highest probability of getting you to your destination in the shortest period of time.
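To make the distinction concrete, here is a toy sketch (in Python) of what “suggest the route most likely to be fastest” might look like. This is not Waze’s actual algorithm; the route names, time estimates, and uncertainty penalty below are invented for illustration. The point is that the program ranks options – it does not decide where you are going, or why.

```python
# Toy sketch of route suggestion: pick the option with the lowest
# expected travel time, lightly penalizing uncertain estimates.
# NOT Waze's real system -- all names and numbers are made up.
from dataclasses import dataclass
from typing import List

@dataclass
class Route:
    name: str
    expected_minutes: float   # estimate, e.g. from live traffic data
    uncertainty: float        # how noisy that estimate is

def suggest_route(routes: List[Route]) -> Route:
    """Return the route most likely to be fastest."""
    return min(routes, key=lambda r: r.expected_minutes + 0.5 * r.uncertainty)

if __name__ == "__main__":
    options = [
        Route("highway", expected_minutes=22.0, uncertainty=8.0),
        Route("surface streets", expected_minutes=25.0, uncertainty=2.0),
    ]
    best = suggest_route(options)
    print(f"Suggested route: {best.name}")
    # The app suggests; the driver (or the autonomous vehicle) decides.
```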
When you use Google to research a topic, is Google in control? It certainly controls what you see at the top of your search results page. But (at the moment) it does not tell you what to search for or why.
To my knowledge, neither Waze nor Google nor any other AI is conscious – but I can no more comprehend computer consciousness than a computer can comprehend human consciousness. I am not a computer, and computers are not human – at least not yet.
Will humans become as useless as Harari suggests when consciousness and intelligence are decoupled? Maybe. But there is something more sinister and disturbing that may occur before his predicted future arrives.
What Star Trek Made Me Think About
When will all the data (or a significant amount of data) from all the disparate, specialized, purpose-built artificial intelligence systems be hacked together into a single, massive artificial control system? Or even worse, into several competing massive artificial control systems? We could call it Meta-AI or Artificial Control – but whatever we call it, it won’t be good for us.
It may be achieved with digital computational devices (the computers you already know and love) that represent varying quantities symbolically as their numerical values change. Or it may be accomplished with analog computational devices (you probably don’t own an electronic analog computer as they don’t run common software), which use the continuously variable aspects of physical phenomena to solve problems. Or some digital-analog hybrid. Or we may have to wait for quantum computers (which promise a level of computational power said to be exponentially greater than previous technologies) to go online. Of course, big government, big corporations, or nation-states may corner the market on quantum computing and use it for control, but that is for science fiction writers, not me, to deal with.
Whatever technology ends up being tasked with (or seizing) artificial control, the thought of artificial control scares me way more than the thought of rogue artificial intelligence. Even Harari’s dystopian future of useless (conscious but not intelligent enough) humans doesn’t make the hair on the back of my neck stand up in the same way. Once something (conscious or not) achieves artificial control, we will be somewhere new.
Other than the purveyors of fear, uncertainty, and doubt, most people think about AI as just another tool – the way we think about a hammer or a drill. But AI is not just another tool. Hammers don’t think about human needs or consider what needs to be built. Humans control hammers.
Artificial control will be another thing altogether. It will score our human needs by positively reinforcing behaviors that help it achieve its goals (whatever they may be). Then it will give us more of what we become addicted to until it actually changes our behaviors – sort of like social media addiction. Oh, wait – a nascent version of artificial control may already be here.
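To illustrate that loop (purely hypothetically – the content categories, engagement numbers, and learning rate below are invented, and no real platform works as simply as this), here is a sketch of a system that serves whatever already engages us, reinforces it, and slowly narrows our behavior:

```python
# Toy sketch of a positive-reinforcement feedback loop -- a hypothetical
# "artificial control" system that feeds us more of whatever keeps us engaged.
# All names and numbers are invented for illustration.
import random

preferences = {"news": 0.3, "outrage": 0.3, "cat videos": 0.4}  # assumed starting tastes

def simulated_engagement(item: str) -> float:
    """Stand-in for observed behavior (clicks, watch time, shares)."""
    return preferences[item] * random.uniform(0.8, 1.2)

def serve_and_reinforce(rounds: int = 1000, learning_rate: float = 0.05) -> None:
    for _ in range(rounds):
        # Serve whatever currently engages the "user" most.
        item = max(preferences, key=preferences.get)
        reward = simulated_engagement(item)
        # Positive reinforcement: feed more of what already works,
        # gradually reshaping behavior toward it.
        preferences[item] += learning_rate * reward
        total = sum(preferences.values())
        for key in preferences:
            preferences[key] /= total  # renormalize to keep shares comparable

serve_and_reinforce()
print(preferences)  # one interest tends to crowd out the others
```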
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
About Shelly Palmer
Named one of LinkedIn’s Top Voices in Technology, Shelly Palmer is CEO of The Palmer Group, a strategy, design and engineering firm focused at the nexus of technology, media and marketing. He is Fox 5 New York's on-air tech and digital media expert, writes a weekly column for AdAge, and is a regular commentator on CNBC and CNN. Follow @shellypalmer, visit shellypalmer.com, or subscribe to our daily email: https://ow.ly/WsHcb