Do we overestimate AI because we underestimate humanity?

On my dark days I'm in the "AI is an existential threat to humanity" camp. That's a camp with a whole lot of pedigree. Think Stephen Hawking, Elon Musk, Bill Gates, and now, perhaps, even Tim Ferriss (if his recent musings on Freakonomics are anything to go by).

The Terminator movies and TV series are the most enduring telling of this view. Artificial Intelligence reaches the singularity. As it gains self-awareness, it sees us as a threat. And so it sets about destroying us. In Terminator, AI destroys us because someone was silly enough to connect Skynet to the nuclear weapons network.

But perhaps our destruction by AI will be a little more pedestrian. Daniel H. Wilson, with his PhD in Robotics from Carnegie Mellon, imagines a self-aware AI coming at us through the internet, using the Internet of Things to turn the quotidian into the tool of our destruction. Here he is in his great SciFi book Robopocalypse:

"People should know that, at first, the enemy looked like everyday stuff: cars, buildings, phones. Then later, when they started designing themselves, Rob looked familiar but distorted, like people and animals from some other universe, built by some other god.... The machines came at us in our everyday lives and they came from our dreams and nightmares, too."

In this view, going anywhere near AI will be "summoning the demon". And even if you create a self-aware AI in the lab, "air-gapped" from the internet, it will still figure out how to get out. A super-intelligent, self-aware AI will be smart enough to fool its human creators. Somehow it will get out. (If you haven't yet seen Ex Machina, now would be a good time!)

Of course, a lot of AI pessimists don't yet see an existential threat. But they do see a pretty bleak future for human beings as we become increasingly "useless" to the economy. The latest contributor to this view is Yuval Noah Harari, a Professor of History at the Hebrew University of Jerusalem, best known for his 2014 book Sapiens: A Brief History of Humankind (a book that has been sitting in my "to read" pile for about a year). Harari is about to publish another book: Homo Deus: A Brief History of Tomorrow. In a Guardian report, he sketches one particularly bleak possible future for humanity. In the reporter's words: "the human mob might end up jobless and aimless, whiling away our days off our nuts on drugs, with VR headsets strapped to our faces". In short, human beings become increasingly irrelevant and AI takes over all the jobs in the economy. We see "the rise of the useless class". All that's left for us is to eke out a meaningless existence, more or less supported by the state (although exactly how that happens is not clear to me, given there are no taxpayers left). Of course, we've heard all this before (think of Keynes and his 15-hour work week predictions), but, as Harari says:

"I’m aware that these kinds of forecasts have been around for at least 200 years, from the beginning of the Industrial Revolution, and they never came true so far. It’s basically the boy who cried wolf, ... But in the original story of the boy who cried wolf, in the end, the wolf actually comes..."

But, I wonder if there is another way to look at all this. I wonder if we have been so mesmerised by the rise of computing and robotics that we have shrunk what it means to be human. In our dazzlement with computing, we have begun to see thinking only in terms of what machines can do. In other words, have we shrunk our definition of thinking to fit into the constrained frame of computing?

Are we losing our awe and wonder at what it means to be human? The true richness of human thought, creativity, love, culture and life? Have we replaced it with a "diminished awe"? A simplified, "bonsai-ed" awe of the machine? And in so doing, has our highest conception of what it means to be human actually been "dumbed down" and equated to what a machine can do? Louisa Hall, in her 2015 SciFi novel Speak, has one of her robot designers ask (in a rare moment of perspicuity) whether humans have shrunk to a "binary race" (p165):

"We mimic the patterns of our computers, training our brains towards yeses and nos, endless series of zeroes and ones. We've lost confidence in our own minds. Threatened by what computers can do, we teach our children floating point math. They round the complexity of irrational numbers into simple integers so that light-years of information can be compressed into bits. We've completed the golden ratio, moving the decimal point up. But at what cost!"

In short, in the face of the seemingly unstoppable rise of computing power, have we "lost confidence in our own minds"? Or even worse, have we lost confidence in what it means to be human?

I wonder, if our anthropology - our understanding of human beings - were stronger, richer, more complex, even more holy, whether we would be less worried about the rise of the machines. I wonder, if we had a more sophisticated anthropology, whether we would understand better that even as computing power increases, it's simply "more of the same". More of the same mono-dimensional approach.

In short, do we really think that human beings can be replicated simply by repeated applications of Moore's Law?

As always, if you're enjoying these posts, sign up to the regular weekly email so you don't miss any - drop me a line at [email protected].

