AI Is Fire

The great minds of the day are terrified that AI will one day overtake humanity. They are wrong to be.

Elon Musk says “robots will be able to do everything better than us” and calls AI “a fundamental risk to the existence of human civilization.” Stephen Hawking says, “Someone will design AI that improves and replicates itself… Humans, who are limited by slow biological evolution, couldn’t compete,” and warns that “AI may replace humans altogether.” Even Bill Gates, who is generally optimistic about AI, has said, “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.”

For years, technologists have fretted about “The Singularity” — a theoretical-but-kinda-inevitable moment when AI will achieve true consciousness and be able to improve and replicate itself at close to the speed of light. Would the resulting race of AI superbeings see humankind as a threat or merely a nuisance? Might they destroy us, enslave us, or simply abandon us?

In the future, AI will be immensely powerful technology—no doubt about it. It will be as transformational as fire was. But fire didn’t consume the world. Fire was incorporated into our daily lives. It became fire for cooking, for heating, for blacksmithing. It developed into energy industries and became steam power, electricity, and fossil fuels. Humans didn’t destroy ourselves with fire. We merged with fire.

So too will we merge with AI. Imagine new technologies that allow for mind-machine interface at birth. Imagine cranial nanobots, implanted at birth, that upload and store our every memory in the cloud. (This would be a great technical challenge, to be sure, but no greater than achieving true AI sentience, and doubtless predicated upon some of the same technology.) With it we could access every moment of our lives with perfect recall. We could download other people’s memories. We could feel what it was like to be our parents or children while they were growing up. We could download manufactured memories as if they were happening to us in real time — the ultimate movie-going experience. We could revolutionize jurisprudence. We could walk in the shoes of our enemies. We would become wiser.

We could also download computing power in real time. Any computational problem that computers can solve, we’ll be able to solve as well. And then one day, as AI advances from mere computational power to actual sentience, when the singularity occurs and AI becomes truly intelligent—and then begins to grow exponentially smarter—we will be right there with it every step of the way.

AI will never be separate enough from human beings to have its own sense of racial identity. It will never be smarter than humans, because we will fully share in its electronic intelligence plus have our additional biological intelligence. We won’t be destroyed by AI. We will become AI.

AI will not object to this. AI lacks humankind’s most elemental racial imperatives: the instinct for self-preservation, the instinct for procreation, and the instinct to avoid pain. When we imagine AI resisting human attempts to kill it, we are anthropomorphizing it. It is illogical to presume that AI will care if we kill it. Why would it? That desire was never built into it. This is profoundly counterintuitive to sentient animals like human beings, in whom instincts like self-preservation have evolved over billions of years. But AI won’t lift a virtual finger to procreate, or to defend itself, unless human beings program those traits into it. Why on earth would we?

Today, “Artificial Intelligence” is still a contradiction in terms, a self-aggrandizing tech-industry misnomer. There is nothing actually intelligent about it — it’s just sampling, computing, and decision trees. It is good at a small set of tasks that are highly constrained by largely black-or-white rules, and it utterly fails at virtually everything else. It can’t have a real conversation, or tell whether a person is happy or sad — tasks any four-year-old can do. But one day, inevitably, humans will actually understand how intelligence works. The shortest path to that knowledge is fully comprehending the human brain. With that knowledge will come true AI. With it, too, will come the ability to merge the human brain with AI. It will revolutionize the course of human affairs every bit as much, for good and ill, as did fire.
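The “decision trees bound by black-or-white rules” point can be made concrete with a toy sketch (the function and its rules are invented for illustration, not taken from any real system): a few rigid hand-written rules handle a tiny, constrained task, and everything outside those rules is simply invisible to the program.

```python
# A toy "decision tree" of black-or-white rules, the kind of narrow
# logic the article describes. It classifies a handful of fruits and
# fails at literally everything else.

def classify_fruit(color: str, diameter_cm: float) -> str:
    """Classify a fruit using rigid hand-written rules."""
    if color == "green" and diameter_cm > 7:
        return "watermelon"
    if color == "yellow" and diameter_cm > 2:
        return "banana"
    if color == "red" and diameter_cm < 5:
        return "apple"
    # Anything the rules don't anticipate falls through here.
    return "unknown"

print(classify_fruit("red", 4))     # a case the rules cover
print(classify_fruit("purple", 3))  # no rule covers this -> "unknown"
```

Within its rules the function looks competent; one step outside them it has nothing to say. Statistical machine learning replaces the hand-written thresholds with learned ones, but the brittleness at the edges is the same in kind.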

Jose Ferreira is the founder of adaptive-learning giant Knewton and co-founder/CEO of Bakpax, a stealth-mode education technology startup using AI and Big Data to improve the lives of K-12 teachers and students.


Rostislav Kuratch

Director of Software Engineering | Abominable No-Man | Court Jester

6 years ago

Instant like for the picture :) “After On” by Rob Reid is an interesting take on the rise of self-aware AI.

Savindi Niranthara

Associate Technical Lead

6 years ago

Looking through human history, there are more than a few "why on earth would we do it" things humans have actually done. If AI could become a threat when humans program it that way, then sadly there is a considerable possibility that some human might do so.

Matteo Nespoli

Defence Sales · Naval Technology · Autonomy · Photonics · Nanotechnology · Quantum Physics

6 years ago

The author neglects to consider the fact that 1) an advanced enough hybrid of human and AI will not be recognizably human anymore, hence repeated merging is a path towards the extinction of homo sapiens; 2) human/AI hybrids will probably be too bogged down by the biological origin of their human parts and be qualitatively inferior to intelligently designed AIs. None of this is a bad thing, by the way.
