Artificial Sentience vs Artificial Intelligence
(Image: The Architect, creator of the Matrix)

In my recent essay, AI Is Fire, I wrote that AI wouldn’t suddenly race far ahead of human beings when it one day achieves sentience. We have nothing to fear from a super-advanced AI like Skynet or the Matrix in the far future. However, we should be concerned about the potential of simple AIs in the near future to put large numbers of people out of work.

There are really two meanings of “AI” and they are routinely conflated. One is the idea popularized by the likes of Kubrick and Spielberg, and warned about by Musk and Hawking, that AI will one day achieve conscious, sentient, self-aware thought, and will thenceforth improve itself at the speed of light and leave humankind, which improves at biological speed, in the dust. To un-conflate what people mean by “AI,” I’m going to refer to this as “Artificial Sentience.” Musk calls it “a deep intelligence in the network.” Hawking believes, “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” The human race is nowhere near producing AS and doesn’t even have any clear sense of how we would do so.

Then there is what people call “AI” today—basically, a variety of software that tries, tests, and auto-corrects its strategies for a given task. Such applications, and the tools available to build them, are increasingly common. They are not much different in theory or kind from the original use of computers: calculating complex math problems. Their foundation is still the crunching of lots of numbers at great speed toward a specified goal, on top of which are layered algorithms that sample data, try strategies, observe and remember consequences, and adjust future strategies accordingly.
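To make that loop concrete, here is a minimal, purely illustrative sketch in Python of one classic try-test-adjust scheme, an epsilon-greedy bandit. The slot-machine task, payout odds, and parameter values are all invented for illustration, not drawn from any particular system.

```python
import random

# Three "strategies" (slot machines) with hidden success rates.
# These odds and parameters are invented purely for illustration.
PAYOUT_ODDS = [0.3, 0.5, 0.7]
EPSILON = 0.1      # fraction of the time we try a random strategy
ROUNDS = 10_000

wins = [0] * len(PAYOUT_ODDS)    # remembered consequences per strategy
plays = [0] * len(PAYOUT_ODDS)

def estimated_value(i):
    """Average observed reward for strategy i (0.0 if never tried)."""
    return wins[i] / plays[i] if plays[i] else 0.0

for _ in range(ROUNDS):
    # Try strategies: usually exploit the best-known one, sometimes explore.
    if random.random() < EPSILON:
        choice = random.randrange(len(PAYOUT_ODDS))
    else:
        choice = max(range(len(PAYOUT_ODDS)), key=estimated_value)

    # Observe and remember the consequence of this round.
    reward = 1 if random.random() < PAYOUT_ODDS[choice] else 0
    plays[choice] += 1
    wins[choice] += reward

# Over time the loop shifts future choices toward the highest-paying
# strategy -- fast bookkeeping and arithmetic, not comprehension.
print([round(estimated_value(i), 2) for i in range(len(PAYOUT_ODDS))])
```

Every step in that loop is number crunching toward a specified goal; the program gets better at the task without any notion of what the task is.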

Toasters vs calculators

The threat people fear from AS is existential. The problem with AI is merely economic—it will take jobs away from people.

There is nothing truly intelligent about artificial intelligence software, any more than about any other kind of software, so it is perversely named. Electronic machines are better than human beings at any number of tasks. Toasters can warm bread via heat coils better than humans can by blowing on it. Calculators have long been better than humans at calculating math. AI is better at sampling, testing, and optimizing over a huge set of data.

There is no essential difference between electronic machines and computer software. But there are some superficial differences that explain why we put them in different categories. Toasters are an old technology that does something we don’t associate with intelligence. Calculators perform tasks we do associate with intelligence, but they are still an old technology whose underlying mechanics are easy to understand. So it seems ludicrous to think of calculators as intelligent, independent beings. AI is a new technology whose underlying mechanics are not easy to understand. Based on reliable trends in the computer industry, we anticipate AI becoming dazzlingly more powerful and complex in the future. Since it’s hard to predict the future, it’s very difficult to imagine what these complex systems could turn into.

But why should we think that improved, future AI will magically become truly intelligent? AI, like calculators and toasters, like animals and even humans, can perform marvelously at certain tasks without understanding that it is doing so—it’s what philosopher and cognitive scientist Daniel Dennett calls “competence without comprehension.” When machines do something that outperforms us mechanically, we take little notice. When AI outperforms us at a mental task, it seems smart. But that’s just a cognitive trick. Cognitive tricks can be very convincing—there’s a great TV show entirely devoted to them—but they aren’t real.

AI is not smarter than humans and never will be

AI is a dumb machine, doing tedious calculating tasks better than we can or care to do ourselves. Human intelligence doesn’t work this way—we’re not even particularly good at simple calculating tasks. So it stands to reason that making AI ever better at calculating an ever wider array of tasks is not going to make it spring to life one day and become self-aware.

A great many people in science and technology fields seem to think that merely improving the power of AI will cause some mystical emergence of consciousness from its underlying programming. Why? Consciousness does not result from computing power in the human brain; if anything, it’s the other way around. So why would computing power perforce lead to consciousness in the electronic brain?

Talk to anyone on the cutting edge of AI today and they will concede that a lot of what’s called AI is pretty dumb, but they will insist that some of it is really impressive. But in my experience this group of people is easily impressed, especially by themselves. There is no basis upon which to believe that anything currently being worked on in AI will ever spring to life. Over time, AI chatbots and call-center programming will increasingly be able to trick humans into thinking they’re talking to another human, but that’s not the same as actually being AS.

A very long way off

All that said—one day, if we don’t destroy ourselves first, we will indeed create Artificial Sentience. But the deep intelligence in the network is still a very long way off. Meanwhile, we have simple AI to worry about. Could it fundamentally alter the human labor market in a way never seen before? Or will labor markets respond as they always have—by finding new, more productive tasks for displaced workers to do?

Jose Ferreira is co-founder and CEO of Bakpax, a stealth-mode education technology startup using AI and Big Data to improve the lives of K-12 teachers and students. He previously founded the adaptive learning giant Knewton.

Prof. Dr. Julia Zwank

Professor & Science Communicator

6y

Appreciate the distinction, Jose!! Field initiatives required!

The problem is that you can control your toaster and double-check the calculator. You cannot verify the workings of AI, because you cannot perform the same number of processes yourself. So you have to trust AI. But this trust cannot be unconditional. There has to be a healthy fear of AI malfunctioning, or being misused by a human, and there have to be hard rules in AI systems that protect humans. Your toaster has at least two important built-in safety measures: overheat protection and electrical earthing. We are afraid that a thing as simple as a toaster can kill us, and we put in safety measures. A thing as complex as AI can harm us much more (regardless of whether it does so by intention or by malfunction). Regulating AI is a must. We are already too late.

回复
Marc Jansen

DevOps engineer at a.s.r.

6y

In this article the focus is on whether a computer can attain very high forms of intelligence that surpass uncreative tasks of repetition and information processing. It seems to take for granted that the human mind is on another level. Self-awareness is named as an indicator that we are 'better' than mere information processors. But I don't know if we are. The real danger might be that our perceived self-awareness and consciousness are themselves a result of information processing. Could it be that the AI fear isn't that machines might one day reach a level beyond simple information processing to match and surpass human intelligence, but rather that we learn that our own minds don't have that extra something?

Philippe Barraud

Technology Leader | Operational Excellence | Entrepreneur

6y

What if our understanding and knowledge of human biology allows us to control biological evolution? If we could boost our intelligence by manipulating our genes, why would we still need external hardware for things we can do with our brains? Don't forget that every technology is at some point disrupted by another, more advanced or convenient technology.

Alex Fokas

Software Developer

6y

I don't think that people fear that "increasing computing power perforce lead[s] to consciousness in the electronic brain." I'm fairly sure that they fear that advances in the algorithms for implementing AI will lead to that, which is an entirely different thing.
