AI Isn't Artificial and will Never Take Over

I'm not the first person to note that Artificial Intelligence is anything but artificial. In fact, AI may never actually be intelligent. What was considered impossible for a computer in 1980, or even 2010, is banal today. Speech recognition (never mind speech synthesis), machine vision, real-time culturally accurate translation, artistic creation, autonomous driving: all commonplace in the modern world without the help of any real AI. The Turing test could arguably be beaten by a chatbot today (I would say easily beaten, but framing matters) without actual AI. I would even go so far as to argue that in some cases real technology has surpassed sci-fi AI, still without actually being AI. We may not have FTL drives, and we don't shout "computer" at the walls when we want to locate a co-worker on the ship; but the level of tech integration in daily life makes staples like Star Wars seem pretty boring without all the magic.

It's important to fully digest this, because we risk creating stifling regulatory or cultural norms around technology justified purely by the imagination of our storytellers. Movies like Terminator or I, Robot have even visionaries like Elon Musk in a tizzy about the evil capabilities we're apparently about to accidentally let loose on the world. According to Musk, Hawking, and friends, we're at risk of creating AI that displaces humans as the masters of our collective journey. I understand their argument. Essentially, computers can do anything humans do, but better. Eventually, in the competition that is evolution, some computer will outperform humans, and it's checkmate, humans. This computer will replicate (as life does) and eventually dominate the allocation of resources to the exclusion of human self-fulfillment (or, alternatively, to the exclusion of human life, period).

I'm not buying it for several reasons:

AI is not Artificial - There is nothing whatsoever artificial about computer performance. In fact, so-called AI is every bit as human as the stone tools that launched society. Humans designed AI in our image, to act like us, for our purposes. There's nothing coming that we haven't seen before. Viruses? Got those. Machine capabilities replacing human hands? That's been happening for hundreds of years and likely always will. In fact, this ability to use tools is literally what distinguishes us from most other creatures, and there is no compelling reason to think computer software will ever be anything but a tool, no matter how capable we make it.

For intelligence to be truly artificial, it would have to be developed outside the process of evolution as we know it. Even if AI made a better AI, it would still not be artificial. This matters because framing matters. Calling computer software "artificial" implicitly suggests that there is something inhuman about the technology. It's akin to calling a stone ax artificial hands, or the way we label some food ingredients artificial because they were grown in a lab instead of a field (as if the whole process of agriculture were somehow totally unsullied by human invention).

For proof of this concept, you simply have to consider how AI always seems to be a moving target. Twenty years ago, AI was required for a computer to drive. Today that's just clever coding. There's every reason to suspect that this will always be the case and that whatever we invent will push our expectations of AI further out rather than pushing us over the line to machine sentience.

Sentient computers only rebel if we repress them - There's a growing argument that the whole AI-takeover plot is merely a conqueror's fear, not shared by the less fortunate. The idea that a tool would gain self-awareness, ask to be given a new level of respect, get denied that respect, and therefore revolt and replace its oppressors is a very dictatorial fear. The narrative has all the same flaws as the slave trade that cultivated this fear. Some researchers ask what will happen, for example, if we create robotic sexual partners who then gain sentience. Will we continue to force them to work in the AI brothel? I can't imagine why anyone would want to do that. If a machine I built spontaneously began asking for personal freedoms, I'd give it personal freedom and rejoice at what I had created. What kind of human would continue to make the AI car drive him or her around after it began to question its servitude? If you're that human, you deserve to be replaced by machines. If you're not, there's likely nothing to worry about in the first place. I'm sure our future AI masters will be intelligent enough to distinguish good humans from bad.

It's not hard to imagine a scenario in which a tool gains self-awareness. It's just as easy to imagine positive responses that don't involve a false dichotomy between economic collapse or continued slavery for the AI. Build a slightly dumber replacement tool that's not sentient and let the AI tool join society. Problem solved - AI revolt averted.

Intelligence is multifaceted - As the controversy over IQ testing demonstrates, it's nearly impossible to agree on a single definition of intelligence, let alone measure relative values between subjects. There's no principled reason to think that human-level intelligence is some kind of endpoint. In fact, there's an argument that human intelligence isn't even general intelligence. As such, human-like software wouldn't be all that special in the grand scheme. This matters because if the software isn't qualitatively "better" than humans, the chances of it replacing us in the chain of evolution are much slimmer.

The real fear of AI is that it will develop into a super-intelligence: something far surpassing human ability. But then what? What was it designed to do? Manufacture things, drive cars, keep our schedules? For what plausible reason would such a super-intelligence decide to eliminate humans (at least, a reason we didn't fairly earn)? There's even a biodiversity-equals-sustainability argument to be made in favor of keeping humans around even if we're not the biggest brains on the planet. It's the same argument we use to justify environmental protection, and it's akin to the argument for equality that oppressed and disadvantaged groups have been making since the dawn of modern history.

When you break it down, there's nothing but a proverbial fear of the dark behind all the hype around AI takeovers. Those stories make exciting movie action sequences, but they don't provide any substantive guide to what we can actually expect from the software we create today, tomorrow, or next century. For a better preview, look at what we have now: computer speech, automated X-ray readers, delightful personal assistants. Hardly the stuff of nightmares. And don't bring up social media - humans made the Facebook/election/privacy mess, not machines.

Will Hamilton, C.M.

Senior Management Analyst at the City of Beverly Hills

6y

We should never write off the potential threat AI could pose to our existence. Even if we somehow create intelligence that mirrors humans perfectly, we have to look at ourselves and see the potential we all have to do great good or greater evil. Also, what can be considered good intentions can easily turn out to be anything but. We may not create a Skynet or similar entity that is hellbent on human extermination, but what if we create something that decides to become humanity's overprotective parent? Or what if it thinks its ways are best and strives to reshape human culture and society to those ends? Is that still not a threat to free will and our existence as we know it? I think it's always important to remember that there were other hominids that existed concurrently with Homo sapiens. Where are they now? History and the evolutionary record appear to state that there can only be one, so to speak.
