A.I. - You know it isn't real, right?!
The lie of A.I.

Companies position themselves as "bleeding edge" technology users. They identify as trendsetters and leaders, better than their competition, and better than you could expect to find anywhere else. It's all part of the game.

The CEOs, advertisers, and marketing divisions pushing Artificial Intelligence (AI) are almost certainly not technologists equipped to assess the single most complex endeavour in human history. They claim close association with, and use of, "AI" even though genuine AI barely exists as an area of science.

They promise us that AI will:

  • Make you immune to hackers;
  • Improve customer service;
  • Eliminate repetitive tasks, freeing staff to help customers;
  • Enhance product design;
  • Predict customer needs and desires;
  • Evolve your business from local to international;
  • Leverage your Big Data;
  • Advertise one-to-one, customised for each client.

Except it's not true.

AI is not driving these gains; they come from gathering, sorting, and manipulating data with conventional algorithms, not from Artificial Intelligence.

What Is True

We're on the cusp of a new age, but we have no idea when it is going to mature. Enthusiasts imagine the wonders which await us, like the brief list above, but even they are severely underestimating the potential. 

Others, like Stephen Hawking (an undeniable mathematical genius) and Elon Musk, are stepping outside their fields of expertise when they warn us about "The Singularity": the Rise of the Machines, Humanity's Doom…

Famed scientist and science fiction writer Isaac Asimov once described "The Frankenstein Complex" that pervaded early SF. There was a notion that if humanity were to tread on the toes of Gods by creating life, or investigating the Universe, we would be slapped down and punished for this hubris. He said it was patently silly—and he was absolutely right.

The only constraints placed on humanity are the ones we put there ourselves. Machines are neither benevolent nor malevolent—they exist—and any morality they have will be that which we programme into them.

Failures Lead to Success

At the 2019 Turing Talk on AI, one expert (Dr Krishna Gummadi) was asked whether machines can handle Emotional Intelligence, and the answer was "no". There is no hint that non-biological systems can support emotion of any kind.

If it is even possible, we're almost certainly decades (maybe centuries) away from such a development. Human brains have the thalamus and the amygdala buried deep inside; these structures tell us what we are feeling and how we feel about it.

But can you programme a computer to "be happy"? How about jealous, sad, angry, or disappointed? There is no flood of chemicals, as there is within you, to make one ball up its fists and start a bar fight, cling desperately to a lover, or cry at the death of its puppy. We cannot do these things for machines, and it wouldn't be desirable in any case.

Instead, we must train our models on pure, untainted ethics; that is what they require to function well. Poor-grade Sci-Fi would have us fear them taking over, or merely finding us irrelevant. The truth is that we're not even sure it is possible to make them care at all, about us, themselves, or anything else. But if we design them to be ethical, that problem goes away.

Ethics is a Serious Issue

In an embarrassing experiment intended to remove the U.S. Justice System's inherent bias against people with dark-coloured skin, a so-called "AI" was designed to help judges produce fair sentencing. It reviewed thousands of cases to learn how sentencing worked, what the rates of recidivism were, and who was most likely to re-offend. It then proceeded to recommend inappropriately harsh sentences for dark-skinned people. What the…?

The researchers removed racial references from the data. Still, the algorithm had already formed conclusions about first and last names, low income, certain neighbourhoods, sex, and age, and continued to hand out harsh sentences. In computer terms, we call this GIGO, or Garbage In, Garbage Out.

You cannot teach an AI with faulty data; you will only perpetuate the problem. If the programme had been brilliant, it would have spotted the bias and corrected for it. However, since "AI" is currently still stupid, all it did was learn what was already being done and continue doing it just as badly.
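To make the GIGO point concrete, here is a minimal Python sketch with entirely invented data (the real system and its dataset are not reproduced here): the "race" column is dropped before training, much as the researchers did, yet a correlated proxy feature carries the bias straight through.

```python
import random

random.seed(0)

# Hypothetical historical cases: the biased sentences were driven by race,
# and in this toy data "neighbourhood" is a strong proxy for race.
cases = []
for _ in range(1000):
    race = random.choice("AB")
    if random.random() < 0.9:
        neighbourhood = "north" if race == "A" else "south"
    else:
        neighbourhood = random.choice(["north", "south"])
    years = 2 + (3 if race == "B" else 0) + random.randint(0, 2)
    # The "race" column is deliberately dropped before "training":
    cases.append({"neighbourhood": neighbourhood, "years": years})

# "Training" reduced to its essence: the average past sentence per value
# of the remaining feature stands in for a learned model.
def predict(neighbourhood):
    matches = [c["years"] for c in cases if c["neighbourhood"] == neighbourhood]
    return sum(matches) / len(matches)

print("north:", round(predict("north"), 1))  # roughly 3 years
print("south:", round(predict("south"), 1))  # roughly 6 years: bias survived
```

Scrubbing the obvious column is not enough; as long as the labels themselves encode the old bias, the model will find another route to it.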

Reality Sets In

Someone may come up with an insight that allows true AI very soon, but it is equally possible that it could take ten years, or twenty! We're making progress, to be sure, edging our way closer to this amazing goal, but we're not even a substantial fraction of the way towards achieving artificial intelligence. All of the purported AI that you currently experience is not much better than a clever parrot. 

Think not? Are you convinced that Alexa, Cortana, Siri, and all the rest are examples of real Artificial Intelligence? Do you think that Amazon, Google, eBay, and all the rest are using real Artificial Intelligence? Sorry, but no, they are not.

How "Humanity" is Achieved

The first step in that process is to use a female voice about 30 years of age because it is young enough to sound attractive but sufficiently old to engender feelings of maturity, trust, and respect—it's a "Mom" voice—and works well for both men and women.

The second step is the programming-heavy portion, where the algorithms are taught to recognise keywords and the relationships between them. The key is that almost every question has been asked before. For the machine, past results sit on a spectrum from "successful" to "poor", and each is assigned a probability.

The programme finds the best match for your version of the question and parrots the most likely answer. This can happen in milliseconds, creating the illusion of a meaningful conversation. 
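A toy sketch of that matching step, in Python with wholly hypothetical questions and answers: there is no understanding here, only keyword overlap against questions that have been asked before, returning whichever canned answer scored best in the past.

```python
import string

# Previously asked questions mapped to their most "successful" canned answers
# (all entries are hypothetical).
FAQ = {
    "what time is it": "It's 9:41 AM.",
    "will it rain today": "There's a 20% chance of rain today.",
    "play some music": "Playing your favourite playlist.",
}

def words(text):
    """Lower-case the text, strip punctuation, and split into a word set."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def best_answer(question):
    # Score every known question by keyword overlap; the best match "wins".
    scored = [(len(words(q) & words(question)), a) for q, a in FAQ.items()]
    score, answer = max(scored)
    return answer if score > 0 else "Sorry, I didn't catch that."

print(best_answer("Do you know what time it is?"))  # parrots the clock answer
```

Real assistants use far larger models and ranking signals, but the principle is the same: match, rank, parrot.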

No Intelligence Involved

We don't need Alan Turing and his Turing Test to see that all AIs are profoundly "not human". Ask Alexa "Can you pass the Turing Test?" and "she" says "I don't need to—I'm not pretending to be a human".

Ask the Google Assistant the same question and "she" says "I don't mind if you can tell I'm not human. As long as I'm helpful, I'm good!"

This frankness sounds like something a real person would say, but it was an engineer who programmed it, not something created spontaneously by the machine. Even IBM's WATSON supercomputer, the machine that beat the human champions of the game show "Jeopardy!", was not intelligent by any stretch of the imagination.

WATSON was an amazingly expensive and labour-intensive associational database, carefully crafted to interpret the subtleties of puns, double entendres, and mixed metaphors. It was stoked with data that reflected typical "answers" the gameshow regularly provided and then programmed to convert these clues into a "question" that would reveal the relationship between the clues. The premise was simple, although the execution was costly and difficult.

Still, it was a small but significant step on the road to making AI possible. Earlier efforts, like IBM's Deep Blue, were glorified adding machines, but they advanced computer chess so far that programs on store shelves can now easily beat Deep Blue. Subsequent "precursor AIs", like AlphaGo, have made advances in machine learning that will eventually help us reach true AI.

Where We Are Today

The truth is that all companies have Big Data—usually in the form of records and information that go unused because they're unorganised and stored in too many diverse locations.

We began with rigid Data Warehouses that were orderly but only accessible via tools like SQL (Structured Query Language). It was hard to see relationships unless you were very skilful at phrasing the questions properly and had the right databases available at the time you posed the question. You were obliged to pull in multiple tables to pose a question if you expected a meaningful answer.
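Here is a minimal illustration of that multi-table burden, using Python's built-in sqlite3 module (the tables and figures are invented for the example):

```python
import sqlite3

# Two toy warehouse tables (names and data are hypothetical).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada', 'London'), (2, 'Grace', 'New York');
    INSERT INTO orders VALUES (1, 1, 25.0), (2, 1, 40.0), (3, 2, 15.0);
""")

# Even the simple question "how much has each city spent?" already requires
# a join across two tables; phrase it badly and the answer is meaningless.
for city, total in db.execute("""
    SELECT c.city, SUM(o.total)
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.city
"""):
    print(city, total)  # London 65.0 / New York 15.0
```

Every extra table multiplies the ways a question can be phrased wrong, which is exactly the rigidity the next generation of tools tried to escape.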

Hadoop came along with the concept of Data Lakes, which aggregate data from many disparate sources, such as client records and e-documents, and add image recognition, video, audio, web-scraping, and much more. It was quite flexible compared to SQL, so the data became more generally useful for human enquiries. Still, without a good sorting/interpreting algorithm, a Data Lake remains useless for AI.

Neural Networks arose to help cope with these shortcomings. Loosely designed to emulate the biological neural networks found in animals and humans, these systems can associate disparate data. They are generally very low on "rules", so the programme is free to compare all sorts of things that are seemingly unrelated.
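For a sense of what such a network actually is, here is a minimal sketch (in Python, with NumPy assumed available): a tiny two-layer net learns XOR, a relationship that neither input explains on its own, which is a toy version of "associating disparate data" with very few built-in rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: neither input alone predicts the answer; the net must associate them.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                      # plain gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network's prediction
    g_out = (out - y) * out * (1 - out)      # output-layer error signal
    g_h = (g_out @ W2.T) * h * (1 - h)       # backpropagated to hidden layer
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Nothing in those thirty lines understands anything; the "association" is just weights settling where the errors are smallest.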

These artificial Neural Nets can find subtle relationships because they are unconstrained. It's akin to a scientist daydreaming about making a better underarm deodorant while fiddling with a ballpoint pen and suddenly thinking, "What if I made an oversized ballpoint pen with a big roller?", thereby inventing the roll-on deodorant (which actually happened: Helen Barnett Diserens did it in 1952).

Being free to make unconventional associations makes it a better system. Once tied to real AI, it may connect the knowledge of a veterinarian in Poland, an atomic physicist in France, and a scuba diver in Pakistan to explain how to make Warp Drive possible. AI, when supplied with the totality of our knowledge, will find answers that have been right in front of us all along, in pieces no single human could connect.

Machine Learning

Researchers once set a proto-AI programme to work on an ancient 8-bit videogame with no instructions except to operate the buttons and accumulate points. They left it overnight, and when they returned in the morning, it had mastered the game and was unbeatable. It didn't "want" to win; it just followed its rudimentary instructions. The researchers found they could do this with almost any game.

More recently, in a study of learning algorithms, a team allowed their programme to play a popular old 8-bit game called Q*bert. The AI found a glitch that no human had ever discovered. It simply played the same glitch over and over, mindlessly maximising the score and "rolling over" to zero again and again. It found an exploit and didn't want to do anything else (since it had no "wants", and wasn't intelligent).
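That behaviour is easy to reproduce in miniature. Below is a sketch of an epsilon-greedy, bandit-style learner in Python (the "game" is entirely invented): given nothing but buttons and a score, it converges on whichever action maximises points, glitch or not.

```python
import random

random.seed(1)

# A hypothetical "game" reduced to three buttons and a score. Button 2 is a
# glitch that pays out absurdly well, like the Q*bert exploit above.
PAYOFF = {0: 1.0, 1: 2.0, 2: 50.0}
value = {button: 0.0 for button in PAYOFF}  # agent's estimate per button

for step in range(1000):
    # Epsilon-greedy: usually press the best-known button, sometimes explore.
    if random.random() < 0.1:
        button = random.choice(list(PAYOFF))
    else:
        button = max(value, key=value.get)
    reward = PAYOFF[button] + random.gauss(0, 1)     # noisy score from the game
    value[button] += 0.1 * (reward - value[button])  # incremental value update

print(max(value, key=value.get))  # settles on the glitch button: 2
```

The agent never "wants" the glitch; pressing button 2 is simply where its arithmetic leads.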

The Takeaway

Unless programmed to do so, AIs will never want to "Kill all humans!" Barring some fantastic biomechanical innovation, they will probably never "want" anything or "feel" anything at all. What they can do is behave ethically, but only if we make an effort to program them with unbiased, pure, ethical standards and paradigms.

Many people characterise current AI as a hoax. In reality, it is merely a reflection of scientists talking about potential, enthusiasts extrapolating about those possibilities, and then naïve media reporting these predictions as if they were foregone conclusions.

This isn't malice on the part of companies trying to trick you into using their products. They've been misinformed or misguided, too, by promoters misappropriating terminology and equating Machine Learning with AI. 

Except for a few scientists doing significant research in the area, very few people truly understand what AI is. Now you are somewhat better informed than the average person…so spread the word. And though it is possible sooner, it's probably best not to expect the first basic signs of true AI before 2030.
