There is No Such Thing as AI
"...a river suffers because reflections of clouds and trees are not clouds and trees."


If you live in the industrialized world, hardly a day goes by without someone talking about Artificial Intelligence (AI). It has spawned multibillion-dollar industries spanning out in every direction, from finance to medicine, toys to space travel, and cybersecurity to predicting elections.

However, despite this global effort to make AI as ubiquitous as toasters, I must, as a computer scientist, say quite simply that there is, in fact, no such thing as AI.

On the most basic level, a computer is simply a metal container holding wires, silicon chips, glass and other materials connected in a way to detect, move and store electrical charges in the form of signals. The signals themselves have no inherent intelligence, and no matter how sophisticated the programming language and logic, at the end of the day it is an oxymoron to say the machine is intelligent, or that it can learn or think, because it can't. All the machine can actually do is detect, count and sort signals.

Equations based on Bayes' Theorem, named after Thomas Bayes (1701-1761), the famed British theologian and mathematician who discovered a way to use probabilities of prior events to make mathematical guesses, a method known as "probabilistic inference," along with other highly sophisticated probabilistic formulas, can be coded as instructions for the electrical signals to compute potentialities. But by definition, they do not equal intelligence.
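To make the point concrete, here is a minimal sketch (in Python, with numbers invented purely for illustration) of the kind of Bayesian update such formulas encode. Note that the machine is doing nothing but arithmetic on stored values:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# The numbers below (a 90%-sensitive test for a condition with 1%
# prevalence) are invented stand-ins for any real data.

def posterior(prior, likelihood, evidence):
    """Update a prior belief given the probability of the observed evidence."""
    return likelihood * prior / evidence

p_condition = 0.01                    # P(H): prior probability
p_pos_given_condition = 0.90          # P(E|H): likelihood
p_pos = 0.90 * 0.01 + 0.10 * 0.99     # P(E): total probability of a positive test

print(round(posterior(p_condition, p_pos_given_condition, p_pos), 4))  # 0.0833
```

However long a chain of such updates becomes, each step is this same mechanical multiplication and division of signal counts; no step introduces understanding.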

So, what exactly do people mean when they say "artificial intelligence"? As precise as we are with our technical prowess, we must demand a higher level of precision in the words we use to describe such things, the meanings these words acquire when put next to other words, and a way to discern, rather than conflate, axioms with manufactured, makeshift marketing.

Consider the history associated with the notion of AI. Starting with the ancient Greeks more than 2,000 years ago, Aristotle laid the foundation by inventing what was called "syllogistic logic," the first formal deductive reasoning system. Deductive reasoning was a critical cornerstone for computing, making it possible for programmers, eons later, to write computer code, i.e., instructions that tell the hardware how and where to move signals from one place to another. For example:

_____________________________________________________________

"If what's in the bucket of electrical signals called X is greater than what's in the bucket of electrical signals called Y, execute a set of instructions to count and store a bucket of electrical signals called Z.

If what's in the bucket of electrical signals called X is less than or equal to what's in the bucket of electrical signals called Y, execute a set of instructions to count and store a bucket of electrical signals called Q."

_____________________________________________________________

This kind of logic can be "nested," meaning the results of computer instructions like Z and Q in the example above can be put inside other computer code or instruction sets for more complex calculations that make it appear as though the machine is "thinking."

But what is actually happening is that the computer is simply executing instructions that "tell" the electrical charges which "bucket" to go to, based on syllogistic reasoning rules rooted more than 2,000 years in the past.
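The bucket-sorting instructions above can be sketched in real code; the values and the operations filling buckets Z and Q here are invented stand-ins for whatever instruction sets a real program might contain:

```python
def route(x, y):
    """Compare two stored values and execute one branch or the other.

    The machine "decides" nothing: it detects which comparison holds
    and moves signals into the corresponding bucket.
    """
    if x > y:
        z = x + y    # stand-in for the instructions that fill bucket Z
        return ("Z", z)
    q = x - y        # stand-in for the instructions that fill bucket Q
    return ("Q", q)

print(route(7, 3))   # ('Z', 10)
print(route(2, 5))   # ('Q', -3)
```

Nesting simply means feeding one such result into another comparison; the depth of nesting changes the complexity of the output, not the nature of the process.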

Fast forward roughly 1,800 years past Aristotle and the ancient Greeks to the year 1515, when Leonardo Da Vinci presented one of the first known robots, the "Walking Lion," as a gift to the King of France. This wasn't Da Vinci's first attempt at imagining machines that could mimic people by mechanically moving and talking on their own. As early as 1495, the famous Renaissance man produced drawings for other automata, including the "Mechanical Knight," which could bend its legs, move its arms and hands, turn its head, open its mouth and, using internal mechanisms, even "talk."

According to the NASA scientist Mark Rosheim, who built a working modern-day model of the Mechanical Knight, Da Vinci's "programmed carriage for automata was ... the first known example in the story of civilization of the programmable computer."

Fast forward another 100 years or so to the 1600s, when the famed mathematician, artist and genius of another sort, René Descartes, put forth the notion that the bodies of animals were nothing more than complex machines, and Blaise Pascal, another French scientific prodigy (after whom a programming language is named), built one of the first mechanical digital calculating machines in 1642.

However, in 1994, about 300 years after Descartes and Pascal, the world-renowned neuroscientist Antonio Damasio published a seminal work called "Descartes' Error," in which he showed that Descartes' most famous premise, the notion that the core of our being is logic and reason, captured by the iconic phrase "I think, therefore I am," was, at best, misleading. Instead, Damasio demonstrated that when it comes to human beings, the aphorism closest to describing who we are would be more accurately stated as "I feel, therefore I am."

By the first half of the 20th century, Bertrand Russell and Alfred North Whitehead had published Principia Mathematica, which revolutionized the formal logic born more than two millennia before with Aristotle. Russell, Ludwig Wittgenstein, and Rudolf Carnap went on to lead philosophy into the logical analysis of what they interpreted as "knowledge."

Moving further forward, the definition of "knowledge" itself morphed along with each new iteration of computational milestones, all claiming that the invention itself, in one form or another, represented or mimicked the creation of knowledge, a fatal flaw in the logic and reasoning of that time.

This was ultimately one of the main reasons the term "AI," and the research behind it, trailed off in the 1980s. It stemmed from this most fundamental "logic error": the definition of "intelligence" had to be changed constantly in order to make any sense when associated with a machine.

The renowned English computer scientist Alan Turing came up with the Turing Test as a way around defining "machine intelligence": a test which, in essence, could only be 'passed' when people believed the machine was another person.

However, a less well-known but more powerful test of whether a machine could reasonably be considered intelligent came in the form of a thought experiment devised by the American philosopher John Searle in the 1980s. In what became known as "The Chinese Room," Searle argued that there can be no such thing as "understanding," let alone intelligence, in a machine. He succinctly recounted the argument in 1999:

  • “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”
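Searle's room can itself be sketched as a trivial program: a lookup table that maps input symbols to output symbols without ever representing what either means. The phrases and rules below are invented for illustration only:

```python
# A toy "Chinese Room": the rule book pairs input symbols with output
# symbols. The program never represents meaning; it only matches shapes.

RULE_BOOK = {
    "你好吗?": "我很好。",        # "How are you?" -> "I am fine."
    "天气好吗?": "天气很好。",    # "Is the weather good?" -> "The weather is fine."
}

def chinese_room(symbols):
    """Follow the rule book; return a stock apology for unknown input."""
    return RULE_BOOK.get(symbols, "对不起。")   # "Sorry."

print(chinese_room("你好吗?"))   # 我很好。
```

Scale the table up and speed the matching up, and the room can pass ever harder tests; at no point in that scaling does understanding appear.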

A number of counterarguments, or replies, such as the Systems Reply, the Robot Reply and the Brain Simulator Reply, have since been made to challenge Searle's reasoning.

Thus far, though, every one of these counterclaims has been deflated, and the Chinese Room remains, albeit contentiously, the gold-standard argument for why the machine can never be "intelligent."

So while the term "AI" has now been dragged out of its historical context, dusted off and resurrected, made shiny and new again by faster mechanics, larger digital storage capacities and vast sums of money spent spreading false innuendo about computer capabilities, and has become a multibillion-dollar business, we must remember that it is simply a marketing twist, one that takes advantage of distorted meaning and abject abuse of language to convince the mass population that the machine is doing something other than detecting, counting and sorting electrical signals, when in fact it is not.

Nothing has changed since the days when both the term and the flawed logic behind "AI" were mothballed.

Moreover, it is of great and urgent importance that we correct the notions conjured up by the terms "AI," "deep learning," "cognitive computing" and other meaningless inferences about what the machine can do, as these inane contortions of cultural communication are changing the very nature of worldwide economics and the critical decisions that power brokers are now making about the future of our shared humanity.

Remember the following to keep yourself, and the machines we call computers, in perspective:

Over 100 years ago, Einstein made one of the greatest leaps in physics when he predicted the existence of gravitational waves, at a time when not one single silicon chip existed on planet Earth.

Using purely his human imagination and deep-seated passion for wonderment, for being alive, he constructed thought experiments, like a man flying atop a rocket ship zooming through space and time, to think through and develop his breakthroughs.

It wasn't until 100 years later that Einstein's human sense and sensitivity to the natural world, and the equations that flowed from his direct and felt experiences, led today's box builders to construct a machine that could detect empirical "proof" that gravitational waves do in fact exist. In other words, it took our latest and greatest technology 100 years to catch up to Einstein's imagination.

The lesson here: we should never forget that it is, and always will be, human wonderment without restriction on imaginings, coupled with our direct and unmediated experience of the world through all of who we are as integrated creatures, born into this world as changelings, instantaneously transforming from fluid-based breathers to air-based landlubbers, that allows us to experience the emergence of enlightened insights.

My suggestion: don't let modern-day tech company executives (snake-oil salesmen, in many ways) fool us into thinking that computers can "think," "learn" or "teach" us anything.

It’s not possible because the machinery inside the metal container doesn’t “know” anything . . . it’s just a box.

To help our planet, the people and all the living creatures with whom we share this fantastic home of ours, for our children’s sake, for the sake of future human beings, for the sake of the living environment itself, we need to joyfully reacquaint ourselves with direct human experiences, authentic relationships with each other and with the world which sustains us.

My call to action: let us welcome in a new "Golden Age of Imagination," one in which the metal and glass boxes we have idolized for too long are mainly kept inside the tool boxes where they belong, and we the people reclaim our truly authoritative stance on Mother Earth.














Dr. Karen Sobel Lojeski

Founder / CEO, Executive Advisor and Coach, Bestselling Author, Global Thought Leader, Motivational Speaker


As always Carl, your thoughts are much appreciated. I would add that it is the trade-offs of time and experience that are just as toxic to human development as the appearance or practice of assigning intelligence to machines. When we stop doing work that seems "basic" when looked at from a "mechanical" perspective instead of a human one, we fall into the trap of trading away the development and practice of critical knowledge formation, the very foundation of our understanding and experience. When we start to forget what we know due to atrophy caused by the non-use of lessons learned, or even worse, never learn the basics at all (which leads to the state of not knowing what we don't know), our authoritative stance erodes, precariously trying to balance itself on shifting sands instead of securely standing on solid ground. Warm regards, K

Carl Eneroth

Stories that unite us | Film and Podcast | Executive Education | Documentaries | Powered by Sthlm Social Innovation Lab (SSI Lab)


Interesting as always Karen. It seems that powerful algorithms may not pass the Turing Test, but they do the job on a practical scale, as if they were humans, and thus replace us. In that sense, it does not matter whether AI is intelligent like a human or not; it just acts as if it were, and that is enough to make the difference.
