Why it's so hard to think straight about AI - Part 2

Disclaimer: The opinions expressed in this article are those of the author and do not represent the views of his employer or any other organization, institution or individual.

In Part 1 we explored themes related to the onset of an AI singularity and some of the arbitrary and irrational ways in which we tend to think about AI. In Part 2 we will try to understand human intelligence better, in the hope of revealing some of the ways in which our brains are similar to computers but also profoundly different from them. This should prepare us for the possibility that AIs will, at some point in the future, be seen as minds in their own right.

An AI can never surpass human intelligence... or can it?

This is stated by many as if it were some kind of axiom, too self-evident to even think carefully about. And the intuition behind it is so deep-rooted that we need to go back to the very origins of human intelligence to see why it is wrong.

First we must jettison any vague, mystical notions about what human intelligence is "for", i.e. what its purpose is. Human intelligence is an adaptation just like every other trait - heart, lungs, kidneys, limbs, eyes etc. Every adaptation is a problem-solving tool which, in one way or another, proved beneficial to the survival and reproduction of our ancestors, helping them navigate a hostile environment as hunter-gatherers. The human brain, being the greediest organ in the body (accounting for 2% of body weight but burning 20% of available calories), would have had to afford its owner a significant survival advantage to compensate.

While it is obvious that sensory inputs (visual, auditory, olfactory...) need a brain to interpret them, in humans the stream of sense data is unusually rich - we can see some 7 million colours, perceive depth from two-dimensional retinal images, distinguish subtle variations in pitch, tone and timbre (think music), sense minute temperature differences by touch and so on. Along with long-term memory, all this sense data goes into maintaining and updating a model of what's out there in the world, and into making decisions in light of past experience.

The point of this discussion is to emphasize that human intelligence must be seen against the backdrop of our evolutionary past if we are to understand its function and, from there, its limitations when put to present-day problems that are quite different from what our 50,000-year-old genome codes for.


A Machine for Jumping to Conclusions

If evolution by natural selection had a motto, it might be this: "if it ain't broke, don't fix it". All the unique selection pressures operating in the prehistoric environment of our ancestors have shaped our minds in unique ways. The more we subject the human mind to objective, third-person inquiry, the more it turns out to be full of quirks, biases and idiosyncrasies. In engineering slang it would be called a kluge - a clumsy, cobbled-together solution that nonetheless works effectively.

Daniel Kahneman, the Nobel Prize-winning psychologist, provides a fascinating account of the systematic cognitive biases that each one of us suffers from. For instance, studies find that when parole cases come up for hearing, the single strongest predictor of whether parole will be granted or denied is the amount of time since the judge last had a meal! This illustrates how susceptible the human brain is to priming - the effect of random and (seemingly) irrelevant environmental factors on decision-making and behaviour.

The human brain is lazy by default because it has a tiny short-term memory (try getting someone's phone number right the first time). It tends to arrive at conclusions with minimum effort, which in most cases means using minimal information. During times of stress our brains tend to be even lazier, and this can lead to suboptimal decisions delivered by automatic modes of thinking (what Kahneman calls "System 1").

Our brains also value coherence over accuracy. This explains why we regularly manufacture memories of things that never really happened, filling in details as we go along. It also manifests itself in the most pervasive bias of all - Confirmation Bias, the tendency to seek out facts that confirm one's existing beliefs while ignoring those that contradict them.


I think I'm gonna go with my gut on this one

It's becoming increasingly clear that the thing called intuition isn't all it's cracked up to be. Intuition is simply knowing something without knowing how you know it. In other words, it is a split-second computation done by your brain whose logical steps are opaque to (the conscious) you.

The seat of much of our intuition is the older, smaller part of the brain - the so-called "reptilian brain" that we share with our distant vertebrate ancestors. Working closely with it is the amygdala - the seat of fear, anxiety and aggression, which triggers emergency responses. In general, our intuition excels at the types of situations it evolved to address - the four F's (fight, flight, food and mating).

Our intuition has repeatedly misled us in the past. It tells us, for instance, that the Sun "rises" in the East and "sets" in the West, that a feather always falls slower than a stone, that time flows at a uniform rate for all observers, and so on. It is the hard-won triumphs of Science that challenge our "common sense" every now and then.

Unfortunately, magical notions about the power of human intuition tend to place it on a pedestal and lead to over-reliance on it even when better data (and better models) are available to base judgements on. Nowhere is this more evident than in the hubris of corporate leadership. A top-down decision-making culture, where recommendations from AI need to be "validated" by Men with Hunches (read: senior execs), is just a way of ensuring that AI never tells you anything you don't already know.


Silicon brains versus mushy brains

We are finally ready to make a meaningful comparison between human and artificial intelligence. Let's start with the most superficial difference. Artificial brains are made of "hard" stuff (silicon and metal) while animal brains are made of "wet" stuff (protein and water). This fact is of no consequence, so let's take it off the table.

Human intelligence has been shaped by eons of blind trial and error (or "generate and test" if you prefer) that have led to the accumulation of design elements beneficial to survival and reproduction. By contrast, machine intelligence is a product of human intelligent design, a deliberate process.

The human brain has a massively parallel architecture - that's how you can drive a car, listen to the stereo and talk to your co-passenger at the same time, and do all of this effortlessly (though you can be conscious of only one activity at a time). It also has highly differentiated regions responsible for specialized tasks, e.g. the visual cortex processes data coming in through the eyes.

And finally, according to the latest models in cognitive science, the human brain works on the principle of Bayesian Inference. That means it combines "bottom-up" sense data with "top-down" expectations of what is out there. Sometimes incoming sense data can even be "auto-corrected" based on prior expectations - check out the Checkerboard Illusion if you don't believe me.
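To make this concrete, here is a minimal sketch of Bayesian cue combination, assuming Gaussian beliefs for simplicity. The fuse function, the numbers and the distance example are all illustrative, not a model from the cognitive science literature. The posterior is a precision-weighted blend of prior expectation and noisy observation, which is why a strong enough prior can "overrule" the senses.

```python
# A minimal sketch of Bayesian fusion of top-down priors with bottom-up
# sense data, assuming Gaussian beliefs. All names and numbers are
# illustrative.

def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Combine a prior N(prior_mean, prior_var) with an observation
    whose likelihood is N(obs_mean, obs_var). The posterior is also
    Gaussian, and its mean is a precision-weighted average of the two."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            obs_precision * obs_mean)
    return post_mean, post_var

# Expectation says an object is ~2 m away; blurry vision reports ~3 m,
# but with high noise. The confident prior pulls the estimate its way.
mean, var = fuse(prior_mean=2.0, prior_var=0.1, obs_mean=3.0, obs_var=0.5)
print(mean, var)  # posterior mean ~2.17 m: the senses get "auto-corrected"
```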

Waiting for Homo Deus

There are a few very successful AI paradigms that exploit our knowledge of biological evolution (Genetic Algorithms) and neuroscience (Neural Networks). After all, Nature has been at it for 4 billion years - do we really think we can do better?
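For a flavour of the first paradigm, below is a toy genetic algorithm - a sketch of blind "generate and test", where the fitness function, population size and mutation rate are all made-up illustrative choices rather than anything from a production system.

```python
# A toy genetic algorithm: blind variation plus selection, with no
# "designer" in the loop. Everything here is illustrative.
import random

TARGET = 42  # the "environment" rewards genomes near this value

def fitness(genome):
    return -abs(genome - TARGET)  # higher is better

def evolve(generations=50, pop_size=20, mutation_scale=3.0):
    # Start from a random population of candidate "genomes".
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with random mutation, then test again next round.
        children = [g + random.gauss(0, mutation_scale) for g in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges near 42 through trial and error alone
```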

Actually, perhaps we can. An AI, having "grown up" in a sterile environment without the burden of frequent life-or-death decisions, would be immune to the various cognitive biases described earlier. It would also have no limit on short-term memory (or "working memory").

Next, learning acquired through experience cannot be quickly transferred from one human being to another. But imagine this: a self-driving car accumulates 10,000 hours of driving experience on city streets and the next generation of cars is simply “born” with the knowledge!
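As a sketch of what such "inherited" learning could look like in software, assuming a PyTorch-style workflow: the DrivingPolicy model, the file name and the layer sizes below are placeholders, not a real self-driving stack. The point is simply that learned parameters, unlike human experience, can be copied verbatim into a new agent.

```python
# A minimal sketch of "inheriting" learned experience by copying model
# parameters between generations. Illustrative only; not a real system.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder architecture: 16 sensor inputs -> 3 action scores.
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                                 nn.Linear(32, 3))

    def forward(self, sensors):
        return self.net(sensors)  # e.g. steer / throttle / brake scores

# Generation 1 accumulates its 10,000 hours of driving experience
# (the training loop is omitted here)...
gen1 = DrivingPolicy()
torch.save(gen1.state_dict(), "gen1_policy.pt")

# ...and generation 2 is simply "born" with that knowledge.
gen2 = DrivingPolicy()
gen2.load_state_dict(torch.load("gen1_policy.pt"))
```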

On the other hand, an image-recognition AI needs to be shown hundreds of images of apples before it starts recognizing a picture of an apple, but a baby can recognize an apple after seeing just a few. How on earth is this accomplished? The short answer is that we don't know yet. The human brain, with over 85 billion neurons, each with thousands of connections to other neurons, is something that no computer currently comes close to simulating.

And that brings us back to where we started - the AI singularity. Till such time as those grand prophecies are borne out, we humans can continue to pretend that we are The Masters of the Universe.


