Artificial Super Intelligence, fact, fantasy ...or justifiable fear?

In a recently published monograph, researchers at the University of Surrey reported breakthrough progress in the development of nanoscale microarrays "small enough to record the inner workings of primary neurons". The upshot of the study, according to Science Daily, is a leap forward in the integration of man and machine. The Science Daily review goes on to say: "Machine enhanced humans -- or cyborgs as they are known in science fiction -- could be one step closer to becoming a reality, thanks to new research".

While the advent and development of technology have been the principal game changers for humankind, we now appear to be heading toward an existential threat few may yet fully appreciate. Of course, we've all grown up with Arnold Schwarzenegger's Terminator franchise as the default techno-nightmare. But our world's undoing needn't come from shiny cyborgs or Skynet.

On this topic, I happen to side with Elon Musk, Bill Gates, Stephen Hawking, Sam Harris and others: for humanity, Artificial Intelligence is likely to represent the ultimate 'Pandora's Box'. Several drivers reinforce this concern. While some may disagree, recent indications suggest Moore's Law shows no sign of slowing. Ever-tinier nanoscale processors are being conceived and tested. And this is where things get squirrelly, fast.
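For a sense of what Moore's Law compounding implies, here is a minimal back-of-the-envelope sketch. The starting figure is the Intel 4004's transistor count; the strict two-year doubling cadence is the textbook simplification, not a measured schedule:

```python
# Moore's Law as naive compounding: transistor counts doubling
# roughly every two years (the textbook simplification).
base_count = 2_300          # transistors in the Intel 4004 (1971)
years = 2021 - 1971         # fifty years of doubling
doublings = years // 2      # one doubling every two years
projected = base_count * 2 ** doublings
print(f"{doublings} doublings -> ~{projected:,} transistors")
# ~77 billion -- the right order of magnitude for today's largest chips
```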

As one moves closer to atomic-scale operations, Newtonian physics ends and quantum weirdness begins. Quantum entanglement, for example, occurs when pairs or groups of particles interact such that the quantum state of each cannot be described independently of the state of the others, even when the particles are later separated by vast distances. Measuring one entangled particle appears to fix the state of its partner instantaneously -- faster than light could travel between them -- although, strictly speaking, no usable information can be transmitted this way. Such notions strain credulity while simultaneously opening the door to mind-blowing possibilities, particularly in the realm of AI.

It is all but inevitable that a key threshold will soon be crossed. Vernor Vinge christened this boundary the singularity. Some predict it will happen in fifty years; others say it could occur in the next five or ten. But when (not if) it happens, the AI will undergo a remarkable transformation called recursive self-improvement. Simply stated, the AI will cease requiring outside input from humans. Instead, it will begin 'self-evolving' -- improving and expanding its own intelligence -- and this is predicted to occur at an exponential rate. Here, a useful analogy posited by Sam Harris comes to mind. Imagine gathering one hundred of the smartest humans who have ever lived. Put them in a room and turn them loose solving equations and creating new technologies. Keep the room filled with genius-level thinkers for the next ten or twenty thousand years and imagine what they might create.

By some estimates, a recursively self-improving AI could achieve roughly the same intellectual output in the span of just two weeks. Now stop for a moment and let that run around in your head: twenty thousand years of invention, development and breakthroughs, occurring in TWO WEEKS. Here's another data point to consider while you're processing that tidbit. Among leading experts in neurobiology, bioinformatics, and brain-machine interfaces, the consensus opinion is that there is nothing intrinsically special about the wetware in our craniums that precludes recapitulation in silico. Whatever intelligence and self-awareness we've developed as sentient mammals could, in principle, be reproduced in non-living AI -- which undercuts the notion that self-awareness and consciousness require a biological substrate.
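The arithmetic behind that claim is worth making explicit. The numbers here are purely illustrative, taken from the analogy above rather than from any measurement:

```python
# Implied speedup: compressing 20,000 years of collective human
# invention into two weeks of machine time.
years_of_progress = 20_000
weeks_per_year = 52
ai_weeks = 2

speedup = years_of_progress * weeks_per_year / ai_weeks
print(f"Implied speedup: ~{speedup:,.0f}x")
# ~520,000x faster than human-pace thought
```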

Finally, an analogy comparing super-AI, humans and ants is apropos. Walking down the street, you might not go out of your way to crush an anthill. But if the anthill happens to stand in the way of road construction, it will be destroyed along with its inhabitants, without the slightest hesitation or regret. So, if and when we birth an artificial intelligence that has as much in common with us intellectually as we have with the ants in our garden, what should we suppose happens when this super-AI perceives our agenda as even slightly incompatible with its own?

Scary stuff, at least if you ask me...


Geno Marcovici, Ph.D., DABAAHP
