Chicken Little's Guide to the A.I. Killer Robot Apocalypse
Photo courtesy of gratisography.com


I am not an expert on Artificial Intelligence. In fact, I know next to nothing about it. But I just happened to go down the rabbit hole of "researching" A.I. on the Internet two years ago when it was topical, and for some reason wrote it all down in a fevered rush. No one read it, of course.

I just remembered that article today, and thought it would be interesting to go back and see how well it has aged (or not). It turned out not quite as bad as I had expected.

And so I present to you a slightly-better-edited version below. Warning: still a long read.


Photo by Gabriella Clare Marino on Unsplash

Our fear of automata started over a hundred years ago

The first Terminator movie was released in 1984. It was not the first film to feature killer robots, but it was perhaps one of the most memorable. The movie envisioned Skynet as a US military supercomputer that became self-aware, and promptly set out to destroy humanity. So, it was a fantasy manifestation of the collective human fear that we are going to be made extinct by soulless automata.

This fear had manifested itself within the human psyche as far back as 1863, when someone wrote to the editor of the Christchurch Press a long sulky letter titled “Darwin among the Machines”. In hindsight, the author was surprisingly (but also annoyingly) prescient, given that the typewriter was not even invented until 1868. He said: “We are ourselves creating our own successors… we are daily giving them greater power… that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages, we shall find ourselves the inferior race.”


Photo by Silver Ringvee on Unsplash

Do people really think that there will be a robot apocalypse?

Modern luminaries have made predictions, one way or another, about the future of Artificial Intelligence.

The most vocal doomsayer is probably Elon Musk, ironically the same person who gave us a self-driving car. He said: "With Artificial Intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out."

Other equally-famous people disagree. Facebook founder Mark Zuckerberg said: "A.I. is going to unlock a huge amount of positive things, whether that’s helping to identify and cure diseases, to help cars drive more safely, to help keep our communities safe." Some helpful souls on the Internet have rather drolly pointed out that given his “robotic” performance at a 2018 congressional hearing, he's probably just doing some pre-marketing for the android master race.

The actual experts point to something called Moravec's Paradox, which describes the strange situation in which machines can solve difficult problems like determining your physical location via GPS, while struggling with relatively simple things that any toddler can do, like pointing out cats in busy photographs. The existence of Moravec's Paradox makes it highly unlikely that Skynet will spontaneously appear and kill everyone - at least, not before it gets through the billions of fluffy kitten photos on the Internet first.

So we think we fear autonomous killer robots built by a rogue sapient A.I., but perhaps that is neither a rational thought nor a rational fear.


Photo by Possessed Photography on Unsplash

Artificial Intelligence really is quite clever, but only for specific things

A.I. is already capable of doing many amazing things, albeit in a narrow sense. Self-driving cars already cause fewer accidents than human-driven cars. But to do so, autonomous cars have to overcome two difficult things: how to perceive the world around them with fidelity as high as that of the human senses we take for granted, and how to make sense of all that data.

The use of Generative Adversarial Networks (GANs) allows for applications previously thought impossible, such as the creation of deepfake videos. Before deepfakes, using archival footage and body doubles to recreate an actor’s presence required the use of a special effects studio and millions of dollars. But with A.I., laypeople are able to download a small desktop utility to create authentic-looking fake videos in a matter of hours. All that is required is an Internet connection, a decent graphics card, and a ready supply of existing photos and videos to train the A.I. on.
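
For the curious, the "adversarial" part of the name is easier to feel than to explain. Here is a minimal toy sketch in Python (assuming PyTorch is installed; this is an illustration of the idea, not a deepfake pipeline): a generator learns to fake samples from a simple one-dimensional distribution while a discriminator tries to catch it in the act.

```python
# Toy GAN sketch: a generator learns to imitate samples from N(4, 1.5),
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = 4.0 + 1.5 * torch.randn(64, 1)      # "real" data
    fake = generator(torch.randn(64, 8))       # generated data

    # Discriminator step: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call its fakes "real".
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# If training goes well, generated samples cluster around the real mean of 4.
print(generator(torch.randn(1000, 8)).mean().item())
```

Scale that same tug-of-war up to convolutional networks and a few thousand photos of a face, and you have the rough shape of the deepfake utilities described above.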


Photo by Maximalfocus on Unsplash

But even for narrow applications, the power of A.I. is already pretty darn scary

Much has been made of the ability of DeepMind’s AlphaGo A.I. to defeat world champion grandmasters in the ancient game of Go.

It is kind of a big deal.

Go is estimated to have up to 10 to the power of 170 legal board positions - a Trillion Trillion Trillion Trillion Trillion Trillion Trillion Trillion Trillion Trillion (one followed by 120 zeroes) times the number of positions in chess. Due to the complex trade-offs between long-term strategy and short-term tactics required in Go, it is said to be a game best played from the gut, using the "human" attributes of experience, creativity and intuition.
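
For anyone checking the arithmetic: the "one followed by 120 zeroes" figure falls out if you take the commonly quoted estimate of roughly 10 to the power of 50 legal chess positions, a number the comparison implicitly assumes:

```latex
\frac{10^{170} \ \text{(Go positions)}}{10^{50} \ \text{(chess positions)}} = 10^{170 - 50} = 10^{120}
```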

DeepMind’s AlphaGo forged a new winning path nineteen measly years after IBM's Deep Blue computer beat grandmaster Garry Kasparov in chess. AlphaGo was guided by A.I. neural networks to help navigate the much larger search space. In 2016, AlphaGo made history by overwhelming 18-time world champion Lee Sedol with a score of four games to one.

One year later, the next version of AlphaGo, called AlphaZero, broke even more new ground. AlphaGo had been trained on 30 million human moves from public game servers, but AlphaZero was given nothing beyond the rules of the game, and then made to play against different versions of itself. By this process of trial-and-error and survival-of-the-fittest alone, AlphaZero managed to teach itself to become the strongest A.I. player of all time in three different games - Go, chess, and Japanese shogi - by large margins.

This astonishing accomplishment deserves spelling out: AlphaZero took only nine hours of self-training to master chess, twelve hours to master shogi, and thirteen days to master Go.

There are three things about AlphaZero that are particularly impressive. First, it had the power and flexibility to master different games with no outside knowledge. Second, it was able to surpass the peak of human Go ability within 13 days, for a game that humans have been playing for as long as 4,000 years. Third, it beat everyone with haunting pizzazz — its playstyle was described by human grandmasters as "superhuman", "incredibly creative" and even "chess from another dimension".
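
None of AlphaZero's real machinery fits in an article like this, but the self-play idea itself can be sketched in a few lines of Python. The toy below is a hypothetical illustration, not DeepMind's method: a simple look-up-table agent teaches itself the take-away game "Nim-21" purely by playing against itself, knowing nothing but the rules (players alternately remove one to three counters from a pile of 21, and whoever takes the last counter wins).

```python
# Self-play sketch: a tabular agent learns Nim-21 from the rules alone
# by playing thousands of games against itself.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(pile, move)] -> learned value of the move
ALPHA, EPSILON = 0.1, 0.1       # learning rate and exploration rate

def pick_move(pile, explore=True):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(moves)          # occasionally try something new
    return max(moves, key=lambda m: Q[(pile, m)])

for game in range(100_000):
    pile, player, history = 21, 0, []
    while pile > 0:
        move = pick_move(pile)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player                      # whoever made the last move wins

    # Nudge every move made in this game toward the final result.
    for p, seen_pile, move in history:
        reward = 1.0 if p == winner else -1.0
        Q[(seen_pile, move)] += ALPHA * (reward - Q[(seen_pile, move)])

# With enough games it tends to rediscover the known strategy:
# always leave the opponent a multiple of four counters.
print({pile: pick_move(pile, explore=False) for pile in (21, 10, 6, 5)})
```

AlphaZero swaps the little look-up table for deep neural networks and a tree search, but the loop is the same in spirit: play yourself millions of times, keep what wins, discard what loses.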


Photo by Kristina Tamašauskaitė on Unsplash

The Acorn of Apocalypse falls...

In 1950, Alan Turing (of Benedict Cumberbatch fame) published a seminal paper that attempted to figure out whether machines were capable of thinking. Since the act of thinking is hard to define, Turing instead asked the question: “can a machine trick us into thinking that it is thinking like a human?”

In 2014, a chatbot named Eugene Goostman was judged to have passed the Turing Test by convincing 33% of its judges that it was a 13-year-old boy — presumably with sullen, monosyllabic answers, and demonstrating an unhealthy obsession with computer games and dubious websites. Although it was not exactly the moment of conception for a world-ending murder-robot A.I., it was still a monumental step for A.I. development.


Photo by Kamesh Vedula on Unsplash

...and turns into a Technological Singularity?

If A.I. ever becomes self-aware and decides to kill us all, it will most likely be via the spontaneous and immaculate manifestation of a Technological Singularity. This will probably look like a very short period of very rapid technological growth beyond our ability to control or comprehend.

A lot of humanity’s collective knickers-twist about being replaced or destroyed by robots can be explained by the innate tendency of our brains to extrapolate from existing information. The advancements in narrow A.I. frighten us because we think that they are an immediate and logical precursor to strong A.I., or even super A.I.

But is a Technological Singularity even plausible? Human cognition, or the act of thinking, spans a large range of activities, from ice-skating to formulating the laws of nuclear physics. We like to think that we have sapience, which is the state of being intelligent while having the ability to acquire wisdom. It is why we call ourselves Homo sapiens, although looking at current geopolitics this appears to be largely aspirational for a majority of the human race.

What, then, of A.I.? There is a view that narrow A.I. cannot make the leap and evolve into strongly-generalised intelligence. Self-aware A.I. would need the very human ability of “interacting with incomplete, potentially contradictory and noisy environments using finite computing time and resources.” Narrow A.I. is already superhuman at solving specific problems in controlled environments, but even AlphaZero would struggle to give you a polite answer if you asked it whether you look good in your favourite pair of kitten-print trousers.


Photo by Alec Favale on Unsplash

Do androids dream of electric morality?

Peeling back the veneer of our collective fears about an ascendant super A.I. god-mind, what lies beneath is perhaps our real worry. We do not fear a benevolent A.I. god as much as we fear a Skynet that is determined to put an abrupt end to human civilisation.

We can look to the real world to posit how such a newly-emergent A.I. will behave. In 2016, Microsoft launched its now-legendary "let's learn from the nice people on Twitter!" A.I. chatbot experiment. Barely 16 hours after Tay.ai went live, Microsoft had to shut it down when the Twitter community taught the AI to talk dirty, deny the Holocaust, hate feminists, and generally become a mouthpiece for racism and homophobia.

The tribal nature of human existence has caused the history of civilisation to be cluttered with endless conflict. We now live in a time where the very human, very ingrained need to stick to familiar people has scattered infighting into smaller-scale civil conflicts spread out across the world. Politicians have caught on to this basic tenet of human existence and have adopted ever more divisive rhetoric in order to win elections. I guess we should not be surprised by the cynical actions of political powermongers, much like how Microsoft should not have presumed upon the good behaviour of the average Twitter user.

Now imagine an infantile but self-aware A.I. stumbling into the vast malodorous plains of four billion indexed public webpages, plus the unfathomable depths of the un-indexed Deep Web. What will it find there, and what will it learn?

We have just started to discover that the unintended biases in the A.I.s we have built have real-world consequences. In many cases, the A.I. is only as intelligent as the data and methodology it has been raised on — for example, an algorithm used to help judges sentence criminals has allegedly overstated the risk of reoffending for racial minorities. While this claim of systematic bias has been challenged, it raises the question of whether black-box algorithms, by predicting recidivism and thereby leading to harsher sentences, inadvertently perpetuate the cycle of poverty and crime.

Most A.I.s are created with no inbuilt morality, much like other inanimate objects. But a super A.I. born into our deeply-flawed world has only the dominant human civilisation to turn to for direction. What will it make of the reckless actions of our generation that have pushed the Doomsday Clock to a mere two minutes to Midnight? We are already such bad parents to humanity; what more to a superintelligent and unpredictable A.I.?

And so, it may turn out that the things we truly fear are in fact the same old familiar flaws of humanity, laid bare and magnified by an unsympathetic automaton. We see in the mirror, darkly, our own distorted faces posing as the emotionless visage of an A.I. made flesh.


Photo by Possessed Photography on Unsplash

The Avenger-Free Endgame

With our smartphones, personal computing, social media, and online shopping (fine, let's call it the Internet of Things to boost readership), we may already be in the process of being assimilated by an early alpha version of the A.I. that will one day rule us.

There is no real motivation for us to hold back technological progress; at least we have a chance to end up as beloved pets to the ascendant A.I. God-Mind.

Skynet may be coming, or it may not be. The Technological Singularity is not yet a certainty, but it still exists as a distant creeping shadow on the dusty horizon as we silently scream in the fever dream of our narrow human condition.

Eventual integration into a super A.I. network hive mind may well prove to be our best hedge, for there is no diverse team of wisecracking superheroes to save us from ourselves, nor from the A.I. devil-child that we may one day create.
