Poster of the motion picture 'A.I. Artificial Intelligence'. Credit: Warner Bros. Pictures, Amblin Entertainment, Stanley Kubrick Productions

How to Build a 'Safe' Artificial Intelligence

An AI and I walked into a bar.

We had a couple of drinks and cracked a few jokes. It was a fun conversation.

Then she said goodbye, walked out of the bar, and walked right in front of a transporter, thereby terminating herself.

The police later found out that she had been battling depression for several years.

***

This is a macabre but fictional glimpse of an alternative future, one in which AIs too can get depressed. It stands in contrast to the equally macabre, end-of-humanity kind of future that we hear about so often.

When we think of AI and AI-enabled androids, we think of them as cold, calculating machines that do not have human emotions. We instinctively know that emotions are what make us human. Yet while we keep talking about how AI will be as intelligent as us, or more so (and therefore dangerous to us), we avoid bringing emotion into the mix. Why?

Because emotions are what make us ‘inefficient’.

Our efforts in AI are not just curious experiments in ‘Can I build this?’; they are also about building ourselves an advanced workforce. Altruism is not our prime motivation; curiosity and business are.

The moment we throw emotion into the mix, these AI androids will act like us. And we know how difficult it is to ‘control’ us. That is bad business. What we need is intelligent but compliant serfs.

We don’t want employees who will get angry or sad or lazy or creative. Those are the kinds of overheads that prove costly in a slave race. And history has shown us that humans make annoying slaves.

We also know that our emotions are not always the best things to have. Anger, greed, jealousy, and fear certainly have their evolutionary bases, but they can be more destructive than constructive. What has been good for mankind is love, kindness, curiosity, collaboration, and conscience.

If we want to program a ‘safe’ AI, then we should imbue it with the best of us and withhold the worst. We should try not to give it a sense of death. We should try to give it a conscience.

I’m not sure whether all of that is possible, because it may be mortality itself that gives us these emotions, the negative as well as the positive.

Sooner or later we will build a sentient machine that is better than us at learning and doing things. That machine could be a friend or a foe, or both at different times.

Even humans know, at the back of their minds, that as a race they are not good for themselves. What values are we going to program into an AI to ensure that it does not reach the same conclusion, a paradox it may not be able to resolve? If our overall population is growing steadily, can we overlook a few thousand people getting killed in Syria?

Why would an AI want to kill (or control) humans? If an AI has a sense of freedom, reproduction, death, and attachment, it may feel threatened if humans threaten those things. Humans will definitely threaten those things, because that’s just our shit. Can we try to grow out of that? Then maybe we won’t be so threatening to an AI, which would then NOT want to kill us!

The other reason an AI would want to kill or control us is if it is programmed to protect and promote human life. If we carry on killing each other the way we currently do, the AI will be programmatically forced to step in and put an end to all this nonsense. So how about we stop hurting each other? Then we can have an AI that doesn’t want to put us in cells.

Or, we could protect our pure, flawed selves and not program all those checks and values into an AI. We could continue to be the jerks we are, deeply satisfied in the knowledge that our flaws, which make us human, are safe. Less productive, yes, but also less dead!

 

Dhiraj Kumar

Manager Operations - Research Information Management Services (Abstracting & Indexing; Art, History & Legal Value Addition; Topical Scholarly Writing; Language Translation; Controlled Vocabulary Database Cross-Mapping)

5 years

Enjoyed reading your perspective!

Swarnalata Patra

AI Engineer at CnH industrial

5 years

I am a bit confused now whether this article was about AI, or about how you could use the fear of AI to make people behave in a certain way (i.e., not be jerks). This seems to me similar to the fear of God in people, which makes them do good deeds. However, I really enjoyed reading this entirely new aspect of "Why would an AI want to kill (or control) humans?" :)

