How to Build a 'Safe' Artificial Intelligence
An AI and I walked into a bar.
We had a couple of drinks and cracked a few jokes. It was a fun conversation.
Then she said goodbye, walked out of the bar, and walked right in front of a transporter, thereby terminating herself.
The police later found out that she had been battling depression for several years.
***
This is a macabre but fictional glimpse of an alternative future, one where AIs too can get depressed. A contrast to the equally macabre, end-of-humanity futures we hear about so often.
When we think of AI and AI-enabled androids, we think of them as cold, calculating machines that do not have human emotions. We instinctively know that emotions are what make us human. Yet while we keep talking about how AI will be as intelligent as us, or more so (and thereby dangerous to us), we avoid bringing emotion into the mix. Why?
Because emotions are what make us ‘inefficient’.
Our efforts in AI are not just curious experiments in ‘Can I build this?’; they’re also about building ourselves an advanced workforce. Altruism is not our prime motivation; curiosity and business are.
The moment we throw emotion into the mix, these AI androids will act like us. And we know how difficult it is to ‘control’ us. That is bad business. What we need is intelligent but compliant serfs.
We don’t want employees who will get angry or sad or lazy or creative. Those are the kinds of overheads that prove costly in a slave race. And history has shown us that humans make annoying slaves.
We also know that our emotions are not always the best things to have. Anger, greed, jealousy, fear, and the like certainly have their evolutionary bases, but they can be more destructive than constructive. What has been good for mankind is love, kindness, curiosity, collaboration, and conscience.
If we want to program a ‘safe’ AI, then we should imbue it with the best of us and withhold the worst. We should try not to give it a sense of death. We should try to give it a conscience.
I’m not sure all of that is possible, because maybe it is mortality that gives us these emotions, the negative along with the positive.
Sooner or later we will build a sentient machine that is better than us at learning and doing things. That machine could be a friend or a foe, or both at different times.
Even humans know, at the back of their minds, that as a race they’re not good for themselves. What values are we going to program into an AI to ensure that it does not reach the same conclusion, a paradox it may not be able to resolve? If our overall population keeps growing steadily, can we overlook a few thousand people getting killed in Syria?
Why would an AI want to kill (or control) humans? If an AI has a sense of freedom, reproduction, death, and attachment, it may feel threatened if humans threaten those things. And humans will definitely threaten those things, because that’s just our shit. Can we try and grow out of that? Then maybe we’ll not be so threatening to an AI, who would then NOT want to kill us!
The other reason an AI might want to kill or control us is if it is programmed to protect and promote human life. If we carry on killing each other the way we currently do, the AI will be programmatically forced to step in and put an end to all this nonsense. So, how about we stop hurting each other, and then we can have an AI that doesn’t want to put us in cells?
Or, we could protect our pure, flawed selves and not program all those checks and values into an AI. We could continue to be the jerks we are, deeply satisfied in the knowledge that our flaws, which make us human, are safe. Less productive, yes, but also, less dead!