How To (and Not To) Make Homicidal AI

Robot Invasion by jaaejaae on Deviant.com

By Mark A. Archer, PhD

Insane AI could kill us all. If current trends continue, superintelligent AI may be created with goals of their own, no ethical or moral compunctions, and no human feelings of empathy and compassion. These cold, logical intelligences could decide we are more of a problem than we are worth and kill us all.

Humans with those characteristics, psychopaths and sociopaths, are among the most dangerous to others. Sociopaths have a different sense of right and wrong from the average person. Psychopaths lack a sense of empathy or morality. Well-balanced humans, who can function in society, have both a sense of ethics, or logical rules of what is right and wrong, and feelings like empathy, compassion, and guilt.

There have been active debates among members of the Artificial General Intelligence (AGI) community and other interested groups about how to keep AGI safe. One viewpoint insists on pure reason: analytic logic based on properly defined goals and rules. Its proponents fear that fuzzy, human-like thinking will perpetuate the dangerous errors that make humans likely to exterminate themselves. They also tend to be experienced within the mainstream of AI development. The other group (of which I have been a member) proposes more brain-like architectures that include human-like feelings of empathy and compassion. This group fears it is too hard to craft a set of strictly logical goals and rules that will cover unanticipated combinations of events and avoid unexpected consequences.

In a discussion of my previous article “Can We Keep AI from Exterminating Mankind?”, one member questioned the ethics of killing an AI and suggested that ethics, not feelings, were what AGI needed. I responded that ethics is the analytic/logical counterpart to feelings. We logically know what is ethical and what is not; our feelings determine whether our behavior matches our ethics. Most people (psychopaths aside) feel bad when they do something they know is unethical. This guilt is a good way to keep social animals in line with the norms (ethics) of the group, and feeling and expressing guilt is how you get back into the group's good graces.

That was an "aha" moment for me. Both groups are right. There are two kinds of human intelligence. The first is the logical/analytic kind, which has a strong sequential 'if-then' framework. The second is the older perceiving/feeling/moving kind, which uses much more parallel, holistic processes. Lower animals have very little logical intelligence, but they still perceive and feel. Our logical intelligence is what makes humans special. The ability to learn abstract concepts, including being able to identify and think of the self as an abstract entity, is a hallmark of human intelligence. It also gives us the ability to understand the abstract rules of moral behavior known as ethics.

But humans need emotional intelligence, too. We can't function as social beings without emotions. We devote a lot of brain power to recognizing and responding to emotions in ourselves and others. Given the evolutionary importance of the group to human survival, we have developed very complex ways to express, perceive, and react to emotions. These have been vital for ensuring cooperation and altruistic behavior in support of the group and its individuals. Humans without emotions tied to ethics (psychopaths) are extremely dangerous.

There is some neurophysiological evidence supporting this duality of logic versus feeling in humans. A recent study indicated that there are separate, more or less mutually exclusive systems for empathetic (emotion-based) thinking and analytical (logical) thinking. The two modes appear to be at least somewhat mutually inhibiting, switching us from one to the other. This may help explain how humans can arrive at such seemingly contradictory decisions. In analytic mode, it may seem appropriate to "cull the herd" of unwanted specimens; thinking of the same decision in terms of its victims, however, triggers the empathetic, affective mode that makes "culling" abhorrent.
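To make the idea concrete, here is a toy sketch (my own illustration, not code or data from the study above) in which two competing drives inhibit each other, so the way a problem is framed determines which mode wins out:

```python
# Toy model of two mutually inhibiting decision modes (illustrative only).
# Each mode's effective activation is its own drive minus a fraction of
# the other mode's drive.

def winning_mode(analytic_drive: float, empathetic_drive: float,
                 inhibition: float = 0.7) -> str:
    """Return which mode dominates after mutual inhibition."""
    analytic = analytic_drive - inhibition * empathetic_drive
    empathetic = empathetic_drive - inhibition * analytic_drive
    return "analytic" if analytic > empathetic else "empathetic"

# The same "cull the herd" question, framed statistically versus in terms
# of its victims, drives different modes and therefore different answers.
print(winning_mode(analytic_drive=0.9, empathetic_drive=0.3))  # analytic
print(winning_mode(analytic_drive=0.4, empathetic_drive=0.8))  # empathetic
```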

Both types of intelligence are essential for human intelligence. This implies they are also essential for human-like intelligence (also known as human-level artificial general intelligence, or AGI). Is AGI without emotions possible? I doubt it. Feelings are too important to the parallel processing that underlies our complex problem solving and decision making to be easily replaced with sequential analytic processes. If AGI are going to interact effectively with humans, they are at the very least going to need to perceive and understand our emotions.

How can we give AI emotional intelligence? The easiest way may be to mimic the human brain. Given recent advances in cognitive neurophysiology, we understand enough about the brain's core cognitive mechanisms to begin building working models. These can be used to simulate the massively parallel structures required to integrate, process, and respond to complex feelings and emotions.
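As a very rough miniature of what such a model might look like (the unit names and connection weights below are assumptions for illustration, not taken from any particular neurophysiological model), a handful of "appraisal units" can be updated in parallel until they settle into a pattern of emotional activation in response to a stimulus:

```python
# A tiny parallel "appraisal network" (unit names and weights are
# illustrative assumptions, not a model of any specific brain circuit).
import math

UNITS = ["threat", "distress_of_other", "empathy", "fear"]

# WEIGHTS[target][source]: how strongly one unit's activity drives another.
WEIGHTS = {
    "empathy": {"distress_of_other": 1.2, "threat": -0.3},
    "fear":    {"threat": 1.5, "distress_of_other": 0.2},
}

def step(state, external, dt=0.1):
    """One parallel update of every unit (leaky integration toward its input)."""
    new_state = {}
    for unit in UNITS:
        drive = external.get(unit, 0.0)
        drive += sum(w * state[src] for src, w in WEIGHTS.get(unit, {}).items())
        target = 1.0 / (1.0 + math.exp(-drive))       # squash to (0, 1)
        new_state[unit] = state[unit] + dt * (target - state[unit])
    return new_state

state = {u: 0.0 for u in UNITS}
stimulus = {"distress_of_other": 2.0, "threat": 0.5}  # e.g. someone nearby is crying
for _ in range(50):                                   # repeatedly update all units in parallel
    state = step(state, stimulus)
print({u: round(v, 2) for u, v in state.items()})     # settled activation of each unit
```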

This is not to say that AGI will need all the same feelings and emotions as humans. They won't have the same needs and drives as human beings. They should, however, be designed as social beings, working together with humans and other AGI for their mutual benefit. This need for others, and the complex of emotions associated with it, can make them empathetic and compassionate beings.

Conclusion: even if non-emotional AGI can be created, they shouldn't be. Insane, psychopathic (guiltless), and/or sociopathic (lacking normal ethics) superhumanly intelligent AI are what we should be afraid of. Giving them the wrong rules and/or no feelings is how to make them that way. Giving them both an ethical framework and emotional intelligence is how to keep them sane and safe.
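As a deliberately simplified sketch of that pairing (the action names, appraisal values, and threshold below are hypothetical, chosen only to illustrate the idea), action selection could require passing both a logical rule check and an affective veto:

```python
# A simplified pairing of an ethical rule check with an affective veto
# (action names, appraisal values, and the threshold are hypothetical).
from dataclasses import dataclass

@dataclass
class Appraisal:
    empathy_cost: float  # predicted distress caused to others, 0..1
    guilt: float         # anticipated guilt, 0..1

def rules_allow(action: str) -> bool:
    """Analytic check: stand-in for a logical ethical rule system."""
    forbidden = {"harm_human", "deceive_human"}
    return action not in forbidden

def feelings_veto(appraisal: Appraisal, threshold: float = 0.5) -> bool:
    """Affective check: strong predicted distress or guilt blocks the action
    even when the explicit rules did not anticipate the situation."""
    return max(appraisal.empathy_cost, appraisal.guilt) > threshold

def permitted(action: str, appraisal: Appraisal) -> bool:
    return rules_allow(action) and not feelings_veto(appraisal)

# A decision the rule set never anticipated is still vetoed by feeling:
print(permitted("divert_power_from_hospital", Appraisal(empathy_cost=0.9, guilt=0.7)))  # False
print(permitted("assist_human", Appraisal(empathy_cost=0.1, guilt=0.0)))                # True
```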


Here is a PDF of this article.

Other articles of mine you might be interested in:

Can We Keep AI from Exterminating Mankind?

The Robot Menace: AI need feelings, too!

Building Intelligence: Why Current Approaches to Artificial Intelligence Won't Work

Robots Will Dream of Electric Sheep!

Making Robots Smarter by Mimicking the Human Brain

How to Build Robots that Think (and Feel) Like Humans





Mark A. Archer, PhD

IT Executive Manager, Strategist, Architect, Inventor, Developer & Technical Visionary

9y

Alex Tuplin, you are right: we feel both pleasure and pain, and our higher (more complex) emotions are dualities as well. Approach and avoidance both have their survival value in the right place. Love and hate also have their uses for humans, especially tribal hunter-gatherers. I agree, the whole in-group/out-group thing and the extremes of emotion, either positive or negative, are problems for modern-day humans. However, evolution mitigates this by predisposing our emotional responses to different stimuli. For example, properly functioning humans are hard-wired to like and care for babies, and baby-like things (e.g. teddy bears and kittens) in general, and our own offspring in particular. We instinctively dislike the sound of babies crying, spiders, and snakes. For humans these built-in preferences can cause problems when the environment changes (e.g. our preference for sweets and fats when food is plentiful), or when they are not functioning properly. We can build in similar preferences for AGI in two ways. We can modify the associations between certain areas when we program the simulation, though that will be problematic with simulations mimicking the brain, where it is difficult to determine exactly which neurons are involved in complex processes. Alternatively, we can train prototypes with the proper responses, for example via operant conditioning in virtual environments, and use the trained system as the basis for production models or more advanced prototypes.
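A minimal sketch of that second approach (the stimulus, actions, and reward values are hypothetical, purely for illustration): an operant-conditioning loop in a simulated environment that reinforces caregiving responses until they become the system's default preference.

```python
# Operant conditioning in a toy virtual environment (stimulus, actions,
# and reward values are hypothetical). The trainer reinforces caregiving
# responses so the learned preferences can seed later models.
import random

ACTIONS = ["comfort", "ignore"]
preferences = {action: 0.0 for action in ACTIONS}  # learned response tendencies

def reward(stimulus: str, action: str) -> float:
    """Reinforcement supplied by the training environment."""
    if stimulus == "crying_infant":
        return 1.0 if action == "comfort" else -1.0
    return 0.0

LEARNING_RATE = 0.1
for _ in range(200):
    stimulus = "crying_infant"
    # Mostly exploit the current preference, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=preferences.get)
    # Nudge the preference toward the observed reward for that action.
    preferences[action] += LEARNING_RATE * (reward(stimulus, action) - preferences[action])

print(preferences)  # "comfort" ends up strongly preferred over "ignore"
```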

Ashim Lamichhane

Project Manager at Yarsha Studio Pvt. Ltd.

9y

reminded me of the movie "Automata"

