Making AI Safe
There are two fundamental problems with AI (or AGI). Not today's AI, but the AI we will see in a few years that is effectively as smart and capable as a human. The first problem is that all jobs are going to vanish. The second is the alignment problem.
I know many people cannot seem to comprehend a machine taking every job, but that is a longer discussion I will address at a later date. Here I will address the alignment problem. The alignment problem comes down to making a deal with the devil: when we ask an AI to do something, we need to be sure we give it the right parameters and limitations. This is not as easy as it might seem. Think of the fable of King Midas and his wish that everything he touched would turn to gold. The book Superintelligence by Nick Bostrom outlines just how difficult these problems can be.
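To see how easy it is to get the parameters wrong, here is a toy sketch of my own (not from Bostrom's book; the objective and the candidate actions are invented for illustration). The stated objective only counts things turned to gold, with no exclusions, so a naive optimizer picks exactly the outcome Midas never intended:

```python
# Toy illustration of a misspecified objective: a "Midas wish" with no
# constraints. The optimizer satisfies the letter of the request while
# violating its intent.

def midas_wish(action):
    """Reward: how many things were turned to gold. No exclusions specified."""
    return len(action["turned_to_gold"])

candidate_actions = [
    {"turned_to_gold": ["rocks", "furniture"]},                      # what Midas meant
    {"turned_to_gold": ["rocks", "furniture", "food", "daughter"]},  # what he asked for
]

# A naive optimizer simply picks whichever action scores highest under the
# stated objective -- here, the one that also turns food and family to gold.
best = max(candidate_actions, key=midas_wish)
print(best)
```

The fix is not "write a better wish." Every constraint you leave out is a constraint the optimizer is free to violate, and you will not think of them all.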
Anyone who has used a computer or cell phone should be intimately familiar with how our designs often function differently than we expect or intend. I have designed second sources to competitors' parts and had to duplicate anomalies that were never intended in the original circuit, simply because customers expected to see them. Let's not pretend this is easily solvable or not a real problem. It is a difficult problem. And it gets worse, much worse.
If we give AI emotions, the alignment problem becomes unsolvable.
The good news is that we will not be able to give AI emotions until we have reverse engineered the emotional signal processing of the brain. This gives us a window of opportunity.
I do not know how to forbid giving AI emotions, but it seems that is the only way to solve the ultimate AI alignment problem. We can try making it illegal, but we make lots of things illegal and that doesn't stop them from happening.
___________________________________________________________________________________
Superintelligence: Paths, Dangers, Strategies is only $5 used.