The Case for Programming Morality

Artificial intelligence, while it has come a long way, is still in its infancy.

The first versions of artificial intelligence were single algorithms. They were well designed to give intelligent output in response to controlled inputs. Shown a photo full of different colors, an AI could spot the red or the blue.


As AI evolved, it turned into systems where a group of algorithms worked together to give an intelligent output. When analyzing a text document, one part would look for the meaning of a sentence using something like Word2Vec, while another would estimate the importance of words using something like a TF-IDF score. So now we have a group of algorithms all contributing to a more accurate answer.
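
To make the division of labor concrete, here is a minimal sketch of the word-importance piece, assuming scikit-learn is available; the toy documents are my own, and a Word2Vec-style model (e.g. from gensim) would supply the meaning piece:

    # Minimal sketch: one algorithm in a multi-algorithm pipeline.
    # TF-IDF scores word importance; an embedding model (not shown)
    # would capture sentence meaning. Illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the car stopped at the red light",
        "the driver crossed the solid yellow line",
    ]

    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)
    words = vectorizer.get_feature_names_out()

    # Print the highest-weighted (most "important") word per document.
    for i, doc in enumerate(docs):
        row = tfidf[i].toarray().ravel()
        print(doc, "->", words[row.argmax()])

A downstream component would then combine these importance scores with the sentence embeddings to produce the final answer.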


Moving forward, and thinking about how we as humans make decisions and arrive at answers, artificial intelligence will have to take another step: layering. Rather than various programs simply working together, there will be base decisions and operating rules upon which more and more specialized rules are built.


This type of structure will be especially important in building successful Artificial General Intelligence.

An example in autonomous driving

Let’s take driving as an example. In order to have an AGI able to drive a car, we need some rules for operating that vehicle. We can codify rules found in various state and federal government guidelines (a sketch of what that codification might look like follows the list below). Rules such as:

  • yellow line on the left
  • don’t cross a solid line
  • turn (or don’t turn) on red
  • etc.
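
As referenced above, here is a hypothetical sketch of what codifying such rules might look like: each rule becomes a simple predicate over the vehicle's perceived state. The state fields and the two rules shown are illustrative assumptions, not an actual driving stack:

    # Hypothetical sketch: traffic rules codified as predicates over
    # a perceived world state. Field names are illustrative.
    def violates_solid_line(state):
        """Don't cross a solid line."""
        return state["crossing_line"] and state["line_style"] == "solid"

    def violates_red_turn(state):
        """Turn on red only where permitted."""
        return (state["light"] == "red" and state["turning"]
                and not state["red_turn_allowed"])

    RULES = [violates_solid_line, violates_red_turn]

    def action_is_legal(state):
        return not any(rule(state) for rule in RULES)

    state = {"crossing_line": False, "line_style": "solid",
             "light": "red", "turning": True, "red_turn_allowed": True}
    print(action_is_legal(state))  # True: right-on-red is permitted here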

The middle layer — base conditions for driving

These all work together to help the vehicle navigate through its environment. Using a combination of image-driven deep learning to interpret what it sees, and codified rules to dictate how it acts, we get an AI-based driving system.

The top layer — special conditions driving

The next step, currently being worked on, is special-conditions driving. The rules used for driving in San Diego will differ from those for driving in Upstate New York after a snowstorm. You will brake, turn, and navigate differently. These are special-condition algorithms that will take over when specified conditions are met.
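
One way to picture these takeovers is a registry of (trigger, policy) pairs that is checked before the base behavior applies. The triggers, policies, and numbers below are hypothetical, a sketch of the pattern rather than a real control system:

    # Hypothetical sketch: special-condition policies take over when
    # their trigger matches; otherwise the base policy applies.
    def base_driving(state):
        return {"max_speed_mph": 65, "follow_distance_s": 2.0}

    def snow_driving(state):
        # Slower speeds and longer following distances on snow.
        return {"max_speed_mph": 35, "follow_distance_s": 5.0}

    def heavy_rain_driving(state):
        return {"max_speed_mph": 45, "follow_distance_s": 4.0}

    # (trigger predicate, specialized policy), checked in order.
    SPECIAL_CONDITIONS = [
        (lambda s: s["surface"] == "snow", snow_driving),
        (lambda s: s["rain_mm_per_h"] > 10, heavy_rain_driving),
    ]

    def pick_policy(state):
        for trigger, policy in SPECIAL_CONDITIONS:
            if trigger(state):
                return policy(state)
        return base_driving(state)

    print(pick_policy({"surface": "snow", "rain_mm_per_h": 0}))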

But how do we react when no special conditions are met, yet the circumstances have changed?

Algorithmic Layering

With special conditions and base conditions, we are beginning to see a layering of algorithms.

AI work and focus has centered on the middle layer for the last few years. As we have used it more and more, we have been adding special conditions. For example, San Francisco driverless cars worked well using the base conditions. However, as these cars are used in areas with adverse weather, we need more and more specialized algorithms.

Where we have not seen much work is on the layers beneath the base conditions. This is where I think the future of AI is heading, especially now that there is so much focus on decentralized trust technologies such as blockchain: a shareable, trustable base layer can be created and used. But more on that later.

We can ask ourselves: how deep should the layers go beneath the base conditions? What should the base layer of AGI be? What is the basis for the most fundamental decisions we make as humans?

I think this becomes clearer when considering the extremes of our decisions.

When your house is on fire, is your first priority self-preservation or the preservation of your kids? Both are noble, but any parent will tell you: save the kids. In fact, in that moment, a man who would normally move “automatically” toward self-preservation will instinctively save his children first. A machine has no instincts; its responses must be programmed. So what are we programming when we codify these?

When a car suddenly stops ahead of you and there is a person to your left, do you veer right, knowing you will hit cars and endanger yourself, or veer left and hit (most likely killing) the other person while keeping yourself safe?

These are real questions, and they must be addressed as we build out better AGI systems: systems that can react to input never seen before. In order to build effective AGI, we need to understand how we as humans make decisions given new input or unfamiliar circumstances.

Morality as the base layer of AGI

Some type of codified ruleset must be the base layer of Artificial General Intelligence. This ruleset should include preservation of humanity, then of community, and then of self. On top of that sit rules that govern human interactions around trust and property, to guard against theft and cheating.

Essentially, what we are codifying is our idea of morality. When conditions are at an extreme, AGI will fall back on a basic codified morality. Otherwise, the middle layers will direct decisions (think basic driving rules), and, when applicable, special conditions will take over (think rain and snow).
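
To make the layering concrete, here is a hypothetical sketch of that fallback order: special conditions are consulted first, then the base rules, and the codified morality layer answers only when the layers above it defer. All names and rules are illustrative:

    # Hypothetical sketch of algorithmic layering: each layer either
    # returns a decision or defers (returns None) to the layer below.
    def special_conditions_layer(state):
        if state.get("surface") == "snow":
            return "brake gently, increase following distance"
        return None  # no special condition matched; defer

    def base_rules_layer(state):
        if state.get("light") == "red":
            return "stop at the line"
        return None  # not covered by the codified traffic rules

    def morality_layer(state):
        # Last resort for never-before-seen input: humanity, then
        # community, then self, the ordering proposed above.
        if state.get("human_at_risk"):
            return "protect the human, even at cost to the vehicle"
        return "minimize overall harm"

    LAYERS = [special_conditions_layer, base_rules_layer, morality_layer]

    def decide(state):
        for layer in LAYERS:
            decision = layer(state)
            if decision is not None:
                return decision

    # A situation neither upper layer recognizes falls to morality.
    print(decide({"human_at_risk": True}))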

Decentralized Morality

Taking this a step further, a codified morality as the basis for algorithms could be dangerous. As each individual or company builds a morality base on which to run their artificial intelligence, those bases could differ significantly. Depending on corporate culture, individual contributors' biases, and the techniques used to create the algorithms, results could vary widely.

Beyond this, without knowing the morality structure beneath the systems we may one day use in our homes, cars, workplaces, and personal devices, we would be at risk of unknown consequences. Even data scientists know not to trust black boxes.

What is needed is:

  • transparency — ability to know the rules of morality
  • input — ability to contribute to these rules
  • consensus — ability to agree on these rules
  • access — ability to access the rules that have been agreed upon

Enter blockchain: a decentralized store of information which can be agreed upon, added to, and trusted. Blockchain could be the technology needed to store a codified morality on which all other algorithms are based and written.

Once on the blockchain, anyone will be able to see these rules and trust them. No one will “own” them, and the majority will be able to create them.
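
As a toy illustration of that shareable, trustable base layer, here is a hypothetical sketch of an append-only, hash-chained ledger of rules using only Python's standard library. A real blockchain would add consensus, signatures, and distribution, none of which appear here:

    # Hypothetical toy sketch: an append-only, hash-chained rule
    # ledger. Tampering with any entry breaks every later hash.
    import hashlib
    import json

    def entry_hash(prev_hash, rule):
        payload = json.dumps({"prev": prev_hash, "rule": rule}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    ledger = []

    def add_rule(rule):
        prev = ledger[-1]["hash"] if ledger else "genesis"
        ledger.append({"rule": rule, "hash": entry_hash(prev, rule)})

    def verify():
        prev = "genesis"
        for entry in ledger:
            if entry["hash"] != entry_hash(prev, entry["rule"]):
                return False
            prev = entry["hash"]
        return True

    add_rule("preserve humanity, then community, then self")
    add_rule("respect trust and property; no theft, no cheating")
    print(verify())  # True; altering any stored rule makes this False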

I see morality being key in creating larger and more accurate Artificial General Intelligence systems in the future. I think it will be interesting to see how blockchain can be used to codify rules and legal structures, and possibly even morality and human rights. As this is accomplished, we can then use these structures as a base for AGI.



This is just a high-level thought experiment on AGI and morality. I am not an expert in either, so I would love any feedback or ideas you have. You can contact me on Twitter @joshuamschultz.



Originally published at joshuaschultz.com on February 7, 2018.
