Principles That Lead To The Ethics Of Artificial Intelligence


What good is technology if it doesn't take care of the well-being of humans? In the field of technology and research, bodies like the IEEE and several non-profits have ensured, from time to time, that an "ethics" framework is in place before any technology is mass adopted. Products based on neural networks, machine learning, computer vision, and natural language processing existed even before Artificial Intelligence (AI) became commoditized. The breathtaking landscape of AI is solving multiple problems, yet the corporate world has pushed the envelope too far.

The idea of putting this article out is to help leaders and industry veterans enforce and ensure that their teams abide by an ethics framework when building Artificial Intelligence based products and solutions.

Experts from around the globe have put together a document called "Ethically Aligned Design" (EAD1e). I am breaking it down into simpler pieces to ease the development of AI products and services, and to ensure that humans genuinely flourish.

Below are the six principles of the ethics framework for AI:


Principle 1: Human Rights 


Human rights are traditionally defined as "rights inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status. Human rights include the right to life and liberty, freedom from slavery and torture, freedom of opinion and expression, the right to work and education, and many more."

A rubric based on known human-rights violations should be used to test AI systems in simulation, to ensure no such violations occur. Typically, this can be done by a deep learning system and improved upon with human feedback.

Thus, per the ethics framework, an intelligent and autonomous system shouldn't violate human rights, dignity, privacy, or freedom.
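The rubric-and-feedback loop described above can be sketched in code. Everything below is a hypothetical illustration: the rubric categories, trigger phrases, and function names are invented for this example, and a real system would use a trained classifier rather than keyword matching.

```python
# Illustrative rubric: category names and trigger phrases are placeholders.
VIOLATION_RUBRIC = {
    "privacy": ["home address", "medical record"],
    "discrimination": ["deny based on race", "deny based on gender"],
}

def audit_output(text, rubric=VIOLATION_RUBRIC):
    """Return the rubric categories a model output may violate."""
    lowered = text.lower()
    return [
        category
        for category, triggers in rubric.items()
        if any(trigger in lowered for trigger in triggers)
    ]

def review_with_feedback(outputs, human_labels, rubric=VIOLATION_RUBRIC):
    """Collect outputs the rubric missed but human reviewers flagged,
    so the rubric can be extended over time."""
    return [
        text
        for text, flagged in zip(outputs, human_labels)
        if flagged and not audit_output(text, rubric)
    ]
```

The human-feedback function surfaces the rubric's blind spots: anything a reviewer flags that the rubric passed becomes a candidate for a new rubric entry.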

Principle 2: Well-Being

Per the WHO, well-being is "a state in which every individual realizes his or her potential, can cope with the normal stresses of life, can work productively and fruitfully, and can make a contribution to her or his community."


So, an intelligent and autonomous system should always act with "well-being" as the top priority, using the most widely accepted metrics that represent well-being.
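One simple way to operationalize "well-being as the top priority" is a weighted composite of normalized metrics that the system uses to rank candidate actions. The metric names and weights below are illustrative placeholders, not the "widely accepted" metrics the text refers to.

```python
# Placeholder metrics and weights, invented for illustration.
WEIGHTS = {"health": 0.4, "autonomy": 0.3, "community": 0.3}

def well_being_score(metrics, weights=WEIGHTS):
    """Weighted average of normalized (0-1) well-being metrics."""
    missing = set(weights) - set(metrics)
    if missing:
        raise ValueError(f"missing metrics: {sorted(missing)}")
    return sum(weights[k] * metrics[k] for k in weights)

def prefer_action(candidates, weights=WEIGHTS):
    """Pick the candidate action whose predicted metrics score highest."""
    return max(candidates, key=lambda c: well_being_score(c["metrics"], weights))
```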


Principle 3: Data Ownership (Agency)


GDPR, as a regulatory framework, addresses this principle quite well. Data agency demands that individuals be able to own and fully control their data. Autonomous and intelligent technology should assess data-use requests from external parties and service providers. Such technology provides a form of digital sovereignty.
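Assessing data-use requests against user-granted consent can be sketched as a deny-by-default purpose check, loosely modeled on GDPR's purpose-limitation idea. The field names and purposes here are hypothetical, not drawn from any real API.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """A user's consent: which purposes they have explicitly allowed."""
    user_id: str
    allowed_purposes: set = field(default_factory=set)  # e.g. {"analytics"}

def assess_request(consent: ConsentRecord, requester: str, purpose: str) -> bool:
    """Grant a third-party data-use request only if the user consented
    to this exact purpose; everything else is denied by default."""
    granted = purpose in consent.allowed_purposes
    # A real system would persist an audit entry here instead of printing.
    print(f"{requester} -> {consent.user_id} [{purpose}]: "
          f"{'granted' if granted else 'denied'}")
    return granted
```

The deny-by-default stance is the design point: a purpose the user never mentioned is refused, rather than assumed acceptable.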

Principle 4: Effectiveness

Every technology should demonstrate effectiveness, measured by a set of meaningful metrics and KPIs. Many government and non-government standards and quality-assurance bodies define the effectiveness of systems and technologies. By identifying the right thresholds for risk tolerance, they ensure that a system doesn't pose undue risk, and they also guard principles 1, 2, and 3.


In this process, the AI's creator and operator should build the system so that it can provide evidence of its effectiveness and of the relevance of its recommendations and actions to their purpose.
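Threshold-based risk gating can be sketched as a simple pre-deployment check. The metric names and threshold values below are invented for illustration; real thresholds would come from the relevant standards body and domain requirements.

```python
# Hypothetical thresholds, not taken from any standards body.
THRESHOLDS = {"accuracy": 0.95, "false_positive_rate": 0.02}

def within_risk_tolerance(measured: dict, thresholds: dict = THRESHOLDS) -> bool:
    """Pass only if accuracy meets its floor and the error rate
    stays under its ceiling; fail closed otherwise."""
    if measured["accuracy"] < thresholds["accuracy"]:
        return False
    if measured["false_positive_rate"] > thresholds["false_positive_rate"]:
        return False
    return True
```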

Principle 5: Transparency

Most current AI-driven systems don't give their end consumer a window into the recommendations made and actions taken by the AI. Be it an autonomous vehicle or a healthcare bot, everything is equally critical for an end consumer. Thus, the AI system should be designed so that every recommendation made, insight generated, and action taken by the AI is traceable.


One way to achieve this is a decentralized, blockchain-driven audit trail in which a smart contract executes after each AI stage.
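The core idea behind such an audit trail is a tamper-evident chain of entries, each one hashing its predecessor. Below is a minimal, single-node sketch of that hash-chaining idea; a real decentralized deployment with smart contracts is far more involved, and the entry fields here are illustrative.

```python
import hashlib
import json

def append_entry(trail, stage, payload):
    """Append an audit entry linked to the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = {"stage": stage, "payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    trail.append({**body, "hash": digest})
    return trail

def verify_trail(trail):
    """Recompute every hash in order; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in trail:
        body = {k: entry[k] for k in ("stage", "payload", "prev_hash")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

Because each entry commits to the previous one, silently editing or deleting any recommendation or action invalidates every later hash, which is what makes each AI stage traceable after the fact.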

Principle 6: Accountability

Principles 4 and 5 enable us to hold the creator of the AI system accountable. The goal is a traceable rationale for every decision made, without ambiguity. Remember, a "fake news" article published by an unaccountable AI can cost many lives, while an accountable "fake news" detection AI can save a country!


A probability-based system built with minimal research shouldn't be applied to critical verticals unless the owner/creator of the AI takes complete accountability. The AI should be built on an understanding of causation, not mere correlations between historic actions and the metrics.

The six fundamental principles are covered above, but there are a couple more recommendations:

Recommendation 1: Awareness of Misuse

To quote J. Robert Oppenheimer: "We knew the world would not be the same. A few people laughed; a few people cried. Most people were silent." He said this in the context of the first nuclear weapons test.

If the creator of a technology is not aware of its possible misuse, or can't guard the intelligent or autonomous system against all possible risks and abuse, then they shouldn't operationalize such a system. As the owner or creator of an AI system, ensure that you know the risks it poses and that the system is not misused.

Recommendation 2: Competence Support

Each AI system requires the right skills and knowledge to be run safely and effectively. The creators of these systems should specify the necessary competencies for the operators of the AI, and the system should be designed so that operators are required to adhere to them.

I would request every leader out there who is trying to change the industry by leveraging artificial intelligence to advise their teams to build AI systems responsibly, using this framework as a reference.


Happy building responsible AI!


Vijay Ram Surampudi

AI Consultant, Gen AI @ Google Cloud Consulting

5y

Very good article! Much needed component of AI development

Rahul Khode

Cloud Practice Head / Director - Digital Transformation | Generative AI | Open AI | Microsoft Azure | AWS | GCP | IIOT | Analytics | Blockchain | ML

5y

Just how we comply with the "security framework" for building software applications, there should be a wider acceptance of complying to the "ethics framework" for building Artificial Intelligence-based products/solutions. Great writeup Sridhar Seshadri. Very thought invoking!
