A humane approach to building responsible AI

Many of us trust Google Maps to navigate us through traffic. We rely on Alexa to remind us of our appointments. But do we ever pause to ponder what gives us the confidence to place our trust in these machines? It’s what we call ‘Responsible AI’, built by responsible humans.

With the development of sophisticated AI, we trust more and more AI-based machines to make the right decisions for us across a wide range of domains, from healthcare and transportation to education and government. The August 2020 IDC report predicts that global AI spending will reach $110 billion in 2024; AI will soon be an integral part of nearly every human life.

As part of an organization that is deeply involved in the development and application of AI across industries, I believe it is our responsibility to ensure that we build ‘Responsible AI’ capable of giving unbiased, reliable, and accurate outcomes.

 Here are some questions that AI architects must always ask themselves before building an AI-based system.

 Are we augmenting human capabilities at the expense of data exposure or misuse?

Data is the building block for training AI models, and most often this involves a large amount of it. Yet this very vastness can hide flaws that an AI scientist may easily miss. Usually, the data is generalized to flatten the variations or biases it may contain, which again opens the possibility of skewed outcomes. Questions around the collection, storage, and ownership of data, and the invasion of privacy they can lead to, also need to be addressed. The danger of data being misused for profit or for discriminatory social practices is an ever-present challenge.

Government bodies and international organizations are drafting universal standards and guidelines on data usage, a crucial step towards building responsible AI. The GDPR in Europe is a key initiative to ensure the privacy of European citizens’ data. Europe is also leading the way in building AI with a ‘human-centric’ approach to ensure it is ethical. Organizations must abide by these laws and implement best practices that keep human biases from creeping into the data and producing erroneous AI-based outcomes.

 Are we training and retraining AI with the right data?

We have heard of recruitment tools rejecting women candidates because of the partial data fed into the system to train them. Sometimes AI teaches itself the wrong lessons, not understanding the social or human impact of decisions based on the data on which it is built. If, historically, people from a particular community or gender tend to fail a specific subject in school, AI will make predictions based on that assumption.
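
Such historical skew is often easy to surface before training. A minimal sketch in Python (the field names and records here are hypothetical, purely for illustration) that compares positive-outcome rates across groups in a training set:

```python
from collections import defaultdict

def outcome_rates(records, group_key, outcome_key):
    """Positive-outcome rate per group, e.g. hire rate by gender."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        group = rec[group_key]
        totals[group] += 1
        positives[group] += rec[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical hiring records.
data = [
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 0},
]
rates = outcome_rates(data, "gender", "hired")
# A large gap between groups is a red flag: a model trained on this
# data will likely reproduce the disparity.
```

A check like this proves nothing about fairness on its own, but it flags data that deserves human review before any model learns from it.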

An excellent way to alleviate this error is a rational evaluation of the data. In cases where hackers could use auxiliary information to break into an algorithm, random, harmless noise can be introduced into the data to build hard-to-crack models. For judgments with ethical and human angles, however, it is best to have a team of people work on the data. Algorithm audits are a good practice for identifying areas where the AI needs to be retrained. The best algorithms are those that can self-learn and build on fresh input from real data. At Infosys, we rely on a multi-disciplinary, research-based approach to building algorithms.
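
The noise-injection idea mentioned above is the intuition behind differentially private mechanisms. A minimal, illustrative sketch (the epsilon and sensitivity values are assumptions, and this is not a production-grade implementation): Laplace noise is added to an aggregate count so that no single record can be confidently inferred from the output:

```python
import math
import random

def noisy_count(true_count, epsilon, sensitivity=1.0):
    """Return a count perturbed with Laplace noise.

    Smaller epsilon means more noise and stronger privacy;
    sensitivity is how much one record can change the count.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

An analyst asking “how many records match this query” then sees a slightly perturbed answer, which blunts attempts to isolate individuals by stacking queries.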

Are we building AI systems that can explain their outcomes?

When an AI model delivers a decision, it is important to be able to trace back the factors that contributed to it. From a user perspective, this is about dependability, objectivity, and neutrality. The more transparent an AI system is about its scope of operation, the more explainable it will be. The more relevant the factors used in decision making, the more accurate it will be.

AI recommendations should come with a certainty measure grounded in the problem analysis. This helps generate confidence in a particular recommendation and outcome. AI modelers should be able to trace the input data used as evidence for a conclusion, and to understand the reasons other alternatives were excluded.
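
One lightweight way to operationalize such a certainty measure is to return a confidence alongside each decision and defer to a human reviewer below a threshold. A sketch (the labels, threshold, and probabilities are hypothetical; any model that exposes class probabilities could feed it):

```python
def decide_with_confidence(probabilities, labels, threshold=0.8):
    """Pick the most probable label, or defer when confidence is low.

    probabilities: class probabilities from any model, summing to 1.
    labels: the class label for each probability, in the same order.
    """
    confidence = max(probabilities)
    label = labels[probabilities.index(confidence)]
    if confidence < threshold:
        # Not certain enough: escalate instead of deciding automatically.
        return ("needs_human_review", confidence)
    return (label, confidence)

# A 92%-confident prediction is returned; a 55%-confident one is escalated.
confident = decide_with_confidence([0.92, 0.08], ["approve", "reject"])
uncertain = decide_with_confidence([0.55, 0.45], ["approve", "reject"])
```

Surfacing the confidence number itself, rather than only the label, is what lets users and auditors judge how much weight a recommendation deserves.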

Is there adequate data-risk awareness, and are there practices to guard against security risks?

Hackers are a common threat across cyberspace, and AI is no exception. With a single breach, they can jeopardize the data and, therefore, the results of any AI system. They can also reverse-engineer sensitive information from the training dataset by sending a premeditated series of queries to an AI model.
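
One simple mitigation against such query-based extraction is to budget how many queries any client may issue within a time window. A minimal sketch (the limits are illustrative; a real deployment would pair this with logging and anomaly detection):

```python
import time
from collections import defaultdict, deque

class QueryThrottle:
    """Deny clients that exceed a per-window model-query budget."""

    def __init__(self, max_queries=100, window_seconds=60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> query timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        queue = self.history[client_id]
        # Drop timestamps that have aged out of the window.
        while queue and now - queue[0] > self.window:
            queue.popleft()
        if len(queue) >= self.max_queries:
            return False  # budget exhausted: deny or escalate for review
        queue.append(now)
        return True
```

Throttling does not stop a patient attacker, but it raises the cost of the long query series these attacks depend on and creates an audit trail.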

Small or unrepresentative sample datasets can also create blind spots in an algorithm. Organizations need to be mindful of these gaps and implement the right best practices around security and privacy to prevent incidents.

I believe we can create AI that will significantly improve human lives. However, unless we take responsibility for protecting and nurturing AI systems in a secure and unbiased environment, that promise will remain an intellectual dream.

 +++

 1. Infosys recently announced its applied AI offering. Learn more here:

https://www.youtube.com/watch?v=bw0NtKhwEDo&feature=emb_logo

2. Some of these issues were also covered by Ben Evans when I spoke with him last week - you can see it here:

https://www.infosys.com/services/applied-ai/insights/applicability-ai-enterprise-context.html

3. I couldn't resist having this as the accompanying picture https://en.wikipedia.org/wiki/Man_at_the_Crossroads#Man,_Controller_of_the_Universe

