Can AI be Ethical?

In a previous article, I discussed how trust will be a key driver of widespread and faster adoption of artificial intelligence (AI). But how does an organization foster trust? In this piece, I take a closer look at that question by exploring how responsibility and ethics are fundamental to building confidence that AI is used appropriately and to the benefit of all.

Four core principles behind responsible, ethical AI

Let’s start with the basics. At the core of responsible, ethical AI are four basic principles.

1. The first is the elimination of bias. It’s critical that any organization using machine learning (ML) algorithms to make decisions be able to continuously monitor what’s happening to ensure there’s no intrinsic bias in the data used to train the algorithms. This is all about making decisions that are fair and just.

2. A related principle is the context of an AI’s decision. Take, for example, the issue of gender. When AI is assessing candidates’ qualifications for a particular job, gender data is typically excluded because it could lead to biased hiring decisions. But in some cases—for instance, a clothing retailer using AI to tailor marketing campaigns and messages to customers—gender is relevant and appropriate for the algorithm to consider.

3. The third principle is the “explainability” of how an algorithm arrived at its decision. An AI model can’t be a “black box” that spits out a verdict everyone is supposed to simply accept as correct. AI models do make mistakes, and it’s vital that a company be able to trace the process a model used to reach a decision, determine why it went astray, and make sure it doesn’t make the same mistake again.

4. Finally, there’s privacy and security. The more data that’s available to AI, the better its performance. But some of that data could be considered sensitive—for example, personal information about customers. People won’t trust AI if they don’t believe it’s using their data in a way that safeguards their information and doesn’t compromise privacy.
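To make the first principle concrete, continuous bias monitoring can start with simple fairness metrics computed over a model’s decisions. Below is a minimal, hypothetical sketch in Python: the demographic-parity check, the simulated loan decisions, and the alert threshold are all invented for illustration and are not drawn from any TCS or Google tooling.

```python
# Hypothetical sketch: monitor a model's decisions for group-level bias.
# Demographic parity difference is the gap between the highest and lowest
# positive-decision rates observed across groups (0.0 = perfectly equal).

def demographic_parity_difference(decisions, groups):
    """Return max(group rate) - min(group rate) for positive decisions."""
    tallies = {}  # group -> (total seen, positives seen)
    for decision, group in zip(decisions, groups):
        total, positives = tallies.get(group, (0, 0))
        tallies[group] = (total + 1, positives + (1 if decision else 0))
    rates = {g: pos / n for g, (n, pos) in tallies.items()}
    return max(rates.values()) - min(rates.values())

# Simulated loan approvals (1 = approved) for two applicant groups.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative alert threshold, not a standard
    print("Warning: review model and training data for bias")
```

In a production pipeline, a check like this would run on every batch of decisions, with alerts triggering a human review of the model and its training data.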

If these principles are not in place, a company faces considerable reputational, financial, and legal risk should its algorithms make biased, inaccurate, or otherwise harmful decisions.
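The third principle, explainability, can be approached with model-agnostic techniques. One widely used example is permutation importance: measure how much a model’s accuracy drops when a single input feature is scrambled. A feature whose scrambling barely moves accuracy had little influence on the decision. The toy model, features, and data below are invented for this sketch.

```python
# Hypothetical sketch of permutation importance for explainability.
import random

def toy_model(row):
    # Invented model: "credit score" (index 0) drives the decision;
    # "zip code" (index 1) is ignored entirely.
    return 1 if row[0] > 600 else 0

# Invented rows: (credit score, zip code, true label).
data = [(720, 10001, 1), (580, 10002, 0), (650, 10003, 1),
        (500, 10004, 0), (610, 10005, 1), (590, 10006, 0)]

def accuracy(rows):
    return sum(toy_model((s, z)) == label for s, z, label in rows) / len(rows)

def permutation_importance(rows, feature_index, seed=0):
    """Accuracy drop after shuffling one feature column across rows."""
    rng = random.Random(seed)
    column = [row[feature_index] for row in rows]
    rng.shuffle(column)
    permuted = []
    for row, value in zip(rows, column):
        row = list(row)
        row[feature_index] = value
        permuted.append(tuple(row))
    return accuracy(rows) - accuracy(permuted)

print("credit-score importance:", permutation_importance(data, 0))
print("zip-code importance:   ", permutation_importance(data, 1))
```

Here the zip-code importance comes out at exactly zero, which is the kind of evidence an auditor would want: it shows the decision did not hinge on a feature that could proxy for a protected attribute.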

Ownership, training, and technology are key to success

Recognizing how critical these principles are, leading companies are intentional about how they embed them into their organization. In essence, these companies are “designing for ethics” to help ensure ethical AI is the default. This begins with ownership—who’s responsible for setting and executing the ethical AI agenda. Because of the massive risk to the business if AI gets things wrong, ownership should ultimately rest with the chief executive officer of the business. An enterprise-level team, led by a “chief AI officer” reporting to the CEO, should be charged with enacting the agenda throughout the company. This team is responsible for ensuring the right policies, practices, governance, and tools are in place to create a culture of ethical AI.

A critical element is training. As is the case with any major initiative, training not only accelerates the adoption of AI, but helps drive the knowledge and behaviors among employees needed to embed ethics into everything AI touches. Workers must be trained on how to use ML, make sure AI’s decisions are fair and just, put lessons back into the feedback loop so the algorithm can continuously learn, and correct relevant data sets on an ongoing basis. Training must strike the right balance between helping employees understand AI’s immense potential to drive innovation and growth, and giving them the information and tools they need to use AI responsibly. Human oversight will continue to play a key role in carefully planning, deploying, and sustaining AI solutions in a way that generates significant business outcomes and value while minimizing risk.
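The feedback loop described above, in which human reviewers correct AI decisions and those corrections are fed back into retraining, can be sketched in a few lines. The threshold “model” and its midpoint retraining rule below are invented purely for illustration.

```python
# Hypothetical sketch of a human-in-the-loop correction cycle.

class ThresholdModel:
    """Invented toy model: approves when a score exceeds a threshold."""

    def __init__(self, threshold=600):
        self.threshold = threshold
        self.corrections = []  # (score, correct_label) pairs from reviewers

    def predict(self, score):
        return 1 if score > self.threshold else 0

    def record_correction(self, score, correct_label):
        # A reviewer flags a decision the model got wrong.
        self.corrections.append((score, correct_label))

    def retrain(self):
        # Invented rule: move the threshold to the midpoint between the
        # lowest score a reviewer approved and the highest one denied.
        approved = [s for s, label in self.corrections if label == 1]
        denied = [s for s, label in self.corrections if label == 0]
        if approved and denied:
            self.threshold = (min(approved) + max(denied)) / 2

model = ThresholdModel()
model.record_correction(550, 1)  # reviewer: this should have been approved
model.record_correction(540, 0)  # reviewer: this should have been denied
model.retrain()
print("new threshold:", model.threshold)
print("re-scored 550:", model.predict(550))
```

Real systems replace the midpoint rule with proper retraining on the corrected data set, but the shape of the loop is the same: predict, review, record, retrain.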

Technology also can be a big help in designing ethical AI. Consider Google Cloud, for instance. Google is a well-known leader in the use of AI, and has made significant, ongoing investments in the AI tools that are baked into its Google Cloud platform. The company brought in leaders from a variety of industries to build specific AI-based industry solutions for the platform that incorporate design-for-ethics principles and practices. Additionally, Google’s cloud-based TensorFlow, a core open-source library that helps companies develop and train ML models, includes responsible AI practices that can be incorporated at every step of the ML workflow. Google’s tools provide a solid foundation on which companies can build responsible, ethical AI.

With more and more companies looking to use AI to propel their business, everyone has the responsibility to make sure we get it right. This will require an ongoing process of learning and iteratively correcting and improving AI’s performance. And it’s a topic that should be front and center in every CEO’s mind.

Learn more about how TCS and Google Cloud can help you innovate and reimagine your business for purpose-led sustainable growth: https://www.tcs.com/tcs-google-cloud

This is the second in a series of articles authored by Nidhi Srivastava, Vice President and Global Head, Google Cloud Business, TCS.

For more information, contact [email protected]
