AI Needs a Foundation of Trust

Thoughts about digital transformation and AI for enterprise leaders and their legal & compliance advisors

These posts represent my personal views on enterprise governance, regulatory compliance, and legal or ethical issues that arise in digital transformation projects powered by the cloud and artificial intelligence. Unless otherwise indicated, they do not represent the official views of Microsoft.

With great power comes great responsibility. Over the past months in this blog I’ve cited many examples of what AI can do. On the basis of those examples, I’ve argued that AI will be the defining business technology of the 21st century.

But a tool this powerful can do harm as well as good. Wielding it safely requires a new kind of trust, a trust that faces in two different directions at the same time:

  • on the one hand, you must earn the trust of your stakeholders that you will use AI for their benefit and will protect them from possible collateral harms;
  • on the other hand, your AI technology partners must also earn your trust and justify your confidence that AI will not harm your interests.

The two sides of AI trust are as indissociable as the two sides of a mathematical equation. You cannot have one without the other. If you cannot trust the partners who help you build AI, your own stakeholders will not be able to trust your AI.

What are some of the harms that poorly controlled AI might cause? Here are two significant examples that Boards concerned with corporate governance should ponder:

  • Safety. An AI that makes wrong predictions can cause accidents. Self-driving cars are an obvious case. But remember: the right standard by which to judge AI is not perfection (zero errors), but whether it actually does a better job than humans in the same situation. When self-driving cars are mature enough to emerge from their current experimental stage, they will certainly not be accident-free. But if they nevertheless prove significantly safer than human drivers, they will save many lives.
  • Bias and unfairness, especially toward groups who have historically suffered discrimination. An AI algorithm cannot “intend” to discriminate, because it is simply an algorithm. But when AI is trained on data that reflects historical biases in society, it can produce unfair results. As we have often discussed in this blog, face recognition systems trained mostly on photos of white men have been shown to have high error rates for women and people of color. These errors can be corrected by paying proper attention to the quality of training data, but this requires management vigilance and most likely some extra cost; the simple check sketched after this list shows what such vigilance can look like in practice.
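
To make that vigilance concrete, here is a minimal sketch in Python of the kind of routine audit a data team might run on a model's evaluation results: compare error rates across demographic groups and flag large gaps. The column names, the sample data, and the 2x disparity threshold are illustrative assumptions for this post, not a standard method from any particular vendor or paper.

```python
import pandas as pd

def error_rates_by_group(df: pd.DataFrame,
                         group_col: str = "group",
                         label_col: str = "label",
                         pred_col: str = "prediction") -> pd.Series:
    """Share of wrong predictions for each demographic group, worst first."""
    errors = (df[label_col] != df[pred_col]).astype(float)
    return errors.groupby(df[group_col]).mean().sort_values(ascending=False)

def flag_disparities(rates: pd.Series, max_ratio: float = 2.0) -> list:
    """Groups whose error rate exceeds the best group's by more than max_ratio."""
    best = rates.min()
    return [group for group, rate in rates.items()
            if best > 0 and rate / best > max_ratio]

# Made-up evaluation results purely for illustration.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   0,   1,   0,   1,   0,   1],
    "prediction": [1,   0,   1,   1,   0,   1,   1,   0,   0],
})

rates = error_rates_by_group(results)
print(rates)
print("Groups needing attention:", flag_disparities(rates))
```

In a real deployment the groups, metrics, and acceptable thresholds would be chosen with legal and compliance input, but the principle is the same: measure, compare across groups, and act on the gaps.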

When managing AI risks, enterprise leaders must understand that they have skin in the game. Often there is no zero-risk option. For example, a bank that fails to use AI to evaluate loan opportunities will lose business to rivals or make bad bets that compromise the bank’s viability. But because the AI loan system must be trained on past data, its decisions may unintentionally perpetuate historical discrimination against certain classes of applicants. Fairness suggests adjusting the algorithm to counteract such discrimination. But recent work by AI researchers at Berkeley shows that this may result in loans being made to applicants who can’t repay them, thus doing them more harm than good. Finding the right balance between no adjustment for past discrimination and too much adjustment is a delicate question that cannot be entrusted to AI alone. Ultimately the choice must be made by leaders who listen to the data, continually reassess both algorithms and outcomes, and accept responsibility before stakeholders for the decisions taken.
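
To show the shape of that trade-off, here is a toy simulation in Python. It is not the Berkeley researchers' model or any real bank's scoring system; the repayment scores, thresholds, and group setup are invented purely to illustrate how loosening the approval threshold for a disadvantaged group raises its approval rate but also the expected default rate among those approved.

```python
import random

random.seed(0)

def simulate_group(n: int, mean_score: float) -> list:
    """Draw illustrative repayment-probability scores for one applicant group."""
    return [min(1.0, max(0.0, random.gauss(mean_score, 0.15))) for _ in range(n)]

def outcomes(scores, threshold):
    """Approval rate, and expected default rate among the approved applicants."""
    approved = [s for s in scores if s >= threshold]
    if not approved:
        return 0.0, 0.0
    approval_rate = len(approved) / len(scores)
    expected_default_rate = sum(1.0 - s for s in approved) / len(approved)
    return approval_rate, expected_default_rate

# A group whose scores reflect historical disadvantage in the training data.
disadvantaged = simulate_group(10_000, mean_score=0.55)

for threshold in (0.70, 0.60, 0.50, 0.40):
    approval, default = outcomes(disadvantaged, threshold)
    print(f"threshold {threshold:.2f}: approval rate {approval:6.1%}, "
          f"expected default rate among approved {default:6.1%}")
```

The acceptable point on that curve is precisely the kind of judgment that cannot be delegated to the algorithm; it belongs to leaders who see this sort of measurement regularly and answer for the outcome.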

An AI application that fails on safety or fairness can do great damage to your organization’s reputation, as well as leading to unpleasant legal consequences. Minimizing these risks requires proactive management of your AI by people who understand what causes AI systems to fail and who have the authority to intervene.

But even if AI does not harm your reputation, nor cause you to lose money, nor lead to adverse legal action, you still have a broader responsibility to ensure that your use of AI benefits society as a whole. Perhaps the clearest example is the impact of AI on jobs.

Today’s AI is not exclusively about producing more goods and services with less human labor. It can also result in dramatically better goods and services for the same amount of labor—for example, better health outcomes for sick patients or jet engines that consume less fuel.

Yet there is no denying that often AI is all about advanced automation that will result in existing jobs being lost and require new skills for future jobs. We have already discussed the value of an AI learning culture that equips your employees with the knowledge needed to draw lasting competitive advantage from your unique data assets. This knowledge is specific to your enterprise and cannot be bought off-the-shelf. But you should also be concerned with enhancing the skills and future employability of people in your organization who will need extra support to transition to new ways of working. Existing employees who stand to lose their jobs and job candidates whose future chances of being hired are reduced by AI are legitimate stakeholders whose interests must not be neglected. You can serve their interests by sponsoring appropriate reskilling and training programs, both within and outside the walls of your organization.

No enterprise can carry the burden of responsible AI alone. Just as you owe an accounting of AI’s safety, ethical, and social effects to your stakeholders, the technology partners who help you build AI owe you a similar accounting. Here are three practical steps that Boards and CEOs should take to build the two-sided trust that responsible AI requires:

  1. Appoint a senior cross-functional AI team with the power to intervene in AI projects to take corrective action.
  2. Make AI’s broader footprint in society a required component of your internal AI education program, in addition to the technical and business strategy components.
  3. Evaluate your technology partners carefully and insist that they meet the same high standards for AI managed for the benefit of all that you apply in your own organization.

AI’s promise is great. You must learn to manage its potential for the benefit of all stakeholders. As Microsoft CEO Satya Nadella reiterates at every opportunity:

“As technologists and decision-makers we need to keep in mind the timeless values that drive what we do. How are we going to use technology to empower people?”

Microsoft has published a book about how to manage the thorny cybersecurity, privacy, and regulatory compliance issues that can arise in cloud-based digital transformation—including a section on artificial intelligence and machine learning. The book explains key topics in clear language and is full of actionable advice for enterprise leaders. It is available to download, and a Kindle edition is available as well.
