
AI and Ethics: We Will Live What Machines Learn

By Dan Wellers and Timo Elliott

Big Data analytics, machine learning, and other emerging artificial intelligence (AI) technologies have, in a very short time, become astonishingly good at helping companies see, and react to, patterns in data they would otherwise have missed. More and more, however, these new patterns carry difficult ethical choices. Not every connection between data points needs to be made, nor does every new insight need to be used. Consider these embarrassing real-world examples:

  • One company sent “congratulations on your new baby” announcements to women who weren’t ready to reveal their pregnancy.
  • Another company disproportionately targeted ads implying the recipient had an arrest record at people whose names suggested they belonged to minority ethnic groups.
  • A ride-hailing company showed guests at a corporate party records of customers who had traveled late at night to addresses other than their own, then returned to their own homes early the next morning, with a nudge-and-wink suggestion of what they might have been doing in between.

The evolution of technology inevitably includes mistakes, but the fact that these algorithmic failures were unintentional didn’t make them any less painful. It also didn’t make them any less ethically questionable. And we can’t blame the algorithms for that. They were only doing what they were taught to do.

Similarly, Microsoft had a mortifying experience in early 2016 with “Tay,” a chatbot intended to be a fun experiment in training an AI to understand conversational language. However, when trolls coordinated their efforts on Twitter and in messaging apps GroupMe and Kik, they were able to teach Tay to respond to them in appallingly racist ways, forcing Microsoft to take the AI offline after just 16 hours.

Artificial intelligence has progressed to the point that we’re asking it to automate not just business processes, but ethical choices. However, as the Tay incident shows, it still lacks the context and empathy of human intelligence and intuition. And all too often, neither the organization using AI nor the people affected by its decisions understand clearly how it arrives at its conclusions, or have any recourse to correct those conclusions when they’re wrong.

AI on the ethical edge

As AI advances exponentially, we urgently need to understand and mitigate its ethical risks, not in spite of the technology’s possibilities, but because of them. We’re already giving AI a great deal of power over decisions that are not only consequential, but potentially life-changing. Here are just a few examples:

Credit scoring algorithms, originally intended just to assess lending risk, are now commonly used to decide whether someone should get a job offer or be able to rent an apartment. Insurance underwriting algorithms determine whether someone can get coverage, how much, and at what cost, with little recourse for the applicant who disagrees. An insurer or potential employer might use health care algorithms to penalize people for the possibility that they might get ill at some point in the future, even if they never do. And as data scientist Cathy O’Neil explores at length in her best-selling book Weapons of Math Destruction, law enforcement decisions, from where to focus police activity to what kind of court sentences are handed out, are notorious for their racial bias.

If those issues aren’t complex enough, there’s the so-called “trolley problem” facing engineers working on self-driving cars: what do they instruct the car to do in an accident situation when every possible outcome is bad? (For a sense of how difficult this task is, visit Moral Machine, an MIT website that asks you to judge what a self-driving car should do in a range of no-win scenarios.) How engineers will make these decisions is, to put it mildly, a difficult question. How society should react when machines start to make life-changing or even life-ending choices is exponentially more so.

Guilty until proven innocent?

We can’t expect AI to know right from wrong just because it’s based on mathematical equations. We can’t even assume it will prevent us from doing the wrong thing. It turns out it’s already far too easy to use AI for the wrong reasons.

It’s well known, for example, that students often struggle during their first year at college. The University of Texas at Austin implemented an algorithm that helps it identify floundering freshmen and offer them extra resources, like study guides and study partners. In her book, O’Neil cites this project approvingly because it increases students’ chances of passing their classes, moving ahead in their field of study, and eventually graduating.

But what if a school used a similar algorithm for a different purpose? As it turns out, one did. In early 2016, a private university in the U.S. used a mathematical model to identify freshmen who were at risk of poor grades — then encouraged those students to drop out early in the year in order to improve the school’s retention numbers and therefore its academic ranking. The plan leaked, outrage ensued, and the university has yet to recover.

This may be uncomfortably reminiscent of the 2002 movie Minority Report, which posited a world in which people are arrested preemptively because computers predict they will commit crimes. We aren’t at that dystopian point, but futurists, who make a career of speculating about what’s coming next, say we’re already deep in uncharted waters and need to advance our thinking about the ethics of AI immediately.

Current thinking, future planning

There’s no way around it: all machine learning is going to have built-in assumptions and biases. That doesn’t mean AI is deliberately skewed or prejudiced; it just means that algorithms and the data that drive them are created by humans. We can’t help having our own assumptions and biases, even if they’re unconscious, but business leaders need to be aware of this simple truth and be proactive in addressing it.
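
To make that concrete, here’s a minimal sketch, in Python, of how bias gets baked in. Everything here is hypothetical (the data, the 0.8 bias term, the feature names): a model is trained on historical hiring decisions that favored one group, and it dutifully learns to weight group membership itself.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# learns the bias, even though "skill" is the only trait that should matter.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)           # invented protected attribute (0 or 1)
skill = rng.normal(0, 1, n)             # the trait hiring should actually reward

# Historical labels: past human decisions favored group 0, independent of skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 1, n)) > 0

X = np.column_stack([skill, group])     # the protected attribute leaks into the features
model = LogisticRegression().fit(X, hired)

# A clearly nonzero weight on `group` means the model absorbed the historical bias.
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])
```

And simply deleting the group column doesn’t make the problem go away: proxies that correlate with it, such as postal code or school attended, can carry much the same signal. That’s why the audits described below look at outcomes, not just inputs.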

AI has enormous potential, but if people don’t feel they can trust it, adoption will suffer. And if we simply avoid the risks, we also lose out on the benefits. That’s why businesses, universities, governments, and others are launching research initiatives and engaging in dialogue around AI-related concerns, principles, restrictions, responsibilities, unintended outcomes, legal issues, and transparency requirements.

We’re also starting to see the first explorations of ethical best practices for maximizing the good and minimizing the bad in our AI-infused future. For example, a fledgling movement is emerging to monitor algorithms to make sure they aren’t learning bias, and, what’s more, to audit them not just for neutrality but for their ability to advance positive goals. In addition, there’s now an annual academic conference, Fairness, Accountability, and Transparency in Machine Learning (FATML), launched in 2014, that focuses on ensuring that AI-driven decision-making is non-discriminatory, understandable, and subject to due process.
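
One concrete example of such an audit is the “four-fifths rule” long used in U.S. employment law: compare favorable-outcome rates across groups and flag any ratio below 0.8. A minimal sketch, with hypothetical decision data:

```python
# Minimal bias-audit sketch: the "four-fifths rule" (disparate impact ratio).
# The decisions and group labels below are invented for illustration.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates between two groups (1.0 = parity)."""
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favorable outcome
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

ratio = disparate_impact(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")   # below 0.8 warrants investigation
```

Auditing for the “positive goals” mentioned above goes well beyond a single ratio, but even a check this simple, run routinely against production decisions, can surface problems like the arrest-record ads described earlier.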

But making machine learning more fair, accountable, and transparent can’t wait. As the AI field continues to grow and mature, we need to act on these steps right away:

First, we must think about what incentives AI algorithms promote, and build in processes to assess and improve them to ensure they guide us in the right — by which we mean the ethical — direction.

We must also create human-driven overrides, avenues of recourse, and formal grievance procedures for people affected by AI decisions (one possible shape for such a mechanism is sketched after these recommendations).

We must extend anti-bias laws to include algorithms. Civilized countries put controls on weapons; when data can be used as a weapon, we need governmental controls to protect against its misuse.

Most importantly, we must see the question of AI and ethics less as a technological issue than as a societal one. That means introducing ethics training as part of both formal education and employment training, for everyone from technologists creating AI systems to vendors who market them to organizations deploying them. It means developing avenues through which developers and data scientists can express dissent when they see ethical issues emerging on AI projects. It means creating and using methodologies that incorporate values into systems design.
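
To illustrate the override-and-recourse recommendation above, here is one hypothetical shape such a mechanism might take: the model finalizes only clear-cut cases, the gray zone goes to a person, and any automated outcome can be appealed back into human review. All names and thresholds are invented.

```python
# Hypothetical human-in-the-loop pattern: automate only clear-cut cases and
# guarantee a route back to human judgment. Names and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class Decision:
    applicant_id: str
    score: float                     # model's probability of a favorable outcome
    outcome: str = "pending"
    appeal_notes: list = field(default_factory=list)

def decide(d: Decision, approve_above: float = 0.9, deny_below: float = 0.1) -> Decision:
    if d.score >= approve_above:
        d.outcome = "approved"
    elif d.score <= deny_below:
        d.outcome = "denied"
    else:
        d.outcome = "human_review"   # uncertain cases are never auto-finalized
    return d

def appeal(d: Decision, reason: str) -> Decision:
    d.appeal_notes.append(reason)    # formal grievance: logged, not discarded
    d.outcome = "human_review"       # any automated outcome can be contested
    return d

print(decide(Decision("a-123", score=0.55)).outcome)                        # human_review
print(appeal(decide(Decision("a-456", score=0.05)), "stale data").outcome)  # human_review
```

The design point is that the thresholds themselves are a policy decision, reviewable by people, rather than a property of the model.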

Fundamentally, AI is merely a tool. We can use it to set ethical standards, or we can use it as an excuse to circumvent them. It’s up to us to make the right choice.

Read the executive brief Teaching Machines Right from Wrong.

To learn more about how exponential technology will affect business and life, see Digital Futures in the Digitalist Magazine.

This post first appeared on digitalistmag.com
