The Ethics of Artificial Intelligence: An Interview with Kurt Long

In recent years, there have been tremendous advances in artificial intelligence (AI). These rapid technological advances raise a myriad of ethical issues, and much work remains to be done in thinking them through.

I am delighted to be interviewing Kurt Long about the topic of AI. Long is the creator and CEO of FairWarning, a cloud-based security company that provides data protection and governance for electronic health records, Salesforce, Office 365, and many other cloud applications. Long has extensive experience with AI and has thought a lot about its ethical ramifications.

SOLOVE: There is some confusion and disagreement about the definitions of artificial intelligence (AI) as well as machine learning. Please explain how you understand these concepts.

LONG: AI is essentially the science and machine learning is what makes it possible. AI is the broad concept, and machine learning is the application of AI that’s currently in use most widely. Machine learning is one way to “create” artificial intelligence.

Stanford says machine learning is “the science of getting computers to act without being explicitly programmed”.

Here is an interpretation from Intel’s head of machine learning, Nidhi Chappell: “AI is basically the intelligence – how we make machines intelligent, while machine learning is the implementation of the compute methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter. So the enabler for AI is machine learning.”
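
To make that distinction concrete, here is a minimal sketch in Python with scikit-learn (an illustration added for readers, not part of the interview, and assuming the library is installed): no classification rules are written by hand; the model infers them from labeled examples, which is what “acting without being explicitly programmed” looks like in practice.

```python
# Minimal sketch: "learning without being explicitly programmed".
# Assumes Python with scikit-learn installed; illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labeled examples: flower measurements and their species.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No rules are written by hand; the classifier infers them from the data.
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```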

SOLOVE: What are some of the things that have gone wrong with AI and that could potentially go wrong? 

LONG: While AI has the potential to solve big global challenges, there have been numerous cases where AI has produced troubling results, even if the technology was well intended.

  • Microsoft released an AI chatbot onto Twitter. Engineers programmed the bot to learn from and speak like other users on the social network, but after only 16 hours, the bot was shut down for posting racist tweets (link).
  • Uber’s AI-powered self-driving car failed to recognize six red lights and ran through one of them where pedestrians were present (link).
  • The Houston Independent School District used the EVAAS (Educational Value Added Assessment System) program to rate teachers by comparing student test scores against state averages. The district’s goal was to fire 85% of “low-performing” teachers. The teachers took the case to court (and won) on the basis that the software developer would not disclose how its algorithm worked (link).

Poor data quality poses big risks when it comes to AI. Even if you have a robust amount of data, you can still run into problems if bias is inherent in the training sets or if the data is inaccurate. It is up to humans to train the machines with ethics and human alignment in order to solve these challenges.
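
As a hypothetical illustration of that risk in Python with scikit-learn (the “hiring” data below is entirely invented for the example), a model trained on skewed historical decisions simply learns to reproduce the skew:

```python
# Hypothetical sketch: a model trained on biased data reproduces the bias.
# The "historical hiring" data below is invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # 0 or 1: a sensitive attribute
skill = rng.normal(0, 1, n)              # the feature that *should* matter

# Biased labels: historical decisions favored group 1 regardless of skill.
hired = ((skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Identical skill, different group: the learned model scores them differently.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])  # group 1 gets a higher score
```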

SOLOVE: Why are ethics necessary for AI?

LONG: AI is a powerful tool that stands to greatly benefit society and our quality of life, but the intended use of the technology and its ethics need to be established before implementing it.

It’s everyone’s obligation to make sure that these technologies are being used to further an ethical goal or objective. The Institute of Electrical and Electronics Engineers (IEEE) addresses the importance of ethics in AI in section 7.8 of their policies, highlighting the need to avoid bias and bribery and to ensure health and safety.

SOLOVE: What types of ethical rules would you recommend?

LONG: In addition to the ethical rules highlighted in IEEE’s policies, there are three principles that can be generally applied when considering the ethical and legal use of AI.

The first one is transparency. There should be no “black box algorithms” whose decisions are understood only by the machine. There needs to be a level of transparency associated with machine learning systems. This applies to consent, the intended use of data, the data used to train machines, and how the machine makes decisions (recall the EVAAS case above).
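
One way to make a model’s reasoning inspectable, sketched here with a generic linear classifier in Python and scikit-learn (an illustration only, not the EVAAS system or any particular product), is to expose each feature’s contribution to an individual decision rather than reporting only the final score:

```python
# Illustrative sketch: exposing how a simple model reaches one decision,
# instead of treating it as a black box. Generic example data, not EVAAS.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(data.data, data.target)

# Per-feature contribution to one prediction: coefficient * scaled feature value.
x = pipe.named_steps["standardscaler"].transform(data.data[:1])[0]
coefs = pipe.named_steps["logisticregression"].coef_[0]
contributions = sorted(zip(data.feature_names, coefs * x),
                       key=lambda t: abs(t[1]), reverse=True)
for name, c in contributions[:5]:
    print(f"{name}: {c:+.2f}")   # the largest drivers of this decision
```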

The second consideration is whether AI is aligned with values at scale. AI must be aligned with the values of the technology’s recipients and participants, the technology’s vendor and users, and the law. This reduces the risk of unexpected outcomes or inadvertent results.

Lastly, there should be a human in the loop. These learning systems need to be supervised to ensure that we understand how they are drawing conclusions, and a final determination of actions taken should be made by a human.
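
A minimal sketch of such a human-in-the-loop gate (an assumed workflow written in Python for illustration, not a description of any specific product) might route low-confidence model outputs to a person for the final call:

```python
# Minimal human-in-the-loop sketch: the model proposes, a person decides
# whenever the model's confidence is below a threshold. Illustrative only.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    decided_by: str

def decide(probability_flagged: float, threshold: float = 0.9) -> Decision:
    """Auto-decide only when the model is very confident; otherwise escalate."""
    if probability_flagged >= threshold:
        return Decision("flagged", decided_by="model")
    if probability_flagged <= 1 - threshold:
        return Decision("cleared", decided_by="model")
    # Uncertain cases go to a human reviewer, who makes the final determination.
    return Decision("needs human review", decided_by="human")

print(decide(0.97))   # Decision(label='flagged', decided_by='model')
print(decide(0.55))   # Decision(label='needs human review', decided_by='human')
```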

SOLOVE: Are there disagreements about the ethical rules that are needed? Or are you seeing consensus?

LONG: With regulations like the GDPR and incidents like Facebook’s potential privacy violations in the headlines, people are becoming more aware of what can be done with their personal data, whether directly or inadvertently.

As we move toward a society that’s more conscious of how these technologies are being used, I think there is a general consensus that these technologies should be used for the good of society, and therefore we need ethical rules for using AI.

That’s not to say that the researchers, think tanks, and thought leaders associated with this movement all agree on exactly what those ethical rules are and how they should be framed, but we are definitely seeing a strong movement toward standards and principles that can be applied globally.

SOLOVE: One of the challenges with ethical rules is that they are often voluntary. Some creators of AI technology might not follow them. Should there be laws rather than voluntary ethical rules? Are there any dangers with using laws as opposed to ethical rules to govern AI?

LONG: When it comes to law and the ethics of AI, the two should be commingled. It’s complex, since an organization may not intend to use AI maliciously but may still carelessly cause harm due to flawed data sets, a lack of transparency, or insufficient human involvement.

So there is still work to be done in distinguishing bad intentions from willful neglect, but I do believe that laws can, and should, be used to enforce ethical rules.

Either way, vendors and consumers alike should be educated on the considerations of using AI, and they should be held accountable for the outcomes of machine learning. That use should be transparent and legally defensible in court.

SOLOVE: How do we keep AI and machine learning under control when they are continually evolving in ways that are unexpected? 

LONG: This goes back to maintaining transparency and understanding the technologies you are using while keeping a human in the loop. AI is not something that should be unleashed to derive outcomes from whatever it evolves into. AI should be an extension of human work and used to empower this work – not to undermine it or replace humans.

Thanks, Kurt, for discussing this topic with me.

If you liked this interview, you might be interested in Kurt’s essay, Aligning Your Healthcare Organization with AMA’s AI Policy Recommendations.

Daniel J. Solove is the John Marshall Harlan Research Professor of Law at George Washington University Law School and the founder of TeachPrivacy, a privacy awareness and security training company. He is the author of 10 books and more than 50 articles. 

Gerard Stegmaier

Practical Problem Solver, Trusted Consigliere + Lawyer

6y

I always love the assumptions inherent in certain types of questions. Asking how we “control” AI begs the question of why it must be controlled any more than we must “control” humans’ thoughts and expressions. If the net utility to society is lower than if we failed to control it, or if the creators aren’t forced to internalize the negative externalities resulting from their use of AI, or if in its application it somehow violates some specific principle of law (or actual law), like legally prohibited discrimination, then “control” may be appropriate. But let’s not begin every conversation about new technology with the assumption that more and new regulation should be the natural state of things.

Mani Tulasi CISSP CISA CCSK

Innovator, mentor, enabler, business champion, challenger, creative thinker

6y

Interesting article, and thanks for sharing. We humans have struggled with ethics and technology since the invention of the printing press, or even before that. While I don't disagree that there is a need to use AI in an ethical manner, the issue I struggle with is where one sets the bar. Ethics, while safeguarding humans, should not hinder the development of the technology. Some of the major research in AI is in areas such as black box algorithms, with the aim of pushing AI beyond natural human intelligence. We are nowhere near understanding how AI truly works other than in some controlled environments where humans dictate what the machine should learn. Until then, we should work with a set of higher ethical boundaries, such as: AI should not harm humans; if it does, it should revert to a safe mode; and if it can't, it should self-destruct. We have invented equally destructive technologies, such as nuclear science and gene cloning. The knowledge we have acquired from those technologies has allowed us to set the boundaries within which nuclear science and gene cloning can be used. We should take the same approach to AI and not shackle it before understanding it.
