Where is the Accountability for AI Ethics Gatekeepers?


This article was originally published on Grit Daily and is reproduced with permission.

In July 2020, MIT took a frequently cited and widely used dataset offline when two researchers found that the ‘80 Million Tiny Images’ dataset described images of Black and Asian people with racist terms and images of women with misogynistic ones.

According to The Register, it was Vinay Prabhu, a data scientist of Indian origin working at a startup in California, and Abeba Birhane, an Ethiopian PhD candidate at University College Dublin, who made the discovery that thousands of images in the MIT database were “labeled with racist slurs for Black and Asian people, and derogatory terms used to describe women.” This problematic dataset was created back in 2008, and if it had been left unchecked, it would have continued to spawn biased algorithms and introduce prejudice into AI models that used it as a training dataset.
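Even a crude automated check can surface this class of problem long before a dataset reaches production. The sketch below, in Python, scans a dataset’s label vocabulary against a curated blocklist of offensive terms; the `offensive_terms.txt` file and the sample labels are hypothetical placeholders, and this is an illustration of the general idea, not the methodology Prabhu and Birhane actually used.

```python
# Illustrative sketch only: scan dataset class labels against a blocklist of
# offensive terms. The file name and labels below are hypothetical placeholders.

def load_blocklist(path):
    """Load one offensive term per line into a lowercase set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def audit_labels(labels, blocklist):
    """Return the labels containing any blocklisted word."""
    flagged = []
    for label in labels:
        # Normalize underscore-separated labels such as "office_chair".
        words = label.lower().replace("_", " ").split()
        if any(word in blocklist for word in words):
            flagged.append(label)
    return flagged

if __name__ == "__main__":
    labels = ["golden_retriever", "office_chair"]  # stand-in label names
    blocklist = load_blocklist("offensive_terms.txt")
    for label in audit_labels(labels, blocklist):
        print("flagged label:", label)
```

A real audit would have to go much further than this, covering multi-word slurs, context, and the images themselves, which is part of why such labels can persist unnoticed for over a decade.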

This incident also highlights a pervasive tendency in this space to put the onus of solving ethical problems created by questionable technologies back on the marginalized groups negatively impacted by them. IBM’s recent decision to exit the facial recognition industry, followed by similar measures by other tech giants, was in no small part due to the foundational work of Timnit Gebru, Joy Buolamwini, and other Black women scholars. These are just a few of the many instances where Black women and people of color have led the way in holding the techno-elites accountable for ethical missteps.

Last year, Gizmodo reported that ImageNet also removed 600,000 photos from its system after an art project called ImageNet Roulette demonstrated systemic bias in the dataset. ImageNet is the brainchild of Dr. Fei-Fei Li at Stanford University and the work product of ghost workers on Mechanical Turk, Amazon’s infamous on-demand micro-task platform. In their book “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass,” authors Mary L. Gray and Siddharth Suri describe a global underclass of invisible workers who make AI seem “smart” while earning less than the legal minimum wage and who can be fired at will.

As a society, we too often use elite status as a substitute for ethical practice. In a society that is unethical, success and the corresponding attainment of status can hardly be assumed to correlate with anything amounting to ethical behavior. MIT is the latest in a growing list of elite universities that have positioned themselves as experts and arbiters of ethical AI while glossing over their own ethical lapses without ever being held accountable.

Whose Ethics are These?

Given the long history of prejudice within elite institutions, and the degree to which they have continuously served to uphold systemic oppression, it’s hardly surprising that they have been implicated in or are at the center of a wave of scandals over ethics and racism.

In March 2019, Stanford launched the Institute for Human-Centered AI with an advisory council glittering with Silicon Valley’s brightest names, a noble objective (“to learn, build, invent and scale with purpose, intention and a human-centered approach”), and an ambitious fundraising goal of over $1 billion.

This new institute kicked off with glowing media and industry reviews, until someone noticed a glaring omission. Chad Loder pointed out that the 121 faculty members listed were overwhelmingly white and male, and not one was Black.

Rather than acknowledging algorithmic racism as a consequence of anti-Blackness at the elite universities that receive much of the funding and investment for computer science education and innovation, or of the racism at tech companies that focus their college recruitment on these schools, we act as though these technological outcomes are somehow separate from the environments in which the technology is built.

Stanford University is, by its own admission, a $6.8 billion enterprise with a $27.7 billion endowment, 79 percent of which is restricted by donors for specific purposes. After being at the center of last year’s college admissions bribery scandal, it was again in the hot seat recently because of its callous response to the global pandemic, which has left many alumni disappointed.

MIT and Stanford are not alone in their inability to confront their structural racism and classism. Another elite university that has received generous donations from ethically problematic sources is the venerated University of Oxford.

Back in 2019, U.S. billionaire Stephen Schwarzman, co-founder of the Blackstone private equity group, endowed Oxford with $188M (the equivalent of £150M) to establish an AI ethics institute. The newly minted institute sits within the Humanities Centre, with the intent to “bring together academics from across the university to study the ethical implications of AI.” Given the Blackstone Group’s well-documented ethical misdeeds, this funding source was of dubious provenance at best.

Schwarzman also donated $350M to MIT for AI research, but the decision to name a new college of computing at the school after him sparked an outcry among faculty and students, mainly because of his role as a former advisor to, and vocal supporter of, President Donald Trump, who has been criticized for his overtures to white supremacists and embrace of racist policies.

Endowments are an insidious way for wealthy benefactors to exert influence on universities and guide their research, including policy proposals. It is not realistic to expect donors to fund academic initiatives that would reform a system that directly or indirectly benefits them.

This wasn’t the first high-profile donor scandal for MIT, either. It had also accepted funding from the late Jeffrey Epstein, the notorious sex offender who was arrested for federal sex trafficking in 2019. The MIT-Epstein reveal led to public disavowals and resignations by leading researchers like Ethan Zuckerman, who stated publicly on his blog, “the work my group does focuses on social justice and on the inclusion of marginalized individuals and points of view. It’s hard to do that work with a straight face in a place that violated its own values so clearly in working with Epstein and in disguising that relationship.”

Evgeny Morozov, a visiting scholar at Stanford University, called it “the prostitution of intellectual activity” in a scathing indictment and demanded that MIT shut down the Media Lab, disband TED Talks, and refuse tech billionaires’ money. He went on to say, “This, however, is not only a story of individuals gone rogue. The ugly collective picture of the techno-elites that emerges from the Epstein scandal reveals them as a bunch of morally bankrupt opportunists.”

We have a reasonable expectation that elite schools behave ethically and do not use their enormous privilege to whitewash their own sins and those of their wealthy donors. It is also not entirely outrageous to require them to use their enormous endowments during times of unprecedented crisis to support marginalized groups, especially those who have been historically left out of whitewashed elite circles, rather than fund some billionaire’s pet project.

It’s not enough to stop looking to institutions that thrive on and profit off deeply unequal, fundamentally racist systems to act as experts in ethical AI; we must also move beyond excusing unethical behavior simply because it is linked to a wealthy, successful institution.

By shifting power to these institutions and away from marginalized groups, we are implicitly condoning and fueling the same unethical behaviors that we supposedly oppose. Unless we fully confront and address racial prejudice within the institutions responsible for much of the research and development of AI and our own role in enabling it, our quest for ethical and responsible AI will continue to fall short.

Authors:

Mia Shah-Dand is CEO of Lighthouse3, a research & advisory firm focused on responsible innovation with emerging technologies. She is also the founder of the Women in AI Ethics initiative and creator of the “100 Brilliant Women in AI Ethics” list. https://www.dhirubhai.net/in/miadand/

Ian Moura is a researcher with an academic background in cognitive psychology and human-computer interaction (HCI). His research interests include autism, disability, social policy, and algorithmic bias. https://www.dhirubhai.net/in/ianmoura/
