Power and privilege in the AI industry
I recently spoke to Meredith Whittaker, co-director of the AI Now Institute. We discussed how technology is shaped by the people who build it, and how sustainable, ethical work cultures support sustainable, ethical technological progress.
Power relations are reflected in the way these technologies are used. Who has the power to use them, and on whom are they used?
Meredith Whittaker: "If you look across the AI industry, there are an increasing pile of examples where we see that these systems embed biased and discriminatory logic. In almost every case, these biases are effectively replicating histories of discrimination against people who have been historically discriminated against. So against women, against black people, against trans people. I have never seen an AI system that is biased against white men as a standalone category. There was a paper just published out by some machine learning researchers at Google, that showed that sentiment analysis software that uses natural language processing to identify hate speech, or negative sentiment in text. So this is AI that's used for things like content moderation, and that use case in itself has a number of problems. These systems were consistently flagging discussion about disability, and people with disabilities as negative or even violent."
It comes back to this question of who is doing the designing?
Meredith Whittaker: "This is an issue of power, that's exactly right. These are issues that are not going to be fixed by simply tweaking an algorithm, or in many cases, even augmenting the datasets that are being used. These are problems that are going to be fixed when the culture that would think it was normal to collect only samples of white skin to train a melanoma detection algorithm, when that culture changes, when those structures change. Because that is the logic that's feeding into these systems, that is being used to develop these systems, which you know ultimately are blinkered, are developing these systems in ways that may work for Chad, and Brad, and Brandon and their friends, but not work for people outside of their social orbit."
Do you need to effect that change through culture change, or can it be done through regulation, with compliance to that regulation and enforcement when it is not complied with?
Meredith Whittaker: "We certainly need regulation. We needed regulation yesterday. We needed regulation ten, twenty years ago. The AI Now Institute has suggested common sense regulations, such as, if you are using AI technologies to make socially-significant decisions such as whether someone should receive a job or whether they should receive benefits or resources, then those technologies should not be protected from scrutiny under trade secrecy. Trade secrecy should be waived, so that we are able to examine you know the mechanisms at work within these systems and some of the claims that are made by the people selling these systems often to governments and large businesses. We have also recommended truth in advertising laws be applied and enforced to these systems. If you say it can do something, then it needs to do that or there needs to be penalties."
Listen to our full conversation here.