Weapons of Math Destruction: Algorithms and the Spread of Inequality

Interview with Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, published on November 7 in the online edition of Le Monde. Translated from French by myself to make this material available to English speakers. No copyright infringement is intended.

**********************************************************************************

Education, justice, jobs, politics... Algorithms permeate every part of society, with sometimes tragic consequences. Such is the assessment made by Cathy O’Neil, a former Wall Street analyst, who was struck by the role of these “weapons of math destruction”, as she dubbed them, in the 2008 financial crisis. Since then, the US mathematician has been denouncing the perverse effects of these computer programs.

How did you come to realize that algorithms can constitute a threat?

I used to work in finance, in a role directly linked to risk modeling. And what I realized is that the algorithms were never designed to be truly accurate: if the risks were underestimated, we made more money. That was one of the main causes of the 2008 financial crisis.

Do you remember the mortgage-backed securities with a triple-A rating? That rating was supposed to certify that, mathematically, these securities had a low risk of collapsing. It was false. But people believed it anyway, because they trust mathematics.

Is it the financial crisis that awakened you to the impact these algorithms can have on people’s lives?

I didn’t fully grasp the impact at the time, but I knew that the consequences were silent and invisible. There was a long interval between the loans and the loss of homes; the connection wasn’t that obvious. What’s more, the borrowers were ashamed: they were singled out for having bought houses that were too expensive for them. The social mechanism of shame crowded out the debate on algorithms.

What is in your view the most striking example of a “weapon of math destruction”?

In the United States, judges use an algorithm called COMPAS that assesses the probability of a defendant being arrested again within the next two years. However, we know that in our country, people get arrested for all kinds of reasons: for using drugs, for being poor and urinating on the sidewalk because they don’t have access to a restroom, or for being black and smoking marijuana – black people get arrested for this far more often than whites, even though usage rates are the same. Many arrests have nothing to do with violence, and everything to do with being poor or from a minority.

And these people, whom the algorithm labels “high risk”, stay longer in jail, which ironically increases the risk that, once they get out, they will have less social interaction, fewer chances of finding a job… and therefore a greater risk of going back to jail.

You’re not the first person to denounce the bias of algorithms that are supposed to prevent repeat offenses. Why are they still being used in the United States?

Unfortunately, there’s a sort of political divide between the people who trust these algorithms and those who don’t. Which is ridiculous, because this is supposed to be a science based on facts. But that’s precisely what I want to demonstrate: it isn’t a science. We don’t have the means to test the hypotheses, and even when we could, we don’t!

What’s more, this algorithm, exactly like those that supposedly predict where to deploy police officers to prevent crime, rests on the assumption that we have data on crime [from which these algorithms make predictions]. But the data we have is not about crime; it’s about arrests.

In your book, you explain that the poor are the first victims of algorithms. Is that true in fields other than justice?

The poor are the biggest losers of the big-data era. When I worked as a data scientist, my job was to distinguish between high-value and low-value consumers. I am myself a high-value consumer, especially for knitting websites, because I love knitting. That makes me vulnerable to offers on cashmere.

But poor people are targeted by far more predatory industries, such as online universities whose only purpose is to turn a profit. They particularly target people who are so poor that they receive government subsidies to pay for their studies, and who don’t really understand the system. These are vulnerable people who are told that online education will solve all their problems.

And it’s algorithms that determine the value of people...

Yes, that’s what big data does. It’s about using information that doesn’t seem relevant at first sight – “likes” on social networks, retweets, Google searches, the kind of medical websites you visit… All of this is used to build a profile and determine whether you’re a privileged person in real life. If the algorithms decide that you are, they make you even more privileged; if not, the opposite. They exacerbate inequalities.

In your book, you go as far as stating that algorithms threaten democracy. In what way?

I think first and foremost of the social networks that provide us with information. The problem is that their algorithms aren’t built to surface accurate information, but the information we want to see, based on the clicks of people who are similar to us. These are the famous filter bubbles.

The problem is when these algorithms are combined with targeted ads. In the last hours of the 2016 US presidential campaign, Donald Trump’s team targeted African Americans on Facebook with ads encouraging them not to vote. These are highly personalized attempts at manipulation: I’m not manipulated in the same way that you would be. One could imagine, for example, an ad telling me that I’m fat, on election day, to keep me from going out to vote. And that could work!

These algorithms keep us from accessing good information because they manipulate us emotionally. They degrade democracy.

Do you believe conversely that algorithms could be used in a positive way?

Some algorithms, depending on how they’re used, can truly help people or destroy the system. Take healthcare: your data can be used by a doctor to tell you whether you display certain risks, which kind of treatment you should follow, and so on. But the same program can be used by insurance companies to weed out people who risk becoming ill and costing them too much.

A good algorithm is one that strives for equality. To that end, we must ask ourselves every time: who does this algorithm harm?

In your book, you refer to a “silent war” whose “victims lack the necessary weapons to fight back”. What can we do on an individual basis?

That’s a very important question, and unfortunately I don’t have a good answer. Imagine that a hiring algorithm rejects your application because you’re a woman. How would you know? How would you find other people who were treated unfairly? And how would you get together, organize, and fight back? It’s impossible.

What about corporations, governments?

Enforcing existing laws should be the first step! We have laws that make it illegal to discriminate against women in hiring. There are other laws, on finance and on justice, that go unenforced. When it comes to information and filter bubbles, on the other hand, I believe new laws are needed, even if it’s hard to say what they should look like.

Facebook, for example, should open itself up to researchers. Today it lets a few of them in every now and then, but it selects the projects, controls the experiments, and decides whether or not to publish the results. It shouldn’t hold that power. We have enough reasons to suspect Facebook of affecting democracy; we need to know exactly what we’re facing.

Another possibility would be to ban targeted political ads. You could still distribute political ads, but without deciding who gets to see them. Everyone should see the same thing.

Do you think the situation could worsen?

It’s possible. Just look at China! Efforts such as [the European Union’s] General Data Protection Regulation send a positive signal, but they don’t deal directly with algorithms, only with data. Europe’s legislation lags behind what is happening today.


