Algorithms can't be inherently racist. Programmers on the other hand...
Murugappan Meiyappan
Helping e-commerce brands grow without burning money on ad spends
Twitter has recently been in the news for all the wrong reasons. Many publications of high repute have been calling Twitter's algorithms problematic and racist. But are they?
The social media giant has long been cropping images to ensure pictures don't take up too much space on the main feed, and to accommodate multiple images in a single tweet.
An interesting experiment surfaced a few weeks ago that exposed a major flaw in Twitter's cropping algorithm. Twitter user @bascule wondered whether the image-cropping algorithm would favor Barack Obama or Mitch McConnell, so he posted a tall image containing both of their faces, then another with the positions swapped: Obama on top, Mitch McConnell at the bottom. In both cases, Mitch McConnell's face was the only thing in the thumbnail. Obama's face couldn't be seen unless people made a conscious effort to click the image and enlarge it.
Multiple permutations later, Twitter turned out to have a Mitch McConnell bias of epic proportions. No matter how the test images were arranged, and irrespective of how many Obamas they contained, Mitch's face made it into the thumbnail every single time.
Myriad experiments by users from all over the world indicated that this was neither an accident nor a coincidence. This was by design. There was an inherent bias favoring white people. You know how things escalate from there: theories were floated, and people started calling Twitter racist.
Techies around the world came to Twitter's defense: maybe the algorithm chose the brightest part of the image as its thumbnail to increase visibility. A few programmers were adamant that tech noobs who didn't know what they were talking about should simply stop chiming in, since it's nearly impossible to find a large and diverse enough dataset to train the algorithm.
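To make that defense concrete, here's a minimal sketch of what a "brightest region wins" cropper could look like, assuming a simple sliding-window scan over row luminance. The function name and approach here are purely illustrative; Twitter's real cropper reportedly used a trained saliency model, not raw brightness.

```python
# A minimal sketch of the "brightest region wins" hypothesis.
# This is NOT Twitter's actual algorithm, just an illustration
# of the defense some programmers offered.
import numpy as np
from PIL import Image

def brightness_crop(path, crop_h=300):
    """Crop a tall image down to its brightest horizontal band."""
    img = Image.open(path)
    crop_h = min(crop_h, img.height)             # guard for short images
    lum = np.asarray(img.convert("L"), dtype=np.float32)

    # Mean luminance of each sliding window of crop_h consecutive rows.
    row_means = lum.mean(axis=1)
    window_means = np.convolve(row_means, np.ones(crop_h) / crop_h,
                               mode="valid")

    top = int(window_means.argmax())             # brightest window wins
    return img.crop((0, top, img.width, top + crop_h))
```

A heuristic like this has no concept of race at all, which is exactly the defenders' point: a crop can systematically favor lighter faces without anyone having coded that intent.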
As someone who's worked on image-processing projects in the past, I know that's true. But if a company like Twitter can't train its algorithm to treat people of color the same way it treats white people, who can?
If competent coders and testers who are also people of color had been part of the development team, there would have been no room for a blunder like this.
Twitter Support issued a response to the criticism: they essentially said that the team had tested the model for bias before shipping it and hadn't found any evidence of it, but, as users pointed out, there's clearly a lot of room for improvement. They also added that they'd keep people updated on their learnings and open-source their work for public scrutiny.
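As a sketch of what "testing for bias" can look like in practice, here's the kind of paired-swap check users were effectively running by hand: present the same two faces in both vertical orderings, across many pairs, and count who survives the crop. Everything named below (paired_swap_test, crop_model, make_pair_image, contains_face) is a hypothetical stand-in, not Twitter's internal tooling.

```python
# A hedged sketch of a paired-swap fairness check. crop_model,
# make_pair_image, and contains_face are hypothetical callables
# supplied by the caller; the point is the methodology.
from itertools import product

def paired_swap_test(crop_model, make_pair_image, contains_face,
                     faces_a, faces_b):
    """Count how often faces from each group survive cropping,
    across every cross-group pairing and both vertical orderings."""
    wins = {"group_a": 0, "group_b": 0}
    for fa, fb in product(faces_a, faces_b):
        # Test both top/bottom orderings so vertical position
        # can't explain the outcome.
        for top, bottom in ((fa, fb), (fb, fa)):
            cropped = crop_model(make_pair_image(top, bottom))
            if contains_face(cropped, fa):
                wins["group_a"] += 1
            if contains_face(cropped, fb):
                wins["group_b"] += 1
    return wins  # a large asymmetry suggests the cropper favors one group
```

With a balanced face set, an unbiased cropper should land near a 50/50 split. The informal Obama/McConnell tests were effectively a two-face version of this check, with a wildly lopsided result.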
I think it's a great response. But it remains to be seen whether they'll come through on their promise or whether they just happen to have a really good PR team on board.
Recently, another (white) techie created an algorithm that supposedly determines the trustworthiness of a person based on their facial features. According to the algorithm, features more common among people of color (particularly black people), such as wide noses or thick lips, implied that the person in question was not to be trusted.
Both of these incidents reminded me of a hilarious plot from Better Off Ted, where a fictional company named Veridian Dynamics invents an extremely insensitive solution for its faulty motion sensors, which are incapable of detecting black people.
AEM Solution Architect/Lead Developer
4y
Programmers are conditioned by living in the same society as everyone else, so the biases and prejudices that prevail in that society, which they inherit, are bound to seep consciously or unconsciously into the models and algorithms they create. As society as a whole slowly evolves, programmers change too, and the models and algorithms they create become more considerate. The change is happening, slowly but surely, and this article is part of that change.
Founder @LastBench | Brand Building, Films, Social Impact
4y
Superbly summarised and to the point. Loved your take on this, Muruga. And a great finish there with the BOT! ;)