Move over, Judge Judy: AI is here
Photo by Conny Schneider on Unsplash


Although many conversations about AI focus on how to reduce bias, others have pointed out the limitations of framing the problem of AI as one of fairness or bias. Kinjal David asks why we reach for terms like "algorithmic bias" and "sexism" when we aren't talking about race or gender. It is critical to view AI from different perspectives and through different lenses.

AI facial recognition algorithms are notoriously fraught with bias, especially when detecting people of color. When governments or police departments deploy algorithms that seem perfectly fair on paper, they can unfairly target the minority groups that need protection the most. Cathy O'Neil, author of Weapons of Math Destruction, suggests the problem runs far deeper, because the data used to train these systems reinforce existing inequalities. Fortunately, big tech is trying to improve these poor outcomes: Google recently introduced the Monk Skin Tone Scale, developed by Harvard Assistant Professor of Sociology Dr. Ellis Monk, to combat skin color bias.

One possible reason for algorithmic bias is insufficient training data: a dataset containing mostly white individuals will underrepresent other groups. As a graduate student, Joy Buolamwini found that an AI system detected her face better when she wore a white mask, prompting her research project Gender Shades. This project uncovered the bias built into commercial gender-classification AI, showing that facial analysis technology is heavily skewed toward white male faces. Joy fought back by founding the Algorithmic Justice League to move the industry toward equitable and accountable AI.
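
To make the underrepresentation problem concrete, here is a minimal sketch using entirely synthetic data (nothing here reflects the Gender Shades datasets or any real system): a classifier trained on a sample dominated by one group typically ends up far less accurate on the underrepresented group.

```python
# Minimal sketch with synthetic data (hypothetical, not the Gender Shades data):
# a classifier trained mostly on group A tends to score worse on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features per sample; each group's signal sits in a shifted region.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training set: 95% group A, 5% group B (the underrepresented group).
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out evaluation: accuracy is typically much lower for group B,
# because the decision boundary was fit almost entirely to group A's data.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The disparity appears without any malicious intent in the code; it falls out of the data mix alone, which is exactly the point.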

Moreover, even when algorithms are made "blind" to sensitive attributes, proxies can smuggle those attributes back in. For example, proxies such as weight and height may generate bias against the overweight or obese population, roughly 69% of Americans. This is a particularly troubling problem for algorithm designers. For this reason, it's vital to build solutions centered around human-computer interaction (HCI) to mitigate bias and ensure the best possible outcome. This human involvement is critical because it provides continuous feedback on the efficacy of bias mitigation efforts.
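
A quick way to see why blinding alone fails is to check whether the remaining features can still predict the sensitive attribute. The sketch below uses made-up height and weight proxies; the feature names and correlations are illustrative assumptions, not real data:

```python
# Minimal sketch (entirely synthetic): dropping a sensitive attribute does not make
# a model "blind" to it when correlated proxies such as height and weight remain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
sensitive = rng.integers(0, 2, size=n)                # a protected group label
height = rng.normal(170, 8, size=n) + 6 * sensitive   # proxies correlated with it
weight = rng.normal(70, 10, size=n) + 8 * sensitive   # (correlations are invented)

X = np.column_stack([height, weight])  # the sensitive column is deliberately excluded
probe = LogisticRegression().fit(X, sensitive)

# If this "probe" accuracy is well above the 50% chance level, the proxies leak the
# attribute, and any downstream model using them can still discriminate by group.
print("sensitive attribute recoverable with accuracy:", round(probe.score(X, sensitive), 3))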

Using a transparent system may help AI improve fairness and trust while minimizing discrimination. But transparency won't solve everything: it can introduce additional vulnerabilities for bad actors to exploit, a problem known as the AI transparency paradox. It's essential to recognize that using AI for social good doesn't imply neutrality and comes with significant risks. A transparent process can help identify and minimize those risks while improving the overall performance of the AI system.

Whatever its potential benefits, algorithmic decision-making requires the systematic identification of the biases that can lead to unfair outcomes. Algorithmic bias can stem from the design of an algorithm, improper use of data, or unintended decisions. Ultimately, algorithmic decisions may reinforce existing prejudices and leave protected groups even more vulnerable. Consequently, mitigating algorithmic bias requires proactive measures from both the developer and the operator of the algorithm.
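
"Systematic identification" can start with something as simple as auditing decision rates by group. Here is a minimal sketch of one such check, demographic parity; the decisions, group labels, and thresholds below are all invented for illustration:

```python
# Minimal sketch of one systematic bias check: demographic parity, comparing a
# model's positive-decision rate across groups. All values here are invented.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Return the difference in positive-outcome rates between groups."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    print("positive rate by group:", rates)
    return max(rates.values()) - min(rates.values())

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # e.g., loan approvals
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print("parity gap:", demographic_parity_gap(decisions, groups))
# The related "four-fifths rule" compares the ratio of rates instead: here
# 0.4 / 0.6 ≈ 0.67, below the 0.8 threshold that commonly flags disparate impact.
```

Checks like this belong to both the developer and the operator: the developer can run them before release, and the operator can rerun them continuously as real-world data drifts.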

As AI and machine learning systems become more sophisticated, biases will continue to emerge, and they can have significant consequences. In politics, partisan bias is expected and generally tolerated; in machine learning, bias can be detrimental, producing unfair and even illegal actions, and in some settings creating dangerous conditions. It is essential to understand the potential consequences of biases in machine learning systems and their social and economic impacts.

Why should you care? Your digital footprint (social media interactions, posts, comments, videos, pictures, emojis, and, in my case, this article), small or large, is continuously analyzed by models. Their outputs are factored into critical decisions that can have devastating effects on you as a person: should you get hired, be approved for a loan, be admitted for emergency surgery, or be deemed date-worthy? In essence, these algorithms will decide (classify) whether you are "trustworthy" while they themselves are vulnerable to a myriad of bias afflictions, rendering them, in many cases, untrustworthy.

Dr. Ruth Starr

Program Analyst at GSA

2y

I agree with the AI/ML gender and skin color biases in the study. More shades of brown skin tones need to be added to be more inclusive of ethnicities across the spectrum. Aidan Anne Sneider Jackie French Asif Haider

Asif Haider

COO at AxeGENAI | Host of AYAYAYai Podcast | AI/ML Upskilling, Lifecycle, Workshop Facilitator, Prototype Developer, Auditor, Assessment | Agile | CMMC | Ethical Hacker | Data ETL | Career Readiness Coach at FourBlock |

2y

Bassel Haidar, excellent point! Also chuckled at the idea that neural network "weights and biases" can help with these types of biases. Sharing for others' awareness.

Christina Ward

Consultant at Guidehouse

2y

Great, thoughtful article drawing links between social and computer science regarding the myth of "unbiased" networks, for both humans and AI entities.
