What Does Artificial Intelligence Have to Do with Unconscious Bias?

More and more organizations are incorporating AI into their systems, from facial recognition software to healthcare allocation and everything in between. And while AI has arguably allowed certain organizational practices to operate more smoothly, we must remember that human-created AI software inevitably reflects human flaws, including our unconscious biases. Let’s look at a few examples:

1. Amazon and Gender Bias

In the past, Amazon created not one but two AI systems to assess résumé submissions. A seemingly innocent application, yet both of these AIs taught themselves gender bias. Historically speaking, the tech industry has been dominated by men, and although demographics have been changing in recent years, the AI was “trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period,” the majority of which came from men. Because of this historical gender imbalance, the AI began “penaliz[ing] résumés that included the word ‘women’s,’ as in ‘women’s chess club captain,’” pushing those candidates lower in the hiring rankings. It also demoted candidates who attended all-women colleges.

Amazon’s second AI had a similar issue, also born from the historical gender imbalance in the tech industry: “the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as ‘executed’ and ‘captured.’” In short, a historically higher number of résumés from men led these AIs to teach themselves that résumés from women were flawed. Bizarre, isn’t it? Although such consequences were unintentional, these systems were built and trained by humans, and so they absorbed humans’ unconscious gender bias into their workings.
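To make the mechanism concrete, here is a minimal, hypothetical sketch of how a résumé screener can absorb this kind of bias from historical data. It assumes Python with scikit-learn, and the résumés, tokens, and hiring labels are all invented for illustration; this is not Amazon’s actual system or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" résumés: past hires (label 1) skew male, so the
# token "women" (from "women's") appears only in rejected résumés (label 0).
resumes = [
    "executed backend migration captured market data",    # hired
    "executed deployment pipeline led platform team",     # hired
    "captured requirements executed product roadmap",     # hired
    "women's chess club captain built data pipeline",     # rejected
    "women's coding society lead shipped user features",  # rejected
    "maintained data pipeline shipped user features",     # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()  # the default tokenizer drops the possessive "'s"
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# No one told the model to penalize "women"; it learned a negative weight
# for that token purely from the historical imbalance in the labels.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:+.3f}")  # negative
```

The same idea scales to real audits: inspect a model’s learned weights or feature attributions for proxies of protected attributes before trusting its rankings.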

2. AI and Racial Bias

One of the most pervasive issues afflicting AI systems is unconscious racial bias. The list of examples feels never-ending, but two of the most life-threatening instances of AI’s unwitting racial bias appeared in systems that calculated prison recidivism and healthcare allocation.

“Recidivism” refers to the tendency of a convicted criminal to reoffend. An AI created by Northpointe, Inc. was meant to assess how likely an incarcerated person was to reoffend, but the software was unwittingly shaped by racial bias. The program misclassified Black defendants as “higher risk” recidivists at twice the rate of white defendants; misclassified white defendants who did reoffend as “low risk” at nearly twice the rate of Black defendants; and “even when controlling for prior crimes, future recidivism, age, and gender,” the system falsely determined that “[B]lack defendants were 45 percent more likely to be assigned higher risk scores than white defendants.” Again, this bias was wholly unintentional! But we must remember that the lack of intent does not negate the harm this software facilitated by perpetuating misleading stereotypes of Black violence.
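These disparities are exactly what a group-wise error audit surfaces. Below is a minimal sketch of that audit in Python; every record is invented and deliberately exaggerated for illustration, so the numbers are not the real risk-assessment data.

```python
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("Black", True,  False), ("Black", True,  False),
    ("Black", True,  True),  ("Black", False, True),
    ("white", True,  False), ("white", False, True),
    ("white", False, True),  ("white", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, flagged, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        c["fn"] += int(not flagged)  # rated low risk but DID reoffend
    else:
        c["neg"] += 1
        c["fp"] += int(flagged)      # flagged high risk but did NOT reoffend

for group, c in counts.items():
    print(f"{group}: false-positive rate {c['fp'] / c['neg']:.0%}, "
          f"false-negative rate {c['fn'] / c['pos']:.0%}")
```

In the invented numbers above, Black defendants who never reoffended are flagged high risk at twice the white rate, while white defendants who did reoffend are cleared as low risk at twice the Black rate: the same shape of disparity the investigation reported.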

AI racial bias also manifested in a healthcare allocation system. Researchers from UC Berkeley discovered that an AI allocating care to 200 million people was assigning lower risk scores to people in the Black community, even though Black patients were “statistically more likely to have comorbid conditions and thus… experience[d] higher levels of risk” related to health issues. The consequence? Black patients received a lower standard of care, which decreased their access to necessary treatments and ultimately put their lives at risk. When we think of healthcare and medicine, our first association should be life-saving treatment, not a higher risk of death.
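One way such a disparity shows up in an audit is to hold the algorithm’s score fixed and compare how sick patients in each group actually are. Here is a minimal, hypothetical sketch of that check; the patient records are invented, not data from the real allocation system.

```python
# Each record: (group, algorithm_risk_score, number_of_chronic_conditions)
patients = [
    ("Black", 5, 4), ("Black", 5, 5), ("Black", 5, 3),
    ("white", 5, 2), ("white", 5, 1), ("white", 5, 2),
]

burden_by_group = {}
for group, score, conditions in patients:
    burden_by_group.setdefault(group, []).append(conditions)

# Patients with the SAME score should be comparably sick. If one group
# carries a heavier disease burden at that score, the score understates
# that group's real risk, and care allocated by score will be skewed.
for group, conditions in burden_by_group.items():
    avg = sum(conditions) / len(conditions)
    print(f"{group}: average chronic conditions at risk score 5 = {avg:.1f}")
```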

3. Facial Recognition and Misgendering

Numerous studies have been dedicated to how AI recognition systems can result in racial profiling, but we must consider that gender- and sexuality-based discrimination walks right alongside it. AI recognition systems rely on simplistic assumptions to “determine” a person’s gender, which can leave individuals more vulnerable to transphobia and gender-based discrimination, regardless of whether they are transgender. How? Well, keep in mind that AI “uses information such as… whether or not a person wears makeup, or the shape of their jawline or cheekbones,” and so forth to “determine” that person’s gender. However, basic logic tells us that a person with a squarer jawline, for example, is not necessarily going to be a man. As a result, AI technology misgenders both transgender and cisgender people. Additionally, AI facial recognition operates on a binary: man or woman. This dichotomy erases nonbinary identities, especially people who do not place themselves on the male-female spectrum at all.
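To see why the binary itself is the problem, consider this minimal, hypothetical sketch of a gender classifier’s decision rule. The features, weights, and threshold are invented; real systems are far more complex, but they share the same two-label output space.

```python
# A hypothetical sketch of why a binary gender classifier misgenders by
# design: whatever features come in, it must emit one of exactly two labels.
LABELS = ["man", "woman"]  # the entire output space; no other option exists

def classify_gender(makeup_score: float, jawline_squareness: float) -> str:
    """Invented stand-in for a trained model's decision rule."""
    woman_score = 0.7 * makeup_score - 0.5 * jawline_squareness
    return LABELS[int(woman_score > 0)]  # forced choice between two labels

# A cisgender woman with a square jaw who wears no makeup is labeled "man",
# and a nonbinary person cannot be labeled correctly under any input.
print(classify_gender(makeup_score=0.0, jawline_squareness=0.8))  # -> man
```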

While this type of software can be and has been used on a broad social level, such as in security cameras or other identificatory practices, the gender- and sexuality-based bias perpetrated by AI can harm people on more personal levels, too. Giggle, for example, is a “girls-only” social media and networking app where, to register, people must upload a selfie that is evaluated by an AI called Kairos to determine whether they are “actually” a girl. This software not only risks excluding trans women; it also risks excluding cisgender women who don’t wear makeup or who don’t appear “traditionally” feminine in other ways. As a result, AI recognition software perpetuates unconscious gender bias born from an understanding of “gender” as strictly male or female. In doing so, it harms not only the LGBTQ+ community but also anyone who does not appear “traditionally” masculine or feminine, including many cisgender people.

What Now?

AI is an exciting realm full of opportunity; I won’t deny that. AI may also one day make the world an easier, more accessible place for people of all identities and abilities. All the same, artificial intelligence remains artificial. It is created by humans, meaning the likelihood of AI being free from human bias anytime soon is low. In other words, when we see AI incorporated into any level of an organization, from the corporate world to healthcare, we must always keep in mind the human biases these artificial systems may unintentionally perpetuate.


Dima Ghawi is the founder of a global talent development company with a primary mission of advancing individuals in leadership. Through keynote speeches, training programs, and executive coaching, Dima has empowered thousands of professionals across the globe to expand their leadership potential. In addition, she provides guidance to business executives to develop diversity, equity, and inclusion strategies and to implement a multi-year plan for advancing quality leaders from within the organization.

Reach her at DimaGhawi.com and BreakingVases.com.
