Unconscious Bias and AI: Navigating the Intersection of Technology and Human Prejudice

Dawn Grzena and Megan Mundell

Unless you haven't logged into anything in the last year, we are confident you have heard of the burgeoning field of Artificial Intelligence, or AI. This is not a new concept; AI for good and evil has long appeared in fiction, movies, conversations, and, yes, in videos across the various platforms out there. Stories have typically presented AI as following its programming: that is, the code humans have written for it to follow.

In recent years, artificial intelligence (AI) has made remarkable strides, transforming industries and altering the way we live, work, and interact. However, as AI systems become increasingly integral to our daily lives, a critical issue has come to the forefront: unconscious bias. This invisible force can significantly impact the effectiveness and fairness of AI systems, often in ways that are not immediately apparent. Understanding and addressing unconscious bias in AI is crucial for developing ethical, equitable, and reliable technologies.

Ever wonder what the impact is of having humans, with all of our inherent biases, write the code for AI? We did, and then we began to think about the lack of dialogue on how unconscious bias impacts AI. Before we dive in, note that we leveraged AI for this blog along with our human brains. Any component that is strictly AI-generated is marked with "AI" at the beginning.

AI from AI’s perspective (Google Gemini)

OK, let’s get started! First, let’s define AI and unconscious bias.

“AI” states that Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The core goal of AI is to enable machines to perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, solving problems, and making decisions.

AI can be categorized into the following main types:

Capability-Based Types:

  • Narrow AI (ANI): This is the most common type of AI we have today. It is designed to perform a specific task, such as facial recognition, playing chess, or driving a car. It excels at its designated task but lacks the ability to perform other tasks.
  • General AI (AGI): This hypothetical type of AI would possess intelligence equal to a human being. It would be able to understand, learn, and apply knowledge across a wide range of areas.
  • Super AI (ASI): This is another hypothetical type of AI that would surpass human intelligence. It would be capable of solving complex problems, making decisions, and learning at a superhuman level.

It's important to note that while we have made significant progress in Narrow AI, both AGI and ASI remain largely theoretical at this point.

Functionality-Based Types:

  • Reactive Machines: These AI systems can only respond to the present situation without considering past experiences or future consequences. Examples include early chess-playing computers.
  • Limited Memory: These AI systems can use past data to inform their decisions. Self-driving cars, for instance, use this type of AI to make real-time adjustments based on traffic conditions and road data (the sketch after this list contrasts these first two types).
  • Theory of Mind: This is a hypothetical AI that would understand and respond to human emotions, beliefs, and intentions. It would require a deep understanding of human psychology.
  • Self-Aware: This is the most advanced and hypothetical type of AI, possessing consciousness and self-awareness. It would be able to understand its own existence and goals.
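To make the first two functionality types concrete, here is a minimal Python sketch contrasting a reactive policy, which decides from the present input alone, with a limited-memory one, which keeps a short window of past observations. The cruise-control scenario and every name in it are our own illustrative assumptions, not drawn from any real system.

```python
# Illustrative contrast between the two non-hypothetical functionality types.
# The cruise-control scenario and all names are invented for this sketch.

from collections import deque


def reactive_policy(current_speed: float, limit: float) -> str:
    """Reactive machine: decides from the present input alone."""
    return "brake" if current_speed > limit else "accelerate"


class LimitedMemoryPolicy:
    """Limited memory: keeps a short window of past observations."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)

    def decide(self, current_speed: float, limit: float) -> str:
        self.history.append(current_speed)
        avg = sum(self.history) / len(self.history)
        # Past data informs the decision: a rising average speed
        # triggers braking before the limit is actually exceeded.
        return "brake" if avg > limit * 0.95 else "accelerate"


if __name__ == "__main__":
    print(reactive_policy(70, limit=65))   # reacts only to this instant: brake
    agent = LimitedMemoryPolicy()
    for speed in (55, 58, 61, 64, 66):
        print(agent.decide(speed, limit=65))  # history shifts the decision
```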

Defining unconscious bias

Now let's define unconscious bias. Unconscious bias, also known as implicit bias, refers to the attitudes and stereotypes that color our decisions and actions without our awareness. Candidly, everyone has unconscious biases about various social and identity groups, so, yes, this applies to all of us.

These biases are embedded in our subconscious and often contradict our conscious beliefs. They can influence how we see others and make decisions without us even realizing it.

Unconscious bias can be characterized in at least four ways:

  • Automatic: They occur automatically, without conscious awareness.
  • Unintentional: They are involuntary and unaligned with our conscious beliefs or declared intentions.
  • Universal: Everyone possesses unconscious biases, and no one is exempt from them.
  • Shaped by Experience: They are developed over time through exposure to direct and indirect messages, such as media portrayals, family influences, and other cultural and societal interactions.

The thing about unconscious bias is in the name. It's unconscious. To be clear, from our perspective, that is not an out for anyone. It's time for us to acknowledge not only that it exists but that it must be discussed and fought against. Unlike explicit bias, which is deliberate and conscious, unconscious bias operates automatically and is often rooted in deep-seated beliefs and societal norms. These biases can influence everything from hiring decisions to medical diagnoses, often disadvantaging certain groups based on race, gender, age, or other characteristics. Addressing these biases involves recognizing their existence, continuously challenging our assumptions, and implementing systematic changes in how decisions are made.

Most seem to understand how unconscious bias impacts the world at large, and yet it is still quite prevalent. We see this in civil unrest and in attitudes about politics, race, gender, and sexuality. In recent years, we have quickly become an angry society with undertones of surrender, with many thinking they can't do anything to change the norm. (Note: not true, but we'll get to that in another blog.) The COVID-19 pandemic and its shutdowns caused a lot of deeply rooted beliefs to spill into the streets, as did the Black Lives Matter protests following the deaths of Breonna Taylor and George Floyd. A recent example of the anger directed at anyone who doesn't fit another's 'norm' is the rock star who shot up a case of beer over a picture on a beer can. This last example showcases how thin-skinned some people are and how quickly it can turn to violence. Think about this for a moment: a trans woman on a beer can prompted someone to shoot a case of beer. We recall thinking, in our bewilderment: how did this hurt you? We live in a country (the US) where we are free to be who we are (as long as we don't harm others) and to have our own opinions and thoughts. Yet this freedom is quickly vanishing.

While these examples might not seem important, they do show how we are interacting in our society. And this same society is writing the code for AI. When we asked the AI, "Are you biased?" the response was:

(AI): As an AI, I don’t have personal feelings or beliefs, but the algorithms that power me can exhibit biases based on the data they’ve been trained on. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available information. These sources can include inherent biases, which might be reflected in the output I generate.

Efforts are continually made to minimize these biases by using diverse training datasets and refining algorithms to be as neutral and fair as possible. It’s important for users to critically assess AI-generated information and consider multiple perspectives, especially on complex issues.
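To make that first mitigation concrete, here is a minimal sketch of dataset reweighting: giving an under-represented group enough weight that it contributes to training as much as the majority does. The inverse-frequency formula matches the one scikit-learn uses for its "balanced" class weights; the group labels and counts are made up for illustration.

```python
# Minimal sketch of one bias mitigation: reweight training examples so an
# under-represented group contributes as much to the loss as the majority.
# Group labels and counts are invented for illustration.

from collections import Counter

samples = ["A"] * 900 + ["B"] * 100   # group label for each training example
counts = Counter(samples)
n, k = len(samples), len(counts)

# Inverse-frequency weights: weight = n / (k * count_of_group),
# the same formula scikit-learn uses for class_weight="balanced".
weights = {g: n / (k * c) for g, c in counts.items()}
print(weights)  # {'A': 0.555..., 'B': 5.0}

# Each group now carries equal total weight in training:
assert abs(900 * weights["A"] - 100 * weights["B"]) < 1e-9  # both equal 500
```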

A recent example of potential unconscious bias within AI is Scarlett Johansson's voice and ChatGPT's "Sky" personal assistant. While it can be debated whether this was deliberate, there is evidence that the "Sky" assistant sounded not only like Johansson but like other actresses as well. According to reports, notably from NPR, a voice analysis found Sky's voice to be more similar to Johansson's than to 98% of the other actresses sampled. This came after she declined the opportunity to have her voice used for the "Sky" assistant. It makes us wonder: if this is accurate, did the engineers do this because of their unconscious bias toward Johansson's voice? There are times when unconscious bias might be seen as a compliment, but to be clear, without express consent from Johansson, this is unacceptable. A key question for legal experts will undoubtedly be whether the similarity between ChatGPT's voice and Scarlett Johansson's was intentional or the result of unconscious bias. (Reference: Voice analysis shows striking similarity between Scarlett Johansson and ChatGPT, NPR.)

If we delve into this further, there are implications that even NASA and other space organizations are weighing. It is no secret that NASA and most countries' space agencies are looking into leveraging AI for space travel. AI technologies have the potential to enhance spacecraft autonomy, optimize mission planning, improve data analysis, and assist in decision-making during missions. In fact, NASA has been using AI for years, not only to support missions through projects such as autonomous rovers, spacecraft navigation, and data analysis, but also to facilitate research and innovation in space travel. (Ref: Artificial Intelligence, NASA.)

According to Michio Kaku, a well-known theoretical physicist and futurist, there are various ways we can leverage AI for space exploration. He often discusses how AI, along with robotics and nanotechnology, will transform future space missions and could be integral in exploring and settling on other planets.

Kaku has highlighted that AI could be used to handle the complexities of interstellar travel and the management of life-support systems in space environments, which are too intricate for humans to operate manually over long durations. He envisions AI as being crucial for automating the monitoring and maintenance systems aboard spacecraft, ensuring that life-supporting conditions are maintained during long-term missions.

Let’s take a moment to think about this: NASA and other space agencies are leveraging AI for space travel and exploration. If we decide to send an AI-driven robot, or autonomous robot, on a mission to interstellar space, this robot could be the first contact that we earthlings have with another planetary species. We’ll give you a minute to let that sink in.

Why Does This Matter?

Unconscious bias is present in everyone, including the humans who code AI. To prevent this bias from influencing AI behavior, we need not only to check the data AI is trained on but also to facilitate proper testing of AI's decision-making.

The National Institute of Standards and Technology (NIST) recently released a document (NIST.AI.600–1.GenAI-Profile.ipd.pdf, April 2024) outlining best practices for responsible AI development. It highlights the growing awareness of the need for ethical AI, an awareness that arrived only after companies like OpenAI and Meta had already launched many large language models. Just as companies often offer diversity training to new employees, addressing bias requires continuous effort throughout the AI development lifecycle.

Think we are making this up? Check this out:

  • Hiring Tools: Some AI-driven hiring tools have shown bias against women. For instance, an AI system used by a large company was reportedly biased against female applicants because it was trained on historical hiring data that favored male candidates, thus learning to replicate these preferences.
  • Healthcare Algorithms: Research has shown that some AI algorithms used in healthcare have favored white patients over Black patients in prioritizing care management. One particular algorithm was found to be less likely to refer Black patients to programs designed for patients with complex medical needs than white patients with similar health conditions.
  • Credit and Lending Algorithms: There have been instances where algorithms used in determining creditworthiness have inadvertently discriminated against minority groups. This can happen when the AI systems use zip codes or other location data, which can correlate with racial demographics, inadvertently incorporating historical biases related to these areas into their decision-making (the sketch after this list shows how such a proxy can launder bias).
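The credit example is easy to demonstrate in code. In the minimal sketch below, the protected attribute is never given to the model, yet approvals still split along group lines because zip code acts as a proxy for it. Everything here is invented for illustration: the synthetic population, the zip-code clustering, and the biased historical labels; scikit-learn's logistic regression stands in for whatever model a lender might actually train.

```python
# Minimal sketch of proxy discrimination: the protected attribute is never
# a feature, yet the model's approvals still split along group lines because
# zip code correlates with group membership. All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic world: group membership correlates with zip code,
# and the historical approvals (the labels) were themselves biased.
group = rng.integers(0, 2, n)                  # 0 or 1, never shown to model
zip_code = np.where(group == 1,
                    rng.integers(0, 5, n),     # group 1 clustered in zips 0-4
                    rng.integers(5, 10, n))    # group 0 clustered in zips 5-9
income = rng.normal(50, 10, n)                 # identical across groups
approved = (income + 15 * (group == 0) + rng.normal(0, 5, n)) > 55

# Train only on zip code and income -- no protected attribute in sight.
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

# The bias reappears anyway, laundered through the zip-code proxy.
for g in (0, 1):
    print(f"group {g}: approval rate {preds[group == g].mean():.2%}")
```

Dropping the protected column is clearly not enough: the historical bias rides back in on whatever feature correlates with it.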

These examples show the importance of carefully considering and testing AI systems to ensure they do not perpetuate existing biases or create new forms of discrimination, both now and down the line as additional models are developed, a process called "bottlenecking," where a single point of discrimination or exclusion replicates itself downstream in other applications (NIST.AI.600–1.GenAI-Profile.ipd.pdf, pg. 9). Addressing these biases involves using diverse training datasets, regularly auditing and updating AI systems, and implementing ethical guidelines to govern AI development and deployment.

When AI is coded without diversity and without understanding the impacts, it can lead to a reputation crisis. For example, in the EEOC's lawsuit against iTutorGroup, the company was accused of using AI recruitment software that automatically rejected older applicants, a clear example of age discrimination. The case resulted in a settlement of $365,000, and it highlights the growing concern about AI's potential to amplify existing societal biases. As AI becomes increasingly integrated into our lives, we will likely see more lawsuits of this nature, reflecting growing legal scrutiny over how AI and machine learning systems affect fundamental rights and comply with existing anti-discrimination laws. (EEOC Settles Over Recruiting Software in Possible First Ever AI-related Case, Akin Gump.)

There are also risks surrounding the inferences an AI model can make about ordinary people, even without specific information in its training data or in the end user's inputs. For example, NIST highlights that generative AI models can "[Stitch] together information from a variety of disparate sources. This might include automatically inferring attributes about individuals, including those the individual might consider sensitive (like location, gender, age, or political leanings)" (NIST.AI.600–1.GenAI-Profile.ipd.pdf, pg. 6, rows 19–22). This can easily lead to unconscious bias in outputs and must be carefully controlled. These risks also demand a high level of media literacy from end users, as it empowers them to critically evaluate AI-generated information and identify potential biases.

Now that we have raised the alarm on unconscious bias (as many have before us), what now? How do we address and mitigate this risk? Well, the first step, as with any problem, is admitting that we have one. Once we complete that step, we need to:

  • Ensure diverse and representative data: Training AI models on diverse datasets can help mitigate bias.
  • Regularly audit and test AI systems: Identifying and addressing biases early on is essential (see the audit sketch after this list).
  • Implement transparency and explainability: Understanding how AI systems make decisions can help uncover biases.
  • Promote ethical AI development: Developing guidelines and standards for ethical AI can help prevent harmful outcomes.
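As a concrete example of the auditing bullet above, here is a minimal sketch of a disparate-impact check. It compares selection rates across groups and applies the "four-fifths" rule of thumb, flagging any group whose rate falls below 80% of the best-treated group's. The function name, threshold, and example data are our own illustrative choices, not a standard tool.

```python
# Minimal audit sketch: compare selection rates across groups and apply the
# four-fifths rule of thumb (a rate below 80% of the best-treated group's
# rate gets flagged). Names and data are illustrative, not from a real tool.

from collections import defaultdict


def disparate_impact_report(decisions, groups, threshold=0.8):
    """decisions: iterable of 0/1 outcomes; groups: parallel group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    for g, rate in sorted(rates.items()):
        ratio = rate / best if best else 0.0
        flag = "FLAG" if ratio < threshold else "ok"
        print(f"{g}: selection rate {rate:.2%}, ratio {ratio:.2f} [{flag}]")


# Example: a hiring model's decisions for two applicant pools.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
disparate_impact_report(decisions, groups)   # group B falls below 0.80: FLAG
```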

We live in a tough, ever-evolving world. AI, in our opinion, is a welcome addition and a tool we can leverage not only for our jobs but also for everyday life, art, and problem-solving. It can unleash our imagination, reduce repetitive tasks, and complete data analysis quickly. Yet it can make mistakes, be biased, and needs to be checked. In the words of Ronald Reagan, “Trust but verify.”

Until next time, Dawn & Megan

