Navigating Algorithmic Bias in the Age of Artificial Intelligence
Shantha Mohan Ph.D.
III, CMU SV :: Author, Leadership Lessons with The Beatles :: Cofounder, Retail Solutions (now part of Circana) :: Mentor :: Author, "Roots and Wings" :: DTM :: Non-Profit Board Experience
"We can at least try to understand our own motives, passions, and prejudices, so as to be conscious of what we are doing when we appeal to those of others. This is very difficult, because our own prejudice and emotional bias always seems to us so rational." — T. S. Eliot
A close family member, a person of color, experienced a distressing instance of implicit bias and stereotyping when someone looked at her and her baby and asked whether she was the child's nanny. The assumption that a person of color with a fair-skinned child must be a nanny rather than the child's parent is based on preconceived notions and stereotypes about race, class, and caregiving roles. It reveals underlying biases that link race and socioeconomic status with specific roles or relationships, often without conscious intent.
Wikipedia defines bias as follows:
"Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is inaccurate, closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error. Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average."
While one may be unable to avoid many deep-rooted biases, one can try to minimize them by becoming self-aware, listening to diverse perspectives, and being mindful. One can reduce bias by using structured processes and well-defined criteria in decision-making. There are also bias-training programs one can participate in, such as the many unconscious-bias courses available online through Coursera.
Algorithmic Bias
"AI is good at describing the world as it is today with all of its biases, but it does not know how the world should be." — Joanne Chen, Partner, Foundation Capital, at SXSW 2018
While human bias has been studied extensively, we now find ourselves rushing toward coexistence with AI. In 2025, we are talking about AI agents working on problems alongside humans, and these agents are systems that can introduce algorithmic bias into their actions.
Wikipedia defines algorithmic bias as systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the algorithm's intended function.
Recent examples of algorithmic bias have surfaced in facial recognition, hiring tools, and medical diagnostics. When these biases come to light, the companies whose systems produced them find their image and reputation tarnished, and the impact on the individuals affected can be devastating.
The article, 8 Shocking AI Bias Examples, lists several examples of bias in AI solutions. One of these is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, which was designed to predict the likelihood of a defendant reoffending. An investigation by ProPublica found that the algorithm was biased against Black defendants: it was more likely to incorrectly classify Black defendants as high risk than white defendants. As a consequence, Black individuals could receive harsher sentencing and parole decisions.
Sources of Algorithmic Bias
Algorithms act on data, and if the data used to train an AI algorithm is biased, so are the outcomes. Data inherently carries bias because humans created it over time. That does not mean human biases need to be further propagated or amplified by computers.
Developers who create the algorithms can also introduce bias through flawed assumptions and reasoning.
Additionally, the feedback mechanisms built into the machine-learning process can amplify bias. For example, suppose an algorithm is learning from hiring data that contains fewer women than men (which has been true historically). It might then associate male candidates with greater aptitude for engineering roles, which leads to more male hires, which are then added to the existing data. The algorithm sees an even stronger correlation between being male and engineering aptitude, and the amplification continues, as the sketch below illustrates.
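To make the feedback loop concrete, here is a minimal simulation. It is an illustrative sketch only: the historical hire rates, the weight the model places on its learned group prior, and the decision threshold are all assumptions, not real hiring data.

```python
import random

# A toy simulation of bias amplification through a feedback loop.
# All numbers (hire rates, score weight, threshold) are illustrative
# assumptions, not real hiring data.

random.seed(1)

def hire_rate(prior, weight=3.0, threshold=1.5, n_applicants=5000):
    """Applicants in every group draw qualifications from the same
    distribution, but the model adds `weight * prior` -- its learned
    association between the group and past hires -- before deciding."""
    hires = sum(random.gauss(0, 1) + weight * prior > threshold
                for _ in range(n_applicants))
    return hires / n_applicants

# Historical hire rates: 70% for men, 30% for women.
prior = {"men": 0.70, "women": 0.30}

for rnd in range(1, 7):
    # The model is re-trained on its own latest decisions, so this
    # round's hire rate becomes next round's group prior.
    prior = {g: hire_rate(p) for g, p in prior.items()}
    print(f"round {rnd}: men={prior['men']:.2f} women={prior['women']:.2f}")
```

With these assumed parameters, the hire rate for men drifts upward and the rate for women drifts downward round after round, even though both groups draw qualifications from the same distribution: the model's own decisions become its next training data.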
Types of Algorithmic Bias
The most basic type is bias in the historical data used for training.
In addition, if the training data is not chosen properly, sampling bias is introduced. For example, if the population included in a health study is predominantly healthy people, the outcomes will reflect that skew.
If the data collected does not represent what you actually intend to measure, it introduces misalignment bias. For example, an employer might want to predict employee productivity but use keyboard activity as a proxy, even though it does not reflect actual productivity.
Another type, aggregation bias, occurs when data is collected across diverse groups without accounting for the differences between sub-groups. A typical example is medical data pooled across men and women. Symptoms can differ markedly between the sexes, for instance, in heart attacks. A model that does not account for this will underperform or produce ineffective outcomes for women, as the sketch below illustrates.
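A small synthetic sketch shows how aggregation bias hides inside pooled evaluation. The score distributions below are assumptions meant to mimic "atypical" symptom presentation, not real medical statistics.

```python
import random

# Aggregation-bias sketch on synthetic data. The symptom-score
# distributions are illustrative assumptions, not medical statistics.

random.seed(7)

def make_patients(n, mean_pos):
    """Half positive, half negative; positive cases have symptom
    scores centered at `mean_pos`, negative cases at 0."""
    pos = [(random.gauss(mean_pos, 1), 1) for _ in range(n // 2)]
    neg = [(random.gauss(0, 1), 0) for _ in range(n // 2)]
    return pos + neg

# Positive men present with high scores on this feature; positive
# women present with lower ("atypical") scores.
men = make_patients(1000, mean_pos=3.0)
women = make_patients(1000, mean_pos=1.0)

threshold = 1.5  # a single cutoff fit to the pooled data

def recall(patients):
    """Fraction of true positives the threshold actually detects."""
    positives = [score for score, label in patients if label == 1]
    return sum(score > threshold for score in positives) / len(positives)

print(f"recall for men:   {recall(men):.2f}")    # roughly 0.93
print(f"recall for women: {recall(women):.2f}")  # roughly 0.31
```

The pooled numbers can look respectable while the model misses most positive cases among women, which is why metrics should be evaluated per subgroup, not just in aggregate.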
Identifying Biases in Algorithms
If you have inherited an algorithm or use a third-party tool, you can still identify bias through several actions. Even when the system is a black box, you can audit its outputs by comparing outcome rates across demographic groups and flagging large disparities, as the sketch below shows.
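Here is one such audit in miniature: compare the model's positive-outcome rates across groups and compute the disparate-impact ratio. The decision records below are hypothetical, and the 0.8 cutoff is a common rule of thumb drawn from US employment practice, not a universal standard.

```python
from collections import defaultdict

# Black-box audit sketch: compare positive-outcome rates across
# groups. The decision records below are hypothetical examples.

decisions = [
    # (group, model_said_yes)
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False),
]

counts = defaultdict(lambda: {"yes": 0, "total": 0})
for group, said_yes in decisions:
    counts[group]["total"] += 1
    counts[group]["yes"] += said_yes  # bool counts as 0 or 1

rates = {g: c["yes"] / c["total"] for g, c in counts.items()}
for g, r in rates.items():
    print(f"{g}: positive rate {r:.2f}")

# Disparate-impact ratio: lowest rate divided by highest rate.
# The "four-fifths rule" flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
flag = "  <- below 0.8, investigate" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```

The same comparison works on any system that emits decisions, which makes it a practical first check on inherited or third-party tools.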
Preventing Biases in Algorithms
If you are lucky enough to create your own algorithm from scratch, you can be proactive about minimizing bias throughout the algorithm's lifecycle, from data collection and model design through testing and continuous monitoring. One data-stage example follows.
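At the data-collection stage, you can check group representation before training and rebalance when a group is underrepresented. This is a minimal sketch: the record format is hypothetical, and random oversampling is only one of several rebalancing options (reweighting and targeted data collection are others).

```python
import random

# Pre-training representation check and rebalance. The record format
# is hypothetical; oversampling is one of several rebalancing options.

random.seed(3)

def rebalance(records, group_key):
    """Randomly oversample smaller groups so every group appears as
    often as the largest one."""
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(recs) for recs in by_group.values())
    balanced = []
    for recs in by_group.values():
        balanced.extend(recs)
        balanced.extend(random.choices(recs, k=target - len(recs)))
    return balanced

training = ([{"group": "a", "label": 1}] * 80
            + [{"group": "b", "label": 0}] * 20)
balanced = rebalance(training, "group")
print(len(balanced))  # 160: both groups now contribute 80 records
```

Oversampling is shown here because it is simple; reweighting samples during training achieves a similar effect without duplicating records.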
Leading without Bias
There have been many discussions about how leaders can lead without traditional human bias; self-awareness, surrounding oneself with people who bring diverse perspectives, and using structured decision-making processes all help. Today, leaders must also deal with another type of bias — algorithmic bias introduced by AI. They can mitigate it by championing deliberate attention to the problem, much as they do with security. By carefully planning, developing, and testing algorithms, and by continuously evaluating them, leaders can reduce the impact of algorithmic bias.
Note: This article was first published in CEOWORLD magazine.