Methods for Mitigating Predictive Analysis Bias
A father and his son are involved in a car accident. Sadly, the father dies at the scene. The son is taken to the hospital by ambulance. But the surgeon in the operating room looks at the boy, pulls away, and says, "I can't operate on this boy. He's my son."
How is this possible? The surgeon, of course, is the boy's mother. If the answer didn't come to you immediately, you're not alone: the riddle illustrates the unconscious bias in our brains. We bake the same tendencies into the AI models we develop, whether we realize it or not. Once a model is trained on partial data, using algorithms created by developers with implicit biases, those biases are very hard to reverse. And this can be detrimental to businesses and society as a whole. Let's go deeper into the world of predictive bias to see what we can do to mitigate it.
Bias in Predictive Analysis
Companies are turning to machine learning in all facets of life, from profiling criminal risk assessment scores and healthcare risk data to recruiting new talent and evaluating credit for loans. However, these models are susceptible to the "garbage in, garbage out" syndrome, and biased data is garbage in this case. When biased data is used for predictive modeling, the machine learning model will only propagate and amplify those inequalities. Here are a few famous instances of bias-driven predictive analysis:
Amazon Hiring
One of the most famous cases of bias in predictive analysis is Amazon's failed recruitment program [1], which taught itself to prefer male candidates for software development and other technical posts because Amazon trained it mostly on men's resumes. It also penalized resumes that included the word "women's" ("women's club captain," etc.).
Healthcare Risk Score
Researchers found [2] that a healthcare algorithm designed to predict which patients needed extra care was biased against African Americans because it used past healthcare costs as a proxy for patient risk. Due to structural inequalities in the US healthcare system, African American patients generated lower costs than Caucasian patients with comparable conditions. As a result, the algorithm predicted that they were less sick even when that was not the case.
Criminal Justice
In the US criminal justice system, a risk assessment score predicts how likely an individual is to commit another crime. A study conducted by ProPublica [3] discovered that the algorithm underestimated a Caucasian defendant's re-offense risk and overestimated an African American defendant's. The algorithm was based on the defendant's age and the number of previously committed crimes, and it had no way to determine whether past arrests resulted from human or systemic bias, which led to biased predictions.
How to mitigate biases
To avoid biases in their models, developers and researchers need to understand their root causes. Let's take a look at some ways of eliminating biases:
Be aware of biases
Various biases exist, and being consciously aware of them is the best way to mitigate them. Cognitive bias on the analyst's part leads to both sample and selection bias: analysts decide what data to include or exclude, creating sample bias, while data that does not represent the population leads to selection bias and skewed analysis.
Similarly, there are outlier and confirmation biases. Outlier bias occurs when a few data points differ significantly from the rest of the sample; it can be corrected by using the median, which is a closer representation of the whole data set than the mean. Confirmation bias, in contrast, usually comes into play when analysts interpret a dataset's results in a way that confirms what they already expect to find.
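The median's robustness to outliers is easy to demonstrate. Below is a minimal sketch using Python's standard library with a hypothetical income sample (the numbers are invented for illustration): a single extreme value drags the mean far from the typical observation, while the median barely moves.

```python
import statistics

# Hypothetical annual incomes (in $1000s); the 900 entry is an extreme outlier.
incomes = [42, 45, 47, 50, 52, 48, 46, 900]

mean_income = statistics.mean(incomes)      # pulled far upward by the outlier
median_income = statistics.median(incomes)  # stays near the typical value

print(f"mean:   {mean_income:.2f}")    # 153.75
print(f"median: {median_income:.2f}")  # 47.50
```

Here the mean (153.75) suggests an income level that not a single typical member of the sample actually has, while the median (47.5) remains representative of the group.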
Promote Team Diversity
It becomes easier to detect bias if the analysts and technicians are from varied backgrounds. Diversity of culture brings new ideas and fresh perspectives. To make this work, organizations must actively promote a culture of diversity, equity, and inclusion (DEI).
Establish Policies for Responsible Development
Standard checks and policies must be deployed to actively mitigate data bias at each stage of the algorithm development process. It is essential to ask how the data was gathered, whether its use is ethical, and who benefits from the data collected.
Monitor The Data
Any biases present in the data will eventually show up in the model. If an algorithm is designed to predict the ideal pet, and the sample only contains dog and cat owners, then the AI model will never recommend other kinds of pets. Analysts must consider whether the data in the sample creates misleading results and whether any critical data is missing.
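A simple representation check can catch the pet-owner problem above before training begins. The sketch below is a minimal illustration (the category lists and sample are hypothetical): it counts the categories present in the training sample and flags any expected category with no examples at all.

```python
from collections import Counter

# Hypothetical training sample for a pet-recommendation model.
training_pets = ["dog", "cat", "dog", "dog", "cat", "cat", "dog"]

# Categories the deployed model is expected to handle.
expected_pets = {"dog", "cat", "bird", "fish", "rabbit"}

counts = Counter(training_pets)
missing = expected_pets - counts.keys()

if missing:
    print(f"Warning: no training examples for: {sorted(missing)}")
# Warning: no training examples for: ['bird', 'fish', 'rabbit']
```

The same pattern scales to real pipelines: compare the distribution of every sensitive or categorical feature in the training set against the population the model will serve, and treat empty or badly underrepresented categories as blockers.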
Deploy Dummy Data Before Implementation
Deploying dummy data before implementing the algorithm is a safe option: if the data is skewed or contains bias, it can be rectified before the model reaches production. Running simulations only on the dataset that built the model inherits that dataset's limitations, so run as many simulations based on real-world scenarios as possible, and test against comparable data from external sources.
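One way to use dummy data as a pre-deployment probe is to generate synthetic applicants that are identical in every legitimate feature and differ only in a protected attribute, then check that the model scores them the same. The sketch below assumes a hypothetical `score` function standing in for a trained credit model; the group labels and income values are invented for illustration.

```python
# Hypothetical scoring model (stand-in for a trained credit model).
def score(applicant):
    # A fair model should depend only on income, never on the 'group' field.
    return 1 if applicant["income"] >= 50 else 0

# Dummy applicants: same income ladder for each group, so any difference
# in approval rate can only come from the group attribute.
groups = ["A", "B"]
dummy = [{"group": g, "income": inc} for g in groups for inc in range(30, 71, 5)]

approval_rate = {
    g: sum(score(a) for a in dummy if a["group"] == g)
       / sum(1 for a in dummy if a["group"] == g)
    for g in groups
}
print(approval_rate)  # equal rates across groups on this probe
```

If the rates diverge on such a probe, the model is reacting to the protected attribute (directly or through a proxy) and should be rectified before implementation. A passing probe is necessary but not sufficient: proxies correlated with the group in real data can still leak bias.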
Use Algorithms to Remove Bias
Counterintuitively, one of the best ways to remove algorithmic bias is to use algorithms themselves. Models can be run on biased data to detect and offset biased results. If done correctly, predictive modeling can even surface the biases present in human decision-making. There are many techniques [4] that developers and analysts can utilize to remove bias algorithmically.
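One well-known family of such techniques is pre-processing by reweighting (in the spirit of Kamiran and Calders' "reweighing" method): each training record gets a weight so that, under the weighted distribution, group membership and outcome become statistically independent. The sketch below uses an invented hiring history (60% of men hired vs. 20% of women) purely for illustration.

```python
from collections import Counter

# Hypothetical labeled records from a biased hiring history: (group, hired).
data = ([("M", 1)] * 60 + [("M", 0)] * 40 +
        [("F", 1)] * 20 + [("F", 0)] * 80)

n = len(data)
group_counts = Counter(g for g, _ in data)   # records per group
label_counts = Counter(y for _, y in data)   # records per outcome
pair_counts = Counter(data)                  # records per (group, outcome)

# Weight = expected joint frequency (if group and outcome were independent)
# divided by the observed joint frequency.
weights = {
    (g, y): (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
    for (g, y) in pair_counts
}
print(weights)
# Under-represented pairs like ("F", 1) get weights > 1,
# over-represented pairs like ("M", 1) get weights < 1.
```

Training a model with these sample weights (most libraries accept a `sample_weight` argument) nudges it toward the distribution where outcomes are independent of group. In this example, the weighted positive rate becomes 0.4 for both groups, where the raw data had 0.6 versus 0.2.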
Conclusion
Bias in predictive analysis can adversely impact the lives of the people affected by its outcomes: it can compromise their safety, infringe on civil liberties, allocate opportunities unfairly, and reinforce prejudices. From a business perspective, it can damage brand reputation and lead to inefficiencies, government penalties, or economic losses.
Although removing bias from predictive analysis is a difficult task, a proactive approach can reduce data bias and make the world a more equitable place.
References:
[1] Amazon scraps secret AI recruiting tool that showed bias against women, https://reut.rs/3nMHA7D
[2] Dissecting racial bias in an algorithm used to manage the health of populations, https://bit.ly/3AhJJx2
[3] Machine Bias, https://bit.ly/3KvLefL
[4] Preventing Machine Learning Bias, https://bit.ly/3IqfxD2
[5] Graph reference link, https://bit.ly/3qJfPi9