Real World Biases Mirrored by Algorithms
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of "AI ETHICS with BUDDHIST PERSPECTIVE" | Top 20 Global AI Ethics Leader | Thought Leader | Expert Member at Global AI Ethics Institute
We are going to take the bull by the horns and tackle first the most challenging type of algorithmic bias: cognitive bias, the bias caused by biased behaviour in the real world. It is the most difficult to address because it appears absolutely correct: the algorithm is statistically accurate and faithfully mirrors the real world.
What is cognitive bias?
"A cognitive bias is a person's tendency to make errors in judgment based on cognitive factors, and is a phenomenon studied in cognitive science and social psychology. Forms of cognitive bias include errors in judgement, social attribution, and memory that are common to all human beings. Presence of such biases drastically skew the reliability of anecdotal and legal evidence. These are brought to be based upon heuristics, or rules of thumb, which people employ out of habit or evolutionary necessity."
How do cognitive biases affect decision-making in machines?
A great deal of research has aimed to bridge the gap between artificial intelligence and human decision-makers in AI-assisted decision-making, where humans are not only the ultimate decision-makers in high-stakes decisions but also the consumers of AI model predictions.
However, the presence of cognitive biases such as confirmation bias, anchoring bias, and availability bias, to name a few, often distorts our perception and understanding of real-world scenarios.
Cognitive biases and their effects on decision-making are well known and widely studied. As AI-assisted decision making presents a new decision making paradigm, it is important to understand this new paradigm, both analytically and empirically.
In a collaborative decision-making setting, the perceived space represents the human decision-maker. Two interactions introduce cognitive biases into the perceived space:
1) Interaction with the observed space, which contains the feature space and all the information about the task acquired by the decision-maker.
2) Interaction with the prediction space, which represents the outcome of the AI model: either an AI decision or an explanation of the AI outcome.
Confirmation bias, availability bias, the representativeness heuristic, and bias due to selective sampling of the feature space are mapped to the observed space: they influence how the decision-maker perceives the data collected through interaction with that space.
Anchoring bias and the weak-evidence effect, on the other hand, are mapped to the prediction space.
Confirmation Bias:
From a psychological perspective, confirmation bias is explained as a cognitive process that perpetuates a specific mindset. Generalizations and stereotypes are often at the root of preconceptions.
When one is presented with information that challenges their preconceptions, they are likely to:
- Pay no attention to that information.
- Acknowledge the information, but label it as false.
- Acknowledge the information, but label it as an exception to the rule.
- Give more weight to information that confirms their preconceptions.
Driving While Black:
Confirmation bias occurs not only in the workplace but also in everyday life. Some officers may be racially prejudiced and consciously target minority drivers. The "driving while black" phenomenon describes a diffuse tendency to stop Black drivers at a higher rate than white drivers: the practice of targeting drivers of color, especially African Americans, for unwarranted traffic law enforcement. Minority citizens have argued that they are more likely to be singled out for traffic enforcement and are at greater risk of more invasive investigations.
Biases often have a "winner takes all" effect: an initial bias starts to tweak reality, and the effect becomes self-reinforcing and even self-fulfilling.
Let's consider a simple example: a company, ABC, has only 10% women employees and a "boys' club" culture that makes it difficult for women to succeed.
The hiring algorithm is trained on current data and, based on current employee success, scores women candidates lower.
The algorithm is fairly representing the fact that women have difficulty succeeding at this company because of the boys' club culture; the net consequence, however, is that women candidates receive lower scores.
The result is that the company ends up hiring even fewer women. This is an algorithmic vicious cycle: it arose from bias in the algorithm, and the bias in the algorithm came from the biased data it was trained on.
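The feedback loop above can be made concrete with a toy simulation. This is a hypothetical sketch, not taken from any real hiring system: the function name, the 10% starting share, and the `bias_penalty` factor are all invented for illustration. Each round, the model is "retrained" on the current workforce, so the score it assigns women tracks their shrinking representation, which in turn drives hiring further down.

```python
def simulate_hiring(rounds=5, women_share=0.10, bias_penalty=0.5):
    """Toy model of the algorithmic vicious cycle: the model's score for
    women tracks their share among 'successful' employees, and new hires
    follow the scores, amplifying the initial imbalance."""
    shares = [women_share]
    for _ in range(rounds):
        # The model, trained on biased data, scores groups in proportion
        # to their representation among current successful employees.
        score_women = women_share
        score_men = 1.0 - women_share
        # Hiring follows the scores, with an extra penalty from the
        # hostile culture; the workforce share of women shrinks.
        hired_women = score_women * bias_penalty
        women_share = hired_women / (hired_women + score_men)
        shares.append(round(women_share, 3))
    return shares

print(simulate_hiring())  # share of women falls every round
```

The point of the sketch is that nothing in the loop is "wrong" statistically; the model accurately reflects its training data each round, yet the outcome steadily worsens.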
Repeating the wrong answer thrice doesn't make it right
Faced with a situation like the one in the example above, it is crucial to first identify the elephant in the room: the algorithm is not biased; it is an unbiased mirror of a reality that is horribly flawed by human bias.
To mitigate such biases, it is not enough to fix the algorithm; it requires a conscious effort to fix our own biases and prejudices.
The challenge is that an algorithm that mirrors the biases of society now perpetuates the existing bias, fueling ever-increasing discrimination and injustice.
Here we are grappling with deep philosophical and ethical issues. What might seem right and just in one culture might not be considered right or ethical in another, and unfortunately there is no common agreement on what is universally right or wrong.
Researchers have made the following recommendations for mitigating algorithmic bias that stems from human biases:
- Consider a Bayesian framework for modelling biased AI-assisted decision-making, identifying well-known cognitive biases within the framework according to their sources.
- Allocate more time and focus to a decision, which reduces anchoring in AI-assisted decision-making.
- Formulate a time-allocation problem that maximizes human-AI team accuracy, accounting for the anchoring-and-adjustment heuristic and the variance in AI accuracy.
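One common way to formalize the anchoring-and-adjustment heuristic mentioned in the recommendations above (this particular model is my illustration, not one specified in the article) is to treat the human's final judgment as a weighted average of the AI's suggestion (the anchor) and the human's own independent estimate, with the anchor's weight shrinking as more time is allocated to the decision. The function name and the exponential decay rate below are assumptions for the sketch.

```python
import math

def adjusted_judgment(own_estimate, ai_anchor, time_spent, decay=0.5):
    """Anchoring-and-adjustment sketch: the final judgment is a weighted
    average of the AI anchor and the human's own estimate, where the
    anchor's weight decays exponentially with time spent on the decision."""
    w = math.exp(-decay * time_spent)  # anchor weight, in (0, 1]
    return w * ai_anchor + (1 - w) * own_estimate

# With no time spent, the judgment sits on the AI anchor;
# with ample time, it converges toward the human's own estimate.
print(adjusted_judgment(own_estimate=70, ai_anchor=40, time_spent=0))
print(adjusted_judgment(own_estimate=70, ai_anchor=40, time_spent=10))
```

Under this model, the time-allocation problem in the last recommendation becomes a trade-off: spending more time on a case de-anchors the human, which helps when the AI is wrong but wastes effort when the AI is right.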
Thank you for reading my article. I hope you find it informative. I would love to hear your feedback and learnings.
Happy Learning!!
Best Regards,
Rupa Singh
Founder & CEO(AI-Beehive)
www.ai-beehive.com