Real World Biases Mirrored by Algorithms

We are going to grab the bull by the horns and tackle the most challenging type of algorithmic bias first: cognitive bias, the bias caused by biased behaviour in the real world. It is the most difficult to address because the resulting algorithm seems absolutely correct: it is statistically sound, and it faithfully mirrors and represents the real world.


What is cognitive bias?

"A cognitive bias is a person's tendency to make errors in judgment based on cognitive factors, and is a phenomenon studied in cognitive science and social psychology. Forms of cognitive bias include errors in judgement, social attribution, and memory that are common to all human beings. Presence of such biases drastically skew the reliability of anecdotal and legal evidence. These are brought to be based upon heuristics, or rules of thumb, which people employ out of habit or evolutionary necessity."

How do cognitive biases affect decision-making in machines?

A great deal of research has aimed to bridge the gap between artificial intelligence and human decision-makers in AI-assisted decision-making, where humans are not only the ultimate decision-makers in high-stakes decisions but also the consumers of AI model predictions.

However, cognitive biases such as confirmation bias, anchoring bias, availability bias, and various heuristics often distort our perception and understanding of real-world scenarios.

Cognitive biases and their effects on decision-making are well known and widely studied. Since AI-assisted decision-making is a new decision-making paradigm, it is important to understand it both analytically and empirically.

Different interactions in human-AI collaboration


In a collaborative decision-making setting, the perceived space represents the human decision-maker's view of the task. Two interactions introduce cognitive biases into the perceived space:

1. Interaction with the observed space, which contains the feature space and all the information about the task acquired by the decision-maker.

2. Interaction with the prediction space, which represents the output of the AI model: the AI's decision or an explanation of that decision.

Biases that influence the decision-maker's perception of the data gathered through interaction with the observed space, such as confirmation bias, availability bias, the representativeness heuristic, and bias due to selective sampling of the feature space, are mapped to the observed space.

However, anchoring bias and the weak evidence effect are mapped to the prediction space.

Confirmation Bias:

From a psychological perspective, confirmation bias is a cognitive process that perpetuates a specific mindset. Generalizations and stereotypes are often at the root of such preconceptions.

When presented with information that challenges their preconceptions, people are likely to respond by:

• Paying no attention to that information.
• Acknowledging the information, but labelling it as false.
• Acknowledging the information, but labelling it as an exception to the rule.
• Giving more weightage to information that confirms their preconceptions (see the sketch below).
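To make that last point concrete, here is a toy sketch of my own construction (the numbers and the `discount` parameter are made-up assumptions, not from the article): a believer updates on confirming evidence at full strength but damps disconfirming evidence, so the belief drifts upward even when the evidence, weighed fairly, points the other way.

```python
def biased_update(belief, likelihood_ratio, discount=0.3):
    """One Bayesian update in odds form. Evidence that disconfirms the
    preconception (likelihood ratio < 1) is damped toward 1, i.e. given
    less weightage than confirming evidence."""
    confirms = likelihood_ratio > 1
    strength = likelihood_ratio if confirms else likelihood_ratio ** discount
    odds = belief / (1 - belief) * strength
    return odds / (1 + odds)

belief = 0.5                          # start undecided about the preconception
for lr in [2.0, 0.5, 0.5, 2.0, 0.5]:  # LR > 1 confirms it, LR < 1 disconfirms it
    belief = biased_update(belief, lr)
    print(f"evidence LR={lr:.1f} -> belief {belief:.2f}")

# A fair updater would end below 0.5 (the net evidence here disconfirms:
# 2 * 2 * 0.5^3 = 0.5); the biased updater ends around 0.68.
```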

Driving While Black:


Confirmation bias occurs not only in workplaces but also in everyday life. Some police officers may be racially prejudiced and so consciously target minority drivers. The "driving while black" phenomenon describes a diffuse tendency to stop Black drivers at a higher rate than white drivers: the practice of targeting drivers of color, especially African Americans, for unwarranted traffic law enforcement. Minority citizens have argued that they are more likely to be singled out for traffic stops and are at greater risk of more invasive investigations.

Biases often have a "winner takes all" effect: an initial bias starts to tweak reality, and the effect becomes self-reinforcing and even self-fulfilling.

Let's consider a simple example: company ABC has only 10% women employees, and a boys' club culture makes it difficult for women to succeed there.

A hiring algorithm is trained on the company's current data and, using current employees' success as its target, scores women candidates lower.

The algorithm is, in a sense, fairly representing the fact that women have difficulty succeeding in this company because of its boys' club culture. But the net consequence is that it systematically scores women candidates lower.

The result is that the company ends up hiring even fewer women. This is an algorithmic vicious cycle, and it arose because the algorithm was trained on biased data. A toy simulation below makes the loop concrete.
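Here is a minimal sketch of that feedback loop, assuming made-up numbers and a deliberately crude scoring model (none of this comes from a real system): observed success is lower for women because of the culture, the top-scored applicants are hired, and the shrinking share of women makes the next round's training data look even more lopsided.

```python
import random

random.seed(42)

women_share = 0.10                      # company ABC starts with 10% women
for year in range(1, 6):
    # The model's "training data": success observed in the current workforce.
    # The boys' club culture, not ability, drives the gap; and the fewer
    # women there are, the harsher the culture and the lower their
    # observed success rate (hypothetical relationship).
    p_success_w = 0.3 + 0.4 * women_share
    p_success_m = 0.7

    # Score 50 women and 50 men applicants by predicted success; hire the top 10.
    applicants = [("W", random.gauss(p_success_w, 0.1)) for _ in range(50)]
    applicants += [("M", random.gauss(p_success_m, 0.1)) for _ in range(50)]
    hired = sorted(applicants, key=lambda a: a[1], reverse=True)[:10]
    hired_women = sum(1 for gender, _ in hired if gender == "W")

    # New hires shift the workforce; fewer women get in, so next year's
    # "evidence" that women rarely succeed here looks even stronger.
    women_share = 0.9 * women_share + 0.1 * (hired_women / 10)
    print(f"year {year}: hired {hired_women}/10 women, "
          f"women now {women_share:.1%} of workforce")
```

Nothing in this loop ever "decides" to discriminate; the bias compounds purely through retraining on the system's own outcomes.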

Repeating the wrong answer thrice doesn't make it right


When we face a situation like the one in the example above, it is crucial first to name the elephant in the room: the algorithm itself is not biased; it is an unbiased mirror of a reality that is horribly flawed by human bias.

To mitigate such biases, fixing the algorithm is not enough; it takes a conscious effort to fix our own biases and prejudices.

The challenge is that an algorithm which mirrors the biases of society also perpetuates them, fueling ever-increasing discrimination and injustice.

Here we are grappling with deep philosophical and ethical issues. What seems right and just in one culture might not be considered right or ethical in another, and unfortunately there is no common agreement on what is universally right or wrong.

Researchers have proposed the following recommendations to mitigate algorithmic bias arising from human biases:

• Consider a Bayesian framework for modelling biased AI-assisted decision-making, and identify well-known cognitive biases within that framework according to their sources.
• Allocate more time and focus to a decision; this reduces anchoring in AI-assisted decision-making.
• Formulate a time-allocation problem that maximizes human-AI team accuracy, accounting for the anchoring-and-adjustment heuristic and the variance in AI accuracy (a toy version is sketched below).
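To illustrate the last recommendation, here is a hedged toy sketch, not the researchers' actual formulation: the accuracies, the adjustment weight w(t) = 1 - exp(-k*t), and the greedy allocation are all my own assumptions. The idea it demonstrates is spending a fixed deliberation budget where the AI is least accurate, under a simple anchoring-and-adjustment model.

```python
import math

# Assumed setup: each case has a known AI accuracy, the human's own
# accuracy is flat, and with time t the human shifts a weight
# w(t) = 1 - exp(-K*t) away from the AI anchor toward their own judgment.
ai_acc = [0.95, 0.90, 0.60, 0.55]   # per-case AI accuracy (varies)
HUMAN_ACC = 0.75                    # human accuracy, same on every case
BUDGET, STEP, K = 10.0, 0.5, 0.4    # total hours, allocation step, adjustment rate

def team_acc(a, t):
    """Expected team accuracy on a case with AI accuracy a after t hours."""
    w = 1 - math.exp(-K * t)
    return (1 - w) * a + w * HUMAN_ACC

alloc = [0.0] * len(ai_acc)
# Greedy allocation: give each slice of the budget to the case where extra
# deliberation raises team accuracy the most (i.e., where the AI is weak).
for _ in range(int(BUDGET / STEP)):
    gains = [team_acc(a, t + STEP) - team_acc(a, t) for a, t in zip(ai_acc, alloc)]
    best = max(range(len(gains)), key=gains.__getitem__)
    if gains[best] <= 0:            # no case benefits from more time
        break
    alloc[best] += STEP

for a, t in zip(ai_acc, alloc):
    print(f"AI acc {a:.2f}: {t:.1f}h of deliberation -> team acc {team_acc(a, t):.2f}")
```

Under these assumptions the budget flows entirely to the two low-accuracy cases; on cases where the AI already beats the human, extra deliberation would only pull the team toward the weaker judgment.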


References:

  1. Epley, N. and Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12(5):391–396.
  2. Epley, N. and Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17(4):311–318. PMID: 16623688.
  3. Fernbach, P., Darlow, A., and Sloman, S. (2011). When good evidence goes bad: The weak evidence effect in judgment and decision-making. Cognition, 119:459–467.
  4. Antony, L. (2016). Bias: Friend or Foe? In Brownstein, M. and Saul, J., editors, Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, pages 157–190. Oxford University Press.
  5. Abualsaud, M. and Smucker, M. D. (2019). Exposure and Order Effects of Misinformation on Health Search Decisions. In ROME 2019 Workshop on Reducing Online Misinformation Exposure. ACM.
  6. Al-Maskari, A. and Sanderson, M. (2010). A review of factors influencing user satisfaction in information retrieval. Journal of the American Society for Information Science.


Thank you for reading my article. I hope you found it informative. I would love to hear your feedback and learnings.

Happy Learning!!

Best Regards,

Rupa Singh

Founder & CEO (AI-Beehive)

www.ai-beehive.com
