On one hand, we have biased AI behaviors:
- In April 2024, the New York Post reported that OpenAI’s ChatGPT was depicting roles such as “CEO” and “Financier” primarily as men, while roles such as “Secretary” were depicted mostly as women, drawing strong criticism of OpenAI. Retraining AI models to eliminate these and other biases, and implementing mechanisms to prevent them, is expensive, resource-intensive, and time-consuming.
- Also in April 2024, The Australian reported that some AI recruitment tools were inadvertently filtering out women candidates who had taken maternity leave. The affected women couldn’t find jobs, the organizations involved incurred significant monetary losses, and employers lost the opportunity to hire talented people.
- In July 2024, Reuters reported that Workday, an HR software provider, was facing a class-action bias lawsuit because its AI-powered job application screening tool was discriminating against applicants based on race, age, and disability. Although the settlement amount wasn’t disclosed, Workday suffered reputational damage in addition to the financial impact.
On the other hand, the three most common biases decision-makers face with regard to Artificial Intelligence (AI) are:
- Automation Bias. Decision-makers tend to trust AI outputs without question and assume they are accurate, instead of treating them with the same skepticism as outputs that come from people. An example of the cost of Automation Bias is the 2010 Flash Crash, where close to $1 trillion in market value was erased in a few minutes as a result of a “rapid market decline” driven by automated trading systems.
- Confirmation Bias. Decision-makers often seek or interpret AI outputs in a way that confirms their existing beliefs or expectations, and tend to ignore or downplay contradictory information. For example, some investors hold onto losing stocks longer than they should because they favor information that confirms their original investment thesis.
- Overconfidence Bias. Many decision-makers assume that AI is infallible and tend to ignore its limitations or lack of context. The Review of Financial Studies reported that overconfident traders underperform the market by around 2% annually as a result of excessive trading.
These biases are particularly critical because they directly impact the quality of decisions and reduce the potential benefits of AI-assisted decision-making. Biases increase risk. Other common biases in AI-assisted decisions include anchoring, the availability heuristic, the framing effect, the status quo bias, and groupthink.
How can we overcome those biases?
- To overcome Automation Bias, encourage Critical Thinking. Question AI outputs and validate them with the same rigor you apply to outputs from people. Check them against other sources of information and with human experts, especially for critical decisions. Instead of relying on a single AI, use diverse AI systems and cross-check their outputs.
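The cross-checking idea above can be sketched in a few lines of Python. This is a minimal, illustrative example, not a production implementation: the model names and the approve/reject answers are hypothetical, and a real system would compare richer outputs than short strings.

```python
from collections import Counter

def cross_check(outputs):
    """Compare answers from several AI systems.

    Returns (majority_answer, unanimous). If no answer reaches a strict
    majority, returns (None, False), signaling that a human expert should
    review the question before the decision proceeds.
    """
    counts = Counter(outputs.values())
    answer, votes = counts.most_common(1)[0]
    if votes * 2 > len(outputs):
        return answer, votes == len(outputs)
    return None, False

# Hypothetical outputs from three different AI systems for the same question.
print(cross_check({"model_a": "approve", "model_b": "approve", "model_c": "reject"}))
# → ('approve', False): a majority, but not unanimous, so extra scrutiny is warranted
```

The key design choice is that disagreement does not silently pick a winner; anything short of a strict majority is escalated to a person, which is exactly the skepticism the bullet above calls for.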
- To overcome Confirmation Bias, seek contradictory evidence. Research data that challenges existing beliefs or assumptions, and discuss it. Use structured decision-making techniques such as devil’s advocacy or red-team reviews to ensure diverse perspectives are included before finalizing decisions. Also, leverage data analytics across a broad range of data, including opposing viewpoints, to get a holistic view of the situation.
- To overcome Overconfidence Bias, adopt Humility in decision-making by encouraging self-awareness among leaders and by highlighting the limitations of individual knowledge and predictive abilities. Simulate outcomes using scenario analysis or predictive models to evaluate the risks and uncertainties of potential decisions. Monitor decision accuracy by keeping track of past decisions and their outcomes to identify patterns of overconfidence and learn from mistakes.
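Simulating outcomes can be as simple as a Monte Carlo sketch like the one below. All the numbers are illustrative assumptions, not real data; the point is that putting even rough uncertainty estimates into a simulation surfaces downside risk that an overconfident gut call would miss.

```python
import random

def probability_of_loss(n_trials=10_000, seed=42):
    """Monte Carlo sketch of scenario analysis for a hypothetical project.

    Assumed, illustrative inputs: revenue is normally distributed
    (mean 100, std 25), base cost is 80, and there is a 30% chance
    of a 20-unit cost overrun.
    """
    rng = random.Random(seed)  # fixed seed so the estimate is reproducible
    losses = 0
    for _ in range(n_trials):
        revenue = rng.gauss(100, 25)
        cost = 80 + (20 if rng.random() < 0.3 else 0)
        if revenue < cost:
            losses += 1
    return losses / n_trials

print(f"Estimated probability of loss: {probability_of_loss():.1%}")
```

Under these assumptions the simulation puts the chance of losing money at roughly 30%, a sobering counterweight to a leader who is "sure" the project will pay off.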
Consider diversifying your teams and including them in your decision-making groups. Their diverse backgrounds and expertise will bring a rich set of perspectives. Also consider awareness and training in the form of workshops on cognitive biases to help decision-makers recognize and address their own biases.
Encourage slow thinking by implementing processes to slow down decisions, such as requiring justification for choices, which reduces reliance on mental shortcuts. Establish transparency in AI by ensuring that AI systems provide explanations for their outputs, making it easier for decision-makers to understand and question the rationale behind the outputs.
Recognizing these biases is crucial for leaders to ensure balanced and effective integration of AI into decision-making processes.
By being aware of these tendencies, decision-makers can implement strategies to mitigate their impact, such as fostering a culture of critical evaluation and maintaining a balance between AI insights and human judgment. That way, you and your decision-makers can significantly reduce the impact of AI behavior biases and human cognitive biases, resulting in better-informed and more balanced choices.
Include how to overcome biases in AI systems and in people as part of your organization’s strategy and tactics.