Unravelling AI Cognitive Biases
Munaf Sheikh
Futurist · Leader-not-manager · Creative Technician · MSc AI · Imagineer
Abstract
This article provides an in-depth exploration of cognitive biases and logical fallacies, with a focus on how these mental models affect decision-making processes. The piece also discusses potential ways to overcome these biases, including systematic thought processes and the use of AI. With references to everything from sports analogies to AI parameter counts, the article offers a comprehensive guide to understanding and navigating the complex landscape of cognitive biases in everyday life.
Introduction
If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.
You may have heard the common adage "If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail", usually attributed to Abraham Maslow in 1966, though Wikipedia itself doubts the exact origin of the phrase. That doesn't stop anyone from reusing it, accurately or otherwise, and applying it to all manner of situations: reaching for the hammer to fix everything simply because it's the tool you know best, even when a screwdriver or wrench would be more effective.
This particular phrase, fondly referred to as "The Law of the Instrument", "Law of the Hammer", or "Golden Hammer", is an example of a cognitive bias. Cognitive biases are essentially mental shortcuts or patterns of thinking that can sometimes lead us to make irrational decisions. They're like little glitches in our thinking that can cloud our judgment.
As bad as they sound, cognitive biases are not always negative. Some are adaptive; others help to speed up decision-making. Some are:
"a by-product of human limitations, resulting from a lack of appropriate mental mechanisms, the impact of an individual's constitution and biological state or simply from a limited capacity for information processing."
The Problem
Translating this quote into machine terms, we quickly conclude that AI is at least as prone to cognitive biases as humans are, if not more so.
How many billion parameters does ChatGPT need to be even mildly functional? Several! GPT-1 had 117 million parameters, GPT-2 had 1.5 billion, and GPT-3 required 175 billion. With GPT-4, reports suggest there are more than 1 trillion. If that's not "limited capacity for information processing", then I don't know what is.
Thank you for 700+ subscriptions! That's 700+ people getting this newsletter delivered directly to their inboxes. That is a huge honor!
If you're reading this, chances are you know someone who could benefit from this information. I ask that you share it with them.
There are very many cognitive biases, and some comprise others. For example, the anchoring bias, the tendency to rely too heavily on the first piece of information acquired on a subject, encompasses several narrower biases of its own.
In short, it gets very complicated, very quickly. So how do we self-examine and reflect on our thoughts?
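To make the anchoring bias concrete, here is a toy "anchor-and-adjust" model often used to describe it: the final estimate moves only part of the way from the anchor toward what the evidence alone would suggest. The function name and the 0.4 adjustment weight are illustrative assumptions, not an established parameter.

```python
def anchored_estimate(anchor: float, evidence: float, adjustment: float = 0.4) -> float:
    """Toy anchoring-and-adjustment model.

    The estimate starts at the anchor and moves only a fraction
    (`adjustment`) of the way toward what the evidence alone suggests.
    adjustment = 1.0 would mean no anchoring at all.
    """
    return anchor + adjustment * (evidence - anchor)

# A first quoted price of 100 drags the estimate toward it,
# even when the evidence alone points to 40: the result lands near 76,
# much closer to the anchor than to the evidence.
print(anchored_estimate(100, 40))
```

Notice that with a low adjustment weight the first number seen dominates the outcome, which is exactly the trap the bias describes.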
An Algorithm
One solution could be to systematize the thought process to a checklist of all the biases that could apply to a given topic. Then we could determine the extremes, and generate polarized statements for each extreme. This data now becomes training data against which we can compare thoughts.
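The checklist idea above can be sketched in code. Everything here is an illustrative assumption: the bias checklist, the polarized statements, and the crude word-overlap similarity (a stand-in for whatever real comparison model you would actually use).

```python
# Sketch: screen a thought against polarized extremes for each bias
# on a checklist. Checklist contents and scoring are illustrative only.

def word_overlap(a: str, b: str) -> float:
    """Crude similarity: fraction of shared words (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# Each bias maps to a pair of polarized extreme statements.
BIAS_CHECKLIST = {
    "anchoring": ("The first figure quoted is the right baseline.",
                  "The first figure quoted is irrelevant."),
    "bandwagon": ("Everyone believes it, so it must be true.",
                  "Popularity says nothing about truth."),
}

def screen(thought: str) -> dict:
    """For each bias, report which extreme the thought leans toward."""
    report = {}
    for bias, (pole_a, pole_b) in BIAS_CHECKLIST.items():
        leaning = pole_a if word_overlap(thought, pole_a) > word_overlap(thought, pole_b) else pole_b
        report[bias] = {"toward": leaning}
    return report
```

In practice the polarized statements become the "training data" the article describes: any new thought is compared against both extremes to see where it falls.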
The analogy I like to use is one of sport: start simple, then build up to a more complex, multi-dimensional scenario.
Manual Solution
The manual process, a step-by-step evaluation, would involve a systematic comparison of each bit of information, as well as meta-information (information about the information), until we can reach a conclusion.
In this simple example, there are only two sides to the query. In reality, there could be many more: a gradient of outcomes, or one side with far more information than the others. Adding just one more dimension greatly complicates the scenario.
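A minimal sketch of that step-by-step comparison, generalized to any number of sides: each piece of evidence carries a weight standing in for its meta-information (say, source reliability), and the totals are tallied per side. The weighting scheme and the sample data are assumptions for illustration.

```python
# Sketch: systematic comparison across several "sides" of a query.
from collections import defaultdict

def evaluate(evidence: list) -> dict:
    """evidence: (side, weight) pairs; returns total weight per side.

    The weight is a stand-in for meta-information such as how
    reliable or relevant each piece of information is.
    """
    totals = defaultdict(float)
    for side, weight in evidence:
        totals[side] += weight
    return dict(totals)

# Two-sided case from the text, plus a third "side" to show the
# approach scales beyond a simple for/against split.
votes = [("for", 0.9), ("against", 0.4), ("for", 0.3), ("undecided", 0.2)]
print(evaluate(votes))
```

The hard part, of course, is assigning the weights honestly rather than letting the biases under discussion sneak into the weighting itself.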
Software Solutions
As we continue to evolve and develop AI, it's crucial to remain conscious of these biases and ensure that they are not replicated in the technology we create. Even more crucially, I'd argue that users of systems incorporating AI are highly susceptible to forming opinions plagued by AI-generated cognitive biases, and worse still, to creating and sharing content based on them in a polarised, highly opinionated world.
One of the key challenges in leveraging AI to tackle cognitive biases is ensuring that the AI itself does not become biased. This involves careful design and implementation of AI systems, as well as constant vigilance and monitoring. After all, an AI is only as unbiased as the data it is trained on and the algorithms it uses.
Key Takeaways
What does that mean in practical terms? Well, imagine having a smart assistant that can point out when you're falling into the trap of a cognitive bias. It might say, "Hey, it seems like you're overly focused on the first piece of information you received. You might want to consider other factors before making a decision." My favorite smart assistants are Gmail, Notion, Grammarly, IntelliJ and VS Code for code generation, and MaxAI, a smart Chrome search extension. I am by no means vouching for these applications; I simply use them daily, and you might be using them too.
Currently, smart assistants don’t provide information about these biases. They require explicit instructions (prompts) to list all applicable cognitive biases, consider the outcome of polarization, and then cross-examine all information generated.
A prompt
It's possible to prompt AI to do all of that hard work for you. But you can't get away from double-checking the response, and to do that you need to know what you're looking for. We also need to remember that while AI can help us identify cognitive biases, overcoming them requires conscious effort on our part. AI can provide insights and recommendations, but ultimately, we are the ones who need to make the decisions.
If you are looking for a prompt that helps overcome some of the challenges presented, here is one:
When analyzing and presenting information, please apply a multi-dimensional approach that accounts for common cognitive biases. Critically evaluate information from multiple perspectives, challenge initial assumptions, and provide balanced viewpoints. Specifically, avoid anchoring by not relying solely on the first piece of information; counter confirmation bias by considering evidence that contradicts the initial hypothesis; mitigate the bandwagon effect by evaluating the popularity of an idea against its intrinsic value; and be mindful of overconfidence by acknowledging the limits of the data. Where relevant, highlight the potential impact of biases such as the Dunning-Kruger effect, hindsight bias, and optimism bias on the interpretation of information. Aim for an objective and comprehensive analysis that encourages informed decision-making.
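If you want that prompt applied to every question rather than pasted in each time, one way is to ride it along as a system message in a chat-style request. The snippet below is a sketch: the prompt is abbreviated (paste the full text from above), and the model name in the comment is an example, not a recommendation.

```python
# Sketch: wiring the de-biasing prompt in as a standing system message.
DEBIAS_PROMPT = (
    "When analyzing and presenting information, please apply a "
    "multi-dimensional approach that accounts for common cognitive biases. "
    "Critically evaluate information from multiple perspectives, challenge "
    "initial assumptions, and provide balanced viewpoints."  # abbreviated
)

def build_messages(question: str) -> list:
    """Chat-style message list: the de-biasing instructions accompany
    every user question as the system message."""
    return [
        {"role": "system", "content": DEBIAS_PROMPT},
        {"role": "user", "content": question},
    ]

# With the official openai Python client (v1+), this would be sent as e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=build_messages(q))
```

The design choice here is simply that a system message persists across the conversation, so the bias checklist is applied to every answer, not just the one you remembered to prompt for.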
Conclusion
Exploring the intersection between cognitive biases and AI opens up a world of possibilities. AI, with its inherent ability to process massive amounts of data, can help us identify and mitigate these biases. However, it's important to remember that while AI can play a key role in overcoming cognitive biases, it's not a silver bullet. AI is just a tool, and like any tool, its effectiveness depends on how we use it.
I trust you know someone who could benefit from the information here. I humbly ask that you forward, share or repost.
Follow me on LinkedIn: