A few months back, my esteemed colleague and friend, Prof Stephen Muggleton, a leading figure in logical AI research, recommended that I read the book "Thinking, Fast and Slow" by Daniel Kahneman. The book explores the two systems that drive how we, as humans, think: System 1, which is fast, intuitive, and operates automatically, and System 2, which is slow, deliberate, and requires conscious effort. The book explains cognitive biases, heuristics, and the boundaries of human decision-making. Kahneman's work sheds light on how our thought processes can lead to errors and offers insights into how to improve decision-making in various contexts. The book highlights very well the strengths and limitations of the human brain in making decisions, which, in my view, can be used to develop AI algorithms especially suitable for safety-critical applications such as autonomous driving. Here are some takeaways that I thought of:
- System 1 and System 2 Thinking: Kahneman explains the distinction between the fast, intuitive thinking of System 1 and the slow, deliberate thinking of System 2. In the context of AI, this understanding helps developers design AI systems that mimic both types of thinking, with intuitive, automated processes for routine tasks and deliberative, rule-based approaches for complex decision-making. In the context of autonomous driving, an AI-based autonomous vehicle can apply System 1 thinking to react quickly and intuitively to unexpected objects on the road, like a pedestrian suddenly crossing. System 2 thinking, on the other hand, is applied to complex scenarios such as navigating busy traffic intersections, where multiple variables must be evaluated and a rule-based approach is required (a minimal dual-process controller is sketched after this list).
- Cognitive Biases and Heuristics: The book explores various cognitive biases and heuristics that influence human decision-making, leading to errors and irrational choices. For AI development, this highlights the importance of recognising and addressing biases in data and algorithms to ensure ethical and fair AI decision-making. In the context of autonomous driving, when collecting data, there may be an unconscious bias towards gathering information from urban environments more than rural ones, which can lead to AI algorithms that perform less effectively in rural settings. Such biases in data collection should be acknowledged and corrected to ensure fair and balanced performance across different environments (a simple reweighting sketch appears after this list).
- Loss Aversion and Prospect Theory: Kahneman's research on loss aversion and prospect theory reveals how people's preferences are influenced by potential gains and losses. AI systems can be designed to consider similar principles when optimising decisions, particularly in contexts where avoiding losses is critical. In the context of autonomous driving, autonomous vehicles should prioritise safety above all else, aligning with the principle of loss aversion. This means, for example, the AI may choose to brake hard and risk a minor rear-end collision rather than risk a potentially fatal accident by running over a pedestrian (the asymmetric value function behind this idea is sketched after this list).
- Overconfidence and Planning Fallacy: The book discusses how individuals tend to be overconfident in their predictions and planning, often underestimating the time and resources required for tasks. AI developers can integrate mechanisms to counteract overconfidence, enabling more accurate predictions and realistic planning. For example, in the context of autonomous driving, an AI model may predict that a route will take a certain amount of time based on current traffic. However, the model needs to be designed to account for unexpected changes in conditions (like a sudden rainstorm) to avoid overconfidence and the underestimation of the time and resources needed (see the quantile-based ETA sketch after this list).
- Anchoring Effect: Kahneman describes the anchoring effect, where people's decisions are influenced by initial reference points. AI developers can learn from this concept to ensure that AI systems consider a wide range of data and do not disproportionately favour one piece of information as the anchor. In the context of autonomous driving, for instance, if an AI system uses the speed of the first few vehicles it encounters to set an 'anchor' and bases all further driving-speed decisions on it, it might not respond effectively to changing speed limits or varying traffic conditions. Instead, the AI should continually adjust to a wide range of inputs (the contrast is sketched in code after this list).
- Regression to the Mean: The book explains the tendency for extreme events to be followed by more average outcomes, known as regression to the mean. In the development of AI models, understanding this phenomenon can help in evaluating and adjusting algorithms to avoid overreacting to outliers and to better predict future trends. An example, in the context of autonomous driving, is that after an extreme event like a near-miss with a pedestrian, an AI might overcompensate by being excessively cautious. Understanding regression to the mean can prevent this, with the system recognising that such extremes are not the norm and adjusting its responses accordingly (a decay-to-baseline sketch follows this list).
- Availability Heuristic: The book explores the availability heuristic, the tendency of individuals to rely on readily available information when making decisions, often underestimating the importance of less accessible data. Kahneman explains that this mental shortcut can lead to biased decision-making, since information that is more vivid or easily recalled may not be representative of the overall reality. AI developers can mitigate the impact of the availability heuristic by ensuring that models have access to comprehensive and representative datasets, reducing the risk of biased decisions based on easily accessible but limited information. In the context of autonomous driving, if an AI model is trained primarily on data from daytime driving, it may perform poorly at night. By ensuring the model has access to comprehensive data across both day and night conditions, it can make better decisions (the same reweighting idea sketched after this list applies here).
- Framing Effects: Kahneman and his research partner Amos Tversky investigated framing effects, which occur when people's choices are influenced by how options are presented rather than by the actual content of the choices. The book discusses how individuals tend to make different decisions depending on whether a situation is framed as a potential gain or a potential loss. Rational AI systems can be designed to recognise framing effects and adjust decision-making accordingly. By presenting data in a neutral manner, AI can avoid being influenced by the way information is presented. For instance, in the context of autonomous driving, if sensor data on an obstacle is presented in a way that over-emphasises the threat, the AI might react more drastically than necessary. The data should be framed neutrally to ensure a proportionate response.
- Confirmation Bias: Confirmation bias is the tendency for individuals to favour information that confirms their preexisting beliefs or hypotheses while disregarding or downplaying contradictory evidence. Kahneman explains that confirmation bias can lead to overconfidence in one's own beliefs and can hinder objective, rational decision-making. AI developers can incorporate mechanisms that actively seek out diverse data sources and viewpoints, promoting more balanced analyses and avoiding the reinforcement of existing biases. In the context of autonomous driving, for example, if an AI model is trained primarily on data where the car is driven cautiously, it may overly favour cautious driving and may not perform optimally when faster, yet safe, driving is possible. To counteract this, the AI model should be trained on diverse data sources to ensure a balanced driving style.
- Hindsight Bias: The book delves into hindsight bias, the tendency for people to perceive events as more predictable than they were before they happened. After an event occurs, individuals tend to believe they knew the outcome was inevitable, even when that was not the case. Kahneman's work highlights how this bias can distort our understanding of past decisions and prevent us from learning from experience. To avoid hindsight bias, AI models can be developed using historical data without knowledge of future outcomes. In the context of autonomous driving, for example, if an AI model is trained on data from a collision, it could become overly cautious or reactive when encountering similar situations. The AI model should use historical data to learn from these events without knowledge of the events' outcomes, and it should validate its performance on unseen data to ensure accuracy in real-world applications (a chronological train/test split is sketched after this list).
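To make a few of these takeaways concrete, the sketches below show how they might translate into code. None of them come from the book; every function name, threshold, and number is an assumption made purely for illustration.

A minimal dual-process controller, corresponding to the System 1/System 2 takeaway, might run a fast reflex rule on every cycle and fall back to a slower deliberative scorer only when there is no immediate hazard:

```python
# A hypothetical dual-process controller: a fast reflex path for
# time-critical hazards and a slower deliberative path for complex
# scenes. All function names, fields, and thresholds are invented.

def system1_reflex(obstacle_distance_m: float) -> str:
    """Fast, automatic reaction: brake hard if something is very close."""
    return "EMERGENCY_BRAKE" if obstacle_distance_m < 5.0 else "NO_HAZARD"

def system2_planner(scene: dict) -> str:
    """Slow, deliberate reasoning: weigh several variables explicitly."""
    risk = 0
    risk += 2 if scene["pedestrians_waiting"] else 0
    risk += 1 if scene["cross_traffic"] else 0
    risk += 1 if scene["visibility"] == "poor" else 0
    return "YIELD_AND_CREEP" if risk >= 2 else "PROCEED"

def decide(obstacle_distance_m: float, scene: dict) -> str:
    # The reflex path always runs first; deliberation happens only
    # when there is no immediate hazard demanding a System 1 response.
    if system1_reflex(obstacle_distance_m) == "EMERGENCY_BRAKE":
        return "EMERGENCY_BRAKE"
    return system2_planner(scene)

busy_junction = {"pedestrians_waiting": True, "cross_traffic": True,
                 "visibility": "good"}
print(decide(3.0, busy_junction))   # EMERGENCY_BRAKE (System 1 overrides)
print(decide(50.0, busy_junction))  # YIELD_AND_CREEP (System 2 deliberates)
```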
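For the data-collection bias in the second takeaway (and the availability heuristic later on), one simple mitigation is to measure how environments are represented and reweight the under-represented ones, for instance with inverse-frequency weights:

```python
from collections import Counter

# An illustrative check for environment bias in a training set. The
# labels, counts, and inverse-frequency reweighting are assumptions,
# not a specific pipeline from the book or any real project.

samples = ["urban"] * 9000 + ["rural"] * 1000  # stand-in for real drive metadata
counts = Counter(samples)
total = sum(counts.values())

# Inverse-frequency weights: under-represented environments receive
# proportionally more weight during training.
weights = {env: total / (len(counts) * n) for env, n in counts.items()}

print(counts)   # Counter({'urban': 9000, 'rural': 1000})
print(weights)  # {'urban': 0.56, 'rural': 5.0} (approximately)
```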
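The loss-aversion takeaway maps naturally onto the prospect-theory value function that Kahneman and Tversky proposed; the code uses their commonly cited parameter estimates, while applying the function to driving outcomes is my own framing:

```python
# Kahneman and Tversky's prospect-theory value function: outcomes are
# valued relative to a reference point, with losses weighted roughly
# twice as heavily as equivalent gains. The parameters are the
# commonly cited estimates (alpha = beta = 0.88, lambda = 2.25).

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def prospect_value(outcome: float) -> float:
    """Subjective value of a gain (positive) or loss (negative)."""
    if outcome >= 0:
        return outcome ** ALPHA
    return -LAMBDA * ((-outcome) ** BETA)

# A loss-averse planner penalises a potential collision far more than
# it rewards the equivalent time saving.
print(round(prospect_value(10), 1))   # 7.6: value of a 10-unit gain
print(round(prospect_value(-10), 1))  # -17.1: same-size loss hurts ~2.25x more
```

The asymmetry is the point: plugging such a value function into a planner's objective makes "avoid the catastrophic loss" dominate "save a little time" without any special-case rules.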
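For the planning fallacy, one countermeasure is to quote a conservative quantile of historical travel times instead of the optimistic mean:

```python
import statistics

# Countering the planning fallacy in ETA prediction: report a high
# percentile of historical travel times rather than the optimistic
# mean, so occasional disruptions are budgeted for. The trip times
# below are invented.

historical_minutes = [22, 23, 23, 24, 25, 26, 31, 38, 45, 60]

optimistic_eta = statistics.mean(historical_minutes)
deciles = statistics.quantiles(historical_minutes, n=10)
conservative_eta = deciles[8]  # the 90th percentile

print(f"mean ETA: {optimistic_eta:.0f} min")               # optimistic point estimate
print(f"90th-percentile ETA: {conservative_eta:.0f} min")  # allows for bad days
```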
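The anchoring takeaway can be illustrated by contrasting an estimate that stays fixed on the first observation with one that keeps updating:

```python
# Contrasting an estimate 'anchored' on the first observation with an
# exponential moving average (EMA) that keeps adapting as new speed
# readings arrive. The observations are invented: the prevailing
# speed changes from about 30 to about 60 km/h mid-route.

observed_speeds = [30, 30, 50, 55, 60, 60, 60]  # km/h

anchored_estimate = observed_speeds[0]  # fixated on the initial reference point

ema, alpha = float(observed_speeds[0]), 0.4
for speed in observed_speeds[1:]:
    ema = alpha * speed + (1 - alpha) * ema  # recent inputs keep their weight

print(anchored_estimate)  # 30: still stuck on the anchor
print(round(ema, 1))      # ~56.7: tracking current conditions
```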
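For regression to the mean, a system can let the extra caution triggered by an outlier event decay back towards its baseline rather than locking in an overreaction:

```python
# Letting post-incident caution decay back towards a baseline. The
# following gap, the doubling after a near-miss, and the decay rate
# are all illustrative values.

BASELINE_GAP_S = 2.0  # normal following gap, in seconds
DECAY = 0.7           # fraction of the extra caution retained each step

gap = 4.0  # immediately after a near-miss, the gap is doubled
for step in range(6):
    print(f"step {step}: following gap = {gap:.2f} s")
    # decay the deviation from baseline, not the gap itself
    gap = BASELINE_GAP_S + DECAY * (gap - BASELINE_GAP_S)
```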
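Finally, for hindsight bias, a chronological train/test split keeps future outcomes out of training and evaluates the model on data it could not have seen:

```python
# Avoiding hindsight bias in evaluation: split drive logs
# chronologically so training never 'peeks' at outcomes recorded
# after the cut-off, and measure performance on the held-out future.
# The log structure and values are made up for this example.

drive_logs = [
    {"day": d, "features": [d * 0.1, d % 3], "near_miss": int(d % 7 == 0)}
    for d in range(1, 101)
]

CUTOFF_DAY = 80
train = [log for log in drive_logs if log["day"] <= CUTOFF_DAY]
test = [log for log in drive_logs if log["day"] > CUTOFF_DAY]

# Train only on pre-cutoff data; the 'future' logs estimate how the
# model would have performed without knowledge of the outcomes.
print(len(train), "training drives,", len(test), "held-out future drives")
```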
Incorporating these insights from "Thinking, Fast and Slow" into the development of rational AI systems, especially for autonomous driving, can lead to more robust, ethical, and effective AI applications that better align with human reasoning and decision-making processes.
Saber Fallah is a professor of safe AI and autonomy at the University of Surrey and the Director of Connected Autonomous Vehicles Research Lab (CAV-Lab).