Rationality and biases in complexity and uncertainty

This blog post will explore some ideas at the crossroads between the rationality discourse and notions of complexity and radical uncertainty.

An exploration of this kind might be relevant because we are surrounded by a narrative about how biased and irrational our minds are, and as that narrative has entered the boardroom we need to make sense of what it could mean for our ways of deciding and thinking, especially in the face of the complexity and uncertainty that we now clearly see as inevitable. Not only is the world hopelessly complex; on top of that, we are hopelessly irrational, too! What chance do we have of making sound decisions? My stance is not that pessimistic, and this post will explore six ideas at the intersection of rationality, biases, and complexity.

Claims of competence and incompetence

What I don’t know. I am not a psychologist and have not carried out original experimental research myself. I may be making simplified arguments and quick assumptions of the kind “has no one thought about X?” when I have not read the book where someone thinks about X. I would appreciate corrections to my claims and further references.

What I do know. I have read a dozen books on decision making and biases, and I work with complexity a lot, helping people and teams navigate complexity and causal opacity, a subject matter where I do have some knowledge.

My hope is that these six ideas can shed light on topics that need further exploration. Does the notion that we are hopelessly biased even make sense in complexity? Let's jump right in.


First idea.

Causality in complexity: what does it even mean to be biased?

In a complex world the notion of causality is radically different from that of a simpler world. If you lived in a laboratory, under controlled conditions, exploring events for which the link between cause and effect is known or knowable, you could clearly ascertain causes by observing effects and even predict effects from known causes. But complexity shows us a different world, one which is un-ordered, as Snowden explains cogently with the Cynefin framework. In the words of Karl Popper, it is a world of propensities, of patterns that tend to happen given certain constraints and environmental contingencies, but that does not mean we can reliably predict what will happen. Against this backdrop, what does it even mean to say that someone is biased in a decision or an assessment about a specific future outcome, when two equally rational agents could be either wrong or right about a certain hypothesis due to luck and chance rather than the skill of their judgment? When can we reliably tell that someone is biased (versus rational) if they are making conjectures about an unknown future? There will be times when people can hold on to a certain "reference class" to gauge the probability of an event. But in some situations we don't even have those lifeboats, and we navigate completely unknown territory. Which brings us naturally to the next point.
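To make the luck-versus-skill point concrete, here is a minimal sketch (my own toy illustration with made-up numbers, not taken from the references below): an event with a true propensity of 30%, and a forecaster who confidently predicts it will happen. Judged on any single occasion, that forecaster will look "right" almost a third of the time purely by chance.

```python
import random

random.seed(42)

# Toy illustration with made-up numbers: an event whose true propensity is 30%.
# Forecaster A says "30% likely" (well calibrated); forecaster B says "it will
# happen" (mis-calibrated). On any single occasion, B still looks right
# whenever the event happens to occur.
TRUE_PROPENSITY = 0.30
TRIALS = 10_000

b_looks_right = sum(random.random() < TRUE_PROPENSITY for _ in range(TRIALS))

print(f"The mis-calibrated forecaster 'called it' in "
      f"{b_looks_right / TRIALS:.0%} of single-shot worlds")
# Roughly 30% of the time the worse forecaster looks vindicated. Only repeated,
# comparable events (a reference class) can separate skill from luck, and in
# deep uncertainty we often do not have such a reference class at all.
```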

References: Kay and King, Radical Uncertainty; Snowden's Cynefin framework; Karl Popper, A World of Propensities; Alicia Juarrero, Dynamics in Action; Ann Pendleton-Jullian, Design Unbound.


Second idea.

The all-seeing eye. Is there an all-knowing scientist who holds the answers?

You have heard the bat-and-ball story a dozen times. Kahneman recounts the experiment of asking participants how much each item costs, and people consistently give a quick but often wrong answer. Given enough time to do the math, people can easily see their mistake. Contrast that with the following episode. On 28 February 2020 one of the proponents of the biases school, Cass Sunstein, wrote an editorial explaining to the rest of us how irrational it was for people in the US to be fearful of the new coronavirus, given that at the time the number of known cases was vanishingly small. These two examples are fundamentally different. In the bat-and-ball case you can imagine an all-knowing experimenter who holds the right answer, whereas in the coronavirus case the columnist made a prediction about a future outcome that was unknown to everybody in the US, himself included, and that turned out to be hopelessly flawed (we would have been better off worrying more, not less, about the coronavirus). These events are radically different in their nature. 1) There are events where an answer is knowable and known (often to the researchers who are assessing how biased the participants of a study are); 2) there are events that are unknown to all, researchers included, partly due to the conditions of high complexity explained above. When I read books about irrationality and cognitive biases, the authors often conflate the two types of events as if they could be treated the same way. With a few exceptions (Kay and King), the rationality debate runs the risk of treating all events as if there were always an "all-seeing eye" that knew better, and from that watchtower could judge the irrationality of us biased mortals. The case of rationality researchers who dismissed people as "irrational" because they "panicked" about the coronavirus in February 2020 is a sobering reminder. If there is no such thing as a known-in-advance answer available to some, it is more helpful to see us all as navigating the same causal fog. The next question is: at what cost?
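To make the first category concrete, here is the bat-and-ball puzzle worked out (a minimal sketch; I am assuming the usual figures of $1.10 total and a $1.00 difference, which the text above does not restate). This is exactly the kind of question for which an "all-seeing" experimenter really does hold the answer sheet.

```python
# Standard bat-and-ball figures (assumed here, not restated in the post):
# together the items cost $1.10 and the bat costs $1.00 more than the ball.
total = 1.10
difference = 1.00

# From ball + (ball + difference) = total:
ball = (total - difference) / 2
bat = ball + difference

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05

# The intuitive answer, 10 cents, is checkably wrong: 0.10 + 1.10 = 1.20, not 1.10.
# Contrast this with a February 2020 forecast about a novel virus, where nobody
# (the forecaster included) held the answer sheet.
```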

References: Kay and King, in their book Radical Uncertainty, make that useful distinction. Snowden's Cynefin framework is again very helpful. Prof. Gigerenzer has also written about heuristics that have been shaped by evolution to help us navigate a necessarily unknowable world, and he is far less pessimistic about the idea that we are all hopelessly biased. I recommend you read his articles and books.


Third idea.

Material consequences of being wrong are not the same (Taleb’s Fat Tony problem).

Imagine we are on a roughly level playing field where no one has the answer. Uncertainty comes as a companion to our less-than-perfect knowledge of the world, and risk is closely connected to uncertainty. There are domains for which the uncertainty that comes with our imperfect knowledge of the world is immaterial. I predicted a green light at the next junction, but it’s red now. Does it matter? Often it doesn't: I will arrive home sixty seconds later. In other situations, our inaccurate predictions and our biased thinking can cost a lot. Taleb talks about the asymmetries we are exposed to, both positive ones (if we start many companies, do we increase our chances of hitting the jackpot?) and negative ones (if we are rewarded 10,000 dollars every time we play Russian roulette and survive, is it “rational” to play? And how often?). According to Taleb’s pragmatic character Fat Tony, it does not matter whether we have the perfect picture of the world, because we can’t have one anyhow. What matters more is to take the safer route: expose oneself to positive asymmetries and move away from negative ones in the face of tail risks. Again on the covid debate: people have been wrong on many sides of it, but to equate the wrong predictions of the Great Barrington Declaration advocates with those of the covid-zero advocates is simply bad faith. If I predict that herd immunity via infection will be reached swiftly and with minimal losses to society, and you predict that covid will kill three times as many people in the UK as it actually did, these predictions are equally wrong, but staying on the safer side would have saved more lives, whereas believing an optimistic prediction in conditions of imperfect knowledge (and risk!) cost us a lot of suffering. Taleb and his character were right: everybody is biased, and what ultimately matters even more are the consequences. And it was wise of Robert Louis Stevenson, too, to remind us that “Sooner or later, everybody sits down to a banquet of consequences”.
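As a back-of-the-envelope illustration of the Russian roulette example (a sketch with my own assumptions: one bullet in six chambers, the $10,000 payout mentioned above, payout collected only on survival), the naive per-round expected value looks attractive while the probability of still being alive collapses with repetition.

```python
# Back-of-the-envelope sketch; the $10,000 payout comes from the post,
# the rest (one bullet, six chambers, repeated play) is my assumption.
REWARD = 10_000
P_SURVIVE = 5 / 6

for rounds in (1, 10, 50, 100):
    p_alive = P_SURVIVE ** rounds
    naive_ev = rounds * P_SURVIVE * REWARD  # "expected value" ignoring ruin
    print(f"{rounds:>3} rounds: naive EV ${naive_ev:>9,.0f}, "
          f"P(still alive) {p_alive:.6%}")

# The naive expected value keeps growing, but after 100 rounds the survival
# probability is roughly 0.000001%. Ruin is absorbing: once you are dead there
# is no later payoff, which is why exposure to negative tail risk cannot be
# judged by per-round expected value alone.
```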

References: Taleb’s Incerto is a great series of books that explores the consequences of our uncertainty.


Fourth idea.

Rationality? It depends on the level of analysis.

Something that is deemed irrational at one level of analysis can be rational, and even worth doing, at another level. When do these considerations apply in the biases and rationality debate? Imagine I want to start a new tech business in Malmö, Sweden. Given the complexity and uncertainty we talked about, my chances are not clearly known. However, as Kahneman showed us over the years, we can use reference class forecasting to get a sense of how competitive the environment around me is. Say that from city statistics I learn that 80 out of 100 similar enough businesses in Malmö fail within the first three years (I made up the numbers, but entrepreneurs know how difficult it is). Then why would a reasonable entrepreneur do such an irrational thing as start a new company? The individual player has incentives in the positive asymmetries at play: if my business fails, I will never sleep under a bridge (thank you, Swedish social democracy), but if I win, I could hit it big. Now consider other levels of analysis: is it rational for a city to invest in entrepreneurship? At a bigger level, it benefits a city, a region, and an industrial ecosystem to support startups, for instance with business incubators, because many parallel attempts are all trying to succeed, and even if the individual can be seen as irrational for not weighing their chances accurately, the collective has much higher chances to innovate and improve society. In conditions that require innovation, lateral thinking, and a lot of diversity of thought, the notion of reducing “noise” and zeroing in on reducing biases for everybody may not be that helpful, and may even be counterproductive. An investor could be irrational, an entrepreneur could be foolish, and a team may be creating a wacky prototype that does not hold promise, but many investors experimenting with multiple portfolios and many teams innovating in novel product areas scan the system with a wider array of attempts, and even if some individual attempts can be dismissed as “irrational”, at a higher level it makes a lot of sense to let these experiments run (as long as we can learn from them).
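A quick sketch of the arithmetic behind this level-of-analysis shift, using the made-up 80%-failure base rate from above and the (admittedly crude) assumption that parallel ventures succeed or fail independently:

```python
# Made-up base rate from the post: 80 out of 100 comparable startups fail
# within three years. Crude assumption of mine: parallel attempts are independent.
P_SUCCESS = 0.20

for ventures in (1, 5, 10, 25):
    p_at_least_one = 1 - (1 - P_SUCCESS) ** ventures
    print(f"{ventures:>2} parallel ventures: "
          f"P(at least one survives) = {p_at_least_one:.1%}")

# 1 venture -> 20.0%, 5 -> 67.2%, 10 -> 89.3%, 25 -> 99.6%.
# A bet that looks poor for any single founder can be an excellent bet for the
# city or ecosystem that hosts many founders at once.
```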

References: To be fair, Kahneman’s Thinking, Fast and Slow recounts an example in which people faced different incentives: while each individual manager was afraid of taking a risk, the CEO wanted all of them to try, as that would have benefited the organization as a whole. I try to think about larger levels through the lens of complexity theories (cities, regions, ecosystems, etc.). Under which circumstances does the notion of reducing noise do more harm than good?


Fifth idea.

What are the boundary conditions of System 1 and System 2?

If we had better information or more time to deliberate, our biases should go away. That, at least, is a tenet of the System 1 and System 2 framework. Kahneman taught us a great deal about our biased minds. For simple problems it may well be that people can easily spot the error in their reasoning and correct their view accordingly. For more complex matters, especially strongly held opinions in which people invest a lot of their identity, it seems unclear to me that a bit more time and analysis will make people see how erroneous their beliefs are. If that were the case, it would be very hard to explain how people who create elaborate conspiracy theories spend hours connecting dots out of thin air and drawing unicorns out of stars that are not even remotely aligned. There are at least two reasons why the notion of System 2 does not hold up so well in situations of complexity. For one thing, there is often no such thing as a definite, final answer, as complex systems lend themselves to multiple and at times equally coherent interpretations, unless we can subject those interpretations to some sort of severe test. But in complexity, as Max Boisot said and as Snowden and Klein have clearly explained, sensemaking is not merely about connecting the dots or figuring out the riddle. There are so many dots that one can conjure almost any idea, no matter how implausible. The second reason is that we may invest a lot of our identity in certain beliefs, and we will defend them far more strongly than a simple math mistake we admit we made. For instance, research on our “tribal” and “political” minds seems to suggest that people with a high level of education, and with a lot of free time to investigate facts accurately, do not necessarily come to less biased conclusions about the world. There is robust evidence that shows just the contrary. We may get trapped by simple stories in complexity, due to motivated reasoning or a desire to protect our sense of identity, in spite of all the time and counter-evidence available to us.

My research question: under what conditions can we easily, and without substantial cost, make our biases go away? In situations where our identity is at stake, my guess is that we dedicate more time and deliberate reasoning to creating confabulated explanations, not less.

References: Kahneman’s Thinking, Fast and Slow lays out the System 1 and System 2 view. Snowden and Thagard speak about the notion of coherence in complexity, in Snowden's blog posts (search for coherence, Thagard, or sense-making) and in Thagard's writings on coherence. There is literature on motivated reasoning that provides evidence against the notion of System 2. Jennifer Garvey Berger's book Mindtraps explains many ways in which we may be inclined to create simple stories in complexity to protect our sense of self.


Sixth idea.

It’s not only what the irrational belief is, but what the irrational belief does.

The heuristics and biases school of thought, brought forward by prominent researchers such as Kahneman, Ariely, and Pinker, seems rooted in a worldview in which epistemic rationality contributes to our wellbeing. This “traditionalist view”, as Prof. Bortolotti calls it, holds that we cannot be happy and well-functioning if we hold on to incorrect beliefs about the world. We have said that not all biases are born alike in terms of the material consequences of holding an incorrect or irrational belief. Furthermore, some biases can shape action in ways that can even be beneficial. Take, for instance, optimistic biases about our health, our romantic relationships, and our chances of succeeding at something. There is empirical evidence that such irrational beliefs not only carry some psychological and epistemic benefits, but can also contribute to our motivation and, under certain conditions, fuel a self-fulfilling prophecy. While we can still hold the view that these beliefs are clearly false or inaccurate (and in some conditions we can judge them as such), Prof. Bortolotti convincingly argues that there are boundary conditions in which optimistically biased beliefs shape our self-esteem, our agency, and our actions in a way that creates future behavior. So much so that we can even close the gap between our incorrect assessment and the future reality. For instance, a person may be over-confident about his prospects of finding a job for which he is underqualified. Research suggests that in some conditions this overconfidence can shape his motivation to such an extent that his pursuit of the job becomes resistant to setbacks, even to the point that his goal becomes objectively more likely over time. Audaces fortuna iuvat: fortune favors the bold. Even when our initial "audacity" would be deemed objectively irrational by some.
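A toy model of the self-fulfilling mechanism (my own illustrative numbers, not Bortolotti's): suppose each job application has a small, fixed chance of producing an offer, and the main thing optimism changes is how long the candidate persists before giving up.

```python
# Toy numbers of my own, purely illustrative: each application independently has
# a 5% chance of producing an offer; optimism changes only how long one persists.
P_OFFER = 0.05

def p_eventual_offer(applications: int) -> float:
    """Probability of at least one offer across independent applications."""
    return 1 - (1 - P_OFFER) ** applications

realistic = p_eventual_offer(5)    # a candidate who gives up after 5 rejections
optimistic = p_eventual_offer(40)  # an over-confident candidate who keeps going

print(f"Gives up after 5 applications:  {realistic:.0%} chance of an offer")
print(f"Persists for 40 applications:   {optimistic:.0%} chance of an offer")

# Roughly 23% versus 87%. The optimist's initial self-assessment may be
# objectively inflated, yet the persistence it buys makes the eventual outcome
# more likely; the gap between the biased belief and reality narrows through
# the behavior the belief drives.
```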

References: Prof. Bortolotti holds nuanced and very rich views on this and does not claim that unwarranted optimism is always a good thing. I recommend you read her great little book, The Epistemic Innocence of Irrational Beliefs, especially chapter 6.

-----

This post has explored six simple notions that problematize the ideas of rationality, biases, and irrationality in situations of complexity and uncertainty. I hope there is some added value in these questions, and that they can spark a much-needed conversation.

I would love to hear your ideas, references, and comments.

Mohammed Alzahrani

Interested in research, monitoring, and investigation of everything related to the Earth, the Earth’s atmosphere, and the links with the universe

6 months ago

Nice

Emre Soyer

Behavioral Scientist / Consultant - Decision Making & Negotiation

2 years ago

Nice post, big ideas. Thanks. You could write many posts about each. Two thoughts that immediately came to mind (based on my perceptions and some research): -- Causal opacity is underappreciated by decision makers. We are great storytellers, but that makes us too eager to assign the wrong causality and then believe it for a long time. -- Exposing oneself to positive asymmetries is a great idea but often involves a lot of small-yet-annoying negative shocks, which can prevent many people from daring such exposure.

Marilyn Mehlmann

Co-Founder at Legacy17 cooperative association

2 years ago

Helpful - and challenging. We're pondering the design of a short (3-4 days) workshop, tentatively called 'Wise Choice', to better equip both students and activists for individual and collective decision-making. How can such an event enable transformative learning, not least in the sense of wider horizons and better-informed risk perception, rather than leading to inaction/decision paralysis when complexity appears insurmountable? If we don't know anything, why take action at all? (Forgive the hyperbole...) In the worst case: could the very act of encapsulating this further BoK in a workshop contribute to the perception of either rationality or impossibility?

Ange Ballard (PhD, MEd)

Consulting, partnering, and managing relationships and organisational processes to embed lived experience and co-production

2 years ago

Transrational as a concept can cut through the rational/irrational duality, a bias in itself.

Ange Ballard (PhD, MEd)

Consulting, partnering, and managing relationships and organisational processes to embed lived experience and co-production

2 years ago

An answer to the research question on biases (your 5th idea, and based on my research using SenseMaker and Kahneman) is: when people are not personally invested in them. E.g. public servants, and also property investors, unwilling to see past their own biases about rental system issues/data because these challenge their (literal) investment in the system.
