Better the devil you know than the devil you don’t
What is it?
Imagine I offer you a little lottery game where you can win 10 dollars if you pick a black ball out of one of two boxes that I put in front of you. Each of the two boxes contains 10 balls. In box A there are 5 black and 5 red balls; in box B there is a random mix of black and red balls (anywhere from 0 black and 10 red to 10 black and 0 red).
You can now choose one box to pick the ball from. As said: if the ball is black, you win the 10 dollars. Otherwise, you win nothing. Which box would you choose?
If you are like most of us, you choose box A. Which means – logically analyzed – that you think the likelihood of winning the 10 dollars with a black ball is higher with box A than with box B. Which also means – logically analyzed – that you should go for box B if I had offered you the 10 dollars for a red ball.
Still with me? OK, imagine I offer you a little lottery game where you can win 10 dollars if you pick a red ball out of one of two boxes (for the rest of the description, just jump back to the top of the article). Again, you would choose box A. Which means – logically analyzed – that you take box A for the higher chance to win with a black ball and box B with red, but by the very same argument you also take box A for the higher chance to win with a red ball and box B with black. That's the famous Ellsberg paradox, which nicely illustrates the ambiguity effect.
Why does it happen?
The reason why most of us would prefer box A with 5 balls of each color to box B with a random distribution (which is statistically identical) is simply that with box A we (think we) know the probability of winning. Our brain hates uncertainty, ambiguity, and risk. It loves structure, clarity, and simple explanations.
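If you want to check the "statistically identical" claim, here is a minimal sketch in Python. It assumes box B's mix is drawn uniformly at random, anywhere from 0 to 10 black balls – the uniform prior is my assumption, since the article only says "a random proportion":

```python
from fractions import Fraction

# Box A: 5 black balls out of 10.
p_black_a = Fraction(5, 10)

# Box B: assume each composition k = 0..10 black balls is equally likely
# (uniform prior - an assumption). Average the chance of drawing black
# over all eleven possible compositions.
p_black_b = sum(Fraction(1, 11) * Fraction(k, 10) for k in range(11))

print(p_black_a, p_black_b)  # 1/2 1/2 - identical odds, different feelings
```

Averaged over all possible compositions, box B offers exactly the same 50% chance of a black ball as box A. The difference is only in how much we know, not in the odds.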
At this point we need to distinguish the ambiguity effect from risk (or loss) aversion. A risk is typically something we can calculate or estimate. It is a lack of security. An ambiguity is a lack of information. And this has quite an impact on our decision making.
How do we try to overcome ambiguity? We seek more information. And that can be a problem. Sometimes more information leads to better decisions. But in many cases it is not at all clear where we should get the information from, how valid it is, how much effort it takes to get it (and how much time and effort would be wasted), and, even worse, how much this additional information would really help with our decision.
In fact, there are studies showing that information which is irrelevant to our decision can still influence that decision – in an irrational way, as the relevant information becomes less important with all the irrelevant stuff we get to know. This is called the "dilution effect". One study demonstrated this with two (hypothetical) students, Anna and Berta. Subjects were asked how likely it is that Anna or Berta would fail the next exam. One group was given only relevant information about Anna, such as how often she had failed before and how little she invests in learning. Another group got exactly the same information about Berta, but also some additional "insight" which was irrelevant to the question at hand (such as whether she prefers pizza to pasta or cats to dogs). Interestingly, people rated the likelihood of Anna failing the next exam significantly higher than that of Berta; in other words, the irrelevant information influenced how much the relevant information was taken into consideration.
There is another negative effect of trying to reduce ambiguity with information, which is called information bias. Scientists demonstrated this effect with an experiment in which they presented subjects with a diagnostic problem involving fictitious symptoms, tests, and diseases. The majority of subjects decided to gather more information and have additional tests performed, even though these tests did not provide any additional relevant information. For more on this, check the comments (bias alert: you will definitely get more information by reading the comments, but is it really relevant for you? Check for yourself and decide!)
How can we avoid it?
The simple answer is: embrace ambiguity, stop trying to reduce it by information seeking. The honest answer is: that’s difficult. The problem is that we never know if additional information really helps us or not. We don’t know what we don’t know, so how should we know that we will not find anything worth knowing if we continue with our research?
My way out of this issue is to set a timer. I do my research thoroughly within a certain time frame, gather as much information as I can, but then I stop. And make the decision. Even if I can still feel a lot of ambiguity.
What’s your thinking around that?
Does this sound familiar to you? Any own experiences or stories you would like to share? Please start a conversation in the comments section!
From the comments – the example about information bias and the value of information, taken from Jonathan Baron's (2006) book "Thinking and Deciding": A female patient is presenting symptoms and a history which both suggest a diagnosis of globoma, with about 80% probability. If it isn't globoma, it's either popitis or flapemia. Each disease has its own treatment, which is ineffective against the other two diseases. A test called the ET scan would certainly yield a positive result if the patient had popitis, and a negative result if she had flapemia. If the patient has globoma, a positive and a negative result are equally likely. If the ET scan were the only test you could do, should you do it? Why or why not? (continued in the reply)
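As a side note, here is a rough Bayesian sketch of that diagnostic problem in Python. The 80% prior for globoma and the behaviour of the ET scan are taken from the comment above; splitting the remaining 20% evenly between popitis and flapemia is my own assumption for illustration (the takeaway is the same for any split):

```python
from fractions import Fraction

# Priors: 80% globoma is given in the comment; splitting the remaining
# 20% evenly between popitis and flapemia is an assumption.
prior = {"globoma": Fraction(8, 10),
         "popitis": Fraction(1, 10),
         "flapemia": Fraction(1, 10)}

# P(positive ET scan | disease), as described: certain for popitis,
# impossible for flapemia, 50/50 for globoma.
p_positive = {"globoma": Fraction(1, 2),
              "popitis": Fraction(1),
              "flapemia": Fraction(0)}

def posterior(result):
    """Apply Bayes' rule for a given scan result ('pos' or 'neg')."""
    likelihood = {d: p_positive[d] if result == "pos" else 1 - p_positive[d]
                  for d in prior}
    evidence = sum(prior[d] * likelihood[d] for d in prior)
    return {d: prior[d] * likelihood[d] / evidence for d in prior}

for result in ("pos", "neg"):
    post = posterior(result)
    most_likely = max(post, key=post.get)
    print(result, {d: str(p) for d, p in post.items()}, "->", most_likely)

# In this sketch, both branches leave globoma as the most probable
# diagnosis (4/5), so the scan result cannot change the treatment choice.
```

In this sketch, whatever the scan shows, globoma stays the most probable diagnosis, so the extra test cannot change the treatment decision – more information, but no value for the decision at hand.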