3 lessons for behavioural scientists (from someone who doesn't understand behavioural science)
Image by athree23 from Pixabay

The first time I attended a lecture on decision-making, I sat up in my seat. I devoured all the information on the framing effect, the availability heuristic, and all the other ways in which humans can be stupid. At the end, I rushed towards the lecturer and rained a bunch of questions down on her, in a manner that resembled an interrogation more than student curiosity. She recommended I pick up a handy little book called ‘Nudge’.

Flicking through the pages of Nudge, I discovered the exciting world of behavioural science and behavioural economics. This was like a collection of all the ways in which people are irrational; an encyclopaedia of quirks that I could memorize and try to find in the real world. When I finished the last page of the book, I put it down and told myself, “This is awesome. I wanna study the science of human stupidity”.

And so I embarked on a journey to educate myself on everything behavioural science – the social and cognitive principles. I studied subjective social perceptions, the psychology of risk, and even computational approaches to decision-making. I loved every single bit of it – so the next logical step was signing up for a master’s degree in behavioural science.

It’s been over a year since I scribbled that application statement, filled with the ramblings of an over-excited final year student who was on a mission to change the world. My passion for behavioural science is even bigger now than it was back then, and I still think it’s one of the best tools we have to bring about positive change. But I have also opened my eyes to the ways in which we, as behavioural scientists ourselves, can sometimes make sub-optimal decisions. So, by the delusions of grandeur vested in me by the Dunning-Kruger effect, and despite being barely halfway through my master’s, I give you three lessons that I’ve picked up along the way.

1.      Mental shortcuts don’t (always) make us stupid

It didn’t take long before I realised there’s more to behavioural science than just irrationality. Broadly speaking, the discipline is based on two categories of human inference: biases and heuristics. Biases are systematic flaws that prejudice our decision making. For example, stereotyping is a form of bias, because it extrapolates minimal observations into flawed conclusions about other people.

Heuristics, on the other hand, are mental shortcuts, or simple rules of thumb. The world is complex, and we can’t possibly analyse every single bit of information around us. Heuristics help us make decisions swiftly and bypass the irrelevant stuff. For example, it’s reasonable to infer that someone in overalls is a manual worker, and someone in a suit works in an office. We don’t have to scan their hands for calluses, take their business card, or follow them to their place of work (that would be downright creepy). Heuristics aren’t perfect – the suited man could be a construction worker who is attending a special occasion – but they’re usually good enough.

So far, so good. Rules of thumb are common knowledge. However, many people seem to think that heuristics always sacrifice some accuracy for efficiency: we don’t need to be 100% accurate, the thinking goes – we can get away with being wrong a little bit of the time, because the loss in accuracy is more than made up for by the gain in efficiency and time saved[1].

Never in the history of science have we used future data to create our models.

Here’s where we’re wrong though. We love to think that the more data we feed into a model, the more accurate it gets. In fact, so strong is this tendency that we’ve had to stress parsimony as a core scientific principle, lest we pile on so many variables and so much data that our models become unworkable. The idea that more data buys accuracy at the expense of efficiency seems to be accepted as a logical rule (a heuristic, if you like).

Let’s take investments as an example. On one hand, the 1/N heuristic is a simple way of managing your investment portfolio: you split your money equally across the N assets you hold. On the other hand, Markowitz’s mean-variance portfolio is a Nobel-winning model that weighs assets against one another, maximizing expected returns for a given level of risk, with risk modelled as the variance of asset returns over a given time period. In terms of complexity, it’s easy to see which model looks more rigorous. The 1/N rule looks like a rookie’s arbitrary way of playing the stock market, whereas the Markowitz portfolio accounts for stock market fluctuations and asset risk. Which model do you think would perform better?

If you guessed the Markowitz model, guess again. When the 1/N rule was compared against 14 optimizing models (including the Nobel-winning one), none of them consistently outperformed the heuristic. In fact, the 1/N rule was the best performer for risk premiums, and the second-best performer when it came to the total value of stocks traded[2].
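To get a feel for how this can happen, here is a toy Python simulation (this is not DeMiguel and colleagues’ actual methodology – the number of assets, the return distribution, and the use of simple tangency weights are all illustrative assumptions). The mean-variance weights have to be estimated from a limited window of past data, and that estimation error can erase the model’s theoretical edge over 1/N on fresh data:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T_IN, T_OUT = 10, 120, 1200  # assets, estimation window, evaluation window

# True (unknown) market parameters: similar expected returns, correlated noise.
true_mu = rng.uniform(0.002, 0.008, size=N)
A = rng.normal(size=(N, N))
true_cov = 0.002 * (A @ A.T / N + np.eye(N))

in_sample = rng.multivariate_normal(true_mu, true_cov, size=T_IN)
out_sample = rng.multivariate_normal(true_mu, true_cov, size=T_OUT)

# Mean-variance (tangency-style) weights, estimated from the in-sample window.
mu_hat = in_sample.mean(axis=0)
cov_hat = np.cov(in_sample, rowvar=False)
w_mv = np.linalg.solve(cov_hat, mu_hat)
w_mv /= w_mv.sum()  # normalise weights to sum to 1

# 1/N heuristic: equal weights, no estimation whatsoever.
w_naive = np.full(N, 1.0 / N)

def sharpe(weights, returns):
    """Out-of-sample Sharpe ratio of a fixed-weight portfolio."""
    r = returns @ weights
    return r.mean() / r.std()

print(f"mean-variance Sharpe: {sharpe(w_mv, out_sample):.3f}")
print(f"1/N Sharpe:           {sharpe(w_naive, out_sample):.3f}")
```

Depending on the random seed, the “naive” 1/N portfolio often matches or beats the estimated mean-variance portfolio out of sample, because the optimizer partly fits noise in its estimation window.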

This might come as a surprise. How can less data lead to more accuracy? The answer, according to Gigerenzer and Gaissmaier, is simple. We like to call models “predictive”, but they’re not exactly that. Models are fitted to data retrospectively, complete with calculated coefficients and effect sizes. The predictive value of a model is, in itself, an estimate that carries its own error variance[3]. Some models are genuinely good at predicting future patterns. But keep in mind that never in the history of science have we used future data to create our models – future data doesn’t exist yet.

The hypothetico-deductive model as used in scientific research. Data are collected to see whether they fit our predictions.

Diagram taken from Hopayian, K. (2004). Why medicine still needs a scientific foundation: restating the hypotheticodeductive model – part two. British Journal of General Practice, 54(502), 402-403.


As opposed to formal models, heuristics have been wired into us over the course of our evolutionary journey. You can think of heuristics as models optimised over millennia of natural data. Therefore, while we can use good models to advance our predictive ability, number-based models are not automatically more accurate than heuristics simply by virtue of quantification. To assess a model’s true predictive value, we should look at its out-of-sample performance – its ability to predict using data that weren’t part of creating it in the first place.
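The fit-versus-prediction distinction can be made concrete with a minimal Python sketch (the linear “world”, the noise level, and the polynomial degrees are all illustrative assumptions on my part). A more flexible model always fits the data it was built on at least as well, yet can predict fresh, unseen data far worse:

```python
import numpy as np

rng = np.random.default_rng(1)

# A noisy linear "world": the true underlying pattern is simple.
def world(x):
    return 2.0 * x + rng.normal(scale=1.0, size=x.shape)

x_train = np.linspace(0, 1, 15)
y_train = world(x_train)
x_test = np.linspace(0, 1, 200)   # "future" data the model never saw
y_test = world(x_test)

results = {}
for degree in (1, 12):
    # Fit retrospectively, then check in-sample vs out-of-sample error.
    coeffs = np.polyfit(x_train, y_train, degree)
    rmse_in = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    rmse_out = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    results[degree] = (rmse_in, rmse_out)
    print(f"degree {degree:2d}: in-sample RMSE {rmse_in:.2f}, "
          f"out-of-sample RMSE {rmse_out:.2f}")
```

The degree-12 polynomial hugs the 15 training points almost perfectly, but it has memorised their noise, so its error on fresh data balloons – which is exactly why predictive value should be judged out of sample.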

Lesson 1: Efficiency and accuracy aren’t mutually exclusive. Heuristics are fundamentally parsimonious models. Patterns change, and just because a model was built on more data doesn’t mean the patterns it unearthed will hold in the future. (But some models are better than others – and plenty of them are better than heuristics, too.)

2.      Be wary of interactions – context is your friend

A while ago, I was browsing LinkedIn when I came across a post criticising Wikipedia’s approach to donation requests. Wikipedia was trying to get users to fund them by telling them only 1% of visitors donate. I was baffled. Intuitively, Wikipedia’s message looked good. It made me feel guilty every time I visited, and I’d even be a bit upset if I left without donating. If only 1% gave, I wanted to be more like that 1%. Wikipedia helped me, so I should help them back. It struck me as a bit of a surprise that people were now criticising this approach.

The notorious Wikipedia donation request. Image attribution: https://recessionsolution.com/2017/12/20/charles-dickens-the-accidental-content-marketer-for-christmas/wikipedia-donation-request/

Of course, in behavioural science, what Wikipedia did was a big no-no. One of the most powerful ways of getting people to do something is thought to be convincing them that everyone else is doing it as well[4] (although some confound concerns have been raised[5]). For example, Goldstein, Cialdini and Griskevicius[6] wanted to get hotel guests to conserve energy by reusing their towels, instead of using new ones daily. They found that when they put up signs indicating that 75% of hotel guests did this, more people started reusing towels (a finding recently replicated in German hotels[7]). The effect was even more pronounced when the sign was narrowed down to read that 75% of the guests in that very room had reused their towels in the past. This technique is so ubiquitous that it has its own name – social norms marketing.

What Wikipedia did was violate the principle of social norms. Instead of telling people that everyone is donating, they told them nobody is. Behavioural scientists had to make things right. So, many flocked to the comments, suggesting improvements to Wikipedia’s messaging. Some even highlighted Wikipedia’s need to hire behavioural scientists.

Only, Wikipedia had actually tested many messages. This was the one that got them the most donations.

I’m not sure why, but Wikipedia’s method felt right to me. It felt powerful – it triggered an empathic response. I suppose that in hindsight, I could pull out a few theories to explain it. For example, this message could play on reciprocity[8] – I’m using the website but I’m not giving anything back, which isn’t fair. Or, it could play on reciprocal altruism. These people are selflessly providing me with so much information. Am I just going to ignore their plea?

Psychology has a million different theories. If we really tried, we could apply half of them at any given time. Consider pluralistic ignorance[9], which goes something like this. My friends and I are discussing where to go for dinner. I’m really feeling like tacos. My friend Bob is also in the mood for tacos. So are my friends Sally, Heather, and Sam. However, we usually tend to go for Chinese. So, everyone thinks that everybody else is in the mood for Chinese. When we’re discussing where to go, we all decide to be polite and suggest Chinese – what we think the others want. During our trip to the Chinese restaurant, however, every single one of us is internally thinking, “Bummer, I really wanted some tacos”. Pluralistic ignorance is when everyone privately thinks the same thing, but each person believes that everyone else thinks something different. (Definitions are not my strong suit, sorry.)

Pluralistic ignorance is more common in everyday life than you'd think.

Image attribution: https://azilliondollarscomics.com

Now, consider the opposite. The false consensus effect[10] is when someone believes that everybody else thinks the same as them. I recently read that people in Texas often eat pickles at the cinema. When some Texans were told that they’re really the only ones who do this, they were quite shocked. So was I. I thought everyone just sticks to popcorn.

The heart is suffering from a bad case of false consensus here.

Image attribution: https://theawkwardyeti.com

Pluralistic ignorance and the false consensus effect cannot occur simultaneously. And that, in a way, is the whole point of psychology: depending on context, one phenomenon might take hold over another – sometimes, the same phenomenon might even produce vastly different results.


For example, Schultz and colleagues[11] wanted to reduce the energy consumption of households in San Marcos, California. They gave each household data on its own energy consumption alongside the average consumption of households in the neighbourhood. Households consuming above the average did indeed reduce their consumption. However, households with below-average consumption saw this as “leeway” – so they increased their consumption to match the average. It wasn’t until the researchers added feedback – a smiling face for low consumption, and a sad face for high consumption – that the low-consumption households maintained their power usage. Even social norms can backfire.

So, we have conflicting theories, interventions that can work either way, and even multiple promising interventions. How do we know which theory to apply, whether our plan will come back and hit us like a boomerang in the face, or if we’re picking the best possible intervention?

Using context and intuition.

I spent the first section talking about why heuristics can be so important. Here, let’s put them into context. Social norms marketing in the Wikipedia message would look something like this: “75% of our users donate to us. Please donate as well”. This would be signalling to me that unless I donate, I’m a deviant. But, here’s the thing. Wikipedia has 38 million registered users – so if I had to take an incredibly conservative guess at the number of unregistered users, I’d estimate somewhere north of 100 million. If Wikipedia is telling me that 75% of its 100+ million users are donating, my first thought would be “Great. Why do I care?”.

Imagine someone flashing their millions of monthly revenues and then asking for another donation on top. I’ll take my norm deviance with a side of coffee, thank you very much.

This line of thought is intuitive, not data-driven. If we look at things in their context, it makes intuitive sense that the 1% message would work much better than the 75% one.

It’s important to keep in mind that many psychological findings are generated in labs, under heavily controlled conditions. But in the real, complex world, variables will interact with one another, producing vastly different results than if they were acting in isolation. In the real world, there are infinite factors all acting at once. Complexity sometimes makes our intuition our most reliable insight.

Lesson 2: Behaviour is context-based and situational. Before we dive deep into our models and theories, it’s sometimes worth taking a vantage-point look at the bigger picture. Some of the best ideas can be generated by intuition (and refined through systematisation).

3.      Boxology does not equal rigour

Psychology has gotten a bit of a bad rep for its lack of quantified models. Take one of the most influential models in cognitive psychology – Working Memory[12]. The model has stood the test of time, yet it only tells us that working memory has a visual, an auditory, and an episodic component. It says nothing about the dynamics between them.

The Working Memory model. Diagram taken from Baddeley, A. (2010). Working memory. Current Biology, 20(4), R136-R140.

The replication crisis has highlighted the need for quantification to many scholars. Everything must be considered in the context of everything else – and unless we consider indirect links in our models, such as mediators, covariates, and interactions, we are missing the bigger picture.

We do tend to place our focus on “box” models in behavioural science. These are idea-generative tools of incredible utility, but it’s important that we don’t overstate their applicability. Take the (arguably) most widely used model in behavioural science – COM-B. First, we identify a target group. Then, we observe how we can change the target’s Capability (psychological or physical), Opportunity (physical and social), or Motivation (reflective and automatic) to enact the desired behaviour change[13].


Like many models, COM-B is also guilty of “boxology”. As useful as it is for idea generation, it is quite limited when it comes to insights on how effective an intervention will be, the conditions under which it will work, and the dynamics between its components. It simply helps us highlight different factors of behaviour. On the other end, the PATH (Problem, Analysis, Theory, Help) model involves extensive theorising, then pruning obsolete ideas to keep only the (quantifiably) largest predictors of behaviour change[14]. While COM-B is more of a forward-generative model, PATH is more of a backward-engineering one (start from the end, reverse-engineer the solution). Both are useful, but for quite different purposes.

Models exist to open our minds to more possibilities, not shut them against alternatives.

COM-B, like EAST and MINDSPACE, to name a few, should be viewed as an idea generator – a framework of considerations, if you like. For someone who knows their psychology, Capability, Opportunity and Motivation are good nudges to promote multifaceted thinking. However, it’s not the be-all and end-all of behavioural interventions. Ideas don’t necessarily have to fall into one of these categories. Yet, lots of people draft out volumes of brilliant ideas – and then put them on post-it notes, frantically sprinting along the length of a whiteboard trying to figure out where each one goes.

Our tools and classifications can never be exhaustive of behaviour. They exist to open our minds to more possibilities, not shut them against alternatives. Models should be used for what they are – idea nudges. It’s fine if some ideas stay out of our boxes, or if our charts get a bit messy sometimes. Attempting to categorise non-categorisable things just takes time away from more fruitful ideation. And, when it comes to systematising, there are a thousand other models we can look to. Each one serves its purpose, but there will always be that blank space that no model accounts for. Behavioural scientists are humans. Our fast-and-frugal intuition is the only model that can explore that blank space; and that blank space may sometimes give us the much needed answers to the difficult problems that we scratch our heads over.

While by no means an exhaustive collection of models and frameworks, there is a lot of blank space left unaccounted for here, which we can only tap into using intuition. Many of these models - such as HOOK and B-MAT - also share a lot of elements.

Lesson 3: Some models help create ideas, some models help devise solutions, but no model should dictate our approach to interventions. Creativity, intuition, and our educated “gut feeling” should also be accepted as a fundamental part of devising behavioural interventions.

The cynical naivety vantage point

Systematisation should not supersede creativity in the science of creative ideas.

It might seem odd that I’m writing an article on lessons for behavioural scientists at the start of my journey as a behavioural scientist myself. After all, there are people who have been practicing in this field since the 1970s, when it first emerged. But it's easy to get tangled up sometimes. If certain methods, models, or frameworks become the norm in the industry, it’s easy to follow them without question. Eventually, it’s easy to confine ourselves. So, I decided to write this now, while I’m still naïve enough to be (relatively) free from the grasp of field conventions.

I have less experience than many people involved in behavioural science, which might just be what makes me cynical enough to provide these recommendations. It might not make sense. But, as Rory Sutherland kept saying when I met him, much of behavioural science has been founded on ideas that don’t make sense. And it might be unfruitful to try and make sense of something that doesn’t make sense.

I suppose these three lessons really fall under one key message: Just as systematic analysis is key, some degree of intuition is also crucial. Sometimes, it pays to make use of it rather than try and hold it back. Systematisation should not supersede creativity in the science of creative ideas.

Then again, I’m only a master’s student, so what the hell do I know?



References

1.      Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision-maker. New York, NY: Cambridge University Press.

2.      DeMiguel, V., Garlappi, L., & Uppal, R. (2009). Optimal versus naïve diversification: How inefficient is the 1/N portfolio strategy? The Review of Financial Studies, 22(5), 1915-1953.

3.      Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62(1), 451-482.

4.      Perkins, H. W., & Berkowitz, A. D. (1986). Perceiving the community norms of alcohol use among students: Some research implications for campus alcohol education programming. The International Journal of the Addictions, 21(10), 961-976.

5.      Harries, T., Rettie, R., Studley, M., Burchell, K., & Chambers, S. (2013). Is social norms marketing effective? A case study in domestic electricity consumption. European Journal of Marketing, 47(9), 1458-1475.

6.      Goldstein, N. J., Cialdini, R. B., & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35(1).

7.      Bohner, G., & Schlüter, L. E. (2014). A room with a viewpoint revisited: Descriptive norms and hotel guests’ towel reuse behavior. PLoS One, 9(8), 1-7.

8.      Strohmetz, D. B., Rind, B., Fisher, R., & Lynn, M. (2002). Sweetening the till: The use of candy to increase restaurant tipping. Journal of Applied Social Psychology, 32(2), 300-309.

9.      Katz, D., & Allport, F. H. (1931). Students’ attitudes: A report of the Syracuse University reaction study. Syracuse, NY: The Craftsman Press.

10.  Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13(3), 279-301.

11.  Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2007). The constructive, destructive, and reconstructive power of social norms. Psychological Science, 18(5), 429-434.

12.  Baddeley, A. D., & Hitch, G. (1974). Working memory. Psychology of Learning and Motivation, 8(1), 47-89.

13.  Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(42), 1-11.

14.  Buunk, A. P., & van Vugt, M. (2013). Applying social psychology: From problems to solutions. London: SAGE.
