Challenge #4 - Throwing the 'baby' out with the bath water

In Challenge #3 Are Change Management Models built on solid evidence? I suggested that there is a lack of evidence supporting the claims made by OCM (Organisational Change Management) models. This lack of a scientific mind-set within change management makes it vulnerable to all sorts of folklore and myths. So far I have found around 15 critical claims within OCM that are not supported by evidence.

But it is not just the lack of evidence behind the claims that is the worry. According to James Ladyman (Philosopher of Science at Bristol University), it is not good enough simply to collect the evidence and show that a claim doesn't hold true. Sometimes the science is just wrong, or theories cannot be pinned down; people hold onto dysfunctional theories and get caught up in 'groupthink'.

No matter how often the claim is falsified, we stick together and say that we shouldn't 'throw out the baby with the bath water'.

Who would ever throw a baby out?

Before we get into whether we should throw out babies with bathwater, I want to spend some time exploring this analogy. Can we compare a psychological theory that lacks evidence to a baby? Clearly it is emotive language, and it puts the claim challenger (the skeptic) on the back foot because it rests on a truism - who would ever throw a baby out?

But the analogy also displays a lack of understanding of how empirical testing of a theory works in science. In the conceptual world of psychology, you can never say a theory is 'right' or 'wrong' – there is just a 'continuum of confidence'. There is no truth; we can only work in likelihoods of something being true. A theory is just an idea – a baby is a baby. You wouldn't throw out bath water even if you thought there was a 1% chance it contained a baby. But (hopefully) you would throw out an idea if you thought there was only a 1% chance of it being true. Yet people don't. It seems that, like our babies, we fall in love with our beliefs and cannot let them go.

To help us distinguish claims that might have some scientific or evidence-based support from those that are potentially just BS, Ladyman breaks down this lack of scientific rigour into four areas: bad science, pseudoscience, science fraud and truthiness (or just plain old BS).

Bad Science

In 2005 Kenan Distinguished Professor of Psychology Barbara Fredrickson and Marcial F. Losada published a formula for flourishing – the positivity ratio. Fredrickson states that 'all "flourishing" individuals should have positivity ratios above 2.9013, while all "nonflourishers" should have positivity ratios below 2.9013'. A book was published and given rave reviews by the founding fathers of positive psychology, Martin Seligman and Mihaly Csikszentmihalyi. The positivity ratio was applied in education, business and marriages. Some went so far as to claim that the positivity ratio was actually equal to pi (3.14)!

But further independent analysis by Brown, Sokal and Friedman in 2013 found that the mathematical methods used did not support the positivity claim. In her defence Fredrickson used the baby-and-bathwater analogy, saying that 'it will be important to keep close hold of the slippery baby (positivity ratio) while we drain the somewhat murky bathwater' and that 'this infant may seem a bit sullied, in my estimation a good scrubbing reveals a healthy baby well worth letting grow up'. She softened her claim and suggested that a different experimental design would have been appropriate (albeit without explaining why she didn't use this design in the first place!): 'I've come to see sufficient reason to question the particular mathematical framework Losada and I adopted to represent and test the concept of a critical tipping point positivity ratio'.

Brown, Sokal and Friedman responded again to reaffirm that the crux of the argument wasn't so much the maths but whether you can really measure positive emotions in any meaningful way at all. The question went to the heart of the positive psychology movement. They state: 'If someone laughs at a joke on TV, eats an ice-cream, sees their dog get run over, and watches a nice sunset, are they at a 3 to 1 ratio of positive to negative emotions and flourishing? And so it is with any comparison of emotions, as who can provide a value-free metric on which to draw any comparison in a universal–invariant way?'
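To make the debunked claim concrete, here is a minimal sketch in Python of how the positivity ratio was supposed to work: tally positive and negative emotional experiences, divide one by the other, and compare the result with the claimed 2.9013 tipping point. The emotion counts below are invented for illustration; the point of the Brown, Sokal and Friedman example is that the inputs themselves are the problem – there is no value-free way to count a laugh at a joke against seeing your dog run over.

```python
# Hypothetical illustration of the (debunked) positivity-ratio calculation.
# The emotion tallies are invented; 2.9013 is the "tipping point" quoted
# from Fredrickson & Losada (2005).

CRITICAL_RATIO = 2.9013  # the claimed flourishing threshold


def positivity_ratio(positive_count: int, negative_count: int) -> float:
    """Ratio of positive to negative emotional experiences."""
    if negative_count == 0:
        return float("inf")  # no negative emotions recorded at all
    return positive_count / negative_count


# Brown, Sokal & Friedman's example: a laugh, an ice-cream and a sunset
# (positive) versus seeing your dog run over (negative) -> a "3 to 1" ratio.
ratio = positivity_ratio(positive_count=3, negative_count=1)
label = "flourishing" if ratio > CRITICAL_RATIO else "non-flourishing"
print(f"ratio = {ratio:.2f} -> {label}")

# The arithmetic is trivial; the objection is that treating such different
# experiences as equal, countable units is not meaningful in the first place.
```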

Pseudoscience

But the positive psychology movement still clings to the ratio. Losada continues to sell the positivity ratio as part of his consultancy services, while Seligman and Fredrickson still hold onto the idea that there is an optimal ratio for flourishing. But Friedman, Brown and colleagues didn't stop there. In a 2018 article they delved deeper into other claims made by Fredrickson and the positive psychology community. They debunked other claims, such as that meditation leads to better physiological outcomes (here is the debunking) and that healthy thoughts make you physically healthier (here is the debunking).

James Ladyman suggests that pseudoscience is born from a 'social organisation of enquirers that doesn't have the characteristics to be scientific'. This is potentially the case for positive psychology, where 'the genius of the positive psychology movement' and its luminaries doggedly stick to dodgy claims. This is particularly troubling for a fledgling branch of psychology that lamented its predecessor (humanistic psychology) for lacking scientific rigour. It is also troubling for students of positive psychology (like me and possibly Nicholas Brown) who are paying thousands of pounds in tuition fees.

Science Fraud

According to Ladyman, science fraud is when science is used to deliberately deceive – 'someone pretending to know when they know they don't'. I guess there is a fine line between pseudoscience and science fraud – how do we know that someone is intentionally trying to deceive us? There is the case of Diederik Stapel, a Dutch psychologist who admitted to fabricating 15 years of research. This is an extreme case, but here is an article about the existence of extra-sensory perception which couldn't be replicated. Based on this study, do you believe in extra-sensory perception? Probably not, because it is such an extraordinary claim, but what about other studies that make more mundane claims? One study on scientific misconduct found that almost 56% of researchers admitted to collecting more data after seeing that the initial test was not statistically significant, and a further 50% admitted to selectively reporting studies that 'worked'. So it is not surprising that the Psychology Reproducibility Project found that only 39% of 100 studies could be successfully replicated. So when we read that more believable theories, such as unconscious behavioural biasing and ego depletion, have also failed the replication test, was this just bad science or science fraud?
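The '56% collected more data after a non-significant result' finding matters because that practice (often called optional stopping) inflates the false-positive rate well beyond the nominal 5%. Here is a rough simulation sketch, assuming standard numpy and scipy and invented sample sizes, of testing a null effect and then 'topping up' the data and re-testing whenever the first result isn't significant:

```python
# Rough simulation of how "collecting more data after a non-significant result"
# inflates false positives. The true effect is zero, so every "significant"
# result is a false positive. Sample sizes and simulation count are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_SIMS, N_INITIAL, N_TOPUP, ALPHA = 5_000, 30, 30, 0.05

false_positives = 0
for _ in range(N_SIMS):
    a = rng.normal(0, 1, N_INITIAL)   # group A, true effect = 0
    b = rng.normal(0, 1, N_INITIAL)   # group B, true effect = 0
    _, p = stats.ttest_ind(a, b)
    if p >= ALPHA:                    # not significant? collect more data...
        a = np.concatenate([a, rng.normal(0, 1, N_TOPUP)])
        b = np.concatenate([b, rng.normal(0, 1, N_TOPUP)])
        _, p = stats.ttest_ind(a, b)  # ...and test again
    if p < ALPHA:
        false_positives += 1

print(f"False-positive rate with one 'top-up': {false_positives / N_SIMS:.3f}")
# This typically comes out noticeably above the nominal 0.05 even with a single
# extra look at the data; repeated peeking pushes it higher still.
```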

Truthiness (or just BS)

Then there is truthiness – making people believe something is true when you know it is not backed up by science – a polite way of saying BS. According to Ladyman, these theories can be very difficult to prove false because they are generally truisms. These are 'grand theories that are so global, complicated and "fuzzy" that they can be used to explain anything – theories constructed more for emotional support because they are not meant to be changed or discarded' (Stanovich, p. 31).

Take for example Simon Sinek's 'Start with Why'. On page 57 he claims that 'When you force people to make decisions with only the rational part of their brain, they almost invariably end up "overthinking".' How would you 'force' people to use certain parts of their brain? And who controls our brain anyway, and exactly where is the rational bit located? These questions aside, to support this statement Sinek 'scientises' his claim by referring to Richard Restak's book The Naked Brain. Restak's book doesn't mention 'overthinking' (I read it twice, but maybe I missed it). But 'overthinking', or rumination, is a term in psychology 'often defined as repetitive thinking about negative personal concerns', which can lead to depression, anxiety, self-harm and substance abuse – not the sort of outcome you would expect from deciding which TV to buy! So, what does Sinek mean? What Restak does refer to is research into people's 'impulse systems' and the idea that 'a person's relationship with a brand resembles an addict's relationship to their drug of choice' (p. 173). So, buying a Coke, an Apple computer or a Harley is simply a case of satisfying a compulsive WANT – there is no meaning – there is no WHY. Creating WANT is the shortcut to brand loyalty, not taking someone on an existential journey to find their WHY.

Maybe what Sinek means by 'overthinking' is how the brain uses heuristics – general rules of thumb, or judgements that govern our behaviours. But this is a hugely complex area, and a debate that has been raging between Gerd Gigerenzer (author of Risk Savvy) and Nobel Prize winner Daniel Kahneman (author of Thinking, Fast and Slow) for over 20 years.

In fact, Sinek doesn't need to use neuroscience to 'scientise' his position. How to make work more meaningful has been an active area of research for the past 20 years. Being a fledgling area of study, it lacks a clear consensus on what meaningful work means, but it does throw up some interesting paradoxes. The main point here is that Sinek is trying to provide a grand theory for something that is hugely complicated – it appeals to our emotions (we must find meaning in life), but scratch the surface and it is confused and fuzzy.

Should we keep the 'baby'?

So, if the 'slippery' baby is based on bad science, pseudoscience, science fraud or truthiness, is it worth 'scrubbing' it up and keeping hold of it?

Take for example the Kübler-Ross curve. The book that introduced it has been described as "one of the most influential books in the history of psychology". But if the curve were a valid construct, you would expect it to explain people's reactions to death at least, say, 50% of the time. It actually explains people's reactions to death (i.e. going through a pattern of specific reactions: Denial, Anger, Bargaining, Depression and Acceptance) about 11-17% of the time. This low level of prediction is because Kübler-Ross's original research was based on personal observations of dying patients, which could be subject to all sorts of biases and is way down the evidence hierarchy.

This evidence wouldn't get on the Evidence-Based Management practice trustworthiness scoreboard (here is a useful summary). It's a bit like a rotten apple – how much of the apple can be rotten before you refuse to eat it? For the Kübler-Ross curve, at least 80% of the apple is likely to be rotten, i.e. not predicting what it says it predicts. To be able to say 'there are signs that the Kübler-Ross curve explains people's reactions to death', you must get at least a 60% (only 40% rotten) trustworthiness score. To say 'it is shown that the curve explains people's reactions to death', you would have to score 90% (only 10% rotten).
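As a rough sketch of the scoring logic described above – using the threshold values quoted in this article rather than any official evidence-based management scale – the mapping from a trustworthiness score to the strength of claim you are allowed to make might look like this:

```python
# Hypothetical mapping from a trustworthiness score to the claim wording it
# supports, using the thresholds quoted above (>= 90% "it is shown",
# >= 60% "there are signs", otherwise no supported claim).


def claim_strength(trustworthiness_pct: float) -> str:
    if trustworthiness_pct >= 90:
        return "it is shown that the model explains the effect"
    if trustworthiness_pct >= 60:
        return "there are signs that the model explains the effect"
    return "no evidence-based claim is supported"


# The Kübler-Ross curve is reported to predict the stage sequence roughly
# 11-17% of the time, i.e. the "apple" is at least ~80% rotten.
for score in (11, 17, 60, 90):
    print(f"{score:>3}% -> {claim_strength(score)}")
```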

Like Fredrickson and Losada, Kübler-Ross softened her claim, stating that the stages of grief "are not stops on some linear timeline in grief. Not everyone goes through all of them or in a prescribed order" (p. 7). So, what is the value in believing something when the author of the theory starts to backtrack and there is mounting evidence that the theory doesn't hold?

Why people hold onto debunked theories becomes even more perplexing when there are plenty of alternative theories that are more likely to work. Take CBT, for example: there are many meta-analyses demonstrating its efficacy in helping people deal with trauma. And if OCM practitioners really need a visual of how people respond to behavioural change, then maybe use something based on the transtheoretical model of change or the theory of planned behaviour.

And what have death and grief got to do with OCM anyway? OCM seems to be vulnerable to extraordinary extrapolations, where theories from one field (e.g. studies on bereavement or neuroscience, which are often just single studies) are used to make overarching assumptions about organisational change. If an OCM practitioner's interventions evoke emotions like those of someone who has just experienced the death of a loved one, then maybe it is the practitioner's methods that should be questioned.

If you still want to use the Kübler-Ross curve, then think of the damage it may cause people who are not experiencing the sequence of emotions this stage theory stipulates. By believing the curve you set an expectation that people do, or even should, go through stages of grieving, which can be harmful to those who do not. It is a lose-lose scenario. If people go through the curve, they are a resistor (Denial and Anger); if they don't, they are abnormal. It's like a ducking stool test for a witch – if you drown (resistor) you are innocent; if you live, you are a witch (weird). By believing the curve we do not extend our understanding of what is happening to people during change because we are too busy trying to categorise them into non-existent stages.

If you still want to believe in the curve and make generalisations about how people grieve, then think about the potential damage it may do to your career, particularly if you are working internationally. For example, in a cross-cultural study, Chinese participants seemed to recover emotionally from bereavement more quickly than US participants. And if we continue to believe in these myths, then think about the damage the positivity ratio did to the positive psychology movement.

One of my biggest fears about perpetuating myths is that OCM becomes a pseudoscience incapable of differentiating between 'fact' and fiction, so the very foundations of OCM are brought into question. Are we going to build a practice based on the best available evidence, or continue using over-simplistic models that are not backed up by evidence?

Isn't change management the practice of challenging and changing beliefs anyway? So part of our role as change practitioners is to be objective and open to alternative ideas. When change practitioners are faced with evidence that contradicts their beliefs, shouldn't we be equipped to change rather than hold onto dysfunctional ideas? This is liberating because it allows us to move on, improve our practice, think other thoughts and offer different strategies to our clients. Maybe the reason change management is so confused, and theories of change are built on sand rather than science, is that we don't have open debates and admit our mistakes.

At the end of the day change practitioners can believe what they want but as Nobel Prize winner Peter Medawar wrote ‘the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not’. Admitting you are wrong is not only the backbone of being scientific but also the backbone of being an OCM practitioner – if change practitioners can’t change then who can?

Vincent Musolino

Founder, consultant at COAPTA | Member of Club-Entreprises CEP, Comité HR-Jura Bienne, Comité HR Swiss | Workplace Confidant (Personne de Confiance en Entreprise) CSPCE | Listen to my podcast "Leadershift"!

4 years ago

Hi Alex Boulting, just discovered your series of articles - I am trying to bust as many myths as possible in my own practice but WOW - you just destroyed some of my go-tos (ADKAR, KR change curve) :-) How would you go about finding suitable replacements? Let's take the KR curve. How can we get people to think about their own change perspectives, where they're at, how to move forward? I don't have the time to read all the literature on that topic; I could read meta-analysis articles because I have a science background, but where do I start? Can you recommend journals, or generic books busting myths and proposing alternatives?

Paul Thoresen, M.A.

Industrial Organizational Psychology Practitioner | Organization Development |

4 years ago

The thing that I come back to with myth busting is . . . "if not this, then what?" I mean, let's say I break down how some myth is perpetuated (insert myth here: 70% of changes fail, or MBTI is good, etc.). But people seem to need to fill that void. If a colleague believes that 70% of changes fail is a real number based on rigorous science, and I am able to help deconstruct that, they will start to wonder what the "real" number is. If I do not help fill that in with new information, then they will likely go back to their old beliefs. There is a great PDF on this somewhere, likely on the CEBMa site; I will see if I can track it down.

Paul Gibbons

Keynote speaker: AI and Future of Work, Ethics of AI, Leading AI, Future Technologies and the Impact on Society

4 years ago

If you want to wonder how much garbage is talked by gurus - think about this! How many leadership books are there (1000s)? How many great leaders have never read one (errr. most I can think of)?? How many shite leaders have read them all (quite a bloody few)... So what value is most writing on leadership???

Paul Gibbons

Keynote speaker: AI and Future of Work, Ethics of AI, Leading AI, Future Technologies and the Impact on Society

4 years ago

I think, fwiw, that folks who contribute generally have more nuanced views than can easily be expressed in a paragraph or sentence... I tried to do this at length elsewhere, but there is pure baloney, useful baloney, and harmful baloney... a lot of the harmful baloney came from Kotter, MBTI is useful baloney, Kubler-R is harmful baloney...

Paul Gibbons

Keynote speaker: AI and Future of Work, Ethics of AI, Leading AI, Future Technologies and the Impact on Society

4 years ago

This would make for an interesting conference workshop... the soundbite style of LI comms doesn't really help build relationships, and you want to build relationships even when you are smacking down :-)
