Challenge # 5 – How to spot dodgy OCM concepts – Part 1

In Challenge #4, Throwing the baby out with the bathwater, I challenged OCM practitioners and academics who argue that we should hold onto psychological concepts (you could equally insert construct, theory or model – click here or here if you want to read more about the differences) or claims even when they have been shown to have little or no validity and can’t be replicated. So, what do we mean by validity and replication? And how can we tell whether concepts or claims are likely to be valid or just a passing fad?

In Part 1 of this series I try to explain the concept of ‘face validity’: how, ‘on the face of it’, concepts can seem intuitive and appear to be backed by ‘rigorous research’ or impressive statistical techniques, yet still deceive us – with potentially disastrous consequences.

Toasters, enemas and eugenics

Could toasters and other domestic electrical appliances be used to solve teenage pregnancy? Taken at face value, we know this doesn’t make sense. So our starting point when spotting dodgy concepts is ‘face validity’ – does it intuitively feel like toaster ownership is a good predictor of teenage pregnancy rates? Probably not. But if we don’t question the face value of concepts, what feels intuitive now may look ridiculous in a few years.
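To see how easily a ‘toaster effect’ can show up in data, here is a minimal sketch (all numbers are invented for illustration) in which two series that merely trend over time – toaster sales rising, teenage pregnancies falling – correlate strongly despite having nothing to do with each other:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two invented series that both trend over time but share no causal link
years = np.arange(2000, 2020)
toaster_sales = 50 + 2.0 * (years - 2000) + rng.normal(0, 3, len(years))
teen_pregnancy = 40 - 1.5 * (years - 2000) + rng.normal(0, 3, len(years))

# Any two trending series correlate strongly...
r = np.corrcoef(toaster_sales, teen_pregnancy)[0, 1]
print(f"raw correlation: {r:.2f}")  # a strong negative 'link'

# ...but differencing removes the shared time trend, and the 'link' vanishes
r_diff = np.corrcoef(np.diff(toaster_sales), np.diff(teen_pregnancy))[0, 1]
print(f"correlation after detrending: {r_diff:.2f}")  # near zero
```

Any two trending variables will correlate; strip out the shared trend and the ‘relationship’ evaporates.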

Take, for example, the question ‘Does giving women enemas reduce infection rates at childbirth?’ At face value this seems like an odd question – what have enemas got to do with childbirth? But it was a question that wasn’t asked, and that had some unpleasant consequences. Fifty years ago, doctors followed ‘best practice’ and gave women enemas during labour in the belief that it reduced infection rates – a painful and humiliating process for thousands of women. In those days little evidence was available, and most doctors were male (only around 5-10% of doctors were female in the 1960s). Doctors’ decisions were based on their own judgement and experience (lack of validity), and maybe a lack of empathy for women. Their beliefs were formed in medical schools whose methods were also probably dated, so a 60-year-old doctor could still have been using methods from the 1800s in the 1960s – not the latest evidence. Practices were often woefully out of date and often harmed patients. It wasn’t until 2000 that two trials involving 665 women concluded that there was no clear difference in infection rates between women who had enemas and those who did not. These trials were valid and reliable because they were randomised controlled trials (an experimental technique that aims to control for effects other than the one being tested). So the enema myth was busted. Yet even today doctors rely heavily on the ‘art’ of medicine, with less than 20% following an evidence-based approach. Doctors often do things for no reason other than ‘that’s the way they have always done it’.
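This is, in essence, the comparison a randomised controlled trial lets you make. A minimal sketch of the analysis, using invented numbers (not the actual trial data), might look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented numbers in the spirit of the enema trials (not the real data):
# rows = enema / no enema; columns = infection / no infection
table = np.array([
    [12, 320],   # enema group:   12 infections out of 332 women
    [13, 320],   # control group: 13 infections out of 333 women
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"infection rate (enema):   {table[0, 0] / table[0].sum():.1%}")
print(f"infection rate (control): {table[1, 0] / table[1].sum():.1%}")
print(f"p-value: {p_value:.2f}")  # a large p-value => no clear difference
```

Because allocation to the two groups is random, any other influences wash out, and a near-identical infection rate in both groups is evidence the enemas did nothing.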

Another example dates to 1869, when the British scientist Francis Galton (who invented psychometrics) assembled data on leading English families and concluded that superior intelligence and abilities were inherited with an efficiency of 20%. So why not improve the quality of the human population by selective breeding and eliminate ‘feeblemindedness’ and ‘criminality’? With the noble-sounding aim of reducing poverty and improving the population’s mental and physical wellbeing, it seemed to make sense, it had face validity, and the eugenics movement was born. But are ‘feeblemindedness’ and ‘criminality’ really inherited qualities – or even definable and measurable traits at all? And aren’t environmental factors (e.g. poor housing and nutrition) more likely to explain poor wellbeing than genetic make-up? Eugenics boomed, yet Galton was caught out by his subjective definitions (lack of validity) and by confusing correlation with causality – the toaster phenomenon.

Imagine if the trials on childbirth infection rates had never been conducted, or if we had spent the past 100 years of resources on selective breeding rather than on solving socioeconomic problems. Believing in things without questioning their validity and reliability can have disastrous consequences. And despite all the evidence, eugenic philosophies still linger, and people still get rectal burns from coffee enemas – or even die.

Delusions and halo effects

The world of business and change management has its own examples. In Search of Excellence, published in 1982 by Tom Peters and Bob Waterman, was a huge success. On the face of it, their analysis offered a blueprint for organisational success and seemed intuitive and appealing. But most of the ‘excellent’ companies featured in the book didn’t do very well five and ten years after publication: of the 35 ‘excellent’ companies, only 12-13 grew faster than the S&P 500 average (The Halo Effect, p89). The same happened after the 1994 publication of Built to Last by Jim Collins and Jerry I. Porras: of the 18 companies cited, only 8 outperformed the S&P 500 average over five years, and within ten years only 6 were still keeping pace with it (The Halo Effect, p98). Phil Rosenzweig, the author of The Halo Effect, suggests that the belief that companies ‘can follow a blueprint to lasting success may be appealing, but it is not supported by the evidence’ (p105). The analysis that underpinned these books was flawed because it was based on (amongst other things):

-         Halo effects – when we use our general impression of a company to infer specific attributes, e.g. a company seems to perform well, so it must have a superior strategy, leadership or change management capability. This is why the cross-sectional analysis used in these books cannot show causality.

-         The delusion of connecting the winning dots – when we start to see ‘winning’ patterns within high-performing companies. But if we don’t compare and ‘control’ the patterns in ‘winning’ companies against those in ‘failing’ companies, how would we ever know they are unique to success? This is why controlled trials are important: they help isolate patterns in data.

-         The delusion of rigorous research – when writers state they have analysed hundreds of companies over several years, conducting ‘exhaustive’, ‘in-depth’, ‘robust’ research. They mistake quantity for quality. No matter how many interviews are conducted with CEOs of high-performing companies, we cannot claim to have found a ‘winning formula’ – halo effect in, halo effect out. You can read more about quality of evidence in this article.

-         The delusion of correlation and causality – the toaster-pregnancy link is a good example of how correlation and causality can get mixed up. Sometimes the relationship is more subtle, like employee engagement and performance: does performance drive engagement or the other way around? And what sort of study do we need to run to be clear on the direction of the relationship? Cross-sectional data won’t show causality; we need longitudinal data gathered over time and/or controlled trials (this is what the enema infection study used, and here is one on why working from home should be standard practice) that can strip out the effects of different variables. The sketch after this list shows why a single snapshot can’t settle the direction.
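Here is a minimal sketch of why longitudinal data helps where a snapshot cannot. It simulates a world – purely an assumption for illustration – where performance drives next-period engagement and never the reverse, then shows that only the lagged correlations reveal that direction:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Assumption for illustration: performance drives NEXT period's engagement,
# never the other way round.
perf = np.zeros(T)
eng = np.zeros(T)
for t in range(1, T):
    perf[t] = 0.8 * perf[t - 1] + rng.normal()   # performance persists
    eng[t] = 0.7 * perf[t - 1] + rng.normal()    # engagement follows performance

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# A cross-sectional snapshot shows a correlation but not its direction:
print("same-time correlation:     ", round(corr(perf, eng), 2))
# Lagged (longitudinal) correlations do separate the two directions:
print("perf(t) -> engagement(t+1):", round(corr(perf[:-1], eng[1:]), 2))  # stronger
print("engagement(t) -> perf(t+1):", round(corr(eng[:-1], perf[1:]), 2))  # weaker
```

The same-time correlation is equally consistent with either story; only by following the same units over time does the true direction show up.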

So the very things that Peters, Waterman, Collins and Porras claimed were drivers of enduring performance – strong culture, commitment to excellence and so on – were just attributes inferred from performance; the underlying research methods in both books were flawed. Their ‘El Dorado companies’ that always outperformed the market never existed, because there is no blueprint for success.
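A short simulation makes the point about picking past winners. Assume, purely for illustration, that every firm earns the market average plus pure luck – no blueprint at all – then ‘select’ the 35 best past performers the way these books did:

```python
import numpy as np

rng = np.random.default_rng(7)
n_firms = 500

# Assumption for illustration: every firm earns the market average (~7%)
# plus pure luck -- no firm has any durable 'excellence' at all.
past = rng.normal(0.07, 0.05, (n_firms, 5)).mean(axis=1)    # years 1-5
future = rng.normal(0.07, 0.05, (n_firms, 5)).mean(axis=1)  # years 6-10

# 'Write the book': crown the 35 best past performers as Excellent
excellent = np.argsort(past)[-35:]

print(f"Excellent firms, past return:   {past[excellent].mean():.1%}")
print(f"Excellent firms, future return: {future[excellent].mean():.1%}")  # ~7%
print(f"still beating the market:       {(future[excellent] > 0.07).mean():.0%}")
```

Past performance looks stellar by construction; future performance reverts to the market average, and only around half the ‘excellent’ firms keep beating it – strikingly like the 12-13 out of 35.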

Delusion of rigorous research

Change management also suffers from its own delusion of rigorous research. For example, Prosci (proponents of the ADKAR model) use their ‘Best Practices in Change Management’ study, which ‘reports over the last twenty years, compiling data from more than 6,000 change leaders in 85 countries’, to support their claims. They claim their data shows a correlation between the use of OCM and outcomes, stating that ‘effective OCM drives results and outcomes’, increasing the likelihood of meeting objectives by as much as six times! The word ‘drives’ implies cause and effect – a substantial and appealing claim that needs substantial evidence. But the data and analysis don’t control for other factors. Companies that adopt effective change management processes may already have trust in leadership, good supervisory support and a clear vision, which may explain more of the variation in project success than Prosci’s factors (e.g. a structured change management process, effective communications plans, managing resistance). The data does not support the claim. In addition to suffering from the delusion of rigorous research and the delusion of connecting the winning dots, Prosci’s research suffers from the delusion of correlation and causality, and probably the halo effect – no matter how big the data set, you can’t prove causality with the cross-sectional questionnaire Prosci use. Even Prosci seem confused, using the words ‘correlation’ (a statistical technique showing a relationship between two variables) and ‘relationship’ (which could mean a correlation or a causal relationship) for similar bar charts.
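The confounding problem is easy to demonstrate. In this sketch (variable names and effect sizes are invented), a single unmeasured factor – trust in leadership – drives both OCM adoption and project success, while OCM itself is given zero causal effect. The raw correlation still looks like ‘OCM drives results’:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Unmeasured confounder (illustrative): trust in leadership
trust = rng.normal(0, 1, n)

# High-trust firms are more likely to adopt structured OCM...
ocm = trust + rng.normal(0, 1, n)
# ...and trust also drives project success directly.
# OCM itself is given ZERO causal effect on success:
success = 1.0 * trust + 0.0 * ocm + rng.normal(0, 1, n)

print(f"OCM vs success, raw:        {np.corrcoef(ocm, success)[0, 1]:.2f}")  # ~0.5

# Hold trust constant: regress each variable on trust, correlate the residuals
def residual(y, x):
    slope = np.polyfit(x, y, 1)[0]
    return y - slope * x

r_partial = np.corrcoef(residual(ocm, trust), residual(success, trust))[0, 1]
print(f"OCM vs success, trust held: {r_partial:.2f}")  # ~0: the 'effect' vanishes
```

A cross-sectional questionnaire can only ever deliver the first number; it cannot tell you whether the second number is zero.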

Beware of extraordinary extrapolations & lack of definition

As discussed, eugenics and enemas are based on out-of-date, disproven theories, but they still linger on. An example in OCM is Edgar Schein, the ‘father of organisational development’, who thought the concept of coercive persuasion was integral to change. He stated in 1962 that ‘support for attitudes have to be undermined and destroyed if change is to take place’ – a strong belief, and an extraordinary extrapolation from Chinese Communist Party prison camps to modern organisations. Schein (1960) likens this process to ‘unfreezing’ (borrowed from Lewin’s Unfreeze-Change-Refreeze ‘model’), which he describes as a process ‘in which the prisoner’s physical resistance, social and emotional support, self-image and sense of integrity, and basic values and personality were undermined, thereby creating a state of “readiness” to be influenced’. This is scary stuff! But wait a minute: Daryl Conner (1992) states that ‘Orchestrating pain messages throughout an institution is the first step in developing organisational commitment to change’ (Managing at the Speed of Change, p98). The idea that commitment comes from pain seems more akin to a rectification movement systematically remoulding minds than to OCM. These claims echo Kotter’s claim that practitioners need to create a sense of urgency. But what exactly does a ‘sense of urgency’, or creating pain, mean? Kotter doesn’t seem sure. He states organisations must ‘create a crisis’ (Kotter 2012), but also takes a softer approach, describing urgency as ‘business-as-usual not being acceptable’ (Kotter 1995) – two completely different things. If we don’t know what Kotter or Conner mean by a sense of urgency or pain, we don’t know which end of the urgency/pain spectrum to aim for. High-quality evidence tells us that a perceived threat is only effective if people feel they can do something about it (Peters, Ruiter & Kok 2013). So Kotter and Conner’s strategy is high risk. Don’t we need to equip people with a sense of autonomy and efficacy first – and then what would be the point of imposing pain? If you start with urgency and pain, you will end with resistance, which is probably why Kotter also believes that humans don’t like change (Kotter 1995) – or certainly his kind of change. ADKAR follows suit: Prosci’s 2010 curriculum states ‘The natural reaction to change is resistance’ (Prosci Change Management Toolkit 2010). So you can see how Schein’s beliefs, based on brainwashing studies in 1950s prison camps, permeate OCM today.

But if neither Kotter nor Conner is clear on what they mean by urgency or pain, how can these concepts be ‘operationalised’ and measured? Whether a concept can be operationalised and measured is one of the first steps in identifying whether a concept or claim is likely to have any validity. Take, for example, the concept of change readiness. Is it psychological, e.g. the individual’s emotional state (say, the WIIFM)? A shared team property? Or something more structural, such as the organisation’s processes? Are we measuring readiness for a specific change or for any type of change? If it is specific, don’t people need to know what the change is before they can make a judgement about it? And if the change is emergent, how do you ever assess readiness? So although many practitioners may start with a change readiness assessment, are we clear on what it really means?
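By contrast, here is what operationalising a concept looks like in practice: pin the definition down to specific survey items, then check that the items actually hang together as one construct (Cronbach’s alpha). The items and responses below are invented purely to illustrate the mechanics – this is not a validated readiness scale:

```python
import numpy as np

# Hypothetical items for a 'readiness for THIS change' scale (invented):
items = [
    "I understand what this change involves",
    "I have the skills to work in the new way",
    "My team has the resources to make this change",
    "I think this change will improve how I work",
]

# Simulated 1-5 Likert responses: 200 respondents x 4 items, built from a
# shared underlying 'readiness' signal plus item-level noise
rng = np.random.default_rng(3)
signal = rng.normal(3, 0.8, (200, 1))
responses = np.clip(np.round(signal + rng.normal(0, 0.5, (200, 4))), 1, 5)

# Cronbach's alpha: do the items measure one coherent construct?
k = responses.shape[1]
alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum()
                         / responses.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")  # > 0.7 suggests a coherent scale
```

If you can’t write the items in the first place – because nobody can say whether readiness is emotional, team-level or structural – the concept isn’t yet operationalised, and no amount of measurement will rescue it.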

First 4 steps in spotting a dodgy concept

OCM is underpinned by concepts or claims around brains hating change, Unfreeze-Change-Refreeze, creating a sense of urgency, DABDA, WIIFM, resistance, #GrowthMindset, #LearningStyles, #MBTI, Maslow’s hierarchy of needs and so on. But hopefully in Part 1 of this series I have identified the first 4 checks that help us understand why these concepts are potentially dodgy:

1.      On the face of it, does the concept or claim make sense? For example, does inducing pain, urgency or coercion seem like a good place to start in OCM?

2.      Does the concept or claim rely on extraordinary extrapolations? Does the research that underpins it come from a completely unrelated field, such as prison camps – or dying patients, as with the change curve?

3.      Does the concept or claim suffer from data delusions? Does the analysis fall for the delusions of rigorous research, connecting the winning dots or correlation v causality, or for halo effects?

4.      Is the concept clear enough to be operationalised? Can it be pinned down to a reasonably specific definition and measurement?

In Part 2 of this series I will dive deeper into the validity and reliability of concepts, which will further help us spot dodgy ones.

John O' Boyle

Director Leadership Development

4y

As a new reader of this post, this seems spot on. One reads and often blindly accepts these theories as gospel, without interrogating the underlying data. I particularly like your In Search of Excellence example. Keep up the good work

Brendan Martin

Performance Booster | Transformational coach - I help teams, both business and sport.

4y

Another fantastic article! You give me some of the best inspiration for discussions in my change management courses! Keep up the great work!

Ridzal Thajeb

Change Management

4y

I hate good readings, because now I have to Think. That's hard work. Seems there's a niche for AI to do all the validation, while a human change manager serves the purpose of putting a face to the project. People trust people. Thanks for the great read, Alex, can't wait for Part 2.

Eduardo Muniz

GM/Strategic Change Consulting Practice Lead at The Advantage Group, Inc.

4y

Alex Boulting. A simple request for factual evidence can help validate that concept. So far track record doesn't help most OCM methodologies since the vast majority of Change initiatives are wrongly deployed and unsustainable. Thank you for sharing
