Are Change Management Models built on solid evidence?
In Challenge #2, ‘Which is the best Change Management Model?’, I suggested that there was a lack of evidence underpinning many of the popular change management models.
In his book ‘The Science of Successful Organisational Change’, Paul Gibbons states:
‘The most shocking thing is that during more than 30 years in business, at the most senior levels, in the world’s biggest companies, dispensing consulting advice, no client ever asked me whether there was evidence to support the models, frameworks, tools, methods, and ideas I proposed using. Never.’
Even more troubling is evidence suggesting that in healthcare, there could be a negative relationship between the number of years a practitioner has been practicing and the quality of their judgement. So, we need to be a bit wary of practitioners who say, ‘in my experience’. Their experience is subject to a variety of cognitive biases (e.g. overconfidence) and their ‘tool-kits’ may be out of date. My own research on this shows that the older the theory, the more likely it is to be used by change practitioners.
The question is, are change practitioners ‘relentlessly seeking new evidence and insights to update their assumptions, knowledge and skills’ (Pfeffer & Sutton, HBR, 2006)? Do we, as change practitioners, practice what we preach? Are we able to adapt our personal paradigms in light of new evidence?
Maybe not as often as we should.
Like practitioners in any profession, we in change management should be promising one thing: that the assumptions underpinning our practice are based on robust evidence.
If we don’t have robust evidence that our practice works, we have no practice…
It’s all about the claim
According to the Center for Evidence-Based Management (CEBMa), it is all about the claim that practitioners make and the quality of the evidence that supports it.
CEBMa ranks the trustworthiness of the studies that support (or contradict) a claim according to an evidence hierarchy.
Randomized controlled trials (RCTs), in which participants are randomly assigned to an intervention group and a control group, sit at the top of the hierarchy because they can isolate the cause-and-effect behind a claim. Meta-analyses that pool the results of multiple RCTs are the ‘gold standard’ for categorizing the trustworthiness of claims. Surveys carry a higher risk of bias, so they sit in the middle of the hierarchy, and expert opinion (for the reasons mentioned above) sits at the bottom. Based on the quality of the studies supporting (or contradicting) the claims being made, we can form conclusions on the likelihood of a claim being true, from ‘very likely’ to ‘very unlikely’. For example, scienceforwork.com publishes a trustworthiness score at the end of all its articles. Evidence-based practice is becoming so important that the CIPD has put evidence-based decision-making at the core of its professional values.
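To make the distinction concrete, here is a minimal simulation – a sketch in Python, with numbers I have invented purely for illustration – of why random assignment matters: a simple before-and-after measure absorbs whatever improvement would have happened anyway, while in an RCT that background trend cancels out.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

true_effect = 2.0          # genuine impact of the intervention
background_trend = 3.0     # improvement that happens anyway (a confound)

# --- Before-and-after study (no control group) ---
before = rng.normal(50, 10, n)
after = before + background_trend + true_effect + rng.normal(0, 2, n)
naive_estimate = (after - before).mean()   # absorbs the trend as well

# --- Randomized controlled trial ---
treated = rng.random(n) < 0.5              # random assignment to groups
outcome = before + background_trend + true_effect * treated + rng.normal(0, 2, n)
rct_estimate = outcome[treated].mean() - outcome[~treated].mean()

print(f"true effect:           {true_effect:.2f}")
print(f"before/after estimate: {naive_estimate:.2f}  (biased by the trend)")
print(f"RCT estimate:          {rct_estimate:.2f}  (trend cancels out)")
```

Run it and the before/after estimate lands near 5.0 while the RCT estimate lands near the true 2.0 – the before/after design simply cannot tell the intervention apart from the trend.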
But unfortunately, it is not as simple as putting evidence into a hierarchy – different types of evidence answer different types of questions. In their book on evidence-based management, Eric Barends and Denise M. Rousseau state that ‘we can only judge the trustworthiness of a study’s findings given its research design and the questions asked’ (p. 156). I have designed a process that change practitioners can go through, adapted from Barends and Rousseau’s methodology (see Table 7.4, p. 158), for working out which research design is best for answering which question.
Put more simply, these are the types of questions practitioners should be asking before choosing an intervention.
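As a rough illustration of that matching exercise – my own simplified paraphrase, not a reproduction of Barends and Rousseau’s actual table – a sketch might look like this:

```python
# An illustrative mapping of question types to suitable research designs.
# The categories and design choices are my own simplification, invented
# for illustration; they are not Barends & Rousseau's Table 7.4.
DESIGNS_BY_QUESTION = {
    "Does it work? (effect)":             ["meta-analysis of RCTs", "randomized controlled trial"],
    "How common is it? (prevalence)":     ["cross-sectional survey"],
    "Will it work here? (context)":       ["controlled before/after study", "pilot study"],
    "How is it experienced? (attitudes)": ["qualitative interviews", "focus groups"],
}

def suggest_designs(question: str) -> list[str]:
    """Return designs suited to a question type; default to clarifying the claim."""
    return DESIGNS_BY_QUESTION.get(question, ["clarify the claim first"])

print(suggest_designs("Does it work? (effect)"))
```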
So, is Change Management built on a solid evidence base?
I thought the easiest way to look at this would be to examine the most popular models individually.
1. Kotter’s 8 step model (or 8 Accelerators) – arguably the most popular change management model in the industry. It may come as a shock, but Kotter’s model is the only model (that I am aware of) that has since been empirically tested. Yet the model itself is not based on empirical research. In his book ‘Leading Change’ (2012) there is no bibliography with references to outside sources. In their review of Kotter’s model, Appelbaum et al suggest that its popularity derives more from its ‘direct and usable format’ than from any scientific consensus on the results. The reason for this is that Kotter’s model is built on an evidence base of stories of personal experiences, single-company examples and case studies, all of which sit at the bottom of the evidence hierarchy and are subject to cognitive bias. Kotter’s model is also prescriptive, suggesting behaviours that will or won’t work. So, if creating a sense of urgency ends up making things worse rather than better, where do you go next?
2. PROSCI’s ADKAR model – again, this model is built on stories and case studies, ranging from pineapple growers in Ghana to emergency services in Australia (Hiatt 2006). Hiatt references PROSCI’s research (we will explore this research in a separate article) and some articles, but there is only one reference to a tried and tested psychological construct – Vroom’s Expectancy Theory. What is surprising is that, given ADKAR is rooted in ‘how to facilitate change with one person’ (Hiatt 2006, p. 1), it ignores over 40 years of research on human motivation. Extrapolating from a few narrow case studies into a grand theory applied to other cultures and sectors must also be questioned. Nor can we be certain of the sequencing – it could quite easily be DAKAR rather than ADKAR.
3. David Rock’s SCARF model – in a 2008 article published in the NeuroLeadership Journal (a journal he edits), David Rock outlines his SCARF model – Status, Certainty, Autonomy, Relatedness and Fairness. He seems to ignore research findings in organisational psychology and relies instead on findings from neuroscience. Those neuroscience findings are suspect: Rock’s claims around Certainty, for example, confuse pain with change, and his claims around Status confuse the ‘pecking order’ in baboon troops with socio-economic status in human society. Psychological constructs around autonomy (locus of control and self-efficacy), relatedness and fairness (organisational justice) have been around in academia for decades, and there is plenty of research showing how these constructs affect organisational performance, but David Rock seems to ignore it. His SCARF model misses important concepts such as trust and confuses others – what does Rock mean by Fairness? Does he mean social exchange, fairness of outcomes, or the fairness of the process being followed? These are concepts academics have been debating for years, so why not use their research? Again, a model that seems to skip over the evidence in favour of a memorable acronym.
4. Lewin’s Unfreeze – Change – Refreeze model – I wrote a separate article about whether this can really be called a model at all: there is no book, no peer-reviewed article, and no empirical evidence (Lewin was a big proponent of testing theoretical propositions) to support it. There are questions about whether Organisational Change Management models built from Lewin’s paradigm are built on sand rather than science. As Lewin would have wanted, we should be building models from multiple sources of evidence and rigorous analysis rather than ‘n-stage frameworks’. In addition, since Lewin’s work was built on action research with small groups, it may not extrapolate to an organisational level of analysis.
From an evidence-based management perspective, the question is not whether any of the models mentioned above are right or wrong but whether their claims are supported by the “conscientious, explicit and judicious use of the best available evidence from multiple sources” (Barends, Briner & Rousseau 2015).
Kotter, Hiatt and Rock all miss 60+ years of research in psychology, and their own research is of weak validity. So it is difficult to argue that they are using the best available evidence from multiple sources.
Do academics have the answers?
In their assessment of the research evidence in OCM, Barends et al think they don’t. They suggest that OCM research consists mainly of ‘one-shot studies’, with nearly 90% having weak internal validity (the extent to which a piece of evidence supports a claim about cause and effect). But Stouten et al disagree: ‘we found several relevant systematic reviews in the context of planned change’. I would go one step further. If change is fundamentally about increasing the performance of an organisation, there is a lot of high-quality research practitioners can use (e.g. on self-efficacy, autonomy, organisational commitment, psychological safety, goal setting, job design, coaching, feedback, etc.) to guide organisations to successful change.
If change is continuous and work becomes change, then doesn't Organisational Change Management = Organisational Development?
So maybe we should stop making up new theories of change and instead integrate what Organisational Development tells us about how to create high-performing, dynamic organisations?
Academics certainly don’t have all the answers, but they do have a body of research that is independently reviewed and open to falsification. Practitioners are not there yet. Apart from PROSCI’s Best Practices in Change Management Study (which I will discuss in future blogs), there is little publicly available OCM research conducted by practitioners. What is stopping us from building a database of all our experiences of implementing OCM and subjecting it to critical review? But academic research and practitioner experience are only two sources of evidence.
Creating our own evidence
Organisations need to conduct their own experiments among internal and external stakeholders, creating their own sources of quality evidence. Generic annual high-level engagement surveys just won’t do. Bespoke surveys that measure the specific outcome of an intervention are the only way to get close to knowing whether change is happening. Other data, such as how systems are actually being used (employee performance management and other talent management systems, for example), should be another rich source. Building a scorecard summarising all these metrics would be a huge leap forward in showing stakeholders that an OCM programme is moving the organisation in the right direction. It is not big data; it is simply data that answers the question ‘Did that intervention work?’
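As a sketch of what such a scorecard might look like in practice – where every metric name and score below is an invented illustration, not a recommended instrument – matched before/after survey scores can be summarised per intervention outcome:

```python
# A minimal intervention scorecard: matched before/after scores per
# respondent for each bespoke survey metric (all values invented).
from statistics import mean, stdev

def effect_size(before: list[float], after: list[float]) -> float:
    """Paired-samples effect size: mean change divided by the SD of change."""
    diffs = [a - b for b, a in zip(before, after)]
    return mean(diffs) / stdev(diffs)

# Hypothetical bespoke metrics on a 1-7 scale, before vs. after the intervention
metrics = {
    "understands reason for change": ([4.1, 3.8, 4.5, 3.9, 4.2],
                                      [5.0, 4.2, 5.4, 4.5, 5.3]),
    "confidence using new system":   ([3.2, 2.9, 3.5, 3.1, 3.0],
                                      [3.4, 2.8, 3.6, 3.0, 3.2]),
}

print(f"{'metric':<32}{'change':>8}{'effect':>8}")
for name, (before, after) in metrics.items():
    print(f"{name:<32}{mean(after) - mean(before):>8.2f}"
          f"{effect_size(before, after):>8.2f}")
```

A real scorecard would of course add sample sizes, confidence intervals and a comparison group where possible, but even this level of summary answers ‘Did that intervention work?’ far more directly than an annual engagement score.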
HR functions need to reinvent themselves as hubs of evidence-based experimentation and change.
Rather than being procurement departments for learning and development, HR should be working directly with people (particularly managers), understanding what really works on the ground and helping people to help themselves. This is the only way organisations can increase the speed at which they align internal capability with external demands, creating dynamic capability.
This acceptance of a lack of scientific rigour in change models opens the door to all sorts of folklore and myths, which undermines our practice. If we want to convince organisations that ‘doing’ OCM is better than not doing it, we need to assure them that if you do this (cause), this is likely to happen (effect). I appreciate this is difficult in OCM – social systems are complex – but if we don’t try, are we not just selling snake oil?