Medical error on trial: the case is closed, but is the problem solved?

On 13th May 2022 former Tennessee nurse RaDonda Vaught was sentenced to three years’ probation for criminally negligent homicide, after her medical error led to the tragic death of patient Charlene Murphey in 2017. The full details of the incident, as well as the investigations and hearings regarding it, are outlined along with some excellent analysis by Martin Anderson here:

https://humanfactors101.com/2022/05/08/is-human-error-a-crime/

This case raises compelling issues which go well beyond the tough emotional aspects – not to dismiss those in any way; they are very real and must be acknowledged. Some will sympathise with the family of Charlene Murphey, and call for accountability and justice. Others will see in RaDonda Vaught’s tragic error their own experience of workload saturation, competing pressures and systemic challenges in a healthcare system struggling to meet demands every day (witness the hundreds of nurses wearing “#IAmRaDonda” t-shirts outside the courtroom during the trial). From either perspective, though, the most important issue going forwards is surely the prevention of further accidental harm and loss of life. In this case, part of the response has been the prosecution and conviction of RaDonda Vaught. The question this raises, and which affects healthcare worldwide, is whether a punitive approach to medical error – up to and including criminal prosecution – will improve patient safety.

Patient safety has been a key priority for the past twenty years, and in some quarters the return on that investment is now being questioned. If all the awareness initiatives, error reports and root cause analyses aren’t producing the improvements needed, what do we need to do next? Perhaps the problem does indeed lie with individuals who effectively sabotage the system through non-compliance and poor decisions. If so, the threat of punishment for those who fail to “do the right thing” might help.

This perspective aligns with a particular way of trying to manage outcomes, along the following lines:

"As accountable managers we put a lot of effort into assuring quality. We thoroughly map the best processes to achieve both productivity and safety, and we publish these as policies and expect staff to comply. When something goes wrong, we review the policies, shore up the gaps with alterations or additions, and hope/expect that’s the end of the matter. If in the process we find that the mishap arose from non-compliance, then a range of disciplinary actions are available and appropriate."

This approach to assuring outcomes is about perfecting a blueprint for organisational success, and it assumes that if this is done properly then staff compliance with policy is all that is necessary. More than a few managers, when faced with yet another adverse event involving ‘failure to follow procedure’, have asked in exasperation, “why can’t people just follow the rules?”

On its own this strategy may work in some settings – for example, in a production-line factory with little automation, we can map exact behaviours that will and won’t work (“place the item here, wait until your hand is well clear, and then and only then, pull that lever”) and with a zero-tolerance policy for non-compliance, get the outcomes we want. The problem we face now is that most workplaces are far more complex, and in these settings this simplistic model alone produces plateauing results with a persistent “tail” of adverse events.

The prescriptive approach still has its place, but it shouldn’t be the only tool in our management toolbox. Just as individual situations require differing styles of emotionally intelligent leadership, management must be adaptively tailored to circumstance. In broad terms workplace activities range from simple to complex; from ones where we absolutely do not want people to vary task performance, to those where we absolutely do want the person “at the coal face” to assess each unique situation, draw on their experience and judgement, and make their own unique decision.

For example, in aviation, should an airliner suffer an engine failure during take-off, the key decision points will have been mathematically determined and pre-briefed (below a pre-computed decision speed the crew will reject the take-off and stop on the runway; at or above it, they will take the engine failure airborne and deal with it in flight); and whatever happens, the critical actions won’t be made up on the spot or left to individual preference – they are crystal-clear, uniform amongst all crews and well-rehearsed. Meanwhile broader decisions in flight (dealing with passenger illness, adverse weather, aircraft unserviceability and so on) rest with the pilot-in-command: they’re informed by policy and supported by the organisation, but are ultimately made from the experience and judgement of the person at the centre of the activity, and no-one else can dictate them.
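To make the contrast concrete, here is a minimal sketch of the first kind of rule – the pre-briefed, binary decision – written in Python purely for illustration. The function name, the example decision speed and the wording of the actions are my own assumptions, not anything drawn from an airline manual.

```python
def engine_failure_action(current_speed_kt: float, decision_speed_kt: float) -> str:
    """Pre-briefed action for an engine failure during the take-off roll.

    Illustrative sketch only: below the briefed decision speed the take-off
    is rejected; at or above it, the crew continues and handles the failure
    airborne. The names and figures here are assumptions for illustration.
    """
    if current_speed_kt < decision_speed_kt:
        return "REJECT: stop on the runway"
    return "CONTINUE: take the failure airborne and deal with it in flight"


# Example with an illustrative decision speed of 150 kt briefed before departure:
print(engine_failure_action(120, 150))  # REJECT: stop on the runway
print(engine_failure_action(155, 150))  # CONTINUE: take the failure airborne ...
```

The value of such a rule lies in its shape: one pre-computed threshold and two unambiguous actions, rehearsed in advance, with no room for on-the-spot invention – the opposite of the judgement-driven in-flight decisions described above.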

Similarly in healthcare, there are component tasks which have been shown to benefit from rigid protocols, such as minimising the risk of infection when inserting central lines[1]; and overall, each act of care delivery is part of a much larger system which, without policy and structure, would be chaotic. But in amongst the blueprinted “knowns” there is a huge amount of activity which relies completely on human perception, judgement, adaptiveness and problem-solving at the bedside, moment by moment. In this pulsing, living system of human activity no amount of policy can substitute for the commitment and capability of the professionals involved, and a legalistic insistence on literal adherence to policy can be ineffective and even counter-productive. Patient safety experts Erik Hollnagel, Jeffrey Braithwaite and Robert Wears make the point well: “mandating over 2,000 health department policies (the number that are technically in operation in some publicly funded health systems) and asserting they must be used continuously to guide people’s everyday work would lead to systems shut-down”; and “standardising approaches by insisting that a clinical guideline on a common medical complaint such as headache or asthma—all fifty or more pages of them—must be slavishly read and everything in them adopted on every occasion when a patient with that condition presents in the Emergency Department, is not just impossible, but leaves almost no time for the actual care to be provided.”[2]

There is a place, then, for standardisation and strict policy, but as part of a human system in which on-the-spot professional judgement is also crucial and must be respected. If we get the balance wrong and let prescriptive management overreach into every aspect of the workplace, we end up with bloated bookshelves and a mindset that paralyses the human problem-solving capability upon which our systems rely. As it is sometimes said in aviation: “better to have one book telling people what to do than ten books telling them everything they can’t do”.

This may make some sense, but then how do we manage outcomes and assure quality if the policy blueprint alone is not the solution? It’s still very much possible, but it requires a shift in thinking.

Firstly, we need to stop seeing people as sources of potential error and threats to system integrity. In early industrial settings this perspective may have held some weight, as the workplace was like a giant machine and the human tendency for variance was a hazard: the more that humans could be “automated out”, the safer the workplace became. In more complex workplaces people – with their variable performance – are actually our greatest asset: “systems perform reliably because people are flexible and adaptive, rather than because the systems have been perfectly thought out and designed or because people do precisely what has been prescribed”[3]. Policy can bring order and consistency but should not try to stop people from thinking for themselves. Automation can support human activity by relieving workload or performing basic monitoring, but its value is not in phasing out error-prone humans; it is in freeing them up to perform their vital, irreplaceable function of seeing things the system designer couldn’t anticipate, analysing them, adapting the system on a micro basis and solving problems. In complex settings such as healthcare people do this thousands of times every day, and the fact that they might get something wrong once in a decades-long career does not mean they are incompetent, nor does it discount their countless other invisible successes: we rely on their adaptability to make the system work.

Secondly, we need to rethink the way we respond when things do go wrong. The first and most vital question is not “what went wrong and who is responsible?”; it’s “how does this activity normally go right, and what vulnerabilities does this intersection of circumstances reveal?” Very often when adverse events occur, normal people were doing normal things that had worked many times in the past (as in my own experience of a mid-air collision and ejection during Air Force pilot training, mentioned here: https://www.dhirubhai.net/pulse/managing-human-error-lessons-from-aviation). Although in hindsight it seems obvious that the wrong decision was made or the wrong action was taken, at the time the course followed presumably seemed correct (or at worst, an acceptable compromise between competing pressures). Human Factors expert Professor Sidney Dekker describes this as the “Local Rationality Principle”: people do what makes sense to them at the time, based on the information they have available, the pressures they are under and the norms within their workplace[4]. Rather than treating their decisions in hindsight as aberrant, we need to understand how the activity that went wrong normally goes right: chances are it won’t simply be ‘because normally everyone follows policy and this time someone didn’t’. Perhaps the first step in an incident investigation should not be to interrogate the person involved, who most likely was doing exactly what they and many others had done successfully on many past occasions. Perhaps it would be better to interview as many people as possible who normally perform the activity – with no reference to the mishap – and simply ask “how, in your experience, does this activity work?” The comprehensive picture that develops will show us not only how indebted we are to the dedication and efforts of our people every day, but also the crucial, accurate context for the mishap itself. Then, when we do turn to the incident details, we understand the healthy body and not just the pathology; we can now, with genuine insight, evolve our system to become more robust in future.

Above all, if our goal is to manage outcomes and assure quality then this evolution is what matters most: for us to improve, individually and collectively, we must learn from our mistakes. As individuals this is easy because the connection between an action and its outcome is obvious: as a child we burn our hand on a hot stove, and we now know not to touch the stove again. For organisations that connection between actions and outcomes is far more complex and can be obscured by many things, such as:

(a) Insistence that the blueprint of policy is the whole picture and accurately describes in toto the work being done (as opposed to listening aggressively, welcoming “bad news”, and proactively seeking to understand the challenges our people face);

(b) Suppression of incident reporting through fear of negative consequences;

(c) Skewing of incident investigations by a drive to dodge/assign blame (identifying and blaming the individuals involved – “them” – rather than seeking to understand the reality of the system that produced the result – “us” – which is more costly and riskier, but essential for organisational learning). An excellent White Paper from the UK Chartered Institute of Ergonomics and Human Factors, Learning from Adverse Events (https://ergonomics.org.uk/resource/learning-from-adverse-events.html), makes the point that “focusing on individual failure and blame creates a culture of concealment and reduces the likelihood that the underlying causes of events will be identified”[5].

Factors such as these within an organisation can completely disrupt the connection between outcomes and their true causes, making adaptation and improvement all but impossible. If an organisation finds that the returns from safety improvement initiatives have stalled, and that in spite of all efforts the same problems keep occurring, then it is highly likely that the “cause and effect” feedback loop is broken. Solving that problem will have its complexities, but the defining quality for success will be the decision to stop vilifying people who make mistakes, and start learning from mishaps and the evidence and data they offer. As British journalist Matthew Syed put it in his outstanding book Black Box Thinking, “a failure to learn from mistakes has been one of the single greatest obstacles to human progress”; and “a progressive attitude to failure turns out to be a cornerstone of success for any organisation”[6] .

What place is there then in an evolving organisation for a punitive approach to error? Well, what might be the presumed benefits of such a construct?

Firstly, is punishment necessary to hold people accountable? We have an ingrained cultural expectation that when a person causes harm (intentionally or otherwise), the delivery of justice will involve some form of retribution. However, there are well-mapped frameworks and rationales for applying justice in a restorative rather than a retributive manner, including the idea of ‘forward-looking accountability’, which allows resources to be invested in safety rather than in limiting liability[7]. Justice can be served by avoiding further harm, and without inflicting punishment on the perceived “guilty party”.

Secondly, does fear of punishment push us to perform better? While in broader society we accept that – for a certain element – fear of punishment is necessary as a deterrent, it seems unlikely that a professional person going about their chosen work would require that kind of motivation. For most of us, doing our best for the people who rely on us is the driver; and if pushed for energy at the worst of times, then fear of failing those people, or of failing our own professional standards, would be enough. Throughout my career as a pilot I have enjoyed indemnity from any legal action arising from my work, and I have never encountered or needed “zero tolerance for rule-breaking” to govern my behaviour. Professional people are often under a good deal of pressure as it is, trying to achieve the best outcomes possible with the resources available. The extra stress of potential punitive action will probably not increase their motivation, but it may simply add to their list of challenges. Emotional Intelligence authors Daniel Goleman, Richard Boyatzis and Annie McKee make the point regarding negative emotions such as fear and anger: “from a biological perspective, these emotions were designed for short, intense bursts meant to prepare us to fight or run. If they last too long or are continually primed, they exhaust us or slowly burn us out”[8] . With regards to healthcare, does this not describe the problem that already exists after several years of worldwide pandemic? Reports of healthcare systems suffering widespread fatigue, burnout and staff shortages abound in news media around the world. Adding further pressure to these systems is unlikely to improve human performance.

On balance, it can be argued that the threat of punishment in a complex setting such as healthcare will do little to improve the performance of individuals, and may actively hinder efforts to evolve; and that there are viable alternatives which produce better outcomes. There will always be outliers – specific cases of clear, deliberate wrongdoing – but does it make sense to gear our entire system around these few exceptional cases?[9] If we optimise our system to deal with these rarities then the negative impacts on organisational learning and staff engagement will be felt across the board, at considerable cost. If we assume the majority of our staff are well-motivated professionals and avoid the punitive approach, some deficient individuals may take advantage of that; but the vast majority will be better empowered to use their resourcefulness to achieve organisational goals, and we will be far better informed about how best to improve.

There is a lot in this, and transforming a system is not simple, especially when the demands upon it are so high. However there is a wealth of outstanding literature about how to make these constructs work, and examples of where Human Factors-informed thinking, modern safety theory and genuine Just Culture have been operationalised with great success. The vision and the roadmap are there to be followed. Like any positive change – such as exercising or eating more healthily – we don’t have to become perfect overnight, we just have to start. If we identify viable, constructive steps, and take them consistently, the benefits will become obvious and will compound: focus on the behaviours and the outcome will take care of itself. For the sake of people like Charlene Murphey and her family, and for that of RaDonda Vaught and the many healthcare professionals who see themselves in her, any effort to shift the trajectory for the better is one worth making.


[1] Pronovost, P., & Vohr, E. (2010). Safe Patients, Smart Hospitals: How One Doctor's Checklist Can Help Us Change Health Care from the Inside Out. London, England: Hudson Street Press, ch. 2

[2] Hollnagel, E., Wears, R. L., & Braithwaite, J. (2015). From Safety-I to Safety-II: A White Paper. Odense, Denmark: The Resilient Health Care Net: Published simultaneously by the University of Southern Denmark, University of Florida, USA and Macquarie University, Australia.

[3] ibid.

[4] Dekker, S. (2011). Patient Safety: A Human Factors Approach. Boca Raton, USA: CRC Press, p. 55

[5] CIEHF. (2020). White Paper: Learning from Adverse Events. Wootton Wawen, UK: Chartered Institute of Ergonomics and Human Factors.

[6] Syed, M. (2015). Black Box Thinking: The Surprising Truth About Success. London, UK: John Murray (Publishers), ch. 1

[7] Dekker, S. (2017). Just Culture: Restoring Trust and Accountability in Your Organization. Boca Raton, USA: CRC Press, ch. 5

[8] Goleman, D., Boyatzis, R., & McKee, A. (2002). Primal Leadership: Realizing the Power of Emotional Intelligence. Boston, USA: Harvard Business School Press, p. 25

[9] Provan, D., & Rae, D. (2021, February 7). The Safety of Work Episode 65. Retrieved from The Safety of Work: https://safetyofwork.com/episodes/ep65-what-is-the-full-story-of-just-culture-part-2/transcript

Great article Mark, and a "must read" for those involved in quality and safety in healthcare. One of the realities I encountered was the demand from families to hold people accountable when an adverse event occurred to their child. Healthcare workers and health providers are starting to see the need for HF principles to be used at the bedside, with the goal of preventing adverse events before they happen (Safety-II). Parents and those outside of health don't yet understand. I think it was the District Attorney who charged RaDonda Vaught independently, not the family or health regulators. How we navigate the competing demands of families and just culture is the $64,000 question. #humanfactors #safetyculture #healthcare

The Chartered Institute of Ergonomics and Human Factors White Paper as mentioned in the article is available here: https://ergonomics.org.uk/resource/learning-from-adverse-events.html It's a great read, and while it is aimed at high level organisations it has value for any team intent on improving outcomes.
