Comment on News Report - 'Watchdog failed to investigate eight surgery deaths' and the problem with measuring surgeons that might kill you
Richard A D Jones
President C2-Ai - Serial entrepreneur to multiple exits - #tech4good - C-level Swiss Army Knife, creative strategist, author, lecturer on digital health
If you’re a hospital board member and/or senior manager, your duty of care, oath, legal/governance responsibility means this article is worth your time as a guide to avoid the consequences of failing to measure the right things (fines, sackings, scandals, maybe even prison).
The Australian reports that "the Health Care Complaints Commission responded to a notification by three surgeons who had claimed a fellow surgeon was 'not fit to operate' and had alleged a 'failure of proper processes' at Royal Prince Alfred Hospital by undertaking the assessment rather than a formal investigation." https://www.theaustralian.com.au/nation/politics/watchdog-failed-to-investigate-eight-rpa-surgery-deaths/news-story/762b0d0bcb58326789c0fb7b0b93fd5e
The scary part is that far more complications and deaths may be occurring because hospitals are not using a system that can identify them. I'm going to explain why the problem in Australia is hard to untangle without the right measurement systems, and how we resolve these issues early, before they become scandals (with five case studies described below).
Firstly, you want your surgeons to take on higher-risk patients. You might be one in the future, and you don't want them ducking operations because "they are worried it will affect their mortality ratings if things go wrong", as the Daily Telegraph reported*. Those ratings are simply deaths as a percentage of operations, so you can immediately see the problem and why surgeons measured this way would (and indeed do) think defensively.
Trust me, we’ve been banging the drum about this for years. Our CMO wrote in HSJ on this topic in 2016 - ‘Why the NHS needs to get better at assessing surgical risk’ (https://www.hsj.co.uk/topics/patient-safety/why-the-nhs-needs-to-get-better-at-assessing-surgical-risk/7014195.article?blocktitle=Comment&contentID=7808)
The only way to properly investigate the situation highlighted by The Australian today is a properly risk-adjusted assessment of observed results against expected results. This is what we do for every patient in every hospital that uses our system, and on request for regulators and authorities, with no change to hospital workflows and no disruption.
Without understanding the physiology of the patient, any co-morbidities and the risk of the prospective operation, you are comparing apples and oranges. We look at all those factors, so we can benchmark what happens (observed mortality and complications) against what should happen, using our Ai-backed system built on 25 years of research and 120m patient records from 46 countries.
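To make the observed-to-expected (O/E) idea concrete, here is a minimal sketch in Python. All figures, surgeon names and the expected-death values are invented for illustration only; in practice the expected figure comes from a risk model of each patient's physiology and co-morbidities, which is simply assumed here.

```python
# Hypothetical illustration: raw mortality rate vs. risk-adjusted O/E ratio.
# All numbers are invented for demonstration; they are not real data.

def raw_mortality_rate(deaths, operations):
    """Deaths as a percentage of operations (the naive 'league table' metric)."""
    return 100.0 * deaths / operations

def observed_to_expected(observed_deaths, expected_deaths):
    """O/E ratio: below 1 means fewer deaths than the case-mix predicts."""
    return observed_deaths / expected_deaths

# Surgeon X takes on complex, high-risk patients; Surgeon Y takes routine cases.
# 'expected_deaths' stands in for the output of a per-patient risk model.
surgeons = {
    "X": {"deaths": 6, "operations": 100, "expected_deaths": 8.0},
    "Y": {"deaths": 2, "operations": 100, "expected_deaths": 1.5},
}

for name, s in surgeons.items():
    raw = raw_mortality_rate(s["deaths"], s["operations"])
    oe = observed_to_expected(s["deaths"], s["expected_deaths"])
    print(f"Surgeon {name}: raw mortality {raw:.1f}%, O/E ratio {oe:.2f}")
```

On raw mortality, Surgeon X (6.0%) looks three times worse than Surgeon Y (2.0%). Once risk-adjusted, X's O/E of 0.75 beats expectation while Y's 1.33 falls short of it, which is exactly the inversion the case studies below describe.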
455 lives saved in our partner hospitals
To give a view of how effective our systems are, we looked at a recent 12-month period in our partner hospitals in the UK alone. Improvements made in that period equate to 455 lives saved, nearly 4,000 instances of harm avoided and more than £20m in the direct costs of treating those harms. Rolled out across the NHS, this would save three times the number of lives lost on the roads each year.
CASE 1 – Study of outcomes of 6 orthopaedic surgeons over 2 years
Identified a ~3-fold variation in incidence of raw mortality & complications between surgeons.
However, the case-mix adjusted picture (the observed to expected ratio O/E) shows those with higher incidence were not poorer performers, but operating on higher risk patients. Compare for example Mr A with Mr D. On raw rates, Mr D seems better. But Mr A was dealing with more complex patients and actually getting the same or better results.
Table 1 – Raw Mortality (deaths per 100 cases) and our risk-adjusted Observed to Expected Ratio for the same cases
Table 2 – Raw Morbidity (complications per 100 cases) and our risk-adjusted Observed to Expected Ratio for the same cases
CASE 2 – Comparing one surgeon’s mix of patients to a broader group
Mr. Z was taking on a more complex case-mix than his colleagues, as shown in the chart below: more of his cases (blue) fell into the higher-risk categories than his colleagues' (yellow).
However, when we looked at the Observed to Expected ratio for Mr. Z’s cases, his performance was good. In this case, the choice of higher risk patients was justified.
CASE 3 – Orthopaedic surgeon also selecting high-risk patients
This surgeon, like the surgeon in Case 2, was also operating on higher-risk patients.
However, in this case his outcomes did not justify the approach. Some patients were too high-risk to be operable, and the surgeon's Observed to Expected ratio showed results that did not begin to justify the choices.
CASE 4 – Vascular surgeon re-instated almost immediately after suspension
A vascular surgeon had a higher-than-normal death rate. Concerns were raised over quality of care based on raw complication rates (complications per operation), and he was suspended by the regulatory authorities on this basis.
After we were asked to look at the situation, he was re-instated almost immediately: our analysis over 5 years showed excellent results in the face of a highly complex case-mix, particularly over the most recent 2 years. In fact, he had been the best-performing surgeon in the hospital over the previous three years.
CASE 5 – Surgeon killing patients through training failure
A particularly sensitive case: one surgeon had not been trained in one aspect of their work and, in closing up after operations, was puncturing patients' bowels. Seven died and the hospital hadn't spotted it. Our system flagged the issue as soon as it went live. The surgeon would otherwise have carried on treating people and inadvertently killing them.
So back to this case in Australia.
Sometimes we see authorities struggling because they don't know how to do what we do. It's a unique (but validated and proven) approach, and if you're not aware it can be done, you rely on the best measures you have, which are not fit for purpose. The previous cases show the insights that correct risk adjustment can deliver: the real results in the real, specific circumstances of each operation.
So is there a problem with the surgeon in Australia? Only proper risk-adjusted assessment of the surgeon's results can tell you.
About C2-Ai
C2-Ai helps hospitals worldwide to demonstrably reduce avoidable harm and mortality, generate significant savings on operating expenditure (potentially millions per hospital) and reduce complaints/clinical negligence claims by up to 10%.
We uniquely and accurately risk-adjust for each patient and can tell which hospitals, specialties, consultants etc. are doing well (given their specific case-mix), where a hospital has issues with mortality and complications, what the causes are, their economic impact, and how to resolve them.
We can then support hospitals with forward-looking applications to triage and manage patients more effectively, thereby optimising outcomes and cost-effectiveness.
Our suite of tools is not limited to hospitals, but has demonstrated wider application for regulators and purchasers/insurers.