How to Succeed at Failing, Part 2: Life and Death
In medicine, failure can be catastrophic. It can also produce discoveries that save millions of lives. Tales from the front line, the lab, and the I.T. department. Part of the series “How to Succeed at Failing.”
This article comes from Freakonomics Radio. You can listen and follow our weekly podcast on Apple Podcasts, Spotify, or elsewhere.
* * *
In early 2007, Carole Hemmelgarn’s life was forever changed by a failure, a tragic medical failure. At the time, she was working for Pfizer, the huge U.S. pharmaceutical firm. So she was familiar with the healthcare system. But what changed her life wasn’t a professional failure; this was personal.
Carole HEMMELGARN: My nine-year-old daughter, Allysa, was diagnosed with leukemia, A.L.L., on a Monday afternoon, and she died 10 days later. In this day and age of healthcare, children don’t die of leukemia in nine days. She died from multiple medical errors. She got a hospital-acquired infection, which we know today can be prevented. She was labeled. And when you attach labels to patients, a bias is formed, and it’s often difficult to look beyond that bias. So, one of the failures in my daughter’s care is that she was labeled with anxiety. The young resident treating her never asked me or her father if she was an anxious child, and she wasn’t. What happens is we treat anxiety, but we don’t treat scared, afraid, and frightened. And that’s what my daughter was. Hospitals are frightening places to children.
Hemmelgarn says she filed a claim against the hospital but she didn’t move forward with a lawsuit, because of the emotional toll. She ultimately took a different path: in 2021, she co-founded an advocacy group called Patients for Patient Safety U.S. It is aligned with the World Health Organization. She also runs a master’s program at Georgetown University, called Clinical Quality, Safety, and Leadership.
HEMMELGARN: When harm does reach the patient or family, that is the time to really analyze what happened. And while you never want to harm a patient or family, one of the things you’ll hear from patients and families after they have been harmed is they want to make sure that what happened to them or their loved one never happens again. The example I can give for myself personally is, I did go back to the very organization where my daughter died, and I have done work there.
Today, on Freakonomics Radio, we continue with the series we began last week, on failure. We acknowledged that some failure is inevitable:
Amy EDMONDSON: We are, by definition, fallible human beings, each and every one of us.
And that failure can be painful:
Gary KLEIN: I don’t think we should enjoy failure. I think failure needs to burn on us.
This week, we focus on the healthcare system, where failure is literally a matter of life or death.
HEMMELGARN: Some organizations felt like they had already achieved the patient-safety mission. Others, it wasn’t even part of their strategic plan.
And we will learn where, on a spectrum, to place every failure — from inexcusable …
John VAN REENEN: There’s lots of examples of huge public-sector failures. But this was one of the biggest.
To life-saving.
Bob LANGER: I really believed that if we could do this, it would make a big difference in medicine.
“How to Succeed at Failing,” Part 2, beginning now.
* * *
The story of Carole Hemmelgarn’s daughter is tragic — a hospital death caused by something other than the reason the patient was in the hospital. Unfortunately, that type of death is not as rare as you might think. Consider the case of RaDonda Vaught, a nurse at Vanderbilt University Medical Center. In 2019, she was prosecuted for having administered the wrong medication to a patient, who subsequently died. The patient was a 75-year-old woman who had been admitted to the hospital for a subdural hematoma, or bleeding in the brain. Here is RaDonda Vaught testifying at her trial:
RaDonda VAUGHT: I was pulling this medication. I didn’t think to double-check what I thought I had pulled from the machine. I used the override function. I don’t recall ever seeing any warnings that showed up on the monitor.
The medication that Vaught meant to pull from the Accudose machine was a sedative called Versed. What she mistakenly pulled was a paralytic called vecuronium. Vecuronium instead of Versed.
VAUGHT: I won’t ever be the same person. Um … It’s really … I … When I started being a nurse, I told myself that I wanted to take care of people the way that I would want my grandmother to be taken care of.
RaDonda Vaught was convicted of negligent homicide and “gross neglect of an impaired adult”; her sentence was three years’ probation. You might expect a patient-safety advocate like Carole Hemmelgarn to celebrate Vaught’s prosecution. But she doesn’t.
HEMMELGARN: This doesn’t solve problems. All this does is create silence and barriers. When errors happen, so often the front-line workers — your nurses, allied health professionals, physicians — are blamed. But what we’ve come to realize is, it’s really a systemic problem. They happen to be at the front line, but it’s underlying issues that are at the root of these problems. It can be policies that aren’t the right policies. It could be shortages of staff. It can be equipment failures that are known at device companies, but haven’t been shared with those using the devices. It can be medication errors because of labels that look similar, or drug names that are similar.
To get at the systemic problem in the Vanderbilt case, Hemmelgarn’s advocacy group filed a complaint with the Office of Inspector General in the Department of Health and Human Services.
HEMMELGARN: What we found most frustrating was the lack of leadership from Vanderbilt. Leadership never came out and took any responsibility. They never said anything. They never talked to the community. It was essentially silence from leadership. I think one of the other big failures we have in healthcare is fear. Healthcare is rooted in fear because of the fear of litigation. When there’s a fear of litigation, silence happens. And until we flip that model, we’re going to continue down this road.
Amy Edmondson, whom we heard from last week, is an organizational psychologist at the Harvard Business School. She recently published a book called Right Kind of Wrong: The Science of Failing Well. The Vanderbilt case was not an example of failing well. RaDonda Vaught, you’ll remember, dispensed vecuronium instead of Versed.
EDMONDSON: You know, you don’t have a dangerous, potentially fatal drug next to one that’s routinely used in a particular procedure. It’s what we might call an accident waiting to happen. With that perspective in mind, RaDonda is as much a victim of a system failure as a perpetrator of the failure, right? So, this reaction, to make this criminal — human error is almost never criminal. To criminalize this, I think, reflects an erroneous belief that doing so will preclude human error. No. What we will do is preclude speaking up about human error. And to her credit, she spoke up — and that, one could argue, ultimately led to her conviction. She would have been better off somehow trying to hide it, which I wouldn’t advocate, obviously. But when we recognize, deeply recognize, that errors will happen, then that means that what excellence looks like is catching and correcting errors and then being forever on the lookout for vulnerabilities in our systems.
Let’s take a step back and look at the scale of this problem. In 1999, the Institute of Medicine — known today as the National Academy of Medicine — issued a report called “To Err Is Human: Building a Safer Health System.” It found that 2 to 3 percent of all U.S. hospital admissions result in preventable injury or death, with medical error causing between 44,000 and 98,000 deaths per year. A more recent study, published in 2013 in the Journal of Patient Safety, put the number of preventable deaths at U.S. hospitals at more like 200,000 a year. These large numbers of preventable deaths got a lot of attention in the medical community — but Carole Hemmelgarn says the attention hasn’t produced enough change.
HEMMELGARN: Some organizations felt like they had already achieved the patient-safety mission. Others, it wasn’t even part of their strategic plan. There are areas where improvement has definitely escalated since the report came out over 20 years ago. But it hasn’t been fast enough. What we see is that not everything is implemented in the system, that you can oftentimes have champions that are doing this work — and if they leave, the work isn’t embedded and sustainable.
Amy Edmondson at Harvard has been doing research on medical failure for a long time. But she didn’t set out to be a failure researcher.
EDMONDSON: As an undergraduate, I studied engineering sciences and design.
It was several years into her engineering career that Edmondson decided to get a Ph.D. in organizational behavior.
EDMONDSON: I was interested in learning in organizations, and I got invited to be a member of a large team studying medication errors in hospitals. And the reason I said “yes” was, first of all, I was a first-year graduate student. I needed to do something. And second of all, I saw a very obvious link between mistakes and learning. And so I thought, here we’ve got these really smart people who will be identifying mistakes. And then I can look at how do people learn from them, and how easy is it and how hard is it. So that’s how I got in there. And then one thing led to another — after doing that study, people kept inviting me back.
Edmondson focused her research on what are called preventable adverse drug events — like the one from the RaDonda Vaught case.
EDMONDSON: Now, you can divide adverse drug events into two categories: one which is related to some kind of human error or system breakdown, and the other, which is a previously unknown allergy, so literally couldn’t have been predicted. And those are still adverse drug events, but they’re not called preventable adverse drug events.
Based on what she was learning from medical mistakes, Edmondson wanted to come up with a more general theory of failure — or, if not a theory, at least a way to think about it more systematically. To remove some of the blame. To make the responses to failure less uniform. Over time, she produced what she calls — well, here, let’s have Edmondson say it:
EDMONDSON: My spectrum of causes of failures.
Coming up, we’ll hear about that spectrum of causes of failures. It can clarify some things — but not everything.
EDMONDSON: Uncertainty is everywhere.
* * *
By the way, if you consider yourself a super-fan of this show, we’ve just launched a new membership program, Freakonomics Radio Plus. Every week, members get a bonus episode of Freakonomics Radio; you also get to listen ad-free to this show and the other shows in the Freakonomics Radio Network. To sign up, visit the Freakonomics Radio show page on Apple Podcasts, or go to freakonomics.com/plus. If you don’t want to become a member, just do nothing. Everything will stay the same.
* * *
How did Amy Edmondson become so driven to study failure? Well, here’s one path to it. Her whole life, she had been a straight-A student.
EDMONDSON: I never had an A-minus. Well, you know, I once had one in 10th grade. It just was so devastating, I resolved not to have one again. And I’m only partly joking.
But then she went to college.
EDMONDSON: I got an F on my first-semester multivariable calculus exam. An F. Like, I failed the exam. I mean, that’s unheard of.
In the years since then, Edmondson has been refining what she calls “a spectrum of causes of failure.” The spectrum ranges from “blameworthy” to “praiseworthy,” and it contains six distinct categories of failure.
EDMONDSON: Let’s take two extremes. Let’s say something goes wrong, we achieve an undesired result. On one end of the spectrum, it’s sabotage, someone literally tanked the process. They threw a wrench into the works. On the other end of the spectrum, we have a scientist or an engineer hypothesizing some new tweak that might solve a really important problem, and they try it, and it fails. And of course we praise the scientist and we punish the saboteur. But the gradations in between often lull us into a false sense that it’s blameworthy all the way.
Okay, so let’s start at the blameworthy end of the spectrum, and move our way along. Number one, of the six:
EDMONDSON: My spectrum of causes of failures starts with sabotage or deviance. I soak a rag in lighter fluid, set it on fire, and throw it into a building, right? Or I’m a physician in a hospital, I am a surgeon, and I come to work drunk, and do an operation.
After sabotage on the spectrum comes inattention.
EDMONDSON: Inattention is when something goes wrong because you just were mailing it in, you spaced out, you didn’t hear what someone said, and you didn’t ask, and then you just tried to wing it. Or maybe you’re a trucker and you’re driving, and you look away or fiddle with the radio and have a car crash.
After inattention comes inability: something goes wrong because the person simply doesn’t have the skills or knowledge the task requires. And after inability comes what Edmondson calls “task challenge.”
EDMONDSON: Yes, the task is too challenging for reliable, failure-free performance.
By the way, if you don’t remember the story of Richard Feynman and the Challenger investigation and the O-rings — don’t worry. We’re working on a show about Feynman that you’ll hear in the coming months. Okay, back to failure: the fifth cause of failure on Amy Edmondson’s spectrum is uncertainty.
EDMONDSON: So, uncertainty is everywhere. There’s probably an infinite number of examples here. But let me pick a silly one. A friend sets you up on a blind date. And you like the friend, and you think, okay, sure. And then you go out on the date and it’s a terrible bore or worse, right? It’s a failure. But you couldn’t have known in advance, it was uncertain.
The final cause of failure — we have by now moved all the way from the blameworthy end of the spectrum to the praiseworthy — is simply called experimentation.
EDMONDSON: I’m being fairly formal when I say experimentation, right? The most obvious example is a scientist in a lab, who probably really believes it will work, and puts the chemicals in and, lo and behold, it fails. Or, on a much smaller scale, I’m going to experiment with being more assertive in my next meeting, and it doesn’t quite work out the way I’d hoped. It’s the Edison quote, you know, 10,000 ways that didn’t work. He’s perfectly, perfectly willing to share that because he’s proud of each and every one of those 10,000 experiments.
So that is Amy Edmondson’s entire spectrum of the causes of failure: sabotage, inattention, inability, task challenge, uncertainty, and experimentation. If you’re like me, as you hear each of the categories you automatically try to match them up with specific failures of your own. If nothing else, you may find that thinking about failure on a spectrum, from blameworthy to praiseworthy, is more useful than the standard blaming-and-shaming. It may even make you less afraid of failure. That said, not everyone is a fan of Edmondson’s ethos of embracing failure. A research article by Jeffrey Ray, at the University of Maryland, Baltimore County, is called “Dispelling the Myth that Organizations Learn from Failure.” He writes, “Failure shouldn’t even be in a firm’s vocabulary. To learn, from failure or otherwise, a firm must have an organizational learning capability. If the firm has the learning capability in the first instance, why not apply it at the beginning of a project to prevent a failure, rather than waiting for a failure to occur and then reacting to it?” But Amy Edmondson’s failure spectrum has been winning admirers — including Gary Klein, the research psychologist best known as the pioneer of naturalistic decision-making.
Gary KLEIN: I’m very impressed by it. I’m impressed because it’s sophisticated. It’s not simplistic. There’s a variety of levels, and a variety of reasons. And before we start making policies about what to do about failure, we need to look at things like her spectrum and identify what kind of a failure is it so that we can formulate a more effective strategy.
Okay, let’s do that. Coming up: two case studies of failure, one of them toward the blameworthy end of the spectrum.
John VAN REENEN: It was very much driven by the prime minister, Tony Blair.
The other, quite praiseworthy.
Bob LANGER: I failed over 200 times before I finally got something to work.
* * *
John Van Reenen is a professor at the London School of Economics. He studies innovation, but years ago he did some time in the British civil service.
John VAN REENEN: I spent a year of my life working in the Department of Health when there was a big expansion of resources in the U.K. National Health Service, and various attempts at reform.
The National Health Service is the U.K.’s publicly funded healthcare system, and there was one particular reform that Van Reenen got to see up close.
VAN REENEN: One of the key things that was thought could really be a game-changer was to have electronic patient records. So you can see the history of patients, know the conditions, what they’ve been treated with. And having that information — I mean, instead of having all these pieces of paper written illegibly by different physicians, you could actually have this in a single record — would not only make it much easier to find what was going on with patients, but could also be used as a data source to try and help think about how patients could have more joined-up care and could even maybe predict what kind of conditions they might have in the future.
The project was called Connecting for Health, and there was substantial enthusiasm for it; at least the ad campaign was enthusiastic:
ANNOUNCER: All this is a key element in the future of the N.H.S. One day, not too far away, you’ll wonder how you ever lived without it.
The N.H.S. is a big operation, one of the biggest employers in the world.
VAN REENEN: I think the ranking goes something like the U.S. Department of Defense, the Indian railway system, Walmart — and the N.H.S. is up there in the top five. But then if you drill down into it, it is pretty fragmented. Each local general practitioner unit is self-employed. Each trust has a lot of autonomy. And that’s part of the issue: this was a centralized, top-down program in a system where there are a lot of different fiefdoms. A lot of different pockets of power who are quite capable of resisting this, and disliked very strongly being told “this is what you’re going to have, this is what you’re going to do,” without really being engaged and consulted properly.
But the train rolled on, despite these potential problems. Connecting for Health required a massive overhaul of hardware systems as well as software systems.
VAN REENEN: And the delivery of those was — there was a guy called Richard Granger who was brought in; I think he was the highest-paid public servant in the country. He was at Deloitte before he came. And then after he left, he went to work for Accenture. He was brought in to do this, and he designed these contracts — very tough contracts, which loaded the risk of things going wrong very strongly onto the private-sector providers. I think just about every single “winner” eventually either went bankrupt or walked away from the contract. Estimates of the cost vary, but they run up to $20 billion lost on this project. It was the biggest civilian I.T. project in the Western world. I mean, there’s lots of examples of huge public-sector failures, and private-sector failures as well. But this was one of the biggest.
British Parliament ultimately called this attempted reform “one of the worst and most expensive contracting fiascos ever.” So what kind of lessons can be learned from this failure?
VAN REENEN: I think it’s a failure of many, many different causes, of many different levels. That top-downness, the not really understanding what was going on at a grassroots level, and the haste — it was attempted very quickly.
If you’re the kind of person who likes to understand and analyze failure in order to mitigate future failures, what might be useful here is to overlay the National Health Service’s I.T. fiasco onto Amy Edmondson’s spectrum of causes of failure. Reconfiguring a huge I.T. system certainly qualifies as a “task challenge”; but there were shades of inability and inattention at work here as well. All of those causes reside toward the “blameworthy” end of the scale. As for the “praiseworthy” end of the spectrum — that’s where experimentation can be found. The N.H.S. project didn’t incorporate much experimentation; it was more command-and-control, top-down, with little room for adjustment and little opportunity to learn from the small failures that experimentation can produce, and which can prevent big failures. Experimentation, if you think about it, is the foundation of just about all the learning we do as humans. And yet we seem to constantly forget this. Maybe that’s because experimentation will inevitably produce a lot of failure — I mean, that’s the point! — and most of us just don’t want to fail at all, even if it’s in the service of long-term success. So let’s see if we can’t adjust our focus here. Let’s talk about real experimentation. And for that, we’ll need not another social scientist — like John Van Reenen or Amy Edmondson, as capable as they are — but an actual science scientist. Here is one of the most acclaimed scientists of the modern era.
LANGER: My name’s Bob Langer, and I’m an institute professor at M.I.T. I do research, but I’ve also been involved in helping get companies started. And I’ve done various advising to the government — F.D.A., and places like that.
Langer holds more than 1,400 patents, including those that are pending. He runs the world’s largest biomedical engineering lab, at M.I.T., and he is one of the world’s most highly cited biotech researchers. He also played a role in the founding of dozens of biotech firms, including Moderna, which produced one of the most effective Covid vaccines. One thing Langer is particularly known for is drug delivery: that is, developing and refining how a given drug is delivered and absorbed at the cellular level. A time-release drug, for instance, is the sort of thing we take for granted today. But it took a while to get there. One problem Langer worked on back in the 1970s was finding a drug-delivery system that would prevent the abnormal growth of blood vessels. The chemical that inhibits that growth is quite large by biological standards, and the consensus at the time was that time-release delivery wouldn’t work for large molecules. But as Langer once put it, “I didn’t know you couldn’t do it because I hadn’t read the literature.” So he ran experiment after experiment after experiment before finally developing a recipe that worked. Decades later, thanks to all that failure, his discovery played a key role in how Moderna used messenger RNA to create its Covid vaccine.
DUBNER: So, in your line of work, when I say the word “failure,” what comes to mind?
What do you think? Would you like to live in a world where there’s no shame in failure? Or: do you think it’s important for failure to hurt, to burn (as one of our guests put it last week); maybe that creates a stronger incentive to succeed? I’d love to know your thoughts on this question, and on this series so far; send an email to [email protected], or leave a review or rating in your podcast app. Coming up next time on the show: we’ll dig deeper into the idea of grit versus quit. When you’re failing, how do you know if it’s time to move on?
John BOYKIN: We just could not stop it from leaking, and I was no longer willing to just keep pouring more and more of my money into it.
Case studies in failure, and in grit versus quit — including stories from you, our listeners. That’s next time on the show. Until then, take care of yourself and, if you can, someone else too. And remember to check out Freakonomics Radio Plus if you want even more Freakonomics Radio; every week you’ll get a bonus episode of the show — this week, for instance, you’ll hear our full interview with the remarkable Bob Langer. To sign up, visit the Freakonomics Radio show page on Apple Podcasts, or go to freakonomics.com/plus.
* * *
Freakonomics Radio is produced by Stitcher and Renbud Radio. This episode was produced by Zack Lapinski and mixed by Eleanor Osborne, with help from Jeremy Johnston. Our staff also includes Alina Kulman, Elsa Hernandez, Gabriel Roth, Greg Rippin, Jasmin Klinger, Julie Kanfer, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, and Sarah Lilley. Our theme song is “Mr. Fortune,” by the Hitchhikers; all the other music was composed by Luis Guerra.