Your “Lean Daily Management” Approach Would Be Even Better with Some Simple Statistical Methods
Created by Mark Graban

Edit - My book on these topics, Measures of Success: React Less, Lead Better, Improve More is now available as an eBook and paperback.

For a long time, many of us have taught that Lean is more than a set of tools, more than Kaizen Events and projects. As Toyota teaches today, through its TSSC group, the Toyota Production System and Lean are an organizational culture, a philosophy, a managerial method, and a set of technical methods.

It’s a system.

In recent years, it’s certainly been encouraging to see more organizations learning and practicing Lean as a management system. What’s often described as “Lean Daily Management” (or “Managing for Daily Improvement,” or MDI) typically includes practices like team huddles, formal Gemba walks, and boards on the wall that display performance metrics and improvement ideas that are initiated and worked on by staff and frontline supervisors.

One concern I have about Lean Daily Management is the way some businesses and healthcare organizations view and react to the charts on the wall, whether in the board room, on the shop floor, or in the operating room hallway.

Some very common practices out there violate known “best practices” for statistical analysis, which can lead to a lot of frustration and wasted time, while hampering needed improvement efforts.

I have written before, on LinkedIn, about how the book that’s had the biggest impact on me is Understanding Variation: The Key to Managing Chaos, by Prof. Donald J. Wheeler, from Tennessee (my apologies to LEI and its great authors). The methods I write about here are something I included, if only briefly, in the first edition of my book Lean Hospitals (and in each edition since).

Dr. Wheeler worked directly with the late, great Dr. W. Edwards Deming for 20 years and still carries forth his message today. If you wonder why Deming should still be relevant to the Lean community, remember Shoichiro Toyoda’s words:

“There is not a day I don’t think about what Dr. Deming meant to us. Deming is the core of our management.”

I recently saw Dr. Wheeler give a keynote talk at the Society for Health Systems Conference and was reminded, through his wisdom and humor, of the important lessons from his book, which I’ve read countless times over the past 20 years.

Wheeler teaches several important concepts that should be incorporated into Lean Daily Management, such as:

No data have meaning apart from their context

I once saw a hospital post, in their lobby, that their “Quality Panel Score” year-to-date was 3.58. There was no context about what the maximum score was, what the scores were in previous years, or how that score compared to other hospitals. The only other piece of context was that the YTD target was 3.59. I’m always suspicious of overly precise targets and performance that’s suspiciously close to the target, but that’s also a topic for another day.

Two data points do not make a trend

We need to stop making comparisons and decisions based on just two data points (this includes comparing actual performance to the last period, to last year, or to a goal or target). As Dr. Wheeler said in our conversation, three data points generally do not make a trend either. We need to see how a process performs over time, and a chart is the best way to visualize that. We just need to react to it in the right way.

Arbitrary targets can cause a lot of problems

As Deming taught, arbitrary targets that exceed the capabilities of the current process and system will often lead people to “fudge the numbers” or “game the system” to get the results that management demands. See the recent VA waiting-time scandals and the Wells Fargo unauthorized-accounts scandal for examples of this. A Lean culture should shift away from what healthcare calls “naming, blaming, and shaming.” But, if we don’t have that culture yet, we must be careful with how managers and executives react to the charts on these boards.

Filter out noise to better find signals in the charts

Every data set contains both signal and noise. If we react to all the noise, overreacting and asking for explanations for every up and down in the data, we waste a lot of time and frustrate everybody involved (which could lead to the end of Lean Daily Management). Control charts (or what Wheeler calls “process behavior charts,” which is probably a better name) are the best way to filter out noise so we can make better decisions based on signals.

Use process behavior charts to detect signals and to determine if we’ve really improved

Once you learn the Deming / Wheeler statistical thinking (which is easy to learn and practice), you’ll no longer be satisfied when a team shows simple “before and after” data (a two-data-point comparison that masks variation, making it impossible to tell whether the apparent improvement is signal or noise).

Be Careful with Linear Trend Lines

While I’m happy to see charts on these boards, people should be careful with their use of “linear trend lines,” which are incredibly easy to draw in Excel. Linear trend lines can be very sensitive to the first and last data points in a series.

For example, here is a run chart with a linear trend line (in red) that suggests performance is improving:

[Chart: run chart with an upward-sloping linear trend line (in red)]

A manager might conclude (and try to convince their boss) that “we are getting better,” hoping the boss would praise the team for this.

Recognition is good, of course. But, only when it’s deserved.

Notice how there happens to be a very low point (82%) at the start of the chart and a very high point (91%) at the end. Hmmmm. If one were trying to be a bit deceptive, selectively choosing the starting and ending points is one way to do it.

Note: I’m not encouraging this sort of behavior.

It made me wonder what would happen if, for some reason, the first and last data points were not in that graph. It would then look like this:

[Chart: the same run chart without the first and last data points; the linear trend line (in red) now slopes downward]

Same data, different timeframe. Now, the linear trend is downward.

Wait, I thought we were improving?!?! Troubling, isn’t it?
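
To see how fragile a linear trend line can be, here is a minimal sketch in Python. The monthly scores are hypothetical (loosely echoing the percentages above, not the actual chart data), and np.polyfit is simply the quickest way to fit the same least-squares line that Excel draws. With this illustrative series, the fitted slope flips from positive to negative once the first and last points are dropped.

```python
# Illustrative only: hypothetical monthly scores, not the actual chart data.
import numpy as np

scores = [82, 88, 85, 89, 84, 87, 83, 88, 86, 85, 87, 91]  # percent

def trend_slope(values):
    """Fit a least-squares line and return its slope (points per month)."""
    x = np.arange(len(values))
    slope, _intercept = np.polyfit(x, values, deg=1)
    return slope

print(f"Slope with all points:           {trend_slope(scores):+.2f} per month")
print(f"Slope without first/last points: {trend_slope(scores[1:-1]):+.2f} per month")
```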

Process Behavior Charts are Better

That’s why it’s better to use process behavior charts. If you’re familiar with Statistical Process Control (SPC), the approach I will describe here is very similar (it’s called a “control chart for individuals”).

When looking at a chart, we need to ask questions, such as:

  • Are we improving? Or is performance just fluctuating around an average (noise)?
  • Is the process changing?
  • If so, is it changing continuously, or are occasional changes boosting performance more like a step function than a linear trend?

Process behavior charts help us separate signal from noise and help us determine when a process’s performance has really improved.

To create this chart, we take the time series run chart and add a line for the calculated mean. We also calculate “natural process limits,” as Wheeler would call them, or what SPC would call “upper and lower control limits.” We choose +/- 3-sigma control limits because that is proven to filter out most of the noise in the system.
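
Here is a minimal sketch, in Python, of how those natural process limits are typically calculated for an individuals (XmR) chart: the 3-sigma limits are estimated by multiplying the average moving range by the standard scaling constant 2.66. The data and the resulting numbers are hypothetical, not the exact series behind the charts in this article.

```python
# Minimal XmR (individuals chart) limit calculation; data are hypothetical.

def xmr_limits(values):
    """Return (mean, lower_limit, upper_limit) for an individuals chart."""
    mean = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 * average moving range approximates 3-sigma for individual values
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

scores = [84, 87, 86, 88, 85, 83, 87, 89, 86, 85, 88, 84]  # hypothetical monthly %
mean, lcl, ucl = xmr_limits(scores)
print(f"Mean: {mean:.1f}%   Lower limit: {lcl:.1f}%   Upper limit: {ucl:.1f}%")
```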

We need to filter out noise because the last thing we want to do is draw the wrong cause-and-effect relationships between our process improvement actions (or attempts) and the results.

If there had been an improvement attempted, such as a Rapid Improvement Event or a change in staffing levels and assignments in February 2014, we want to determine, with statistical validity, if that made a difference or not in performance. If the patient satisfaction score is higher, we need to ask if that's a meaningful signal or if it's just noise, or fluctuation, in the data.

The average patient satisfaction score was about 86% over that time frame. The upper limit is calculated to be 92.5% (we do not choose this limit, as it is the “voice of the process”). The lower limit is calculated to be 80%. You can read more and see a video tutorial on how this is calculated.

The process behavior chart, and understanding how to interpret it, leads me to draw a different conclusion about the data:

[Chart: process behavior chart of the same data, showing the mean (about 86%) and the calculated upper and lower natural process limits (92.5% and 80%)]

Bad news: Patient satisfaction is not improving (sorry to be a buzzkill)

Good news: Patient satisfaction is not getting worse (um, hooray?)

All we know is that we have a stable and predictable process that’s generating stable and predictable results. The chart is telling us (good news) that we can predict with a high degree of certainty that the July 2015 patient satisfaction score will fall between about 80% and 92.5%. The bad news might be that our target is 95%.

The current system is incapable of hitting that target. So, we need to improve the system. We’d have to roll up our sleeves and get to work.

In a stable and predictable system, a question such as “Why was patient satisfaction lower in April compared to March?” is the wrong question to ask. There is no likely explanation for why April generated different results than March. It was the same system both months; there is always variation in the output of a process, and here it’s all noise. We could ask, “How do we improve the system so it performs better?” but that’s, again, a different question than asking, “What went wrong last month?”

If there had been an intervention in February 2014, I would have to conclude that it was ineffective. We planned, we did, we studied, and, based on this view, I’d want to adjust. There is nothing in the process behavior chart to indicate that performance is better. It appears to be fluctuating around a mean. We have noise in the system. We shouldn’t praise people for randomness in the data and we shouldn’t draw the wrong conclusions from our attempts at improvement.

If we try a new intervention, we can use the “Western Electric Rules” to help us determine if there is a signal amongst the noise. If we saw eight consecutive points above the mean, the process would be telling us something has changed (or it might confirm that our countermeasure was effective). Of course, eight consecutive points below the mean would tell us performance has shifted in the wrong direction. Or, finding any single data point above the 3-sigma limit of 92.5% would suggest a signal, as that is statistically unlikely to be “normal variation” or noise.

Or, we can simplify things to use these three rules for finding a signal:

Rule 1: Any data point outside of the limits.

Rule 2: Eight consecutive points on the same side of the central line (average).

Rule 3: Three out of four consecutive data points that are closer to the same limit than they are to the central line.
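
As a rough illustration, here is how those three rules could be checked in code. This is a sketch, not a full SPC library; the function name, the sample data, and the limits are all hypothetical.

```python
# Sketch of the three signal-detection rules; names and data are illustrative.

def find_signals(values, mean, lower, upper):
    """Return (rule, index) pairs where a signal is detected."""
    signals = []

    # Rule 1: any single point outside the natural process limits
    for i, v in enumerate(values):
        if v > upper or v < lower:
            signals.append(("Rule 1", i))

    # Rule 2: eight consecutive points on the same side of the central line
    for i in range(len(values) - 7):
        window = values[i:i + 8]
        if all(v > mean for v in window) or all(v < mean for v in window):
            signals.append(("Rule 2", i + 7))

    # Rule 3: three of four consecutive points closer to a limit than to the mean
    for i in range(len(values) - 3):
        window = values[i:i + 4]
        near_upper = sum(1 for v in window if v > (mean + upper) / 2)
        near_lower = sum(1 for v in window if v < (mean + lower) / 2)
        if near_upper >= 3 or near_lower >= 3:
            signals.append(("Rule 3", i + 3))

    return signals

scores = [84, 87, 86, 88, 85, 83, 87, 89, 91, 90, 93, 92]  # hypothetical monthly %
print(find_signals(scores, mean=86.0, lower=80.0, upper=92.5))
```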

The methods Wheeler teaches in Understanding Variation are easy to understand and they are not difficult to put into practice. I’ve taught these methods to managers and front-line staff in many health systems.

Once a team stops overreacting to every up and down or having to come up with an explanation for every point that’s below average or worse than the target, we free up a lot of time and mental energy for real and sustainable improvement.

On Monday, I taught a workshop on these methods at the Lean Healthcare Transformation Summit in Brussels. I’ll be teaching these methods again in a workshop sponsored by Catalysis in Seattle this May, and it will be offered as a pre-summit workshop at the Lean Healthcare Transformation Summit in Palm Springs this June. I hope you’ll join me. I hope you’ll buy Understanding Variation (and my new book) and, more importantly, I hope you’ll put these methods into use.

Mark Graban (@MarkGraban) is a consultant, author, and speaker. His latest book, Measures of Success: React Less, Lead Better, Improve More, is now available. Mark is also the author of the Shingo Award-winning books Lean Hospitals (the 3rd edition was released in 2016) and Healthcare Kaizen. Mark was also the creator and editor of the book Practicing Lean. He is also the VP of improvement and innovation services for the technology company KaiNexus and a board member of the Louise M. Batz Patient Safety Foundation. Mark blogs most days at www.LeanBlog.org.

Amila Nandasekera, MBA, Six Sigma MBB, Lean MBB, CMA

Business Improvement and Optimization. Boeing Australia.

5y

It's a very effective article that hits the nail on the head. What is equally important is to do 8-step problem-solving to get to the true root cause of the special-cause variation/signal (the problem) and eliminate the problem by eliminating the true root cause.

Troy Taylor

Thought Leader, Change Manager and TPS professional

7y

Great post, Mark. Well done. Highlight the mura and engage the leaders in improving the system.

Richard Salloum CMgr MCMI

Senior Product Manager - EUV at Edwards Vacuum

7y

Excellent article on how to interpret process control charts and understand variation. I particularly like the comment on how the start and end points of the chart can influence how the data are perceived. I'm from a machining background, and we have found that in order to fully understand any data set, we need event logs to run concurrently with the trend chart. This allows anyone analysing the data to truly understand the cause of any variation or inputs, and it is the basis for a control plan to sustain a capable process.

Darren O'Connor

Associate Director, Global External Operations at Biomarin

7y

Excellent article.....

Rob Maguire

Lead Partner @ Baringa Partners | Operational Value Creation

7y

Really interesting. Some excellent thinking there!
