Graphing our data is great, BUT
We were graphically displaying data for centuries before we had a practical way to wrap our arms around the Measurement Uncertainty that comes along with every single bit of that data. Over those years, displays of data gradually became richer in the density of the information they conveyed to our eyes, even while staying strictly two-dimensional.
Years ago, Edward Tufte “re-posted” a superb example of the power of this graphical approach. He showed us a Charles Minard representation of the size of Napoleon’s Army as it first fought all the way to Moscow and then had to fight all the way back. This graph is stunning in its ability to convey so much horror and misery on a single flat page!
Tufte’s point was that with some careful thought and planning, we can compress data until it’s really wonderfully dense and insightful. (If you are not familiar with him, Tufte also detests PowerPoint.) Minard made this contribution to data representation at least 160 years before there was any meaningful way to question the Measurement Uncertainty of the data represented in his graph, or to assign a numerical weight to it.
We can see that this relatively loose, relaxed state extended all the way forward to the early Quality and Statistical gurus of the last century. To my eyes, this period ended roughly mid-century with John Tukey and Claude Shannon. Up until that point, when someone presented a graphical representation of the data they were using to make any point, we could only accept their implicit promise that any measurement uncertainty they might have encountered did not significantly weaken their point. It was only after this period that estimating Measurement Uncertainty first became a practical possibility.
Of course, the development of this capability immediately created an uneasy coexistence between attempts at graphing data and that data's Measurement Uncertainty. When does the relative Measurement Uncertainty of any process rise to the point at which its risk compels us to consider it in tandem with every other viewpoint, or can it remain, as it did for Minard, a trivial question?
Adding the word "risk" to this challenge means that any answer we can achieve arrives along a spectrum, with no easy black-and-white answer.
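One common yardstick practitioners use to put a rough number on "when does uncertainty start to matter" is the test uncertainty ratio (TUR): the tolerance span divided by the span of the expanded measurement uncertainty. The minimal sketch below is mine, not anyone's official method; the 4:1 threshold is a widely quoted rule of thumb rather than a universal requirement, and the function name and values are invented for illustration.

```python
# Test uncertainty ratio (TUR): tolerance span / expanded-uncertainty span.
# The 4:1 comparison below is a common rule of thumb, not a requirement.

def test_uncertainty_ratio(lower_limit: float, upper_limit: float,
                           expanded_uncertainty: float) -> float:
    """TUR = (USL - LSL) / (2 * U), with U the expanded uncertainty."""
    return (upper_limit - lower_limit) / (2.0 * expanded_uncertainty)

tur = test_uncertainty_ratio(lower_limit=9.0, upper_limit=11.0,
                             expanded_uncertainty=0.1)
print(f"TUR = {tur:.1f}:1")  # 10.0:1 -- comfortably above the 4:1 rule of thumb
```

The closer that ratio falls toward 1:1, the less "trivial" the question becomes.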
A Worst-Case Situation As An Illustration
Suppose that we are graphing a process continuously, using any one of many generally accepted graphing approaches. This view of any process must also define the process limits of "product acceptability," which is one of the main reasons for going to this trouble in the first place. To keep the peace, let's ignore for right now the fact that there are many different ways of defining a variety of different process limit locations. Regardless of who places these limits, or the method they use, we will eventually still need to define our reaction whenever a data point inevitably lands right on our process limit line. The simple question becomes: "Is this particular data point 'in' or 'out' of tolerance?"
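To make that dilemma concrete, here is a minimal sketch of one possible decision rule. Everything in it is an assumption for illustration: the function name, the use of an expanded uncertainty U at roughly 95 % coverage, and the three-way classification are my choices, not a prescription from any particular standard (documents such as JCGM 106 and ILAC-G8 treat decision rules like this formally).

```python
# A minimal sketch of the "on the limit" problem: given a measured value,
# its expanded uncertainty U, and tolerance limits, classify the result.

def classify_result(measured: float, expanded_uncertainty: float,
                    lower_limit: float, upper_limit: float) -> str:
    """Classify a single reading against tolerance limits.

    'pass'          -- the whole uncertainty interval lies inside the limits
    'fail'          -- the whole uncertainty interval lies outside the limits
    'indeterminate' -- the interval straddles a limit line
    """
    low = measured - expanded_uncertainty
    high = measured + expanded_uncertainty

    if low >= lower_limit and high <= upper_limit:
        return "pass"
    if high < lower_limit or low > upper_limit:
        return "fail"
    return "indeterminate"

# A reading sitting exactly on the upper limit is neither clearly in
# nor clearly out once its uncertainty is acknowledged:
print(classify_result(measured=10.0, expanded_uncertainty=0.2,
                      lower_limit=9.0, upper_limit=10.0))
# -> 'indeterminate'
```

The point is the third branch: once the uncertainty interval straddles the limit line, "in or out?" has no honest binary answer.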
If, dear reader, your answer to this question, for the process that you care most about, is that it is of trivial concern to you, then you should stop reading this and save your time and energy.
If, like many, many others, you hope that the Measurement Uncertainties your process sees every minute that you run it will just miraculously cancel out to zero, then please read on. That includes anyone who thinks that capturing a process by graphing it from its outer boundaries to its "center" will magically capture its Measurement Uncertainty, just because nearly all the process results fall inside its limits.
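For what it's worth, the standard treatment (the GUM) says that independent uncertainty contributions do not cancel; they combine in quadrature. The sketch below illustrates this, with contribution names and values invented purely for the example.

```python
import math

# Independent uncertainty contributions combine in quadrature
# (root-sum-of-squares) per the GUM; they do not cancel to zero.
# The contribution values below are made up for illustration.
contributions = {
    "reference standard": 0.05,
    "repeatability":      0.03,
    "temperature":        0.02,
}

combined = math.sqrt(sum(u**2 for u in contributions.values()))
expanded = 2.0 * combined  # coverage factor k=2, roughly 95 % coverage

print(f"combined standard uncertainty u_c = {combined:.4f}")
print(f"expanded uncertainty U (k=2)      = {expanded:.4f}")
```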
We have silently moved into territory ruled not by our QA or engineering departments, but by the perspective of our customers. If they are willing to go along with "a century from now, this will not matter," then we can feel safe in telling ourselves that we have finished with process risk assessment. Otherwise, it's up to us to apply the resources to give our final customers the confidence that we are 1) as concerned about all this measurement risk as they are, and 2) technically capable of enclosing all possible risk outcomes.
This measurement risk didn’t just jump up from nowhere the first time that our process wandered out to its limit. It was there in plain sight all along and quite capable of growing quickly in a very non-linear fashion, too!
Whenever I get tired of beating this particular drum, I think about the thousands of people who are entering the measurement field for the first time every year and are unaware of these considerations. I was a member of this group myself while I performed a decade’s worth of equipment calibrations in a Pharma setting.
Just sayin'.