Metrology Monday! #89 A Discussion on Conformity Assessment, Decision Rules and Measurement Decision Risk – Joint Probability
Last week we discussed conditional probability, which is a good way to answer the question “for this particular measurement, with its associated uncertainty and this specification, what is the probability of a False Accept?” But what if you are seeking the answer to a different question? For example, what if you were asked to purchase a calibration standard capable of calibrating a particular device, like a multimeter, and you need it to be good enough for any measurement that the meter may make, from the lower specification limit of the instrument to the upper specification limit? How good would the standard need to be for you to have an acceptable False Accept probability?
When we test a population of devices, we can expect:
· That the data for the population forms some sort of distribution
· That a certain number of devices perform to specifications and (hopefully) a smaller number do not
· That there is often a central tendency to the distribution
· That a natural process will, with enough data, resemble a normal distribution
Because of this, it is very useful to apply the knowledge that we have about the population of devices that we want to calibrate.
For this example, we need to understand the specifications for both the calibration standard and the multimeter. Both instruments should have an associated level of confidence with their specification. For Fluke, this is usually 2.58 sigma, or the 99% level of confidence.
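As a quick illustration of what that confidence level implies (the ±1 mV figure here is hypothetical, not taken from any data sheet): a specification of ±1 mV stated at 99% confidence corresponds to a standard uncertainty of

$$ u = \frac{1\ \text{mV}}{2.58} \approx 0.39\ \text{mV} $$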
If you have this information, then you can apply the statistics associated with Joint Probability. The definition of False Accept is the joint probability of an out-of-tolerance condition and the probability of the out-of-tolerance point being reported as in-tolerance.
The probability of a False Reject is the joint probability of an in-tolerance condition and the probability of the in-tolerance point being reported out of tolerance.
I included the mathematical formulas for False Accept and False Reject just in case you really like math and want to build your own tools, but there are some good tools available for free that will make these calculations for you.
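For reference, the usual form of those joint-probability integrals looks like this (my notation, assuming a symmetric two-sided tolerance ±T with the acceptance limits set equal to the tolerance limits):

$$
P_{\text{FA}} = \int_{|x|>T} p(x)\left[\int_{-T}^{T} p(y\mid x)\,dy\right]dx,
\qquad
P_{\text{FR}} = \int_{|x|\le T} p(x)\left[1-\int_{-T}^{T} p(y\mid x)\,dy\right]dx
$$

Here p(x) is the distribution of true errors for the device population, and p(y|x) is the distribution of measured values given a true error x, which is set by the measurement uncertainty.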
In order to use Joint Probability, you will need to know:
· The expanded uncertainty of the measurement, along with its coverage factor
· The tolerance limits for the device, whether they are for a specific manufacturer and model number or a common tolerance for all models in a particular family, such as 3 ½ digit multimeters
· The shape of the distribution for the device (generally it is normal)
· An estimate of the in-tolerance probability (the percentage of devices that are in-tolerance at the end of their calibration interval). The in-tolerance probability provides the coverage factor for the device-under-test distribution (the relationship is sketched just after this list). While we can use the coverage factor from the specifications, in real life I personally find it is better to be a bit more conservative, because instruments may be abused by customers or damaged in shipping, and the observed in-tolerance percentage may not be the same as the coverage factor in the product specifications.
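To make the coverage-factor point concrete: assuming the device population is normally distributed and centered on nominal, an in-tolerance probability R fixes the coverage factor at which the tolerance limit T sits,

$$ k_{\text{DUT}} = \Phi^{-1}\!\left(\frac{1+R}{2}\right), \qquad \sigma_{\text{DUT}} = \frac{T}{k_{\text{DUT}}} $$

so R = 95% gives k ≈ 1.96, whereas a specification stated at 99% confidence implies k ≈ 2.58. Using the lower, observed reliability widens the assumed device distribution, which is the more conservative choice described above.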
Here is an example: The device under test tolerance is 1%, and the in-tolerance probability (also called end-of-period reliability) is 95%. The expanded measurement uncertainty is 0.25%. The tool I am using is called RiskGuard™, which is freeware available at www.isgmax.com. Another good tool is Suncal from Sandia National Laboratories, also freeware.
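If you would rather check the arithmetic yourself, here is a minimal Python sketch of the joint-probability integrals for this example. It assumes a normal device distribution scaled so that 95% of units fall within the ±1% tolerance, a normal measurement error with standard uncertainty 0.25%/2 (a coverage factor of 2 is my assumption here), and simple acceptance with the acceptance limit equal to the tolerance limit. The dedicated tools handle more general cases, so expect small differences from their results.

```python
import numpy as np
from scipy import stats, integrate

# Example inputs from the post; all values are in percent of reading
TOL = 1.0    # device under test tolerance limit (plus/minus 1 %)
EOPR = 0.95  # in-tolerance probability (end-of-period reliability)
U = 0.25     # expanded measurement uncertainty
K = 2.0      # assumed coverage factor for the expanded uncertainty

# Standard deviations implied by the assumptions above
sigma_dut = TOL / stats.norm.ppf(0.5 + EOPR / 2)  # 95 % of units within the tolerance
sigma_meas = U / K                                 # standard measurement uncertainty

def p_accept_given_x(x):
    """Probability that a unit with true error x is measured inside the tolerance."""
    return (stats.norm.cdf(TOL, loc=x, scale=sigma_meas)
            - stats.norm.cdf(-TOL, loc=x, scale=sigma_meas))

def dut_pdf(x):
    """Probability density of the device-under-test error distribution."""
    return stats.norm.pdf(x, scale=sigma_dut)

# False accept: out of tolerance AND reported as in-tolerance
pfa_low, _ = integrate.quad(lambda x: dut_pdf(x) * p_accept_given_x(x), -np.inf, -TOL)
pfa_high, _ = integrate.quad(lambda x: dut_pdf(x) * p_accept_given_x(x), TOL, np.inf)
pfa = pfa_low + pfa_high

# False reject: in tolerance AND reported as out of tolerance
pfr, _ = integrate.quad(lambda x: dut_pdf(x) * (1 - p_accept_given_x(x)), -TOL, TOL)

print(f"False accept risk: {pfa:.2%}")  # roughly 0.9 %
print(f"False reject risk: {pfr:.2%}")  # roughly 1.6 %
```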
In the tool's output, the computed False Accept risk is 0.92% and the False Reject risk is 1.59%. Would this most likely meet your customer's risk requirements? For this example, did you notice that the ratio of the tolerance to the expanded uncertainty is 4:1? If you glance back at post 88, in the Conditional Probability example the tolerance was about 6.66 times better than the uncertainty, yet for the given measurement the false accept risk was about 9%.
As you can see, in order to know the correct answer, we need to understand what questions to ask! #MetrologyMonday #FlukeMetrology
Fluke 17025 Quality Manager at Fluke Corporation
4 months ago: I often wonder if there is a data set with a different PDF depending on whether an instrument is calibrated in-situ (on-site) or has to be shipped to a lab.
Managing Director at Esenwein GmbH
4 months ago: Jeff Gust What if the specifications of the UUT were not statistically determined or at least confirmed by the manufacturer, but were instead 'set' by the marketing department or whoever else? With reputable manufacturers like FLUKE, I have a lot of confidence in them. But with some other manufacturers, my experience tells me that the specifications are more of a hopeful wish.
Metrologist | Experienced Scale Technician & MA1 Verification Officer | Expert in After-Sale Support and Technical Support | Writer | Technical Writer
5 months ago: Great discussion on False Accept and False Reject probabilities. It's crucial to not only focus on the measurement uncertainty but also factor in real-world conditions, like instrument abuse or shipping damage, that can impact in-tolerance probabilities. A conservative approach often aligns better with actual performance, especially when reliability is a concern. Tools like RiskGuard™ are great for quickly calculating risks, but understanding the conditions driving these probabilities is key. Thoughts on balancing conservatism with customer requirements?
"Came to Believe"
5 months ago: Jeff Gust These are still great posts! No matter how hard we struggle, we cannot remove the risk of a False Accept from our customers' plate and slide it back onto our plate! This fact cannot be altered by the presence of customers who don't have a prayer of answering this risk question for themselves. We still cannot take it from them.
Commissioning and Testing Engineer – ООО НТЦ Механотроника
5 months ago: Where do you get information on devices that the user gave you for calibration, if it is not Fluke?