ISO/IEC 17043:2023 - Outlier Techniques to Determine Outliers in Calibration
Santhosh Abdhulla
Quality Manager (Official). Documentation Specialist (Freelance)
Grubbs' Test
Grubbs' test, also known as the Grubbs' outlier test or Grubbs' extreme studentized deviate test, is a statistical test used to identify outliers in a univariate data set. The test is particularly useful when you suspect that there might be one or more extreme values (outliers) that are significantly different from the rest of the data.
The basic idea behind Grubbs' test is to compare the value of the potential outlier to the rest of the data points and determine if it is significantly different. The null hypothesis for the test is that there are no outliers in the data set. The alternative hypothesis is that there is at least one outlier.
The test statistic, G, is calculated as:
G = max(|Xi − X̄|) / s
Where:
- Xi is each individual measurement,
- X̄ is the sample mean, and
- s is the sample standard deviation of the dataset.
Once the test statistic is calculated, it is compared with a critical value, either taken from a Grubbs' table or computed from the t-distribution for the chosen significance level and sample size. If the calculated test statistic is greater than the critical value, the null hypothesis is rejected and the data point is considered an outlier.
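As an illustration, a minimal sketch of this calculation in Python might look as follows (assuming NumPy and SciPy are available; grubbs_test is a hypothetical helper written for this article, not a standard library function):

```python
import numpy as np
from scipy import stats

def grubbs_test(data, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier (illustrative sketch).

    Assumes the data are approximately normally distributed.
    Returns the test statistic, the critical value, and the verdict.
    """
    x = np.asarray(data, dtype=float)
    n = len(x)
    # Test statistic: largest absolute deviation from the mean, in units
    # of the sample standard deviation (ddof=1 gives the n - 1 divisor)
    G = np.max(np.abs(x - x.mean())) / x.std(ddof=1)

    # Critical value from the t-distribution (the usual closed form for
    # the two-sided test at significance level alpha)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

    return G, G_crit, G > G_crit
```

Computing the critical value from the t-distribution avoids the need for a printed Grubbs' table; the two should agree for any given sample size and significance level.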
Let's consider an example where Grubbs' test is used in the calibration of a multimeter.
In this scenario, you have a set of measurements taken by the multimeter, and you want to check if there are any outliers that might affect the calibration process.
Suppose you have the following resistance measurements (in ohms) from your multimeter:
R = {2.3, 2.4, 2.5, 2.4, 2.6, 100.0}
In this dataset, most values are around 2.4 ohms, but there's a potential outlier at 100.0 ohms. We'll use Grubbs' test to determine if this value is a significant outlier.
First compute the sample mean:

X̄ = (2.3 + 2.4 + 2.5 + 2.4 + 2.6 + 100.0) / 6 = 18.7

Then the sample standard deviation (n = 6):

s = √( Σ (Xi − X̄)² / (n − 1) ) ≈ 39.83

The most extreme value is 100.0 ohms, so the test statistic is:

G = |100.0 − 18.7| / 39.83 ≈ 2.04
If G > G_critical, you would reject the null hypothesis (no outliers) in favor of the alternative hypothesis (at least one outlier). Here G ≈ 2.04 exceeds the tabulated two-sided critical value of approximately 1.887 for n = 6 at the 5 % significance level, so 100.0 ohms is flagged as an outlier.
Since Grubbs' test indicates that 100.0 ohms is an outlier, you might want to investigate the measurement further or consider excluding it from the calibration process if it is determined to be erroneous. Keep in mind that the example values used here are for illustrative purposes; adapt them to your actual data and the specifics of your calibration process.
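Repeating the calculation as a short, self-contained Python sketch on this dataset (same NumPy/SciPy assumptions as above) gives:

```python
import numpy as np
from scipy import stats

R = [2.3, 2.4, 2.5, 2.4, 2.6, 100.0]   # resistance readings (ohms)
x = np.asarray(R)
n = len(x)

# Test statistic: largest deviation from the mean over the sample std
G = np.max(np.abs(x - x.mean())) / x.std(ddof=1)

# Two-sided critical value at alpha = 0.05
t = stats.t.ppf(1 - 0.05 / (2 * n), n - 2)
G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

print(f"G = {G:.3f}, critical = {G_crit:.3f}, outlier: {G > G_crit}")
# G = 2.041, critical = 1.887, outlier: True
```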
Dixon's Q Test
Dixon's Q Test is a statistical test used for identifying outliers in a univariate dataset. It is particularly useful when you suspect that there might be one or two values that are significantly different from the rest of the data. The test is simple but effective and can be applied in various fields such as chemistry, environmental science, and quality control.
Here are the key steps and concepts associated with Dixon's Q Test:
1. Arrange the measurements in ascending order.
2. Calculate the Q statistic for the suspect value: Q = gap / range, where the gap is the absolute difference between the suspect value and its nearest neighbour, and the range is the difference between the largest and smallest values in the dataset.
3. Compare Q with the tabulated critical value Q_critical for your sample size and chosen confidence level.
4. If Q > Q_critical, the suspect value is considered an outlier; otherwise it is retained.
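A minimal Python sketch of these steps might look like this; dixon_q_test is an illustrative helper written for this article, and the hard-coded 95 % critical values are the commonly tabulated ones, which you should verify against your own reference table:

```python
# Q-test critical values at the 95 % confidence level (n = 3..10),
# as commonly tabulated; check them against your own reference.
Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568,
             8: 0.526, 9: 0.493, 10: 0.466}

def dixon_q_test(data, crit_table=Q_CRIT_95):
    """Dixon's Q test on the most extreme value (sketch, n = 3..10 only)."""
    x = sorted(data)
    n = len(x)
    spread = x[-1] - x[0]        # full range of the data
    gap_low = x[1] - x[0]        # gap if the minimum is the suspect value
    gap_high = x[-1] - x[-2]     # gap if the maximum is the suspect value
    Q = max(gap_low, gap_high) / spread
    Q_crit = crit_table[n]
    return Q, Q_crit, Q > Q_crit
```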
It's important to note that Dixon's Q Test has limitations, and its effectiveness decreases as the sample size increases. Moreover, the test assumes that the data follows a normal distribution. If the dataset significantly deviates from normality, other outlier detection methods or transformations may be more appropriate.
As with any statistical test, the interpretation of results should be done cautiously, and it's advisable to use Dixon's Q Test in conjunction with other outlier detection methods for a comprehensive analysis of your data.
Let's consider an example of applying Dixon's Q Test in the calibration of a multimeter.
Suppose you have taken a set of resistance measurements with your multimeter, and you want to check if there are any potential outliers in the data.
Here is a sample dataset:
R = {2.1, 2.3, 2.4, 2.5, 2.4, 2.6, 2.8}
Let's say you suspect that the measurement 2.8 might be an outlier. We will apply Dixon's Q Test to test this suspicion.
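A short sketch of the test on this dataset, assuming the commonly tabulated 95 % critical value of 0.568 for n = 7:

```python
R = [2.1, 2.3, 2.4, 2.5, 2.4, 2.6, 2.8]   # resistance readings (ohms)
x = sorted(R)                              # -> [2.1, 2.3, 2.4, 2.4, 2.5, 2.6, 2.8]

# The suspect value is the maximum (2.8), so use the high-end gap
Q = (x[-1] - x[-2]) / (x[-1] - x[0])       # (2.8 - 2.6) / (2.8 - 2.1) ≈ 0.286

Q_crit = 0.568                             # tabulated value for n = 7, 95 % confidence
print(f"Q = {Q:.3f}, critical = {Q_crit}, outlier: {Q > Q_crit}")
# Q = 0.286, critical = 0.568, outlier: False
```

Since Q ≈ 0.286 is well below the critical value, the data do not support treating 2.8 ohms as an outlier at the 95 % confidence level, and the reading would normally be retained.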