Milestones in Measurements
Comparability
In the continuing discussion on DQOs, sampling theory and data interpretation pursuant to PARCC parameters, we have covered:
Hypothesis
Precision
Accuracy
Representativeness
We are now moving on to “Comparability.”
Comparability essentially asks “To what shall I compare my ‘results’?”
Consider the following applications of the previous parameters:
Imagine you are a State Trooper assigned to catch speeding motorists along a stretch of road, using a radar gun to sample the velocity of passing cars.
Now, to perform your task, you want to know if a particular car is speeding and, therefore, you will sample the velocity of each suspect vehicle. You must ensure your data are tenable in court in case the driver fights the ticket and so you will establish sampling DQOs and interpret the data according to PARCC parameters.
To begin, you establish a hypothesis: “The suspect vehicle is not speeding.” This is the default position that will be accepted unless there is confident information to the contrary; you are going to sample the velocity of cars with the radar to test this hypothesis.
For confidence, the measured speed must be “Precise,” so you measure the velocity over the course of several seconds to ensure there are no unusual variations and that you are observing a steady velocity; you are willing to accept deviations of no greater than 3% during the observation period (to account for acceleration).
For confidence, the measured velocity must be “Accurate,” so at the beginning of the shift you verified the calibration of the radar unit with certified tuning forks, permitting no greater than 1% deviation and no observable bias. In this way, you are assured your sample reading is accurate.
For confidence, the measured velocity must be “Representative” of the suspect vehicle, so you only select isolated vehicles such that no other vehicle is in the radar’s antenna field, and you only accept a reading from a vehicle whose travel is parallel to the antenna’s “line of sight.”
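As an illustration, here is a minimal sketch (in Python) of how the precision and accuracy acceptance criteria described above could be screened before a reading is used. The 3% and 1% criteria come from the text; the function names and the readings themselves are hypothetical.

```python
# Hypothetical pre-acceptance checks for a single radar observation.
# The 3% precision and 1% calibration criteria come from the text above;
# the function names and numerical readings are illustrative only.

def passes_precision(readings_mph, max_rel_spread=0.03):
    """Accept only if the spread of repeated readings is within 3% of their mean."""
    mean = sum(readings_mph) / len(readings_mph)
    return (max(readings_mph) - min(readings_mph)) / mean <= max_rel_spread

def passes_accuracy(measured_mph, tuning_fork_mph, max_rel_error=0.01):
    """Accept only if the calibration check against a certified tuning fork is within 1%."""
    return abs(measured_mph - tuning_fork_mph) / tuning_fork_mph <= max_rel_error

# Morning calibration check against a (hypothetical) 35 mph certified tuning fork
calibration_ok = passes_accuracy(measured_mph=35.2, tuning_fork_mph=35.0)

# Several seconds of readings on an isolated, head-on vehicle (hypothetical values)
readings = [62.8, 63.1, 63.0, 62.9, 63.2]
precision_ok = passes_precision(readings)

print(calibration_ok, precision_ok)  # True True -> the observation may be used
```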
You are all set! You take your calibrated, accurate, and precise police radar and measure the velocity of an oncoming car at 63 mph. But you realize that although you now have a “result,” you are none the wiser, and you cannot use your data to speak to the hypothesis.
This is because of a subtle mismatch between the way the question was asked and the way you provided an answer – remember, the original question was not "How fast is the suspect car moving?" The original question was "Is the suspect car speeding?"
There is a missing component needed to answer the question: comparability. If you don’t have a priori knowledge of the posted speed limit for that section of road, then knowing the vehicle's velocity with precision, accuracy, and representativeness is useless, since you have nothing against which to compare the vehicle's speed and answer the ultimate question, "Is the car speeding?"
The posted speed limit becomes the metric for comparison.
While sitting on the highway in a 65 mph speed zone, prior to measuring a vehicle's velocity, the police officer establishes an a priori decision criterion: if a vehicle's velocity is greater than 65 mph, then that vehicle is speeding and he will take specific, definable actions. If the measured speed of a vehicle is less than 65 mph, he will decide that no action is required; thus the Decision Threshold becomes 65 mph.
And so it is with other kinds of sampling – if one doesn’t have an a priori Decision Threshold against which one will compare the results, then one has only numbers – no usable data. It is imperative that the Decision Thresholds be established before any samples are collected.
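Expressed in code, the point is almost trivially simple – which is exactly the point: the threshold must exist, and be fixed, before the measurement can answer the question. This sketch uses the 65 mph limit and the 63 mph reading from the example above.

```python
# A result becomes a decision only when compared against an a priori threshold.
# The 65 mph limit and 63 mph reading are the example values from the text.

SPEED_LIMIT_MPH = 65.0   # Decision Threshold, fixed BEFORE any sampling occurs

def is_speeding(measured_mph, limit_mph=SPEED_LIMIT_MPH):
    """Reject the hypothesis 'the vehicle is not speeding' only if the reading exceeds the limit."""
    return measured_mph > limit_mph

print(is_speeding(63.0))  # False -> no action required
```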
In Industrial Hygiene terms, the “speed limits” are a priori decision thresholds, such as regulatory environmental thresholds, internal corporate policy limits, occupational exposure limits (PELs, TLVs®, RELs, MAKs, WEELs, etc.), or other predetermined thresholds.
A “result” is not necessarily greater than the decision threshold just because the reported value happens to exceed the numerical value of the decision threshold. Let’s look at some real-time air monitoring data for a factory worker. In the following graph, the blue line represents the real-time concentrations of cyclohexane integrated over seven-minute intervals. The green line represents the cumulative ppm-minute, 8-hour “Time Weighted Average” (TWA) exposure. The black horizontal line is the ACGIH TLV® – the decision threshold against which our full-shift sample will be compared.
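For readers who want to see how the green line is built, the sketch below shows one way an 8-hour TWA accumulates from short-interval readings. The 7-minute integration interval and the 100 ppm decision threshold come from the text; the concentration series itself is hypothetical and is not the data plotted in the graph.

```python
# Sketch: building a full-shift TWA from short-interval readings.
# The 7-minute interval and 100 ppm threshold come from the text;
# the concentration series below is hypothetical.

INTERVAL_MIN = 7.0
SHIFT_MIN = 480.0        # 8-hour shift
TLV_PPM = 100.0          # decision threshold

def eight_hour_twa(interval_ppm, interval_min=INTERVAL_MIN, shift_min=SHIFT_MIN):
    """Time-weighted average over the shift: sum(C_i * t_i) / 480 minutes."""
    ppm_minutes = sum(c * interval_min for c in interval_ppm)
    return ppm_minutes / shift_min

# 68 hypothetical 7-minute readings (~476 min), with short excursions above 100 ppm
readings = [60, 75, 90, 140, 180, 95, 70, 65] * 8 + [60, 70, 80, 90]

twa = eight_hour_twa(readings)
print(round(twa, 1), twa < TLV_PPM)  # a TWA below the threshold despite the excursions
```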
Clearly, there are short intervals when the concentration of cyclohexane exceeds the numerical value of the decision threshold – however, the full-shift sample (the final result) is only 82 ppm, so the concentration is below the decision threshold… or is it?
As we have already explored, a reported value carries two confidence limits, and the “true” value has a finite probability of lying between them: the lower confidence limit (LCL) and the upper confidence limit (UCL).
So when we calculate the confidence limits for the full-shift sample result, we see the following:
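The figure is not reproduced here, but the sketch below shows one common first-order way such limits are obtained – scaling the result by a total coefficient of variation (CV_T), in the spirit of NIOSH-style compliance statistics. The CV_T of 0.15 is an assumed, hypothetical value chosen only because it yields limits close to the 62 ppm and 102 ppm quoted below; it is not taken from the article's figure, and it is not necessarily the method used for that figure.

```python
# Sketch: confidence limits around a full-shift result, assuming the common
# first-order model LCL/UCL = X * (1 -/+ z * CV_T).
# CV_T = 0.15 is a hypothetical value, not taken from the article's figure.

Z_95 = 1.645        # one-sided 95% coverage factor
CV_TOTAL = 0.15     # assumed combined sampling + analytical coefficient of variation

def confidence_limits(result, cv_total=CV_TOTAL, z=Z_95):
    lcl = result * (1 - z * cv_total)
    ucl = result * (1 + z * cv_total)
    return lcl, ucl

lcl, ucl = confidence_limits(82.0)   # full-shift cyclohexane result, ppm
print(round(lcl), round(ucl))        # approximately 62 and 102 ppm
```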
Since we know each sample has uncertainty, we must consider that uncertainty in the decision-making process (unless the data are for regulations by the Colorado Department of Public Health and Environment, in which case you get to throw away all rational thought and toss out all science-based sampling considerations, and regulatory decisions are made on wind direction and personal feelings… sad, but serious).
So, using the uncertainty, let’s consider the following scenarios:
In two scenarios (A and B), the reported value is greater than the decision threshold, and in two scenarios (C and D), the reported value is below the decision threshold.
If one is an ethical regulator, one typically looks at the LCL, and if one is a consulting Industrial Hygienist, one looks at the UCL.
Therefore:
In Scenario A, the reported value and both the LCL and UCL are above the Decision Threshold, and the sample result confidently indicates the concentration was greater than the Decision Threshold.
In Scenario B, the reported value and the UCL are greater than the Decision Threshold, but the LCL is below the decision threshold. Therefore, the Industrial Hygienist will interpret the data to indicate the sample exceeded the Decision Threshold, but a regulator should interpret the data to indicate possible compliance.
In Scenario C, the reported value and the LCL are below the Decision Threshold, but the UCL is greater than the decision threshold. Therefore, the Industrial Hygienist will interpret the data to indicate probable non-compliance (i.e., the Decision Threshold was probably exceeded), but a regulator should interpret the data to indicate probable compliance.
In Scenario D, the reported value and both LCL and UCL are below the Decision Threshold. Therefore, the Industrial Hygienist and the regulator will interpret the data to confidently indicate the sample was below the Decision Threshold.
So, when we return to our cyclohexane result (82 ppm), the Industrial Hygienist should interpret the data to indicate a probable exposure that exceeds the Decision Threshold because the UCL (102 ppm) is greater than the Decision Threshold (100 ppm), and a regulator should interpret the data to indicate probable compliance because the reported value (82 ppm) and the LCL (62 ppm) are below the Decision Threshold.
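The four scenarios lend themselves to a small decision helper. The sketch below simply encodes the logic described above and applies it to the cyclohexane numbers from the text (82 ppm reported, 62 ppm LCL, 102 ppm UCL, 100 ppm threshold).

```python
# Classify a result against a decision threshold using its confidence limits,
# following the four scenarios described in the text.

def classify(reported, lcl, ucl, threshold):
    if lcl > threshold:
        return "A: above the Decision Threshold with confidence"
    if reported > threshold:
        return "B: IH reads an exceedance; regulator reads possible compliance"
    if ucl > threshold:
        return "C: IH reads probable non-compliance; regulator reads probable compliance"
    return "D: below the Decision Threshold with confidence"

# Full-shift cyclohexane sample from the text: 82 ppm, LCL 62 ppm, UCL 102 ppm, TLV 100 ppm
print(classify(82.0, 62.0, 102.0, 100.0))  # Scenario C
```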
How does this decision-making process apply when dealing with “certified mould inspectors” who claim they are comparing an indoor sample to an outdoor sample?
To answer that question, let’s go back to our policeman with the radar gun. Imagine the insane problem the officer would face if the posted speed limit changed from minute to minute, ranging anywhere from a posted limit of 5 mph one moment to 100 mph the next, and everything in between. The officer would be unable to know the posted speed limit at any one time! He would be trying to compare a confidently derived velocity against a chaotic, unpredictable, moving target – clearly impossible.
And yet this is exactly what “certified mould inspectors” do on a daily basis when they falsely claim they are comparing indoor to outdoor spore concentrations. The “certified microbial consultant” is attempting to compare one unpredictable, chaotic, dynamic variable (an indoor sample result) against another unpredictable, chaotic, moving target (an outdoor sample). Such a comparison cannot be done, and in my experience, without exception, every “certified mould inspector” who claims to have compared an indoor sample with an outdoor sample has never done any such thing – they have merely compared one utterly useless “lab result” of no confidence against another utterly useless “lab result” of no confidence. Sixty seconds later, the concentrations could easily have been reversed.
Let’s look at some actual data. In the table below, I have presented some spore-trap data collected from a control residence (no mould problem) in Boston, Massachusetts. Each of the contemporaneous samples was collected in an identical manner at the exact same time of day.
What we see are the expected huge fluctuations between indoor and outdoor samples. We also see that four of the six indoor samples were greater than the outdoor samples (which “certified mould testers” would interpret to indicate a problem).
These variations are not just an artifact of geography or of sampling on different days – consider the following data, all collected on the same date from a “control” house (no mould problem) in Conifer, Colorado:
Again, we see the expected large variations about the mean for both indoor and outdoor samples, and two of the six indoor samples were greater than the outdoor samples. In effect, the reported value described as “Scenario C” above (the dependent variable) is being compared against a “Scenario B” value (the independent variable)!
That is, when “certified mould inspectors” claim they are comparing an indoor sample (the dependent variable) to an outdoor sample (the independent variable), they are using a Decision Threshold (the independent variable) whose uncertainty may be greater than that of the sample (the dependent variable). In a nutshell, they can’t do it and they have never been able to do it; they are simply misleading the client, bamboozling their victim with fancy lab reports and apparently “scientific”-looking equipment, and ripping off the victim with useless “results” that cannot be interpreted.
In fact, the only people who are collecting mould samples and pretending to compare an indoor spore count against an outdoor spore count are people who entirely lack any legitimate knowledge of sampling theory, aerobiology, and statistics. In any event, such a comparison has never been considered acceptable practice and has never been based on science (I will address the source of the myth in a moment).
If we look at the two cases – houses with indoor mould problems, and control houses with no indoor mould problem – we see that mouldy houses very often have spore counts that are less than the outdoor air, and houses with no mould problems often have spore counts greater than the outdoor air. This will not come as a surprise to those who know something about aerobiology, but it will come as a complete shock to “certified mould inspectors”!
In the following graphic, the indoor spore counts are always expressed as the statistical MVUE (4) (usually based on six samples), and the outdoor spore counts are a mixture of MVUEs and (occasionally) single samples. In the first graphic, we are looking at the comparisons of indoor to outdoor spore counts for properties with no mould problems. We see that the indoor MVUE spore counts very often exceed the outdoor spore counts.
In the next graphic, the spore counts are expressed in the same way (indoor counts as MVUEs, usually based on six samples; outdoor counts as a mixture of MVUEs and occasional single samples), but here we are looking at comparisons of indoor to outdoor spore counts for properties with confirmed mould problems. We see that the indoor MVUE spore counts are very often much less than the outdoor spore counts.
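For readers who want to reproduce the MVUE values, the sketch below computes the minimum variance unbiased estimate of the arithmetic mean of lognormally distributed counts using the commonly cited Finney/Sichel correction series. The six spore counts shown are hypothetical; they are not the Boston or Conifer data sets.

```python
# Sketch: MVUE of the arithmetic mean of lognormally distributed counts,
#   MVUE = exp(y_bar) * psi_n(s^2 / 2)
# using Finney's correction series. The spore counts below are hypothetical.
import math
import statistics

def _psi(n, t, terms=25):
    """Finney's series: 1 + (n-1)t/n + (n-1)^3 t^2 / (n^2 (n+1) 2!) + ..."""
    total = 1.0
    for k in range(1, terms + 1):
        num = (n - 1) ** (2 * k - 1) * t ** k
        den = n ** k * math.factorial(k)
        for j in range(1, k):          # product (n+1)(n+3)...(n+2k-3)
            den *= n + 2 * j - 1
        total += num / den
    return total

def lognormal_mvue(counts):
    logs = [math.log(c) for c in counts]
    y_bar = statistics.mean(logs)
    s2 = statistics.variance(logs)     # sample variance of the log-transformed counts
    return math.exp(y_bar) * _psi(len(counts), s2 / 2.0)

spores_per_m3 = [240, 810, 430, 1500, 360, 620]   # hypothetical spore-trap counts
print(round(lognormal_mvue(spores_per_m3)))       # an estimate of the arithmetic mean
```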
It is possible that the myth regarding indoor v. outdoor comparisons started with notable, well respected researchers who alluded to indoor/outdoor generalities (1) and those generalities were then taken out of context and referenced inappropriately.
For example, in the 1998 edition of NIOSH’s Manual of Analytical Methods, QA/QC Chapter J, NIOSH (2) partially quoted a reference and stated:
In general, indoor microflora concentrations of a healthy work environment are lower than outdoor concentrations at the same location.(Macher & Burge 1995) If one or more genera are found indoors, in concentrations greater than outdoor concentrations, then the source of amplification must be found and remedied.
NIOSH then references the source as: “Macher JM, Chatigny MA, Burge HA [1995]. Sampling airborne microorganisms and aeroallergens. In: Cohen BS, Hering SV, eds. Air sampling instruments for evaluation of atmospheric contaminants, 8th ed. Cincinnati, OH: American Conference of Governmental Industrial Hygienists, Inc., pp. 589-617.”
However, if one goes to the original source (Macher & Burge, 1995), we see that the referenced authors did indeed make the first observation (the general comment about indoor v. outdoor concentrations), but they did not draw the conclusion that follows it – that was an unqualified misinterpretation by NIOSH (or at least a statement that has since been misinterpreted by many).
Placing the comments of the original cited authors back into context challenges the fundamental legitimacy of indoor/outdoor comparisons as they are currently performed, and is contrary to what was written on indoor/outdoor concentration issues elsewhere in 1995, in a volume edited by the same author (Burge): (3)
Indoor/outdoor relationships: Unless there is an indoor source for specific bioaerosols, concentrations indoors will generally be lower than outdoors. This effect is related to the reasons for occupying enclosures, which are designed to protect us from adverse weather and intrusion by vermin or other unwelcome (sometimes human) visitors. The outdoor aerosol penetrates interiors at rates that are dependent primarily on the nature of ventilation provided to the interior. Indoor/outdoor ratios of specific particle types (of outdoor origin) are highest (tending toward unity) for buildings with “natural” ventilation where windows and doors are opened to allow entry of outdoor air along with the entrained aerosol. As the interior space becomes more tightly sealed, the ratio becomes lower and lower.
Therefore, the indoor/outdoor ratio of airborne spores is primarily a function of the building systems, and the ratio will rise and fall with the normal ventilation and infiltration rates and with other factors unrelated to indoor mould growth.
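To make the point concrete, here is a sketch of the standard single-zone, steady-state mass-balance model for indoor particles of outdoor origin. The model form is well established in indoor air quality work; the penetration factor, deposition rate, and air-exchange rates used below are purely hypothetical illustrations.

```python
# Steady-state single-zone mass balance (illustrative parameter values only):
#   C_in = (P * ach * C_out + S_over_V) / (ach + k_dep)
# With no indoor source (S_over_V = 0), the indoor/outdoor ratio P*ach/(ach + k_dep)
# depends only on the building's ventilation and particle losses, not on C_out.

def indoor_concentration(c_out, ach, penetration=0.8, k_dep=1.0, s_over_v=0.0):
    """c_out in spores/m^3; ach and k_dep in 1/h; s_over_v in spores/(m^3*h)."""
    return (penetration * ach * c_out + s_over_v) / (ach + k_dep)

for ach in (0.25, 0.5, 2.0, 6.0):      # tight house -> windows wide open
    ratio = indoor_concentration(1000.0, ach) / 1000.0
    print(f"ACH = {ach:4.2f}: I/O ratio = {ratio:.2f}")   # ratio rises toward unity with ventilation
```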
Unfortunately, poorly trained mould consultants have turned a rationale into a tautology and have repeated the NIOSH quote so often (and so far out of context) that it has taken on a life of its own; it is even included in sloppy “standards” such as the IICRC S520. Such a practice has certainly become the hallmark of the certified “toxic mould” gangs. Common practice notwithstanding, the comparison remains without foundation.
Conclusion
It is all well and good to be diligent with practices that ensure Precision, Accuracy, and Representativeness, but as the above discussion shows, all is for naught if you exclude Comparability from your decision-making process.
References:
1) Burge HA. Bioaerosols in the Residential Environment. Chapter 21 in: Cox CS, Wathes CM, eds. Bioaerosols Handbook. 1995.
2) NIOSH is the US Department of Health and Human Services, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health.
3) Muilenberg ML. The Outdoor Aerosol. Chapter 9 in: Burge HA, ed. Bioaerosols. 1995.
4) Minimum variance unbiased estimate – an estimate of the arithmetic mean (“average”) of a lognormally distributed data set.