MEASURE FOR MEASURE
Understanding test accuracy and uncertainty ratios
by Christopher L. Grachanen
In metrological circles, there are many different statistics and figures of merit used to gauge the quality of measurement data. Of course, the well-accepted statistics of standard deviation and variance are routinely used to determine the variability of measurement data as well as to assign distributions to data to define the likelihood that these data fall within an expected interval or span.
In other words, they show the probability that measurement data fall within a range of symmetrical or asymmetrical values, normally given in terms of percentages. These statistics are used to gain insight into measurement data and into the ensemble of factors influencing them, including:
- Drift between measurement values.
- Instrumentation threshold-triggering inconsistencies.
- Measurement setup inconsistencies.
- Ambient environmental changes between measurements.
- Operator control inconsistencies between measurements.
- Operator reading interpretation inconsistencies between measurements.
- Instrumentation resolution rounding inconsistencies.
- Calibration correction errors.
- Instrumentation ranging errors.
- Operating voltage and load fluctuations.
These and other influences may contribute to measurement data uncertainty and limit the data's usefulness.
It is important to qualify measurement data's applicability for determining the real-life performance of a device being tested or calibrated. The following example helps clarify what I mean by qualifying measurement data.
Suppose you are tasked with determining the performance of an environmental chamber set to 20° Celsius. You take 10 measurements in a pristine, laboratory-controlled environment and determine the mean temperature is 20.6° Celsius with a standard deviation of 0.05 degrees, estimated at two sigma using a Student's t-distribution with nine degrees of freedom.
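The arithmetic behind these statistics can be sketched in a few lines of Python. The 10 readings below are hypothetical (not from the article), chosen only so the mean comes out at 20.6° Celsius; the coverage factor 2.262 is the Student's t value for 95% confidence at nine degrees of freedom.

```python
# Sketch of the chamber example: mean, sample standard deviation and a
# t-based interval for 10 temperature readings (hypothetical values).
from statistics import mean, stdev

readings = [20.55, 20.62, 20.58, 20.60, 20.65,
            20.57, 20.63, 20.61, 20.59, 20.60]

m = mean(readings)                 # sample mean
s = stdev(readings)                # sample standard deviation (n - 1 = 9 dof)
k = 2.262                          # Student's t coverage factor, 95%, 9 dof
interval = (m - k * s, m + k * s)  # expected span for a single reading

print(f"mean = {m:.2f} C, s = {s:.3f} C")
print(f"95% interval: {interval[0]:.2f} to {interval[1]:.2f} C")
```

Note that `stdev` computes the sample (n − 1) standard deviation, which is what the nine-degrees-of-freedom t-distribution assumes.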
This, at first glance, would seem to be a reasonable representation of the on-site performance you would expect from the environmental chamber. However, if the chamber's performance is influenced by line voltage disturbances so much that small percentage changes (typical of unfiltered, on-site AC line voltage) produce temperature offsets of a couple of tenths of a degree Celsius, the aforementioned measurement data would not truly represent the chamber's performance under real-life operating conditions.
In this example, measurement data should be qualified by noting the range of line voltage values at the time of the measurements, or by taking measurements while intentionally adjusting line voltage amplitudes to values representative of the disturbances the chamber will typically experience during on-site operation.
Once measurement data are qualified for a particular device, metrology practitioners will often compute two figures of merit based on the accuracy of the unit under test (UUT) relative to the accuracy or uncertainty of the measurement ensemble (ME), that is, the instrumentation and accessories used to derive the measurement data.
The first figure of merit is known as test accuracy ratio (TAR). TAR is the ratio of the accuracy tolerance of the UUT to the accuracy tolerance of the ME used to measure the UUT. TAR is computed as follows:
TAR = UUT tolerance / ME tolerance.
TAR provides a ballpark estimate of the amount of error that may be attributable to the ME when measuring a UUT. The assumption is that the larger the TAR, the less error is attributable to the ME and the more representative the measurement data are of the UUT's actual performance. Industry practice is to strive for at least a 4:1 TAR whenever possible.
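The TAR calculation can be sketched directly from the formula above. The tolerance values in the example are hypothetical, chosen to illustrate the common 4:1 industry target:

```python
def tar(uut_tolerance: float, me_tolerance: float) -> float:
    """Test accuracy ratio: UUT accuracy tolerance over ME accuracy tolerance."""
    return uut_tolerance / me_tolerance

# Hypothetical example: a UUT specified at +/-1.0 degree measured with an
# ME whose accuracy tolerance is +/-0.2 degrees yields a 5:1 TAR,
# comfortably above the common 4:1 industry target.
print(tar(1.0, 0.2))   # → 5.0
print(tar(1.0, 0.25))  # → 4.0, exactly at the 4:1 target
```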
The second figure of merit that metrology practitioners often compute is known as test uncertainty ratio (TUR). The classic definition of TUR is the ratio of the accuracy tolerance of UUT to the uncertainty of the ME used to measure the UUT. Classic TUR is computed using this equation:
Classic TUR = UUT tolerance / ME uncertainty.
ANSI/NCSL Z540.3-2006—Requirements for the calibration of measuring and test equipment1 provides the following more descriptive and explicit definition of TUR, helping to improve uniformity in its usage:
Z540.3 TUR = UUT tolerance span / (2 × ME expanded uncertainty).
That is, TUR is the ratio of the span of the UUT's accuracy tolerance to twice the 95% expanded uncertainty associated with the ME.2
So, simply stated, what is the difference between TURs and TARs? TURs take into account ME error contributors (uncertainties) that may not be included in an ME accuracy tolerance. As with TARs, a larger TUR implies the measurement data are probably more representative of the UUT's actual performance than with smaller TURs. Note that ME tolerance and ME expanded uncertainty are not the same (computing expanded uncertainty is a subject that extends beyond the scope of this article).
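The two TUR definitions can be sketched side by side. The figures here are hypothetical: a UUT with a symmetrical ±1.0-unit tolerance (a span of 2.0 units) and an ME with a 95% expanded uncertainty of 0.25 units. For a symmetrical tolerance, the two definitions agree, since the span is twice the one-sided tolerance:

```python
def classic_tur(uut_tolerance: float, me_uncertainty: float) -> float:
    """Classic TUR: UUT accuracy tolerance over ME uncertainty."""
    return uut_tolerance / me_uncertainty

def z540_3_tur(uut_tolerance_span: float, me_expanded_uncertainty_95: float) -> float:
    """Z540.3 TUR: UUT tolerance span over twice the 95% expanded uncertainty."""
    return uut_tolerance_span / (2 * me_expanded_uncertainty_95)

# Hypothetical UUT with a +/-1.0-unit tolerance (span of 2.0 units)
# measured by an ME with a 95% expanded uncertainty of 0.25 units:
print(classic_tur(1.0, 0.25))  # → 4.0
print(z540_3_tur(2.0, 0.25))   # → 4.0
```

The Z540.3 form removes ambiguity for asymmetrical tolerances, where the span is not simply twice a one-sided tolerance value.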
From this discussion, you can see that TARs and TURs provide metrology practitioners a ready means for estimating the possible magnitude of ME error influences on measurement data. So why compute TARs and TURs? The answer has to do with measurement risk and the liabilities associated with decisions based on measurement data. To reduce measurement risk, metrology practitioners strive to minimize ME error influences on measurement data, ideally to the point of being insignificant. This helps equip decision makers with the best possible information: measurement data upon which to base their decisions.
Reference and note
- American National Standards Institute and National Conference of Standards Laboratories, ANSI/NCSL Z540.3-2006—Requirements for the calibration of measuring and test equipment.
- Test uncertainty ratios are often reported in calibration reports for each measurement parameter evaluated, or are assumed not to dip below a specific ratio (normally 4:1 unless otherwise noted).
Christopher L. Grachanen is a master engineer and operations manager at Hewlett-Packard Co. in Houston. He earned an MBA from Regis University in Denver. Grachanen is a co-author of The Metrology Handbook (ASQ Quality Press, 2012), an ASQ fellow, an ASQ-certified calibration technician and the treasurer of the Measurement Quality Division.