MEASURE FOR MEASURE
Working With Fixed Value Measurement Standards
by Graeme C. Payne
Most instruments for measuring electrical quantities are calibrated by comparing their indication to a value provided by a measurement standard. Usually, the measurement standard can be adjusted to make the unit under test (UUT) indicate the nominal value, and the required input can be read from the standard.
In other cases, the measurement standard is considered fixed. In other words, it is not adjustable, yet it often changes over time. Two of many examples are voltage reference standards and resistance standards.
If a fixed measurement standard’s traceable value is very close to nominal (relative to the performance of the UUT), the difference does not affect the measurement result. However, there are many cases in which the measurement standard is far enough from the nominal that the measurement is affected by this offset, or bias.
Depending on the direction and magnitude of the bias, it can allow a type one error (false reject) or type two error (false accept) to be made when calibrating the UUT. A type one error can lead to an unnecessary and incorrect adjustment of the UUT. A type two error can result in an instrument’s being certified as performing within specification when it actually is not.
Consider the case of calibrating a common 6.5-digit digital multimeter (DMM) using the resistance ranges of a typical multifunction calibrator. The calibrator’s resistance ranges, measured in ohms, are labeled with nominal values, such as 10 Ω, 100 Ω, 1 kΩ, 10 kΩ. However, the actual calibrated value of each resistor varies over time, and the values will be different after each calibration of the multifunction calibrator itself.
If a calibration procedure is written using only the nominal value of the range, there is a significant risk of failing the UUT when it actually is reading correctly, or of passing it when it actually is reading incorrectly. For example, assume the DMM has the ranges and performance specifications shown in Table 1. The specification equation is ± (% reading + % range).
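As a sketch of how such a specification is applied, consider the following. The 0.008%/0.003% split between the reading and range terms is an assumption chosen so the total matches the ±0.0011 kΩ test limit used later in this example; it is not taken from Table 1.

```python
def dmm_tolerance(reading, pct_reading, pct_range, range_full_scale):
    """Tolerance, in the same units as the reading, from a
    +/-(% reading + % range) performance specification."""
    return reading * pct_reading / 100 + range_full_scale * pct_range / 100

# Hypothetical 10 kohm range spec: +/-(0.008% reading + 0.003% range)
tol_ohms = dmm_tolerance(10_000.0, 0.008, 0.003, 10_000.0)
print(tol_ohms)  # 1.1 ohms, i.e. limits of 10 kohm +/- 0.0011 kohm
```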
Continuing the example, assume the multifunction calibrator has the ranges and performance specifications shown in Table 2 and the listed values from its most recent calibration. The relative parts per million (ppm) is the performance specification's 12-month limit relative to the calibration standards used. Ucalibration ppm is the calibration uncertainty taken from the most recent calibration.
A few simple calculations show that if the calibration procedure is based on the nominal value of the calibrator instead of the actual value, there are substantial areas—nearly 60% of the available specification limits—in which the measurement result can be misclassified. Note that, for simplicity’s sake, details of measurement uncertainty are not discussed here.
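The arithmetic behind that figure is simple. Using the calibrator's certificate value of 10.000646 kΩ and the DMM's ±0.0011 kΩ (1.1 Ω) test limit from this example:

```python
nominal = 10_000.0     # ohms, the labeled value of the calibrator range
actual = 10_000.646    # ohms, the calibrator's certificate value
tolerance = 1.1        # ohms, the DMM's +/-0.0011 kohm test limit

bias = actual - nominal
# The bias consumes nearly 60% of the available specification band
print(round(bias / tolerance, 3))  # ~0.587
```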
In Figure 1 (p. 78), these areas are shaded. Four different situations are shown on the horizontal axis:
- Calibrator nominal is a nominal value of 10 kΩ, and the error bars show the limits of uncertainty for the calibrator (if the resistor is at the nominal value).
- DMM @ nominal represents a measurement of the nominal value with the error bars showing the UUT’s limits of uncertainty.
- Calibrator actual is the calibrator’s certificate value of 10.000646 kΩ, with the error bars showing the limits of uncertainty.
- DMM @ actual represents a measurement of the calibrator value with the error bars showing the UUT’s limits of uncertainty.
The shaded areas of Figure 1 indicate where there is a problem if the calibration procedure test limits are based on only the nominal value. If the DMM result is in the upper shaded area, it would be called out of specification when in fact it is still a good measurement—a type one error. If the DMM result is in the lower shaded area, it would be called within specification when in fact it is not—a type two error.
The person (or computer program) performing the calibration procedure must be able to determine the proper value to use as the measurement standard. He, she or it then must be able either to apply the predefined test limit interval (calculated from the nominal value) to the actual value of the standard or to calculate corrected test limits during the procedure based on the actual value of the standard.
The first way is simpler and almost always adequate. Either method relies on the ability to determine the value of the measurement standard. A person, of course, can read the value from the calibrator’s display or the calibration certificate.
If an automated procedure is used, the computer might use one of several methods. In one method, the resistance value could be read from a look-up table in the program’s data structure. Another method would be for the program to ask the calibrator what its value is—many calibrators are capable of reporting their current setting or output value.
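A minimal sketch of the parsing step in that second method follows. The "value, unit" reply format here is hypothetical; the actual query command and reply syntax vary by calibrator model, so the instrument's remote-programming manual is the authority.

```python
def parse_output_reply(reply):
    """Parse a calibrator's reply to an output-value query.

    Assumes a hypothetical 'value, unit' reply format; real
    instruments differ, so adapt this to the actual syntax."""
    value_str, unit = (field.strip() for field in reply.split(","))
    return float(value_str), unit

# A reply of this form would yield the actual (not nominal) resistance
value, unit = parse_output_reply("1.0000646E+04, OHM")
print(value, unit)  # 10000.646 OHM
```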
An example using the second method is:
- For the 10 kΩ range, the multimeter test limits are given as ±0.0011 kΩ at the nominal value. This is something typically determined by a metrology engineer when the calibration program is developed. It is not changed because a given automated procedure is normally specific to one model of one manufacturer.
- The program sets the calibrator to full scale output on the 10 kΩ range and then sends a query to find out what that value is.
- The calibrator reports its value as 10.000646 kΩ.
- The program calculates that the lower and upper test limits for the DMM are 9.99955 kΩ and 10.00175 kΩ.
- The measurement is made, stored and compared to these test limits.
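The limit calculation in these steps can be sketched as follows, using the values from the example:

```python
# Fixed test-limit half-width from the procedure, predefined by the
# metrology engineer at the nominal value (kohm)
half_width = 0.0011

# Actual value reported by the calibrator for its 10 kohm range (kohm)
standard_value = 10.000646

# Center the predefined interval on the actual value of the standard
lower_limit = standard_value - half_width
upper_limit = standard_value + half_width
print(round(lower_limit, 5), round(upper_limit, 5))  # 9.99955 10.00175
```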
For most purposes, this method is sufficient. The more rigorous method requires more information (the actual performance specification equation of the UUT) and more calculation by the calibration technician or computer program. The difference often is, for practical purposes, meaningless. For this example, the limits from the second method are wider by 0.13 mΩ, an insignificant (and invisible) amount when calibrating a meter with resolution of 10 mΩ.
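The comparison between the two methods can be sketched as below. The 0.008%/0.003% coefficient split is again an assumption (the article's figure of 0.13 mΩ reflects the actual Table 1 coefficients, which are not reproduced here); the point is that the widening is orders of magnitude below the meter's 10 mΩ resolution either way.

```python
pct_reading, pct_range = 0.008, 0.003  # assumed split totaling the 1.1 ohm limit
full_scale = 10_000.0                  # ohms
actual = 10_000.646                    # ohms, calibrator certificate value

# Simple method: fixed half-width evaluated at the nominal value
fixed_half_width = full_scale * (pct_reading + pct_range) / 100

# Rigorous method: spec equation evaluated at the actual value
rigorous_half_width = actual * pct_reading / 100 + full_scale * pct_range / 100

widening_mohm = (rigorous_half_width - fixed_half_width) * 1000
print(round(widening_mohm, 3))  # ~0.05 mohm with these assumed coefficients
```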
On the other hand, the effect of ignoring the bias of the measurement standard’s value is very significant. In this example, the width of both shaded bands combined is almost 60% of the DMM’s performance specification band. The DMM would be called out of specification if it measures only 20% high at this value. If the DMM is adjusted without taking the standard’s bias into account, the adjustment would make the DMM read low by 30% of its performance specification.
All of this shows that whenever a calibration is made using a fixed measurement standard that changes in value over time, the actual present value of the standard must be determined and accounted for. The effect of not doing so will significantly increase the risk of making incorrect adjustments and pass/fail decisions, either of which can lead directly to increased customer dissatisfaction.
GRAEME C. PAYNE is the chief scientist for IndySoft Inc. in Greenville, SC, and is president of GK Systems, a consulting firm specializing in measurement science. He is a co-author of The Metrology Handbook (ASQ Quality Press, 2004) and is the 2006-2007 chair of the ASQ Measurement Quality Division. Payne is a senior member of ASQ and a certified quality technician, calibration technician and quality engineer.