MEASURE FOR MEASURE
According to Specifications
The importance of getting your numbers right
by Dilip Shah
My previous column discussed averages and the computation of different standard deviations.1 On the heels of that, it makes sense to discuss basic measurement considerations and specifications.
I was working with a client that was failing its gage repeatability and reproducibility (R&R) study for a particular dimension on a part. The drawing specified the dimension to four decimal places, with a ±0.000X tolerance.
In any calibration or measurement scenario, you need to reference the specifications of the measurement standards used or of the unit under test (UUT). But published specifications can be difficult to interpret. If the specifications are produced by the equipment manufacturer’s marketing department, they may look similar to what is shown in the sidebar “‘Typical’ Specification.”
At 10 clucks +/- 2 mClucks, aka +/- 0.002 clucks (on a scale of 0-20 clucks and 0.01 cluck resolution).*
*Achieved only if the equipment is used with the left hand1 while standing on your right foot2 with your right eye closed3 and maintaining an environment of 23 degrees Fahrenheit (+/- 0.033 degrees Fahrenheit)4 using “hypertronic-wormhole” temperature control.5
1. If used with the right hand, results will vary. And we won’t tell you by how much because we only use left-handed technicians.
2. If standing on your left foot, you may get more tired, and we won’t guarantee performance.
3. If right or both eyes are closed, you’re on your own. Good luck.
4. Do not attempt to use at -5 degrees Celsius because we don’t sanction its use on the Celsius scale. And don’t even think about using the Rankine scale. We don’t even know what Rankine is.
5. This was achieved once and never repeated. Fine-tuning hammer used at times. We call it single measurement bliss.
From Happy Days Instrument Co., where precision = accuracy = resolution or whatever. Enjoy our master-crafted instruments, knocked out one at a time.
This tongue-in-cheek representation shows how challenging it is to calibrate equipment when you can’t replicate the conditions required for meeting the specifications or making the measurement.
For this column, I’m going to use the number convention per NIST Special Publication 811 (SP811). In the United States, the number 10 435 456.35 normally would be written as 10,435,456.35. The same number written in Europe would be 10.435.456,35—that’s decimal points replaced with commas and vice versa.
To avoid confusion, the international format uses a period as the decimal marker and a space to separate digits into groups of three. For this and other useful formatting information and unit conversions, NIST SP811 is a great resource. OK, back to the column.
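To illustrate that grouping convention, here is a small Python sketch (the helper name is my own, not from SP811) that formats a decimal-number string with three-digit grouping on either side of the decimal marker:

```python
def sp811_format(value: str) -> str:
    """Format a plain decimal-number string with SP811-style
    three-digit grouping: period as decimal marker, spaces as
    group separators. The value is handled as a string so no
    floating-point rounding creeps in."""
    sign = ""
    if value and value[0] in "+-":
        sign, value = value[0], value[1:]
    int_part, _, frac_part = value.partition(".")

    # Integer digits group in threes from the decimal marker leftward.
    groups = []
    while len(int_part) > 3:
        groups.append(int_part[-3:])
        int_part = int_part[:-3]
    groups.append(int_part)
    grouped_int = " ".join(reversed(groups))

    # Fractional digits group in threes from the decimal marker rightward.
    grouped_frac = " ".join(frac_part[i:i + 3]
                            for i in range(0, len(frac_part), 3))

    return sign + grouped_int + ("." + grouped_frac if grouped_frac else "")
```

For example, sp811_format("10.43545635") returns "10.435 456 35", the grouping used for the readings later in this column.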
Luckily, most reputable equipment manufacturers provide specifications that can be interpreted—some more easily than others—for calibration, measurement and compliance decisions. Some examples of specifications:
- ± (20 ppm of reading + 2 ppm): The specification at 10 units of measure is 10 x 20 ppm + 2 ppm = ± 0.000 202 units
- ± (20 ppm of reading + 2 ppm of range): The specification at 10 units of measure on a 0 - 50 scale is (10 x 20 ppm) + (50 x 2 ppm) = ± 0.000 300 units
- ±1% of reading: The specification at 10 units of measure is 1% x 10 = ± 0.1 units
- ±1% of range: The specification at 10 units of measure on a 0 - 50 scale is ± 1% x 50 = ±0.5 units
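Each of the four specification forms above reduces to a one-line calculation. A minimal Python sketch (the function names are my own) that reproduces the bullet examples:

```python
PPM = 1e-6  # one part per million as a fraction

def tol_ppm_reading_plus_floor(reading, ppm_of_reading, ppm_floor):
    """±(ppm of reading + ppm): floor term taken as absolute microunits."""
    return reading * ppm_of_reading * PPM + ppm_floor * PPM

def tol_ppm_reading_plus_range(reading, ppm_of_reading, full_range, ppm_of_range):
    """±(ppm of reading + ppm of range)."""
    return reading * ppm_of_reading * PPM + full_range * ppm_of_range * PPM

def tol_pct_of_reading(reading, percent):
    """±x% of reading."""
    return reading * percent / 100

def tol_pct_of_range(full_range, percent):
    """±x% of range."""
    return full_range * percent / 100
```

For instance, tol_ppm_reading_plus_range(10, 20, 50, 2) gives ±0.000 300 units, and tol_pct_of_range(50, 1) gives ±0.5 units, matching the examples above.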
After the interpretation of the UUT specification is completed, the next thing to do is choose the correct standard or instrument to calibrate the UUT or make the measurement.
For example, if the UUT specification on a scale of 0 to 10.000 0 was ± 0.01% of the reading and you were trying to verify the performance of the UUT at five units, the specification would be ± 0.01% x 5 = ± 0.000 5 units.
The first thing to note is that to verify the specification performance of the UUT, your standard would need a resolution of at least 0.000 01 units or finer. This is a basic 10:1 rule in which the standard should have at least one meaningful decimal place of higher resolution than the UUT being calibrated. If the standard has the same resolution as the UUT, you already stand to lose up to 25% of the precision (standard deviation) of the measurement.
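That one-extra-decimal check is easy to automate. A quick sketch (the helper names are my own) that counts the decimal places implied by an instrument's resolution and applies the rule:

```python
from decimal import Decimal

def decimal_places(resolution):
    """Decimal places implied by a resolution step,
    e.g. 0.0001 -> 4, 0.00001 -> 5."""
    return max(0, -Decimal(str(resolution)).as_tuple().exponent)

def standard_adequate(uut_resolution, std_resolution):
    """True if the standard resolves at least one more
    decimal place than the unit under test."""
    return decimal_places(std_resolution) >= decimal_places(uut_resolution) + 1
```

For the example above, standard_adequate(0.0001, 0.00001) is True, while a standard with the same 0.0001 resolution as the UUT fails the check.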
When the measurement is made for verification, any average or standard deviation calculated also should be reported to one more decimal place than the measurement itself. Truncating or rounding measurement results and then using them in later stages of decision making, such as compliance, measurement uncertainty and gage R&R studies, can have serious consequences.
Set up for mistakes
Most data calculations are performed using software or spreadsheets, in which the extra digits are retained internally and can be recalled or reformatted. But if the data are entered manually, they will be in their reduced, truncated or rounded form, and errors will compound at each stage of calculation.
In addition, too much trust is placed in the software and spreadsheets to deliver the “correct” decision or calculations. Remember: garbage in, garbage out.
Here’s an example that illustrates the importance of selecting the right resolution: If the source unit output is 10.436 5 units with a specification of ± 0.01% (0.001 043 65 units) of the output, the reading can be anywhere between 10.435 456 35 and 10.437 543 65 if measured with an instrument with 0.000 000 01 resolution (eight decimal places).
Using the rule of one extra digit of measurement would require you to have a measurement device with at least nine decimal places. If a lesser resolution device of four decimal places is used to measure this output value, the displayed output would be in the range of 10.435 5 to 10.437 5 units. The error in measurement would be 0.000 043 65 units, or 0.000 418% error.
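The arithmetic in this example can be checked directly. A short Python sketch that reproduces the numbers above:

```python
output = 10.4365
tolerance = output * 0.01 / 100          # ±0.01% of output = 0.001 043 65 units
low = output - tolerance                 # 10.435 456 35 units
high = output + tolerance                # 10.437 543 65 units

low_displayed = round(low, 4)            # 10.4355 on a four-decimal-place device
error = low_displayed - low              # 0.000 043 65 units
error_pct = 100 * error / low_displayed  # about 0.000 418%
```

Whether an error of that size matters depends entirely on the application, as the next paragraph illustrates.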
In this example, the measurement error appears small. But the application dictates the error’s magnitude. If the measurement was part of a calculation for determining an angular trajectory for a model rocket in a science fair project, it may be negligible. If the application of this measurement was for an angular trajectory in a mission to Mars, the error may be significant.
Going back to my client failing the gage R&R study, we determined the following:
- The technician made the measurement with a four-decimal-place device, thus violating the rule of measuring with at least one extra decimal than the part dimension.
- When the technician measured the part dimension with a five-decimal-place device, the gage R&R study passed with flying colors.
- To explain why the gage R&R study passed, we compared the technician’s five-decimal-place measurements with the same data rounded to four decimal places, and we compared the percentage difference in the standard deviation of the measurements. The resulting data are shown in Table 1.
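The effect the client saw can be reproduced with made-up numbers. The readings below are purely hypothetical (they are not the client’s Table 1 data); the point is how rounding five-decimal measurements to four decimals can inflate the computed standard deviation:

```python
import statistics

# Hypothetical five-decimal-place readings (illustrative only).
readings_5dp = [0.12503, 0.12496, 0.12508, 0.12499, 0.12504,
                0.12494, 0.12507, 0.12501, 0.12497, 0.12506]

# What a four-decimal-place device would display for the same parts.
readings_4dp = [round(r, 4) for r in readings_5dp]

s5 = statistics.stdev(readings_5dp)
s4 = statistics.stdev(readings_4dp)
pct_change = 100 * (s4 - s5) / s5  # rounding inflates the spread markedly here
```

With this particular data set the rounded readings show a standard deviation roughly 30% larger than the five-decimal readings, the same kind of inflation that caused the four-decimal gage R&R study to fail.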
So if you are aiming for the stars or a similar high-minded goal when it comes to your quality measurements, pay attention to the interpretation of the specifications and choose the instrument with appropriate resolution for your measurement application.
1. Dilip Shah, “On Average,” Quality Progress, May 2012, pp. 44–45.
Dilip Shah is president of E = mc3 Solutions in Medina, OH. He is chair of ASQ’s Measurement Quality Division and past chair of Akron-Canton Section 0810, and is co-author of the second edition of The Metrology Handbook (ASQ Quality Press, 2012). Shah is an ASQ-certified quality engineer and calibration technician, and a senior member of ASQ.