
MEASURE FOR MEASURE

Keep Your Resolution

Remember accuracy, precision when estimating uncertainty

by Dilip Shah

When estimating measurement uncertainty for a measurement parameter, it is necessary to take into consideration the error permitted by the accuracy specification, as well as the precision and the resolution. ISO Guide 99 defines these terms as follows:

Accuracy: Closeness of agreement between a measured quantity value and a true quantity value of a measurand.

Accuracy defines the permissible measurement error from the nominal value. Under normal circumstances, we take more than one measurement (five to 10 measurements are ideal) to verify an instrument’s accuracy by comparing the average of those repeated measurements to the nominal value.

Precision: Closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions.

Precision is defined by the repeatability of the instrument and is normally expressed as a standard deviation. This is also known as Type A data for measurement uncertainty analysis.

Resolution: Smallest change in a quantity being measured that causes a perceptible change in the corresponding indication.

Resolution can depend on, for example, noise (internal or external) or friction. It may also depend on the value of a quantity being measured. Resolution of a displaying device is the smallest difference between displayed indications that can be meaningfully distinguished.

The terms accuracy and precision are sometimes interchanged incorrectly in product specification literature and by the end user. The relationship between accuracy and precision is illustrated in Figure 1.

Figure 1

Many times, an instrument display has a higher display resolution (such as more decimal places) than its accuracy and precision specification. This higher resolution does not equate to higher accuracy or precision.

The measurements themselves are reported according to the full resolution of the instrument’s range function. It is possible the last one or two decimal places on the instrument display may be measuring noise.

The best way to analyze this is to conduct an experiment to verify accuracy and precision. This can be performed via a repeatability study.

Low and high

There are times when the measuring instrument has a lower resolution and the measurement generated by the standard has a higher resolution or no displayed resolution.

For example, a precision gage block (accuracy/precision specified in µm or µin, but with no displayed resolution) is measured with a micrometer of lesser resolution, or a 10-volt standard (accuracy/precision specified in µvolts, but with no displayed resolution) is measured with a multimeter that has 4.5-digit resolution.

In such a scenario, the accuracy and precision of the standard may not be discernible because of the limited resolution of the measuring instrument. The repeated measurements, and the accuracy and precision calculated from them, may look better than the actual performance of the instrument being used to make the measurements, leading us to the wrong conclusions.
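As a rough sketch of this effect, the following illustration (with invented readings and a hypothetical 10-volt example, not data from the article’s tables) simulates repeated readings of a stable 10-volt standard as displayed by instruments of decreasing resolution:

```python
# Hypothetical illustration (values invented, not from the article's tables):
# readings of a stable 10-volt standard as seen through instruments of
# decreasing display resolution.
import statistics

true_readings = [10.00012, 9.99987, 10.00005, 9.99991, 10.00021,
                 9.99978, 10.00009, 9.99995, 10.00016, 9.99984]

def quantize(value, resolution):
    """Round a reading to the nearest display increment of the instrument."""
    return round(value / resolution) * resolution

for resolution in (0.00001, 0.0001, 0.001):
    displayed = [quantize(v, resolution) for v in true_readings]
    print(f"resolution {resolution}: mean = {statistics.mean(displayed):.5f}, "
          f"sample std dev = {statistics.stdev(displayed):.6f}")
```

At the coarsest resolution, every reading collapses to 10.000 and the calculated standard deviation is zero, making the measurement appear more precise than the instrument actually is.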

It is best to illustrate the differences and relationship of these three terms with another example:

If we have a nominal measurement value of 10 and an instrument resolution of 0.001, it would be expressed as 10.000. If there is an accuracy specification of 0.01% associated with this instrument, any value measured should fall within 10 +/- 0.01%, or between 9.999 and 10.001, which the 0.001 instrument resolution can display.

But if this same instrument had an accuracy specification of 0.001% associated with it, any value measured should fall within 10 +/- 0.001%, or between 9.9999 and 10.0001. The instrument only has a resolution of 0.001, however, so you would read only 10.000 and would not be able to resolve the last digit required by the accuracy specification, as illustrated by Table 1.

Table 1
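The tolerance arithmetic behind this example can be sketched as follows, assuming the accuracy specification is expressed as a percentage of the nominal value (the code and variable names are illustrative, not from the column):

```python
# Sketch of the tolerance arithmetic above, assuming the accuracy
# specification is a percentage of the nominal value.
nominal = 10.0
resolution = 0.001   # smallest increment the instrument can display

for accuracy_pct in (0.01, 0.001):
    tolerance = nominal * accuracy_pct / 100        # permissible error
    lower, upper = nominal - tolerance, nominal + tolerance
    display_lower = round(lower / resolution) * resolution
    display_upper = round(upper / resolution) * resolution
    print(f"accuracy {accuracy_pct}%: limits {lower:.4f} to {upper:.4f}; "
          f"as displayed: {display_lower:.3f} to {display_upper:.3f}")
```

For the 0.01% specification, the displayed limits (9.999 and 10.001) still differ, but for the 0.001% specification, both limits round to 10.000, so the display cannot resolve the accuracy limits at all.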

To calculate the accuracy and precision of the instrument, we would need to take repeated measurements. Typically, in the measurement uncertainty analysis, 10 or more measurements are required. The average would verify the instrument’s accuracy. The sample standard deviation of these repeated measurements would define the measurement’s precision (repeatability—Type A data).
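In a spreadsheet, or in a few lines of code such as the sketch below (with made-up readings rather than the Table 2 data), the average and the sample standard deviation are straightforward to compute:

```python
# Minimal sketch: verify accuracy (average compared to nominal) and
# precision (sample standard deviation, Type A) from repeated readings.
# The readings are illustrative, not the data in Table 2.
import statistics

nominal = 10.000
readings = [10.000, 10.001, 9.999, 10.000, 10.001,
            10.000, 9.999, 10.000, 10.001, 10.000]

mean_value = statistics.mean(readings)
error_from_nominal = mean_value - nominal       # checks accuracy
repeatability = statistics.stdev(readings)      # Type A standard uncertainty

print(f"average = {mean_value:.4f} (error from nominal = {error_from_nominal:+.4f})")
print(f"repeatability (Type A) = {repeatability:.4f}")
```

Note that the results are reported to one more decimal place than the instrument’s 0.001 resolution, which is exactly the situation discussed next.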

Repeat yourself

Table 2 shows repeated measurements of the same nominal value, made with instruments of varying resolutions, to illustrate the example for a 0.001% accuracy specification. When the average and sample standard deviation are calculated, they are normally carried to an extra decimal place, even though the instrument may not have the required resolution.

Table 2

But calculations done with spreadsheets and calculators can give us many extra decimal places (resolution) that do not exist in the measuring instrument’s display. Thus, even though the calculated accuracy and precision look good, the more dominant contributor may be the instrument resolution, and it should be taken into consideration.

If you consider the data in Table 2 and use it in a measurement uncertainty budget estimation, it becomes apparent that decreasing instrument resolution is a bigger contributor to the measurement uncertainty, as shown in the percent contribution column of the measurement uncertainty budget in Table 3.

Table 3

In a measurement uncertainty analysis, you may need to consider the resolution of the source or standard that generates the measurement, as well as the resolution of the unit under test. Both resolutions should be considered and their contributions to the total measurement uncertainty analyzed.

If the contribution is negligible, it can be ignored, but you won’t know until all the analysis is done in the manner shown in Table 3, in which the percent contribution column provides the basis for decision making.
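A common way to set up such a budget is sketched below under assumed conventions (repeatability taken as the Type A standard uncertainty, each resolution treated as a Type B rectangular distribution with half the resolution divided by the square root of three, and percent contributions taken from the squared standard uncertainties). The numbers are examples, not the Table 3 values:

```python
# Hedged sketch of a simple uncertainty budget with a percent contribution
# column. The divisor convention (half-resolution / sqrt(3)) and all numbers
# are assumptions for illustration, not values taken from Table 3.
import math

repeatability = 0.0007        # sample standard deviation (Type A)
uut_resolution = 0.001        # unit-under-test display resolution
std_resolution = 0.0001       # resolution of the source/standard, if any

contributors = {
    "repeatability (Type A)": repeatability,
    "UUT resolution (Type B)": (uut_resolution / 2) / math.sqrt(3),
    "standard resolution (Type B)": (std_resolution / 2) / math.sqrt(3),
}

combined = math.sqrt(sum(u ** 2 for u in contributors.values()))
expanded = 2 * combined       # coverage factor k = 2 (approximately 95%)

for name, u in contributors.items():
    print(f"{name}: u = {u:.6f}  ({100 * u**2 / combined**2:.1f}% contribution)")
print(f"combined standard uncertainty = {combined:.6f}")
print(f"expanded uncertainty (k = 2) = {expanded:.6f}")
```

In a layout such as this, the percent contribution column shows at a glance whether a resolution term dominates the budget or is small enough to ignore.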

Multi-range instruments tend to have different display resolutions in each range, or sometimes within a range itself, depending on the quantity being measured. You may need to generate an uncertainty budget for each of those ranges if the resolution is a larger contributor.

The lesson you should take away from this column is that in the New Year, it’s important not to ignore your instrument resolutions.


Bibliography

  1. International Organization for Standardization, ISO/IEC Guide 99:2007—International vocabulary of metrology—Basic and general concepts and associated terms, 2007.
  2. Shah, Dilip, "Balanced Budget," Quality Progress, May 2009, pp. 54–55.
  3. Shah, Dilip, "In No Uncertain Terms," Quality Progress, January 2009, pp. 52–53.
  4. Shah, Dilip, "Standard Definition," Quality Progress, March 2009, pp. 52–53.
  5. Stein, Philip, "All You Ever Wanted to Know About Resolution," Quality Progress, July 2001, pp. 141–142.

Dilip Shah is president of E = mc3 Solutions in Medina, OH. He is a past chair of ASQ’s Measurement Quality Division and Akron–Canton Section 0810, and is co–author of The Metrology Handbook (ASQ Quality Press, 2004). Shah is an ASQ–certified quality engineer and calibration technician, and a senior member of ASQ.

