All You Ever Wanted To Know About Resolution
How to calculate measurement resolution's contribution to an uncertainty budget
by Philip Stein
The ISO Guide to the Expression of Uncertainty in Measurement and the many references based on it set forth a method for enumerating and quantifying the various influences on a measurement that contribute to variation of the reported value. The method is often referred to as an uncertainty budget. In an uncertainty budget, the influence quantities are listed and estimated by statistical or other means. These estimates are then combined and expanded to yield a single number, the expanded uncertainty.
While each measurement process has its own budget, there are certain common factors--inherited calibration uncertainty, resolution, and repeatability or reproducibility--that should appear in all budgets. These factors may be estimated and declared as negligible where appropriate.
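The arithmetic of combining a budget can be sketched briefly. Assuming the usual root-sum-of-squares combination of standard uncertainties and the common coverage factor k = 2 (the component values below are purely illustrative, not taken from any real budget):

```python
import math

# Hypothetical standard-uncertainty components of a budget, all in the
# same measurement unit: inherited calibration uncertainty, resolution,
# and repeatability. The numbers are illustrative only.
components = {"calibration": 0.010, "resolution": 0.0029, "repeatability": 0.005}

# Combined standard uncertainty: root sum of squares of the components.
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with the common coverage factor k = 2.
U = 2 * u_c
print(f"u_c = {u_c:.4f}, U = {U:.4f}")
```

Note that a component much smaller than the others (here, resolution) barely moves the combined figure, which is what "negligible" means in practice.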
What is resolution?
Most quantities subject to measure are inherently continuous, meaning any value within the limits of the measurement range is possible. Most measurements, however, are expressed in numbers, which are inherently limited to a finite number of digits. It is not possible, then, to express every possible value, only those values for which numbers are available. This is what's meant by "limits to resolution."
In many cases, pure quantities subject to measure are accompanied by variation. If we express these quantities using many digits (high resolution), some of those digits may express only the noise. This is sometimes called empty magnification. More digits won't help here: the noise is larger than any subtle changes in the measured quantity, so those changes cannot be sensed.
There used to be a rule of thumb that said the resolution you report should be limited to the same magnitude as the noise (that is, the same number of digits). Lately, extensive digital and statistical processing of data has produced many cases where exactly one additional digit of resolution can contribute to the understanding of the data and may, in some cases, contain some small additional information that would otherwise be lost.
If a measurement is expressed with sufficient resolution that any information carried by the rightmost digit is obscured by other sources of uncertainty, then resolution may be said not to contribute significantly to the overall uncertainty budget. Any influence caused by resolution may therefore be deemed negligible and omitted from the budget calculation, although you must include a note to this effect in the budget.
How does limited resolution come about?
In other cases, measurement resolution is limited by some property of the measurement process, such as digitizing electronics. This is when the limits to resolution do contribute to the observed uncertainty of the measurements. In some such cases, resolution may be the most important and largest contributor.
At this point we must make a further distinction. Some measurements with limited resolution are inherently discrete. A good example is any datum that is generated by a counting process. This is inherently different from the case we have been discussing, where the underlying quantity can take on any value, and the limitation of resolution is due to our attempts to assign it a number of limited length. In this column, I only want to discuss continuous quantities that we have digitized.
We now must make yet a further distinction. Was the assignment of a number to the quantity done by a person or by a machine? These two means of limiting resolution have very different statistical properties and must be treated in different ways.
Finally, the bottom line: Suppose I have a display process that is part of my measurement process, and it has a limited resolution of three digits. This means it can display any number between 000 and 999, and that's all. But say the analog quantity presented to the display is equal to 563.42875. The display can still show anything between 000 and 999, but if it is working as we would like, it will display either 563 or 564. Given the rest of the measurement process, unseen here, there may be no more meaning beyond three digits anyway, so this is not necessarily a flawed display method.
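The display step described above amounts to quantizing a continuous value to the nearest count. A minimal sketch, using the value from the example (the `display` function and its clamping behavior are my illustration, not a description of any particular instrument):

```python
def display(value, digits=3):
    """Quantize a continuous analog value to an integer display reading.

    An ideal display rounds to the nearest count, so the reading is
    never more than half a count from the true value.
    """
    reading = round(value)
    # Clamp to the 000-999 range a 3-digit display can actually show.
    return max(0, min(10 ** digits - 1, reading))

print(display(563.42875))  # an ideal display shows 563
```

A display that sometimes showed 564 for this input would still be within one count, which is the distinction the rest of this column turns on.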
What uncertainty is contributed by resolution?
We can see the entire job of limiting resolution depends on how well you can split the last digit. If it is done well, a reading of 563 strongly supports the contention that the actual value of the quantity subject to measure lies between 562.5 and 563.5. Further, we can say that when we look at a display that reads 563, we have no information whatsoever about where in the interval 562.5 < x < 563.5 the value actually lies.
This is a common situation in uncertainty analysis. The usual approach is to declare that the reading 563 lies at the center of a rectangular distribution of half-width 0.5. Further, the uncertainty contribution of such a distribution, when transformed to a standard deviation as required by the standard, comes out to 0.5 divided by the square root of 3.
Stating the uncertainty this way assumes the last digit was split perfectly or at least very well, and virtually all values greater than 563.5 will be assigned to 564, and all values less than 563.5 will be assigned to 563. People reading analog pointers or dials do this well. In this case, it makes good metrological sense to assign the uncertainty contribution to be half the interval divided by the square root of 3.
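In code, the rectangular-distribution rule above is one line. A sketch, assuming the well-split case just described:

```python
import math

def rectangular_std_uncertainty(half_width):
    """Standard uncertainty of a rectangular distribution of the given
    half-width: half_width / sqrt(3), per the GUM's Type B evaluation."""
    return half_width / math.sqrt(3)

# A well-split last digit: a reading of 563 means 562.5 < x < 563.5,
# a rectangular distribution of half-width 0.5 counts.
u_manual = rectangular_std_uncertainty(0.5)
print(f"u = {u_manual:.3f} counts")  # about 0.289 counts
```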
As it turns out, though, electronic digitizers are not always good at splitting the last digit. This fact is little known except by those who design these components, but it means uncertainty cannot be calculated the same way it is when people take the readings.
Digitizers--analog to digital converters--have a specification property known as differential nonlinearity. A converter with poor, or large, differential nonlinearity will still read incoming voltages correctly to within one whole last digit. This means the reported value of the input is never more than one digit away from the true value--the value that would be reported in the theoretical case where there is no error.
In fact, analog to digital converters can be even worse than this. They can be nonmonotonic, where increasing input shows a decreasing display value in certain limited ranges. Nonmonotonic converters have largely been eliminated by incremental improvements in design, but large values of differential nonlinearity can still be found in many converters. This is inconsequential in many applications, but when the data are analyzed statistically, as they are in uncertainty calculations, problems can and do occur.
Specifically, the last digit is not always split evenly by a converter, so, using our previous example, a display that reads 563 could represent an interval of 562.1 < x < 563.9. In many cases, when this degree of asymmetry occurs, the extra space is stolen from adjacent values, so a display reading 564 would likely represent 563.9 < x < 564.1, and so forth. Unfortunately, the extra space is not always stolen from adjacent values, but rather from values elsewhere in the total range, so correction for the problem cannot always be made in the same manner.
In virtually every instrument with a digital readout, a limited resolution answer is produced by an analog to digital converter. This means the uncertainty calculation must consider that the unknown quantity subject to measure lies between the displayed value -1 and the displayed value +1. It is then appropriate to state that the value lies at the center of a rectangular distribution of half-width 1. The uncertainty contribution of such a distribution, when transformed to a standard deviation as required by the standard, comes out to 1 divided by the square root of 3. As expected, that is twice as large as the uncertainty due to resolution when the display is converted to a number by manual means.
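The doubling can be checked directly. A sketch comparing the two rectangular distributions, in units of the least significant digit (LSD):

```python
import math

# Half-width of the rectangular distribution, in least significant digits:
#   0.5 when the last digit is split well (a person reading a dial),
#   1.0 for a digitizer whose differential nonlinearity is unknown.
u_split = 0.5 / math.sqrt(3)  # about 0.289 LSD
u_adc = 1.0 / math.sqrt(3)    # about 0.577 LSD
print(u_adc / u_split)        # the digitizer case is exactly twice as large
```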
What's a practitioner to do?
This discussion leads to a practical issue: When you are evaluating an uncertainty budget for a measurement, you must always specify some resolution component. If the resolution is small enough to be neglected, state this and move on. If the resolution is a major or even dominant part of the uncertainty, you have two options:
- Calculate the resolution as 1 least significant digit divided by the square root of 3. This is a good estimate of the likely maximum error due to resolution, including differential nonlinearity effects.
- Demonstrate the display process does a good job of splitting the last digit. If you can prove that, then the resolution might be as small as one-half least significant digit divided by the square root of 3.
In the second option, proof can take the form of a reference to the manufacturer's specs (if those specs can be believed--many manufacturers of instruments with digital displays are unaware of this problem), an analysis of the design of the digitizing system showing that it splits the last digit correctly, or an experiment.
However, a single experiment that slowly advances the analog input in steps smaller than the resolution is not enough to demonstrate that the last digit is split correctly everywhere. Depending on the details of the converter design, the test may have to be repeated at many values of the input.
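Why one ramp at one point isn't enough can be shown by simulation. The sketch below models a hypothetical 3-bit converter as a list of code-transition thresholds (the threshold values, the misplaced transition, and the `code_widths` ramp test are all my illustration, not a real device): a slow ramp reveals that one code's bin is far narrower than the rest, something a spot check near a well-behaved transition would never see.

```python
def adc(v, thresholds):
    """Convert analog value v to an output code.

    thresholds[k] is the input level at which the output changes from
    code k to code k + 1; uneven spacing between thresholds models
    differential nonlinearity.
    """
    code = 0
    for t in thresholds:
        if v >= t:
            code += 1
    return code

# Ideal 3-bit converter: transitions at 0.5, 1.5, ..., 6.5, so every
# interior code bin is exactly one unit wide -- the digit splits evenly.
ideal = [k + 0.5 for k in range(7)]

# Hypothetical converter with poor differential nonlinearity: one
# transition is badly misplaced, so code 3 spans only 2.5 to 2.6.
dnl = list(ideal)
dnl[3] = 2.6

def code_widths(thresholds, n=7000):
    """Ramp the input slowly through the full range in n small steps and
    count how many steps land in each code bin (an idealized ramp test)."""
    widths = {}
    for i in range(n):
        v = i * 7.0 / n
        c = adc(v, thresholds)
        widths[c] = widths.get(c, 0) + 1
    return widths

print(code_widths(ideal))
print(code_widths(dnl))  # code 3's bin is far narrower than the others
```

In the flawed converter, a ramp through the codes around 5 or 6 would look perfect; only a ramp through code 3 exposes the defect, which is why the test may need to be repeated across the range.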
Choosing the proper experiment depends strongly on the internal operation of the converter. If these internal technical details are not available to, or not understood by, the laboratory, an experiment is unlikely to demonstrate the degree of differential nonlinearity. The best approach is to take the resolution contribution as 1 least significant digit divided by the square root of 3 and not attempt to do better.
There are many more technical details about the internal design of digitizers and how the various approaches to analog to digital conversion differ in their susceptibility to differential nonlinearity. These are outside the scope of this article, but are easily found in the electronics engineering literature.
PHILIP STEIN is a metrology and quality consultant in private practice in Pennington, NJ. He holds a master's degree in measurement science from The George Washington University in Washington, DC, and is an ASQ Fellow.