MEASURE FOR MEASURE

Comparing Specifications Made Easy

by Graeme C. Payne

Manufacturers have many different ways to specify the performance of measuring instruments. It is often a struggle for us, the instrument users, to derive useful information from that multitude, compare instrument capabilities or even decide which calibration standard has the required capability.

Table 1 shows some of the formats commonly seen in electronic equipment. Variations of these are also found in other types of measuring instruments.

There is no standardized format for describing the performance specifications of an instrument, and instrument characteristics vary enough that creating one would be difficult. Yet some sort of method is needed to make various applications easier for metrology engineers and calibration technicians.

Ideally, the method also could be adapted easily to computer systems, eventually allowing computer applications to automatically do some of the work. Of course, when a computer application does something automatically, it’s because many people behind the scenes have put a lot of work, time and expense into creating software that drives it.

Convert to Real Values

One way to make things easier immediately is to get rid of all the differing formats as early in the process as possible and convert everything to real values in the units of measure being used.

Table 1: Some Specification Formats

Specifications are used to describe the expected performance limits of several substantially identical products.1 One way we use specifications is to set test limits for calibration procedures.

Conceptually, a basic performance specification consists of three parts:

  1. An output term that defines a proportion of the applied value.
  2. A scale term that defines a proportion of the full scale or the measuring range.
  3. A floor term that defines a residual value that is always present.

A complete basic specification will have the general form: 

Nominal value ± (output + scale + floor) 

Few specifications include all three terms; the most common combinations are output and scale, or output and floor. Older analog instruments often have only the scale term. Missing terms are evaluated as zero. Other information can be contained in a specification as well, but that won't be discussed here.
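To make the idea concrete, here is a minimal Python sketch of a three-term specification. This is my own illustration, not anything from a manufacturer's data sheet; the class, field names and full-scale value are assumptions:

    from dataclasses import dataclass

    @dataclass
    class Spec:
        """One range of one instrument; missing terms default to zero."""
        output: float = 0.0  # proportion of the applied value, as a fraction
        scale: float = 0.0   # proportion of full scale, as a fraction
        floor: float = 0.0   # residual value, in the unit of measure

        def limit(self, applied: float, full_scale: float) -> float:
            """Tolerance at a given applied value, in the unit of measure."""
            return applied * self.output + full_scale * self.scale + self.floor

    # A hypothetical 1 V DC range: 0.1% of reading + 0.001 V floor.
    meter = Spec(output=0.001, floor=0.001)
    print(meter.limit(applied=1.0, full_scale=1.2))  # 0.002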

As Table 1 shows, a lot of the information is expressed as percentages or other ratios. Ratios are difficult to use directly for comparisons. It is much easier to compare things, and to make your computer do more, if all values are expressed in terms of the applicable units of measure.

Table 2 lists some specifications and other information about several digital multimeters and an analog multimeter in the same way they would be shown in the manufacturer’s literature. These specifications are all for the range that measures one volt (V) DC for a 12-month calibration interval. You can see parts per million, percentages and numbers of digits.

Table 3 shows the same information restated as real numbers, along with some added columns, which can be ignored for now. The key is that the specification has been consistently restated in terms of the unit of measure. In this case, the unit is the volt, but it works equally well for amperes, ohms, kilograms, pascals, watts, kelvins, meters or any other measuring unit.

The first part of the restated specification (volts per volt in this example) is the quantity per unit of measure. You find this at the test point value by doing the math implied in the performance specification. A couple of examples, with an assumed input of one volt, are:

Meter B: (1.0 x 0.004%) = 0.00004 volts

Meter F: (1.0 x 0.1%) = 0.001 volts

The quantity per unit of measure is a multiplier and is a constant for a specific range on a specific model of measuring instrument. Multiply the desired (or indicated) value by that amount and you get the output term of the specification in the units of measure.
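In a spreadsheet or a short script, that step is a single multiplication. A minimal Python sketch using the two meters above (the percentages come from Table 2; the function name is my own):

    def output_term(applied: float, per_unit: float) -> float:
        """Output term in the unit of measure: applied value times the multiplier."""
        return applied * per_unit

    print(output_term(1.0, 0.004 / 100))  # meter B: 4e-05 volts
    print(output_term(1.0, 0.1 / 100))    # meter F: 0.001 volts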

The second part of the restated specification is the floor. Again, you find it at the test point by doing the math implied in the performance specification. A couple of examples using the same meters as before are:

Meter B: (1.2 x 0.0007%) = 0.000008 volts

Meter F: (0.001 x 1) = 0.001 volts

The floor is a constant for a specific range on a specific model of measuring instrument and is added to whatever is calculated in the previous step.
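Because manufacturers publish the floor in different formats, it helps to have a small conversion for each. A sketch covering the two formats used in these examples (the function names are mine; the values are from Table 2):

    def floor_from_percent_of_range(full_scale: float, percent: float) -> float:
        """Floor published as a percentage of range, in the unit of measure."""
        return full_scale * percent / 100

    def floor_from_digits(resolution: float, digits: int) -> float:
        """Floor published as a count of least significant digits."""
        return resolution * digits

    print(floor_from_percent_of_range(1.2, 0.0007))  # meter B: about 8.4e-06 volts
    print(floor_from_digits(0.001, 1))               # meter F: 0.001 volts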

After adding these two parts, the result is a single value that defines the instrument specification in terms of the unit of measure for that input value. Using meter F as an example, for an input of 3.800 V you have (3.8 x 0.001) + 0.001 = 0.0048, which rounds to 0.005 V, a value that can be used directly to compare against other meters or calibrators that have been evaluated in the same way.

This tells you the meter can be expected to display a value of 3.800 V ± 0.005 V when that input is applied (all values have been rounded to the display resolution). The same thing can be done for calibrators, as shown in Tables 4 and 5.
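Putting the two parts together gives the complete restated specification at any test point. A minimal sketch of the meter F calculation:

    def spec_limit(applied: float, per_unit: float, floor: float) -> float:
        """Complete restated specification at a test point, in the unit of measure."""
        return applied * per_unit + floor

    # Meter F at 3.800 V: (3.8 x 0.001) + 0.001 = 0.0048, about 0.005 V.
    print(round(spec_limit(3.8, 0.001, 0.001), 3))  # 0.005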

The extra columns in Tables 3 and 5 show how other information can be documented and used along with this. The temperature column is primarily for reference. The distribution column is a reminder of what distribution you are assuming when analyzing type B uncertainty components. The divisor column is a reminder of the divisor to use.

Notice in these examples the manual for meter B states its specifications are based on ± 4 sigma limits, and all three of the calibrators state a confidence interval of 99%, which implies a normal distribution and a divisor of about 2.58 (sometimes rounded up to 3).
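If you keep these reminders in a spreadsheet or program, a small lookup keeps the divisors consistent. This sketch is my own; the divisors follow common type B practice:

    import math

    # Assumed distribution -> divisor for a type B standard uncertainty.
    DIVISORS = {
        "rectangular": math.sqrt(3),  # uniform distribution
        "normal, 4 sigma": 4.0,       # e.g., meter B's stated limits
        "normal, 99%": 2.58,          # coverage factor for 99% confidence
    }

    # Spec limit divided by its divisor approximates a standard deviation.
    print(0.005 / DIVISORS["normal, 99%"])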

Using These Numbers to Calibrate

Now you have a nice matrix of numbers and reminders to look at. What else can you do with it? Table 6 shows just one example.

Table 2: Typical Multimeter Specifications

Table 6 lists all the meters to be calibrated in the rows and the available calibrators in the columns. The values in the matrix are calculated directly from the other spreadsheets using a specific applied value, one volt in this example. Each single specification value is divided by the appropriate divisor to obtain an approximation of the standard deviation. Then the standard deviation of the meter is divided by the standard deviation of the calibrator. The result is an approximation of the test uncertainty ratio (TUR) for that input.
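The calculation behind each cell of that matrix is short. A Python sketch with hypothetical specification values (the real ones would come from Tables 3 and 5):

    def approximate_tur(meter_spec: float, meter_divisor: float,
                        cal_spec: float, cal_divisor: float) -> float:
        """Approximate test uncertainty ratio from two restated specifications."""
        meter_sd = meter_spec / meter_divisor  # approximate standard deviation
        cal_sd = cal_spec / cal_divisor
        return meter_sd / cal_sd

    # Hypothetical: a 0.005 V meter spec (3 sigma) against a 0.0004 V
    # calibrator spec (99%, divisor 2.58) at an applied value of 1 V.
    print(approximate_tur(0.005, 3.0, 0.0004, 2.58))  # about 10.75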

Table 3: Multimeter Specifications Restated as Real Values

You can use the conditional formatting feature of the spreadsheet application to highlight cells with a value less than four, indicating a TUR less than 4:1. You can quickly see, for example, that meter D can be calibrated on this range using two of the calibrators, but calibrator III is not suitable for it.
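The same rule is easy to reproduce outside a spreadsheet. A sketch using a hypothetical row of TUR values:

    # Hypothetical TUR row for one meter against three calibrators.
    turs = {"calibrator I": 6.2, "calibrator II": 4.8, "calibrator III": 2.9}

    for name, tur in turs.items():
        flag = " <-- below 4:1, not suitable" if tur < 4 else ""
        print(f"{name}: {tur:.1f}{flag}")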

Table 4: Typical Calibrator Specifications

Keep in mind this is a simplified tool that gives reasonable approximations. There are other ways to do the calculations that might be more acceptable to a professional mathematician or statistician. For instance, some might say that standard deviations should be converted to variances before being divided and that the answer should be converted back to standard deviations.

Table 5: Calibrator Specifications Restated as Real Values

In a strict mathematical sense, that (and other potential objections) might be true. However, the computations are more complex and time-consuming to set up, and the differences are small enough that they usually will not make a practical difference with the measuring instruments and standards you are using.
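In fact, for the plain ratio used here the two routes agree exactly, because the square root of a ratio of variances is the ratio of the standard deviations; any practical differences arise only in more elaborate analyses that combine several uncertainty components. A quick check in Python, with hypothetical values:

    import math

    # Dividing variances and taking the square root gives the same ratio
    # as dividing standard deviations directly.
    meter_sd = 0.005 / 3.0
    cal_sd = 0.0004 / 2.58

    via_sd = meter_sd / cal_sd
    via_variance = math.sqrt(meter_sd**2 / cal_sd**2)
    print(via_sd, via_variance)  # identical, apart from floating point noise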

Table 6: Example of Test Uncertainty Ratio Estimate

These examples are done in a spreadsheet application, and that is probably suitable for a small to moderate inventory of workload models. Also, these examples show the quantity per unit and floor values in long form rather than scaling them to millivolts or microvolts.

The long form is largely personal preference but is also used for good reason: If all the values used in a calculation are scaled the same way, there is a much smaller chance of making calculation errors. In a spreadsheet or database, you can pull values from different places without having to worry about dividing that one by 1,000 or multiplying the other one by 1 million—and sometimes getting it wrong.

When using commercial or open source spreadsheet programs, you also must keep in mind the inherent limit of about 15 significant digits of accuracy and the problems caused by the binary representation of decimal floating point numbers. These calculations do not use statistical functions, but if they did, the process would be complicated by having to account for known errors in the statistical methods of Microsoft Excel, a topic for another time.
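The floating point behavior is easy to demonstrate. In Python, which uses the same underlying binary format as the spreadsheet programs:

    # Most decimal fractions have no exact binary representation.
    print(0.1 + 0.2)           # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)    # False

    from decimal import Decimal
    print(Decimal("0.1") + Decimal("0.2"))  # exactly 0.3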

As a final note, understand that ideas for this column come from situations I encounter in my work or situations other people encounter and tell me about. If you have suggestions for topics, or are interested in volunteering to write one or more columns, e-mail graeme.asq@gksystems.biz.


REFERENCE

  1. Jay Bucher, ed., The Metrology Handbook, ASQ Quality Press, 2004.

GRAEME C. PAYNE is president of GK Systems, a consulting firm specializing in measurement science. He contributed to The Metrology Handbook (ASQ Quality Press, 2004) and is the 2006-2007 chair of the ASQ Measurement Quality Division. Payne is a senior member of ASQ, a certified quality technician, calibration technician and quality engineer, and a member of the National Conference of Standards Laboratories International.

