MEASURE FOR MEASURE
Challenges of Instrument Innovations
by Graeme C. Payne
Humans are innovators. We are always inventing new materials, methods, devices or ideas. From time to time, certain innovations cause major, unforeseen shifts in theory and practice.
A few examples in my own lifetime and professional specialty include the inventions of the integrated circuit and general purpose microprocessor, and the discoveries of the Josephson effect and the quantum Hall effect.
A more recent innovation is the development of synthetic instruments. While this is certainly not as fundamental as the other examples, it does have important implications for quality management and measurement science.
Instruments Until Now
Until recently, measuring instruments—however complex they might appear—have been essentially single-purpose tools. Such an instrument is relatively easy to understand, use and calibrate. Measuring instruments might be built to perform multiple functions, as with a digital multimeter that can measure voltage, current, resistance and perhaps a few other parameters. But that digital multimeter cannot be used as, for example, a spectrum analyzer.
Multifunction instruments are only a little more complex to use and have a larger set of parameters to be calibrated, but they are still relatively simple. Many measuring instruments now contain microprocessors and are controlled by permanent software. That software does not add much to the calibration effort, because it is tested by implication during performance verification: if the instrument responds properly to the input, the software has to be correct.
Even in large automated test equipment (ATE) systems, in which a number of different instruments might be combined to perform a complex suite of tests under external computer control, each instrument is individually understandable and calibratable.
An ATE system as a whole is more complex because the operation of the system must be validated somehow after the individual instruments are calibrated.
To visualize the basic concepts, a measuring instrument can be viewed as a series of interconnected blocks. There is a signal conditioner that receives, scales and standardizes the input; an analog-to-digital converter; a user interface system and an operation control system.
Traditionally, all of these are physically in one unit, as shown in Figure 1. An instrument that produces an output is similar in concept, except that the signal flow is in the opposite direction. In both cases, the instrument manufacturer defines the hardware and software features and functions.
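To make the block structure concrete, here is a minimal sketch in Python. All of the names, scale factors and resolutions are hypothetical illustrations, not taken from any real instrument; the point is that in a traditional instrument the signal conditioner, analog-to-digital converter and control logic are fixed together in one unit:

```python
# Hypothetical sketch of the functional blocks of a traditional measuring
# instrument: all blocks are hard-wired into one unit, as in Figure 1.

def signal_conditioner(raw_volts, gain=0.1):
    """Scale and standardize the input signal (gain is illustrative)."""
    return raw_volts * gain

def adc(volts, full_scale=10.0, bits=12):
    """Quantize a conditioned voltage to an integer code."""
    code = round(volts / full_scale * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))   # clamp to the converter's range

def to_reading(code, full_scale=10.0, bits=12, gain=0.1):
    """Control logic: convert the ADC code back to an input-referred value."""
    return code / (2 ** bits - 1) * full_scale / gain

class TraditionalInstrument:
    """All blocks physically fixed in one unit; the manufacturer
    defines every function and the user cannot rearrange them."""
    def measure(self, raw_volts):
        return to_reading(adc(signal_conditioner(raw_volts)))

meter = TraditionalInstrument()
reading = meter.measure(42.0)   # close to 42.0, within quantization error
```

Calibrating such a device means exercising this whole fixed chain end to end, which is why a single performance verification can stand in for testing the embedded software.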
For the past 20 years or so, many ATE systems have used special instruments that plug into a system rack and are only controlled through a computer program. The principal difference between these and conventional instruments is that there are no controls on the instrument itself—only inputs or outputs. Because they are software controlled, they have often been called virtual instruments, even though there is a real device at the other end of the computer interface.
The Rise of Synthetic Instruments
In the last several years, the innovative idea of a synthetic instrument has emerged. It grew out of a Department of Defense initiative, started in the mid-1990s, to improve the performance and reduce the costs of ATE systems.
Synthetic instruments go beyond conventional instruments in an innovative way. Each component of a synthetic instrument—each functional block—is a separate device. A synthetic instrument is defined as a reconfigurable system of hardware and software elements, linked by standardized interfaces, to make measurements or generate signals. Figure 2 (p. 70) shows the same measuring function as Figure 1, but redrawn as a reconfigurable synthetic instrument.
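As a hedged illustration of that reconfigurability (the module names and parameters here are invented for the sketch), the same functional blocks can be written as independent pieces that control software assembles at run time. Changing one module, or the order in which modules are wired together, changes what "instrument" exists:

```python
# Hypothetical sketch: the functional blocks are now separate modules,
# linked by a standardized interface (here, plain callables), and the
# software defines the instrument by choosing how to chain them.

def conditioner(gain):
    """A signal-conditioning module with a configurable gain."""
    return lambda volts: volts * gain

def digitizer(full_scale, bits):
    """An ADC module that quantizes and returns an input-referred value."""
    def convert(volts):
        code = round(volts / full_scale * (2 ** bits - 1))
        return code / (2 ** bits - 1) * full_scale
    return convert

class SyntheticInstrument:
    """A measurement chain assembled from interchangeable modules."""
    def __init__(self, *stages):
        self.stages = stages          # the software-defined configuration
    def measure(self, signal):
        for stage in self.stages:
            signal = stage(signal)
        return signal

# The same modules, recombined by software, form different instruments:
voltmeter = SyntheticInstrument(conditioner(0.1),
                                digitizer(10.0, 12),
                                conditioner(10.0))
reading = voltmeter.measure(42.0)   # close to 42.0
```

Note that replacing the `digitizer` module in this sketch would change the behavior of every instrument built from it, which previews the configuration-control concerns discussed next.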
What does this imply for quality management and measurement science? Several things must be considered with synthetic instruments that would not be of concern in a more traditional hardware and software environment:
- In a system using synthetic instruments, an instrument is no longer a single, fixed assembly of hardware and (usually) software. Because modules and software are easily reconfigured, the "instrument" can actually become an abstraction that exists only while a particular configuration is in use.
It is possible—even probable—that modules being connected to form a synthetic instrument, as well as the software used to operate them, are made by different companies.
- Hardware configuration control of a synthetic instrument system is critical. Any change to the hardware has the potential to affect every measurement the system makes.
In an older system, changes in one instrument—a voltmeter, for example—would affect only the measurements made by that instrument. In a system with synthetic instruments, changing a module (an analog-to-digital converter, for example) could affect every type of measurement that uses that module.
- Software configuration control is even more critical. Any change to the software has the potential to affect every measurement the system makes.
In a traditional system, one software module typically controls the instruments required for one test. In a system with synthetic instruments, the software will configure and interconnect the hardware modules needed for each synthetic instrument, control the operation of each instrument, possibly control the unit under test and provide all of the user interfaces.
Some of the numeric processing for measurement functions might be performed by software. Any one software module has the potential to affect all uses of the test system.
- Each hardware module will need to be calibrated in terms of its own inputs, outputs and functionality. But calibration of each module does not ensure each synthetic instrument is calibrated.
Each synthetic instrument created by the changing configurations of hardware and software will also have to be calibrated. That is the only way to ensure the modules and software all work together as intended to produce a reliable and traceable measurement or signal.
- Parts of the software that are not fully exercised by the calibration of each synthetic instrument in the system will also have to be validated. For example, a digital multimeter might have internal statistical functions, such as the ability to report the mean and standard deviation of a user-defined number of measurements.
In a traditional instrument, those functions are in the permanent memory, so they only have to be validated once. In an ATE composed of synthetic instruments, those functions are in the control software and will have to be revalidated whenever any of the hardware or software is changed.
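The mean and standard deviation functions mentioned above take only a few lines of software, which is exactly why they are easy to relocate from firmware into control software. This sketch (illustrative only, not any vendor's actual code) shows the computation together with the kind of check a revalidation step might use: comparing the result against an independent implementation, here Python's standard library.

```python
# Hypothetical sketch of statistical functions that, in a synthetic
# instrument system, live in the control software rather than in a
# traditional instrument's permanent memory.
import math
import statistics

def mean_and_stdev(readings):
    """Report the mean and sample standard deviation of N measurements."""
    n = len(readings)
    mean = sum(readings) / n
    stdev = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, stdev

readings = [9.98, 10.02, 10.01, 9.99, 10.00]
m, s = mean_and_stdev(readings)

# A simple validation step: compare against an independent reference
# implementation. In a synthetic instrument system, a check like this
# would have to be repeated whenever the hardware or software changes.
assert math.isclose(m, statistics.mean(readings))
assert math.isclose(s, statistics.stdev(readings))
```

The computation itself never changes; what changes is where it runs, and therefore how often it must be revalidated.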
Synthetic instruments are just the most recent in a long line of innovations that affect the way we design measuring instruments, make measurements and control measuring devices. As an innovation like this is introduced, we need to make certain the whole measurement system is maintained in a state that ensures the correct things are being measured—with the correct instruments—and that all measurements are traceable to the International System of Units.
GRAEME C. PAYNE is president of GK Systems, a consulting firm specializing in measurement science. He is a contributor to The Metrology Handbook (ASQ Quality Press, 2004) and is past chair of the ASQ Measurement Quality Division. Payne is a senior member of ASQ, a certified quality technician, calibration technician and quality engineer.