Validating Calibration Processes And Software
Measurement assurance provides a sound foundation
by Philip Stein
As defined in the ISO/IEC 17025 international standard, validation is the "confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled." When a lab writes its own calibration procedures, both the procedures and software that can affect the results of measurements need to be validated.
The validation only has to be as extensive as necessary to meet the needs of the given application, so if a procedure is written for a 3½-digit multimeter, there's no need to verify the lab's ability to deal with part-per-million uncertainty.
Process Validation Requirements
Still, process validation can be quite a chore. It's important to check not only how well the method works in normal use, but also to explore the effects of the environment, such as temperature and humidity; variations in setup and other technician-controlled issues; and errors caused by defects in the instrument being calibrated. Normal statistical variation can add to the observed uncertainty, so repeatability and reproducibility should also be explored.
Some industries, especially regulated ones, have other requirements for process management that are also called validation. This can cause confusion and even conflict, since these rules often don't make sense in the calibration world. Some validation requirements even call for operating a method beyond the capabilities or ratings of the equipment involved--to make sure no harm will result even if something is destroyed. These requirements might be important when testing a consumer product that can be used in unpredictable circumstances, but the calibration laboratory is a benign place--the operators are trained, and the equipment was built with safeguards to prevent overloading.
More to the point, if an instrument were to be overloaded and damaged during calibration, it would give suspect or incorrect results and the process would fail. The process is self-checking, and the probability of passing a damaged unit and returning it to a client as satisfactory is extremely small.
Software for calibration must also be validated. This is a requirement of the standard and also of many industries. Software validation is a common practice for manufacturers of general-purpose programs such as word processors and spreadsheets. Formal validation is also mandatory for most military and other large government software projects.
Measurement and calibration equipment often has embedded software. These days, an instrument that appears to be a black box with a few front panel controls is likely to be completely controlled by one or more computers that are invisible to the user. Other instruments, often in the analytical field, are simple apparatuses intended to act as the physical or chemical eyes and ears for software running on a desktop computer.
In either case, the software that makes the measurement is subject to the same errors as any software and requires validation. This is often more important for software that drives instrumentation because the scientist or technician using the equipment is unlikely to notice problems unless the final answer is way off. Even then, although the instrument may be one candidate, there are often other more plausible reasons for the error, and the software may never be suspected.
Software validation is an interesting topic in itself. Early approaches to verifying the correct operation of computers stalled and ultimately failed in their attempt to test every operation performed for correctness. There are just too many combinations, and the same code will often operate differently when the data are different, so you have to perform every test with every possible number it could encounter--an impossible task.
Modern techniques do include testing all operations for some selected values of the data. Intelligent choices of numerical values for these test cases--those based on a knowledge of the internal workings of the computer--can often ferret out subtle problems. These test cases must then be run each time the software, the compiler that produced the software or the operating system on the computer is changed. Any of these variations can possibly jeopardize the validity of previous tests.
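As a concrete illustration of testing selected values, here is a minimal sketch in Python. The conversion routine and its ranges are hypothetical (a 16-bit, 10 V instrument invented for this example, not from the column); the point is that test values are chosen with knowledge of the internal workings--zero, full scale, mid scale and the edges where integer arithmetic or range checks are most likely to fail.

```python
# Hypothetical example: convert raw ADC counts to volts, then exercise the
# routine at boundary and special-case values rather than every possible input.

def counts_to_volts(counts, full_scale=10.0, resolution=16):
    """Convert a raw ADC reading to volts (hypothetical 16-bit, 10 V instrument)."""
    max_counts = 2 ** resolution - 1
    if not 0 <= counts <= max_counts:
        raise ValueError("reading out of range")
    return full_scale * counts / max_counts

# Test cases chosen from knowledge of the internals: zero, full scale and
# mid scale, where off-by-one and rounding errors are most likely to surface.
cases = {
    0: 0.0,
    65535: 10.0,
    32768: 10.0 * 32768 / 65535,
}

for counts, expected in cases.items():
    assert abs(counts_to_volts(counts) - expected) < 1e-9
```

A suite like this would need to be rerun whenever the software, compiler or operating system changes, as the column notes.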
There is a little-used alternative to validating an instrument and its software. I've written before in this column about measurement assurance--the use of check standards and simple statistical process control tools, such as control charts, to monitor the ongoing health and stability of an entire measurement or calibration process. If the chart is in control and remains in control, this is strong, supportable, quantitative evidence that the entire process is operating consistently. Other tools, such as calibration, can determine that the process is operating correctly, and the evidence of consistency demonstrates continuity of correct operation.
If the process being monitored this way includes a software component, and the chart remains in control, then this is good evidence the software is working. A change in the software that is not reflected in a change in the chart means it is very likely the entire measurement process has remained stable and thus is validated.
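The monitoring described above can be sketched in a few lines of Python. The check-standard readings and the 10 V reference are invented for illustration; the logic is just a classic individuals chart with 3-sigma limits computed from an in-control history.

```python
import statistics

# Hypothetical history of check-standard readings (a 10 V reference
# measured once per day while the process was known to be in control).
history = [9.9998, 10.0001, 9.9999, 10.0002, 10.0000, 9.9997, 10.0001,
           10.0000, 9.9999, 10.0002, 9.9998, 10.0001, 10.0000, 9.9999,
           10.0001, 9.9998, 10.0002, 10.0000, 9.9999, 10.0001]

center = statistics.mean(history)
sigma = statistics.stdev(history)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # classic 3-sigma limits

def in_control(reading):
    """Flag a new check-standard reading against the control limits."""
    return lcl <= reading <= ucl
```

A typical reading such as 10.0000 falls inside the limits; a gross shift such as 10.0100 falls outside and signals that something in the process--hardware, environment or software--has changed.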
There is a reasonable concern that, though measurement assurance can properly validate a process operating under routine conditions, it does not explore what happens when something goes wrong. A proper selection of values and properties for the check standards, however, can exercise the measurement process and its control chart over a reasonably expected range. Although this approach does not attempt to find every possible mode of failure, neither can any other validation method.
The best part of using measurement assurance is it confers many other benefits without additional effort or cost. Since it provides a continual monitor of process health, it will let you know quickly if there are problems, and corrective action can be taken before many defective calibration results are produced. The ongoing information about process variation (as might appear on a range chart) can be used to produce an uncertainty budget. Calibration intervals for the reference standards used in the process may also be set, and usually lengthened, based on the same data.
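To show how range-chart data can feed an uncertainty budget, here is a small sketch using the standard R-bar/d2 conversion from average subgroup range to process standard deviation (d2 = 1.128 for subgroups of size 2 is the usual control-chart constant). The daily range values are invented for illustration.

```python
# Hypothetical sketch: turn range-chart data into a repeatability estimate
# for an uncertainty budget, via the classic R-bar / d2 conversion.

D2_N2 = 1.128  # control-chart constant d2 for subgroups of size 2

def repeatability_sigma(ranges, d2=D2_N2):
    """Estimate the process standard deviation from subgroup ranges."""
    r_bar = sum(ranges) / len(ranges)
    return r_bar / d2

# Daily pairs of check-standard readings yield these absolute ranges (in mV).
ranges_mv = [0.12, 0.08, 0.15, 0.10, 0.09, 0.11, 0.13, 0.07]
sigma_mv = repeatability_sigma(ranges_mv)
```

The resulting sigma becomes one Type A component of the uncertainty budget, obtained from data the lab is already collecting for process control.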
Measurement assurance is a powerful tool with many advantages. It should be used as evidence for process validation as well.
PHILIP STEIN is a metrology and quality consultant in private practice in Pennington, NJ. He holds a master's degree in measurement science from the George Washington University in Washington, DC, and is an ASQ Fellow.