Quality Tools for Metrology
by Philip Stein
Why is there a metrology column in Quality Progress? Many quality professionals need and use measurements every day in their work--from business measurements in quality management to dimensional and electrical measurements in manufacturing, research or design. What you may not realize, though, is that quality practitioners already know a lot about metrology. They know things that most measurement professionals are only dimly aware of or don't know at all.
In the late 1960s, I worked with Churchill Eisenhart, Joe Cameron and Paul Pontius at the National Bureau of Standards (now NIST). Together, these men invented and refined a new approach to metrology--measurement assurance.
This is where you, the quality practitioner, come in. Measurement assurance was designed by analogy to quality assurance. It is a system of processes, procedures and tools used to assure that measurements (products) are made correctly and economically, with the least possibility of failure (scrap and rework).
Underlying the concept of measurement assurance is the principle that measurement is a process (something that is planned to happen the same way every time). The founders of measurement assurance viewed measurement as a manufacturing process whose output is numbers. Because it is a process, all of the familiar process management and control tools from the quality arena can be used. Cause and effect diagrams, Pareto charts, scatter plots, histograms and especially control charts are the basic tools of measurement assurance.
In my travels as a consultant and assessor, I find many people making measurements, but very few using quality tools. In some cases, the tools are used, but incorrectly. These situations cry out for a good ASQ certified quality engineer. Unfortunately, this need is rarely recognized.
Quality in support of measurement
The purpose of measurement is to assign a number to some quantity, object or process. One of the most important requirements of a measurement is repeatability. You want measurements to happen the same way every time so that, within limits, you get the same answer every time you measure the same thing.
Does this sound familiar? It's the same as the definition of process that was previously stated. This definition also contains the kernel of the basic principle of measurement assurance: Measuring the same thing repeatedly should give the same answer (except for ordinary statistical variability). Of course, this can't be done with destructive tests--tests that by the very act of measuring change the item being measured--but we can deal with this situation in another column.
Set aside an object like the ones you routinely measure. We'll call this object a check standard. It doesn't have to be a standard; in fact, we don't even have to know its value very well. The job of the check standard is to remain unchanging through many measurements over a period of time. In a perfect world, you would get exactly the same answer each time you measure this check standard, even after measuring it several times in a row. However, this isn't a perfect world. Many repeated measurements will yield a distribution of answers, centered around the best estimate of the actual value of the check standard. By evaluating this distribution, you can make an estimate of the measurement uncertainty.
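Estimating uncertainty from a check standard can be sketched in a few lines of Python. The readings below are hypothetical values invented for illustration; the sample standard deviation of the repeated readings serves as a first estimate of the standard uncertainty of a single measurement from this process.

```python
import statistics

# Hypothetical repeated measurements of one check standard
# (e.g., a gauge block, in millimeters). The readings scatter
# around the best estimate of the check standard's value.
readings = [10.002, 9.998, 10.001, 10.003, 9.997, 10.000, 10.002, 9.999]

best_estimate = statistics.mean(readings)

# The sample standard deviation estimates the standard uncertainty
# of a single measurement made by this process.
std_uncertainty = statistics.stdev(readings)

print(f"best estimate:        {best_estimate:.5f}")
print(f"standard uncertainty: {std_uncertainty:.5f}")
```

In practice you would accumulate these readings over days or weeks, so the distribution captures the long-term behavior of the whole measurement process, not just back-to-back repeatability.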
The ISO 9000 series of quality management system standards requires, as described in paragraph 4.11.1, that "Inspection, measuring and test equipment shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability." Measurement assurance is, therefore, a way to satisfy this requirement. If you're measuring widgets, and the test is nondestructive, use a widget as a check standard. In this way, any variation in the measurement process (not just in the test equipment) will be captured by the measurement distribution and, appropriately, reported as part of the total uncertainty.
This part of measurement assurance does an elegant, quality job of determining the uncertainty of your measurement process, but that's not all that's needed. You could make many repeat measurements, and all of them might be wrong. If your equipment is not properly adjusted, it could exhibit a consistent difference from what it should be reading (that is, a value not linked to national standards). This is often called a bias and is corrected by calibrating your test system. The calibration also has uncertainty that must be added to the uncertainty determined by your measurement assurance tests (the calibration uncertainty and the check standard uncertainty add root-sum-square, not arithmetically).
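The root-sum-square combination mentioned above can be shown in a short sketch; the two component values are hypothetical, chosen only to make the arithmetic easy to follow.

```python
import math

# Hypothetical uncertainty components, in the same units (say, micrometers):
u_calibration = 3.0   # from the calibration of the test system
u_check_std = 4.0     # from repeated check-standard measurements

# Independent components combine root-sum-square, not arithmetically:
u_total = math.sqrt(u_calibration**2 + u_check_std**2)

print(f"combined uncertainty: {u_total:.1f}")  # 5.0, not the arithmetic sum 7.0
```

The root-sum-square total is always smaller than the arithmetic sum, which is why treating independent components as if they simply added overstates the uncertainty.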
Calibration must be done periodically, at least according to most quality standards. Between calibrations, though, there is a risk that the measurement system will change, very possibly causing an increased rejection rate or unnecessary rework. Usually only the test instruments are calibrated, leaving the rest of the measurement system (such as the fixtures, operators and data analysis) unchecked. Measurement assurance examines all of those factors and more, making it very appropriate for evaluating ongoing performance of the measurement process.
Monitoring the measurement system
What quality tool is used to continually monitor a process to detect undesired changes? It's statistical process control, of course. Control charts are the most effective tools in the measurement assurance armamentarium, and by far the least used and least understood. While most quality personnel know how to use control charts and plenty of good training is available to bring metrologists up to speed, these two groups don't often talk to each other, and many opportunities are lost.
Monitoring your measurement system is easy. Take one or more check standards, and put them through the same measuring processes as your usual production work. If you're working in a laboratory, "production" refers to your regular measurement workload. If you're in a manufacturing environment, mix the check standard with the product and measure it the same as you would anything else. Just be sure to recover the check standard after the measurement and not ship it to a customer.
As previously described, we make a single measurement of the check standard each time, and the data are variables--numerical results of the process. In this situation, use an IXMR (individual X moving range) control chart to monitor the measurement.
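A minimal sketch of the IXMR limit calculation, assuming hypothetical check-standard readings and the standard individuals-chart factors 2.66 and 3.267:

```python
# Hypothetical single readings of a check standard, in time order:
x = [5.01, 4.99, 5.02, 5.00, 4.98, 5.01, 5.00, 5.03, 4.99, 5.01]

# Moving ranges: absolute differences between consecutive readings.
moving_ranges = [abs(b - a) for a, b in zip(x, x[1:])]

x_bar = sum(x) / len(x)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard IXMR factors: 2.66 for the X chart, 3.267 for the MR chart.
x_ucl = x_bar + 2.66 * mr_bar
x_lcl = x_bar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar  # the moving-range chart has no lower limit

out_of_control = [xi for xi in x if not (x_lcl <= xi <= x_ucl)]

print(f"X chart:  center {x_bar:.3f}, limits ({x_lcl:.3f}, {x_ucl:.3f})")
print(f"MR chart: center {mr_bar:.4f}, UCL {mr_ucl:.4f}")
print(f"points beyond limits: {out_of_control}")
```

Points beyond the limits (or any of the other standard run rules) would signal that the measurement process has changed and deserves investigation.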
If the measurement process has considerable noise or short-term variation, it might be best to use several check standards or to measure one check standard several times in a row. When you do this, plot the answers on an X-bar and R chart, treating the repeated measurements as one subgroup. Analyze the charts just as you would any other control chart, using all the rules (not just "one point beyond three sigma") to detect signals. An out-of-control condition is a message that something has changed and an investigation should be conducted--possibly followed by root cause analysis and corrective action. All of this should be extremely familiar from applications in other areas of quality.
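The X-bar and R limits can be sketched the same way. The subgroups below are hypothetical repeated measurements of a check standard, and A2, D3 and D4 are the standard chart factors for subgroups of four.

```python
# Each inner list is one subgroup: four back-to-back readings of the
# check standard taken at one monitoring session (hypothetical data).
subgroups = [
    [5.01, 5.00, 5.02, 4.99],
    [5.00, 5.03, 5.01, 5.00],
    [4.98, 5.00, 5.01, 4.99],
    [5.02, 5.01, 5.00, 5.02],
]

# Standard control chart factors for subgroup size n = 4.
A2, D3, D4 = 0.729, 0.0, 2.282

means = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
grand_mean = sum(means) / len(means)
r_bar = sum(ranges) / len(ranges)

xbar_ucl = grand_mean + A2 * r_bar
xbar_lcl = grand_mean - A2 * r_bar
r_ucl = D4 * r_bar
r_lcl = D3 * r_bar

print(f"X-bar chart: center {grand_mean:.4f}, limits ({xbar_lcl:.4f}, {xbar_ucl:.4f})")
print(f"R chart:     center {r_bar:.4f}, limits ({r_lcl:.4f}, {r_ucl:.4f})")
```

The R chart monitors the short-term repeatability of the measurement; the X-bar chart monitors drift of its level between sessions.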
In many cases, measurements remain stable for months or years. The control chart will be extremely boring--or will it?
Suppose we're producing a subassembly for use later in a process. The measurement that we're charting is an important part of our outgoing inspection. One afternoon, an obviously agitated foreman from the final assembly room comes into our area. He says that the lot of material we sent him that morning is full of defective material--he can't process it at all. When he looks at our inspection results and finds that we detected no problems, his natural reaction is to suspect that our inspections were at fault--our inspections didn't find the obviously bad stuff.
We ask him to accompany us to the inspection station where the control chart is posted. Once there, we see that the measurement process at our outgoing test has been under control for weeks. The chart is completely without any interesting signals.
Is that boring? In this case, it's crucial. We'll have to look elsewhere for the foreman's problem. In fact, anywhere control charts are in use, the same principle holds true. While the chart will in fact signal changes in conditions, one of its most powerful properties is that it can tell us, with excellent quantitative support, that conditions have not changed.
Finally, measurement assurance can help us with another thorny problem: How often should we calibrate? For decades, metrologists and measurement engineers have debated how to address this problem. It's basically a risk management effort, a trade-off of the cost of frequent calibrations against the potentially greater cost of continuing to measure when the answers are wrong. Once again, measurement assurance can help.
If it's nearing time for calibration, look at the control chart. The chart will show any drift of the measurement process, and if it's not drifting, calibration may not be required. In fact, the only flaw is that if the check standard drifts in one direction and the measuring equipment drifts in the opposite direction by about the same amount, the chart will appear constant even though there is a problem.
Are such complementary drifts likely? Not very, but they're not impossible either. If the control chart is stable, it's best to calibrate once in a while anyway, but certainly at a lower frequency than would be necessary without measurement assurance. If calibrations are particularly difficult and expensive, and you would like to avoid them, use more than one check standard. If you do so, everything would have to drift in just the right way to hide a problem, and the chances of that are slim indeed.
PHILIP STEIN is a metrology and quality consultant in private practice in Pennington, NJ. He holds a master's degree in measurement science from The George Washington University in Washington, DC, and is an ASQ Fellow. For more information, go to www.measurement.com.