Measuring for Keeps

Measurement uncertainty can be determined, even in destructive testing

by Philip Stein

The fundamental principle of measurements, one I've discussed many times in this column, is that measurement uncertainty can be determined in most situations. If the process of measurement does not affect the item being measured, repeated measurements of a single item will yield statistical information about the variability of the measurement process.

But what if measurement does change the item being measured? Is there anything you can do to characterize your measurement uncertainty?

Many tests and measurements do not leave the measured item, the unit under test (UUT), unchanged. The UUT is often a single-use item, such as an automotive air bag inflator. These inflators are tiny explosive devices that rapidly emit a large volume of gas (enough to inflate an air bag in a few milliseconds). Because firing an inflator consumes it, these devices cannot be 100% tested.

Often, a measurement is made on a test object deliberately manufactured for the purpose of being destroyed in the test process. For example, a metal or plastic coupon may be stretched during testing to determine the force required to break it (or sometimes its elongation). In short, these are destructive tests.

Some measurements do not destroy the specimen, but still do change it. When you are testing the curing properties of rubber with a rheometer, the sample is cured and can't be reused for further tests or inclusion in a product.

Testing makes it better

My favorite destructive test actually makes the UUT better while measuring it. An integrated high voltage transformer is a television component that generates the 20,000 or so volts needed to properly light up the picture tube. It consists of a series of magnetic windings, a ferrite core and some diodes--all potted in an insulating compound. One of the most important characteristics to measure in such a transformer is its breakdown voltage. A voltage higher than the rated value is applied to stress the unit and reveal marginal ones.

When the assembly is potted, residual contamination on the wires and other parts can act as a bit of a conductor and, during a breakdown test, can contribute to failure or even be the most important failure mechanism. During a test, though, the application of high voltage can and does make chemical or physical changes in the contamination and most of the time reduces its deleterious effects. In other words, testing makes the transformer better.

At the same time, other failure mechanisms and physical or chemical processes are accelerated by the breakdown test. After a few repeated shots, the component reverses its trend toward health and gets worse again. After about a dozen tests, further stressing drives the component toward certain death.

The measurement issue

When you are testing product in manufacturing, pass-fail decisions and, ultimately, the disposition of each item or lot of product depend on the results of a measurement. If measurement uncertainty is small compared to product variability, it can safely be ignored. Most of the time, the variability of measurement is ignored even if it is important.

Many quality practitioners are only vaguely aware of the entire issue, but ISO 9001:1994 does require an organization to deal with it:

Inspection, measuring and test equipment shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability.1

However, this is rarely done, and when it is done, it is rarely done well.

In many cases, measurement uncertainty is not small compared to product variability. This will inevitably lead to some incorrect inspection decisions, both of Type I (reject acceptable product) and Type II (accept defective product).

If you don't want to ship bad stuff, which will increase your warranty costs and annoy your customers, what can you do?

The best thing to do is to reduce measurement error and eliminate the problem. Sometimes this works fine; other times it's too hard to do, and you're stuck. If you can't reduce error, you can use "guardbanding." This technique narrows the effective specification by the amount of the likely uncertainty (perhaps 2 sigma of the measurement process, not of the product). Any product that then measures acceptable is roughly 95% likely to truly be acceptable, even in the presence of measurement variation.
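As a sketch with hypothetical numbers (none of these values come from the column), the arithmetic looks like this: estimate the measurement standard deviation from repeated readings of one unchanging unit, then pull each specification limit inward by about 2 sigma of the measurement process.

```python
import statistics

# Repeated readings of a single unchanging unit (hypothetical data).
# Because the unit doesn't change, the spread is pure measurement variation.
readings = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00, 10.04, 9.96]
sigma_meas = statistics.stdev(readings)  # sample standard deviation

# Product specification limits (hypothetical).
lsl, usl = 9.0, 11.0

# Guardband: narrow the effective spec by ~2 sigma of the MEASUREMENT
# process -- not of the product.
guard = 2 * sigma_meas
accept_low, accept_high = lsl + guard, usl - guard

def accept(measured_value):
    """Pass only if the reading falls inside the guardbanded limits."""
    return accept_low <= measured_value <= accept_high
```

A reading just inside the original spec but inside the guardband, such as 9.01 here, is rejected: it is too likely to belong to a truly out-of-spec unit.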

This is a fine and effective technique, but it requires knowledge of the uncertainty. After all, you adjusted the acceptance limits by an amount related to the measurement error, so you must know what the error actually is. (To find the error, measure an unchanging unit many times.) How, then, do you approach this problem for destructive testing where you can't easily measure the error?

Uncertainty of destructive tests

Because you can't make repeated measurements on a single sample, do the next best thing: Make repeated measurements on a group of similar samples. Start by identifying a time when production is running smoothly and without interruption. Evaluate the resulting product according to your knowledge of its operation, and collect and save a large group of samples when everything is going well. You can assume this group is smoothly distributed about some (unknown) mean and members of the group are as similar as is reasonable to expect. This group then stands in for the unchanging unit you would otherwise use to test repeatability.

When you make repeated destructive measurements of items from this group, the spread of the resulting values will be due to a combination of product variation and measurement variation. These variations are confounded--without other evidence, you can't tell how much variation came from which source. However, you do know the total variation and can answer some of the same questions as you would with nondestructive testing.
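In variance terms (assuming product and measurement variation are independent), the confounding can be written as var_total = var_product + var_measurement. You can't split the observed total into its two pieces without other evidence, but it does bound the measurement contribution from above, as this sketch with hypothetical data shows:

```python
import statistics

# Destructive test results from a group of similar samples collected while
# the process was running well (hypothetical data).
results = [52.1, 50.8, 51.5, 49.9, 51.2, 50.4, 52.0, 50.6, 51.1, 50.9]

# The observed variance confounds the two sources:
#   var_total = var_product + var_measurement   (assuming independence)
var_total = statistics.variance(results)

# We can't separate the terms, but since var_product >= 0, the total
# gives an upper bound on the measurement standard deviation.
sigma_meas_upper_bound = var_total ** 0.5
```

If even this upper bound is small compared with the specification width, the measurement is adequate no matter how the variation divides between the two sources.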

Is the overall variation observed small enough to satisfy your customers? Does the variation meet your specifications? Can the information coming from this measurement be used to stabilize the manufacturing process? If the answer is yes, then the measurement is useful in its current state even if its variability is larger than it would be if there were unchanging samples.

If the answer is no, the measurement must be improved before it can be useful. Many times, this discovery is just the beginning of a journey. It's possible some other property of the unit being tested could provide necessary information without destruction. Perhaps more or different data could be collected during a destructive test. Improvement can come from small things as well as breakthrough changes. Sometimes a slightly different measurement technique can reduce variability enough to make the difference and enable a measurement to be useful when it was not before.

Once the measurement has been deemed acceptable, further items from the original large group are periodically used as a check standard. For nondestructive tests, repeated measurements of a check standard are placed on a control chart to monitor the continued stability of the measurement process.

It's exactly the same here. The control chart will show whether the measurement process has remained stable because the units tested are members of the same original group. The product variability error is still in the mix, of course, so the charts may not be quite as sensitive, but the principle remains the same. Measurement monitoring in this way can be very effective.
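One common way to chart such check-standard results is an individuals (X) chart with limits from the average moving range; the sketch below uses hypothetical data and the standard X-chart constant 2.66.

```python
import statistics

# Periodic check-standard results: destructive tests on members of the
# saved group (hypothetical data gathered over time).
checks = [51.0, 50.7, 51.3, 50.9, 51.1, 50.6, 51.2, 50.8, 51.0, 50.9]

# Individuals chart: centerline at the mean, limits from the average
# moving range (2.66 = 3 / d2, with d2 = 1.128 for n = 2).
center = statistics.mean(checks)
moving_ranges = [abs(b - a) for a, b in zip(checks, checks[1:])]
mr_bar = statistics.mean(moving_ranges)
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

def in_control(value):
    """True if a new check-standard result stays within the chart limits."""
    return lcl <= value <= ucl
```

Because product variation is in the mix, these limits are wider than they would be for a true unchanging standard, which is exactly the loss of sensitivity noted above.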

This leaves us with only one more issue: The original collection of samples can't last forever.

When you're down to, say, the last 10% of the original batch, collect another batch from the process when it's running well, just as before. Now measure all that remains of the first group and an equal number of samples from the new group. This will enable you to determine statistically how to bridge the gap. You can make changes in the centerline and control limits of the chart based on this experiment and leave yourself with a working chart that should apply to ongoing monitoring using the new samples. This is analogous to adjusting the chart from run to run in short-run statistical process control (short-run SPC). The process remains the same, although the mean and variability may change between runs.
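The bridging experiment might be reduced to arithmetic like the following (hypothetical data and an assumed existing centerline, not figures from the column): shift the centerline by the observed difference between the two groups and recompute the spread from the new samples.

```python
import statistics

# Last of the old check-standard group and an equal number of new samples,
# all destructively tested in one session (hypothetical data).
old_tail = [50.9, 51.1, 50.8, 51.0, 50.7, 51.2]
new_head = [51.4, 51.6, 51.3, 51.5, 51.7, 51.2]

# How far has the new group's level shifted from the old one?
shift = statistics.mean(new_head) - statistics.mean(old_tail)

# Update the chart the way short-run SPC adjusts between runs: move the
# centerline by the observed shift and recompute the moving-range spread.
old_center = 50.95  # centerline of the existing chart (hypothetical)
new_center = old_center + shift
new_mr_bar = statistics.mean(
    abs(b - a) for a, b in zip(new_head, new_head[1:]))
new_ucl = new_center + 2.66 * new_mr_bar
new_lcl = new_center - 2.66 * new_mr_bar
```

With more samples per group, a formal two-sample comparison would tell you whether the shift is real or just noise; the six-and-six example here is only illustrative.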

This is not an entirely satisfactory process because the variation of the product samples desensitizes the monitoring tools. Nevertheless, it does work, and it's the best you can do unless nondestructive measurements of the same parameters can be developed.


1. ANSI/ISO/ASQC Q9001-1994, Quality Systems--Model for Quality Assurance in Design, Development, Production, Installation and Servicing (Milwaukee: ASQ Quality Press, 1994).

PHILIP STEIN is a metrology and quality consultant in private practice in Pennington, NJ. He holds a master's degree in measurement science from the George Washington University in Washington, DC, and is an ASQ Fellow.

