2019

STATISTICS ROUNDTABLE

Are You Making Decisions in a Fog?

by Ronald D. Snee

“When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.”

Lord Kelvin1

Just as water makes up two-thirds of the world’s surface, measurement constitutes an enormous part of the scientific method and scientific problem solving. Scrupulous pursuit of scientific knowledge entails recognizing and formulating a problem, collecting data through observation and experiment, and formulating and testing hypotheses.

Measurement is required to monitor and improve processes in scientific and commercial settings, but not all measurement methods are created equal. Many are highly variable, unrepeatable or imprecise. Six Sigma projects, for example, have identified the inadequacies of many measurement systems.

Given the wide disparities among measurement systems, Lord Kelvin might have taken it one step further and said measuring inaccurately or inadequately could be worse than not measuring at all. If you use inaccurate or inadequate measurements you believe to be satisfactory, you may forge blindly ahead and make erroneous decisions. If you are to avoid making decisions in a fog, you must understand measurement is a process that must be continually measured, monitored and improved.

Poor Measurement Can Lead to Poor Decisions

The observed variation in process output measurements is not simply the variation in the process itself; it is the variation in the process plus the variation in measurement that results from an inadequate measurement system. Increased measurement variation results in increased variation in the measured values for the outputs of processes—with costly consequences.
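The additivity of these variance components can be seen in a short simulation. The Python sketch below uses arbitrary illustrative numbers (not from any study in this column) to show that the observed standard deviation is the square root of the sum of the process and measurement variances:

```python
import numpy as np

rng = np.random.default_rng(42)  # seed is arbitrary, for reproducibility

n = 100_000
process_sd = 2.0      # hypothetical true process standard deviation
measurement_sd = 1.5  # hypothetical measurement-system standard deviation

true_values = rng.normal(loc=50.0, scale=process_sd, size=n)
observed = true_values + rng.normal(loc=0.0, scale=measurement_sd, size=n)

# Variances add: var(observed) = var(process) + var(measurement)
print(f"true process SD: {true_values.std(ddof=1):.3f}")
print(f"observed SD:     {observed.std(ddof=1):.3f}")
print(f"expected SD:     {np.hypot(process_sd, measurement_sd):.3f}")
```

Note that the observed spread (about 2.5 here) overstates the process spread (2.0) even though the measurement system contributes "only" 1.5 — standard deviations do not add linearly.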

For example, a manufacturing company was experiencing a high number of defects in its process for producing a high value precious metal bar. These defects were costing the company $150,000 per year. When the measurement system was analyzed during a Six Sigma project, the company discovered the gage used in the inspection of the bars had not been calibrated in two years. Once the gage was properly calibrated, the defects disappeared. The company then realized another product was experiencing the same problem. It saved an additional $133,000 annually when the second gage was calibrated.

Such stories are common in the process and assembly industries. In my experience, as many as 50% of the measurement systems studied as part of Six Sigma projects need to be improved.


As Table 1 shows, these variations in measurement systems can stem from many sources:

  • Repeatability variations occur when one person measures the same characteristic several times with the same gage and gets varying results.
  • Reproducibility variations occur when different individuals measure the same characteristic with the same gage and get varying results or when different gages produce varying measurements of the same thing.
  • Stability variations occur when there’s a change in the gage’s warmup time, operator or environmental conditions, or the time of the day, week or month a measurement is taken.
  • Linearity variations occur when there are differences in the accuracy values through the expected operating range of the equipment. Linearity is represented by the slope of a best fit line through the accuracy values from the smallest to the largest size of the gage.
  • Bias variations occur when there is a statistically significant, systematic shift of a gage reading from its true master value.

Monitoring a process using a highly variable measurement system is like watching a parade through binoculars that are out of focus. It will likely result in poor decisions with costly business results. For example, faulty measurements can lead a company to literally give product away by overfilling its packages. They can also cause a company to release defective products and reject acceptable products.

Process capability statistics can understate true process performance. Measurement variation inflates the observed standard deviation, so the Cp and Cpk values calculated from the data are smaller than the process’s actual capability. This can lead a company to reject good product and spend time on unnecessary process improvement studies when the measurement system is really what’s at fault.
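A quick illustration of this effect, using hypothetical specification limits and variance components (none of these numbers come from the column):

```python
import math

# Hypothetical specs and variance components (illustrative only)
usl, lsl = 106.0, 94.0
process_sd = 1.5
measurement_sd = 1.0

def cp(sd):
    """Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sd)

true_cp = cp(process_sd)                                    # capability of the process itself
observed_sd = math.sqrt(process_sd**2 + measurement_sd**2)  # what the data show
observed_cp = cp(observed_sd)

print(f"true Cp:     {true_cp:.2f}")      # 12 / 9 = 1.33
print(f"observed Cp: {observed_cp:.2f}")  # 12 / (6 * 1.80) = 1.11
```

With these numbers, a genuinely capable process (Cp = 1.33) looks marginal (Cp = 1.11) purely because of the gage.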

Underestimated R-square statistics in regression models can result in the rejection of a process model that could be used to predict process performance and develop process improvement and control procedures. Process operating windows may also be narrower than the data indicate, giving the company a false sense of security regarding the effectiveness of its control procedures.
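The attenuation of R-square by measurement noise is easy to demonstrate. In this hypothetical Python sketch, the same linear relationship is fit twice — once as the process produces it and once as a noisy gage reports it (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
x = rng.uniform(0, 10, n)
y_true = 3.0 + 2.0 * x + rng.normal(0, 1.0, n)  # process relationship with process noise
y_meas = y_true + rng.normal(0, 2.0, n)         # same data seen through a noisy gage

def r_squared(x, y):
    """R-square for a simple linear fit = squared correlation."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

print(f"R^2 without measurement error: {r_squared(x, y_true):.3f}")
print(f"R^2 with measurement error:    {r_squared(x, y_meas):.3f}")
```

The underlying model is identical in both fits; only the reported R-square changes, which is exactly why a good model can be wrongly rejected.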

Excessive measurement variation that goes unrecognized creates the impression that the process is performing more poorly than it actually is. The result is increased costs, overspending on process controls, rejection of good quality product, initiation of unneeded improvement studies, decreased employee morale and loss of confidence in the process and its ability to deliver quality product.

Reducing measurement variation results in both practical and statistical benefits:

  • Less product is given away.
  • Fewer product inspection errors mean lower costs of quality and more time to concentrate on substantive process problems rather than measurement problems.
  • The sample size for experiments and process and product monitoring is reduced, leading to greater efficiency and lower costs.
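The sample-size benefit follows from the fact that required sample sizes grow with total variance. A rough normal-approximation sketch (the shift size, significance level and power below are arbitrary choices, not from the column):

```python
import math

def n_required(total_sd, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size to detect a mean shift `delta`
    at ~5% significance with ~80% power (normal approximation)."""
    return math.ceil(((z_alpha + z_beta) * total_sd / delta) ** 2)

process_sd = 1.0
delta = 0.5  # shift worth detecting, in hypothetical units

for measurement_sd in (1.0, 0.5, 0.1):
    total_sd = math.sqrt(process_sd**2 + measurement_sd**2)
    print(f"measurement SD {measurement_sd}: n = {n_required(total_sd, delta)}")
```

Cutting the measurement SD from 1.0 to 0.1 roughly halves the number of samples needed to detect the same shift.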

Far-Reaching Statistical Implications

Despite the benefits of an accurate measurement system, the statistical literature is largely silent on the subject of measurement variation. There is also no mention of it in the standard works on regression analysis and little mention of it in the literature on design of experiments (DoE) and statistical process control.

The few books and articles that do mention it include the Measurement Systems Analysis Reference Manual, Concepts for R&R Studies, the Statistical Manual of the Association of Official Analytical Chemists, “Ruggedness Evaluation of Test Methods” and Evaluating the Measurement Process.2,3,4,5,6 In addition, Statistics for Experimenters discusses the simultaneous estimation of process variation, sampling variation and measurement variation.7

This is surprising, considering variation in a measurement system can significantly affect numerous statistical studies, including:

  • Regression models: Variations may lead to the underestimation of R-square statistics. As a result, you don’t know how good a fit you have to the data or how much of the residual variance is due to measurement variation. True model prediction accuracy is difficult to estimate.
  • DoE: The larger the measurement variation, the larger—and more time consuming and costly—the experiments you must conduct to address the variation. On the other hand, ignoring measurement variation can reduce the sensitivity of your experiment, calling into question its efficacy and accuracy. High measurement variation can also affect the determination of critical factors and the identification of process operating windows. The real operating windows will be smaller than you think.

These challenges have been addressed by Six Sigma methodology in Leading Six Sigma, Six Sigma Beyond the Factory Floor and Breakthrough Business Results With MVT.8,9,10 In these approaches, assessment of the measurement system is built into the methodology of using DoE to improve processes and products. In most other approaches, this piece of the DoE roadmap is absent.

  • Process capability statistics: High measurement variation reduces Cp and Cpk, resulting in the underestimation of true process capability. With the exception of Concepts for R&R Studies,11 the literature on statistical process control largely ignores this crucial effect of measurement variation.

Improving Measurement Methods

Because customers will continually demand higher quality while you strive to reduce the cost of quality, it is essential to understand the far-reaching effects of variations in the measurement system and address them systematically. This effort begins with gage repeatability and reproducibility (gage R&R) studies.

Two sources of variation can arise in measuring the product of any process:

  1. Variation in the process.
  2. Variation in the system used to measure the process.

A gage R&R study seeks to distinguish process variation from measurement system variation and reduce measurement variation due to repeatability and reproducibility.
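A minimal sketch of that decomposition for a crossed study, using simulated data and a simplified method-of-moments estimate. The factor names, effect sizes and estimator here are all illustrative assumptions — a real study would follow the ANOVA method in the Measurement Systems Analysis Reference Manual:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical crossed gage R&R study: 10 parts x 3 operators x 3 repeats
n_parts, n_ops, n_reps = 10, 3, 3
part_effect = rng.normal(0, 2.0, n_parts)  # true part-to-part variation
op_effect = rng.normal(0, 0.5, n_ops)      # reproducibility (operator) variation
repeat_sd = 0.8                            # repeatability (gage) variation

data = (part_effect[:, None, None]
        + op_effect[None, :, None]
        + rng.normal(0, repeat_sd, (n_parts, n_ops, n_reps)))

# Repeatability: pooled variance of repeat readings within each part/operator cell
repeatability_var = data.var(axis=2, ddof=1).mean()

# Reproducibility: variance among operator averages, less the repeatability share
op_means = data.mean(axis=(0, 2))
reproducibility_var = max(op_means.var(ddof=1)
                          - repeatability_var / (n_parts * n_reps), 0.0)

gage_rr_sd = np.sqrt(repeatability_var + reproducibility_var)
print(f"repeatability SD:   {np.sqrt(repeatability_var):.2f}")
print(f"reproducibility SD: {np.sqrt(reproducibility_var):.2f}")
print(f"total gage R&R SD:  {gage_rr_sd:.2f}")
```

Comparing the total gage R&R standard deviation against the part-to-part spread (or the specification width) is what tells you whether the measurement system, rather than the process, needs improvement.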

But how do you reduce it? The statistics literature and much of the Six Sigma literature is silent on this critical step. There are, however, some simple steps you can take to instantly improve your measurement system:

  • Check adherence to SOPs, operator technique and gage maintenance.
  • Consider whether operator capability is an issue that can be enhanced by training.
  • Use another measurement system.
  • Base process measurements on averages of two or more measurements rather than a single, highly variable measurement.
  • Use method ruggedness analysis to identify ways to improve method performance.
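The payoff of basing process measurements on averages is quantifiable: the standard deviation of an average of k independent readings shrinks by a factor of the square root of k. A simulation sketch with a hypothetical single-reading SD:

```python
import numpy as np

rng = np.random.default_rng(3)

measurement_sd = 1.2  # hypothetical single-reading SD
true_value = 10.0
n_sim = 50_000

for k in (1, 2, 4):
    readings = rng.normal(true_value, measurement_sd, (n_sim, k))
    avg_sd = readings.mean(axis=1).std(ddof=1)
    # SD of a k-reading average shrinks by 1/sqrt(k)
    print(f"average of {k} reading(s): SD = {avg_sd:.3f} "
          f"(theory: {measurement_sd / np.sqrt(k):.3f})")
```

Averaging four readings halves the effective measurement SD — often the cheapest improvement available while the root causes of gage variation are being addressed.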

Method ruggedness analysis is a particularly powerful means of finding ways to improve the performance of a method. Many variables, including method, materials, people, equipment, tools and environment, affect the measurement process.

The effects of these variables can be tested using the DoE based measurement method ruggedness discussed in the Statistical Manual of the Association of Official Analytical Chemists and “Ruggedness Evaluation of Test Methods.”12,13 Astonishingly, these DoE methods are not discussed in the leading books on DoE.

Consider the case of an analytical laboratory that was experiencing large variations in viscosity measurements.14 Because viscosity was a key quality characteristic of a high volume product, the lab initiated a viscosity measurement ruggedness study to determine which test method variables, if any, were influencing the viscosity measurement.

Seven factors were studied using a 16-run fractional factorial design. Five factors were found to be important. One of those factors involved two supposedly identical spindles that turned out to be the largest source of variation. The SOPs were revised to better control the important factors, and additional supervisory attention and training were instituted to ensure the new SOPs were followed.
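For readers who want to see the structure of such a design, the sketch below builds a standard resolution IV 2^(7-3) fractional factorial in 16 runs. The generators shown are a common textbook choice, not necessarily the ones used in the viscosity study:

```python
import itertools
import numpy as np

# Full factorial in 4 base factors -> 16 runs
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Generators for a resolution IV 2^(7-3) design (one standard choice)
E, F, G = A * B * C, B * C * D, A * C * D

design = np.column_stack([A, B, C, D, E, F, G])
print(design.shape)         # (16, 7): 16 runs, 7 factors
print(design.sum(axis=0))   # every column is balanced: equal -1s and +1s
```

All seven columns are mutually orthogonal, which is what lets 16 runs screen seven factors; the price is that some two-factor interactions are aliased with each other.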

Recommendations

Three long-term, systematic steps can be taken to ensure we do not continue to make decisions in a fog. First, we can make evaluation of measurement variation part of all statistical studies. From DoE practice and roadmaps to regression practice and Cp and Cpk calculations, assessment of the measurement system should play an integral role.

Second, we can make periodic gage R&R studies part of a regular program of process maintenance and conduct them when conditions change and when there is a new operator or gage.

Third, when writing about statistics and process improvement, we can discuss the importance of measurement systems. These crucial issues will then find their way into statistics texts and further dispel the fog.


REFERENCES

  1. Bartlett’s Familiar Quotations, 14th edition, 1968, p. 723a.
  2. Measurement Systems Analysis Reference Manual, Automotive Industry Action Group, 1990.
  3. L.B. Barrentine, Concepts for R&R Studies, second edition, ASQ Quality Press, 2003.
  4. W.J. Youden and E.H. Steiner, Statistical Manual of the Association of Official Analytical Chemists, 1975.
  5. Grant Wernimont, “Ruggedness Evaluation of Test Methods,” ASTM Standardization News, Vol. 5, pp. 61-64.
  6. D.J. Wheeler and R.W. Lyday, Evaluating the Measurement Process, second edition, SPC Press, 1989.
  7. George E.P. Box, William G. Hunter and J. Stuart Hunter, Statistics for Experimenters: Design, Innovation and Discovery, second edition, Wiley-Interscience, 2005.
  8. R.D. Snee and R.W. Hoerl, Leading Six Sigma, Pearson Prentice Hall, 2003.
  9. R.D. Snee and R.W. Hoerl, Six Sigma Beyond the Factory Floor, Pearson Prentice Hall, 2005.
  10. Charles Holland and David Cochran, Breakthrough Business Results With MVT, John Wiley and Sons, 2005.
  11. Barrentine, Concepts for R&R Studies, see reference 3.
  12. Youden, Statistical Manual of the Association of Official Analytical Chemists, see reference 4.
  13. Wernimont, “Ruggedness Evaluation of Test Methods,” see reference 5.
  14. R.D. Snee, L.B. Hare and J.R. Trout, eds., “Experimenting With a Large Number of Variables,” Experiments in Industry, ASQ Quality Press, 1985.

RONALD D. SNEE is principal of process and organizational excellence at Tunnell Consulting in King of Prussia, PA. He earned a doctorate in applied and mathematical statistics from Rutgers University, New Brunswick, NJ. Snee has also received ASQ’s Shewhart and Grant medals and is an ASQ Fellow.
