Going on Feel
Monitor and improve process stability to make customers happy
by Ronald D. Snee and Roger W. Hoerl
Customers consider consistent quality to be one of the most important product and service attributes. Often, it's the most important. Although the highest quality for a given price tops the customers' requirement list, customers sometimes will sacrifice a little quality for a more consistent product.
Consider this: You have two suppliers for a given product, and the required delivery time is 10 days. Supplier one delivers its product in an average of 10 days, with a typical range of six to 14 days. Supplier two has a slightly longer average delivery time—11 days—with a typical range of nine to 13 days.
Which supplier would you choose? Many customers would say the second supplier—the one with a 10% longer average delivery time—to get consistency of delivery.
Jack Welch emphasized that customers "feel" variation and lack of consistency in a product much more so than the "average" in a product.1 In other words, variation can produce an emotional response from customers. Customers never see, or feel, the average product—only product variation and problems that excessive variation causes them.
Consistent quality is produced by stable processes. Control charts are the most commonly used method of assessing process stability. When a control chart signals an assignable cause, an investigation can uncover the source of the variation and determine the proper corrective action. This gives the impression that process stability is a go or no-go situation. In other words, the process is either stable or it is not.
Figure 1 shows the weight of tablets produced by two different presses in the same production run.2 The tablet weights of both presses were well within the control limits, and the production of the two processes appeared to be stable during the production run. All tablet weights were well within specifications of 1.02-1.08.
In Figure 2, you see the assay of pharmaceutical batches during a three-year period.3 The process appeared to be stable in the first year, started to drift in the second year and experienced increased variation in the third year. The trend was detected in the third year, and a correction was made about midyear. During the three-year period, all product was well within product specifications of 90-105.
In both examples, the control limits provide a go or no-go decision regarding process stability, but they do not provide a measure of stability. From a practical perspective, you need a measurement to tell you when you need to be concerned about stability. Some levels of instability are worse than others. The following metric helps make such decisions.
Process stability metric
You can build a process stability metric by expressing the long-term (LT) and short-term (ST) variance of the process in terms of variance components:4
Total variance = ST variance + LT variance.
The LT component in this formula is the variance in the average of the distribution of process measurements over time. The LT variance component measures process stability: as the LT variance component increases, process stability decreases.
We recommend the guidelines in Table 1 for assessing process stability. If the LT variance is less than 20% of the total variance, process stability is unlikely to be a problem. When the LT variance is in the 20 to 30% range, process stability may be an issue. When the LT variance exceeds 30% of the total variance, investigation and corrective action should be considered.
These recommendations are guidelines. Depending on a given situation, the associated business objectives, and the science and engineering underlying the process, different values certainly can be used.
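The Table 1 bands can be encoded as a simple helper function. This is our illustrative sketch: the 20% and 30% thresholds come from the guidelines above, while the function name and wording of the returned assessments are our own.

```python
def assess_stability(lt_pct):
    """Interpret the LT variance expressed as a percentage of total
    variance, using the Table 1 guideline bands (20% and 30%)."""
    if lt_pct < 20:
        return "stable: instability unlikely to be a problem"
    elif lt_pct <= 30:
        return "gray area: stability may be an issue"
    else:
        return "unstable: investigate and consider corrective action"
```

As noted above, the cutoffs are rules of thumb; a given business or process may justify different values.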
In creating the guidelines in Table 1, we used the common practice of expressing the LT variance as a percentage of the total variance. For a given process, for example, if the LT variance component is 12 and the ST variance component is four, the total variance for the process is 12 + 4 = 16, and the LT variance is 100(12)/16 = 75% of the total variance.
The standard deviation of the process—typically used to measure the spread of the distribution of process measurements—is the square root of the total variance. In this example, the process standard deviation is the square root of 16, or four, and the three-standard-deviation limits for the process are +/- 3(4) = +/- 12. This range incorporates LT and ST variance.
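The arithmetic in this worked example can be verified in a few lines, using the article's example values of 12 and four for the LT and ST variance components:

```python
lt_var, st_var = 12.0, 4.0            # LT and ST variance components
total_var = lt_var + st_var           # 12 + 4 = 16
lt_pct = 100 * lt_var / total_var     # LT share: 100(12)/16 = 75%
process_sd = total_var ** 0.5         # sqrt(16) = 4
three_sd_half_width = 3 * process_sd  # +/- 12, spanning LT and ST variation
```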
Basis for this guidance
Six Sigma teaches us that the best LT process performance you could typically expect to achieve is a shift in the process average of less than +/-1.5 ST standard deviations. This is not science; it is a rule of thumb based on extensive experience in diverse application areas.
This rule of thumb implies that the process average can vary by +/-1.5 ST standard deviations, giving a total width of 2(1.5) = 3 ST standard deviations for the distribution of the average (LT variance). This implies that six standard deviations (total process width) of the LT distribution are equal to three ST standard deviations. In other words, the LT standard deviation is one-half of the ST standard deviation because:
Total variance = ST variance + LT variance = ST variance + ¼ (ST variance) = 1.25 ST variance.
We use one-fourth instead of one-half because this equation is for variances, which are the standard deviations squared. Therefore, if LT standard deviation equals one-half of the ST standard deviation, then LT variance equals one-fourth of the ST variance. On a percentage basis, this results in:
LT variance / total variance = ¼ (ST variance) / 1.25 (ST variance) = 0.2, or 20%.
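The same derivation can be checked numerically. The ST variance is set to 1.0 as an arbitrary unit; the 20% result does not depend on its value:

```python
st_var = 1.0                  # ST variance, arbitrary units
lt_var = 0.25 * st_var        # LT sd = half the ST sd, so LT variance = 1/4 ST variance
total_var = st_var + lt_var   # 1.25 * ST variance
lt_share = lt_var / total_var # 0.25 / 1.25 = 0.2, or 20%
```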
Therefore, any LT variance less than 20% of total variance is considered good, and improvement work may not be worth the effort.
LT variance too large?
When should we be concerned that the LT variance is too large? We reasoned that investigation and corrective action should be considered when the LT variance exceeds 30% of the total variance. But the spread of a distribution is typically measured by the standard deviation, not the variance.
To convert this guideline to standard deviations, note that if the LT variance component is 30% of the total variance, the LT standard deviation will be 100(square root of 0.30) = 54.8% of the total standard deviation. An LT variance of more than 30%, therefore, implies an LT standard deviation more than half as large as the total standard deviation.
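The conversion from a variance share to a standard-deviation share is a square root, as a quick check confirms:

```python
import math

lt_var_share = 0.30                    # LT variance as a fraction of total variance
lt_sd_share = math.sqrt(lt_var_share)  # LT sd as a fraction of total sd
# 100 * lt_sd_share rounds to 54.8 (%)
```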
In the published measurement systems analysis (MSA) standard, the gage repeatability and reproducibility (GR&R) variance is considered to be acceptable when the GR&R variance is less than 30% of the total variance.5 Therefore, our recommendation of 30% as the upper end of the gray area for which stability may be a problem is consistent with the GR&R recommendation.
The stability assessments for the examples in Figures 1 and 2 are summarized in Table 2. For Figure 1, the LT variance percentage is 18% and 44% for tablet presses A and B, respectively. This leads us to conclude that press A is more stable than press B. Further, stability is not a problem for press A, but press B is showing some instability. A closer look at Figure 1 shows some shifting above and below the press B average. These shifts also are detected by the special cause tests.6
For Figure 2, Table 2 shows that the LT variance is 6%, 23% and 64% for years one, two and three. This finding leads to the conclusion that stability is not a problem in years one and two, but it is an issue in year three, which is clear in Figure 2. We are satisfied that the LT variance percentage is nicely summarizing the variation observed in Figures 1 and 2, and providing a useful measure of process stability.
From a computational perspective, the LT and ST variance components were calculated by grouping successive observations into nonoverlapping groups of two and computing the variance components using a standard nested analysis of variance (ANOVA) approach.
For press A (Figure 1), the 90 samples were divided into 45 groups of two samples each, and the LT and ST variance components were estimated using a one-way ANOVA (between group-within group). This results in the ST variance being estimated from the variation between the two observations in each of the nonoverlapping groups. This approach also provides an F-test of significance for the LT variance component. The resulting p-values associated with the F-ratios are shown in Table 2.
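The pairing-and-ANOVA computation described above can be sketched as follows. This is our illustration of the approach, not the authors' code: the function name, the synthetic test data and the use of scipy for the F-test p-value are our assumptions.

```python
import numpy as np
from scipy import stats

def lt_st_variance_components(x):
    """Split total variance into ST and LT components by grouping
    successive observations into nonoverlapping pairs and running a
    one-way (between group-within group) ANOVA."""
    x = np.asarray(x, dtype=float)
    pairs = x[: (len(x) // 2) * 2].reshape(-1, 2)  # nonoverlapping groups of 2
    k = len(pairs)                                 # number of groups
    ms_within = pairs.var(axis=1, ddof=1).mean()   # ST variance estimate
    ms_between = 2.0 * pairs.mean(axis=1).var(ddof=1)
    # LT variance component from the expected mean squares (n = 2 per group),
    # truncated at zero if the between-group mean square is the smaller one
    lt_var = max((ms_between - ms_within) / 2.0, 0.0)
    f_ratio = ms_between / ms_within
    p_value = stats.f.sf(f_ratio, k - 1, k)        # df = (k - 1, k*(2 - 1))
    lt_pct = 100.0 * lt_var / (lt_var + ms_within)
    return ms_within, lt_var, lt_pct, p_value
```

Applied to 90 simulated measurements, a process whose average drifts over the run shows a much larger LT variance percentage than a stable one, mirroring the press A versus press B contrast in Table 2.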
Process stability improved
An important use of the LT variance percentage is that processes can be ranked by it, which provides a method of prioritizing processes with respect to stability. Given this prioritization, improvement projects can be assigned to the processes with the largest LT variance percentages and, therefore, the poorest stability.
It’s also important to note we often see processes that have stability issues, and yet the product produced by these processes is within product specifications. This was the case in the examples shown in Figures 1 and 2. The fact that the product was within specification is good. The stability concerns are related to the prediction of future process performance.
It is difficult, if not impossible, to accurately predict the future performance of an unstable process. In these situations, unstable processes may need special attention to ensure the product they produce remains within specification while efforts to improve process stability are under way.
Keep in mind that a stable process can still produce variation unacceptable to customers. If there is no significant LT variance, all of the total variance is due to ST variability. ST variance is arguably more difficult to reduce than LT variance because ST variance often reflects the best the process can do, but not always. Recall that control limits represent the voice of the process, and spec limits represent the voice of the customer. Both voices are important, but we should never confuse them.
Customer satisfaction improved
Never lose sight of the fact that customers—the reason for our being—are sensitive to product consistency, perhaps more so than to the average value. Product consistency is a direct result of process stability. We need to pay even greater attention to process stability.
LT variance is a good metric for measuring process stability and helps determine when corrective action is needed. It is also useful in developing improvement strategies: Comparing the stability of a collection of processes prioritizes them with respect to stability issues and identifies which process or processes to target first for improvement activities.
- Jack Welch, Jack: Straight From the Gut, Warner Business Books, 2001.
- Ronald D. Snee, "Crucial Considerations in Monitoring Process Performance and Quality," Pharmaceutical Technology Analytical Technology, October 2010, pp. 38-41.
- Douglas C. Montgomery, Introduction to Statistical Quality Control, sixth edition, John Wiley and Sons, 2009.
- Automotive Industry Action Group, Measurement System Analysis Reference Manual, second edition, Automotive Industry Action Group, 1995.
- Montgomery, Introduction to Statistical Quality Control, see reference 4.
© 2012 Ronald D. Snee and Roger W. Hoerl
Ronald D. Snee is president of Snee Associates LLC in Newark, DE. He has a doctorate in applied and mathematical statistics from Rutgers University in New Brunswick, NJ. Snee has received ASQ’s Shewhart and Grant Medals. He is an ASQ fellow and an academician in the International Academy for Quality.
Roger W. Hoerl is manager of GE Global Research’s applied statistics lab. He has a doctorate in applied statistics from the University of Delaware in Newark. Hoerl is an ASQ fellow, a recipient of the ASQ’s Shewhart Medal and Brumbaugh Award, and an academician in the International Academy for Quality.