
STATISTICS ROUNDTABLE

Wanderlust and Memory

How the EWMA model improves on Shewhart charts by accounting for process memory

by Lynne B. Hare

Pop quiz. Question one: How do you derive Shewhart (X-bar and R, or X-bar and S) control chart limits?

  A. Control chart limits are the same as specification limits.
  B. Control chart limits are moved toward the target from the specification limits to allow for sampling variation.
  C. Run the process and wait until it achieves stability. Take samples of size four or five—or whatever is convenient—every few minutes. Calculate the average range of the sets of these periodic samples and multiply it by A2 from the control chart tables. Add the result to and subtract it from the target to derive the limits.
  D. Take 30 consecutive samples. Calculate their standard deviation. Divide it by the square root of the usual sample size. Multiply that by three, and add and subtract the result from the target.
  E. None of the above.

Question two: What is the purpose of a Shewhart control chart?

  A. To please regulatory officials.
  B. To maintain a process record for legal purposes.
  C. To tell when to adjust the process.
  D. To identify sources of assignable cause variation.
  E. None of the above.

That’s it. Only two questions.

Question one

If you answered A or B, I’m sorry, but you got it wrong. Your process doesn’t know about specifications; it doesn’t care about specifications. At best, it will operate within its inherent, intrinsic capability.

But more than likely, there will be random shocks and interventions of assignable cause variation that disturb the process and cause the end-product variation to exceed capability. Specifications are, or should be, based on product performance from an end-user perspective, not process performance from a manufacturing perspective. All, or almost all, product should be within specification limits.

If you answered C, you may have been influenced by some texts on statistical process control, or by training that promises 10 easy steps to process perfection, like the commercials with 800 numbers on late-night TV.

One clue to C being the wrong answer is that if you wait until the process achieves stability, you could wait forever. Besides, if you actually waited for the achievement of stability before you took data, how would you ever know the process is stable? And how do you measure stability in the first place?

Answer D has two major flaws. One is that those 30 consecutive samples may or may not be representative of the process short-term variation. Would you get the same result if you took a different set of 30 samples? Another flaw is that samples taken adjacently from most processes are highly correlated. That is, when the value of one sample is high (low), it is likely that its neighbor sample is also high (low).

The results are considered autocorrelated. If data are autocorrelated, their standard deviation is not the same as that of independent samples taken from the same process. And when you make decisions on the basis of control charts, you are—knowingly or not—assuming independent samples. More on that later.

So the answer to question one is "none of the above." Control chart limits should be based on the estimate of the capability standard deviation derived from a carefully conducted process capability study.1
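To make the arithmetic concrete, here is a minimal sketch in Python of setting X-bar limits from a capability estimate rather than from specifications. The target, capability standard deviation and subgroup size are made-up numbers for illustration only.

```python
import numpy as np

# Hypothetical values for illustration only
target = 100.0     # process target
sigma_cap = 2.0    # capability standard deviation from a capability study
n = 5              # subgroup size used on the X-bar chart

# Three-sigma limits for subgroup means, centered on the target
se = sigma_cap / np.sqrt(n)
lcl, ucl = target - 3 * se, target + 3 * se
print(f"LCL = {lcl:.2f}, UCL = {ucl:.2f}")   # LCL = 97.32, UCL = 102.68
```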

Question two

You get partial credit if you answered either A or B. It is not that these reasons were part of Walter Shewhart’s intent in developing control charts, but it is true that many organizations rely on control chart data as a legal record and, when needed, to show regulatory officials that they are on the ball.

Grudgingly, I’ll also give partial credit for C, but only because triggering process adjustment has become a common use for control charts. "Wait a minute," you say. "I thought that’s what control charts are for." Not so. Shewhart’s original intent was answer D: to detect the entry of assignable cause variation in the process. If you want to signal the need for process adjustment, there are better devices than Shewhart control charts.

The Shewhart model is:

y = f(x) + ε,

meaning that some observed process value, y, is a function of process settings, x, plus noise, ε. The process mean is fixed, and the variation is experienced around that fixed mean. This is a useful way to look at processes, and as the late, amazingly great George E.P. Box said, "All models are wrong, but some are useful." The Shewhart model serves us well; it is useful.

Another way of looking at processes is to avoid the assumption of a fixed mean. After all, processes have a certain wanderlust, don’t they? Just turn your back on them for a while and see what happens.

A model that fits the wanderlust situations well is the exponentially weighted moving average (EWMA) model:

z_t = λx_t + (1 - λ)z_{t-1}.

This is a recursive model in that the best estimate of the present process mean, z_t, is a factor, λ (0 < λ < 1), times the present observed value, plus (1 - λ) times the previous estimate. The previous estimate is based on the one before it, and so on, all the way back to the beginning of the process. The value of λ determines the memory of the process. If λ is close to one, there is no memory; if it is close to zero, the process would put elephants to shame.
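A minimal sketch of the recursion (in Python, with a simulated wandering series and a hypothetical ewma() helper; the starting value is taken to be the first observation) might look like this:

```python
import numpy as np

def ewma(x, lam=0.25, z0=None):
    """EWMA recursion: z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    x = np.asarray(x, dtype=float)
    z = np.empty_like(x)
    prev = x[0] if z0 is None else z0   # seed with the first observation or a target
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    return z

# Simulated wandering process: a slowly drifting mean plus noise
rng = np.random.default_rng(0)
x = 100 + np.cumsum(rng.normal(0, 0.2, 100)) + rng.normal(0, 1, 100)
z = ewma(x, lam=0.25)   # the EWMA tracks the drifting mean
```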

Clients have assured me that their processes have no memory and that each batch is thoroughly independent of its predecessors. I have my doubts. Aren’t these batches made with some of the same raw materials, on the same equipment and by some of the same people? There is, it seems, good reason to claim the existence of process memory.

Moreover, the memory is confirmed by the data analysis. To explore the memory, you calculate autocorrelations and partial autocorrelations of the data using time series models. Autocorrelations measure the dependency of observations on those preceding them at various lags, while partial autocorrelations measure the additional correlations at fixed lags not accounted for by correlations at previous lags. A partial autocorrelation coefficient at lag k is obtained by fitting to the data an autoregressive model of order k:

z_t = θ_1 z_{t-1} + θ_2 z_{t-2} + ... + θ_k z_{t-k} + a_t.

The z’s are the data at the various lags up to k, the θ’s are coefficients to be estimated, and a_t is the error, or random shock, at time t. It looks scary, but the software does all the work for you.
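As an illustrative sketch of what that software step looks like (assuming Python with statsmodels, and using simulated autocorrelated data rather than real production data):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

# Simulated autocorrelated series: a drifting mean plus noise
rng = np.random.default_rng(1)
x = 50 + np.cumsum(rng.normal(0, 0.3, 200)) + rng.normal(0, 1, 200)

print("autocorrelations:        ", np.round(acf(x, nlags=5), 2))
print("partial autocorrelations:", np.round(pacf(x, nlags=5), 2))
```

Slowly decaying autocorrelations are the usual signature of the kind of trend and memory described here.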

Chances are high that your production data show trends and autocorrelations. They can usually be modeled satisfactorily by moving averages of successive differences. You accumulate successive differences among observations, second minus the first, third minus the second and so on. Next, you calculate moving averages of these differences. The differencing removes linear trends. In the notation of autoregressive integrated moving average (ARIMA) models, this is an ARIMA(0,1,1) model:

z_t - z_{t-1} = a_t - θa_{t-1}.

This model is the equivalent of the EWMA model shown earlier. The smoothing constant, λ, is 1-θ.
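As a sketch of how that equivalence can be exploited (assuming Python with statsmodels and reusing the simulated series x from the earlier sketch), note that statsmodels writes the moving average term with a plus sign, so its estimated coefficient is the negative of the θ above:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Fit an ARIMA(0,1,1) model to the simulated series x from the earlier sketch
fit = ARIMA(pd.Series(x), order=(0, 1, 1)).fit()

# statsmodels models the MA term as +theta * a_{t-1}, so flip the sign
theta = -fit.params["ma.L1"]
lam = 1 - theta
print(f"theta = {theta:.2f}, implied smoothing constant lambda = {lam:.2f}")
```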

Using EWMA

Because the EWMA model takes into account the process memory and some of its wanderlust, it provides more precise indicators of the need for adjustments than the Shewhart model does. This may leave you with some questions.

Question one: What do you do with the repeated observations taken at each sampling time?

Answer one: Run the EWMA chart on the means of the subgroup observations. You also might want to generate a chart of the standard deviations of those observations to run simultaneously with your EWMA chart.
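A minimal sketch of that bookkeeping, assuming 40 hypothetical sampling times with five repeated observations each and reusing the ewma() helper from the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
# Rows are sampling times; columns are the repeated observations in each subgroup
data = 100 + np.cumsum(rng.normal(0, 0.3, 40))[:, None] + rng.normal(0, 2, size=(40, 5))

xbar = data.mean(axis=1)        # subgroup means, charted with the EWMA
s = data.std(axis=1, ddof=1)    # subgroup standard deviations for a companion chart
z = ewma(xbar, lam=0.25)        # ewma() as defined in the earlier sketch
```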

Question two: What should the smoothing constant be?

Answer two: For most industrial processes, the smoothing constant can range from λ = 0.05 to 0.40. It turns out that process adjustment is not extremely sensitive to the choice of constant, so λ = 0.25 might be a good starting point. If you have data from an extended process run, you can fit an ARIMA(0,1,1) model to it and find the moving average coefficient (θ). The data-generated estimate of λ is 1 - θ.

Question three: In the January issue of QP, I ranted about the Nelson rules for Shewhart charts.2 Should you believe me now, then or not at all?

Answer three: The Nelson rules apply, and they are meant for processes that are stable and without memory. EWMA charts may be accompanied by similar rules—written for the same purposes—and these charts are designed for nonstationary processes with memory, which means most of them. Many EWMA software packages can be coaxed into yielding residuals, which are the differences between observations and their EWMAs. A powerful, informative control device is an EWMA chart accompanied by a Shewhart chart of the residuals.
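As a rough sketch of that combined device (the residual here is each observation minus the EWMA computed through the previous observation, that is, its one-step forecast; the function name and the three-sigma check are illustrative, not any particular package's output):

```python
import numpy as np

def ewma_residuals(x, lam=0.25):
    """One-step residuals: each observation minus the previous EWMA (its forecast)."""
    x = np.asarray(x, dtype=float)
    z = x[0]
    resid = []
    for xt in x[1:]:
        resid.append(xt - z)            # observation minus its EWMA forecast
        z = lam * xt + (1 - lam) * z    # update the EWMA
    return np.array(resid)

resid = ewma_residuals(x, lam=0.25)     # x is the simulated series from the earlier sketch
center, sigma = resid.mean(), resid.std(ddof=1)
signals = np.where(np.abs(resid - center) > 3 * sigma)[0]
print("points beyond three-sigma limits:", signals)
```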

None of this is meant to suggest that your use of Shewhart charts is inappropriate. Harkening back to what Box said about all models being wrong and some useful, EWMA charts represent an improvement over Shewhart charts for processes with memory and wanderlust, and that means most of them.


Acknowledgments

  • Hare thanks J. Stuart Hunter, Mark Vandeven and Janice Shade for helpful improvement ideas in preparing this column.

References and Note

  1. For more information, read chapter 8, "Process and System Capability Analysis," of Doug Montgomery’s Introduction to Statistical Quality Control, seventh edition, John Wiley & Sons, Inc., 2013.
  2. Lynne B. Hare, "Follow the Rules," Quality Progress, January 2013, pp. 56-57.

Bibliography

  • Box, George E.P., An Accidental Statistician, John Wiley & Sons, Inc., 2013.
  • Box, George E.P., "George’s Column: Feedback Control by Manual Adjustment," Quality Engineering, 1991, Vol. 4, No. 1, pp. 143-151.
  • Box, George E.P., Luceño, Alberto, and Paniagua-Quiñones, M. del Carmen, Statistical Control by Monitoring and Feedback Adjustment, second edition, John Wiley & Sons, Inc., 2009.
  • Hunter, J. Stuart, "A One-Point Plot Equivalent to the Shewhart Chart With Western Electric Rules," Quality Engineering, 1989, Vol. 2, No. 1, pp. 13-19.
  • Shewhart, Walter, Economic Control of Quality of Manufactured Product, D. Van Nostrand, 1931. Reissued by ASQ Quality Press in 1980.

Lynne B. Hare is a statistical consultant. He holds a doctorate in statistics from Rutgers University in New Brunswick, NJ. He is past chairman of the ASQ Statistics Division and a fellow of both ASQ and the American Statistical Association.

