In many manufacturing environments, such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

Keywords: Adaptive Filtering, Autocorrelated Observations.

*by* STEPHEN V. CROWDER, Sandia National Laboratories, Albuquerque, NM 87185, and LARRY ESHLEMAN, Cornell University, Ithaca, NY 14853

*INTRODUCTION*

Advanced manufacturing technology, pressured by the need for greater efficiency and responsiveness to the customer, now relies heavily on strategies such as low volume production runs to meet the customer's changing demands more quickly and to lower manufacturing costs. In many industries this has meant a dramatic shift in production philosophy from the delivery of a few large orders to many smaller ones. As a result, there is a need to develop statistical process control (SPC) techniques that are useful in low volume manufacturing environments, where relevant data may be scarce. This is especially true in the nuclear weapons complex, with shrinking budgets and a decreasing need for large quantities of new weapons. For the small lot manufacturing that remains, it is particularly important to have modern, appropriate statistical methods in place to support and enhance quality improvement efforts.

In a low volume manufacturing environment, the data necessary to accurately estimate process parameters such as the process mean and standard deviation, and the limits for standard statistical control charts, are usually not available prior to the start of production. The usual recommendation for establishing valid control limits for a standard control chart, for example, is to gather around 25 samples of 4 to 6 observations each as a baseline (see Montgomery (1996)). Quesenberry (1993) recommends a sample of size 300 to establish permanent limits for the individuals chart. Samples this large may not be feasible in a low volume environment, since the number of data points needed to satisfy these recommendations might exceed the total number of parts produced in a short production run. Woodall, Crowder, and Wade (1995) discuss various control chart approaches that have been proposed for the low volume problem. One approach is to pool data from similar parts or processes in an attempt to overcome the data limitations. A second main approach is to adjust the standard limits on the control chart (or equivalently transform the data) to achieve the desired Type I error probability. A third approach emphasizes the monitoring and controlling of process inputs rather than product characteristics. A fourth approach is to use control charts that have greater sensitivity than the standard Shewhart charts. Examples of these charts include the CUSUM chart and the EWMA chart. Crowder and Halbleib (1995) present more details associated with the various techniques that have been proposed to address this problem.
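To illustrate the fourth approach, the following is a minimal sketch of an EWMA chart with exact (time-varying) control limits. It is not the algorithm developed in this paper; the function name and defaults are our own, and the smoothing weight `lam` and limit width `L` are typical textbook choices.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=None, sigma=None):
    """Compute EWMA statistics and control limits for a 1-D series.

    lam is the smoothing weight, L the control-limit width in sigma
    units; mu0 and sigma default to the sample mean and standard
    deviation of the data when no baseline values are available.
    """
    x = np.asarray(x, dtype=float)
    if mu0 is None:
        mu0 = x.mean()
    if sigma is None:
        sigma = x.std(ddof=1)
    z = np.empty_like(x)
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1.0 - lam) * prev  # EWMA recursion
        z[i] = prev
    # Exact control limits, which widen toward their asymptotic value
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam)
                                     * (1.0 - (1.0 - lam) ** (2 * t)))
    return z, mu0 - half_width, mu0 + half_width
```

Because each EWMA statistic pools information from all prior observations, small sustained mean shifts are flagged sooner than with a Shewhart chart of the same width.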

In this paper we develop a control chart that has greater sensitivity than the standard Shewhart chart. In particular we investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. We consider the cases in which the model parameters are known and unknown, and include a discussion of the associated estimators in each case. For the case in which the parameters are known, we describe the estimation procedure and discuss how the filtering algorithm uses the data from the time series, as well as other properties of the resulting Bayesian estimator. For the case in which the parameters are not known, more typical in a low volume manufacturing environment, we describe a two-stage estimation procedure combining maximum likelihood estimation in the first stage with Bayesian estimation in the second stage. This adaptive procedure assumes that initial process parameters are not known, and updates the estimates as more data become available.
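The two-stage idea can be sketched for the simplest autocorrelated model, a stationary AR(1) process. This is only an illustrative sketch under our own assumptions, not the authors' filtering algorithm: stage 1 uses method-of-moments (conditional least squares) estimates rather than full maximum likelihood, and stage 2 updates the process mean with a conjugate normal prior as each new observation arrives. All function names are hypothetical.

```python
import numpy as np

def fit_ar1(x):
    """Stage 1: moment estimates of the AR(1) parameters (mean mu,
    coefficient phi, innovation variance s2) from initial data."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])
    resid = d[1:] - phi * d[:-1]
    s2 = resid.var(ddof=1)
    return mu, phi, s2

def bayes_update_mean(x, phi, s2, prior_mean, prior_var):
    """Stage 2: sequential conjugate-normal updating of the mean.

    Differencing y_t = x_t - phi*x_{t-1} gives (approximately)
    independent observations with mean mu*(1 - phi) and variance s2,
    so the posterior for mu stays normal and can be updated one
    observation at a time as new data arrive.
    """
    m, v = prior_mean, prior_var
    c = 1.0 - phi
    for t in range(1, len(x)):
        y = x[t] - phi * x[t - 1]          # y ~ N(mu*c, s2)
        v_new = 1.0 / (1.0 / v + c * c / s2)
        m = v_new * (m / v + c * y / s2)   # normal-normal update
        v = v_new
    return m, v
```

The appeal in a low volume setting is that the stage 2 posterior is well defined after every single observation, so monitoring can begin immediately and sharpen as data accumulate.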

Much has appeared in the recent literature on process control charting with autocorrelated data. See Vander Wiel (1996) and Wardell et al. (1992, 1994), for example. Most of the attention has been given to monitoring the residuals from a specified time series model to detect mean level shifts, with emphasis on average run lengths. Little attention has been given to the problem of sample size requirements for the possibly short time series associated with low volume production. The recommendations above for very large sample sizes are based on how large the sample size must be for the estimated control limits to perform essentially like the known control limits. The emphasis here, however, is on estimation and minimization of mean squared error (MSE) rather than the traditional hypothesis testing and run length analyses associated with control charting and control limits. With low volume production and small lots, hypothesis testing and run length analyses may not even be meaningful, since the expected run length may well exceed the entire production quantity. Using simulation techniques, we investigate the small sample properties of a two-stage estimation procedure by evaluating mean squared errors and convergence rates of parameter estimates. Based on the results of this investigation we make recommendations regarding sample size requirements for estimating parameters from a possibly short time series of autocorrelated data. We show that far fewer data values are needed than is typically recommended for control charting applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex, applying the adaptive filtering algorithm to a time series of battery cathode weight data. Finally we discuss extensions to more general time series models.
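The kind of simulation study described above can be sketched as follows: generate many AR(1) series of a given length, estimate a parameter from each, and average the squared errors. This is a generic Monte Carlo outline under our own assumed parameter values (here the sample mean as the estimator), not the specific study reported in the paper.

```python
import numpy as np

def simulate_ar1(n, mu, phi, sigma, rng):
    """Generate x_t = mu + phi*(x_{t-1} - mu) + e_t, started from
    the stationary distribution so early values are representative."""
    e = rng.normal(0.0, sigma, n)
    x = np.empty(n)
    x[0] = mu + e[0] / np.sqrt(1.0 - phi ** 2)
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + e[t]
    return x

def mse_of_mean(n, mu=0.0, phi=0.5, sigma=1.0, reps=2000, seed=1):
    """Monte Carlo MSE of the sample mean as an estimate of mu,
    for an AR(1) series of length n."""
    rng = np.random.default_rng(seed)
    errs = [simulate_ar1(n, mu, phi, sigma, rng).mean() - mu
            for _ in range(reps)]
    return float(np.mean(np.square(errs)))
```

Plotting `mse_of_mean(n)` against `n` shows how quickly the MSE levels off, which is the basis for judging how short a baseline series can be before estimates become unreliable.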
