
STATISTICS ROUNDTABLE

Design of Experiments: A Single Experiment or Sequential Learning

by Christine Anderson-Cook

When we look to run an experiment, usually we want to gain a better understanding of the underlying relationship between the input factors of interest (or explanatory variables) and the response of interest.

Because we must make key decisions on how to run an experiment in the absence of the knowledge we hope to gain from the testing, we can find ourselves in a tight spot. If only we knew ahead of time what factors were important, what ranges of those factors were most advantageous, and which model was most reasonable to characterize the underlying relationship.

It is probably not possible or sensible to try to think in terms of designing a single experiment to learn everything of importance about the product or process of interest. Therefore, the concept of sequential experimentation often makes the most sense.

Sometimes, as with an agricultural experiment that needs an entire growing season to obtain responses, a single experiment might be forced on us. However, in many industrial applications, this time restriction does not exist, and we are free to gather data in several well-thought-out increments. Many books on design of experiments make it all too easy to focus on a single design as the solution to understanding a process, and the different roles of different types of experiments can easily be forgotten.

We can think of sequential experimentation as a series of small data collection exercises: At each stage we strive to build from the understanding gained by the previous experiment and refine our model to answer more precisely the final question of interest.

Typical Progression For Experimentation

The typical progression might include these stages:

  1. Preliminary planning—to ensure the right problem is tackled.
  2. An exploratory or screening experiment—to consider the many possible input factors that might be influential and refine that group of factors to a smaller collection of the most important factors.
  3. Shifting the region of interest to close to optimum—to change the ranges of the remaining variables until the optimum is contained in the region in which further experimentation will occur.
  4. A response surface methodology (RSM) experiment—to more precisely estimate the relationship between the important factors and the response, and to identify the most likely location for the process or product to be optimized.
  5. A confirmatory experiment—to verify the information gained at the previous stages is correct and future data can be collected at the optimal combination of the input factors.1,2

Preliminary planning: Brainstorming, historical studies and a preliminary pilot experiment can help resolve some of the definition and measurement issues associated with a research problem: What is the characteristic of interest? Can I find an appropriate numerical measure of that characteristic? Can I measure it precisely? What are some of the input factors I should consider as potentially important?

An exploratory or screening experiment: This stage begins with a list of potentially important factors and seeks to explore the relative importance of these factors in changing the response or responses of interest. Here we are likely most interested in the gross magnitude of the effects and determining which of the potentially vast number of explanatory variables we initially explored can be dropped from further consideration.
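
As a minimal sketch of what this screening analysis can look like, the code below fits a main-effects model to coded factor settings and ranks the factors by the magnitude of their estimated effects. The design and response values are hypothetical, and a real screening study would typically involve many more factors.

```python
import numpy as np

# Hypothetical 2^(3-1) screening fraction in coded (-1/+1) units; columns A, B, C.
X = np.array([
    [-1, -1,  1],
    [ 1, -1, -1],
    [-1,  1, -1],
    [ 1,  1,  1],
], dtype=float)
y = np.array([41.2, 55.7, 43.9, 58.4])  # made-up responses

# Fit y = b0 + bA*A + bB*B + bC*C by least squares.
X1 = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# In coded units, a factor's "effect" is twice its regression coefficient.
effects = dict(zip("ABC", 2 * coef[1:]))
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"factor {name}: estimated effect {eff:+.2f}")
```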

Shifting the region of interest to close to optimum: A first order model or a first order model with interactions is typically a suitable choice for the exploration phase. Ideally, we would like to eliminate factors contributing substantially less to changes in the response than some of the “heavy hitters” that are dominating.

The path of steepest ascent (or descent) is a commonly used method for building on knowledge gained during the screening experiment to move to the most advantageous region of the design space in which we are close to the optimum value of our response. We strive to move throughout the larger space efficiently and without expending too many resources before performing the detailed study with an RSM experiment.
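
A minimal sketch of the steepest ascent calculation, using hypothetical first-order coefficients in coded units: the direction of movement is proportional to the fitted slopes, and each step along that path becomes a candidate trial run.

```python
import numpy as np

# Hypothetical first-order coefficients (coded -1/+1 units) for factors A, B, C.
b = np.array([2.4, -0.8, 1.1])
base = np.zeros(3)   # current center of the design region, in coded units

# Scale the gradient so the factor with the largest slope moves 0.5 coded
# units per step; each point along the path is a candidate run.
direction = b / np.max(np.abs(b))
for step in range(1, 6):
    point = base + 0.5 * step * direction
    print(f"step {step}: coded settings {np.round(point, 2)}")
```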

An RSM experiment: The heart of the study is frequently the RSM design, which takes the most influential subset of the initial factors and closely explores the relationship between the factors and the response. A higher order model (such as second or third order) is commonly needed, because near the optimum of the response there might be considerable curvature in the surface being modeled.
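
As an illustrative sketch with made-up coefficients, a fitted second-order model of the form y = b0 + x'b + x'Bx can be interrogated for its stationary point and curvature:

```python
import numpy as np

# Hypothetical fitted second-order model in two coded factors:
# y = b0 + x'b + x'Bx, where B holds the pure quadratic coefficients on its
# diagonal and half the interaction coefficient off the diagonal.
b = np.array([0.6, 0.4])
B = np.array([[-0.9, 0.25],
              [0.25, -0.6]])

# Stationary point: x_s = -(1/2) * B^{-1} b
x_s = -0.5 * np.linalg.solve(B, b)
eigvals = np.linalg.eigvalsh(B)
print("stationary point (coded units):", np.round(x_s, 3))
print("eigenvalues of B:", np.round(eigvals, 3), "(all negative => a maximum)")
```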

A confirmatory experiment: Once our RSM design and analysis have suggested one or more combinations of the important factors at which the response optimum is estimated to be located, we should run a small confirmatory experiment to verify that our data and model have produced a reliable result we can depend on for longer-term, production-scale implementation.

This notion of sequential experimentation can be used at several points within the larger progression of experiments. Consider a few simple examples to illustrate the advantages of this approach.

Example: Full Factorial Design or Not

Consider a fractional factorial design in the context of the screening phase. If we are interested in studying a large number of factors and assessing their relative importance in influencing the response, then a full factorial design or a high resolution fractional factorial design can be too expensive.

For example, consider a case in which we have 10 factors (A, B,…J) and we believe some proportion of these or their two-factor interactions might influence our response. A full factorial design would require 2^10 = 1,024 experimental runs and could estimate all of the main effects and all of the interactions (up to the 10-factor interaction). This is an excessively large experiment that estimates many terms we are not interested in and do not believe are likely to exist or to influence the response.
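
As a quick sketch of where that run count comes from, enumerating every high/low combination of the 10 factors gives the 1,024 runs directly:

```python
from itertools import product

factors = list("ABCDEFGHIJ")
full_factorial = list(product([-1, 1], repeat=len(factors)))

print(len(full_factorial))                    # 1024 runs
print(dict(zip(factors, full_factorial[0])))  # settings for the first run
```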

An alternative would be to run a resolution V design—meaning the shortest word in the defining relation of the experiment is of length five. With such a design we can estimate all of the main effects and two-factor interactions without any confounding among them.3 This would lead to a 2^(10-3) design with 128 observations, which still might be too large for determining which factors are active.

Another alternative, which uses the idea of sequential experimentation, would be to run a 2^(10-5) resolution IV fractional factorial with 32 observations, which can estimate all of the main effects (assuming three-way interactions are negligible) and confounds some two-way interactions with others.
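
A sketch of how such a fraction can be constructed: start from a full factorial in five base factors and define the remaining five factors through generators. The generators below are one workable choice that yields resolution IV, not necessarily the set found in any particular catalog of designs.

```python
from itertools import product

base = list("ABCDE")
# Each added factor is set equal to a product of base-factor columns.
generators = {"F": "ABCD", "G": "ABCE", "H": "ABDE", "J": "ACDE", "K": "BCDE"}

runs = []
for levels in product([-1, 1], repeat=len(base)):
    run = dict(zip(base, levels))
    for new_factor, word in generators.items():
        value = 1
        for letter in word:
            value *= run[letter]
        run[new_factor] = value
    runs.append(run)

print(len(runs))   # 32 runs, each specifying levels for all 10 factors
print(runs[0])
```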

After seeing the data, we might be very lucky and have only main effects active. In that case, we are finished. More likely, there might be some ambiguity about which two-way interactions are affecting the response. A second 32-run fractional factorial—called a fold over design—can be run to resolve these issues. In rare cases, a second fold over design might be needed to further clarify the relationship between factors and response.
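
A fold over simply reverses the sign of every factor column in the original fraction; combining the two blocks breaks the aliasing between main effects and two-factor interactions. A minimal sketch of the mechanics, using a tiny illustrative fragment rather than the full 32-run fraction:

```python
def fold_over(original_runs):
    """Return the original runs followed by their sign-reversed fold-over runs."""
    reversed_runs = [{factor: -level for factor, level in run.items()}
                     for run in original_runs]
    return original_runs + reversed_runs

# Tiny illustrative fragment (coded levels for three factors):
fragment = [{"A": -1, "B": -1, "C": 1},
            {"A": 1, "B": -1, "C": -1}]
print(fold_over(fragment))
```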

In this way, we are able to run a smaller experiment (of 32, 64 or 96 total runs) and still are likely to gain the same level of understanding about the factors as gained with the single, larger 128-run experiment.

Example: Central Composite Design

It sometimes makes sense to run a central composite design during the RSM phase with blocking, even when it might not be required by constraints on the number of runs possible at one time.

For example, I recently helped a scientist design an experiment involving three factors. The goal was to understand the response of interest—in this case, yield—around current operating conditions. The current settings of the three factor values had been determined through trial and error but were known to produce a stable response and a satisfactory yield.

The robustness of these conditions was not known. There might be easy gains in yield from moving within the surrounding region of the design space. However, there was reason to believe some instability might arise if we changed the three factors too dramatically.

In this case, a central composite design composed of eight observations from a 2^3 factorial, six axial points and several center runs (see Figure 1) might be a logical choice for understanding changes in yield as a function of the three input factors. Deciding on the range of the factors—centered around the current operating conditions—proved rather difficult, as experts were not sure how far the stability of the process would extend.

Figure 1 shows the design, with the factorial runs in blue, the axial runs in black and the four center runs in white.
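
A minimal sketch of how this design can be assembled in coded units, with the axial distances treated as per-factor choices (the alpha values below are purely illustrative):

```python
from itertools import product

factors = ["x1", "x2", "x3"]
alpha = {"x1": 1.5, "x2": 1.2, "x3": 1.68}   # assumed per-factor axial distances

def center_runs(n):
    return [dict.fromkeys(factors, 0.0) for _ in range(n)]

# Block 1: the 2^3 factorial points plus two center runs.
block1 = [dict(zip(factors, levels)) for levels in product([-1.0, 1.0], repeat=3)]
block1 += center_runs(2)

# Block 2: six axial points (one +/- alpha move per factor) plus two center runs.
block2 = []
for f in factors:
    for sign in (-1, 1):
        point = dict.fromkeys(factors, 0.0)
        point[f] = sign * alpha[f]
        block2.append(point)
block2 += center_runs(2)

print(len(block1), len(block2))   # 10 runs and 8 runs
```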

In this case, running the experiment in two blocks provided some of the advantages of sequential learning, first through running the factorial portion of the design (the vertices of the cube) with two center runs.

Based on the observed behavior at the edges of the region, the range for each factor’s axial levels could then be determined before running the second block of the design with two center runs. This approach helped avoid choosing axial values that were either:

  • Too extreme, resulting in wasted runs because of process instability.
  • Too close to the current conditions, making them redundant to existing knowledge.

While not typically presented as having the flexibility to choose a different axial value for each of the three factors, the central composite design can easily be adapted to accommodate this. The only byproduct of this lack of symmetry is that the precision of the parameter estimates differs across factors and the design sacrifices some rotatability when predicting in the design region.

By running center runs in each of the two blocks, we have some ability to directly assess the blocking effect and obtain an estimate of pure error. The size of the entire experiment has not been affected by choosing to run this experiment in two blocks, but we have potentially gained valuable information that will help us avoid wasting experimental runs on nonproductive combinations of factors.
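
As a small sketch with made-up center-run yields, the block shift and a pure-error variance estimate can be obtained directly from those repeated center points:

```python
import statistics

center_block1 = [52.1, 51.6]   # hypothetical center-run yields, factorial block
center_block2 = [50.8, 51.3]   # hypothetical center-run yields, axial block

block_effect = statistics.mean(center_block2) - statistics.mean(center_block1)
pure_error_var = statistics.mean([statistics.variance(center_block1),
                                  statistics.variance(center_block2)])
print(f"estimated block effect: {block_effect:+.2f}")
print(f"pure-error variance estimate: {pure_error_var:.3f}")
```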

Sequential Approaches Make Sense

Whenever possible in designing industrial experiments, it makes sense to look for opportunities to incorporate sequential approaches. Unlike agricultural experiments, responses from industrial experiments can frequently be obtained in a shorter timeframe. The process of sequential experimentation allows us to adapt the design plan as we learn, reduce waste and improve the quality of the results.

A cautionary note about sequential experimentation: While there are many advantages to using this approach—improving the quality of our understanding even as we spend less by collecting fewer data—we must be careful when combining the results from the different phases of experimentation to model changes in the process over time appropriately, typically by including blocking effects.
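
One simple way to guard against this, sketched below with illustrative numbers, is to carry a block (phase) indicator into the combined analysis so that any shift between stages is absorbed by that term rather than by the factor effects:

```python
import numpy as np

# Coded settings for one factor, the phase each run came from, and responses.
x = np.array([-1, 1, -1, 1, -1, 1, -1, 1], dtype=float)
phase = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # 0 = first stage
y = np.array([40.1, 47.9, 39.8, 48.4, 42.0, 50.2, 41.7, 49.8])

# Model: y = b0 + b1*x + b2*phase, fit by least squares.
X = np.column_stack([np.ones_like(x), x, phase])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, factor effect, block (phase) shift:", np.round(coef, 2))
```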

When sequential experimentation is feasible, it can lead to enormous gains in efficiency and guide our learning to give more precise answers to the questions of interest.


REFERENCES AND NOTES

  1. The progression through a sequence of experiments has been well documented in several books and papers. G.E.P. Box and K.B. Wilson, “On the Experimental Attainment of Optimal Conditions,” Journal of the Royal Statistical Society, Series B, Vol. 13, 1951, pp. 1-45, were the first to talk about the process and advantages of sequential experimentation for attaining optima in an industrial setting.
  2. R.H. Myers and D.C. Montgomery, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, Wiley & Sons, 2002. The book is organized around the notion of collecting data and learning sequentially.
  3. Ibid.

CHRISTINE ANDERSON-COOK is a technical staff member of Los Alamos National Laboratory in Los Alamos, NM. She earned a doctorate in statistics from the University of Waterloo in Ontario, Canada. Anderson-Cook is a senior member of ASQ.

