In a mixture experiment, the design factors are the proportions of the components of a mixture, and the response variables depend only on these component proportions. In addition to the mixture components, the experimenter may be interested in other variables that can be varied independently of one another and of the mixture components. Such mixture-process experiments are common in industry. There are many strategies based on different design criteria that are used to create designs involving both types of variables. We develop variance dispersion graphs (VDGs) to evaluate mixture-process designs and illustrate how the graphs are used with two examples.
Key Words: Mixture Experiments; Prediction Variance; Variance Dispersion Graphs.
By HEIDI B. GOLDFARB, The Dial Corporation, Scottsdale, AZ 85254
CONNIE M. BORROR, Drexel University, Philadelphia, PA 19104
DOUGLAS C. MONTGOMERY, Arizona State University, Tempe, AZ 85287-5906
CHRISTINE M. ANDERSON-COOK, Virginia Polytechnic Institute and State University, Blacksburg, VA, 24061-0439
MIXTURE-PROCESS experiments are commonly encountered in many fields, including the chemical, food, pharmaceutical, and consumer products industries. These experiments combine mixture components, which are restricted to sum to a constant and may have further individual component restrictions, with process variables that can vary independently of one another and of the mixture components. A variety of combined models and designs can be used in these situations, depending on the objective of the study and the maximum number of runs that can be performed. See Cornell (2002) for more details on mixture-process experiments. Because mixture-process experiments tend to be large, some compromise often must be made with respect to the number of mixture combinations, the number of process combinations, or both. Strategies for selecting such designs have been developed by Cornell and Gorman (1984), Czitrom (1988, 1990), and Kowalski, Cornell, and Vining (2000, 2002). These methods generally focus on minimizing the variances of the estimated coefficients. While the precision of coefficient estimates is certainly important, experimenters often use the final models to make predictions. Therefore, we consider prediction variance in our design evaluation. With the methodology introduced in the next section, we evaluate some of the aforementioned designs with respect to Scaled Prediction Variance (SPV) over the entire design space. Box and Hunter (1957) advocated the use of SPV distributions to evaluate designs, noting the advantage of such distributions over the single-number summaries used in many of the alphabetic optimality criteria.
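To make the SPV criterion concrete, the following sketch computes the standard scaled prediction variance, SPV(x) = N f(x)'(X'X)⁻¹f(x), for a Scheffé quadratic mixture model fit to a {3, 2} simplex-lattice design. This is an illustrative example only, not a design or computation taken from the article; the design, model, and point evaluated are our own choices.

```python
import numpy as np

# Illustrative design: the {3,2} simplex-lattice in three mixture
# components (x1, x2, x3), each row summing to 1.
design = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
])
N = design.shape[0]  # number of design runs

def model_terms(x):
    # Expansion for the Scheffe quadratic mixture model:
    # linear blending terms plus the three two-component cross products.
    x1, x2, x3 = x
    return np.array([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Model matrix and its information-matrix inverse
X = np.array([model_terms(row) for row in design])
XtX_inv = np.linalg.inv(X.T @ X)

def spv(x):
    # Scaled prediction variance: N * f(x)' (X'X)^{-1} f(x)
    f = model_terms(x)
    return N * f @ XtX_inv @ f

# Evaluate SPV at a pure-component vertex and at the overall centroid.
print(spv(np.array([1.0, 0.0, 0.0])))
print(spv(np.array([1/3, 1/3, 1/3])))
```

Because this design is saturated for the quadratic model (six runs, six parameters), the hat matrix is the identity and SPV equals N = 6 at every design point; a variance dispersion graph would summarize how SPV behaves over the rest of the simplex, not just at the design points.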