
STATISTICS ROUNDTABLE

Painting by the Numbers

Basics of mixture design generation, data modeling and interpretation

by Lynne B. Hare

Have you ever gone to a paint store to match a color sample? Were you amazed when they produced an exact match of your sample’s hue, saturation and lightness? You went home, spread it on the walls right next to the original, and no one could tell the difference.

A paint store miracle? Not really.

The paint-mixing machines have built-in colorimeters to measure the sample, and they have built-in devices for depositing the required pigment combinations to match it. The driving software engine contains a mixture model that instructs servo motors to deposit the right amounts of pigments to blend with the white base paint, creating a paint that matches your sample more closely than your eye can detect.

It’s a customer pleaser—a money maker to be sure. And it is a great feat of statistical engineering.

It’s a shame it only applies to paint. Wait. That’s not right. What about detergents, soaps, body washes, all sorts of food items, drugs and cosmetics? Almost all of them are blends, of course. But how do you get the blending model?

Statistical DoE to the rescue

It turns out that, in principle, experimentation with mixtures differs little from factorial experimentation. The major difference is that the sum of the mixture ingredients (components) is a constant. That sum is usually one, or 100%, or it can be made equivalent to one using a transformation.

The important distinction is that for mixtures, the response depends on the proportions of the ingredients. And as the proportion of a given ingredient increases, the proportion of at least one other ingredient must decrease. This is the case for all mixtures. The responses differ by application: If you are blending gasolines, you might be interested in an octane rating; if you are blending juices, you might be interested in consumer perception. The constraint holds, however: If you increase the proportion of orange juice, the proportion of at least one other type of juice, perhaps guava, must decrease.

OK, so maybe you’re not interested in guava juice blends. Still, you get the picture: Factorial experimentation won’t work, but knowledge of the technology helps.

If you wanted to construct a model for blending ingredients such as gasolines and juices, but never together, you might recall the way you constructed models for the study of independent variables such as mixing time and temperature. There, you conducted factorial experiments, manipulating these independent variables in logical combinations of extreme lows and highs to permit an efficient estimation of their effects. In the simplest, two-level situations, factorial designs and their fractions prove to be the most appropriate.

For mixtures, the story is pretty much the same. You would want low, high and perhaps intermediate levels of each ingredient. To model the effects of the proportions of orange, guava and pineapple juices most efficiently, you would want to examine the extremes and perhaps some intermediate blends. The extreme orange juice high is one, or 100%, with 0% of each of the other two juices; the extreme orange juice low is 0%, with 50% of each of the other two juices.

If you can afford the cost of experimentation, you also might look at the three 50/50 blends and even the center-point blend consisting of 33-1/3% of each of the juices—giving a total of seven unique experimental blends listed in Table 1. To ensure that your model really fits the blending responses well, you might even include runs eight to 10.

Table 1: Experimental blends of orange, guava and pineapple juices (table not reproduced)
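If you would like to generate such a design yourself, here is a minimal Python sketch. The first seven runs follow the text exactly; because Table 1’s last three runs are not reproduced here, the sketch assumes (an assumption, not the article’s stated choice) the common augmentation of three interior axial check blends:

```python
import numpy as np

# Runs 1-7: the seven unique blends of (orange, guava, pineapple)
# described in the text: three pure juices, three 50/50 binary
# blends and the centroid.
pure = np.eye(3)
binary = np.array([[0.5, 0.5, 0.0],
                   [0.5, 0.0, 0.5],
                   [0.0, 0.5, 0.5]])
centroid = np.full((1, 3), 1 / 3)

# Runs 8-10 are not spelled out in the text; one common augmentation
# (assumed here for illustration) uses the three interior axial
# blends, such as (2/3, 1/6, 1/6).
axial = np.full((3, 3), 1 / 6) + np.eye(3) / 2

design = np.vstack([pure, binary, centroid, axial])
assert np.allclose(design.sum(axis=1), 1.0)  # every blend sums to 100%
print(design.round(3))
```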

Using the responses from such an experiment, it is possible to generate a prediction model. It would look like the typical factorial regression model except that it would not have a constant term (β0). If the model fit the data very well, it could be used to predict the average consumer attitude toward all blends, including those intermediate to the 10 actually run.

A simple mixture model might include only linear terms such as:

η = β1z1 + β2z2 + β3z3,

in which η is the "true" response, the β’s are coefficients estimated from the consumer-response data and the z’s represent the proportions of the juices.
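Because there is no constant term, the model can be fit by ordinary least squares on the proportions alone. Here is a minimal Python sketch, with invented consumer-liking scores standing in for real data:

```python
import numpy as np

# Blend proportions (orange, guava, pineapple) for the seven unique runs.
Z = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5],
              [1/3, 1/3, 1/3]])

# Hypothetical consumer-liking scores, one per blend (invented numbers).
y = np.array([6.1, 4.8, 5.2, 6.0, 6.3, 5.1, 5.9])

# No column of ones: the mixture model has no constant term because
# the proportions themselves already sum to one.
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(dict(zip(["orange", "guava", "pineapple"], beta.round(2))))
```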

Quite often, the nature of the responses is more complex than this simple linear model can explain, and more complex models should be examined. A useful second-order model is:

η = β1z1 + β2z2 + β3z3 + β12z1z2 + β13z1z3 + β23z2z3.

Of course, there are many more complicated models that can be evaluated. Regardless of the model, care should be taken with the interpretation of model terms. A second-order term, such as z1z2, is not an "interaction" as it might be called in factorial experimentation. Instead, it is simply a nonlinear blending term: a measure of how two components work together to influence the response, taking into account all the remaining components in the mixture.
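Fitting the second-order model requires nothing more than appending the three cross-product columns to the design matrix before the least-squares step. A sketch, continuing with the same invented data:

```python
import numpy as np
from itertools import combinations

# Z (blend proportions) and y (invented scores) as in the previous sketch.
Z = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [1/3, 1/3, 1/3]], dtype=float)
y = np.array([6.1, 4.8, 5.2, 6.0, 6.3, 5.1, 5.9])

def second_order(Z):
    """Append the pairwise cross-product columns z_i * z_j."""
    cross = [Z[:, i] * Z[:, j] for i, j in combinations(range(Z.shape[1]), 2)]
    return np.column_stack([Z] + cross)

X = second_order(Z)               # columns: z1, z2, z3, z1z2, z1z3, z2z3
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))              # three linear + three nonlinear blending terms
```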

Many mixture experiment situations are not so simple as to permit ingredients to range from zero to 100% of the mix. Chocolate pudding, for example, is composed of cornstarch, sugar, salt, whole milk, chocolate chips and vanilla extract. If you try to make it with 100% of any of these ingredients, it won’t work. Believe me, I’ve tried.

An experimental design—such as the one in Table 1—cannot be used. In its place, we shift to designs formed by computer algorithms that select a specified number of experimental combinations from a full set of candidate points based on the constraints stated in terms of upper and lower bounds of each of the components.

While the thinking is the same, the math is more complex, and great care must be taken to avoid overfitting models by including terms that are highly correlated with other terms in the model. It can be a major hassle, but there is good software around to help you. You also should talk to your local, friendly statistician.
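To make the idea concrete, here is a rough Python sketch of the point-exchange approach such algorithms use, with hypothetical component bounds. It is a toy version of what commercial design software does far more carefully:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical lower and upper bounds for a three-component mixture.
lower = np.array([0.10, 0.20, 0.05])
upper = np.array([0.60, 0.70, 0.50])

# Candidate set: random points on the simplex that satisfy the bounds.
# Real design software also enumerates vertices, edge centers and so on.
cand = rng.dirichlet(np.ones(3), size=5000)
cand = cand[((cand >= lower) & (cand <= upper)).all(axis=1)]

def model_matrix(Z):
    """Second-order mixture model matrix: linear plus cross-product columns."""
    cross = [Z[:, i] * Z[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack([Z] + cross)

def d_crit(idx):
    X = model_matrix(cand[idx])
    return np.linalg.det(X.T @ X)          # the D-optimality criterion

# A simple exchange algorithm: swap each design point for the candidate
# that most improves det(X'X), sweeping until no swap helps.
n_runs = 8
design = list(rng.choice(len(cand), size=n_runs, replace=False))
for _ in range(20):                        # a few sweeps is plenty
    improved = False
    for pos in range(n_runs):
        dets = [d_crit(design[:pos] + [j] + design[pos + 1:])
                for j in range(len(cand))]
        best = int(np.argmax(dets))
        if dets[best] > d_crit(design) * (1 + 1e-6):
            design[pos] = best
            improved = True
    if not improved:
        break

print(cand[design].round(3))
```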

After a suitable model is found to fit the data, the task of interpretation begins. If the mixture in question has only a few components—say two or three—mixture-response surfaces can aid the interpretation. These take the form of contours of constant response superimposed on a three-component space called a simplex, as shown in Figure 1, which illustrates the results of blending three vegetable oil solids sources and measuring the solid fat index at 50° F.

Figure 1: Contours of solid fat index at 50° F over the three-oil simplex (figure not reproduced)
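If you want to draw such a surface yourself, the trick is mapping the three proportions (barycentric coordinates) onto a two-dimensional triangle. Here is a sketch using matplotlib and an invented fitted model (the coefficients and oil names are assumptions, not the article’s data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical second-order model for solid fat index at 50° F
# (coefficients invented for illustration).
b = np.array([42.0, 30.0, 18.0, -12.0, 6.0, -8.0])

def predict(Z):
    z1, z2, z3 = Z.T
    return (b[0]*z1 + b[1]*z2 + b[2]*z3
            + b[3]*z1*z2 + b[4]*z1*z3 + b[5]*z2*z3)

# A grid of blends on the simplex, mapped from barycentric to 2-D
# Cartesian coordinates for plotting.
pts = np.array([(i / 60, j / 60, (60 - i - j) / 60)
                for i in range(61) for j in range(61 - i)])
x = pts[:, 1] + 0.5 * pts[:, 2]
y = (np.sqrt(3) / 2) * pts[:, 2]

plt.tricontourf(x, y, predict(pts), levels=12, cmap="viridis")
plt.colorbar(label="Solid fat index at 50° F")
for name, (lx, ly) in [("Oil A", (0, 0)), ("Oil B", (1, 0)),
                       ("Oil C", (0.5, np.sqrt(3) / 2))]:
    plt.annotate(name, (lx, ly))
plt.gca().set_aspect("equal")
plt.axis("off")
plt.show()
```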

If the mixture is composed of many more ingredients, the use of response surfaces is cumbersome. Many practitioners employ mixture trace plots instead. These rely on the choice of a point of interest, which could be the current product formulation, the center of the design or the boss’s favorite formula. Moving away from that chosen formula, they add or subtract a given component in small increments while keeping all other components in constant relative proportion to their presence in the chosen formula.

The resulting trace, then, depicts what you would see if you could watch the change in response while incrementing or decrementing a given component away from the reference blend.
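Computing a trace is straightforward: step one component away from the reference blend and rescale the others to preserve their relative proportions. A sketch, using a hypothetical reference blend and invented model coefficients:

```python
import numpy as np

# Hypothetical fitted second-order model (coefficients invented).
b = np.array([42.0, 30.0, 18.0, -12.0, 6.0, -8.0])

def predict(Z):
    z1, z2, z3 = Z.T
    return (b[0]*z1 + b[1]*z2 + b[2]*z3
            + b[3]*z1*z2 + b[4]*z1*z3 + b[5]*z2*z3)

ref = np.array([0.30, 0.45, 0.25])   # the chosen reference formulation

def trace_blends(ref, comp, deltas):
    """Step one component up and down from the reference blend, rescaling
    the others so they keep their relative proportions at the reference."""
    rows = []
    for d in deltas:
        z = np.empty_like(ref)
        z[comp] = ref[comp] + d
        others = np.arange(len(ref)) != comp
        z[others] = ref[others] * (1 - z[comp]) / (1 - ref[comp])
        rows.append(z)
    return np.array(rows)

blends = trace_blends(ref, comp=0, deltas=np.linspace(-0.2, 0.2, 9))
assert np.allclose(blends.sum(axis=1), 1.0)   # each row is still a valid blend
print(predict(blends).round(2))               # the trace for component 1
```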

In situations such as the pudding example, the design space is highly irregular, and individual component trace lines are longer or shorter depending on the distances between their bounds. The trace is useful because it shows what happens to the response as each component is varied independently of the others, insofar as that is actually possible.

Figure 2 shows that some components, notably B, E and F, seem relatively inert. The hint is that future experimentation might hold them constant at some convenient proportion, the real drama being with A, C and D.

Figure 2: Component trace plot for the six-component formulation (figure not reproduced)

Interpretation of the results of mixture experiments also is greatly facilitated by the availability of software to provide simulations of multiple responses. Scientists and engineers typically measure more than one response and have in mind desired levels of each of several responses for product or process success. While traces may be informative of individual responses, combining them can be cumbersome. Simulation results can easily number in the thousands without taxing computer resources, and the resulting tables can be sorted to identify component mixtures fulfilling multiple goals simultaneously.
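A bare-bones version of that simulation strategy might look like the following sketch, in which both response models and both goals are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical fitted models for two responses (all numbers invented).
def firmness(Z):
    z1, z2, z3 = Z.T
    return 40*z1 + 25*z2 + 15*z3 - 10*z1*z2 + 8*z2*z3

def liking(Z):
    z1, z2, z3 = Z.T
    return 5*z1 + 7*z2 + 4*z3 + 6*z1*z3

# Simulate thousands of random blends and keep those meeting every goal.
Z = rng.dirichlet(np.ones(3), size=10_000)
ok = (firmness(Z) >= 28) & (liking(Z) >= 5.5)
winners = Z[ok]

print(f"{ok.sum()} of {len(Z)} simulated blends meet both goals")
print(winners[:5].round(3))                  # a few of the qualifying blends
```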

Mathematically, the experimental design generation, data modeling and interpretation are more complex than perhaps I have made them seem here. Conceptually, however, it’s as easy as, well, painting by the numbers.


Acknowledgement

The author has dedicated this column to John A. Cornell, known as "Dr. Mixtures" in statistical circles, who died in July. He was 75. Cornell, a fellow of ASQ and the American Statistical Association, authored Experiments With Mixtures, second edition (John Wiley & Sons, 1990), and served as editor of the Journal of Quality Technology from 1989 to 1991. He also was the recipient of the W.J. Youden Prize for the best expository paper that appeared in Technometrics in 1973, ASQ’s Chemical and Process Industries Division’s Shewell Award in 1981, ASQ’s Brumbaugh Award in 1995 and ASQ’s Shewhart Medal in 2000. Cornell "represented the best in statistics and statistical consulting," Hare said. Visit http://tinyurl.com/cornell-obit for a full obituary.


Lynne B. Hare is a statistical consultant. He holds a doctorate in statistics from Rutgers University in New Brunswick, NJ. He is past chair of the ASQ Statistics Division and a fellow of ASQ and the American Statistical Association.



