EVOP: An Underused Method
A path to productivity and quality—and the technique is free
by Lynne B. Hare
EVOP? An acronym for a new zero defects automotive program called "every vehicle operates perfectly"? No. "Each variable obfuscates production"? No. Well, then what is it?
Evolutionary operation (EVOP) is a technique for process improvement based on the principle that processes generate products or services together with data useful for providing guidance for improvement.
Recognizing that it is highly inefficient and wasteful to ignore these data, George E.P. Box devised a plan to put them to use and published it with crystal clarity in 1957.1 He dubbed it "evolutionary operation" because of its analogy with genetic evolutionary processes whereby the drive to survive through mutations encourages improvements in the physical characteristics of organisms.
To measure what happens when a variable is changed, Box knew that it was necessary to change it. While that sounds simple enough, it is important to recall that there are many who believe incorrectly that reliable causative relationships can be discovered through the analysis of passive data. Those are the "quick check" data used to monitor and guide the process. They are not research aids: They do not contain the necessary or sufficient information for process improvement.
The Box strategy, then, was to take advantage of the process’s continual operation and to induce slight and repeated deviations of key operating variables, all within specification and all centered on the center of the specification limits. The resulting data do contain information for process improvement.
Administratively, this is accomplished as a joint R&D and manufacturing effort, pooling product and process knowledge, with full awareness, involvement and blessings of the plant manager, R&D leadership and organizational stakeholders.
Technically, it is accomplished by an operational team of workers in close contact with the process. They induce and monitor the results of systematic, small and iterative process changes. Then, they report their findings back to the administrators periodically.
Here’s a simple example. Suppose two key factors thought to have major influence on the process yield are catalyst concentration and temperature. The specification range for catalyst concentration is 0.20 to 0.40%, while temperature is 110 to 130°C.
Ordinarily, the process is centered on a concentration of 0.30% and a temperature of 120°C. It is believed that serious losses could be incurred if the process is permitted to run out of specification, so the levels chosen for them are 0.25% and 0.35%, and 115°C and 125°C, respectively—well within the specification limits.
Figure 1 represents this scheme. The cycle of settings is run in the order shown by the circled numbers, with yield data collected at each setting.
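The cycle of settings can be sketched in code. Because Figure 1 is not reproduced here, the exact run order below is an assumption for illustration: the center point first, then the four factorial corners, numbered so that settings 3 and 4 carry the high concentration and settings 3 and 5 the high temperature.

```python
# A minimal sketch of one EVOP cycle for the example: a 2x2 factorial
# (catalyst concentration x temperature) plus a center point.
# The run-order numbering is an assumption, chosen to match the
# effect formulas given later in the article.
center = {"conc_pct": 0.30, "temp_C": 120.0}
delta = {"conc_pct": 0.05, "temp_C": 5.0}   # half-width of the EVOP excursion

# Settings 1-5 as (concentration %, temperature C).
settings = [
    (center["conc_pct"], center["temp_C"]),                                        # 1: center
    (center["conc_pct"] - delta["conc_pct"], center["temp_C"] - delta["temp_C"]),  # 2: low,  low
    (center["conc_pct"] + delta["conc_pct"], center["temp_C"] + delta["temp_C"]),  # 3: high, high
    (center["conc_pct"] + delta["conc_pct"], center["temp_C"] - delta["temp_C"]),  # 4: high, low
    (center["conc_pct"] - delta["conc_pct"], center["temp_C"] + delta["temp_C"]),  # 5: low,  high
]

for i, (conc, temp) in enumerate(settings, start=1):
    print(f"Setting {i}: {conc:.2f}% catalyst, {temp:.0f} C")
```

Note that every setting stays well inside the 0.20 to 0.40% and 110 to 130°C specification ranges.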
After five cycles, suppose the data are as presented in Table 1.
These data should be analyzed and interpreted following every cycle after the initial few to reflect the current process state and potential. An updated summary board, similar to that shown in Table 2, should be posted for all to see. Notice that the mean yields, by setting, are posted in a pattern similar to that in Figure 1.
Below that is the standard deviation. Its estimation can be a bit tricky. The one shown here is based on fitting a simple factorial model in concentration, temperature and the concentration-by-temperature interaction, along with a single degree-of-freedom term to measure lack of fit. (Significant lack of fit might signify curvature, suggesting the need for a more elaborate design or arrival at a local maximum or minimum.) Other good candidate models will give similar, but not exactly the same, estimates of the standard deviation.
Error limits for factor effects are shown next. These are calculated as ±ts/√n, in which t is the appropriate significance point of Student's t distribution—usually 2 will serve for practical purposes—s is the error standard deviation described earlier, and n is the number of cycles completed.
An effect is simply the average difference in response from the high to low level of the factor.
For this design, the effects are estimated as:
- Concentration effect = ½(ȳ₃ + ȳ₄ − ȳ₂ − ȳ₅).
- Temperature effect = ½(ȳ₃ + ȳ₅ − ȳ₂ − ȳ₄).
- Concentration-by-temperature interaction effect = ½(ȳ₂ + ȳ₃ − ȳ₄ − ȳ₅).
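With per-setting mean yields in hand, the effect estimates are simple arithmetic. A sketch follows; the numeric means are hypothetical stand-ins, since Table 1's data are not reproduced here.

```python
# Hypothetical mean yields (%) at the four corner settings 2-5,
# accumulated over the cycles run so far. These are made-up numbers
# used only to show the arithmetic.
y2, y3, y4, y5 = 72.1, 74.8, 74.0, 72.5

# Each effect is the average response at the factor's high level
# minus the average at its low level.
conc_effect = 0.5 * (y3 + y4 - y2 - y5)   # settings 3, 4 are high concentration
temp_effect = 0.5 * (y3 + y5 - y2 - y4)   # settings 3, 5 are high temperature
interaction = 0.5 * (y2 + y3 - y4 - y5)   # concentration-by-temperature

print(f"Concentration effect: {conc_effect:.2f}")
print(f"Temperature effect:   {temp_effect:.2f}")
print(f"Interaction effect:   {interaction:.2f}")
```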
For more on these estimates, see Box and J. Stuart Hunter’s 1959 Technometrics article,2 but recognize that their work was intended for those carrying out calculations by hand. Ease and speed are facilitated by the subsequent proliferation of user-friendly statistical software, but their explanation of details is lucid.
If the span represented by the effect estimate plus or minus the error limits does not contain zero, the effect is considered different from zero, meaning that it is real; it did not happen by chance alone.
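In code, this decision rule amounts to asking whether an effect's magnitude exceeds the error limit. The values of s, n and the example effects below are hypothetical, and t is taken as 2, as the text suggests for practical purposes.

```python
from math import sqrt

# Hypothetical inputs: error standard deviation and cycles completed.
s, n = 1.2, 5
error_limit = 2 * s / sqrt(n)   # the +/- t*s/sqrt(n) limit, with t = 2

def is_real(effect, limit=error_limit):
    """True when the interval effect +/- limit does not contain zero."""
    return abs(effect) > limit

# With a concentration effect of 2.1 and a temperature effect of 0.6
# (made-up values), only the concentration effect clears the limit.
print(f"Error limit: +/- {error_limit:.2f}")
print(f"Concentration effect real? {is_real(2.1)}")
print(f"Temperature effect real?   {is_real(0.6)}")
```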
At this point in the example, it is tempting to say that the process should be changed to a new, higher catalyst concentration because its interval does not contain zero. But this is a decision for the administrative team: if the effect is real, the cost and benefit must be weighed, the R&D experts should be able to reconcile the voice of the data with the underpinning theory, and the other responses hinted at but not shown in Table 2 must be considered.
When factor settings are changed, the EVOP moves to a new phase, and the cycles begin anew with reassessment of the standard deviation.
Question one: For many processes, it is not possible to control settings precisely. How can you carry out an EVOP process under these circumstances?
Answer: The temperature setting in the earlier example might be one such case. While precision of control is to be desired, it is not essential. EVOP users should set factors to the desired targets and strive to get as close as possible.
Question two: Life is never as simple as a 2² factorial plus center point design. Can you use other designs?
Answer: Yes, indeed. Both references show 2³ designs. And actually, any sensible experimental design can be used as the EVOP base.
Question three: How would we know which factors to include in the design? We have many and they all seem important.
Answer: Here’s a suggestion. Form a team of product or service and process-knowledgeable people, and create a cause and effect matrix. Form your basic experimental design on the outcome. Get help from your local, friendly statistician if it looks too complicated.
Question four: We know that some factors entering our process are uncontrolled and uncontrollable. We usually have to tweak the process to make it work. How can we carry out an EVOP program in this environment?
Answer: Sometimes, there are concomitant variables lurking to influence the process outcome. If you can measure them, even though you can’t control them, your local friendly statistician can attempt to account for them while modeling the data to quantify their influence.
Here’s an additional thought. On occasion, I have learned from clients that their product cannot be made in a laboratory environment. Throughout the years, it has evolved so far from its bench-top birth that improvements and discovery of cause-and-effect relationships can be made only in the production facility.
In the past, when things went wrong, an R&D expert would visit the production facility and induce educated tweaks until things went right again. Next, they would wait until things went wrong again and repeat the process. There is scant learning in this behavior.
The introduction of EVOP has been a great help in identifying causal relationships that might not otherwise be discovered. The result is that problems can get fixed and stay fixed.
EVOP is a useful tool in the manufacturing and service process environment because it generates ideas for process improvement, helps establish cause-and-effect relationships and, if done properly, comes for free.
- George E.P. Box, "Evolutionary Operation: A Method for Increasing Industrial Productivity," Journal of the Royal Statistical Society, June 1957.
- George E.P. Box and J. Stuart Hunter, "Condensed Calculations for Evolutionary Operation Programs," Technometrics, February 1959.
Lynne B. Hare is a statistical consultant. He holds a doctorate in statistics from Rutgers University in New Brunswick, NJ. He is past chairman of the ASQ Statistics Division and a fellow of both ASQ and the American Statistical Association.