STATISTICS ROUNDTABLE

More is Better

Inducing more variation in an experiment can lead to better understanding

Central to the core goals of Six Sigma and other quality improvement initiatives is the idea that reducing variation in processes is a vital part of successfully enhancing customer satisfaction and bottom-line results.1,2

Running processes with the responses of interest consistently close to the desired target values can greatly reduce waste and provide predictably desirable products and services. Reducing variability is clearly an important goal that, when reached, results in powerful and far-reaching benefits.

Although variance reduction is an important goal, misconceptions often arise about when and how variability should be reduced. For instance, I recently encountered a situation involving the design stage of experimentation in which known sources of variation were intentionally omitted from the plan because including them would have increased the total observed variability of the process.

In this situation, the goal of the experiment was to understand the sources of variability of the entire process and then use this information for process optimization. To keep response variation down, however, the experimental plan intentionally restricted the input raw materials to a single supplier because it was known that some characteristics of those inputs varied considerably between suppliers.

In a second situation, an experiment was designed to study a key performance metric for a group of complex systems. Complex systems involve collections of multiple processes, components and nested subsystems in which the overall system is difficult to understand. Often, multiple sources of data are involved, and system performance estimation is based on the synthesis of system, subsystem and component level data.

In this second experiment, only the most consistent subpopulation was chosen because other subpopulations were known to have greater variability. The goal of this experiment was to understand the sources of variability for the entire population of systems and use this to predict future performance.

In both examples, when asked what prompted the decisions to restrict the scope of the experiments, variance reduction was cited as the motivation. It was known there was considerable additional variation from the omitted factors, but the actual size of their variability was not known. The thinking was that by eliminating some sources of variability, it would be easier to see the functionality and impact of the other inputs.

I also suspect the experimenters were happy that, by explicitly avoiding this data collection, they never had to quantify the total variability across all known input sources (all raw material suppliers, or all subpopulations).

A full assessment of the wide range of observed responses would have had to be reported up the management chain, and the experimenters wanted to avoid being the bearers of bad news.

Common misconceptions

So what is the common misconception in these two examples, and how does it tie to the general goal of reducing variability? It is a misunderstanding of when variance reduction should take place. At the conclusion of the entire project, we want to choose input levels so that the process operates close to the target with small variation. That is when we want to reduce variability.

During the experimentation phases, however, we want to gain understanding about the process by inducing as much variation in the responses as possible, manipulating the inputs across a wide range of potential values. George Box, one of the founding fathers of industrial statistics, is quoted as saying: “To find out what happens to a system when you interfere with it, you have to interfere with it (not just passively observe it).”3

In other words, to understand the variation in a response, you have to make the response vary through direct manipulation of the inputs. Only by doing this do we gain understanding of how the factors affect the responses, and only then can we explore which input values produce the desired response values. Indeed, we should be willing to knowingly produce some less desirable parts during the experimentation phase to gain the understanding required to optimize the process to its ideal settings during production.

Recall that during the course of a project, we likely want to run several sequential experiments that build on the knowledge gained from previous phases. If in the early stages we can gain an understanding of which inputs are responsible for the largest proportions of variability of the process, this can help us focus our subsequent effort where it will matter the most.

Scouting things out

What is my response to the fear of knowing the full scale of the variability of the process? As with a scouting report for a sports team’s next opponent, I think knowing the whole truth is preferable to making decisions based on partial information.

A good scouting report tells you not only the weaknesses of the opposition that can be exploited, but also the opponent’s strengths that must be consciously and vigorously defended against.

In our experiment, we want to know the strengths of the process (where it is consistent and on target) as well as its weaknesses (where it might be inconsistent and off target).

When the total variance is known, the conversation with managers will have an entirely different focus. If we are really lucky, the previously omitted sources of variability might not be as bad as originally feared. More likely, though, those feared big sources of variability will turn out to be important.

Either way, knowing the relative magnitude of different contributors to variability will allow for a sound variance reduction plan to be developed. The net result of this all-encompassing experiment might be to recalibrate where energy is directed to improving the process. Ultimately, this is the right solution.
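To make this concrete, here is a minimal sketch in Python of how knowing the relative sizes of independent contributors points to where variance reduction effort pays off most. The variance components and their names below are purely hypothetical numbers chosen for illustration, not values from either example.

    import math

    # Hypothetical, independent variance components for the full process.
    components = {
        "raw material supplier": 9.0,
        "machine to machine": 4.0,
        "ambient conditions": 1.0,
        "measurement error": 0.5,
    }

    total_var = sum(components.values())
    print(f"current total standard deviation: {math.sqrt(total_var):.2f}")

    # Because independent variances add, halving one component at a time
    # shows how much each would move the total.
    for name, var in components.items():
        new_sd = math.sqrt(total_var - var / 2.0)
        print(f"halve '{name}' ({var / total_var:.0%} of total variance): "
              f"total standard deviation drops to {new_sd:.2f}")

With these made-up numbers, halving the largest contributor lowers the total standard deviation from about 3.8 to 3.2, while halving the smallest barely moves it at all. That is exactly the kind of recalibration of effort the all-encompassing experiment makes possible.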

Consider again the first example in which the initial decision was to restrict the experiment to just a single supplier. We could run an experiment in which we vary a lot of the other inputs and find an optimal combination of them to get close to our desired targets with minimal variability.

When we use that solution in the production environment, what will the likely result be? There will be much more variability than during the experimentation phase, and there is a good chance that the mean of our response will also not be on target.

It is also possible that all of our fine-tuning of the process for that one supplier of raw materials addresses only a tiny fraction of the total variability once materials from all the suppliers are considered. In other words, the real solution might be that we need to make the materials coming from the different suppliers more uniform to control our process. If this is the case, then our experimental efforts have solved the wrong problem.

Smaller sources of variation

Let’s also address the other reason given for excluding the bigger potential sources of variability: the fear that, in their presence, we would not be able to identify the smaller sources of variation. In an observational study, in which we cannot control the different combinations of factor inputs considered, this might indeed be a risk.

However, if we are able to run a designed experiment in which strategic levels of inputs are chosen, we can select combinations that allow us to estimate the factor effects independently.

For example, suppose that in the first phase of our experiment the goal is to understand the main effects of our factors and their two-way interactions on the response. We could specify two levels of each continuous input factor near the extremes of the ranges we expect to observe during normal production.

If there are more than two suppliers of raw materials, we could include all of them as separate levels of a categorical factor.

Alternatively, if information about the differences between suppliers is available, we could select a subset of suppliers thought to represent the entire range of characteristics of interest. If we then ran a factorial or fractional factorial design,4 we would be able to estimate the effects of all our factors separately, regardless of their relative sizes.
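As a small illustration of what such a design could look like, here is a sketch in Python. The two continuous factors (temperature and pressure), their low and high settings and the three suppliers are hypothetical stand-ins, not details from the example.

    from itertools import product

    # Two continuous factors at settings near the extremes of the range
    # expected during normal production (values are illustrative only).
    temperature = [150, 200]
    pressure = [30, 60]

    # Every supplier included as a level of a categorical factor.
    supplier = ["A", "B", "C"]

    # Full factorial design: every combination of levels appears, so main
    # effects and two-way interactions can be estimated independently.
    design = list(product(temperature, pressure, supplier))
    for run, (temp, press, sup) in enumerate(design, start=1):
        print(f"run {run:2d}: temperature={temp}, pressure={press}, supplier={sup}")

Here the full factorial needs only 2 x 2 x 3 = 12 runs. With more factors, a fractional factorial chosen with adequate resolution would keep the run count manageable while still allowing the main effects and two-way interactions of interest to be estimated separately.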

If, in the second example, we had proceeded to characterize the performance of the entire population using just the one subpopulation, our expectations would likely have been overly optimistic. Clearly, our goal is to realistically assess the true variability of the process, but it can be helpful to think about the trade-offs between the two types of errors we can make in this estimation.
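To see why in a small sketch, recall that the total variance across a population splits into the average variance within subpopulations plus the variance between the subpopulation means. The means and variances below are purely hypothetical and assume equally sized subpopulations.

    # Law of total variance with three equally weighted, hypothetical subpopulations.
    means = [10.0, 12.5, 14.0]       # subpopulation means
    variances = [1.0, 4.0, 6.0]      # subpopulation variances

    n = len(means)
    grand_mean = sum(means) / n
    within = sum(variances) / n
    between = sum((m - grand_mean) ** 2 for m in means) / n
    total_var = within + between

    print(f"most consistent subpopulation variance: {min(variances):.2f}")
    print(f"variance across the whole population:   {total_var:.2f}")

With these made-up numbers, judging the whole population by its most consistent subpopulation would understate the true variance by more than a factor of six, which is exactly the overly optimistic picture described above.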

Over and under

During the experimentation phase, we can either underestimate the variability or overestimate it. If we underestimate it during the experimentation phase, we might initially make our lives simpler because there is less problem solving needed. However, sometime during production, we will get a nasty shock about the true variability.

This might lead to an urgent fire drill response to fix an unexpected problem.

If we initially overestimate the variability, we might invest more to reduce the variability than absolutely necessary to meet our targets. During production, we will have a process that exceeds expectations.

Clearly, these two incorrect assessments of the process carry different costs, and the timing and size of those costs will vary from application to application. If we take a shortsighted view of the experimentation phase, underestimating the variability might not seem like such a bad risk.

However, if we think about the bigger picture and the resources involved in the production phases, often we will be more willing to strive to protect ourselves against the risk of underestimating the true variability of our process.

So, during the experimentation phase of a study, inducing more variation in the response can lead to improved understanding, which will enable us to reach the goal of reducing variability during production. Sometimes making things worse can make the final product much better.


REFERENCES

  1. Roger Hoerl and Ron Snee, Statistical Thinking: Improving Business Performance, Duxbury, 2001.
  2. Ron Snee and Roger Hoerl, Six Sigma Beyond the Factory Floor: Deployment Strategies for Financial Services, Health Care and the Rest of the Real Economy, Prentice Hall, 2004.
  3. Thomas P. Ryan, Modern Experimental Design, Wiley, 2007.
  4. Douglas C. Montgomery, Design and Analysis of Experiments, sixth edition, Wiley, 2004.

CHRISTINE M. ANDERSON-COOK is a technical staff member of Los Alamos National Laboratory in Los Alamos, NM. She earned a doctorate in statistics from the University of Waterloo in Ontario. Anderson-Cook is a senior member of ASQ and a fellow of the American Statistical Assn.

