
Control Charting at the 30,000-Foot-Level

by Forrest Breyfogle

For a given process, would everyone create a similar-looking control chart and make a similar statement about process control and predictability? Would they make similar statements about process capability for given specification limits? Not necessarily. Process statements are a function not only of process characteristics and chance sampling differences, but also of the sampling approach.

This can have dramatic implications:

  • One person could describe a process as out of control, which would lead to activities that immediately address process perturbations as abnormalities, and another person could describe the process as being in control. For the second interpretation, the perturbations are perceived as fluctuations typically expected within the process, where any long-lasting improvement effort involves looking at the whole process.
  • One person could describe a process as not predictable, and another person could describe it as predictable.

To illustrate how different interpretations can occur, let's analyze the following process time series data to determine the process's state of control and predictability and then its capability relative to customer specifications of 95 to 105 (see Table 1).

This type of data traditionally leads to an x̄ and R control chart, as shown in Figure 1 (p. 68). Whenever a measurement on a control chart is beyond the upper control limit (UCL) or lower control limit (LCL), the process is said to be out of control. Out-of-control conditions are called special cause conditions, and they can trigger a causal problem investigation. Because so many out-of-control conditions are apparent in Figure 1, many causal investigations could have been initiated. But out-of-control processes are not predictable, so no process capability statement should be made about how the process is expected to perform in the future relative to its specification limits.
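As a rough illustration of how x̄ and R limits of this kind are computed, here is a minimal sketch in Python. The subgroup values below are hypothetical placeholders, not the Table 1 data, and A2 = 0.577 assumes subgroups of five measurements.

```python
import numpy as np

# Hypothetical subgroups of five measurements each; the actual Table 1 values
# are not reproduced here.
subgroups = np.array([
    [98.1, 101.2, 99.5, 100.3, 97.8],
    [102.4, 99.0, 100.7, 98.6, 101.1],
    [97.5, 100.2, 99.8, 102.0, 98.9],
])

A2 = 0.577                                          # x-bar chart constant for n = 5
xbar = subgroups.mean(axis=1)                       # subgroup averages
r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges

ucl = xbar.mean() + A2 * r.mean()   # limits reflect only within-subgroup variation
lcl = xbar.mean() - A2 * r.mean()
print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("Points beyond the limits:", xbar[(xbar > ucl) | (xbar < lcl)])
```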

When creating a sampling plan, we may select only one sample instead of several samples for each subgroup. Let's say this is what happened and only the first measurement was observed for each of the 10 subgroups. For this situation we would create an XmR control chart like the one shown in Figure 2.

This control chart is very different from the x̄ and R chart shown in Figure 1. Because the plotted values are within the control limits, we conclude that only common cause variability exists and that the process should be considered in control, or predictable.
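A minimal sketch of the corresponding XmR (individuals) limit calculation, again using hypothetical values in place of the first measurement from each of the 10 subgroups:

```python
import numpy as np

# Hypothetical individual values standing in for the first measurement
# of each subgroup.
x = np.array([98.1, 102.4, 97.5, 103.0, 96.8, 101.9, 99.4, 104.2, 95.7, 100.6])

mr_bar = np.abs(np.diff(x)).mean()   # average moving range between consecutive points
ucl = x.mean() + 2.66 * mr_bar       # 2.66 = 3 / 1.128 (d2 for a moving range of two)
lcl = x.mean() - 2.66 * mr_bar
print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```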

The dramatic difference between the limits of these two control charts is caused by the differing approaches to determining sampling standard deviation, which is a control limit calculation term. To illustrate this, let's examine how these two control chart limit calculations are made.

For x̄ charts, the UCL and LCL are calculated from the relationships UCL = x̿ + A2R̄ and LCL = x̿ − A2R̄, in which x̿ is the overall average of the subgroup averages, A2 is a constant that depends on subgroup size, and R̄ is the average range within subgroups.

For X charts, the UCL and LCL are calculated from the relationships UCL = x̄ + 2.66MR̄ and LCL = x̄ − 2.66MR̄, in which x̄ is the average of the individual measurements and MR̄ is the average moving range between subgroups.

The limits for the x̄ chart are derived from within-subgroup variability (σ̂ = R̄/d2), while the sampling standard deviations for XmR charts are calculated from between-subgroup variability (σ̂ = MR̄/1.128).
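To make the within- versus between-subgroup distinction concrete, the sketch below compares the two standard deviation estimates for the same hypothetical subgroups used earlier; the d2 constants (2.326 for n = 5, 1.128 for n = 2) are the standard unbiasing values.

```python
import numpy as np

# Same hypothetical subgroups as before; not the Table 1 data.
subgroups = np.array([
    [98.1, 101.2, 99.5, 100.3, 97.8],
    [102.4, 99.0, 100.7, 98.6, 101.1],
    [97.5, 100.2, 99.8, 102.0, 98.9],
])

d2_n5, d2_n2 = 2.326, 1.128   # unbiasing constants for n = 5 and n = 2

# Within-subgroup estimate, which drives the x-bar and R chart limits
sigma_within = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean() / d2_n5

# Between-subgroup estimate, which drives the XmR limits on the subgroup means
means = subgroups.mean(axis=1)
sigma_between = np.abs(np.diff(means)).mean() / d2_n2

print(f"within-subgroup sigma estimate:  {sigma_within:.3f}")
print(f"between-subgroup sigma estimate: {sigma_between:.3f}")
```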

The 30,000-Foot-Level View

Which control charting technique is most appropriate? It depends on how you categorize the source of variability relative to common and special causes. To explain, I will use a manufacturing situation, though the same applies in transactional environments.

Let's say new supplier shipments are received daily and there is a difference between the shipments, which unknowingly affects the output of the process. To answer our original common vs. special cause question, we need to decide whether the impact of day-to-day differences on our process should be considered a component of common or special cause variability. If these day-to-day differences are a noise variable to our process that we cannot control, we will probably use a control charting procedure that considers the day-to-day variability a common cause.

For this to occur, we need a sampling plan in which the impact from this type of noise variable occurs between subgroupings. I call the plan "infrequent subgrouping/sampling" and the view of the process at this level the "30,000-foot-level view." When creating control charts at the 30,000-foot level, we need to include between-subgroup variability within our control chart limit calculations, as was achieved in the earlier XmR charting procedure.

Building on Six Sigma's Strengths

Keith Moe, past executive vice president of 3M, says a business must create more customers and cash (MC2). Its existence (E) depends on this. In other words, E=MC2 (see "How To Achieve MC2," p. 67).

When selecting projects, we should have a system that focuses on MC2, as opposed to creating projects that may not be in direct alignment to business needs. Let's call this approach smarter Six Sigma solutions (S4). The S4 methodology takes some unique paths, including creating a system in which operational metrics and strategic plans pull (used here as a lean term) for the creation of meaningful projects aligned with business needs.

Operational and Six Sigma project responses should be tracked so the right activities occur. We need to create a system of metrics that pulls for the creation of Six Sigma projects when a process will not produce a consistent desirable response. By selecting the right measurement approach, we can get out of the firefighting mode where we're working to fix common cause, noncompliant occurrences as though they were special cause conditions.

To illustrate this approach, let's say we have a 30,000-foot-level response that is aligned to the needs of the business (MC2). In some situations, we will have many responses for each subgrouping; for this example, we'll use the wait time for all daily incoming calls to a call center. We would like to use a methodology that has infrequent sampling or subgrouping with multiple samples for each subgroup but still addresses between-subgroup variability when calculating the control limits.

In this situation, we can track the subgroup mean and log standard deviation using two XmR control charts to assess whether the process is in control or predictable. For in control or predictable processes, the data can later be used to determine the overall process capability and performance metric.

For the data in Table 1 (p. 67), this approach would lead to the XmR control charts shown in Figure 3 for the mean and Figure 4 for the natural log of the standard deviation.
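A minimal sketch of this two-chart approach, assuming randomly generated daily subgroups in place of the Table 1 data; the xmr_limits helper is illustrative, not from the article.

```python
import numpy as np

def xmr_limits(values):
    """Individuals-chart limits: mean +/- 2.66 * average moving range."""
    mr_bar = np.abs(np.diff(values)).mean()
    return values.mean() - 2.66 * mr_bar, values.mean() + 2.66 * mr_bar

# Hypothetical daily subgroups (e.g., wait times for all calls in a day);
# these stand in for the Table 1 data.
rng = np.random.default_rng(1)
daily = [rng.normal(100, 3, size=25) for _ in range(10)]

means = np.array([d.mean() for d in daily])
ln_sds = np.array([np.log(d.std(ddof=1)) for d in daily])

print("XmR limits for subgroup means:    ", xmr_limits(means))
print("XmR limits for ln(std deviations):", xmr_limits(ln_sds))
```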

Our data analysis conclusion using this approach is that the process is in control or predictable, which is quite different from the conclusion we made from Figure 1. A process capability analysis of all the data collectively yields Figure 5 (p. 70).

Rather than report process capability in Cp, Cpk, Pp and Ppk units, it is more meaningful to report a process capability and performance metric of expected overall performance in parts per million. In this case, it's 270,836.79 ppm, or about 27% nonconformance.

Figures 3 and 4 indicate the process is predictable. Our best estimate is that, in the future, the process would have a nonconformance rate of about 27%. This prediction is based on the assumption there will be no overall change in future process response levels. This assumption will be continually assessed as we monitor the 30,000-foot-level control chart.
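The sketch below shows how such an estimate can be produced, assuming an approximately normal overall distribution. The mean and standard deviation are hypothetical values chosen only so the result lands near the roughly 27% figure above; they are not the actual Figure 5 estimates.

```python
from scipy.stats import norm

# Hypothetical overall mean and standard deviation (not the Figure 5 values).
mu, sigma = 100.0, 4.5
lsl, usl = 95.0, 105.0   # customer specification limits from the example

# Probability of falling below the LSL or above the USL
nonconformance = norm.cdf(lsl, mu, sigma) + (1 - norm.cdf(usl, mu, sigma))
print(f"Expected nonconformance: {nonconformance:.1%} "
      f"({nonconformance * 1e6:,.0f} ppm)")
```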

We then need to calculate the cost of poor quality (COPQ) or the cost of doing nothing different (CODND) for this process. If the COPQ/CODND amounts are unsatisfactory, we should pull (used as a lean term) for creating a Six Sigma project.
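For example, using purely hypothetical figures, if the process produced 10,000 units a month and each nonconforming unit cost $50 to rework or replace, a 27% nonconformance rate would translate to a CODND of roughly 0.27 × 10,000 × $50, or about $135,000 per month.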

With this overall approach, the entire system is assessed when process improvements are addressed. This differs from the use of Figure 1 (p. 68), which could lead to unstructured, unproductive activity spent trying to understand the cause of isolated events that are really common cause.

Metrics drive behavior; however, you need to be sure to use the most appropriate sampling and control charting techniques. The 30,000-foot-level control chart and corresponding capability and performance metrics give a high-level view of what the customer is feeling. My intent in looking at the big picture is not to get you to use these metrics to fix problems; it is to get you to separate common cause process variability from special cause conditions, which may require immediate attention because something changed, perhaps dramatically.


FORREST BREYFOGLE is president and CEO of Smarter Solutions Inc. in Austin, TX. He earned a master's degree in mechanical engineering from the University of Texas-Austin. Breyfogle is a certified quality engineer and reliability engineer and an ASQ Fellow. He is also the author of Implementing Six Sigma and co-author of Managing Six Sigma, both published by John Wiley and Sons.


