
3.4 PER MILLION

Control Charting at the 30,000-Foot-Level, Part 3

by Forrest Breyfogle III

In my November 2003 “3.4 per Million” column (p. 67), I described a traditional and a 30,000-foot-level procedure for creating control charts and making process capability/performance metric assessments for a continuous response.

And in my November 2004 column (p. 85), I made a similar comparison for attribute data. In this column, I will extend the 30,000-foot-level control charting and process capability/performance metric assessment to infrequent failure data.

The 30,000-foot-level control chart is a Smarter Six Sigma Solutions approach1,2 that quantifies what the internal or external customer of a process is experiencing over time. It tracks the output of a process at a high level and is not intended to be used to determine if and when timely process input adjustments should be made.

For example, a 30,000-foot-level metric can address the overall customer experience of time spent during checkout at a grocery store. A store would use a more frequent tracking and adjustment mechanism to adjust cashier coverage during natural peak and valley demand periods.

How well this input adjustment is managed can dramatically impact both the customer experience and the company’s profitability. The 30,000-foot-level chart tracks the impact this and other process inputs have on the response output.

When using a 30,000-foot-level control chart, you do not want to just monitor data over some predetermined recent period, such as three, six or 12 months. Instead, you should consider presenting data on the control chart at least since the process’s last shift, which can extend for several years.

When a 30,000-foot-level control chart is in control or stable, you can say the process is predictable. This prediction statement could be made using data from the complete time period of the control chart, if the process was stable for the entire chart, or the last six weeks, if that is when the last process shift occurred.

If the process is predictable, you can then make a process prediction statement based on the assumption that nothing changes either positively or negatively in the system. This approach presents this prediction statement in an easily understandable, no-nonsense format.

Whenever the prediction statement is undesirable, you can create a project to improve the response of the process output.3 I refer to this strategy as a 30,000-foot-level metric pulling (using a lean term) for process improvement or design project creation.

Separating Special Cause From Common Cause Events

In my earlier columns, I elaborated on how traditional control charting can lead to reacting to what could be considered common cause variability as though it were special cause. In the real world, this can lead to many firefighting activities that have little, if any, long-lasting benefits. Following are the high points of this earlier discussion.

Walter Shewhart’s traditional control charting techniques give focus to the identification of assignable causes. However, W. Edwards Deming notes, “We shall speak of faults of the system as common causes of trouble, and faults from fleeting events as special causes.”4 Based on these authoritative descriptions, you could conclude:

  • Shewhart describes a special cause as an assignable cause that could be internal or external to the system.
  • Deming describes a special cause as an unusual system event.

There is a fundamental difference between assignable causes and unusual events. Because of this, the control charting terms “common cause variability” and “special cause variability” can lead to different interpretations and action plans. I suggest creating 30,000-foot-level control charts that identify special cause conditions that are consistent with Deming’s description.

To accomplish this, you can use an infrequent subgrouping or sampling plan with this approach.5 The subgrouping interval for high-level control charts, such as the 30,000-foot-level chart, is selected so that the typical variability from input variables that could affect the response occurs between these subgroups.

For example, any typical differences that occur between working shifts, raw material lots, departments or machines that affect the output variable level can be thought to originate from common cause variability. This list of variables could lead you to a daily subgrouping interval in which the data within each subgroup interval would be a randomly selected datum point or a compilation of data.
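To make this concrete, a subgrouping plan like the one just described could be sketched as follows. This is only an illustrative fragment assuming pandas and hypothetical column names ("timestamp" and "response"); it is not part of the original methodology description.

```python
# Illustrative sketch: form daily subgroups so that typical shift, lot,
# department and machine differences occur between subgroups.
# Assumes a pandas DataFrame with hypothetical "timestamp" and "response" columns.
import pandas as pd

def daily_subgroups(raw: pd.DataFrame, how: str = "random", seed: int = 7) -> pd.Series:
    daily = raw.set_index("timestamp")["response"].resample("D")
    if how == "random":
        # One randomly selected datum point per daily subgroup interval.
        return daily.apply(
            lambda day: day.sample(1, random_state=seed).iloc[0] if len(day) else float("nan")
        ).dropna()
    # Alternatively, a compilation of that day's data (here, the daily mean).
    return daily.mean().dropna()
```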

You would then need to create a control chart strategy so the magnitude of the between-subgroup variability affects the lower control limit (LCL) and upper control limit (UCL) calculations.

An individuals (X) chart can help you accomplish this. Unlike an x̄ and R chart, a p-chart or a c-chart, an individuals chart has control limits that are a function of between-subgroup variability. For X charts, the UCL and LCL are usually calculated from the relationships UCL = x̄ + 2.66(MR̄) and LCL = x̄ – 2.66(MR̄), where x̄ is the average of the individual values and MR̄ is the average moving range between adjacent subgroups.
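Written out as a small illustrative calculation (the function name and NumPy usage are mine, not from the column), these limits could be computed as:

```python
# Individuals (X) chart limits driven by between-subgroup variability:
# UCL/LCL = x-bar +/- 2.66 * MR-bar.
import numpy as np

def x_chart_limits(subgroup_values):
    x = np.asarray(subgroup_values, dtype=float)
    x_bar = x.mean()                            # centerline
    mr_bar = np.abs(np.diff(x)).mean()          # average moving range between adjacent subgroups
    return x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar   # (LCL, centerline, UCL)
```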

An individuals chart works well when the underlying distribution from which the samples are taken is normal; however, an individuals chart is not robust to non-normality.6 In the real world, non-normal conditions can occur frequently. One example of a non-normal condition is a natural boundary condition. In this situation, the control chart can cause false signals where common cause variability appears to be special cause variability.
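One hedged safeguard, offered here as my own illustration rather than a prescribed step of the approach, is to screen the subgroup data for gross non-normality before trusting the individuals chart limits:

```python
# Rough normality screen before relying on individuals chart limits.
# A small p-value suggests a normalizing transformation may be needed first.
from scipy import stats

def roughly_normal(values, alpha: float = 0.05) -> bool:
    _, p_value = stats.shapiro(values)   # Shapiro-Wilk test
    return p_value > alpha
```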

The following example will extend this 30,000-foot-level control charting and process capability/performance assessment methodology to infrequent failure data.

Infrequent Failures

Infrequent failures in a company could appear as accidents or service outages. These types of failures might be reported in a format similar to Table 1 (p. 66), where No. is the number of failures that occurred during the month.


Because you are counting the number of defects each month, you might select a c-chart to track them (see Figure 1, p. 66), but it would not be very useful. Many months are zero, and it would be difficult to determine whether the process improved or degraded.
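The standard c-chart limits, c̄ ± 3√c̄ with the LCL truncated at zero, make the problem apparent: with mostly zero counts, the centerline and limits carry little information. A minimal sketch of that calculation (the function name is mine) is shown below.

```python
# c-chart limits for monthly defect counts: c-bar +/- 3*sqrt(c-bar),
# with the LCL truncated at zero.
import numpy as np

def c_chart_limits(monthly_counts):
    c = np.asarray(monthly_counts, dtype=float)
    c_bar = c.mean()
    half_width = 3.0 * np.sqrt(c_bar)
    return max(0.0, c_bar - half_width), c_bar, c_bar + half_width   # (LCL, centerline, UCL)
```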


Now let’s say the times between failures were recorded and presented in the format shown in Table 2 (p. 66). An individuals control chart of this data is shown in Figure 2 (p. 70). This chart indicates your process is predictable. I need to point out, however, that a normal distribution may not adequately represent time-between-failure data in general. A data normalizing transformation may be needed before creating an individuals chart.
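As one illustration of such a transformation, not a prescription from the column, a Box-Cox transformation could be applied to the time-between-failure data before computing the individuals chart limits:

```python
# Sketch: Box-Cox-transform the (strictly positive) days-between-failure data,
# then compute individuals chart limits in the transformed space.
import numpy as np
from scipy import stats

def transformed_x_chart(days_between_failures):
    transformed, lam = stats.boxcox(np.asarray(days_between_failures, dtype=float))
    center = transformed.mean()
    mr_bar = np.abs(np.diff(transformed)).mean()
    return lam, (center - 2.66 * mr_bar, center, center + 2.66 * mr_bar)
```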




Because this process is in control, you could estimate that the future mean time between failures would be about 84 days. This centerline of 84 could then be converted into an average annual or monthly failure rate.
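For example, the 84-day centerline converts to roughly 365 / 84 ≈ 4.3 failures per year, or about 0.36 failures per month:

```python
# Converting the 84-day mean time between failures into approximate failure rates.
mean_days_between_failures = 84.0
annual_failure_rate = 365.25 / mean_days_between_failures   # about 4.3 failures per year
monthly_failure_rate = annual_failure_rate / 12.0           # about 0.36 failures per month
```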

This type of situation also lends itself to including an 80% frequency of occurrence value as part of a process capability/performance metric report. This value can help others better understand the natural variability that is expected from the current process.
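In practice, this 80% frequency of occurrence is simply the interval between roughly the 10th and 90th percentiles of the distribution fitted to the data. The sketch below assumes a lognormal fit purely for illustration; the column itself only shows a probability plot.

```python
# Sketch: median and 80% frequency-of-occurrence interval (10th to 90th
# percentiles) from a fitted distribution. The lognormal choice is an assumption.
from scipy import stats

def capability_summary(days_between_failures):
    shape, loc, scale = stats.lognorm.fit(days_between_failures, floc=0)
    dist = stats.lognorm(shape, loc=loc, scale=scale)
    p10, median, p90 = dist.ppf([0.10, 0.50, 0.90])
    return median, (p10, p90)
```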

The probability plot in Figure 3 (p. 70) indicates a median of 84 days, with an 80% frequency of occurrence from about 50 days to 118 days. Suppose a cost analysis of these failures indicates improvement is needed. This would be the 30,000-foot-level metric pulling for project creation.


Now let’s say the project’s change was implemented and resulted in the 30,000-foot-level chart shown in Figure 4 (p. 70). This figure indicates the process has reached a new stability or predictability level.
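One way to reflect such a deliberate change on the chart, sketched here as my own illustration, is to stage the limits so the post-change limits are computed only from post-change subgroups:

```python
# Sketch: stage the individuals chart at a known process-change point so the
# new limits reflect only post-change, between-subgroup variability.
import numpy as np

def staged_limits(values, change_index):
    stages = []
    for stage in (values[:change_index], values[change_index:]):
        x = np.asarray(stage, dtype=float)
        mr_bar = np.abs(np.diff(x)).mean()
        stages.append((x.mean() - 2.66 * mr_bar, x.mean(), x.mean() + 2.66 * mr_bar))
    return stages   # [(LCL, center, UCL) before the change, (LCL, center, UCL) after]
```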


Figure 5 (p. 70) shows the implication of the process change, where the new process capability/performance metric estimate has a median of about 113 days and an 80% frequency of occurrence of about 106 to 121 days. This prediction estimate can be refined as soon as more data become available.

Pulling for the Creation Of Projects

The selection of projects within Six Sigma is critical. However, organizations often work on projects that may not be important to the overall business. They could even be suboptimizing processes to the detriment of the overall enterprise.

Within this methodology,7 high-level operational metrics at the enterprise level pull for the creation of projects. These projects can then follow a refined define, measure, analyze, improve, control (DMAIC) roadmap8 that includes lean tools for process improvement, or a define, measure, analyze, design, verify (DMADV) roadmap for product or process design needs.


REFERENCES

  1. Forrest Breyfogle III, “Control Charting at the 30,000-Foot-Level,” Quality Progress, November 2003, p. 67.
  2. Forrest Breyfogle III, “Control Charting at the 30,000-Foot-Level, Part 2,” Quality Progress, November 2004, p. 85.
  3. Breyfogle, “Control Charting at the 30,000-Foot-Level,” see references 1 and 2.
  4. W. Edwards Deming, Out of the Crisis, MIT Press, 1986.
  5. Breyfogle, “Control Charting at the 30,000-Foot-Level,” see references 1 and 2.
  6. Forrest Breyfogle III, “XmR Control Charts and Data Normality,” Feb. 15, 2004, www.smartersolutions.com/pdfs/XmRControlChartDataNormality.pdf.
  7. Breyfogle, “Control Charting at the 30,000-Foot-Level,” see references 1 and 2.
  8. Forrest Breyfogle III, Implementing Six Sigma, second edition, Wiley, 2003, Figure 43.13, p. 817.

FORREST BREYFOGLE III is president and CEO of Smarter Solutions Inc. in Austin, TX. He earned a master’s degree in mechanical engineering from the University of Texas-Austin. Breyfogle is the author of Implementing Six Sigma and co-author of Managing Six Sigma, both published by John Wiley and Sons. He is also an ASQ Fellow and recipient of the 2004 Crosby Medal for Implementing Six Sigma.

