3.4 PER MILLION

Control Charting at the 30,000-Foot-Level, Part 2
by Forrest Breyfogle III

In my November 2003 “3.4 per Million” column (p. 67), I described a traditional and a 30,000-foot-level procedure for creating control charts and making process capability/performance assessments for a continuous response. In this column, I will extend this methodology to an attribute response.

Separating Special Cause From Common Cause Events

The control charting terms “common cause” and “special cause” variability can be interpreted differently, and those interpretations lead to different action plans. To address this, I will present what I call a Shewhart approach and then a Deming approach. I will elaborate on my preferred methodology and explain how it can be integrated with a lean Six Sigma project-by-project improvement strategy.

In the 1920s, Walter Shewhart of Bell Laboratories developed a theory that there are two components to variation: a steady component from random variation and intermittent variation due to assignable causes.1 Shewhart’s improvement approach was that assignable causes could be removed with an effective diagnostic program, while random causes could not be removed without making basic process changes.

From this work, Shewhart developed the standard control chart. This control chart used limits set at three standard deviations of the sampling distribution to separate the steady component of variation from assignable causes. Shewhart’s control charts came into wide use in the 1940s because of war production efforts. Western Electric was later credited with adding sequence and runs tests to control charts.2

W. Edwards Deming later gained fame for his work with Japan in its process improvement efforts after World War II. Later in his career, he made significant headway helping American industries become more competitive. Within his work, Deming noted:

• “A fault in the interpretation of observations, seen everywhere, is to suppose that every event (defect, mistake, accident) is attributable to someone (usually the one nearest at hand), or is related to some special event.

• “We shall speak of faults of the system as common causes of trouble, and faults from fleeting events as special causes.

• “Confusion between common causes and special causes leads to frustration of everyone, and leads to greater variability and higher costs, exactly contrary to what is needed.

• “I should estimate that in my experience, most troubles and most possibilities for improvement add up to proportions something like this: 94% belong to the system (responsibility of management), 6% [are] special.”3

From these authoritative descriptions, we could paraphrase their conclusions as:

• Shewhart: A special cause is an assignable cause that could be internal or external to the system.

• Deming: A special cause is an unusual event external to the system.

This basic philosophic difference between Shewhart and Deming impacts process tracking. Consider, for example, that a key process input variable (KPIV) affects a process output. You might not know how this KPIV affects your process or even whether it adversely impacts the output of the process relative to customer needs. This type of KPIV could be created from differences between daily raw material batches or the number of daily phone calls received by a call center, which differ by day of the week.

The question is: Should these KPIVs (raw material batches or day of the week) be considered a special cause? A Deming approach would view normal output levels from these KPIVs as common cause; however, since these variables are assignable, a Shewhart approach would consider their impact to the process as special cause.

The distinction between the two approaches is not trivial; a business would approach the solution differently depending on which approach was indicated. Therefore, it is important to understand the implications of the two alternatives before making a procedure selection.

I would like to suggest an approach that builds upon the strengths of Six Sigma and is in alignment with Deming’s approach. I will refer to it as smarter Six Sigma solutions (S4). With this approach, I will track the organization using high level metrics so typical response levels from inputs within the system (even though they are assignable) will be reported as common cause variability.

For this to occur, I need an infrequent subgrouping/sampling plan so potential input variables, which can affect the response, occur between these subgrouping categories. I then need to create a control chart so the magnitude of the between-subgroup variability affects the lower control limit (LCL) and upper control limit (UCL) calculations.

With this approach, high level business metrics such as revenue and profit would typically be tracked using a monthly infrequent subgrouping/sampling plan. High level operational metrics such as cycle time, inventory, a critical part dimension and defective rates might have a daily or weekly infrequent subgrouping/sampling plan.4

Within the S4 approach, high level business metrics, which are not bounded by typical annual or quarterly boundaries, are referred to as satellite level metrics. High level operational or key process output variable (KPOV) metrics are referred to as 30,000-foot-level metrics.

Attribute Process Capability/Performance Metrics

To illustrate how different actions can result from these interpretations of special cause variability, let’s analyze the following process time-series data to determine whether the process is in control/predictable and then describe its process capability/performance metric.

Consider the daily transactions shown in Table 1, which include the noted nonconformances and calculated nonconformance rate for each period. Traditionally, proportion (p) nonconformance rates are tracked over time using a p chart to detect special cause occurrences. This approach would be appropriate with a Shewhart strategy.

Whenever a measurement is beyond the LCL or UCL on a control chart, the process is said to be out of control. Out of control conditions are special cause conditions, which can trigger causal problem investigations.

For the p chart of this data, which is shown in Figure 1, many causal investigations could have been initiated because there are many out of control signals. Out of control processes are not predictable; hence, no process capability claim should be made.

For p charts, the LCL and UCL are:

$$\mathrm{LCL} = \bar{p} - 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}} \qquad\qquad \mathrm{UCL} = \bar{p} + 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}$$

From these equations, the LCL and UCL are determined using the average nonconformance rate ($\bar{p}$) and the subgroup size ($n$). When the subgroup size is large, as it can be in many business situations, the distance between the LCL and UCL can become quite small. Variability from day-to-day material lot differences or day-to-day transaction differences can create the type of out of control signals shown in Figure 1.
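As a minimal sketch of this calculation (my addition, not part of the original column), assuming the Table 1 counts and a constant subgroup size of 10,000, the p chart limits can be computed as follows:

```python
# Minimal sketch: p chart limits for the Table 1 data, assuming a
# constant subgroup size of n = 10,000 per day.
nonconformances = [287, 311, 222, 135, 188, 175, 142, 215, 272, 165,
                   155, 160, 224, 245, 103, 273, 294, 217, 210, 241]
n = 10_000
p_bar = sum(nonconformances) / (n * len(nonconformances))  # average rate

sigma_p = (p_bar * (1 - p_bar) / n) ** 0.5   # binomial standard error
lcl = max(p_bar - 3 * sigma_p, 0.0)          # a p chart LCL cannot be negative
ucl = p_bar + 3 * sigma_p
print(f"p-bar = {p_bar:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f}")
# With n = 10,000 the limits are tight (roughly 0.0169 to 0.0255), and
# many of the daily rates in Table 1 fall outside them.
```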

An individuals (X) chart is a control chart that captures between-subgroup variability. When adjacent subgroups are used to determine the average moving range ($\overline{MR}$), the X chart has an LCL and UCL of:

$$\mathrm{LCL} = \bar{x} - 2.66\,\overline{MR} \qquad\qquad \mathrm{UCL} = \bar{x} + 2.66\,\overline{MR}$$

where $\bar{x}$ is the average of the individual values and the constant 2.66 is $3/d_2$, with $d_2 = 1.128$ for moving ranges of size two.

The control limits are a function of the average moving range between adjacent subgroups. The X chart is not robust to nonnormal data;5 therefore, for some situations, data need to be transformed when creating the control chart.

When attribute control chart subgroup sizes are similar, an X chart can often be used in lieu of a p chart. The advantage of this approach is that between-subgroup variability will impact control chart limit calculations. An X chart of the nonconformance rate in Table 1 is shown in Figure 2.
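A companion sketch, again assuming the Table 1 values, computes the X chart centerline and limits from the average moving range:

```python
# Minimal sketch: individuals (X) chart limits for the daily
# nonconformance rates in Table 1, using the average moving range
# between adjacent subgroups (2.66 = 3/d2, d2 = 1.128 for pairs).
rates = [0.0287, 0.0311, 0.0222, 0.0135, 0.0188, 0.0175, 0.0142,
         0.0215, 0.0272, 0.0165, 0.0155, 0.0160, 0.0224, 0.0245,
         0.0103, 0.0273, 0.0294, 0.0217, 0.0210, 0.0241]

moving_ranges = [abs(b - a) for a, b in zip(rates, rates[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)   # average moving range
x_bar = sum(rates) / len(rates)                    # centerline

lcl = x_bar - 2.66 * mr_bar
ucl = x_bar + 2.66 * mr_bar
print(f"x-bar = {x_bar:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f}")
# Roughly 0.0060 to 0.0363: every daily rate falls inside these limits,
# so the chart signals an in-control/predictable process, as in Figure 2.
```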

This X chart indicates the process is in control and is quite different from the conclusion drawn from the control chart in Figure 1. When a process is in control, it can also be said to be predictable. When a process is in control/predictable, we can not only make a statement about the past but also use historical data to make a statement about what we might expect in the future, assuming things stay the same.

The process capability/performance metric for this process is then a nonconformance rate of about 0.021. That is, because the process is in control/predictable, I estimate the future nonconformance rate will be about 0.021, unless a significant change is made to the process or something else happens that either positively or negatively affects the overall response.

This situation also implies that Band-Aid or firefighting efforts can waste resources when fundamental business process improvements are really what’s needed.

If improvement is needed for this 30,000-foot-level metric, a Pareto chart of defect reasons can give insight to where improvement efforts should focus. The most frequent defect type could be the focus of a new Six Sigma project. For this Six Sigma implementation strategy, I could say common cause measurement improvement needs are pulling for the creation of a Six Sigma project.
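As a sketch of such a tally (the defect categories and counts below are hypothetical, since the column does not list them), the Pareto ordering might be computed as follows:

```python
# Hypothetical sketch: tallying defect reasons for a Pareto analysis.
# The category names and counts are invented for illustration; the
# original column does not list the actual defect reasons.
from collections import Counter

defect_counts = Counter({
    "incomplete form": 1720,
    "wrong address": 980,
    "missing signature": 830,
    "data entry error": 704,
})

total = sum(defect_counts.values())
cumulative = 0
for reason, count in defect_counts.most_common():
    cumulative += count
    print(f"{reason:20s} {count:5d}  {cumulative / total:6.1%} cumulative")
# The largest category ("incomplete form" here) would be the natural
# focus of a new Six Sigma project.
```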

A subtle, but important, distinction between the two approaches is the customer view of the process. In the example above, the Shewhart approach (p chart) encourages a firefighting response for each instance outside the control limits, while the S4 approach encourages looking at the issue as an organic whole—an issue of capability rather than stability.

If the problem is an ongoing one, the S4 view is more aligned with the customer view (whether internal or external) of process performance. The process is stable, though perhaps not satisfactory, from the customer perspective.

Pulling for the Creation of Projects

The selection of projects within Six Sigma is critical. However, organizations often work on projects that may not be important to the overall business. With this procedure, organizations could even be suboptimizing processes to the detriment of the overall enterprise.

Business existence and excellence (E) depend on more customers and cash (MC2). The previously described S4 system focuses on E = MC2 for project selection.

Within S4, operational high level metrics at the enterprise level pull (used as a lean term) for the creation of projects. These projects can then follow a refined define, measure, analyze, improve, control (DMAIC) roadmap that includes lean tools for process improvement or a define, measure, analyze, design, verify (DMADV) roadmap for product or process design needs.

 

REFERENCES

1. Walter A. Shewhart, Economic Control of Quality of Manufactured Product, ASQ Quality Press, 1931, reprinted in 1980.

2. Western Electric, Statistical Quality Control Handbook, Western Electric Co., 1956.

3. W. Edwards Deming, Out of the Crisis, MIT Press, 1986.

4. Forrest Breyfogle III, Implementing Six Sigma, second edition, Wiley, 2003.

5. Forrest Breyfogle III, “XmR Control Charts and Data Normality,” Feb. 15, 2004, www.smartersolutions.com/pdfs/XmRControlChartDataNormality.pdf.

 


FORREST BREYFOGLE III is president and CEO of Smarter Solutions Inc. in Austin, TX. He earned a master’s degree in mechanical engineering from the University of Texas-Austin. Breyfogle is the author of Implementing Six Sigma and co-author of Managing Six Sigma, both published by John Wiley and Sons. He is also an ASQ Fellow.

Use the right approach to determine special cause variability.



TABLE 1   Process Time-Series Data

Day   Nonconformances   Subgroup size   Nonconformance rate
 1          287             10,000            0.0287
 2          311             10,000            0.0311
 3          222             10,000            0.0222
 4          135             10,000            0.0135
 5          188             10,000            0.0188
 6          175             10,000            0.0175
 7          142             10,000            0.0142
 8          215             10,000            0.0215
 9          272             10,000            0.0272
10          165             10,000            0.0165
11          155             10,000            0.0155
12          160             10,000            0.0160
13          224             10,000            0.0224
14          245             10,000            0.0245
15          103             10,000            0.0103
16          273             10,000            0.0273
17          294             10,000            0.0294
18          217             10,000            0.0217
19          210             10,000            0.0210
20          241             10,000            0.0241

 


FIGURE 1   p Chart of Nonconformance Rate



FIGURE 2   X Chart of Nonconformance Rate


If you would like to comment on this article, please post your remarks on the Quality Progress Discussion Board at www.asq.org, or e-mail them to editor@asq.org.
