2018

3.4 PER MILLION

Insight or Folly?

Resolve issues with process capability indexes, business metrics

by Forrest W. Breyfogle III

In lean Six Sigma, much training effort is spent on conveying the importance of having a measurement system so that consistent and correct decisions are made relative to assessing part quality and process attributes. In this training, measurement systems analysis (MSA) and associated gage repeatability and reproducibility (R&R) studies are integral. 

It seems MSA should be a reporting consideration for all forms of measurement, including business performance metrics. It also seems a focus should be placed on metric statements that are expressed in clear language. In the real world, however, how often are these goals achieved?

Every organization’s goal should be to achieve the three Rs of business: Everyone does the right things in the right way at the right time. One tool that provides direction for the three Rs goal is process performance metrics; that is, a process performance report-out should lead to the most appropriate action or inaction, which is independent of the person compiling the information. This basic right-behavior objective is like an inspection gage MSA, which ensures inspectors can adequately determine whether a manufactured component should be accepted or rejected. 

Because of this performance-reporting need, it seems that management and practitioners—from a conceptual MSA point of view—would be assessing the health of current scorecard and metric reporting systems. However, this doesn’t seem to occur. The question is: Why don’t we examine business metrics and process capability indexes reporting from an MSA point of view with the same level of intensity we do for product quality metrics?

To quantify the magnitude of this issue, consider different approaches someone might use to report process performance. You might choose a bar chart, a pie chart, a red-yellow-green scorecard, process capability indexes (that is, Cp, Cpk, Pp, Ppk and Cpm) or a table of numbers. For a given process, each of these reporting methods can provide a very different and somewhat subjective picture of how a process performs and whether any actions should be taken. 

To illustrate the magnitude of this issue, an example later in this column will show how reported Cp, Cpk, Pp and Ppk process capability indexes can be sensitive to process sampling procedures—a conceptual MSA issue.

In addition, these reports describe what happened historically, which may not be representative of the future. What is really desired is a statement about what is expected in the future so changes can be made, if needed.

Metric reporting should lead to the most appropriate action or inaction. However, process-metric decisions are often a function of how an individual or a group chooses its process sampling, data analysis and reporting procedures. From a conceptual MSA point of view, the reporting of process performance should be independent of the person doing the sampling and reporting. Also, it is most desirable to use a predictive measurement system in which the only difference between individual process report-outs is from chance sampling variability.

Organizations benefit when managers use predictive measurement reporting throughout their business functional process map. Practitioners can enhance the understanding of the benefits of this system by providing illustrative report-outs that compare current scorecard metric reporting to a predictive-performance metric reporting system.

Process capability indexes

The process capability index Cp represents the allowable tolerance interval spread in relation to the actual spread of the data when the data follow a normal distribution. This equation is:

$$C_p = \frac{USL - LSL}{6\sigma}$$

in which USL and LSL are the upper and lower specification limits, respectively, and the spread of the distribution is described as six times standard deviation; that is, 6σ.

Cp addresses only the spread of the process; Cpk is used to address the spread and mean (μ) shift of the process concurrently. Mathematically, Cpk can be represented as the minimum value of the two quantities:

$$C_{pk} = \min\left[\frac{USL - \mu}{3\sigma},\ \frac{\mu - LSL}{3\sigma}\right]$$

Pp and Ppk indexes are sometimes referred to as long-term capability or performance indexes. The relationship between Pp and Ppk is similar to that between Cp  and Cpk. The index Pp represents the allowable tolerance spread relative to the actual spread of the data when the data follow a normal distribution. This equation is:

$$P_p = \frac{USL - LSL}{6s}$$

in which, again, USL and LSL are the upper and lower specification limits, and s is the overall (long-term) sample standard deviation. No quantification for data centering is described within this Pp relationship.

Mathematically, Ppk can be represented as the minimum value of the two quantities:

$$P_{pk} = \min\left[\frac{USL - \mu}{3s},\ \frac{\mu - LSL}{3s}\right]$$
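As a concrete illustration, the short sketch below computes all four indexes from these formulas. It is a minimal sketch, not a standard library routine: the function name, the choice of Python with NumPy, and the assumption that a within-subgroup sigma estimate is supplied separately are all illustrative.

```python
# Minimal sketch: Cp, Cpk, Pp and Ppk from the formulas above.
# Assumes a within-subgroup sigma estimate is supplied for Cp/Cpk
# (for example, R-bar/d2), while Pp/Ppk use the overall sample
# standard deviation of all the data.
import numpy as np

def capability_indexes(data, lsl, usl, sigma_within):
    mean = np.mean(data)            # estimate of the process mean
    s = np.std(data, ddof=1)        # overall (long-term) sigma estimate
    cp  = (usl - lsl) / (6 * sigma_within)
    cpk = min((usl - mean) / (3 * sigma_within),
              (mean - lsl) / (3 * sigma_within))
    pp  = (usl - lsl) / (6 * s)
    ppk = min((usl - mean) / (3 * s),
              (mean - lsl) / (3 * s))
    return cp, cpk, pp, ppk
```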

Consider the confusion in calculating the seemingly simple statistic of standard deviation. Although standard deviation is an integral part of calculating process capability, the method used to calculate the value rarely seems to be adequately scrutinized.

In some cases, it is impossible to get a specific desired result if data are not collected in the appropriate fashion. Consider the following sources of continuous data:

  • Situation one: An x̄ and R control chart with subgroups of sample size five.
  • Situation two: An X chart with individual measurements.
  • Situation three: A random sample of measurements from a population.

Table 1: Standard deviation estimates for the three situations
  • Situation one (x̄ and R chart): $\hat{\sigma} = \bar{R}/d_2$
  • Situation two (X chart): $\hat{\sigma} = \overline{MR}/d_2$
  • Situation three (random sample): $\hat{\sigma} = s = \sqrt{\sum_{i=1}^{N}(x_i - \bar{\bar{x}})^2/(N - 1)}$

For these three situations, the Cp, Cpk, Pp and Ppk standard deviation estimate ($\hat{\sigma}$) is determined through the relationships in Table 1.

In Table 1, $\bar{\bar{x}}$ is the overall sample mean, xi is an individual sample (i) from a total sample size N, $\bar{R}$ is the mean subgroup range, $\overline{MR}$ is the mean moving range between adjacent subgroups, N is the total sample size, and d2 is a factor for constructing variables control charts; that is, d2 = 1.128 for a two-observation sample and 2.326 for a five-observation sample.
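A direct translation of Table 1 into code may make the three estimates easier to compare. This is a sketch under the assumption that subgrouped data arrive as a two-dimensional array (one row per subgroup); the function names are illustrative.

```python
# Sketch of the three Table 1 standard deviation estimates.
import numpy as np

D2 = {2: 1.128, 5: 2.326}  # d2 factors for variables control charts

def sigma_xbar_r(subgroups):
    """Situation one: x-bar and R chart, sigma-hat = R-bar / d2."""
    subgroups = np.asarray(subgroups, dtype=float)
    r_bar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
    return r_bar / D2[subgroups.shape[1]]

def sigma_individuals(x):
    """Situation two: X chart, sigma-hat = mean moving range / d2 (n = 2)."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(np.diff(x))) / D2[2]

def sigma_sample(x):
    """Situation three: random sample, the usual sample standard deviation."""
    return np.std(np.asarray(x, dtype=float), ddof=1)
```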

The data set in Table 2 illustrates the impact different data collection techniques can have on reported process capability metrics. When reporting process capability indexes, it is important that the data from which the metric is calculated come from a stable process. In other words, the process is in control.1

Table 2

To quantify the capability of this process, you could have selected only one sample instead of five for each subgroup. These two scenarios would result in the following standard deviation calculations:

$$\hat{\sigma}_{\text{five per subgroup}} = \frac{\bar{R}}{d_2} = \frac{\bar{R}}{2.326}$$

$$\hat{\sigma}_{\text{one per subgroup}} = \frac{\overline{MR}}{d_2} = \frac{\overline{MR}}{1.128}$$

$$\hat{\sigma}_{\text{overall}} = s = \sqrt{\frac{\sum_{i=1}^{N}(x_i - \bar{\bar{x}})^2}{N - 1}}$$

with the numeric values of $\bar{R}$, $\overline{MR}$ and s computed from the Table 2 data.

In the second standard deviation calculation, consider that sample one in Table 2 was treated as the individual reading for each subgroup.

Table 3

For a specification of 95 to 105, a statistical analysis program used similar standard deviation estimates when determining the process capability results in Table 3.

In Table 3, there is a large difference between the Cp and Cpk values for a subgrouping sample size of one versus five. An examination of the standard deviation equations provides the reason for the large difference: The Cp and Cpk calculations for the x̄ and R chart determined standard deviation from variability within subgroups, while for the individuals chart, standard deviation was calculated from variability between subgroups.
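The Table 2 values are not reproduced here, but the effect is easy to reproduce with simulated data. The sketch below, with made-up parameter values, generates a process whose mean drifts between subgroups; the within-subgroup estimate then understates the variation customers actually experience, and the x̄-and-R-based Cp looks far better than the individuals-chart view of the same process.

```python
# Simulated illustration (not the article's data): between-subgroup drift
# makes the within-subgroup sigma estimate much smaller than the
# between-subgroup (moving range) estimate, and Cp inflates accordingly.
import numpy as np

rng = np.random.default_rng(1)
shifts = rng.normal(100, 2.0, size=20)                   # drifting subgroup means
subgroups = shifts[:, None] + rng.normal(0, 0.5, size=(20, 5))

# Within-subgroup estimate (x-bar and R chart), d2 = 2.326 for n = 5:
r_bar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
sigma_within = r_bar / 2.326

# Between-subgroup estimate (individuals chart, one sample per subgroup),
# d2 = 1.128 for the two-observation moving range:
x = subgroups[:, 0]
sigma_between = np.mean(np.abs(np.diff(x))) / 1.128

lsl, usl = 95, 105
print(f"Cp from x-bar and R:      {(usl - lsl) / (6 * sigma_within):.2f}")
print(f"Cp from individuals data: {(usl - lsl) / (6 * sigma_between):.2f}")
```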

Conceptual MSA issues

With an effective conceptual MSA system, process sampling plans should have no effect on process performance statements.

Because of the differences noted earlier, you can conclude that process capability reporting can have MSA issues: A subgroup sample of five versus one did not provide similar answers; that is, the differences were more than the result of luck-of-the-draw sampling variability.

In this analysis, you might notice Pp and Ppk are similar for the two sampling procedures. An x̄ and R control chart analysis, however, would indicate the process was out of control; therefore, a process capability analysis would not be appropriate for this form of control-charting analysis.2

Other conceptual MSA issues with process capability indexes reporting include:

  • The physical implication of reported process capability indexes is uncertain and possibly wrong.
  • Without an accompanying statement of process stability from a control chart, all process capability indexes are questionable in value. Any process capability assessment of an unstable process is improper and often deceptive.
  • Process capability indexes do not provide a predictive performance statement.

Predictive reporting alternative

From a conceptual MSA point of view, Table 4 describes three reasons for statistical business performance charting (SBPC), or 30,000-foot-level3 tracking, and reporting for transactional and manufacturing process outputs.

Process performance reporting using process capability indexes, bar charts, pie charts, red-yellow-green scorecards or tables of numbers can provide differing process performance assessments, a conceptual MSA issue. In addition, process performance reporting does not structurally address the action options in Table 4.4

Table 4

The following example illustrates a system for describing process-output performance from a 30,000-foot-level. For this SBPC reporting, an individuals control chart's subgrouping frequency is selected so that typical variability from input variables occurs between subgroups.

Data from regions of stability can be used to estimate the nonconformance rate of a process during those timeframes. If there is a recent region of stability, data from this region can be considered a random sample of the future from which a prediction can be made. This prediction presumes no fundamental positive or negative changes will occur in the future relative to the process inputs or its execution steps.
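A minimal version of this stability check can be sketched as follows. The simple no-points-beyond-limits rule is a simplification of a full SBPC assessment, and the function names are this sketch's choices; the 1.128 divisor is the standard d2 constant for a two-observation moving range.

```python
# Sketch: individuals-chart limits and a crude region-of-stability check.
# A full 30,000-foot-level assessment involves more than this single rule.
import numpy as np

def individuals_limits(x):
    """Center line and 3-sigma limits, sigma-hat = mean moving range / 1.128."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    sigma = np.mean(np.abs(np.diff(x))) / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

def appears_stable(x):
    """True if no point falls beyond the control limits."""
    lcl, _, ucl = individuals_limits(x)
    x = np.asarray(x, dtype=float)
    return bool(np.all((x >= lcl) & (x <= ucl)))
```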

If, at some point in time, the output of a stable process is performing at an undesirable, nonconformance level, an organization can initiate a project to change process inputs or take steps to improve a process performance level.

For continuous data, a probability plot can provide an estimate of the process nonconformance rate in either percentage or defects per million opportunities units. For attribute data, the process estimated nonconformance rate is simply the overall combined subgroup failure rates in the region of process stability.
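For the continuous-data case, the estimate can be sketched as a normal-distribution calculation, which is what a probability plot assesses graphically. This sketch assumes the stable-region data are adequately normal; the function name and the SciPy dependency are this sketch's choices, not part of the article's methodology.

```python
# Sketch: estimated nonconformance rate for continuous data from a
# stable region, assuming an adequate normal fit (check the fit with a
# probability plot first).
import numpy as np
from scipy import stats

def nonconformance(x, lsl, usl):
    mu, s = np.mean(x), np.std(x, ddof=1)
    frac = stats.norm.cdf(lsl, loc=mu, scale=s) + stats.norm.sf(usl, loc=mu, scale=s)
    return frac, frac * 1e6   # fraction nonconforming and DPMO

# For attribute data, the estimate is simply total failures divided by
# total opportunities across the stable-region subgroups.
```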

Figures 1 and 2 illustrate 30,000-foot-level reporting and the results of an improvement project for both continuous and attribute data.5

Figure 1

Figure 2

Integrating SBPC

When reporting how a process is performing using capability indexes, the magnitude of the reported metrics for a given situation can be a function of sampling procedures. For example, different conclusions could be reached when process data are analyzed from an individuals chart report-out (one sample per subgroup) versus an x̄ and R chart report-out (multiple samples per subgroup); that is, a conceptual process performance MSA issue.

Traditional organizational performance measurement reporting systems can use tables of numbers, stacked bar charts, pie charts and red-yellow-green, goal-based scorecards. For a given situation, one person may choose one reporting scheme, while another uses a completely different approach. These differences can lead to a different conclusion about what is happening and should be done, as shown in Table 4. 

In addition, these traditional reporting methods provide only an assessment of historical data and make no predictive statements. Using this form of metric reporting to run a business is similar to driving a car by only looking at the rearview mirror—a dangerous practice.

When predictive SBPC system reporting is used to track interconnected business process map functions, an alternative, forward-looking dashboard performance reporting system becomes available. With this 30,000-foot-level metric system, organizations can systematically evaluate future expected performance and make appropriate adjustments if they don’t like what they see. 

Organizations can benefit when SBPC techniques are integrated within a business system that analytically and innovatively determines strategies with the alignment of improvement projects that positively impact the overall business.6


References and Notes

  1. Forrest Breyfogle III, "Control Charting at the 30,000-foot-level," Quality Progress, November 2003, pp. 67–70. The data used in Table 2 were also used in the 2003 article to compare traditional x̄ and R process stability assessment to 30,000-foot-level operational-metric reporting. In the 2003 article, a traditional control chart indicated that the process was out of control, while 30,000-foot-level reporting indicated the process was in control. The article also described the advantages of a 30,000-foot-level assessment when compared to traditional reporting.
  2. Ibid. The article also notes technical reasons why individuals control charting is preferred over x̄ and R control charting.
  3. Forrest Breyfogle III, Integrated Enterprise Excellence Volume II—Business Deployment: A Leaders’ Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard, Bridgeway Books, 2008.
  4. Ibid.
  5. Forrest Breyfogle III, "Control Charting at the 30,000-foot-level, Part 2," Quality Progress, November 2004, pp. 85–87. This article describes the advantages of attribute-failure-rate individuals charting (as shown in Figure 2) over a p-chart.
  6. Forrest Breyfogle III, "Control and Grow Your Enterprise," Quality Progress, February 2009, pp. 54–56.

Forrest W. Breyfogle III is founder and CEO of Smarter Solutions, Inc. in Austin, TX.  He earned a master’s degree in mechanical engineering from the University of Texas. Breyfogle is the author of a series of books on the Integrated Enterprise Excellence System. He is an ASQ fellow and recipient of the 2004 Crosby Medal.

