
3.4 PER MILLION

Control Charting at the 30,000-Foot-Level

by Forrest Breyfogle III

In my “3.4 per Million” columns of years past, I first described a traditional and a 30,000-foot-level procedure for creating control charts and making process capability/performance assessments for a continuous response with multiple sampled subgroupings. Later, I made a similar comparison for attribute data. Then I described an alternative approach when there are infrequent failures.1,2,3

Now, I will extend the 30,000-foot-level control charting and process capability/performance assessment to non-normal individuals data that can have zero and negative values.

The 30,000-foot-level control chart is a Smarter Six Sigma Solutions (S4) methodology that quantifies what the customer of a process is experiencing. It tracks the output of a process at a high level and is not intended to be used to determine whether and when timely process input adjustments should be made.

This approach is consistent with W. Edwards Deming’s philosophy, which differentiates common cause variability from special cause variability by describing special cause as a fleeting event experienced by the system.4 This philosophy contrasts with traditional control charting, in which in-process/assignable cause conditions, which might not represent an unusual amount of variability, can create out-of-control signals that call for timely resolution.

For a 30,000-foot-level control chart, we do not want simply to monitor data over some recent period of time—for example, three months, six months or 12 months. We would like to present data on the control chart since at least the process’s last shift, which can extend for several years.

When a 30,000-foot-level control chart is in control or stable, we can say the process is predictable. This prediction statement could be for the complete time period of the control chart or for the last six weeks, if that is when a process shift was demonstrated.

If the process is predictable, we then can make a process prediction statement. This statement will be made on the assumption that nothing changes, either positively or negatively, in the system. Within S4, this prediction statement will be in a no-nonsense format that everyone can understand easily. If the prediction statement is not what we desire, we then can work to shift the process for the better by creating an S4 project. I refer to this strategy as a 30,000-foot-level metric pulling (using a lean term) for project creation.

Example: Days’ Sales Outstanding

The following example will describe the application of this concept to individual values from a non-normal distribution that has zero and negative values.

The prompt payment of invoices is important to a company. Consider that a days’ sales outstanding (DSO) metric, which tracks prompt invoice payment for a company, follows the underlying distribution shown in Figure 1.


These randomly generated data from a 3-parameter lognormal distribution simulate a real situation in which a value of zero days represents a payment received precisely on its due date. By simple observation, we note this distribution does not have a bell-curve appearance, in which tails extend equally on both sides of the median value and approach theoretically infinite values. This distribution has a long tail to the right.
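For readers who want to experiment with data of this type, the following is a minimal Python sketch (using NumPy and SciPy, standing in for the Minitab analysis) of generating 3-parameter lognormal values with a negative threshold so that zero and negative day values can occur. The shape, scale and threshold values are assumptions back-calculated to roughly match the results reported later in this column; they are not parameters taken from Figure 1.

```python
# Minimal sketch: simulate 3-parameter lognormal DSO-like data.
# Shape, scale and threshold are illustrative assumptions, back-calculated
# to roughly match the median and 80% interval reported later in the column.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

shape = 1.0           # sigma of the underlying normal distribution (assumed)
scale = np.exp(3.39)  # exp(mu) of the underlying normal distribution (assumed)
thresh = -29.08       # threshold; a negative value allows zero/negative days

# SciPy's lognorm treats loc as the threshold (location) parameter
dso_days = stats.lognorm.rvs(s=shape, loc=thresh, scale=scale,
                             size=1000, random_state=rng)

print(f"min = {dso_days.min():.1f} days, median = {np.median(dso_days):.1f} days, "
      f"max = {dso_days.max():.1f} days")
```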

The difference in invoice payment durations, which affects the spread of this distribution, could originate from differences in payment practices between the companies invoiced, invoice amounts (larger invoice amounts might take longer to be paid), day of the week invoiced and invoicing department. With the S4 methodology, any impact of these input variables on payment duration is considered a source of common cause variability.

Consider that nothing has changed within this process over time. Next, we want to simulate what might be expected if we were to track a process sample over time. In my “3.4 per Million” column in 2003, I considered multiple subgroup continuous response samples.5 I now will simulate what might be expected if only one paid invoice were randomly selected weekly.

For example, assume only one sample per subgroup was collected because the data are difficult to gather. I decided against a daily subgrouping because I thought there might be a difference by day of the week; hence, the decision to select a weekly subgrouping. An individuals control chart of one randomly selected weekly payment duration is shown in Figure 2.
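As a rough illustration of how the limits of an individuals (XmR) chart such as Figure 2 are typically computed, the sketch below calculates the center line and control limits from the average moving range for a hypothetical set of 35 weekly values drawn from the assumed distribution above. It is not a reconstruction of the article’s actual data.

```python
# Minimal sketch: individuals (XmR) chart limits from the average moving range.
# weekly_days is a hypothetical sample of 35 weekly payment durations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
weekly_days = stats.lognorm.rvs(s=1.0, loc=-29.08, scale=np.exp(3.39),
                                size=35, random_state=rng)  # assumed model

x_bar = weekly_days.mean()                    # center line
mr_bar = np.abs(np.diff(weekly_days)).mean()  # average moving range (n = 2)

ucl = x_bar + 2.66 * mr_bar                   # 2.66 = 3 / d2, with d2 = 1.128
lcl = x_bar - 2.66 * mr_bar

print(f"center = {x_bar:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
```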


Because these data are randomly selected, we would expect the control chart to show random scatter within the upper and lower control limits. This chart does not have that appearance. The data seem to exhibit a lower boundary condition at about -25 days. Although you could experience an out-of-control condition for these random data through chance, you would not expect to experience this with only 35 tracked subgroupings.

The X chart is not robust to non-normal data.6 Therefore, in some situations, such as this one, data need to be transformed when creating the control chart.

As Figure 3 shows, the normal distribution does not fit well. Both the 3-parameter Weibull and lognormal distributions fit well, while the 2-parameter lognormal and Weibull distributions could not be considered because these distributions cannot accept zero or negative values.
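One simple way to reproduce this kind of comparison outside Minitab is to fit candidate models by maximum likelihood and compare their log-likelihoods, as in the sketch below. This is only a rough stand-in for the probability-plot and goodness-of-fit comparison of Figure 3, applied to assumed data.

```python
# Minimal sketch: compare a normal fit with a 3-parameter lognormal fit by
# maximum-likelihood log-likelihood (a rough stand-in for Figure 3's comparison).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
data = stats.lognorm.rvs(s=1.0, loc=-29.08, scale=np.exp(3.39),
                         size=200, random_state=rng)  # assumed data

mu, sigma = stats.norm.fit(data)                      # normal fit
ll_normal = stats.norm.logpdf(data, mu, sigma).sum()

s, loc, scale = stats.lognorm.fit(data)               # 3-parameter lognormal fit
ll_lognormal3 = stats.lognorm.logpdf(data, s, loc, scale).sum()

print(f"normal log-likelihood          = {ll_normal:.1f}")
print(f"3-par lognormal log-likelihood = {ll_lognormal3:.1f}  (larger is better)")
print(f"estimated threshold (Thresh)   = {loc:.2f}")
```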


In general, I prefer the Weibull distribution for reliability analyses and the lognormal for transactional situations. Because of this, the individuals chart shown in Figure 4 is created through a logarithm transformation after the data set is adjusted by subtracting the 3-parameter lognormal threshold estimate. That is, each adjusted value equals the datum point value minus the threshold (Thresh), in which the Figure 5 Thresh estimate is -29.08. This transformed individuals plot indicates that the process is predictable.
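A minimal sketch of that transformation, assuming the hypothetical weekly data from the earlier sketch and the -29.08 threshold estimate: subtract the threshold, take logarithms and compute the individuals-chart limits on the transformed values.

```python
# Minimal sketch: threshold-adjusted log transformation, then XmR limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
weekly_days = stats.lognorm.rvs(s=1.0, loc=-29.08, scale=np.exp(3.39),
                                size=35, random_state=rng)  # as in the earlier sketch

thresh = -29.08                            # Figure 5 threshold estimate
log_values = np.log(weekly_days - thresh)  # datum point value minus Thresh, then log

x_bar = log_values.mean()
mr_bar = np.abs(np.diff(log_values)).mean()
ucl, lcl = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar

print(f"transformed: center = {x_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# The limits can be expressed back in days by reversing the transformation
print(f"in days: UCL = {np.exp(ucl) + thresh:.1f}, LCL = {np.exp(lcl) + thresh:.1f}")
```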


The next question to answer is: What is predicted?

Traditional Six Sigma process capability units such as Cp, Cpk, Pp, Ppk and sigma quality level require the input of a specification, which does not exist in most transactional environments. In addition, these units can cause much confusion even if a specification does exist. I prefer to answer this prediction question in terms that everybody understands. I call this form of reporting “process capability/performance metric.”7

I could simply estimate from a probability plot the percentage of invoices that are not paid on time or are paid beyond a certain number of days late. However, this form of reporting does not give us a picture of the amount of variability that exists within the current process. A median response value with an 80% frequency of occurrence gives us a much more descriptive picture of the expectations from our process, which is easy for anyone, from the CEO to the line operator, to understand.

From reviewing Figure 5, I note that the 3-parameter lognormal distribution fits well and seems appropriate for the modeled situation. From reviewing Figure 6, which does not consider the 3-parameter threshold adjustment, we estimate the process capability/performance metric to be a median of 0.7 days with an 80% frequency of occurrence of -20.9 days to 78.8 days.
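The median with an 80% frequency of occurrence corresponds to the 10th, 50th and 90th percentiles of the fitted distribution. The sketch below shows that calculation for an assumed 3-parameter lognormal model whose parameters were back-calculated to roughly reproduce the reported values; the article’s own estimates come from its fitted model, not from these assumed parameters.

```python
# Minimal sketch: median with 80% frequency of occurrence (10th/50th/90th
# percentiles) from an assumed 3-parameter lognormal model.
import numpy as np
from scipy import stats

s, loc, scale = 1.0, -29.08, np.exp(3.39)   # assumed (back-calculated) parameters

p10, median, p90 = stats.lognorm.ppf([0.10, 0.50, 0.90], s, loc=loc, scale=scale)
print(f"median = {median:.1f} days")
print(f"80% frequency of occurrence: {p10:.1f} to {p90:.1f} days")
```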



The logarithmic x-axis in the “with threshold” plot (Figure 5) resulted in a straight line probability plot, which is used to assess model fit.

The linear x-axis in the “without threshold” plot (Figure 6) has a curved probability plot, which is used to describe the process capability/performance metric in easy-to-understand terms.

S4 Project Selection

Typically, organizations evaluate the success of their Six Sigma deployment as the collective financial benefit from completed projects, in a system in which projects are pushed for creation. For example, they brainstorm for projects and rank them to see which to work on first.

However, when we step back to the collective enterprise view, we often do not see a project’s financial benefits within the overall organization’s return on investment or profit margins. With a Six Sigma push for project creation system, organizations could even be suboptimizing processes to the detriment of the overall enterprise.8

In S4, the collective examination of responses from an organization’s 30,000-foot-level metrics, along with an enterprise analysis, can provide insight into which metrics need improvement. This approach best ensures the overall enterprise ROI and profit margins benefit from project completion. That is, enterprise metric improvement needs to provide a pull system for S4 project creation.

Another E=MC2

Business existence and excellence (E) depend on more customers and cash (MC2). The previously described S4 system focuses on E=MC2 for enterprise management and project selection, including the tracking of 30,000-foot-level metrics whose data are not normally distributed and can include negative values.

In S4, high-level operational metrics at the enterprise level pull (used as a lean term) for the creation of projects. These projects then can follow a refined define, measure, analyze, improve and control road map9 that includes lean tools for process improvement, or a define, measure, analyze, design and verify road map for product or process design needs.


NOTE

Minitab 14 was used for all statistical analyses.


REFERENCES

  1. Forrest Breyfogle III, “Control Charting at the 30,000-Foot-Level,” Quality Progress, November 2003, pp. 67-70.
  2. Forrest Breyfogle III, “Control Charting at the 30,000-Foot-Level, Part 2,” Quality Progress, November 2004, pp. 85-87.
  3. Forrest Breyfogle III, “Control Charting at the 30,000-Foot-Level, Part 3,” Quality Progress, November 2005, pp. 66-70.
  4. W. Edwards Deming, Out of the Crisis, MIT Press, 1986.
  5. Breyfogle, “Control Charting at the 30,000-Foot-Level,” see reference 1.
  6. Forrest Breyfogle III, XmR Control Charts and Data Normality, Feb. 15, 2004, www.smartersolutions.com/pdfs/XmRControlChartDataNormality.pdf.
  7. Forrest Breyfogle III, Implementing Six Sigma, 2nd edition, Wiley, 2003.
  8. Forrest Breyfogle III, “21 Common Problems and What to Do About Them,” Six Sigma Forum Magazine, August 2005, pp. 35-37.
  9. Breyfogle, Implementing Six Sigma, see reference 7.

FORREST BREYFOGLE III is founder and CEO of Smarter Solutions Inc. in Austin, TX. He earned a master’s degree in mechanical engineering from the University of Texas-Austin. Breyfogle is the author of Implementing Six Sigma, co-author of Managing Six Sigma and the author/co-author of five other books on lean Six Sigma and the balanced scorecard. He is an ASQ fellow and recipient of the 2004 Crosby Medal for Implementing Six Sigma.

