
3.4 PER MILLION

No Specification? No Problem

Improving process performance when missing specifications

by Forrest W. Breyfogle III

In a column earlier this year,1 I referenced a nine-step approach2 for determining an organization’s long-lasting operational metrics and how to decide where to focus improvement efforts so the entire enterprise benefits. The techniques provided enhancements to the balanced scorecard method.3

Step two of this nine-step system is to create a business fundamental performance map or value chain, which links functional processes with performance measures that can be tracked at the 30,000-foot level.4-7

Through this approach, measurements track the quality, cost and time performance of each function over time. If a function's performance is not satisfactory relative to big-picture enterprise needs and desires, the process underlying that metric will need improvement. In this way, a business-metric improvement need creates a pull for a process improvement effort to enhance that metric's performance.

A figure in my previous column8 provided a value-chain example, which included metrics such as lead time, work in process (WIP) and profit margins. In 30,000-foot-level reporting, if the process’s individuals control chart has a recent region of stability, you can conclude the process is predictable.

The next obvious question is: What is predicted for the metric? To address this in terms of percentage nonconformance, process capability and performance indexes (Cp, Cpk, Pp and Ppk), or sigma quality level, a specification is needed. But many metrics don’t have one.

To get around this shortcoming, organizations sometimes create targets and analyze them as if they were specifications. But this practice can yield deceptive results: Targets are often subjective, and reporting against them can degenerate into playing games with the numbers.

That’s why it’s important to know how to deal with 30,000-foot-level metric reporting for metrics that have no specification, such as lead time and WIP. The following techniques also can be applied to satellite-level metric reporting, which has a format similar to 30,000-foot-level reporting except that a financial measurement (such as profit margin) is being tracked.

Median and frequency

A useful approach for this no-specification situation is to describe an estimated median and an 80% frequency of occurrence for the stable region of the process metric at the 30,000-foot level. With this form of reporting, four out of five events are expected to occur in this range of values. The values bounding this range can be determined mathematically using a Z table or a statistical computer program.

Figures 1 and 2 illustrate this approach. Figure 1 is an individuals control chart that indicates predictability (that is, a recent region of stability). Data from the latest stable region of the 30,000-foot-level control chart can be considered a random sample of how you expect the process to perform in the future without any process improvement events. This is shown in Figure 2.

Figure 1

Figure 2

With a histogram, as illustrated in Figure 2, it is difficult to determine the desired 10% and 90% area-under-the-curve tail values. Therefore, reporting a median and an 80% frequency of occurrence using a probability plot is a better alternative. This type of presentation provides a good process baseline from which desired improvements can be assessed. From this plot, quick estimates also are available for other percentage and response levels.
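
For readers who prefer software to a Z table, the following Python sketch shows one way to estimate the median and the 80% frequency of occurrence from stable-region data, assuming an approximately normal response. It is not from the original column, and the data values are illustrative only.

  # Sketch: estimate the median and 80% frequency of occurrence from
  # stable-region data, assuming an approximately normal response.
  # The values below are illustrative, not the column's actual data.
  import numpy as np
  from scipy import stats

  stable_region = np.array([41, 52, 38, 47, 55, 44, 49, 36, 50, 43])

  mu = stable_region.mean()          # estimated mean (median for a normal fit)
  sigma = stable_region.std(ddof=1)  # sample standard deviation

  p10 = stats.norm.ppf(0.10, loc=mu, scale=sigma)  # 10% tail value
  p90 = stats.norm.ppf(0.90, loc=mu, scale=sigma)  # 90% tail value

  print(f"Estimated median: {mu:.1f}")
  print(f"80% frequency of occurrence: {p10:.1f} to {p90:.1f}")

Reading the same 10% and 90% values directly off a normal probability plot, as described above, yields equivalent estimates.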

This approach also can be applied to non-normal distribution situations, which often occur in transactional processes in which zero is a lower bound. For example, the time to conduct a task cannot be a negative number. The only difference is that an individuals control chart for single readings would need a normalizing transformation, and the appropriate probability distribution would need to reflect this transformation (for instance, lognormal).
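
For a zero-bounded metric such as task time, a lognormal fit is one reasonable choice. The sketch below (again with made-up data) applies a log transformation, fits a normal distribution on the transformed scale and back-transforms the percentiles:

  # Sketch: lognormal reporting for a zero-bounded metric such as task time.
  # Log-transform the data, fit a normal distribution on the log scale,
  # then back-transform the 10th, 50th and 90th percentiles.
  import numpy as np
  from scipy import stats

  task_times = np.array([2.1, 3.4, 1.8, 5.6, 2.9, 4.2, 1.5, 7.3, 3.0, 2.4])  # days

  log_data = np.log(task_times)
  mu, sigma = log_data.mean(), log_data.std(ddof=1)

  median = np.exp(mu)  # the lognormal median is exp(mean of the logs)
  p10 = np.exp(stats.norm.ppf(0.10, loc=mu, scale=sigma))
  p90 = np.exp(stats.norm.ppf(0.90, loc=mu, scale=sigma))

  print(f"Estimated median: {median:.1f} days")
  print(f"80% frequency of occurrence: {p10:.1f} to {p90:.1f} days")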

Tending to attendance issues

About 15 years ago, when I became chair of ASQ’s Austin Section, I thought section meeting attendance was important to address and chose an improvement in this metric as a measure of success for my term.9

The process of setting up and conducting a professional society section meeting with a program is more involved than you might think. Steps in this process include confirming a guest speaker and topic, arranging a meeting room, finding ways to announce and promote the meeting, and addressing many other tasks and issues.

To determine whether attendance improved during my term, we needed a baseline that would indicate the results to expect if nothing were done differently from the previous meeting-creation process. Against that baseline, we could assess whether our efforts to improve attendance were effective.

This situation does not differ much from a metric that might be expected from business or service processes: A process exists that needs to be improved, but there are no real specification limits. Some organizations have set a goal and used this as a specification limit to determine process capability and performance indexes. But this should be avoided because the practice can yield questionable results, as noted earlier.

What you would like is an alternative approach that can quantify—in easy-to-understand terms—how the process is performing and when an improvement was made.

Most people attending a monthly ASQ section meeting are members of the local section, so attendance is technically an attribute response (a member attends a specific meeting or does not) that can be modeled using a binomial distribution. But if we could track how many members attended meetings as a continuous response, this would provide an easier-to-understand and more actionable measurement.

The normal distribution can be used to approximate a binomial distribution when np and n(1 - p) are both at least five, with n being the sample size and p the proportion attending meetings. Because the ASQ Austin Section membership was about 800 (n) during the baseline timeframe and the proportion of people attending meetings was between 4% and 8% (0.04 and 0.08), a normal distribution could be used to approximate meeting attendance for these section meetings.
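
A quick check with the numbers quoted above (n of about 800, p between 0.04 and 0.08) confirms both conditions are met with room to spare:

  # Rule-of-thumb check for the normal approximation to the binomial.
  n = 800
  for p in (0.04, 0.08):
      print(p, n * p, n * (1 - p))  # np is 32 to 64; n(1 - p) is 736 to 768

Both products are well above five across the range of attendance proportions.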

With this continuous response tracking approach, previous Austin section meeting attendance could be reported at the 30,000-foot level, as shown in Figure 3. The individuals control chart in this report-out reveals the process has a recent region of stability, so you can conclude the process is predictable.

Figure 3

Based on this, a capability and performance metric statement can be made: The estimated median section-meeting attendance is 45, with an 80% frequency of occurrence for attendance between 34 and 57. If larger attendance than predicted is desired, process improvements are needed.

Establishing a goal

A stretch goal was set to increase monthly mean attendance by 50%. We knew the stretch goal was going to be exceptionally difficult to meet because recent cash-flow issues had forced us to reduce the frequency of our newsletter to every other month. Our focus was not on trying to drive improved attendance through the output measurement (that is, urging everyone to "do better" because attendance was not meeting our goal).

The executive committee had some control over the implementation of process changes but no direct control over how many people actually decided to attend meetings. The process changes we focused on implementing with the executive committee team (many seem common today but weren’t in the late 1990s) were:

  • Work closely with the program chair to define interesting programs and secure commitments from all presenters before the September meeting.
  • Create an email distribution list for ASQ members and others. Send notices during the weekend before the meeting.
  • Build a website.
  • Submit meeting notices to newspapers and other public media.
  • Videotape programs for broadcast on cable TV.
  • Arrange for door prizes for meeting attendees.
  • Send welcome letters to visitors and new members.
  • Post job openings on the website and email notices to those who might be interested.
  • Submit a "From the Chair" article to the newsletter chair on time so the newsletter is mailed on time.

The term of a section chair ran from July 1 to June 30, and there were no June, July or August meetings. My term encompassed meetings from September 1997 to May 1998. Figure 4 includes the baseline metric along with attendance during my term.

Figure 4

The first meeting during my term produced an out-of-control point in the favorable direction. That meeting featured a panel discussion that drew an unusually large number of attendees. The point was excluded from the future estimate because it was believed to be a special cause, but leadership should consider setting up this type of meeting again because the program format appeared to draw more attendees than the norm.

The control chart indicated a shift to greater attendance. Also, a t-test indicated a significant improvement in attendance during my tenure as section chair, which presumably was from our process improvement efforts. This level of attendance could be expected in the future if the new process was sustained with future section chairs. Estimated values for previous and expected future attendance rates are included in Table 1.

Table 1

A best estimate for the new process was that an average of 11 more people would attend each meeting. Also, the 80% frequency of occurrence range for meeting attendance may have narrowed from 23 attendees (57-34) to 18 (65-47).
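
The t-test mentioned above can be run with any statistical package. As a sketch of the form the comparison takes, the following Python code applies a two-sample t-test to hypothetical baseline and new-process attendance counts (not the section's actual data):

  # Sketch: two-sample t-test comparing baseline attendance with attendance
  # after the process changes. The counts below are hypothetical.
  import numpy as np
  from scipy import stats

  baseline = np.array([41, 52, 38, 47, 55, 44, 49, 36, 50, 43])
  new_process = np.array([58, 51, 62, 49, 60, 55, 53, 64])

  t_stat, p_value = stats.ttest_ind(new_process, baseline, equal_var=False)
  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests a real shift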

What good metrics lead to

It is important to have good metrics that lead to the 3Rs of business: Everyone doing the right things and doing them right at the right time.

The method described here for reporting and improving process capability and performance when there is no specification can help organizations achieve this objective.


References

  1. Forrest W. Breyfogle, "Inputs Into Action," Quality Progress, January 2012, pp. 52-55.
  2. Forrest W. Breyfogle, Integrated Enterprise Excellence Volume II—Business Deployment: A Leaders’ Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard, Bridgeway Books/Citius Publishing, 2008.
  3. Robert S. Kaplan and David P. Norton, "The Balanced Scorecard—Measures that Drive Performance," Harvard Business Review, January-February 1992.
  4. Forrest W. Breyfogle, "Control Charting at the 30,000-foot-level," Quality Progress, November 2003, pp. 67-70.
  5. Forrest W. Breyfogle, "Control Charting at the 30,000-foot-level, Part 2," Quality Progress, November 2004, pp. 85-87.
  6. Forrest W. Breyfogle, "Control Charting at the 30,000-foot-level, Part 3," Quality Progress, November 2005, pp. 66-70.
  7. Forrest W. Breyfogle, "Control Charting at the 30,000-foot-level, Part 4," Quality Progress, November 2006, pp. 59-62.
  8. Breyfogle, "Inputs Into Action," see reference 1.
  9. Forrest W. Breyfogle, Integrated Enterprise Excellence Volume III—Improvement Project Execution: A Management and Black Belt Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard, Bridgeway Books/Citius Publishing, 2008.

Forrest W. Breyfogle III is president and CEO of Smarter Solutions Inc. in Austin, TX. He earned a master’s degree in mechanical engineering from the University of Texas. Breyfogle is an ASQ fellow and recipient of the 2004 Crosby Medal.

