Q: How should process improvement teams select and develop key performance indicators (KPIs) to measure performance in the short and long term?
A: I would begin by understanding the objectives of the process improvement team. Usually, the process improvement objectives are derived from the broader organizational strategic objectives. Examples may include improving market share in specific product segments or geographical locations by a certain percentage, increasing the number of differentiated innovative products, or reducing operation expenses by a certain percentage. After you understand which of the strategic objectives the process improvement team maps to, you can determine appropriate KPIs for the process.
For example, for a process improvement team mapping to the objective of reducing operation expenses by X%, KPIs may include items such as first pass yield, throughput, number of engineering changes post production release, equipment uptime, training hours per employee and per process, and lost hours from accidents and safety incidents. An improvement team from a service function mapping to an organizational objective may have different KPIs than a team working in the manufacturing environment. Examples of service KPIs may include transaction duration, transactions on time and transaction error defects per million opportunities.
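As a quick illustration of the last of those service KPIs, defects per million opportunities (DPMO) is simply defects divided by total opportunities, scaled to one million. The function and figures below are illustrative only, not from the source:

```python
# Defects per million opportunities (DPMO) for a transactional process.
# All numbers are made up for illustration.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """DPMO = defects / (units * opportunities per unit) * 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Example: 38 errors found in 5,000 invoices, each invoice having
# 12 distinct error opportunities.
print(dpmo(38, 5_000, 12))  # roughly 633.3 DPMO
```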
It is difficult to identify exactly the appropriate KPIs until you know the larger organizational objectives your process relates to. Also, note that in this response I am not distinguishing between measures and indicators. An indicator relates to performance but is not a direct measure of it.1
There are some guidelines for selecting indicators. One item to consider is lag, which refers to how far the measure is removed from real-time performance of a process and how soon you may react to it. Indicators with short lag are preferred for the team directly managing the improvement because the team can react to them quickly.
Author Richard J. Schonberger suggests you should "react often to short lag-time metrics, less often to those with intermediate lag and seldom for metrics with long lag time. Contrarily, business scorecards tend to display metrics intermixed as to lag times, and managers tend to react to them, with goals and corrective assignments, at a fixed time interval, typically monthly."2 Schonberger suggests using three levels of indicators:
- Indicators with short lag times for the direct team management.
- Indicators with monthly or quarterly lag times for middle management.
- Indicators tracked on an annual basis for senior management.3
Indicators should become broader at each successive level. For example, percentage of nonconformances may be a first-level measure for a performance improvement team; cost of poor quality may be a second-level measure for middle management; and operation expenses may be a third-level measure for senior management. Second-level or third-level measures cannot exist by themselves without supporting first-level metrics the performance team can act on.
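As a hedged sketch of this rollup idea, the snippet below shows how a team-level nonconformance percentage might feed a middle-management cost-of-poor-quality figure, which in turn is one line item in senior management's operation expenses. All counts and costs are assumed for illustration:

```python
# Hypothetical three-level metric rollup. All figures are assumed.

units_inspected = 10_000
nonconforming = 250
reworked = 180                 # of the 250 nonconforming units
scrap_cost_per_unit = 42.00    # assumed
rework_cost_per_unit = 15.00   # assumed

# First level: percentage of nonconformances (the team acts on this daily).
pct_nonconforming = 100 * nonconforming / units_inspected

# Second level: cost of poor quality (scrap + rework only here; a full
# COPQ figure would also include appraisal and external-failure costs).
scrapped = nonconforming - reworked
copq = scrapped * scrap_cost_per_unit + reworked * rework_cost_per_unit

# Third level: COPQ is one component of operation expenses.
other_opex = 1_250_000.00      # assumed
opex = other_opex + copq

print(pct_nonconforming, copq, opex)
```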
To reinforce this idea, author Duke Okes recommends that the measures be deployed down and across the organization: "Having a high-level metric that does not have supporting metrics existing in the right processes will not add value other than letting management know that things aren’t going well."4
In his article, "Measuring the Hard Stuff: Teams and Other Hard-to-Measure Work,"5 Jack Zigon provides a structured approach to developing performance measurement:
- Review the organizational measures.
- Define measurement starting points.
- Weight the results (based on relative importance).
- Develop performance measures.
- Develop performance standards.
- Decide how to track the performance.
In general, do not select measures and indicators that are irrelevant to the improvement objective. This will waste the team’s time driving actions that have no impact. Also, ensure the measures are not redundant, such as measuring both yield and internal failure costs. If internal failure costs comprise several failure components, break them down into separate measures. Ensure the data are from a reliable source and are verified periodically for integrity.
Director, quality assurance
SunPower Corp., San Jose, CA
- National Institute of Standards and Technology, 2013-2014 Baldrige Criteria for Performance Excellence, Measures and Indicators, p. 47.
- Richard J. Schonberger, "Time-Relevant Metrics in an Era of Continuous Process Improvement: The Balanced Scorecard Revisited," Quality Management Journal, July 2013, pp. 10-18, http://bit.ly/qmjmetrics.
- Duke Okes, Performance Metrics: The Levers for Process Management, ASQ Quality Press, 2013, pp. 29-40.
- Jack Zigon, "Measuring the Hard Stuff: Teams and Other Hard-to-Measure Work," 1998, http://asq.org/forums/teamwork/proceedings/2000/Proceed/00025.html.
Q: Is rational subgrouping in statistical process control (SPC) (determination of subgroup sample size and sampling frequency) done by experience and knowledge about the process, or by using proven statistical methods? What’s the best approach? Is it acceptable to plot one SPC chart representing multiple manufacturing lines for the same process parameter?
A: By nature, manufacturing processes are subject to inherent random variation due to common causes. When no assignable causes of variation are present, adjusting the process in an attempt to remove the random variation only introduces an additional source of variation and increases the total. Random variation due to common causes can be reduced only through process improvement.
Therefore, the purpose of process control is to maintain the process in a state of statistical control by detecting and removing assignable causes. This is achieved using X̄ charts to detect process shifts and R charts to detect occurrences of increased variation. The X̄ and R values are calculated from samples that form rational subgroups, usually of product units manufactured at practically the same time, to exclude the effects of variation from assignable causes and increase the ability of the control charts to detect occurrences of special causes.
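As a minimal sketch of the standard X̄-R calculation, assuming subgroups of five units and the usual tabulated constants for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114), with illustrative measurements:

```python
import statistics

# X-bar and R chart limits from subgroup data. Constants are the
# standard tabulated values for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

subgroups = [  # illustrative measurements, five units per subgroup
    [10.2, 9.9, 10.1, 10.0, 9.8],
    [10.1, 10.3, 9.9, 10.0, 10.2],
    [9.7, 10.0, 10.1, 9.9, 10.0],
]

xbars = [statistics.mean(g) for g in subgroups]   # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]     # subgroup ranges

xbarbar = statistics.mean(xbars)  # grand average (center line, X-bar chart)
rbar = statistics.mean(ranges)    # average range (center line, R chart)

# X-bar chart limits: detect shifts in the process mean.
ucl_x = xbarbar + A2 * rbar
lcl_x = xbarbar - A2 * rbar

# R chart limits: detect increases in within-subgroup variation.
ucl_r = D4 * rbar
lcl_r = D3 * rbar

print(round(lcl_x, 3), round(ucl_x, 3), round(lcl_r, 3), round(ucl_r, 3))
```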
Determining the sample size of rational subgroups for the X̄ chart is relatively straightforward. Using statistical methods, you can calculate the probability of X̄ falling between the control limits as a function of the process shift for several sample sizes and graph those values as the operating characteristic (OC) curves of the chart.1 The rates of false alarms and of delayed detection of assignable causes also can be reduced by applying rules for interpreting patterns of data points on the control charts. For example, a run above or below the target longer than a critical length signals a process shift that must be investigated and corrected before noncomplying product is manufactured.
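The OC-curve calculation can be sketched as follows, assuming normally distributed measurements and 3-sigma limits. It computes β, the probability that an X̄ point stays inside the limits (that is, the shift is missed on a given sample) after the process mean shifts by a stated number of standard deviations:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def beta(shift_sigmas: float, n: int, limit: float = 3.0) -> float:
    """OC-curve value: probability the subgroup average stays inside the
    control limits after the process mean shifts by shift_sigmas
    standard deviations (of an individual unit)."""
    d = shift_sigmas * sqrt(n)
    return phi(limit - d) - phi(-limit - d)

# Larger subgroups detect the same shift more reliably: a 1-sigma shift
# is missed about 97.7% of the time with n = 1, 84.1% with n = 4, and
# 50% with n = 9.
for n in (1, 4, 9):
    print(n, round(beta(1.0, n), 4))
```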
Process shifts can be detected faster by taking larger samples more frequently, but this is not economical. Smaller, more frequent samples can reduce the losses from delayed detection of process shifts while noncomplying product is being produced. Statistical methods for the economical design of control charts, which jointly optimize the control limits, sample size and sampling frequency, have been proposed.2
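A rough way to compare sampling plans is through the average run length (ARL): after a shift, the expected number of samples until a signal is 1/(1 − β), so the expected time to detect is the sampling interval times the ARL. The comparison below, with assumed figures, shows that which plan detects faster depends on the size of the shift:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def arl(shift_sigmas: float, n: int, limit: float = 3.0) -> float:
    """Average run length: expected number of subgroups sampled until a
    point falls outside the limits after a shift of the given size."""
    d = shift_sigmas * sqrt(n)
    beta = phi(limit - d) - phi(-limit - d)
    return 1.0 / (1.0 - beta)

# Two plans that inspect the same five units per hour on average:
# (a) n = 5 sampled every hour, (b) n = 10 sampled every two hours.
# Which detects faster depends on how large the shift is.
for shift in (1.0, 2.0):
    for n, interval_hours in ((5, 1.0), (10, 2.0)):
        hours_to_signal = interval_hours * arl(shift, n)
        print(shift, n, round(hours_to_signal, 2))
```

With no shift at all, the in-control ARL of a 3-sigma chart is about 370 samples, the familiar false-alarm rate.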
Economically designed control charts minimize the total cost of quality. With economical automation of sensing and process monitoring, ultimately every unit of product can be inspected as it is produced, leading to the fastest possible detection of assignable causes. This approach, though, applies only to nondestructive testing.
When the process consists of multiple manufacturing lines, charting each stream (machine, machine head or workstation) separately allows faster identification of assignable causes occurring in any line. Because monitoring multiple manufacturing streams separately was quite difficult when control charts were made and reviewed manually, various approaches were used to reduce the number of control charts.
For example, a single chart was often used to plot the maximum process shift among all streams. When the maximum shift or maximum variation fell within the respective control limits, no action was needed. Only when an assignable cause was detected was effort taken to identify the stream where it occurred. This made process control more manageable.
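This group-chart idea can be sketched as follows, with hypothetical stream averages and shared limits; only the extreme streams are checked, and a specific stream is identified only when a signal occurs:

```python
# Hypothetical group-chart check: compare only the extreme stream
# averages against shared X-bar limits. Stream names, values and
# limits are assumed for illustration.

stream_means = {"head_1": 10.02, "head_2": 9.97,
                "head_3": 10.31, "head_4": 10.05}
ucl, lcl = 10.25, 9.75  # assumed shared control limits

hi_stream = max(stream_means, key=stream_means.get)
lo_stream = min(stream_means, key=stream_means.get)

signals = []
if stream_means[hi_stream] > ucl:
    signals.append(hi_stream)  # only now identify the offending stream
if stream_means[lo_stream] < lcl:
    signals.append(lo_stream)

print(signals)  # ['head_3']
```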
With modern computerized control charting, however, monitoring each process stream separately has become feasible even with large numbers of streams. An example is charting process streams at a textile plant that manufactures multiple kinds of yarn on multiple spinning machines per yarn type, with each machine having hundreds of heads.
Roche Molecular Diagnostics
- Douglas C. Montgomery, Introduction to Statistical Quality Control, second edition, John Wiley & Sons, 1991, pp. 111-113, 226-229.
- Ibid., pp. 413-447.