2019

3.4 PER MILLION

Operational Definitions

The lifeblood of data collection and metric development

by T.M. Kubiak

If your organization is like most when it comes to data collection or metric development, little thought is actually given to these processes. I mean, after all, how difficult can it really be? Generally, the processes will follow something like this:

Once upon a time, senior management requested a new metric be established—cost per defect by part. A team was formed to facilitate the data collection process. The team consisted of five cost accountants and five quality engineers (one of each from each of the five production facilities on site), plus a team leader.

At the first meeting, the team leader assigned the accountants to collect monthly cost data and the engineers to collect monthly defect data for the previous 12 months. The team reconvened two days later, computed, graphed and reviewed the cost per defect by part metric. Nothing of consequence stood out. Therefore, the metric was integrated into the senior management monthly operations review.

The team leader assigned each accountant and engineer a monthly due date for submitting data, and the meeting adjourned. Fast forward several years. The metric has been institutionalized and accepted without question.

Operational definition defined

In the context of data collection and metric development, an operational definition is a clear, concise and unambiguous statement that provides a unified understanding of the data for all involved before the data are collected or the metric developed.

It answers: who collects the data, how the data are collected, what data are collected, where the data come from and when the data are collected. Additional "who," "how," "what," "where" and "when" answers may be required.

When the data are used to construct a metric, the operational definition further defines the formula that will be used and each term in the formula. As with data collection, the operational definition also answers: who provides the metric and who answers for its results; how the metric is to be displayed or graphed; where the metric is displayed; and when it is available. Again, additional "who," "how," "what," "where" and "when" answers may be required.
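
For example, a complete operational definition might pin down the cost per defect by part formula itself. The terms shown here are illustrative assumptions, not the story's original definition:

\[
\text{cost per defect}_{p,m} = \frac{C_{p,m}}{D_{p,m}}
\]

where $C_{p,m}$ is the total defect-related cost (for example, repair, rework and scrap) charged to part $p$ in fiscal month $m$, and $D_{p,m}$ is the number of defects recorded against part $p$ in the site quality system for that same fiscal month.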

Finally, the operational definition provides an interpretation of the metric, such as "up is good." While providing an interpretation of a metric may seem trivial, it often is not.

For example, consider "attrition rate." How should this be interpreted? Is down good? Yes? How far down is "good"? Some authorities suggest that some amount of attrition is acceptable because it permits new thinking to enter the organization. A high level of attrition, however, can bleed the organization of institutional knowledge and memory.

Unlike the cost per defect by part metric—where down is always good—the attrition rate metric may have boundaries (see the sketch after this list) in which:

  • Above the upper boundary—"down is good."
  • Below the lower boundary—"up is good."
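
As a minimal sketch of such a bounded interpretation (the boundary values and verdict wording are assumptions an organization would set in its own operational definition):

    def interpret_attrition(rate: float, lower: float, upper: float) -> str:
        # Hypothetical boundaries; each organization sets its own in the
        # metric's operational definition.
        if rate > upper:
            return "down is good"    # losing too much institutional knowledge
        if rate < lower:
            return "up is good"      # too little fresh thinking entering
        return "acceptable range"    # the metric signals no action

    # Example: with boundaries of 5% and 12%, a 15% attrition rate reads "down is good."
    print(interpret_attrition(0.15, lower=0.05, upper=0.12))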

Interestingly enough, I have seen metrics so complicated that senior management must be reminded constantly how to interpret them. As you might expect, such metrics are usually of little to no value and eventually fall into disuse.

Back to cost per defect by part

Getting back to the story, several years had passed since the metric was implemented. One day, a new quality engineer named Tom decided to take a deeper look into it after returning from a seminar where he learned about operational definitions.

Tom began delving into how the data were collected and quickly learned that none of the original data collection team members remained, including the team leader, and no documentation could be found.

Consequently, Tom began asking each of the accountants and engineers currently assigned to the task:

  • What type of month are you using?
  • What type of cost are you using?
  • Where did you get your defect data?
  • Who is responsible for collecting the data?
  • When are the data submitted?

During the discussions with each of the accountants and engineers, Tom learned:

  • The accountants collected cost data according to the fiscal month, while the engineers collected data according to the calendar month.
  • Because the details regarding the "cost" of a defect were not originally clarified, a variety of costs were being collected, such as repair, rework and scrap costs.
  • Quality engineers were collecting defect data from several different sources: some used individual databases that were often not current, while others used the site quality system that served the five production facilities.
  • In some instances, quality engineers delegated the responsibility for collecting the defect data to lower-level individuals who were not familiar with the data and its coding structure. Consequently, incorrect data were extracted and submitted at various times in the past. Furthermore, because of this improper delegation, due dates for data submission were missed, and the metric presented in the senior management monthly operations review was frequently incomplete.
  • About 29% of the part numbers made at the site are manufactured in more than one of the five production facilities. However, each facility focuses on a different customer base with different requirements, particularly with regard to what constitutes a defect. This is especially problematic for parts made at more than one facility.
  • Furthermore, quality inspectors are treated as a commodity resource and are often loaned among the facilities as work demand requires. As a result, the part-level defect data become contaminated due to the large influence of inspector error (that is, inspectors loaned from other facilities have different interpretations of what constitutes a defect).
  • Because the defect data were originally collected by calendar month (that is, no more granular than a month) and are now archived and not easily retrieved, they cannot be disassembled and reconstructed with new cost data. Therefore, the historical data for the metric are, essentially, unavailable.

Tom recognizes that the metric senior management has been using is now useless and must be removed from the monthly operations review. To do so, he must explain the situation to the vice president of quality, a highly undesirable task. Though Tom did not commit the original mistake, he can look forward to carrying some of the blame for this grievous error.

Now, Tom must create a new and meaningful metric. This time, however, he will generate the appropriate operational definitions, ensure that ownership of the metric is established and have senior management sign off on the metric.

Only after this is accomplished, and operational definitions have been written for all the data to be collected, can data collection begin.

Some noteworthy examples

Generally, the two metrics of greatest concern to most quality professionals are:

  • Quality metric (usually expressed in terms of defects).
  • On-time delivery metric.

Quality metric. Incoming inspection for a manufacturing organization received a small lot of critical components. Sampling inspection revealed enough defects in the sample to cause the lot to be rejected. Production supervision requested 100% inspection of the lot to sort out the good components and asked the material review board (MRB) to review the defective components.

Incoming inspection sorted the lot and 94% of the components passed. The remaining components were sent to the MRB. Eighty-five percent of those defective components were accepted by the MRB, and the rest were returned to the supplier for credit. How should the components accepted by the MRB be treated in the quality data?

Many organizations are quick to treat these components as "good." But they’re still defective and should be entered into the organization’s quality system as such. Although the MRB simply assessed these components as being usable, they are still, by operational definition, defective.
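
To make the distinction concrete, here is a minimal worked example, assuming a lot of 1,000 components purely so the percentages yield whole numbers (no lot size is given in the story):

    lot_size = 1000
    passed_inspection = round(lot_size * 0.94)            # 940 components sorted as good
    sent_to_mrb = lot_size - passed_inspection            # 60 defective components sent to the MRB
    accepted_by_mrb = round(sent_to_mrb * 0.85)           # 51 accepted for use as-is
    returned_to_supplier = sent_to_mrb - accepted_by_mrb  # 9 returned to the supplier for credit

    # By operational definition, every component the MRB reviewed remains a defect,
    # so the quality system should record 60 defects for this lot, not 9.
    defects_to_record = sent_to_mrb
    print(passed_inspection, accepted_by_mrb, returned_to_supplier, defects_to_record)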

On-time delivery metric. Most organizations assume that on-time delivery means having their product or service available on the due date specified by a purchase order or contract. To meet this date, they typically plan backward from the due date, allowing for time elements such as shipping, packaging, assembly and manufacturing.

One organization following this traditional approach found its largest customer to be quite happy with its sustained 100% on-time delivery results. As a result, the organization received special recognition from its customer’s supply management function. The organization’s management was thrilled and attributed this to its recent lean implementation efforts.

Quite unexpectedly, the customer began complaining about on-time delivery. The organization’s management was perplexed and angry: the organization had never been late with a delivery and was now delivering quite early. Management couldn’t understand why there was a problem.

Rather than seek an answer through direct and honest communication, however, the organization quietly seethed. Complaints continued, the relationship degraded and the organization was put on notice.

One day, a quality engineer reviewed the original purchase order requirements and discovered on-time delivery was clearly defined as "two business days earlier than the due date or one business day later than the due date, based on the latest purchase order revision due date. For orders to be counted as being received for the day, they must be received on or before 3 p.m. local time."

Of course. The customer was trying to minimize carrying inventory and its associated costs, and needed to allow the receiving function time to process everything received for the day.
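
Here is a minimal sketch of that purchase order clause as a receiving-side check. Holidays are ignored, and treating after-3 p.m. arrivals as next-business-day receipts is one reading of the clause, not wording from the story:

    from datetime import date, datetime, time, timedelta

    def add_business_days(d: date, n: int) -> date:
        # Shift a date by n business days, skipping weekends (holidays ignored).
        step = 1 if n >= 0 else -1
        remaining = abs(n)
        while remaining:
            d += timedelta(days=step)
            if d.weekday() < 5:   # Monday=0 ... Friday=4
                remaining -= 1
        return d

    def effective_receipt_date(received: datetime) -> date:
        # Deliveries arriving after 3 p.m. local time count as received the next business day.
        if received.time() > time(15, 0):
            return add_business_days(received.date(), 1)
        return received.date()

    def is_on_time(received: datetime, due: date) -> bool:
        # On time = no more than two business days early and no more than one business day late.
        earliest = add_business_days(due, -2)
        latest = add_business_days(due, 1)
        return earliest <= effective_receipt_date(received) <= latest

    # Example: due Friday, March 8, 2019; received Wednesday, March 6 at 2:30 p.m. -> on time.
    print(is_on_time(datetime(2019, 3, 6, 14, 30), date(2019, 3, 8)))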

This new insight provided the organization the ability to retain its largest customer by adjusting its planning and shipping schedules. But it took time to rebuild relationships and trust.

Added caution, communication

On the surface, data collection and metric development may appear to be simple, straightforward processes. These examples indicate otherwise. Significant attention must be given to the development of operational definitions to ensure data are collected properly so that useful metrics can be established.

Otherwise, time, energy and resources will be wasted. More importantly, bad or even no decisions will be made based on faulty metrics.

When dealing with data and metrics up and down the value chain, additional caution and communication are required. Assumptions should be minimized or regarded with skepticism.

The next time someone argues with you that generating operational definitions is a waste of time because it’s so easy and everyone understands what needs to be done, tell them this story.


T.M. Kubiak is founder and president of Performance Improvement Solutions, an independent consulting organization in Weddington, NC. He is coauthor of The Certified Six Sigma Black Belt Handbook (ASQ Quality Press, 2009) and author of The Certified Six Sigma Master Black Belt Handbook (ASQ Quality Press, 2012). Kubiak, a senior member of ASQ, serves on many ASQ boards and committees, and is a past chair of ASQ’s Publications Management Board.

