This month’s first question

I work for an organization that wants to measure the effect of productivity and quality improvement interventions. An objective of an intervention might be to improve service excellence, for example. What are the best aspects or elements to use to measure how effective an intervention has been?

Our response

Often, organizations embark on improvement activities but fail to consider how they will measure the improvement before beginning the journey. Many times, the key to measuring such improvement activities is some type of data rationalization. After the data have been rationalized, or normalized, the resulting measures can be plotted on an individuals control chart to better visualize the effect of the improvement activity.

Data normalization brings at least two dimensions of the data together to overcome inherent variability. If an organization wants to measure productivity, one count it should consider is widgets produced. The widget count must be divided by another count, however, such as total hours worked. Dividing the total widget count by the total hours worked gives you widgets per hour, which is a normalized metric.

Performing a mathematical calculation with the two numbers transforms the raw counts (such as widgets produced or hours worked) into a metric: widgets per hour. This type of mathematical treatment of your data is essential to measuring quality improvement efforts because the calculation neutralizes the variability in the widget count and the hours count.
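To make the arithmetic concrete, here is a minimal sketch in Python with hypothetical period-by-period counts (none of these numbers come from the question):

```python
# Hypothetical daily counts (illustrative only)
widgets_produced = [412, 388, 430, 401]   # widgets per day
hours_worked = [160, 152, 168, 158]       # labor hours per day

# Normalize: divide the count of interest by the rational denominator
widgets_per_hour = [w / h for w, h in zip(widgets_produced, hours_worked)]
print(widgets_per_hour)  # roughly 2.5 to 2.6 widgets per hour each day
```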

As for quality improvement, parts per million defects already is a normalized number. Another way to measure quality improvement is to calculate the number of customer complaints per one million pieces produced (a metric used in high-volume manufacturing).

With data normalization, the data you are interested in tracking is the numerator. The denominator is the rational subgrouping over which the numerator will be expressed. If you are tracking customer complaints received via phone call, for example, you must determine whether complaints will be expressed as complaints per call, complaints per 100 calls or complaints per 1,000 calls. Knowing the relevant call volume is critical to selecting the denominator.
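The same pattern works for the complaint example. This short sketch, again with hypothetical weekly numbers, expresses complaints per 1,000 calls:

```python
# Hypothetical weekly data (illustrative only)
complaints = [7, 11, 9, 6]
calls_received = [4200, 5100, 4800, 3900]

# Express the numerator per 1,000 units of the denominator
complaints_per_1000_calls = [1000 * c / n for c, n in zip(complaints, calls_received)]
print(complaints_per_1000_calls)  # roughly 1.5 to 2.2 complaints per 1,000 calls
```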

After you have decided how to measure the effect of the intervention, go back and collect historical data (historical numerators and denominators), normalize the data and plug the numbers into an individuals control chart. Doing this will help you understand the natural variability of the data. After the date of the intervention, you can add the new data to the same chart.

The historical data are critical. If there isn't a change in the centerline, that doesn't mean the intervention hasn't been successful. Observe the delta between the upper and lower control limits. If that delta shrinks, the process is showing that quality has improved through reduced variability of the quality parameter.
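As a rough illustration of both steps, the Python sketch below uses hypothetical data to compute individuals chart limits from the average moving range for a baseline period and a post-intervention period, then compares the spread between the limits. It is only an illustration of the calculation, not a substitute for proper control chart software.

```python
def individuals_limits(values):
    """Centerline and control limits for an individuals (I) chart.

    Limits are the mean +/- 2.66 times the average moving range,
    the standard constant for moving ranges of size two.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(a - b) for a, b in zip(values[1:], values[:-1])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Hypothetical normalized data, e.g., complaints per 1,000 calls
baseline = [1.9, 2.3, 1.6, 2.1, 2.4, 1.8, 2.2]
post_intervention = [1.8, 1.9, 1.7, 1.9, 1.8, 2.0, 1.9]

for label, data in [("baseline", baseline), ("post-intervention", post_intervention)]:
    center, lcl, ucl = individuals_limits(data)
    print(f"{label}: centerline={center:.2f}, LCL={lcl:.2f}, "
          f"UCL={ucl:.2f}, limit spread={ucl - lcl:.2f}")
```

In this made-up data set, the centerline shifts only slightly, but the spread between the limits narrows from roughly 2.6 to roughly 0.8, which is the reduced-variability signal described above.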

Tracking the data in this format shows which way the data is trending, and will continue to provide insight regarding the effectiveness of the intervention. It will also show when another intervention is needed (when variability is no longer being reduced) and when the process has stabilized.

You can measure the impact of interventions in a twofold fashion: by normalizing the data and by plotting the normalized data on an individuals control chart. Summarizing your data this way demonstrates just how effective your intervention was.

This response was written by Keith Wagoner, certified quality engineer, Wilmington, NC.

This month’s second question

I have a supplier that uses a lot of scales, but isn’t calibrating the scales to the full range of capability. For example, the 5,000-pound scale is calibrated to only 3,000 pounds, and the 500-pound scale is calibrated to only 250 pounds. The supplier claims this is acceptable because its calibration facility is ISO 17025 certified. My understanding, however, is that scales aren't necessarily linear and should therefore be calibrated for the full range. (I issued a finding because the supplier uses the scales over the calibrated limits.)

Our response

It sounds like you discovered this issue while conducting a supplier audit. You have every right to be concerned: you are correct, and you were completely justified in issuing a finding.

Some older scales had nonlinear performance, especially if the scale was based on spring compression. Most modern industrial scales are based on load cells that tend to be linear throughout the operating range. For many of these scales, however, the measurement uncertainty is proportional to the operating range of the scale. Measurements taken near the upper end of the range have more uncertainty (variability) than at the lower end of the range.

This may affect accept/reject decisions if the product being weighed is close to the upper specification limit. Product that is slightly over the specification limit may "pass" if measurement error shifts the observed weight below the limit. This is a type II error (consumer's risk).
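As a rough, hypothetical illustration (assuming a normally distributed measurement error, which the question does not state), the sketch below estimates how often a unit whose true weight is just above an upper specification limit would still be observed as passing:

```python
from statistics import NormalDist

usl = 100.0          # upper specification limit, lbs (hypothetical)
true_weight = 100.5  # actual weight, slightly over the limit
sigma = 0.8          # standard uncertainty of the scale at this load (hypothetical)

# Probability the observed reading falls at or below the USL even though the
# true weight is out of specification: a type II error (consumer's risk)
p_false_accept = NormalDist(mu=true_weight, sigma=sigma).cdf(usl)
print(f"Chance of falsely accepting this unit: {p_false_accept:.0%}")  # about 27%
```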

The supplier claims to comply with the ISO 17025 standard. The standard itself does not provide instructions on how to calibrate specific devices. Rather, it provides general requirements the calibration service must meet to maintain certification. These requirements tend to be broad, such as management oversight, document control, complaint management and training of personnel, so it is difficult to point to a specific clause within the ISO standard.

That being said, I suggest you look at subclause 5.4.2: "The laboratory shall use test and/or calibration methods, including methods for sampling, which meet the needs of the customer and which are appropriate for the tests and/or calibrations it undertakes."1 Clearly, the tests the supplier is performing fail on two counts:

  1. The tests are not appropriate for the intended application of the scale.
  2. The tests do not meet your needs as the customer.

Quite simply, you can insist the supplier perform calibrations across the entire range of the applications for which the device will be used.

The standard also states: "Validation is the confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled."2 If the supplier limits the range of the calibration, as your question suggests, it is operating outside the validated range whenever it weighs something beyond the calibration limits.

Given your situation, one of the most important requirements of ISO 17025 is to estimate the measurement uncertainty of the device (subclause 5.4.6). Because the measurement uncertainty may change across the operating range of the device, the supplier must estimate it throughout the expected range of applications for the scale. As the customer, you should insist that the measurement uncertainty be small relative to the specification of the product you are purchasing.

For example, suppose you are purchasing raw material in a 400-pound tote. The specification for the tote is 400 +/- 4 lbs., so the specification is +/- 1% of nominal. The measurement uncertainty should be small compared to the product specification. Depending on the criticality of the specification, the measurement uncertainty may need to be as small as 10% of the product tolerance, or +/- 0.4 lbs.3
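A minimal sketch of that check in Python, using the numbers from this example (the 10% threshold is the rule-of-thumb ratio described above):

```python
nominal = 400.0      # lbs
tolerance = 4.0      # +/- lbs, so the spec is 400 +/- 4 lbs
uncertainty = 0.4    # +/- lbs, measurement uncertainty of the scale at this load

# Compare the measurement uncertainty with the product tolerance
ratio = uncertainty / tolerance
print(f"Specification is +/- {100 * tolerance / nominal:.0f}% of nominal")  # +/- 1%
print(f"Uncertainty is {ratio:.0%} of the tolerance")                       # 10%
print("Meets the 10% guideline" if ratio <= 0.10 else "Uncertainty too large")
```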

This response was written by Andy Barnett, director of quality systems, NSF Health Sciences Pharma Biotech, Kingwood, TX.

References and Note

  1. International Organization for Standardization (ISO), ISO/IEC 17025:2005—General requirements for the competence of testing and calibration laboratories, subclause 5.4.2.
  2. Ibid.
  3. For more information about measurement uncertainty, see Andy Barnett, "Calibration in Confined Spaces," Quality Progress, Vol. 48, No. 12, pp. 8-9.


