What’s acceptable?

Q: A handheld micrometer will be used to perform measurements. The manufacturer states that the micrometer has a resolution of 0.001 millimeters and an accuracy of 0.001 millimeters. Given our situation, we expect our measurement results will be within 0.005 millimeters at best. This is acceptable because the product being measured has a tolerance of +/-0.05 millimeters, which gives a test accuracy ratio (TAR) of 10:1. We want to use this micrometer.

Now the question is: When we calibrate the micrometer in-house, should the acceptable tolerance be that of the manufacturer, which is +/-0.001 millimeters, or ours, which is +/-0.005 millimeters? We don’t believe that +/-0.001 millimeters is realistic for us.

Bob Kennedy
Sligo, Ireland

A: This answer defines and explains several terms to help distinguish among different measurement concepts and types of variation. When the manufacturer states that the micrometer has a resolution of 0.001 millimeters, it means the smallest difference that can be measured is 0.001 millimeters. The stated accuracy of 0.001 millimeters refers to the degree of agreement between the measured value and a true value, the true value being that of a reference standard.

Calibration verifies accuracy and subsequent adjustment improves accuracy, but resolution is not affected. TAR is the ratio of the accuracy of the tool being calibrated (such as a micrometer) to the accuracy of the reference standard. Note that the TAR is not a function of the product tolerance. A TAR of 10:1 or 4:1 is considered typical.

Assuming we use 10:1, the implication for this example is that the reference standard should have an accuracy of at least 0.0001 millimeters. In practice, then, typical gauge blocks or rods with this accuracy can be used to adequately calibrate this micrometer to the desired level. TAR can be confused with resolution because another rule of thumb is that it is desirable to be able to detect differences in measurements as small as 1/10 of the total tolerance or process spread.
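The 10:1 calibration TAR arithmetic above can be sketched in a few lines. This is only an illustration of the ratio described in the answer; the function name is hypothetical, and the values are the ones given in the question.

```python
# Minimal sketch of the 10:1 test accuracy ratio (TAR) arithmetic.
# Hypothetical helper, not part of any standard library.

def required_reference_accuracy(instrument_accuracy_mm: float, tar: float = 10.0) -> float:
    """Accuracy the reference standard needs so that
    instrument_accuracy / reference_accuracy >= tar."""
    return instrument_accuracy_mm / tar

micrometer_accuracy = 0.001  # mm, manufacturer's stated accuracy
# Required reference-standard accuracy: 0.0001 mm (to floating-point precision)
ref_accuracy = required_reference_accuracy(micrometer_accuracy)
print(ref_accuracy)
```

With a 4:1 TAR instead, the same function would yield 0.00025 millimeters, which is why the answer notes that gauge blocks or rods at these accuracy levels are readily available.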

A separate issue is that measurements of the parts themselves may be attainable only to 0.005 millimeters accuracy. Most of the variation in the 0.005 millimeters comes from typical gage repeatability and reproducibility (GR&R) factors. Repeatability is a measure of the ability of the measuring system to obtain the same value when all factors are kept as constant as possible, and is often called equipment variation. Repeatability is a measure of the micrometer’s precision and is independent of calibration status. Reproducibility is often a larger contributor to the total variation, and is a measure of the ability of the measuring system to obtain the same value under varying conditions, most commonly different operators at different times. Reproducibility is often called appraiser variation.
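To show how repeatability and reproducibility combine into a total GR&R figure like the 0.005 millimeters above, here is a hedged sketch using the conventional root-sum-of-squares combination from GR&R studies. The two component values are made up purely for illustration.

```python
import math

# Illustrative GR&R combination: repeatability (equipment variation, EV)
# and reproducibility (appraiser variation, AV) combine by root sum of
# squares into total gage R&R. The numbers are hypothetical.

ev = 0.003  # mm, repeatability estimate (equipment variation)
av = 0.004  # mm, reproducibility estimate (appraiser variation)

grr = math.sqrt(ev**2 + av**2)  # combined gage R&R, mm
print(round(grr, 4))  # 0.005
```

Note that reproducibility dominates here, consistent with the answer's observation that appraiser variation is often the larger contributor.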

All this leads to the recommendation to think separately about calibration and total measurement system variation. Ensuring accuracy is always a good thing, so calibrate the micrometer to the highest reasonable criteria. Then, perform measurement systems analysis (such as GR&R) studies to identify and reduce the largest contributors to the total variation in the particular application. To answer the question directly, between the options, calibrating to 0.001 millimeters should be possible and is preferable.

Scott A. Laman
Senior manager, quality engineering and risk management
Teleflex Inc.
Reading, PA

Developing quality indicators

Q: I am developing a project quality indicator for oil and gas projects. Can anyone guide or share some examples on what should be included in this indicator?

Ashwani Kumar Khare
Abu Dhabi, United Arab Emirates

A: The term "indicator" means an observable measure that provides insight into a concept. Quality itself is more difficult to measure directly. For a more precise treatment, however, quality performance indicators can be divided into two types: outcome indicators and activities indicators.

Both outcome and activities indicators consist of two key components:

  1. A definition that clearly states what is being measured in meaningful terms for the intended audience.
  2. A metric that defines the unit of measurement or how the indicator is being measured and is precise enough to highlight trends in quality over time and highlight deviations from quality expectations that require action.

Outcome indicators are designed to help assess whether:

  • Quality-related actions (such as policies, programs, procedures and practices) are achieving their desired results.
  • Such actions are leading to a lower likelihood of poor products or services occurring and less adverse impact on human health, the environment and property.

Outcome indicators collect information and provide results that help organizations answer the broad question of whether the issue of concern is achieving the desired results. They are reactive and are intended to measure the impact of actions taken to manage quality. Outcome indicators are similar to lagging indicators and often measure change in quality performance over time or a failure of performance.

Thus, outcome indicators identify whether a desired result was achieved or if a desired quality result was not reached. Unlike activities indicators, they do not indicate reasons why the result was achieved or not.

Activities indicators are designed to help identify whether organizations are taking actions believed necessary to lower quality risks (for example, examining the types of policies, programs, procedures and practices). They are proactive measures and are similar to leading indicators. They measure quality performance against a tolerance level that shows deviations from quality expectations at a specific point in time. When used in this way, activities indicators highlight the need for action when a tolerance level is exceeded.
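The tolerance-level mechanism described above can be sketched as a simple check: flag the need for action when an indicator deviates from its target by more than the tolerance. The function, target and tolerance values here are hypothetical.

```python
# Sketch of an activities indicator checked against a tolerance level:
# action is triggered when the deviation from target exceeds tolerance.
# All names and values are illustrative assumptions.

def needs_action(indicator_value: float, target: float, tolerance: float) -> bool:
    """True when the indicator deviates from target by more than tolerance."""
    return abs(indicator_value - target) > tolerance

# e.g., target of 100% training completion with a 5-point tolerance
print(needs_action(92.0, 100.0, 5.0))  # True: deviation of 8 exceeds 5
print(needs_action(98.0, 100.0, 5.0))  # False: deviation of 2 is within 5
```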

Thus, activities indicators provide organizations with a means of checking—on a regular and systematic basis—whether the organization is implementing priority actions in the way they were intended. Activities indicators can help explain why a result (measured by an outcome indicator) has been achieved or not.

After you decide on the key issues of concern, consider which outcome indicators may be relevant. When choosing outcome indicators, it is useful to ask: "What would success in implementing this element look like?" and "Can this successful outcome be detected?" The answers to these questions should help define in specific, measurable terms what the quality-related policies, programs, procedures and practices are intended to achieve, or in other words, what the target is.

Activities indicators relate to these identified outcome indicators and help measure whether critical quality policies, programs, procedures and practices are in place to achieve the desired outcomes.

To identify the appropriate activities indicator for a specific outcome, identify the activities that are most closely related to the chosen outcome indicators and most critical to achieving the intended target. For example, consider:

  • Which activities must always be performed correctly (zero tolerance for error)?
  • Which activities are most vulnerable to deterioration over time?
  • Which activities are performed most frequently?

For example, suppose you work with other managers to identify the elements of a quality training program that are most important to maintaining a competent staff. Based on those discussions, you decide to focus on the indicator: "Is there a mechanism to check that training is actually performed according to the training program and achieves desired results?" Using this and the related bullets as a starting point, the following activities indicators are proposed:

  • Percentage of personnel receiving initial training related to job function (accounting for changes in job function).
  • Period of time between retraining activities.
  • Competence of the staff members based on post-training testing.
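The first proposed indicator can be computed from simple training records. This is a hedged sketch: the record structure and names are invented for illustration and are not drawn from any particular quality management system.

```python
# Illustrative calculation of the first proposed activities indicator:
# percentage of personnel who received initial training for their
# current job function. The records below are hypothetical.

personnel = [
    {"name": "A", "trained_for_current_role": True},
    {"name": "B", "trained_for_current_role": True},
    {"name": "C", "trained_for_current_role": False},  # changed job function, not yet retrained
    {"name": "D", "trained_for_current_role": True},
]

trained = sum(p["trained_for_current_role"] for p in personnel)
pct_trained = 100.0 * trained / len(personnel)
print(pct_trained)  # 75.0
```

Tracked over time, a drop in this percentage would signal the kind of deterioration the second bullet above asks about.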

James J. Rooney
Director, Department of Energy programs
Director, quality management
and lean Six Sigma services
ABS Consulting
Global Government Division 
Knoxville, TN
