Defining Quality

Considering customers and applying weights to ensure the best products

by Charles E. Holman

In a quality system, data are continuously collected and analyzed in an effort to improve product quality. How the data are interpreted and analyzed depends on a variety of factors that range from extremely complex to simple. I like to keep things simple.

I am a U.S. Air Force (USAF) metrology laboratory lead evaluator charged with identifying and assessing risk in USAF precision measurement equipment laboratories (PMEL). PMELs calibrate and certify test, measurement and diagnostic equipment (TMDE) across a variety of measurement disciplines. Every day, technicians make risk-based decisions when certifying TMDE. It is the job of the PMEL quality program (QP) to observe and report how these decisions affect product quality. The effectiveness of the PMEL QP is the single most critical element in an operational PMEL.

In a USAF PMEL, quality personnel identify and document items of TMDE that either conform or don’t conform to process and quality standards. The data collected are used to track trends throughout a specified time period, usually six months depending on the data pool size. When the data are ready for analysis, a trend analysis (TA) team is assembled. The goal of the TA team is to identify the most significant root cause of the most prevalent nonconformity (NC) trending within the PMEL and brainstorm potential corrective actions to mitigate the negative trend. The corrective actions identified and implemented affect the measurement capability and QP effectiveness for the future of the PMEL.

Potential for improvement

In the USAF PMEL program, there are 43 critical and 40 noncritical quality and process codes available to categorize NCs.1 The TA team first identifies and states the most significant NC or trend occurring within the PMEL. What does significant mean? PMELs usually define significant as most prevalent or often occurring. In a nutshell, if documentation errors occurred significantly more often than out-of-tolerance conditions, documentation errors would be the focus of the TA. The problem with this approach is that not all NCs affect the customer in the same way. There are levels of severity that must be considered for each NC. After all, although documentation errors are critical NCs and may cause a customer some discomfort, an out-of-tolerance condition may cause injury, harm or even death.

Figure 1 is an example of a Pareto chart based on a simulated PMEL QP. Documentation errors tend to be the most significant (that is, the most prevalent and often-occurring) NC trend. To better discriminate NC data compiled throughout a specified time period, the TA team must first identify the greatest risks to its customers. In other words, the team should define what a quality product is to a customer. There are several ways to accomplish this task, but one of the most effective tools is the customer survey (Figure 2). The customer survey can be developed to help identify what creates value for the customer with regard to a calibrated item of TMDE. Bottom line: Is the quality program focusing on the needs of the customer or the needs of management?

Figure 1

Figure 2

When the results of the customer surveys are collated and analyzed, management can determine which areas their customers value most. This process shines a spotlight on where PMEL management should focus corrective actions with respect to how customers view a quality product. One byproduct of the survey is a prioritized list of the NCs that cost the customer the most value, which management can then use to assign a weight to each NC.
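One simple way to turn collated survey results into NC weights is to average each category's ratings across respondents. The sketch below assumes hypothetical survey data (the NC categories, the 1-5 "value lost" scale and the ratings are illustrative, not from the article):

```python
# Hypothetical survey responses: each customer rates how much value is lost
# when an item of TMDE is returned with each type of NC (1 = little, 5 = severe).
# Categories and ratings are illustrative assumptions only.
responses = [
    {"documentation error": 1, "out of tolerance": 5, "improper process": 2},
    {"documentation error": 2, "out of tolerance": 5, "improper process": 3},
    {"documentation error": 1, "out of tolerance": 4, "improper process": 2},
]

# The average rating per category becomes that NC's weight.
weights = {
    nc: round(sum(r[nc] for r in responses) / len(responses), 2)
    for nc in responses[0]
}
```

Under these sample ratings, out-of-tolerance conditions earn a much higher weight than documentation errors, reflecting where customers perceive the greatest loss of value.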

After applying the weights to each of the NCs, the Pareto chart shifts. With this new outlook on the data, the TA team can make process improvement decisions on areas of concern that are the greatest risk to the customer—not what management assumes is the greatest risk.

In the Pareto chart in Figure 1, PMEL management assigned documentation errors as the No. 1 NC to focus corrective actions toward based on frequency. As shown in Figure 3, using NC weights, documentation errors moved from No. 1 to No. 3, enabling corrective actions to be directed toward customer concerns.

Figure 3
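The rank shift between Figures 1 and 3 can be sketched in a few lines: multiply each NC's frequency by its customer-derived weight and re-sort. The counts and weights below are hypothetical, chosen only to illustrate a frequent-but-low-severity NC dropping in rank:

```python
# Hypothetical NC frequencies over the trending period (illustrative only).
counts = {"documentation error": 30, "out of tolerance": 12, "improper process": 18}

# Hypothetical customer-derived severity weights (illustrative only).
weights = {"documentation error": 1, "out of tolerance": 5, "improper process": 2}

# Frequency-only Pareto ranking, as in Figure 1.
by_frequency = sorted(counts, key=counts.get, reverse=True)

# Weighted ranking (frequency x severity weight), as in Figure 3.
weighted_scores = {nc: counts[nc] * weights[nc] for nc in counts}
by_weight = sorted(weighted_scores, key=weighted_scores.get, reverse=True)
```

With these sample numbers, documentation errors lead the frequency-only ranking but fall to third once weighted, mirroring the shift the article describes.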

While the use of managerial weights may seem daunting at first, the end result is a much better product for the customer. Additionally, pointing corrective actions toward the areas that directly affect quality as the customer perceives it reduces operational resource waste: chasing problems that may not exist. Finally, as the late Peter Drucker once said: "Quality in a service or product is not what you put into it. It is what the client or customer gets out of it."2


  1. United States Air Force, Air Force Metrology and Calibration Program—Technical Order 00-20-14, Secretary of the Air Force, 2013.
  2. Tammy A.S. Kohl, "Quality in a Service or Product Is Not What You Put Into It," Resource Associates Corp., March 18, 2012, http://bit.ly/1CG2VDW.

Charles E. Holman is a U.S. Air Force metrology lab lead evaluator in Heath, OH. He has a master’s degree in project management from Embry Riddle Aeronautical University-Worldwide in Daytona Beach, FL. An ASQ member, Holman is an ASQ-certified calibration technician.
