EXPERT ANSWERS

Keeping score

Q: My organization began formally auditing its suppliers 18 months ago. Each supplier is assigned an audit score based purely on the maturity and execution of its quality system, without direct regard for actual product quality as measured by yields, reject rates and customer-reported failures.

We have found that to date there is no correlation between a supplier’s audit scores and its product quality. For example, some suppliers with relatively high audit scores have been responsible for considerable breakdowns in product quality, while others whose quality systems score low provide consistently high-quality product.

Is this unusual? If studies have been conducted on this topic, do they indicate product quality does rise when an organization’s quality system—or audit score—improves? If so, how long does that take?

Daniel Mueller
San Diego

A: If the audit scores your organization assigns to suppliers are based on the maturity and execution of their quality systems, product quality also should be reflected in those scores. Assigning a maturity score essentially requires evaluating effectiveness. For a quality management system, assessing effectiveness means determining the extent to which customers’ and other stakeholders’ expectations, including expectations for product quality, are met.

The objective of any supplier assessment system is to remove or at least minimize the effects of supplier deterioration in areas such as product quality, reliability and on-time delivery. The design of your supplier assessment and scoring system should enable your organization to achieve this and identify potentially low-performing suppliers.

But some degree of disconnect between supplier scores and the quality of delivered product is not unusual. In fact, you can’t design a system that perfectly aligns audit scores and actual effectiveness from the start.

A well-planned design can get you to 80 to 90% alignment, but the remainder must come from cycles of learning. Full alignment can require months or even years of adjustments. I believe you are currently in this stage of post-implementation learning and improvement.

If your system shows no correlation between supplier audit scores and product quality, either you are asking the wrong questions during the audit or your auditors' competency is in question. A disconnect also can arise from a poorly designed scoring system. For instance, higher scoring weights may be assigned to audit sections that have little direct impact on actual product quality as measured by yields, reject rates and customer-reported failures.
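To make the weighting issue concrete, here is a minimal sketch of a weighted audit score. The section names, weights and scores are made up for illustration; they are not from any particular audit checklist:

```python
# Hypothetical weighted audit score. Section names, scores and weights are
# illustrative assumptions, not taken from a real checklist.
audit_sections = {
    # section: (score out of 10, weight)
    "document_control":    (9.0, 0.30),
    "training_records":    (8.5, 0.25),
    "management_review":   (9.5, 0.20),
    "process_control":     (4.0, 0.15),  # most directly drives yields and rejects
    "incoming_inspection": (3.5, 0.10),  # most directly drives customer escapes
}

weighted_score = sum(score * weight for score, weight in audit_sections.values())
print(f"Overall audit score: {weighted_score:.1f} / 10")  # about 7.7
```

Even though the two sections that most directly drive delivered product quality score poorly, the heavy weights on quality-remote sections keep the overall score high, which is exactly the kind of disconnect described above.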

A well-designed supplier assessment/auditing system will include a defined objective, an infrastructure, a trained cross-functional team of auditors, a score review process, and an effective corrective and preventive action system. The supplier audit score review process must be dynamic.

When you see a trend suggesting deteriorating product quality, such as declining yields, increasing reject rates and increasing customer-reported failures, you should revisit the supplier's score. Similarly, when a supplier consistently meets or exceeds goals, its score should reflect that. Suppliers' scores also should reflect how effectively audit findings and corrective and preventive actions are closed.
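One way to picture such a dynamic review is the short sketch below, which nudges a supplier's score based on quality trends and overdue corrective actions. The thresholds and adjustment amounts are assumptions made for this example, not prescriptions:

```python
# Illustrative dynamic score review. Thresholds and adjustments are
# assumptions for this sketch, not part of any scoring standard.
def review_supplier_score(base_score, yield_trend, reject_trend, overdue_capas):
    """Return an adjusted supplier score (0-100).

    yield_trend / reject_trend: change over the review period, as fractions
    (e.g., -0.02 means yield fell two percentage points).
    overdue_capas: corrective/preventive actions open past their due date.
    """
    score = base_score
    if yield_trend < -0.01 or reject_trend > 0.01:
        score -= 10               # deteriorating quality: revisit the score downward
    elif yield_trend >= 0 and reject_trend <= 0:
        score += 5                # consistently meeting or exceeding goals
    score -= 2 * overdue_capas    # slow closure of findings also drags the score
    return max(0, min(100, score))

print(review_supplier_score(85, yield_trend=-0.03, reject_trend=0.02,
                            overdue_capas=2))  # 71
```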

The Baldrige Criteria for Performance Excellence, although not an auditing system, is an example in which processes and results are tied together to achieve an overall score. QP has published several case studies from Baldrige recipients linking business excellence to results.

In a nutshell, processes and results are important for a well-functioning system. Processes without results are useless, and results without processes are unsustainable.

Govind Ramu
Director, quality assurance
SunPower Corp.
San Jose, CA


Supplier inspections

Q: I need to develop an inspection plan for incoming supplier checks. I’m looking for frequency suggestions and sample sizes that are realistic, bearing in mind some suppliers are more critical than others.

Stacy Gregory
Cartersville, GA

A: Your question contains several parts that, taken together, point toward an appropriate sampling plan.

First, you noted this is for checking incoming supplier material, so you can eliminate in-process and final (or audit) inspection. Next, you mentioned you're interested in inspection frequency, which implies inspection will be performed on a series of lots from the supplier. The need for realistic sample sizes indicates inspection costs are a concern.

Finally, the last part of your request, noting that not all suppliers are equally critical, indicates you want a sampling plan flexible enough to apply more or less stringent inspection. Based on those three facets, there are a couple of options to consider.

One may be the use of a skip-lot sampling plan. These plans were developed by Harold Dodge and work well if the supplier generally has good quality. Like chain sampling plans, skip-lot sampling plans also are called cumulative result plans, which typically involve lot-by-lot inspection of a stream of product.

In general, such plans require certain assumptions be met regarding the nature of the inspection process:

  • The lot should be one of a continuing series of lots.
  • You expect these lots to be of the same quality.
  • The consumer should not expect that the submitted lot is any worse than the immediately preceding lots.
  • The consumer must have confidence in the supplier not to pass a substandard lot, even though other lots are of acceptable quality.

Under these conditions, you can use the record of previous inspections as a means of reducing the number of inspections performed on any given lot.

Applications may involve situations in which extensive and costly tests would be needed on the characteristics of bulk materials, such as chemical analysis of incoming raw material composition, or products made and shipped in successive batches from fairly reliable suppliers. Just as units are skipped during the sampling phase of a chain sampling plan, lots may be skipped—and passed—under a corresponding skip-lot plan.1
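To show the mechanics, here is a minimal sketch of a skip-lot procedure in the spirit of Dodge's plans: inspect every lot until a run of acceptances qualifies the supplier for skipping, then inspect only a fraction of lots, and fall back to lot-by-lot inspection on any rejection. The clearance number i and inspection fraction f are illustrative, not values from a published plan:

```python
import random

# Illustrative skip-lot logic; i and f are assumptions, not tabulated values.
def skip_lot_dispositions(lot_results, i=10, f=0.25, seed=0):
    """lot_results: booleans, True if a lot would pass inspection if inspected.
    Returns (action, accepted) per lot, where action is 'inspected' or 'skipped'."""
    rng = random.Random(seed)
    consecutive_passes = 0
    skipping = False
    dispositions = []
    for passes in lot_results:
        if skipping and rng.random() > f:
            # In the skipping state only a fraction f of lots is inspected;
            # the rest are accepted on the strength of the supplier's record.
            dispositions.append(("skipped", True))
            continue
        dispositions.append(("inspected", passes))
        if passes:
            consecutive_passes += 1
            if consecutive_passes >= i:
                skipping = True   # a run of i acceptances qualifies for skipping
        else:
            consecutive_passes = 0
            skipping = False      # any rejection restores lot-by-lot inspection
    return dispositions

# Example: a generally good supplier with one failing lot partway through.
print(skip_lot_dispositions([True] * 12 + [False] + [True] * 12))
```

A real plan also specifies how the inspected lots are chosen and when skipping may resume after a rejection; see reference 1 for the published plans and their operating characteristics.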

Another option is using a published sampling plan, such as Mil-Std-1916.2 Your question does not indicate whether you are doing attribute or variables inspection. The smallest sample sizes can be found under variables inspection, but many organizations now rely on c = 0 attributes plans, which typically are based on minimal sample sizes.

Mil-Std-1916 addresses the importance of statistical process control in modern acceptance control by incorporating an evaluation of the quality management system (QMS) along with c = 0 attributes sampling, variables sampling and continuous sampling plans as alternate means of acceptance in one standard. Thus, the standard is unique not only because there is switching among plans, but also because different alternate acceptance procedures may be selected from this standard.

Mil-Std-1916 provides two distinct means of product acceptance:

  1. Acceptance by contractor proposed provision, which requires qualification and verification of the QMS associated with the product.
  2. Acceptance by tables, which relies on traditional sampling plans for acceptance.

The contractor and the customer must decide which approach to use at the outset. If the contractor elects to rely on the quality system to demonstrate acceptability of the product, quality system documentation—including a quality plan—will be required to show the system is prevention-based and process-focused.

In addition, evidence of the implementation and effectiveness of the quality system will be required. This includes evidence of systematic process improvement based on process control and demonstrated product conformance.

If the contractor and customer decide to use tables for the acceptance of product, the approach is more conventional. Given lot size and verification level (VL), a code letter is selected from Table I of Mil-Std-1916.

The standard provides seven verification levels, with level seven being the most stringent. The VLs play a role similar to the acceptable quality levels of Mil-Std-105E, and they allow for adjustment of the severity of inspection. If no VL is specified, the default levels are critical (VII), major (VI) and minor (I).

In addition, tables are provided for three different sampling schemes: attributes, variables and continuous. Each is indexed by verification level and code letter. They are matched so it is possible to switch easily from one to another. All attributes plans in the standard have c = 0.
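The accept/reject logic of the standard's c = 0 attributes plans is straightforward once the sample size is known. In the sketch below the sample size is a placeholder; in practice it would be read from the standard's attributes table for the chosen code letter and VL:

```python
# c = 0 attributes acceptance logic; the sample size used in the example is a
# placeholder, not a value from Mil-Std-1916's tables.
def c_zero_accept(sample_defect_flags):
    """Accept the lot only if the inspected sample contains zero
    nonconforming units (acceptance number c = 0)."""
    return not any(sample_defect_flags)

clean_sample = [False] * 32                   # hypothetical sample of n = 32 units
print(c_zero_accept(clean_sample))            # True: accept the lot
print(c_zero_accept(clean_sample + [True]))   # False: a single defect rejects it
```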

Dean V. Neubauer
Engineering fellow
Corning Inc.
Corning, NY

Reference and note

  1. For more information on the construction of these plans, see Edward G. Schilling and Dean V. Neubauer, Acceptance Sampling in Quality Control, second edition, CRC Press, 2009.
  2. U.S. Department of Defense, Mil-Std-1916: Department of Defense Test Method Standard, http://guidebook.dcma.mil/34/milstd1916(15).pdf.
