 2019

This month’s first question

My organization received 250 assembled electronics components. Out of 125 sold so far, two are defective. How many of the remaining 125 components would I have to test to have some confidence that no defects remain? Would I use a hypergeometric sampling table or ASQ Z1.4?

Our response

When I am asked a statistical question like this, I try to understand the situation by asking several questions, such as:

• How expensive are the components?
• If the components are relatively inexpensive, does it make sense to reject the remainder of the lot?
• What is the impact to customers if they receive a defective assembly?
• Is the testing non-destructive?
• How expensive and time consuming is the screening test?

Because you already have identified two failures, the likelihood is high that there are additional defective assemblies in your remaining inventory.

Based on binomial probability, an observed failure rate of 2/125 = 0.016 and the assumption that the units sold are similar to the remaining units, there is about an 87% chance that at least one more defective unit is in the remaining inventory. Releasing the remaining inventory without testing is not a good option.
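The 87% figure can be reproduced with a few lines of code. This is a sketch of the calculation described above, assuming the observed rate of 2/125 applies independently to each of the 125 remaining units:

```python
# Probability that at least one defective unit remains among the unsold
# components, assuming each unit is defective with the observed rate 2/125.
p_defect = 2 / 125            # observed defect rate, 0.016
n_remaining = 125             # unsold components still in inventory

# Complement of "all 125 remaining units are good"
p_at_least_one = 1 - (1 - p_defect) ** n_remaining
print(round(p_at_least_one, 2))  # prints 0.87
```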

Hypergeometric sampling is inappropriate in this situation: it requires knowing exactly how many defective units the population contains. You have a relatively small population, but you don’t know how many defects are in the remaining inventory.

Compare this to a classic application of hypergeometric sampling used by card counters at a casino. They know how many face cards are in a full deck and that the probability of drawing a face card changes as the cards are dealt. This affects the odds that a particular hand (such as a straight) will be a winning hand. Card counters give themselves away if they systematically increase their bets as the stack of cards diminishes.
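To illustrate the "known population" condition that makes the hypergeometric distribution applicable, here is a small sketch (the card numbers are standard deck facts, not from the column): a 52-card deck contains exactly 12 face cards, so the probability of drawing a given number of face cards in a hand can be computed exactly.

```python
from math import comb

def p_face_cards(k, hand=5, deck=52, faces=12):
    """Hypergeometric probability of drawing exactly k face cards when
    dealing `hand` cards without replacement from a deck that is known
    to contain `faces` face cards."""
    return comb(faces, k) * comb(deck - faces, hand - k) / comb(deck, hand)

print(round(p_face_cards(0), 3))  # prints 0.253 (no face cards in the hand)
```

The key contrast with the inventory question: here the count of "defectives" (face cards) in the population is known up front, which is exactly what the remaining-inventory problem lacks.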

You certainly may use the American National Standards Institute (ANSI) and ASQ quality control standard Z1.4 to make a more-informed decision. The normal inspection table is based on about 90% confidence that the population defect rate is at or below the acceptance quality limit (AQL) if the number of defects found in the sample is less than or equal to the acceptance level.

For example, if the AQL = 0.25, the minimum required sample size is 50 and the acceptance criteria are A=0, R=1. So, if you sample 50 units and find no defects, you can conclude, with about 90% confidence, that the population defect rate is less than or equal to the AQL.

The AQL values in the ANSI/ASQ Z1.4 table are given in percentages, so 0.25 is 0.25%, or 2.5 defects per thousand. The ANSI/ASQ standard does not guarantee or "give confidence that no defects remain."1 It can only show that the defect rate in the population is less than or equal to the acceptance level, as specified by the AQL.
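As a hedged sanity check on the plan above (my own calculation, not a value taken from the Z1.4 tables): with n = 50 and an acceptance number of zero, the probability that the plan accepts a lot whose true defect rate equals the AQL of 0.25% can be computed directly.

```python
# Probability of zero defects in a sample of 50 when the true defect
# rate equals the AQL of 0.25% (2.5 defects per thousand).
aql = 0.0025     # 0.25% expressed as a proportion
n = 50           # minimum sample size from the plan
p_accept_at_aql = (1 - aql) ** n
print(round(p_accept_at_aql, 3))  # prints 0.882
```

In other words, a lot running right at the AQL passes this plan roughly nine times out of 10, consistent with the "about 90% confidence" framing above.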

You can calculate the probability of passing the sampling plan outlined earlier on most calculators using the formula y^n, where y is the probability of finding a good unit and n is the number of trials.

The probability of finding a good unit is one minus the probability of finding a bad unit, or 1 – 0.016 = 0.984. So, the probability of passing the sampling plan in this situation is 0.984^50 = 0.446. You can try this sampling plan, but the odds are not in your favor.
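The y^n calculation above can be sketched directly, assuming the observed defect rate of 2/125 also holds for the remaining inventory:

```python
# Probability of passing the n = 50, accept-on-zero sampling plan when
# the true defect rate equals the observed rate of 2/125.
p_bad = 2 / 125        # observed defect rate, 0.016
y = 1 - p_bad          # probability a sampled unit is good, 0.984
n = 50                 # sample size from the Z1.4 plan

p_pass = y ** n        # all 50 sampled units must be good
print(round(p_pass, 3))  # prints 0.446
```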

The only way to be certain that no defects remain is to test 100%. This may be the most prudent approach, especially if the cost of testing is less than the cost of releasing defective product. Consider the cost of a warranty claim and the effect on your organization’s reputation. Could a customer be injured if the assembly fails? Now that you know you have a quality issue, you also must consider your exposure to liability lawsuits.

If the cost of testing is low, consider 100% testing. If testing is expensive or destructive, you might be better off forgoing the testing altogether and reworking the assemblies with new components.

Reference

1. American National Standards Institute (ANSI) and ASQ, ANSI/ASQ Z1.4 Sampling Procedures and Tables for Inspection by Attributes (2008).

This response was written by Andy Barnett, director of quality systems, NSF Health Sciences Pharma Biotech, Kingwood, TX.

This month’s second question

Is design for Six Sigma (DFSS) considered the same thing as define, measure, analyze, design, verify (DMADV)? If not, what are the differences?

Our response

There are so many acronyms flying around in the improvement world that people sometimes lose track of their hierarchy. Regarding the differentiation between DFSS and DMADV, think about it this way:

DFSS is the method—it is the overarching construct that we refer to as Six Sigma. DMADV is the tool used to design new products and processes.1

Think about it like this:

• DFSS is the tool box and DMADV is the tool.
• DFSS is the goal and DMADV is the means to accomplish the goal.

Hopefully, this explanation helps to shed some light on the difference between DFSS and DMADV.

Reference

1. Gary A. Gack and Kyle Robison, "Integrating Improvement Initiatives: Connecting Six Sigma for Software, CMMI, Personal Software Process (PSP), and Team Software Process (TSP)," Software Quality Professional, Vol. 5, No. 4, 2003, pp. 5-13.

This response was written by Keith Wagoner, senior process improvement consultant, BlueLine Associates, Cary, NC.
