Basic Information Remains Important
I just received the June issue. The “Quality Survival Guide” cover theme is outstanding. Too many times, we provide sophisticated and complex information when there are so many quality assurance folks who just need the fundamentals applied well to be successful.
I am reminded of Vince Lombardi, one-time coach of the Green Bay Packers and member of the NFL Hall of Fame. He started practice for his veteran championship team each season with basic blocking and tackling. More of this would help. Thanks!
Thank you to the readers who contacted us with suggestions for improving the “Quality Glossary” (June 2007, p. 39). We will continually update the online version (www.asq.org/glossary) and include all changes in the next printed version.
Food Safety Not as Simple as ISO 22000
I take issue with the superficial remedy Steve Wilson proposes in the “Keeping Current” article “A Closer Look at Food Safety and Quality” (June 2007, p. 22).
Wilson’s confidence that “the ISO 22000 standard is exactly what food producers need to prevent incidents like the recent pet food contamination” does not address the willful substitution of unsafe or banned substances in foods destined for U.S. consumption by people more concerned with making money than with consumer health.
Because less than 0.5% of all incoming U.S. food shipments are laboratory tested, the risk to food safety and quality has little to do with whether the standard is adopted and audited but, rather, with how criminal intent is uncovered and dealt with. The death sentence handed to the former head of China’s Food and Health Administration should send a clear message that the issue is not to be taken as lightly as Wilson suggests.
CORY D. ZUPFER
Big Mistake in Otherwise Excellent Column
Only one mortal sin was committed in “Reliability Assessment by Use-Rate Acceleration” (Necip Doganaksoy, Gerald Hahn and William Meeker, June 2007, p. 74).
The column describes an application of accelerated testing of washing machine motors by reducing non-operational time, a terrific way to get results much more quickly if done properly.
The requirement was clear: 97% reliability for 10 years of operation. Initial evaluations used the proportion of failures to total units tested, and intermediate evaluations used a binomial estimate of reliability at a lower 95% confidence bound. The 95% confidence level was selected to account for testing a relatively small sample of motors because the estimate was subject to much statistical uncertainty, as the authors stated.
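As an illustration of the kind of lower confidence bound described here, the following sketch computes an exact (Clopper-Pearson) one-sided binomial bound on reliability from pass/fail data. The sample size and failure count are illustrative assumptions, not the figures from the column.

```python
# Sketch: one-sided lower confidence bound on reliability from
# pass/fail data, using the exact (Clopper-Pearson) binomial method.
# Illustrative numbers only -- not the data from the column.
from scipy.stats import beta

def reliability_lower_bound(n_tested, n_failed, confidence=0.95):
    """One-sided lower confidence bound on the survival probability."""
    n_passed = n_tested - n_failed
    if n_passed == 0:
        return 0.0
    # Exact lower bound on p given n_passed successes in n_tested trials
    return beta.ppf(1 - confidence, n_passed, n_failed + 1)

# Hypothetical example: 50 motors tested, 1 failure
print(reliability_lower_bound(50, 1, 0.95))
```

Note how the bound sits below the point estimate (49/50 here), reflecting the extra uncertainty from the small sample, exactly the effect the letter describes.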
The authors correctly used the well-known Weibull distribution to model variability of age to failure. Unfortunately, there was one main problem with the article. The authors employed statistics to calculate resulting confidence, and then they used the estimated confidence to allow bypassing of the requirements. This approach is completely invalid.
The authors stated that the 95% lower confidence bound of 96% for 10-year reliability just missed the desired demonstration goal of 97%. So, analysis after six months of testing indicated the motors did not meet the requirement.
The article then said that the 97% demonstration (reliability requirement) can be made with 92% confidence—and this was judged to be sufficient for production start-up. Using confidence in this way can lead to improper acceptance of faulty equipment.
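To see how a demonstrated confidence level can be back-calculated from test data, here is a hedged sketch using a simple binomial pass/fail model (the column itself used a Weibull analysis, and the numbers below are illustrative assumptions, not the authors' data). The demonstrated confidence for a reliability target is the probability of seeing more failures than were actually observed if the true reliability were exactly the target.

```python
# Sketch: confidence at which a reliability target is "demonstrated"
# by a pass/fail test, under a simple binomial model. Illustrative
# numbers only -- not the Weibull analysis or data from the column.
from scipy.stats import binom

def demonstrated_confidence(n_tested, n_failed, target_reliability):
    """Confidence with which target_reliability is demonstrated.

    Equals 1 - P(n_failed or fewer failures) when the true
    reliability is exactly the target.
    """
    return 1 - binom.cdf(n_failed, n_tested, 1 - target_reliability)

# Hypothetical example: 100 units tested, 1 failure, 97% target
print(demonstrated_confidence(100, 1, 0.97))
```

With zero failures this reduces to the familiar success-run formula, confidence = 1 - R^n. The debate in the letter is precisely about whether such a back-calculated confidence, falling short of the planned 95%, may be accepted after the fact.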
When I teach classes on Weibull distribution, questions often arise about acceptance in such cases. One that always leads to heated debate is whether you can change requirements based on results. It is very tempting to do that, but adjusting the confidence level according to the results voids the concept of confidence.
A related problem involves retesting. Suppose you create a pass/fail requirement test and flunk the test. You try another test and flunk that test. You try another test, and you pass it. So, the equipment is OK, right? Wrong. You are giving yourself additional opportunities to pass.
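The retesting problem described above can be quantified with a short sketch: if a truly deficient unit passes a single test with some probability, repeated independent attempts inflate the overall chance of a (false) pass. The 5% single-test pass rate below is an assumed illustration, not a figure from the column.

```python
# Sketch: how retesting inflates the chance of a false pass.
# Assumed: a truly deficient unit passes one test 5% of the time
# (illustrative number, not from the column).
def false_pass_probability(single_pass_prob, attempts):
    """Probability of passing at least once in `attempts` independent tries."""
    return 1 - (1 - single_pass_prob) ** attempts

for k in (1, 2, 3):
    print(k, round(false_pass_probability(0.05, k), 4))
```

Three attempts nearly triple the false-pass risk relative to one, which is why "try again until it passes" undermines the stated confidence of the test.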
Taking away this cart-before-the-horse error, the remainder of the column is a good example of practical reliability and quality engineering work. The benefit of such work is evident in improved reliability from equipment fault identification and corrective action. The column deftly demonstrates the use of simple techniques for evaluating reliability.
WES FULTON
San Pedro, CA
We thank Wes Fulton for his positive comments and for voicing his concerns about our column. His “mortal sin” assessment is, however, completely misplaced.
The crux of our disagreement rests in Fulton’s statement that the analysis after six months of testing indicated the motors did not meet the requirement. Our analysis showed no such thing. Instead, the estimated 10-year reliability was 99.4% (first paragraph on p. 76), subject to correction of the previously identified design flaw.
Admittedly, accounting for statistical variability, 97% reliability could be demonstrated only with 92% confidence, rather than the desired 95% confidence. Failure to meet a high level of statistical confidence in making an assertion is, however, vastly different from showing that the assertion is untrue. In fact, one-sided lower and upper 95% confidence bounds on 10-year reliability were 96% and 99.9% (the latter not reported in our article), respectively.
Under these circumstances, the practical—and not unreasonable—decision was made to proceed with production. This was done with an understanding of the associated statistical risks and the added uncertainty in applying the results to large-scale production. But would things really have been that different if the test results had demonstrated 97% reliability with 95% confidence, rather than with 92% confidence?
Most importantly, we did not rest on the initial assessment. Additional investigations were conducted to get more information. These included physical examination of all failed (and some unfailed) test units, continuing some unfailed units on test, and extensive testing of additional units.
If these further evaluations had been unfavorable (which they were not), the start-up could have been aborted. This was felt to be an acceptable risk based upon the available information.
We agree that confidence intervals are an important tool for quantifying statistical uncertainty. The choice of the confidence level to use is, however, somewhat arbitrary; 95% confidence is used most commonly.
However, we often recommend lower levels early in a development program and higher levels when nearing product release or when there are important safety issues.
We concur with some of the general concerns Fulton expressed but strongly disagree that they apply to our analysis.