## 2019

### Reliable plan

Q: What are the most appropriate, non-industry specific sampling plans for reliability testing, particularly for ongoing reliability testing? The testing we are doing is destructive, so we cannot afford a large sample size. At the same time, we need to provide good customer protection.

A: Depending on the characteristics of the data being collected, there are many sampling plans that may apply. Three potential plans are:

1. Attribute testing: The test result is either pass or fail, and reliability is demonstrated by the likelihood of a pass. An example is go/no-go testing on the strength of a plastic tube, with the test being a weight hung from the end of the tube and the result being whether the tube breaks. The test purpose is to demonstrate that a percentage of the tubes meet the requirement at a given confidence level.
2. Variable testing: The numerical data are analyzed to determine the reliability demonstrated versus a specification. An example is tensile strength of the same plastic tube, with the purpose being to demonstrate that a percentage of the tubes meet a minimum requirement at a given confidence level.
3. Reliability testing: True product life data are being collected, and reliability is defined as the likelihood of surviving without failure under a given set of conditions. An example is an extension of the attribute testing earlier, in which the weight is hung from the tube until the tube breaks and the time to failure is recorded. Environmental conditions also may be involved.

For attributes, standard lot acceptance sampling is one place to start. The sampling plan should demonstrate a high probability of acceptance (for example, 0.95) at the acceptable quality level. Conversely, the sampling plan should have a low probability of acceptance (for example, 0.10) at the lot tolerance percent defective (LTPD). To minimize sample size in attribute testing, a plan that allows zero failures (C = 0 plan) provides the best consumer protection and the smallest sample size. The following simple equation can be used to calculate the required sample size (n) given a desired reliability (R) and a confidence level (CL):

n = ln (1 − CL) / ln (R).

To relate this equation to acceptance sampling terminology, a plan’s LTPD is equal to 1 − R.
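As a quick sketch, the C = 0 sample-size equation above can be computed directly (the reliability and confidence values here are illustrative, not from the column):

```python
import math

def c_zero_sample_size(reliability: float, confidence: float) -> int:
    """Sample size for a zero-failure (C = 0) attribute plan:
    n = ln(1 - CL) / ln(R), rounded up to the next whole unit."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

# Demonstrate 95% reliability at 95% confidence:
print(c_zero_sample_size(0.95, 0.95))  # 59 units tested, zero failures allowed
# This plan's LTPD is 1 - R = 5% defective.
```

Rounding up preserves the stated confidence; truncating down would fall slightly short of it.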

Another way to reduce sample size is to make the testing requirements risk-based. Use the product’s risk analysis (hazard analysis or failure mode and effects analysis) to justify lower confidence and reliability requirements for testing. Table 1 is an example of a risk analysis severity rating scale and corresponding reliability/confidence requirements that can be used for attribute and variable testing.

To determine sample sizes for variable testing, additional parameters called "power" and "difference to detect" are needed. Statistical power is a measure of consumer protection: the higher, the better. Power also can be risk-based, as shown in Table 1. Lower risk implies lower required confidence and power, which leads to smaller sample sizes.

In addition to using risk, variable sample sizes also can be reduced by realistically increasing the difference to detect. If historical data on the product under test or on a similar product are available, the data can be used to estimate what difference-to-detect value would be so small that there would be insignificant risk if it were not detected. In other words, if the data indicate that the requirement can likely be met comfortably, smaller sample sizes can be used.

Table 2 provides sample sizes for different powers and differences to detect, all with 95% confidence. Notice how much the difference to detect value can affect sample sizes, by entire orders of magnitude.

Table 2 applies to data that follow the normal distribution, so testing for normality is necessary before conducting the analysis. Alternate techniques such as data transformations and determining best-fit distributions are recommended if the data are not normally distributed.
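As a rough illustration of how confidence, power and difference to detect drive sample size, the following one-sided normal-approximation calculation can be used (the sigma and delta values are hypothetical; exact t-based calculations in statistical software will give slightly larger values than this approximation):

```python
from math import ceil
from statistics import NormalDist

def variables_sample_size(sigma: float, delta: float,
                          confidence: float = 0.95,
                          power: float = 0.80) -> int:
    """Approximate sample size for a one-sided test on a mean:
    n = ((z_confidence + z_power) * sigma / delta)**2,
    where delta is the difference to detect and sigma is the
    (assumed known) standard deviation."""
    z_alpha = NormalDist().inv_cdf(confidence)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Doubling the difference to detect cuts the sample size roughly fourfold:
print(variables_sample_size(sigma=1.0, delta=0.5))  # 25
print(variables_sample_size(sigma=1.0, delta=1.0))  # 7
```

The inverse-square relationship between delta and n is why a realistic, justified increase in the difference to detect can shrink sample sizes by orders of magnitude, as Table 2 shows.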

For true reliability testing, the sample size calculation for a reliability demonstration test is straightforward, assuming the exponential distribution and its constant failure rate apply. Reliability software, such as ReliaSoft's Weibull++, and its supporting literature thoroughly explain the concepts that follow. The sample size is related to the total accumulated test time (T), which can be calculated from the equation:

T = (MTTF × χ²(CL, 2(r + 1))) / 2.

In this equation, MTTF is the mean time to failure that needs to be demonstrated, χ²(CL, 2(r + 1)) is the chi-square distribution quantile at confidence level CL with 2(r + 1) degrees of freedom, and r is the number of failures allowed during the test.

After T is determined, you can calculate the sample size by dividing T by the available test time per unit. Note the tradeoff between sample size and test time per unit. Smaller sample sizes can be used, but the test time is longer. Also, as previously shown with attribute and variable testing, confidence level is a discretionary parameter. If the risk level allows, decreasing the confidence level can reduce the total test time needed, and therefore can decrease the sample size. The required MTTF also may have some discretion, although it is primarily driven by product requirements and a relevant safety margin. Decreasing the required MTTF can reduce sample size, again with caution and only if the risk level permits.
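For the common zero-failure case (r = 0), the chi-square quantile has 2 degrees of freedom and reduces to -2 × ln(1 − CL), so T can be computed with a few lines (the MTTF, confidence and per-unit test time below are illustrative assumptions, not values from the column):

```python
import math

def zero_failure_test_time(mttf: float, confidence: float) -> float:
    """Total accumulated test time T for a zero-failure (r = 0) exponential
    demonstration test. With 2(r + 1) = 2 degrees of freedom, the chi-square
    quantile is -2 * ln(1 - CL), so T = MTTF * -ln(1 - CL)."""
    return mttf * -math.log(1.0 - confidence)

def units_needed(total_time: float, time_per_unit: float) -> int:
    """Sample size given the test time available per unit."""
    return math.ceil(total_time / time_per_unit)

# Demonstrate a 1,000-hour MTTF at 90% confidence:
T = zero_failure_test_time(1000, 0.90)  # about 2,303 unit-hours total
print(units_needed(T, 500))             # units needed at 500 h per unit
```

The tradeoff described above is visible here: halving the available test time per unit roughly doubles the number of units required, and lowering the confidence level (where risk allows) shrinks T directly.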

Other techniques to minimize the cost of testing or reduce sample size include:

1. Reduced inspection:1 For ongoing testing, in which production is at a steady rate, reduced inspection can be implemented based on previous acceptable results on a certain number of lots and units.
2. Double and multiple sampling: A smaller initial sample is drawn, and then the lot is either accepted or rejected, with a third option being to take another sample. If the expected lot quality is very good or very bad, double and multiple sampling can be economical.
3. Combined environments or worst-case testing: In reliability testing, rather than testing the effects of two different conditions separately, such as temperature and vibration, combine them. The concept, which should be specifically validated by engineering expertise and the physics of the situation, is that subjecting the test specimen to more than one stress is more severe than applying the stresses separately. If the product passes the combined study, it is inferred that it would pass if exposed to the treatments separately.
4. Correlation with nondestructive evaluation (NDE): If product quality can be verified and the product is unharmed by the testing itself, NDE can be the way to go. For example, porosity or voids in a plastic tube can lead to reduced tensile strength. Ultrasonic inspection of the tubes can be used to qualitatively and quantitatively see the defects, and correlation with destructive test data can allow acceptance criteria to be set based on the ultrasonic scans.
5. Eliminate the need for testing by controlling the process: Use design of experiments and engineering knowledge to determine which manufacturing process parameters affect the product characteristics being tested. Then, control the key process parameters using statistical process control. Perform studies and validations to demonstrate that if the process is in control, the product will meet requirements.
6. Eliminate the part being tested: If you are testing a component of a more complex product, consider how you could redesign the overall product to eliminate the component or incorporate its function into another part of the design that does not require ongoing testing. This could be part of a design for reliability effort, or more specifically, design for testing.

These are a few of the ways in which sample sizes can be reduced to minimize cost without compromising the priority of protecting the users of the products from risks.

Scott A. Laman
Senior manager,
Quality engineering and risk management
Teleflex Inc.