Planning Reliability Assessment
by William Q. Meeker, Gerald J. Hahn and Necip Doganaksoy
Let’s say you have designed a new metal spring and want to estimate the time by which 10% of such springs will fail. Many reliability tests require estimation of a percentile or quantile tp of the distribution for time to failure—t0.10 in this example. But how many units do you need to test and for how long?
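For a Weibull time to failure distribution, the quantile being estimated has a closed form: tp = η(−ln(1 − p))^(1/β). As a quick illustration (not from the article, but using the planning values β = 2 and η = 123.2 that appear later in the example), the 10% point works out to 40 kilocycles:

```python
import math

# Planning values from the article's metal spring example
beta, eta, p = 2.0, 123.2, 0.10

# Weibull p-quantile: t_p = eta * (-ln(1 - p))**(1/beta)
t10 = eta * (-math.log(1.0 - p)) ** (1.0 / beta)
print(round(t10, 1))  # 40.0 kilocycles
```

This is the same relationship used in reverse during planning: given a planning value for t0.10 and a shape β, solve for the scale η.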
The Basic Approach
The life test will provide an estimate, t̂p, of tp, and a 95% confidence interval to contain tp. A lower confidence bound on tp is t̂p/R̂, and an upper confidence bound is t̂p × R̂, where the precision factor, R̂ > 1, is estimated from the data. For example, if R̂ = 2, the upper confidence bound on tp exceeds t̂p by a factor of two, and the lower confidence bound is half of t̂p. R̂ = 1.3 implies a much narrower confidence interval and greater precision in estimating tp. R̂ depends on the sample size, n, and the test duration, tc—the time at which unfailed units are removed from the test.
In planning a life test, you need to specify a target precision factor, R*, to obtain a reasonably sized confidence interval. R̂ is random, varying from one test to the next. Therefore, select R* so the attained R̂ exceeds R* about half the time. Then, find a combination of n and tc that estimates tp with a precision factor close to R*.
We suggest you use simulation to do this. The basic idea is for the computer to generate many samples of size n for test duration tc to resemble the data expected from the life test and analyze the results for each such sample. Then repeat for different n and tc to compare the resulting statistical uncertainties.
The specific procedure, which from now on will focus on t0.10, is:
- Step one: From past experience and engineering judgment, assume a statistical distribution for time to failure—say, a Weibull distribution with an initially specified value for the shape parameter β—and a planning value for t0.10. Then, determine the assumed Weibull distribution scale parameter, η, from β and t0.10.¹
- Step two: Specify an initially proposed n and tc.
- Step three: Randomly generate n times to failure from the assumed distribution. Many of these randomly generated times will exceed the time tc at which the test is terminated and are, therefore, taken as unfailed or censored at time tc.
- Step four: Apply the maximum likelihood (ML) method to the simulated data to compute estimates of the parameters of the time to failure distribution, the estimate t̂0.10 of t0.10, a two-sided confidence interval for t0.10 and the resulting R̂.
- Step five: Repeat steps three and four many times to obtain a distribution of R̂. Then, find the distribution’s geometric mean, RG, which estimates the median of the distribution of R̂. This characterizes the precision you can expect in estimating t0.10 for the chosen n and tc. Compare RG with the target R*.
- Step six: Repeat steps three, four and five for different n and tc, and assess their impact on RG. From this, select n and tc for the life test.
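The six steps above can be sketched in simulation code. The following is a minimal Python sketch, not the authors' SPLIDA implementation: it fits the censored Weibull by direct maximization of the log-likelihood, and approximates the confidence interval for t0.10 with a delta-method normal approximation on log t̂0.10 using BFGS's approximate inverse Hessian, so the resulting RG is rough. All function names and numerical choices here are my own illustration.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_RG(n, tc, beta, t10, nsim=300, p=0.10, z=1.96, seed=1):
    rng = np.random.default_rng(seed)
    eta = t10 / (-np.log(1.0 - p)) ** (1.0 / beta)  # step one: scale from beta, t10
    c = np.log(-np.log(1.0 - p))                    # constant in log t_p
    log_R = []
    for _ in range(nsim):                           # step five: many simulated tests
        t = eta * rng.weibull(beta, size=n)         # step three: n failure times
        d = t <= tc                                 # True where the unit failed
        y = np.where(d, t, tc)                      # censor survivors at tc
        if d.sum() < 2:                             # too few failures to fit
            continue
        def nll(theta):                             # step four: censored Weibull ML
            le, lb = theta                          # log scale, log shape
            b = np.exp(lb)
            u = np.log(y) - le                      # log(y / eta)
            ll = np.sum(d * (lb - le + (b - 1.0) * u)) - np.sum(np.exp(b * u))
            return -ll
        res = minimize(nll, x0=[np.log(tc), 0.0], method="BFGS")
        b_hat = np.exp(res.x[1])
        g = np.array([1.0, -c / b_hat])             # d(log t_p)/d(log eta, log beta)
        var = float(g @ res.hess_inv @ g)           # delta method on log t_p
        if np.isfinite(var) and var > 0.0:
            log_R.append(z * np.sqrt(var))          # log of precision factor R-hat
    return float(np.exp(np.mean(log_R)))            # geometric mean R_G

RG = simulate_RG(n=45, tc=30.0, beta=2.0, t10=40.0)
print(round(RG, 2))  # compare with the target R*
```

Step six then amounts to rerunning `simulate_RG` over a grid of n and tc values and picking a combination whose RG is close to R*.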
The Metal Spring Example
You have 45 representative metal springs available for testing and five machines to test the springs under cyclic compressive stress with the displacement encountered in application. You will, therefore, test nine randomly selected groups of five springs, each group running for up to tc kilocycles.
A cycling rate of three cycles per minute can be used safely to accelerate the test without creating new failure modes. To end the test after slightly more than two months, with each group running for about a week, you set the censoring time for unfailed units at tc = 30 kilocycles.
Say you then want to estimate t0.10 with a 95% confidence interval and a precision factor R* = 1.5. You first must ask, “Will a life test with n = 45 and tc = 30 kilocycles satisfy this requirement? If not, what combination of n and tc will?”
Determine n and tc to achieve R* = 1.5 by following these steps:
- Step one: Time to failure is assumed to follow a Weibull distribution with β = 2 and t0.10 = 40 kilocycles, implying η = 123.2 kilocycles (see Figure 1).
- Step two: Consider n = 45 units and tc = 30 kilocycles.
- Step three: The computer randomly generates 45 times to failure from the assumed Weibull distribution. Four of these times (11.5, 24.0, 26.3 and 28.7 kilocycles) were less than 30 kilocycles. The remaining 41 values exceeded 30 kilocycles and were taken to represent unfailed springs at 30 kilocycles.
- Step four: The data yielded the ML estimates η̂ = 66.43 kilocycles, β̂ = 2.991 and t̂0.10 = 31.30 kilocycles. An approximate 95% confidence interval to contain t0.10 is [22.48, 43.59]. Thus, R̂ = 43.59/31.30 = 1.39. The solid gray and brown lines in Figure 2 show the resulting fitted time to failure distribution and the distribution implied by the planning values, respectively. The dashed lines are (pointwise) 95% confidence intervals on tp for different values of p.
A second simulation is shown in Figure 3. It resulted in only two failures, the estimate t̂0.10 = 54.37 kilocycles and a much wider 95% confidence interval [14.93, 197.95], giving R̂ = 3.64.
- Step five: Steps three and four are repeated to obtain a total of 5,000 simulations. The Weibull distribution fits for the first 50 simulations are shown in Figure 4 (p. 92). The large variability in the t̂0.10 estimates is evidenced by the spread in the fitted lines crossing the horizontal axis at 0.10—showing values of t̂0.10 ranging from about 28 to more than 500 kilocycles for the 50 simulations, with n = 45 and tc = 30 kilocycles.
Figure 5 (p. 92) shows the 5,000 values of R̂. The geometric mean of these values is RG = 2.5, appreciably exceeding the target R* = 1.5.
- Step six: To achieve the specified R*, either n or tc—or both—need to be increased. In this application, it is difficult to increase the sample size beyond n = 45, but it is feasible to test unfailed springs to tc = 50 kilocycles by extending the test duration to nearly four months. Steps three to five were, therefore, repeated for n = 45 and tc = 50 kilocycles.
The results are shown in Figures 6 and 7 (p. 93). They indicate much less spread and a reduced RG of 1.55, which was judged close enough to R* = 1.5. The life test was, therefore, conducted with n = 45 springs for tc = 50 kilocycles.
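Extending the censoring time helps mainly because more springs fail before the test ends, and it is the failures that carry most of the information. A quick back-of-the-envelope check (my illustration, not from the article) of the expected number of failures under the planning distribution:

```python
import math

# Planning distribution (beta = 2, eta = 123.2 kilocycles) and sample size
beta, eta, n = 2.0, 123.2, 45

for tc in (30.0, 50.0):
    frac = 1.0 - math.exp(-((tc / eta) ** beta))  # Weibull CDF at tc
    print(tc, round(n * frac, 1))                  # expected failures by tc
# prints:
# 30.0 2.6
# 50.0 6.8
```

Going from tc = 30 to tc = 50 kilocycles roughly triples the expected number of failures, which is consistent with the much tighter spread seen in the second set of simulations.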
Further simulations, whose details are not shown, indicated a sample of n = 180 springs would be required to attain an RG close to 1.5 while maintaining tc = 30 kilocycles.
The Impact of Increasing n and tc
Table 1 summarizes simulation results for 12 combinations of n and tc to assess the impact of sample size and test duration on RG in estimating t0.10 for the assumed Weibull distribution. The results show:
- There is a point of diminishing returns in increasing tc. A rule of thumb in estimating the percentile, tp, is to wait until a fraction, p, of units fail (in parentheses in Table 1) or, preferably, a little longer. Waiting much longer provides little added information unless you also need to estimate tp for a larger value of p, say 50%.
- RG improves slowly with an increase in sample size, especially for large tc.
Impact of Initial Assumptions
Life test planning requires planning information or initial assumptions about the distribution for time to failure, which in this case were planning values for β and t0.10. These are likely incorrect—if we knew them, we would not need the test—but you can assess the sensitivity of the results to the values selected. Thus, if t0.10 in our example had been specified as 20 or 60 kilocycles (instead of 40), simulations indicate the 45 springs would need to be tested for 25 and 75 kilocycles, respectively.
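One way to see why the planning value matters so much: each assumed t0.10 implies a different scale η (with β = 2 held fixed) and, therefore, a very different expected number of failures by a given censoring time. The following small illustration is my own, derived from the stated planning model rather than from the article's simulations:

```python
import math

beta, n, tc = 2.0, 45, 50.0  # shape, sample size, censoring time (kilocycles)

for t10 in (20.0, 40.0, 60.0):
    eta = t10 / (-math.log(0.9)) ** (1.0 / beta)   # scale implied by t10
    frac = 1.0 - math.exp(-((tc / eta) ** beta))   # fraction failed by tc
    print(t10, round(eta, 1), round(n * frac, 1))  # t10, eta, expected failures
# prints:
# 20.0 61.6 21.7
# 40.0 123.2 6.8
# 60.0 184.8 3.2
```

An optimistic planning value (t0.10 = 60) implies few failures by tc = 50 kilocycles, so the test must run longer to reach adequate precision; a pessimistic value (t0.10 = 20) implies many failures, so a shorter test suffices.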
Implementation Issues and Extensions
We discussed only the technical question of determining n and tc, but there are many other things to consider in planning a life test. For example, test units should be representative of the population of interest, and the test must closely resemble the application environment.
Also, the methods for determining sample size and test duration can be readily modified to plan test programs for other situations, such as:
- Unequal test duration—for example, testing one-third of the units for 30, 50 and 70 kilocycles, respectively.
- A different time to failure distribution, such as the lognormal.
- Accelerated testing² or degradation testing.³
- Demonstrating high reliability over a defined lifetime.⁴
Statistical Analysis and an Alternative Method
Once the life test has been conducted, you can analyze the data to obtain an ML estimate and a confidence interval for tp.⁵,⁶
A mathematical formula⁷ provides an alternative for determining n and tc, which can be used instead of simulation or to suggest starting values for n and tc in the simulation.
The approach described here was implemented using the SPLIDA package (available at www.public.iastate.edu/~splida) for S-Plus 6 (available from Insightful Corp.). The procedures have also been programmed in the JMP scripting language for release 6 of JMP (available from the SAS Institute). The simulations can also be implemented using other commercially available packages, but some programming skill is required.
References
1. W.Q. Meeker and L.A. Escobar, Statistical Methods for Reliability Data, section 2.8, John Wiley and Sons, 1998.
2. G.J. Hahn, W.Q. Meeker and N. Doganaksoy, “Speedier Reliability Analysis,” Quality Progress, June 2003, pp. 58-64.
3. W.Q. Meeker, N. Doganaksoy and G.J. Hahn, “Using Degradation Data for Reliability Analysis,” Quality Progress, June 2001, pp. 60-65.
4. W.Q. Meeker, G.J. Hahn and N. Doganaksoy, “Planning Life Tests for Reliability Demonstration,” Quality Progress, August 2004, pp. 80-82.
5. Meeker and Escobar, Statistical Methods for Reliability Data, section 8.4.3, see reference 1.
6. N. Doganaksoy, G.J. Hahn and W.Q. Meeker, “Product Life Data Analysis: A Case Study,” Quality Progress, June 2000, pp. 115-122.
7. Meeker and Escobar, Statistical Methods for Reliability Data, section 10.5.4, see reference 1.
WILLIAM Q. MEEKER is a professor of statistics and distinguished professor of liberal arts and sciences at Iowa State University in Ames. He has a doctorate in administrative and engineering systems from Union College in Schenectady, NY. Meeker is a Fellow of the American Statistical Assn. and a Senior Member of ASQ.
GERALD J. HAHN is a retired manager of applied statistics at the GE Global Research Center in Schenectady, NY. He has a doctorate in statistics and operations research from Rensselaer Polytechnic Institute in Troy, NY, where he is also an adjunct faculty member. Hahn is a Fellow of the American Statistical Assn. and ASQ.