STATISTICS ROUNDTABLE, 2020

# Reliability Upsizing

## How to adjust for size effect in product life data analysis

by Necip Doganaksoy, Gerald J. Hahn and William Q. Meeker

In many reliability evaluations, the units placed on life test are reduced-size versions of the actual product. This is often the result of practical limitations such as available test facilities. In the example we will consider, electrodes were used in an accelerated test to obtain time-to-failure information on dielectric insulation of generator armature bars that were four times as long as the test electrodes.

The term "size" is used in a generic sense. In addition to length, it can, for example, also represent area or volume. In such situations, you must "upsize" the results of the statistical data analysis on the reduced-size test units to the full-size product.

This column covers a simple method to account for the effect of product size on reliability evaluations. The method can be implemented using popular software packages such as JMP, Minitab and Weibull++.

### Example

An example from a previous Statistics Roundtable column1 addressing a new insulation for generator armature bars will be used to illustrate the methods. The insulation consists of a mica-based system bonded with an organic binder. Failures occur due to the degradation of the organic material, causing reduced voltage strength.

Because the expected lifetime at the use condition of 120 volts/mm was expected to appreciably exceed that which could be observed in life tests, a voltage-accelerated stress test was conducted on electrodes with new insulating material. Fifteen randomly selected electrodes were tested at each of the five accelerated voltages from 170 to 220 volts/mm.

After nine months, half of these 75 electrodes had failed, mostly at the higher voltages. The electrode lifetime data were analyzed—assuming both a Weibull and a lognormal distribution, with constant shape parameters, for time to failure at each voltage (see the sidebar, "Background on Weibull and Lognormal Distributions"). An inverse power model (a linear relationship between log life and log voltage) was fitted to the data using the method of maximum likelihood. This led to the estimated model parameters shown in Table 1.

### Background on Weibull and Lognormal Distributions

In reliability applications, one is frequently most interested in estimating R(t), the reliability at time t (that is, the population fraction surviving beyond time t), and specified quantiles (for example, the 0.05 and 0.01 quantiles) of the time-to-failure distribution.

The Weibull and the lognormal distributions are the most commonly used models for product time to failure. The reliability function and the q quantile of each distribution are determined from the distribution parameters: for the Weibull distribution with scale parameter η and shape parameter β, R(t) = exp[-(t/η)^β] and t_q = η[-ln(1 - q)]^(1/β); for the lognormal distribution with parameters μ and σ, R(t) = 1 - Φ[(ln t - μ)/σ] and t_q = exp(μ + z_q σ), where Φ is the standard normal cumulative distribution function and z_q is its q quantile.

Table 2 shows the point estimates and the associated confidence intervals for the 0.01 and 0.05 distribution quantiles and for the reliability estimates at five years and 10 years at the use condition (120 volts/mm), using the methods described in the earlier column. Due to test equipment limitations, the electrodes used in the accelerated test were 50 inches long. The actual product units (referred to as "bars") are 200 inches long. It is, therefore, desired to convert the estimates for electrodes shown in Table 2 to full-length bars.
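These standard formulas translate directly into code. A minimal sketch in Python, using only the standard library (`statistics.NormalDist` supplies Φ and its inverse; the function names are our own):

```python
import math
from statistics import NormalDist

def weibull_reliability(t, eta, beta):
    """R(t) = exp[-(t/eta)^beta] for a Weibull(eta, beta) time to failure."""
    return math.exp(-((t / eta) ** beta))

def weibull_quantile(q, eta, beta):
    """t_q = eta * [-ln(1 - q)]^(1/beta)."""
    return eta * (-math.log(1.0 - q)) ** (1.0 / beta)

def lognormal_reliability(t, mu, sigma):
    """R(t) = 1 - Phi[(ln t - mu)/sigma] for a lognormal(mu, sigma) time to failure."""
    return 1.0 - NormalDist().cdf((math.log(t) - mu) / sigma)

def lognormal_quantile(q, mu, sigma):
    """t_q = exp(mu + z_q * sigma), where z_q is the standard normal q quantile."""
    return math.exp(mu + NormalDist().inv_cdf(q) * sigma)
```

By construction, evaluating the reliability function at the q quantile returns 1 - q for either distribution, which provides a quick self-check.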

### Correcting for size effect

General method: The simple and widely used method described here for correcting for size is based on reliability modeling for components in series. In a series system, the failure of any one component results in the failure of the entire system. Suppose that a component (or segment) of length L0 has reliability R0(t). Given R0(t), the following approach can be used to obtain reliability R(t) for a full-size product of length L > L0.

The full product, which can be thought of as comprising L / L0 nominally identical segments, each of length L0, fails when the first segment fails. Therefore, assuming statistically independent times to failure for the individual segments (covered later), full-product reliability is R(t) = [R0(t)]^(L/L0). Similarly, the q quantile of the time-to-failure distribution of a full product of length L corresponds to the 1 - (1 - q)^(L0/L) quantile of the time-to-failure distribution of a segment of length L0. The preceding approach can be used to obtain not only point estimates, but also confidence intervals and bounds.
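The two upsizing rules above can be sketched as a pair of one-line Python functions (names illustrative), assuming independent segment failures:

```python
def upsize_reliability(r0, ratio):
    """Full-product reliability R(t) = [R0(t)]^(L/L0), where r0 = R0(t)
    is segment reliability and ratio = L / L0 is the number of segments."""
    return r0 ** ratio

def upsized_quantile_level(q, ratio):
    """The q quantile of the full product's time to failure equals the
    1 - (1 - q)^(L0/L) quantile of a single segment's distribution."""
    return 1.0 - (1.0 - q) ** (1.0 / ratio)
```

Applying `upsized_quantile_level` and then reversing the transformation recovers the original q, since the two formulas are inverses of each other.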

Example: In our example, L / L0 = 4.

Estimating bar reliability: Assuming independence of failures, bar reliability R(t) at time t can be calculated from electrode reliability R0(t) as R(t) = [R0(t)]^4. Thus, assuming a Weibull distribution for electrode time to failure, 10-year bar reliability is estimated to be R̂(10) = [0.9972]^4 = 0.9890, using R̂0(10) = 0.9972 from Table 2. Also, the lower bound of the associated 95% confidence interval for R(10) is obtained as [0.8647]^4 = 0.5591. The results, assuming a lognormal distribution for electrode time to failure, are obtained similarly.
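As a check on the arithmetic, the Weibull-based numbers above can be reproduced in a few lines of Python. The inputs are the rounded Table 2 values quoted in the text, so the last digit may differ slightly from the article's:

```python
# Electrode (segment) estimates at 10 years, Weibull fit, from Table 2:
r0_hat = 0.9972        # point estimate of R0(10)
r0_lower95 = 0.8647    # lower 95% confidence bound on R0(10)

ratio = 200 / 50       # L / L0 = 4 electrode-length segments per bar

r_bar = r0_hat ** ratio               # about 0.989 (article: 0.9890)
r_bar_lower95 = r0_lower95 ** ratio   # about 0.5591
```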

Estimating quantiles of the bar time-to-failure distribution, assuming a lognormal distribution for electrode time to failure: Again, assuming independence of failures, the 0.05 quantile of the bar time-to-failure distribution corresponds to the 1 - (1 - 0.05)^(1/4) = 0.0127 quantile of the electrode time-to-failure distribution. This quantile is estimated to be 125.71 x exp(z0.0127 x 0.6299) = 30.80 years, using the expression for the q quantile of the lognormal distribution shown in the sidebar, the estimates for exp(μ̂) and σ̂ for the electrode time-to-failure distribution shown in Table 1, and z0.0127 = -2.2340.

Similarly, a 95% confidence interval for the 0.05 quantile of the bar time-to-failure distribution corresponds to a 95% confidence interval for the 0.0127 quantile of the electrode time-to-failure distribution. Obtaining this estimate required rerunning the earlier analysis of the electrode data, yielding a 95% confidence interval of 5.2 to 183.6 years for the 0.0127 quantile of the electrode time-to-failure distribution (and equivalently for the 0.05 quantile of the bar time-to-failure distribution).
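The lognormal point estimate can likewise be reproduced with the standard library's `NormalDist`. The inputs are the rounded Table 1 values quoted in the text, so the result matches the article's 30.80 years to within rounding:

```python
import math
from statistics import NormalDist

exp_mu = 125.71   # estimate of exp(mu) from Table 1 (lognormal fit)
sigma = 0.6299    # estimate of sigma from Table 1

# The 0.05 bar quantile corresponds to the 1 - (1 - 0.05)^(1/4) = 0.0127
# quantile of the electrode time-to-failure distribution:
q_elec = 1.0 - (1.0 - 0.05) ** (1.0 / 4.0)

z = NormalDist().inv_cdf(q_elec)          # about -2.234
t_005_bar = exp_mu * math.exp(z * sigma)  # about 30.8 years
```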

Estimating quantiles of the bar time-to-failure distribution, assuming a Weibull distribution for electrode time to failure: A point estimate and confidence interval for quantiles of bar time to failure—assuming independence of failures for bar segments and a Weibull (or any other) distribution for electrode time to failure—can be obtained in the same way as for the lognormal distribution described earlier.

However, a simpler method is also available for the Weibull distribution. This is because it can be shown—assuming independence of the failure times for individual segments—that if the time to failure of a product of length L0 has a Weibull distribution with scale parameter η and shape parameter β, a product of length L also has a Weibull distribution with scale parameter η(L0/L)^(1/β) and (the same) shape parameter β.

Therefore, the q quantile of the time-to-failure distribution for a product of length L is simply (L0/L)^(1/β) times the corresponding quantile of the time-to-failure distribution of a product of length L0. Thus, in this example, the point and interval estimates for the quantiles of the time-to-failure distribution for bars of length L = 200 inches also can be obtained, more simply, by multiplying their counterpart estimates for the electrodes, given in Table 2, by (50/200)^(1/2.27) = 0.5430.
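The Weibull shortcut thus reduces to computing a single multiplier. A sketch with the example's numbers (shape estimate 2.27 from Table 1):

```python
beta_hat = 2.27        # Weibull shape estimate from Table 1
L0, L = 50.0, 200.0    # electrode and bar lengths, in inches

# Factor that converts electrode quantile estimates (point estimates
# and confidence limits alike) to bar quantile estimates:
factor = (L0 / L) ** (1.0 / beta_hat)   # about 0.543

# For example, an electrode quantile estimate of t years maps to factor * t.
```

Because the shape parameter is unchanged, the same factor applies to every quantile and to both endpoints of each confidence interval.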

Summary and discussion of results: The preceding estimates, and some further results for the full-length bars, are shown in Table 3. As expected, the quantile and reliability estimates for full-length bars are considerably lower than the corresponding estimates for the electrodes shown in Table 2. The confidence intervals are wide, reflecting the statistical uncertainty due to the limited sample data and the extrapolation.

Perhaps the most meaningful results, from a practical perspective, are the lower bounds of 1.6 years and 3.7 years of the 95% confidence intervals for the 0.01 quantile of the bar time-to-failure distribution, assuming Weibull and lognormal distributions, respectively. In addition, we must emphasize that the estimates and the intervals are subject to the basic assumptions that follow. Thus, as wide as they are, the confidence intervals shown are still based on the assumptions being correct and do not consider any added uncertainty associated with the possibility that this is not the case.

### Basic assumptions

Independence of failures: The preceding approach assumes that the times to failure of similar segments in the same product are independent of one another. This assumption does not always hold true. In fact, there frequently is positive correlation between the times to failure of segments—and especially adjoining segments—of a larger product. For example, segments in the same product may come from the same manufacturing batch and are subject to the same field environment and operational stresses.

Exact methods to handle correlation between segments of the same product are advanced and complicated. However, Wayne B. Nelson, an author and expert on analysis of reliability and accelerated test data, suggests an easy method to establish lower and upper bounds (on the estimates and calculated confidence interval) for this frequently encountered situation.2,3

In this case (that is, positive correlation), the actual confidence interval can be shown to be between:

1. The (optimistic) interval obtained for a single segment, as in Table 2 of the example, assuming perfect correlation in the failure times of segments in the same product.
2. The (pessimistic) confidence interval obtained assuming independence among the segment failures, as in Table 3 of our example.

The problem with the preceding bounds is that they might be so wide as not to be useful.

Assumed models and extrapolation: The analyses in this example, involving accelerated voltage testing, were based on either a Weibull distribution or a lognormal distribution for time to failure at each voltage. Analysis showed that both distributions fit the data in the range of the data.

Alternative models that fit the data reasonably well (and give similar estimates) in the range of the data, however, might give quite different results when extrapolated beyond the range of the data. The results shown in Table 3 are, to a considerable degree, extrapolations. Indeed, we note considerable differences in estimates and confidence intervals between the two models—with the lognormal distribution giving more optimistic results than the Weibull.

In addition, we made a variety of other assumptions and extrapolations—such as the relationship between test voltage and time to failure—in the modeling and analysis of this accelerated test data. Departures from these assumptions create added uncertainty in the results.

### Further considerations

Accommodating different lengths: In some applications, the life data involve samples of different lengths. The method of maximum likelihood can be used to estimate the time-to-failure distribution from such data. Authors Nelson and Necip Doganaksoy illustrate this for the lognormal distribution.4 However, such analyses have not been incorporated as a standard feature in popular software packages.

Testing above-size product: In some applications, we may, moreover, be able to test samples that are larger, rather than smaller, than the actual product—the opposite of the case that we have described. For example, our test stands may accommodate 200-inch electrodes, but the actual product is only 20 inches long. Thus, the test units provide a form of accelerated testing. The method described here, though directed at situations for which L > L0, also applies for L < L0.

The desirability of avoiding upsizing: Our comments have focused mainly on the mathematics of upsizing in reliability data analysis. As indicated, such testing is often necessary because of practical considerations.

We must emphasize, however, that whenever feasible, it is desirable to test units that are, at least, the same size as the actual product. This requires fewer assumptions, such as independence of failures, and can provide more precise estimates by permitting more material to be tested.

### References

1. Gerald J. Hahn, William Q. Meeker and Necip Doganaksoy, "Speedier Reliability Analysis," Quality Progress, June 2003, pp. 58-64.
2. Wayne B. Nelson, Applied Life Data Analysis, John Wiley & Sons, 1982, pp. 170-173. Republished in 2004 as a paperback edition.
3. Wayne B. Nelson, Accelerated Testing: Statistical Models, Test Plans and Data Analyses, John Wiley & Sons, 1990, pp. 385-387. Republished in 2004 as a paperback edition.
4. Wayne B. Nelson and Necip Doganaksoy, "Statistical Analysis of Life or Strength Data From Specimens of Various Sizes Using the Power-(log) Normal Model," Recent Advances in Life-Testing and Reliability, N. Balakrishnan, ed., CRC Press, 1995, pp. 377-408.

• For further comments and added references, see chapters 5 and 7, respectively, of Wayne B. Nelson’s Applied Life Data Analysis and Accelerated Testing: Statistical Models, Test Plans and Data Analyses listed in the reference section.

Necip Doganaksoy is senior manager of data sciences at GE Renewables in Schenectady, NY. He has a doctorate in administrative and engineering systems from Union College in Schenectady. Doganaksoy is a fellow of ASQ and the American Statistical Association.

Gerald J. Hahn is a retired manager of statistics at the GE Global Research Center in Schenectady. He has a doctorate in statistics and operations research from Rensselaer Polytechnic Institute in Troy, NY. Hahn is a fellow of ASQ and the American Statistical Association.

William Q. Meeker is professor of statistics and distinguished professor of liberal arts and sciences at Iowa State University in Ames. He has a doctorate in administrative and engineering systems from Union College in Schenectady, NY. Meeker is a fellow of ASQ and the American Statistical Association.
