Design and analysis of confirmation experiments By Nathaniel T. Stevens and Christine M. Anderson-Cook
The statistical literature and practitioners have long advocated confirmation experiments as the final stage of a sequence of designed experiments, to verify that the optimal operating conditions identified by a response surface methodology strategy are attainable and achieve the desired response value. Until recently, however, there has been a gap between this recommendation and details on how to quantitatively assess whether the confirmation runs are adequate. Similarly, there have been few specific recommendations for the number and nature of the confirmation runs that should be performed. In this article, we propose analysis methods to assess agreement between the mean response from previous experiments and that from the confirmation experiment, and we suggest a design strategy for confirmation experiments that more fully explores the region around the optimum.
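One generic way to check agreement between earlier runs and a confirmation experiment (a minimal sketch, not the specific analysis developed in the article) is to ask whether the confirmation-run mean falls inside a prediction interval built from the earlier runs. All data below are hypothetical, and the t critical value is passed in to keep the sketch dependency-free:

```python
import math
import statistics

def confirmation_check(prior_runs, confirm_runs, t_crit):
    """Check whether the mean of the confirmation runs falls inside a
    prediction interval for the mean of m new observations, built from
    the earlier experimental runs.

    t_crit is the two-sided t critical value for n - 1 degrees of
    freedom, where n is the number of prior runs.
    """
    n, m = len(prior_runs), len(confirm_runs)
    ybar = statistics.mean(prior_runs)
    s = statistics.stdev(prior_runs)                # sample std dev of prior runs
    half_width = t_crit * s * math.sqrt(1.0 / n + 1.0 / m)
    cbar = statistics.mean(confirm_runs)
    agree = abs(cbar - ybar) <= half_width
    return ybar - half_width, ybar + half_width, cbar, agree

# Hypothetical data: 8 runs near the predicted optimum, then 3 confirmation runs.
prior = [52.1, 50.8, 51.5, 52.4, 51.0, 51.9, 52.2, 51.3]
confirm = [51.6, 52.0, 51.2]
lo, hi, cbar, ok = confirmation_check(prior, confirm, t_crit=2.365)  # t_{0.975, df=7}
print(f"interval = ({lo:.2f}, {hi:.2f}), confirmation mean = {cbar:.2f}, agree = {ok}")
```

This is only a sanity check on the mean; the article's contribution also concerns how many confirmation runs to make and where to place them around the optimum.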
A one-sided procedure for monitoring variables defined on contingency tables By Sotiris Bersimis and Athanasios Sachlas
The multivariate statistical process control (MSPC) toolbox, originally developed to assure product quality by monitoring industrial processes, is nowadays applied in many nonindustrial fields (e.g., public health, environmental, and financial monitoring). Data produced by nonindustrial processes usually require problem-oriented monitoring procedures. In this article, motivated by the double-reading procedure used in many medical processes, we develop a method for monitoring bivariate random variables defined on contingency tables and introduce an appropriate one-sided control procedure. Specifically, we propose a procedure for simultaneously monitoring a measure of agreement (Cohen’s kappa) defined on a contingency table, which is associated with process stability, and a percentage defined on the same contingency table, which is associated with the process quality level. The procedure is based on an approximation that is assessed numerically and shows excellent performance. We then explore the performance of several candidate one-sided monitoring techniques and propose a new one, based on a penalization strategy, that appears to perform best. The new technique is easy for a nonstatistician to implement, as illustrated by its application to a real double-reading case.
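The two monitored statistics can both be read off a single contingency table. Cohen's kappa has the standard form κ = (p_obs − p_exp)/(1 − p_exp); the particular "percentage" below (the rate of positive calls by either reader) is an invented illustration, not the article's exact quality measure:

```python
def cohens_kappa(table):
    """Cohen's kappa for an r x r contingency table of counts, where
    table[i][j] = items rated category i by reader 1 and j by reader 2."""
    n = sum(sum(row) for row in table)
    r = len(table)
    p_obs = sum(table[i][i] for i in range(r)) / n                       # observed agreement
    row_m = [sum(row) / n for row in table]                              # reader-1 marginals
    col_m = [sum(table[i][j] for i in range(r)) / n for j in range(r)]   # reader-2 marginals
    p_exp = sum(row_m[i] * col_m[i] for i in range(r))                   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical double-reading table: rows = first reader, columns = second reader
# (category 0 = negative, 1 = positive).
table = [[40, 5],
         [10, 45]]
n = sum(sum(row) for row in table)
kappa = cohens_kappa(table)                               # agreement / process stability
positive_rate = (table[0][1] + table[1][0] + table[1][1]) / n  # illustrative quality percentage
print(kappa, positive_rate)   # kappa = 0.70 for this table
```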
Orthogonal blocking arrangements for 24-run and 28-run two-level designs By Eric D. Schoen, Nha Vo-Thanh and Peter Goos
Much research has been done concerning 24-run orthogonal two-level designs involving 3–23 factors and 28-run orthogonal two-level designs involving 3–27 factors. The focus of this research was on completely randomized screening designs and led to lists of recommended designs for each number of factors. When using the recommended designs, there is either no aliasing between the main effects and the two-factor interaction effects or only a limited amount. Among all designs with this attractive property, the recommended designs minimize the aliasing among the two-factor interaction effects. It is, however, unclear which 24- and 28-run designs are best when complete randomization is infeasible and the designs have to be arranged in blocks. In this article, we address this issue and present the best arrangements of 24-run designs in 3, 4, and 6 blocks and the best arrangements of 28-run designs in 7 blocks.
Construction of optimal run order in design of experiments By Jiayu Peng and Dennis K. J. Lin
It is sometimes preferable to conduct experiments in a systematic run order rather than the conventional random run order. In this article, we propose an algorithm to construct the optimal run order. The algorithm is very flexible: it is applicable to any design and works whenever the optimality criterion can be expressed as a distance between any two experimental runs. Specifically, the proposed method first formulates the run-order problem as a graph and then uses an existing traveling salesman problem solver to obtain the optimal run order. It always reaches the optimal result in an efficient manner.
A special case, in which the number of level changes is used as the distance criterion, is investigated thoroughly. The optimal run orders for popular two-level designs are obtained and tabulated for practical use. For higher-level or mixed-level designs a generic table is not possible, although the proposed algorithm still works well for finding the optimal run order. Some supporting theoretical results are also derived.
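For a design small enough to search exhaustively, the level-change special case can be sketched without a TSP library: score each ordering by the total number of factor-level changes between consecutive runs and keep the best. The brute-force search below stands in for the TSP solver used in the article and treats the order as an open path:

```python
from itertools import permutations, product

def level_changes(run_a, run_b):
    """Number of factors whose level differs between two runs."""
    return sum(a != b for a, b in zip(run_a, run_b))

def optimal_run_order(design):
    """Exhaustive search for the run order minimizing total level changes.

    Feasible only for small designs; the article's graph/TSP formulation
    handles larger ones.
    """
    best_order, best_cost = None, float("inf")
    for perm in permutations(design):
        cost = sum(level_changes(perm[i], perm[i + 1])
                   for i in range(len(perm) - 1))
        if cost < best_cost:
            best_order, best_cost = perm, cost
    return best_order, best_cost

# 2^3 full factorial with levels coded -1/+1 (8 runs).
design = list(product([-1, 1], repeat=3))
order, cost = optimal_run_order(design)
print(cost)   # 7: a Gray-code-style order changes exactly one level per step
```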
Yield-based process capability indices for nonnormal continuous data By Piao Chen, Bing Xing Wang and Zhi-Sheng Ye
Process capability indices (PCIs) are widely used to assess whether an in-control process meets manufacturing specifications. In most applications of the classical PCIs, the process characteristic is assumed to be normally distributed. However, the normal distribution has been found inappropriate in various applications, and percentile-based PCIs are widely used in the literature to deal with nonnormal processes. One problem with the percentile-based PCIs is that they do not provide a quantitative interpretation of process capability. In this study, we propose new PCIs that quantify process capability consistently for both normal and nonnormal processes. The proposed PCIs generalize the classical normal PCIs in the sense that they coincide with the classical PCIs when the process characteristic follows a normal distribution, and they offer the same interpretation of process capability as the classical PCIs when the process characteristic is nonnormal. We then discuss nonparametric and parametric estimation of the proposed PCIs. The nonparametric estimator is based on kernel density estimation, with confidence limits obtained by the nonparametric bootstrap; the parametric estimator is based on maximum likelihood estimation, with confidence limits constructed by the method of generalized pivots. The proposed methodologies are demonstrated using a real example from a manufacturing factory.
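For context, the contrast the abstract draws can be sketched as follows: the classical C_pk uses the mean and 3-sigma spread, while a Clements-type percentile-based C_pk replaces them with the median and the 0.135%/99.865% percentiles. This sketch illustrates the existing percentile-based approach, not the authors' new yield-based indices; the data are hypothetical:

```python
import statistics

def cpk_normal(data, lsl, usl):
    """Classical C_pk, assuming a normally distributed characteristic."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

def cpk_percentile(data, lsl, usl):
    """Percentile-based C_pk in the spirit of Clements' method: 3-sigma
    limits replaced by the 0.135% and 99.865% sample percentiles, the
    mean replaced by the median."""
    xs = sorted(data)

    def quantile(p):  # simple linear-interpolation sample quantile
        idx = p * (len(xs) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (idx - lo) * (xs[hi] - xs[lo])

    med = quantile(0.5)
    p_lo, p_hi = quantile(0.00135), quantile(0.99865)
    return min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))

# Hypothetical right-skewed measurements with specification limits [0, 6].
data = [0.2, 0.5, 0.7, 1.0, 1.1, 1.4, 1.8, 2.3, 3.0, 4.5]
print(cpk_normal(data, lsl=0.0, usl=6.0), cpk_percentile(data, lsl=0.0, usl=6.0))
```

For skewed data such as these, the two indices disagree, which is exactly the interpretability gap the proposed yield-based PCIs aim to close.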
An adaptive two-stage Bayesian model averaging approach to planning and analyzing accelerated life tests under model uncertainty By Xiujie Zhao, Rong Pan, Enrique del Castillo and Min Xie
Accelerated life testing (ALT) is commonly used to predict the lifetime of a product at its use stress by subjecting test units to elevated stress conditions that accelerate the occurrence of failures. For new products, selecting an acceleration model for planning optimal ALT plans is challenging due to the absence of historical lifetime data, and misspecifying the ALT model can lead to considerable errors when it is used to predict the product’s life quantiles. This article proposes a two-stage Bayesian approach to constructing ALT plans and predicting lifetime quantiles. At the first stage, the ALT plan is optimized based on prior information about the candidate models under a modified V-optimality criterion that incorporates both asymptotic prediction variance and squared bias. A Bayesian model averaging (BMA) framework is used to derive the posterior model probabilities and the posterior distribution of the life quantile of interest at the use stress. If the resulting test data cannot provide satisfactory model selection results, an adaptive second-stage test is conducted based on the posterior information from the first stage. A revisited numerical example demonstrates the efficiency and robustness of the resulting Bayesian ALT plans by comparison with plans derived from previous methods.
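The BMA step at the heart of this approach weights each candidate model by its posterior probability, w_i ∝ p(data | M_i) p(M_i), and averages the quantile predictions under those weights. The sketch below shows only this generic weighting arithmetic (all numbers, and the function name `bma_weights`, are invented for illustration), not the article's ALT-specific models:

```python
import math

def bma_weights(log_marginal_likelihoods, prior_probs):
    """Posterior model probabilities for Bayesian model averaging,
    w_i proportional to p(data | M_i) * p(M_i), computed stably in log space."""
    logs = [lml + math.log(p) for lml, p in zip(log_marginal_likelihoods, prior_probs)]
    m = max(logs)                         # subtract the max to avoid underflow
    unnorm = [math.exp(l - m) for l in logs]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Hypothetical: three candidate acceleration models with equal prior probability.
weights = bma_weights([-102.3, -100.1, -104.8], [1 / 3, 1 / 3, 1 / 3])

# Model-averaged estimate of a life quantile, from per-model quantile estimates.
quantiles = [1500.0, 1720.0, 1430.0]      # hypothetical per-model B10-life estimates
q_hat = sum(w * q for w, q in zip(weights, quantiles))
print(weights, q_hat)
```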
End of performance prediction of lithium-ion batteries By Yi-Fu Wang, Sheng-Tsaing Tseng, Bo Henry Lindqvist and Kwok-Leung Tsui
Rechargeable batteries are critical components of portable electronics and electric vehicles. The long-term health of a rechargeable battery is characterized by its state of health, which can be quantified by end of performance (EOP) and remaining useful performance. Focusing on EOP prediction, this article first proposes an accelerated-testing version of the trend-renewal process model to address this prediction problem. The proposed model is applied to a real case study, and a NASA dataset is then used to assess its prediction performance. Compared with existing prediction methods and time-series models, the proposed procedure performs better in EOP prediction.