Journal of Quality Technology - January 2019

Volume 51 ∙ Number 1

EDITORIAL

RESEARCH ARTICLES

  • Identifying and visualizing part-to-part variation with spatially dense optical dimensional metrology data
    By Zhenyu Shi, Daniel W. Apley & George C. Runger
    Optical dimensional metrology (ODM) technology that produces spatially dense surface measurement data is increasingly employed for quality-control purposes in discrete-parts manufacturing. Such data contain a wealth of information on the surface dimensional characteristics of individual parts and on the nature of part-to-part variation. The large body of prior quality-control work on analyzing dimensional metrology data has focused heavily on fitting parametric geometric features, such as circles or planes, to the data for individual parts and checking whether the features are within specifications; any subsequent analysis of part-to-part variation is then restricted to those specific features. In this article, we present an approach for identifying and visualizing the nature of part-to-part variation in a more general manner that is not restricted to a prespecified set of parametric features. The basis for the approach is manifold learning applied to the collective ODM data for a set of measured parts, with particular emphasis on handling the extremely high dimensionality of ODM data. A rough sketch of this general workflow follows below.
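    A hedged Python sketch of that workflow, assuming hypothetical data and using Isomap as one off-the-shelf manifold learner (the article's specific algorithm may differ):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.manifold import Isomap

        # Hypothetical data: each row is one part's dense ODM measurements,
        # flattened to a long vector of deviations from nominal geometry.
        rng = np.random.default_rng(0)
        n_parts, n_points = 50, 20_000
        X = rng.normal(scale=0.01, size=(n_parts, n_points))

        # Step 1: linear compression. With n_parts << n_points, PCA retains
        # at most n_parts - 1 directions, taming the dimensionality.
        X_pca = PCA(n_components=20).fit_transform(X)

        # Step 2: nonlinear manifold learning on the compressed scores to
        # expose low-dimensional patterns of part-to-part variation.
        embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X_pca)
        print(embedding.shape)  # (50, 2): one point per part

    Each part then maps to a single point in the embedding, so clusters or gradients across parts can be inspected visually.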
  • A new variable selection method based on SVM for analyzing supersaturated designs
    By Krystallenia Drosou & Christos Koukouvinos
    Supersaturated designs (SSDs) are designs in which the number of factors exceeds the run size; thus, there are not enough runs to estimate all the main effects. They are commonly used in screening experiments, where the primary goal is to identify the few, but dominant, active factors while keeping the cost as low as possible. New statistical methods inspired by machine learning algorithms are being developed at a rapid pace. One such method is support vector machine recursive feature elimination (SVM-RFE), which extracts the informative genes in classification problems while achieving extremely high performance. In this article, we study a variable selection method for regression problems, called SVR-RFE, to screen active effects in both two-level and mixed-level designs. Simulation studies demonstrate that this procedure is effective, especially in terms of statistical power. A toy sketch of the elimination loop is given below.
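    A toy sketch of SVR-driven recursive feature elimination, under assumptions: scikit-learn's generic RFE wrapper around a linear SVR, a simulated 14-run, 23-factor design, and a prespecified number of retained factors (a real screening analysis would not fix this in advance):

        import numpy as np
        from sklearn.svm import LinearSVR
        from sklearn.feature_selection import RFE

        # Hypothetical supersaturated design: 14 runs, 23 two-level factors,
        # with only factors 0 and 5 truly active.
        rng = np.random.default_rng(1)
        X = rng.choice([-1.0, 1.0], size=(14, 23))
        y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.5, size=14)

        # Recursive feature elimination driven by a linear SVR: repeatedly
        # fit, drop the factor with the smallest |coefficient|, and refit.
        selector = RFE(LinearSVR(C=1.0, max_iter=10_000),
                       n_features_to_select=2, step=1).fit(X, y)
        print(np.flatnonzero(selector.support_))  # retained factor indices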
  • An integer linear programming approach to find trend-robust run orders of experimental designs
    By José Núñez Ares & Peter Goos
    When a multifactor experiment is carried out over a period of time, the responses may depend on a time trend. Unless the tests of the experiment are conducted in a proper order, the time trend has a negative impact on the precision of the estimates of the main effects, the interaction effects, and the quadratic effects. A proper run order, called a trend-robust run order, minimizes the confounding between the effects’ contrast vectors and the time trend’s linear, quadratic, and cubic components. Finding a trend-robust run order is essentially a permutation problem. We develop a multistage approach based on integer programming to find a trend-robust run order for any given design. The multistage nature of our algorithm allows us to prioritize the trend robustness of the main-effect estimates. Most methods in the literature are tailored to specific designs and are not applicable to an arbitrary design. Additionally, little attention has been paid to trend-robust run orders of response surface designs, such as central composite designs, Box–Behnken designs, and definitive screening designs. Our algorithm succeeds in identifying trend-robust run orders for arbitrary factorial designs and response surface designs with two to six factors. A brute-force illustration of the underlying permutation problem appears below.
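    The permutation problem can be made concrete with a brute-force toy in Python; this is only a stand-in for the paper's multistage integer programming approach, and the 2^3 design, trend polynomials, and confounding measure are assumptions chosen for the example:

        import itertools
        import numpy as np

        # Hypothetical 2^3 full factorial: 8 runs, 3 main-effect columns.
        design = np.array(list(itertools.product([-1, 1], repeat=3)), float)

        # Orthonormal linear, quadratic, and cubic time trends over 8 runs.
        x = 2 * np.arange(8) / 7 - 1
        T, _ = np.linalg.qr(np.polynomial.polynomial.polyvander(x, 3)[:, 1:])

        def confounding(order):
            # Sum of squared inner products between the reordered effect
            # columns and the trend columns; 0 means a trend-free order.
            return float(np.sum((design[list(order)].T @ T) ** 2))

        # Exhaustive search over all 8! run orders (feasible only for toys).
        best = min(itertools.permutations(range(8)), key=confounding)
        print(best, confounding(best))

    Exhaustive search breaks down quickly as the run size grows, which is what motivates an integer programming formulation for realistic designs.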
  • An EWMA-type sign chart with exact run length properties
    By P. Castagliola, K. P. Tran, G. Celano, A. C. Rakitzis & P. E. Maravelakis
    In this paper, a new Phase II EWMA-type chart for count data, based on the sign statistic, is proposed and applied to monitoring the location of an unknown continuous distribution. The most valuable characteristics of this new chart are that (a) it uses only positive integer-valued weights to account for the past process history, (b) the plotted statistic is always an integer, and (c) its run length properties can be obtained exactly without resorting to expensive simulations or unreliable approximations. We provide the methodology to compute the exact run length properties of the proposed chart and the algorithms to obtain the optimal chart parameters through the minimization of the out-of-control average run length. We also compare the statistical performance of this new EWMA-type sign chart with two classical continuous-type EWMA sign charts, a CUSUM-type sign chart, and a GWMA-type sign chart. The computational evaluations show that the proposed EWMA control chart outperforms the other control charts for a wide range of location shifts. Finally, two examples implementing this new EWMA-type sign chart are given. For orientation, a sketch of a classical EWMA sign chart follows below.
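    A minimal sketch of a classical (continuous-weight) EWMA sign chart, the kind the article's integer-weight chart improves upon; the in-control median, subgroup size, smoothing constant, and control-limit multiplier are all assumed values:

        import numpy as np

        rng = np.random.default_rng(2)
        median0 = 0.0        # known in-control median
        lam, n, L = 0.2, 10, 3.0

        def sign_statistic(sample):
            # Count of observations above minus below the in-control median;
            # in control its mean is 0 (ties ignored for simplicity).
            return np.sum(np.sign(sample - median0))

        # Var(S_t) = n in control, so use the asymptotic EWMA variance.
        limit = L * np.sqrt(n) * np.sqrt(lam / (2 - lam))
        z = 0.0
        for t in range(30):
            shift = 0.5 if t >= 15 else 0.0   # location shift after t = 15
            z = (1 - lam) * z + lam * sign_statistic(rng.normal(shift, 1, n))
            if abs(z) > limit:
                print(f"signal at subgroup {t}, z = {z:.2f}")
                break
        else:
            print("no signal within 30 subgroups")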
  • Strategic allocation of test units in an accelerated degradation test plan
    By Zhi-Sheng Ye, Qingpei Hu & Dan Yu
    Degradation is often defined in terms of the change of a key performance characteristic over time. When the degradation is slow, accelerated degradation tests (ADTs) that apply harsh test conditions are often used to obtain reliability information in a timely manner. The initial performance of the test units commonly varies and is strongly correlated with the degradation rate. Motivated by a real application in the semiconductor sensor industry, this study advocates an allocation strategy in ADT planning that capitalizes on this correlation. In the proposed strategy, the initial degradation levels of the test units are measured and the measurements are ranked. The ranking information is used to allocate the test units to different factor levels of the accelerating variable; more specifically, we may prefer to allocate units with lower degradation rates to a higher factor level in order to hasten the degradation process. The allocation strategy is first demonstrated using a cumulative-exposure degradation model, for which likelihood inference is developed. The optimum test plan is obtained by minimizing the large-sample variance of a lifetime quantile at nominal use conditions. Various compromise plans are discussed. A comparison of the results with those from traditional ADTs with random allocation reveals the value of the proposed allocation rule. To demonstrate the broad applicability, we further apply the allocation strategy to two more degradation models that are variants of the cumulative-exposure model. The rank-based allocation idea is sketched below.
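    A hedged sketch of the rank-based allocation step only (not the optimal plan, which minimizes an asymptotic variance); the stress levels, the readings, and the assumed direction of the correlation are all illustrative:

        import numpy as np

        rng = np.random.default_rng(3)
        n_units = 12
        stresses = [40.0, 55.0, 70.0]        # e.g., temperature levels
        per_level = n_units // len(stresses)

        # Hypothetical initial readings, assumed to correlate positively
        # with each unit's (unobservable) degradation rate.
        initial = rng.normal(loc=1.0, scale=0.1, size=n_units)

        # Rank the units by initial degradation level and send those
        # expected to degrade slowest to the highest stress level.
        order = np.argsort(initial)          # assumed slowest first
        allocation = {}
        for i, stress in enumerate(sorted(stresses, reverse=True)):
            idx = order[i * per_level:(i + 1) * per_level]
            allocation[stress] = idx.tolist()
        print(allocation)                    # unit indices per stress level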
  • Estimation of the random error of binary tests using adaptive polynomials
    By Thomas S. Akkerhuis, Jeroen de Mast & Tashi P. Erdmann
    We investigate how to estimate error probabilities for binary measurements and tests when a gold standard is not available. Recent studies have shown that, without a gold standard, the widely used false-acceptance probability (FAP) and false-rejection probability (FRP) are difficult to estimate. We show by mathematical analysis that this problem is inherent to the unavailability of a gold standard. Instead, the proposed method determines the random components of the error probabilities: the inconsistent acceptance and rejection probabilities, IAP and IRP. The method is efficient and robust because the number of model parameters is not predetermined but depends on the goodness of fit. A simple simulation of the underlying idea follows below.
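    A crude simulation of the idea, not the adaptive-polynomial estimator itself, in which each item has its own unknown acceptance probability and repeated applications of the test reveal the inconsistent components; the beta model and the simple averaging are assumptions made for illustration:

        import numpy as np

        rng = np.random.default_rng(4)
        n_items, n_repeats = 200, 20

        # Hypothetical setup: every item has its own probability of being
        # accepted by the binary test; no gold-standard truth exists here.
        p_accept = rng.beta(0.5, 0.5, size=n_items)   # mostly near 0 or 1
        results = rng.random((n_items, n_repeats)) < p_accept[:, None]
        p_hat = results.mean(axis=1)    # per-item acceptance rates

        # Random (inconsistent) error components: IAP ~ chance a usually
        # rejected item is accepted anyway; IRP ~ the mirror image.
        rej = p_hat < 0.5
        iap = p_hat[rej].mean() if rej.any() else 0.0
        irp = (1 - p_hat[~rej]).mean() if (~rej).any() else 0.0
        print(f"IAP ~ {iap:.3f}, IRP ~ {irp:.3f}")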
  • Critical fault-detecting time evaluation in software with discrete compound Poisson models
    By Min-Hsiung Hsieh, Shuen-Lin Jeng & Paul Kvam
    Software developers predict their product’s failure rate using reliability growth models that are typically based on nonhomogeneous Poisson processes (NHPPs). In this article, we extend that practice to a nonhomogeneous discrete compound Poisson process that allows for multiple faults of a system at the same time point. Along with traditional reliability metrics, such as the average number of failures in a time interval, we propose an alternative reliability index called the critical fault-detecting time in order to provide more information for software managers making software quality evaluations and critical market policy decisions. We illustrate the significant potential for improved analysis using wireless failure data as well as simulated data. A simulation sketch of such a compound process is given below.
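    A simulation sketch of such a process under assumptions: a hypothetical decaying detection intensity and geometric cluster sizes as one possible compounding distribution:

        import numpy as np

        rng = np.random.default_rng(5)

        def intensity(t):
            # Hypothetical decaying detection rate (reliability growth).
            return 2.0 * np.exp(-0.1 * t)

        # Simulate detection epochs on [0, T] by thinning a rate-lam_max
        # Poisson process: keep each candidate epoch with probability
        # intensity(t) / lam_max.
        T, lam_max = 50.0, 2.0
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > T:
                break
            if rng.random() < intensity(t) / lam_max:
                times.append(t)

        # Attach a discrete cluster size (>= 1) to each epoch so several
        # faults can surface at the same time point.
        sizes = rng.geometric(p=0.6, size=len(times))
        print(f"{len(times)} detection epochs, {int(sizes.sum())} faults")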

 
