Edited by Connie M. Borror

DOE Simplified: Practical Tools for Effective Experimentation

Editor's Reviews of New Editions, Collections of Papers, and Other Books:
Miller and Freund's Probability and Statistics for Engineers, 6th ed., by Robert A. Johnson
Minitab Handbook, 4th ed., by Barbara Ryan and Brian Joiner
DOE Simplified: Practical Tools for Effective Experimentation by Mark J. Anderson and Patrick J. Whitcomb. Productivity, Inc., Portland, OR, 2000. 236 pp. $39.95.
Reviewer: Karen A. F. Copeland, Statistical Consultant, Boulder, CO 80304.
THE best way (that I can think of) to describe this book is as a well-planned, two-day industrial short course on the design of experiments. The topics covered, the depth of material, and the general tone of the book match those of a typical short course.
Chapter 1 covers the statistical basics such as the mean, standard deviation, normal distribution, confidence intervals, t-distribution, and normal probability plots. Chapter 2 introduces simple comparative experiments. Chapter 3 contains the meat of the text: two-level factorial designs. Chapter 4 looks at non-normality and the subsequent transforming of responses. Chapter 5 covers fractional factorial designs, including a 3/4-fraction design for 4 factors. Chapter 6 looks at minimal designs and fold-overs. Chapter 7 introduces general factorial designs in the context of a few categorical factors at more than three levels. Optimal designs are alluded to, but they are beyond the scope of this text.
Chapters 8 and 9, on response surface designs and mixture designs, are meant only to whet the appetite for what is possible. No exercises are given for these two chapters. The previous seven chapters all contain at least one practice problem. These problems can be done using the software (a 180-day educational version of Design-Ease 6.0) that comes with the book, although the software is mentioned only in suggestions after the exercises. All of the output in the text is from Design-Ease, but the use of that software is not a must for use of this text. I mention this because the authors are the principals of Stat-Ease (makers of Design-Ease), and so I expected the text to be a plug for their software; however, that is not the case. Chapter 10 contains solutions to the practice problems. Finally, Chapter 11 gives ideas for "practice designs", the type of exercises you could run in a classroom.
The text concludes with appendices of tables and standard design layouts, a glossary (including a few unexpected entries for comic relief), and a list of recommended reading. The book is littered with sidebars of details, interesting tidbits, and amusing stories. These sidebars both add to and distract from the material: they supply entertainment and information, but they interrupt the flow of the reading.
For its intended audience, readers with minimal statistical background, this text is reasonable. It would work well for self-study or as a text for an industrial short course.
Quality Engineering Handbook by Thomas Pyzdek. Marcel Dekker, Inc., New York, NY, 1999. 696 pp. $99.75.
Reviewer: Lorraine Daniels, Arizona State University, Tempe, AZ 85287-5906.
THE goal of Pyzdek's book, as he states it, is to provide a single source that covers every topic in the body of knowledge for quality engineering. He has done a good job of accomplishing this task. The book introduces a wide range of topics. Each is presented in an interesting style that could be easily understood by someone new to quality engineering. However, because of the breadth covered, there is a limit to the depth that can be covered for each topic.
In Chapter 1 Pyzdek covers general knowledge topics, which include a very nice history of attitudes toward quality. Chapter 2 discusses many of the concepts usually included in total quality management texts, such as supplier management, training, and the seven quality tools. A variety of statistical topics are included in Chapter 3, among them basic statistics, regression, time series, experimental design, and acceptance sampling. Chapter 4 is titled Product, Process and Material Control. This chapter addresses issues such as work instructions (with some nice small examples), material review boards, and statistical process control. Chapter 5 is dedicated to measurement system analysis. Chapter 6 is focused on safety and reliability. Chapter 7 describes the important issue of data collection and analysis. The final chapter concludes with a discussion of maintainability and availability. Pyzdek also provides a nice set of appendices that includes terminology and statistical table references.
Some small improvements could be made to the book:
1) A set of references for each chapter is needed. Since the topics are not covered in depth, a list of specific references for each topic would help those who are interested in learning more. (References are included only at the end of the book.)
2) A few minor changes in the flow would provide clarification for the reader. For example, the Central Limit Theorem is discussed before the topics of probability distribution functions and the normal distribution.
3) I personally did not like the format of the section headings, written in a combination of letters and numbers; I did not find this style easy to follow. Also, in one case a list of references was placed right before the last section, which made the book appear disorganized.
Pyzdek provides a good general reference book that could be used as a launching pad for those interested in mastering the field of quality engineering.
Statistical Quality Control: A Loss Minimization Approach by Dan Trietsch. World Scientific Publishing Co., Inc., Singapore; River Edge, NJ, 1999. 404 pp. $76.00.
Reviewer: Charles Quesenberry, Department of Statistics, North Carolina State University, Raleigh, NC 27695-8203.
THIS book consists of 361 pages of text organized into nine chapters and a number of Supplements at the ends of some chapters. According to a statement in the preface the book is intended for both practitioners and theoreticians, and can also be used for teaching. For reasons that I will explain in the following, I feel that it would most likely be of some interest to experienced SPC practitioners. The book would be of limited interest to theoreticians, and of limited usefulness to beginning students in the subject, even those with strong backgrounds in statistics. Although the title suggests a general quality control book, the treatment is largely of classical Shewhart charts. I will describe and briefly discuss the individual chapters and then conclude with some general remarks.
Chapter 1: Introduction to Shewhart Control Charts, 16 pages
Although the title of this chapter calls this an introduction to Shewhart charts, it is more a heuristic discussion of some of the basic concepts and terminology that assumes the reader is already familiar with these charts. Most of this will not be new to those readers familiar with Shewhart charts, but one particularly worthwhile point made is that control limits are often computed from data sets that are too small to yield reliable charts.
Chapter 2: On Measurement, 28 pages
This chapter gives a generally reasonable discussion of some issues in measurement technology. However, on page 39 the following surprising statement appears: "regardless of the exact estimation method we use, when we estimate a ratio between independent variances the underlying distribution is F." This is not true, because we can be sure that the distribution is an F distribution only if the estimate is itself a ratio of independent chi-squared statistics. Other estimates may not have an F distribution.
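The reviewer's objection is easy to verify numerically. The sketch below is mine, not from the book: the sample sizes and the choice of a MAD-based scale estimate as the counterexample are illustrative assumptions. It confirms that the ratio of the usual chi-square-based sample variances follows the F distribution, while a ratio built from a different variance estimate does not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2, reps = 10, 12, 20_000  # illustrative sample sizes

x = rng.normal(loc=0.0, scale=2.0, size=(reps, n1))
y = rng.normal(loc=5.0, scale=2.0, size=(reps, n2))

# Ratio of the usual sample variances: a ratio of independent
# chi-squared statistics, hence distributed F(n1 - 1, n2 - 1)
f_ratio = x.var(axis=1, ddof=1) / y.var(axis=1, ddof=1)
coverage = np.mean(f_ratio < stats.f.ppf(0.95, n1 - 1, n2 - 1))
print(f"fraction below F 95th percentile: {coverage:.3f}")  # close to 0.950

# A ratio of squared MAD-based scale estimates also estimates the
# variance ratio, but it is NOT F-distributed
mad_x = stats.median_abs_deviation(x, axis=1, scale="normal")
mad_y = stats.median_abs_deviation(y, axis=1, scale="normal")
mad_ratio = mad_x**2 / mad_y**2
mad_cov = np.mean(mad_ratio < stats.f.ppf(0.95, n1 - 1, n2 - 1))
print(f"MAD-based ratio below the same percentile: {mad_cov:.3f}")
```

The MAD-based ratio is much more dispersed than an F variate with the same degrees of freedom, so using F critical values with it would badly misstate the error rates.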
Chapter 3: Partial Measurement of Quality by Loss Functions and Production Costs, 29 pages
This chapter gives the author's treatment of the quadratic loss function and related matters due to Taguchi and others.
Chapter 4: Adjusting Processes without Tampering, 38 pages
This chapter discusses a number of ways to adjust a process in attempting to center it on target value. Adjustment rules discussed include the common fractional adjustment, the rule given by Frank Grubbs in 1954, and some suggested by the author. Experienced manufacturing people should find some of the discussion interesting.
Chapter 5: Shewhart Control Charts for Attributes, 46 pages
This is a fairly standard description of the basic attribute charts, but much emphasis is placed on the author's interpretation of charting philosophy. He emphasizes that the charts for the so-called "standards given" case are not really control charts at all. Good! He asserts that, in his view, control charts are essentially tests of hypotheses. Many will not agree with this view (see Woodall (2000)). The discussion of mixtures for p charts is nice and important. In this chapter, and later in other contexts, Trietsch recommends making charts with varying control limits when subgroup sizes vary. I feel this is a bad idea, because it destroys the ability to interpret chart patterns; see, e.g., Nelson (1989) and Quesenberry (1997).
On page 156 the author states that "in my opinion, control charts are a form of confidence intervals." In my view a control chart is not a confidence interval, but an ordered sequence of prediction intervals, each of which predicts the statistic to be plotted. Also, on page 156 the author says "I still think that even without explicitly specifying an α-risk, a rate of false signals (RFS), what we are doing in control charting can and should be described as hypothesis testing." This is apparently the definition of the acronym RFS, and it causes a major problem in understanding much of the material in later chapters. The difficulty is as follows. Consider an ordered sequence of events A1, A2, A3, ..., and suppose these are a sequence of Bernoulli trials, i.e., each has a constant probability of occurring, say p, and the events are mutually independent. Under these conditions it is meaningful to say that the events occur at a rate of p. However, if the events are not independent, i.e., if they are dependent events, we do not know what is meant by saying they occur at a rate of p. In this case, the probability that any event occurs depends conditionally upon other events. I will hereafter refer to this as the RFS difficulty.
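A small simulation makes the RFS difficulty concrete. In the hypothetical comparison below (the probabilities and sequence lengths are my illustrative choices), independent Bernoulli signal events show essentially the same long-run frequency in every realization, whereas dependent (here, exchangeable) events with the same overall mean do not, so no single "rate" describes them:

```python
import numpy as np

rng = np.random.default_rng(7)
reps, L, p = 500, 1000, 0.1  # 500 realizations of 1000 events each

# Independent Bernoulli trials: each realization shows about the same frequency
iid = rng.random((reps, L)) < p
freq_iid = iid.mean(axis=1)

# Dependent (exchangeable) events: one latent probability per realization,
# with the same overall mean rate of 0.1
latent = rng.uniform(0.0, 2 * p, size=(reps, 1))
dep = rng.random((reps, L)) < latent
freq_dep = dep.mean(axis=1)

print(f"means: {freq_iid.mean():.3f} vs {freq_dep.mean():.3f}")  # both near 0.1
print(f"spread of realized frequencies: {freq_iid.std():.4f} vs {freq_dep.std():.4f}")
```

Both processes have expected frequency 0.1, but the realized frequency of the dependent sequence varies so widely from one realization to the next that calling 0.1 its "rate" conveys little about any particular sequence of signals.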
Chapter 6: Control Charts for Continuous Variables, 84 pages
In this chapter Trietsch presents the usual charts for this case. The author recommends that charts for the subgroups case should be based on at least 100 subgroups, but preferably 400, and he asserts that there is no need to update the limits unless there is evidence the process has changed.
When subgroup sizes vary, Trietsch recommends charts for both the mean and the variance with varying control limits. As I commented above for attribute charts, I feel that this is bad advice, because it destroys the ability to interpret point patterns. Section 6.5 is a quick introduction to charts for individual observations. The discussion of power in Section 6.6 is only for the case when a very large sample from a stable process is available to establish charts. Moving average, EWMA, and CUSUM charts are introduced near the end of the chapter, with some discussion of their sensitivity for detecting small shifts. Supplements 6.1, 6.2, and 6.3 cover estimates of σ, computing chart constants, and subgroup size selection, respectively.
Chapter 7: Pattern Tests for Shewhart Control Charts, 25 pages
This chapter gives a discussion of what are often called "runs tests" on Shewhart charts. Most discussion is of the types of patterns given in Nelson (1984). In the discussion of runs tests the author again uses what he calls the rate of false signals, RFS. Except for test 1, the event that one of these tests will signal on the current point is dependent upon the signal events that have occurred on at least one preceding point. So, one does not know what is meant by RFS, and much of the discussion in terms of it is thus meaningless.
On page 252 the suggestion is made to apply runs tests to attributes charts. I feel that this is questionable advice, because for classical 3-sigma charts, unless samples are extremely large, the normal approximation to an attribute distribution is poor, especially in the tails of the distribution. The tails are particularly important for several of these tests.
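The tail behavior described here is easy to check numerically. In the sketch below, the subgroup size and in-control nonconforming fraction are my illustrative choices, not values from the book; it compares the exact binomial probability beyond a classical 3-sigma np-chart limit with the tail probability the normal approximation assumes:

```python
import math
from scipy import stats

n, p = 50, 0.10  # hypothetical subgroup size and in-control fraction nonconforming
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
ucl = mu + 3 * sigma  # classical 3-sigma upper limit for an np chart
lcl = mu - 3 * sigma  # negative here, so the chart has no effective lower limit

# Exact probability of a count beyond the upper limit vs. the
# normal-theory tail probability the 3-sigma limits assume
exact = stats.binom.sf(math.floor(ucl), n, p)  # P(count > UCL)
nominal = stats.norm.sf(3.0)                   # about 0.00135
print(f"exact upper-tail probability: {exact:.5f}  nominal: {nominal:.5f}")
print(f"lower limit: {lcl:.2f} (unreachable: counts cannot be negative)")
```

Even at this moderate subgroup size, the exact upper-tail probability is well above the nominal 0.00135, while the lower tail is empty entirely, which is exactly the asymmetry that undermines runs tests built on normal-theory zones.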
Chapter 8: Diffidence Analysis of Control Charts and Diffidence Charts, 27 pages
The stated purpose of this chapter, as given in the preface, is to serve as an aid in determining how large data sets must be to give reliable charts. The author makes the important point that larger data sets than are usually recommended are required for reliable charts when the charts are made using the classical 3-sigma control limits with plug-in estimation of parameters. In addition to the Quesenberry paper cited, this point has also been made by Ghosh, Reynolds, and Van Hui (1981) and in recent papers by Quesenberry (1998, 1999).
I feel, however, that some of the discussion in this chapter is misleading. The following statement appears on page 281. "In contrast to Wheeler, Quesenberry (1993) reached the same conclusion we do here. His argument is that using too few data causes statistical dependence between the subsequent false signals. This is true for the population of all possible control charts, but for any given chart, once control limits are set, one way or another, a process that is in control will always generate a stream of false signals that are statistically independent of each other (Trietsch and Bischak, 1998). Therefore, the core problem is not the statistical independence identified by Quesenberry."
To consider this issue, I note that to establish a chart one must first obtain a calibration data set. Suppose one takes m samples of size n each, for a total of mn observations, computes estimates of µ and σ, and plugs these estimates into the 3-sigma formulas for these parameters. To make an X̄-chart one then observes additional subgroups of size n, say, and plots the sample mean of each subgroup on the chart. Consider the chart when the ith subgroup is observed and its mean is plotted. The sample space that one must consider in order to design and study the properties of the chart is the full n(m + i)-dimensional Cartesian product space. The independence that the author mentions is only conditional independence given the calibration data. If Ai denotes the sequence of signals obtained by having a point beyond, say, the upper control limit, then it is shown in the Quesenberry reference cited that this sequence of events is dependent in this n(m + i)-dimensional sample space. It is this dependence, caused by using the same estimates in the control limits for all plotted points, that is indeed the core problem.
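This marginal dependence can be seen in a Monte Carlo sketch. In the simulation below, the calibration set (m = 5 subgroups of size n = 5) is deliberately small and is my choice for illustration, as is the use of a single pooled standard deviation as the plug-in estimate. Limits are estimated once per chart; over the full sample space the false-signal counts on future points are markedly overdispersed relative to what independent signal events would produce:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n, F, reps = 5, 5, 50, 50_000  # small calibration set; F future subgroups

# Calibration data: m subgroups of n observations from a stable N(0, 1) process
calib = rng.normal(size=(reps, m, n))
mu_hat = calib.mean(axis=(1, 2))
s_hat = calib.std(axis=(1, 2), ddof=1)  # pooled plug-in estimate of sigma

# 3-sigma limits for the subgroup mean, built from the estimates
half_width = 3 * s_hat / np.sqrt(n)

# Future subgroup means from the same in-control process
fut = rng.normal(scale=1 / np.sqrt(n), size=(reps, F))
signals = np.abs(fut - mu_hat[:, None]) > half_width[:, None]

pbar = signals.mean()
total = signals.sum(axis=1)
# If signal events were independent over the full sample space, the per-chart
# signal count would be binomial; shared estimation error inflates its variance.
dispersion = total.var(ddof=1) / (F * pbar * (1 - pbar))
print(f"false-signal rate: {pbar:.4f} (nominal 0.0027)")
print(f"dispersion ratio: {dispersion:.2f} (1.0 if signals were independent)")
```

Conditionally on one chart's calibration data the future signals are independent, just as Trietsch says; but marginally, because every plotted point shares the same estimated limits, the signals cluster by chart, which is the dependence the reviewer identifies as the core problem.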
On page 304 the author gives the formula (from Quesenberry (1993)) for the standard deviation of the estimate of the UCL obtained by plug-in estimation. After some discussion of what is called a diffidence band, the following statement is made: "The practical implication is that (at least for n = 5) it is practically impossible to identify out-of-control points with m < 4, or even m = 5, at least not reliably. Any scheme that suggests otherwise (Quesenberry (1991)) is flawed." This quotation has no validity. I think the so-called diffidence chart has little, if any, value for determining sample sizes, even for the case for which it was designed, viz., for classical charts made by plug-in estimation of parameters to estimate the control limits. Simpler and more readily understood results for studying the sample size issue for these classical charts are given in Quesenberry (1999), for example. The Quesenberry (1991) reference is to a paper in which a new class of charts, called Q charts, was introduced. For a stable process, these charts transform model residuals to independent standard normal statistics that are then plotted on charts with constant control limits of ±3. The diffidence chart discussed by the author of this text has no relevance to these Q charts. The properties of these charts have been studied in a series of papers, and much of this material is now summarized in a book (Quesenberry (1997)).
Chapter 9: Inspection Theory, 43 pages
This chapter is a discussion of some aspects of sampling inspection. It would probably be more interesting to those experienced in sampling inspection than to those requiring a basic introduction. Only about the last half of the chapter is concerned with actually designing and using sampling plans.
Experienced SPC practitioners will find some of the discussion in this book interesting. However, I think it would be difficult reading for those seeking an introduction to SPC, because the author often seems to assume that the reader has prior knowledge of the subject. There are no problems, which would be needed for self-study or for use as a class text. In any event, I recommend that any user skip Chapter 8, because it contains what I feel is erroneous, misleading material, as discussed above.
Ghosh, B. K.; Reynolds, M. R., Jr.; and Van Hui, Y. (1981). Shewhart X̄-Charts with Estimated Process Variance. Communications in Statistics - Theory and Methods 10, pp. 1797-1822.
Nelson, L. S. (1984). The Shewhart Control Chart Tests for Special Causes. Journal of Quality Technology 16, pp. 237-239.
Nelson, L. S. (1989). Standardization of Shewhart Control Charts. Journal of Quality Technology 21, pp. 287-289.
Quesenberry, C. P. (1997). SPC Methods for Quality Improvement. John Wiley & Sons, New York, NY.
Quesenberry, C. P. (1998). Statistical Gymnastics. Quality Progress 9, pp. 77-79.
Quesenberry, C. P. (1999). Statistical Gymnastics Revisited. Quality Progress 2, pp. 84-94.
Woodall, W. H. (2000). Controversies and Contradictions in Statistical Process Control. Journal of Quality Technology 32, pp. 341-378.
The following brief editors reviews are of new editions, collections of papers, or other books that may be of interest to some readers.
Connie M. Borror, Industrial and Management Systems Engineering, Arizona State University, Tempe, AZ 85287-5906.
Miller and Freund's Probability and Statistics for Engineers, 6th ed., by Robert A. Johnson. Prentice-Hall, Inc., Upper Saddle River, NJ, 2000. 622 + xii pp. $104.33.
THE first printing of Miller and Freund's Probability and Statistics for Engineers appeared in 1965. Since that time the text has gone through five revisions, each an improvement over the previous edition. The sixth edition, by Robert A. Johnson, provides the reader with many new case studies, new real data sets, and updated procedures. The layout of the text is easy to follow, and the material flows naturally from one topic to the next.
In the Preface Johnson states that the text has been tested extensively in courses for university students as well as in in-plant training of engineers. The result is a well-written and complete text (with over 950 exercises of varying difficulty) that could be used in a two-semester or three-quarter course. The topics are presented in enough detail (both in theory and in application) that the book could be used either as a main text or as a supplementary text in a special topics course. A prerequisite for this text is some background in calculus (one year of calculus, as suggested in the Preface).
The titles and number of pages for each chapter are provided.
The quality topics provided in Chapter 14 are up to date and provide a good introduction to the topic. The subsections of Chapter 14 are entitled: 14.1 Quality-Improvement Programs; 14.2 Starting a Quality Improvement Program; 14.3 Experimental Designs for Quality-Improvement; 14.4 Quality Control; 14.5 Control Charts for Measurements; 14.6 Control Charts for Attributes; 14.7 Tolerance Limits; and 14.8 Acceptance Sampling. Several of the topics in Chapter 14 are covered only briefly, but in enough detail to give the reader a good understanding of the basics.
Overall, the text includes many improvements in the areas of new, practical problems, case studies, and enhanced material in such areas as factorial experiments and curve fitting. Miller and Freund in its sixth edition would be an excellent text for a university course or for in-house/on-site training.
Minitab Handbook, 4th ed., by Barbara Ryan and Brian Joiner. Duxbury/Thomson Learning, Pacific Grove, CA, 2001. 464 + xvi pp. $29.95.
THE fourth edition of the Minitab Handbook has been updated for the 12th and 13th versions of Minitab. At the time of this printing, Minitab Version 13 had just become available, and the authors provide footnotes for the differences between the two versions. The authors' stated purpose for this text is "...to teach students how to use Minitab to analyze data." The target audience is obviously new, or relatively new, users of Minitab. For advanced users, some of the later chapters may provide helpful tips and insightful hints, but overall most will not find this text as useful as a new user of Minitab will.
The present edition of the text makes significant use of graphics, examples, and exercises. Numerous data sets are used for illustrations and examples, and complete data sets are available from the publisher at www.duxbury.com/datasets.htm. The fourth edition does not provide a complete overview of each tool available in Minitab, nor is that the authors' purpose. The text is an introduction to some of the basic statistical procedures featured in Minitab and to basic interpretation of the results. There are a number of exercises at the end of each chapter that incorporate many of the datasets available to the user. The chapter titles are provided here for completeness, with the number of pages in parentheses.
Overall, the text provides a good introduction and "tutorial" to Minitab versions 12 and 13. Again, the text does not give complete descriptions/discussions on all statistical procedures. For example, there is no discussion of I-MR control charts, EWMA control charts, CUSUM control charts, or quality topics such as process capability. But as an introductory text on Minitab procedures, I would recommend the Minitab Handbook. The text would be an excellent supplement for a statistics course that emphasizes the use of Minitab software.
ASQ Quality Press, 600 North Plankinton Avenue, Milwaukee, WI 53201-3005; (800) 248-1946; http://www.asq.org
Duxbury Press/ITP Press, 511 Forest Lodge Road, Pacific Grove, CA 93950-5098; (800) 423-0563; http://www.duxbury.com
Marcel Dekker, Inc., Cimarron Road, P. O. Box 5005, Monticello, NY 12701-5185; (800) 228-1160; http://www.dekker.com
Prentice-Hall, Inc., One Lake Street, Upper Saddle River, NJ 07458; (800) 282-0693; Fax (800) 835-5327; http://www.prenhall.com
Productivity, Inc., 541 NE 20th Ave, Portland, OR 97232; (800) 966-5423; Fax (503) 235-0909; http://www.ppress.com
World Scientific Publishing Co., Inc., 1060 Main Street, River Edge, NJ 07661; (800) 227-7562; Fax (888) 977-2665; http://www.worldscientific.com