ASQ - Statistics Division

Is Deming's Red Bead Experiment Misleading? – Mini-Paper from the Jan 2013 Newsletter

Keywords: Red beads, Rating employees, Common cause, Sample size, Power



It appears that the author never actually researched control charts and their purpose. I would give the author an F and assign him to do his research, with a special focus on the difference between enumerative studies and analytic studies.

Deming's red bead experiment is NOT a hypothesis test on the differences between operators. It was simply, yet profoundly, a DEMONSTRATION that random variation from a stable process (one that clearly had NO operator influence) produces variation that too many managers and engineers ascribe to assignable causes, the laziest of which is to 'blame' the operator.
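The demonstration is easy to reproduce in code. Below is a minimal simulation sketch assuming the classic setup (a bucket that is 20% red beads, a 50-bead paddle, six willing workers over four days; the seed and worker labels are illustrative, not from the original experiment). Every count comes from one and the same binomial process, yet np-chart limits computed from the data typically contain every point, leaving nothing for a control chart user to "explain."

```python
# Red bead experiment as an np-chart demonstration: identical process,
# no operator effect, all variation is common cause.
import random

random.seed(1)                     # illustrative seed
N_DRAW = 50                        # beads per paddle draw
P_RED = 0.20                       # true fraction of red beads
WORKERS = ["W1", "W2", "W3", "W4", "W5", "W6"]
DAYS = 4

# Each day's "output" per worker is a pure binomial draw.
counts = [(day, w, sum(random.random() < P_RED for _ in range(N_DRAW)))
          for day in range(1, DAYS + 1) for w in WORKERS]

# np-chart center line and 3-sigma limits, estimated from the data.
p_bar = sum(c for _, _, c in counts) / (len(counts) * N_DRAW)
center = N_DRAW * p_bar
sigma = (N_DRAW * p_bar * (1 - p_bar)) ** 0.5
ucl, lcl = center + 3 * sigma, max(0.0, center - 3 * sigma)

print(f"center = {center:.1f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")
for day, w, c in counts:
    flag = "  <-- special cause?" if not (lcl <= c <= ucl) else ""
    print(f"day {day}  {w}: {c:2d} red beads{flag}")
# Ranking or rewarding workers on these numbers is rewarding noise.
```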

Back to school for the author.

If I had the option I would have selected 0 stars...



--Bev Daniels, 02-17-2013

The paper seems to confuse, or worse, equate statistical significance with practical importance. Control charts aren't hypothesis tests. They are practical decision rules for guiding management action. A signal of special cause on the control chart is taken as something of practical importance that management should respond to.
Yet this paper re-analyzes the red bead/control chart data with "more advanced" methods, such as ANOVA. (What constitutes 'more advanced' is left unanswered; 'different' may have been a better term.) Using the wrong analytical approach is still wrong, regardless of how 'advanced' it is. Nevertheless, the switch to ANOVA permits a subtler switch: statistical significance becomes a stand-in for practical importance. The paper then goes on to show there was insufficient power in the red bead experiment to detect statistically significant differences among workers or replications.
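To see what the switch buys, consider a quick sketch (assuming scipy is available; the simulated counts below stand in for the published data, which are not reproduced here) that runs a one-way ANOVA across workers all drawing from the same 20%-red bucket:

```python
# One-way ANOVA on simulated red bead counts: six workers, four daily
# draws each, all from the identical binomial process. Assumes scipy.
import random
from scipy.stats import f_oneway

random.seed(3)                     # illustrative seed
N_DRAW, P_RED, DAYS = 50, 0.20, 4

groups = [[sum(random.random() < P_RED for _ in range(N_DRAW))
           for _ in range(DAYS)] for _worker in range(6)]

f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# There is genuinely no worker effect to find, so a large p-value is
# the expected result -- and a small p-value in some other data set
# would not by itself make a difference practically important.
```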
The author asks: "Could it be that these experiments were under powered to detect what might be important process effects?" The answer is "No." But they may have been underpowered to detect statistically significant differences. Fortunately, few practitioners care about generating statistically significant results for their own sake. The whole point of experimentation is making findings of practical, scientific importance, not calculating probabilities. On that point, Moen's "Quality Improvement Through Planned Experimentation" remains one of the best.
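As an aside, a rough Monte Carlo sketch shows what detecting "statistically significant" worker differences would actually demand (the 25% rate for the hypothetical second worker is an invented effect size, chosen only to illustrate the point):

```python
# Monte Carlo power of a two-proportion z-test: one worker drawing from
# a 20%-red bucket vs. a hypothetical worker at 25%, n beads each.
import math
import random

random.seed(2)                     # illustrative seed

def rejects(x1, n1, x2, n2):
    """Two-sided pooled z-test for two binomial proportions, alpha = 0.05."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return se > 0 and abs(x1 / n1 - x2 / n2) / se > 1.96

def power(n, p1=0.20, p2=0.25, trials=2000):
    hits = sum(rejects(sum(random.random() < p1 for _ in range(n)), n,
                       sum(random.random() < p2 for _ in range(n)), n)
               for _ in range(trials))
    return hits / trials

for n in (50, 500, 2000):
    print(f"n = {n:4d} beads per worker -> power ~ {power(n):.2f}")
# With one 50-bead paddle per worker, power is tiny; thousands of draws
# would be needed. That says nothing against the demonstration's lesson
# about common-cause variation.
```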

--Robert Gerst, 01-28-2013