
STATISTICS ROUNDTABLE

Follow the Rules

Don’t dismiss the Nelson Rules
when looking for causes of variation

by Lynne B. Hare

Despite appearances to the contrary, I am not old enough to have participated in the first discussions of control chart rules.

Walter Shewhart claimed he established three standard error limits on the basis of economics, not on the basis of probabilities.

But most likely, earlier practitioners carefully examined the probability of a single mean falling beyond the limits by chance alone. Under the assumption of normality, it is 0.0027, or 0.27%, at each sampling time.
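That figure is simply the two normal tail areas beyond three standard errors of the mean, with Φ denoting the standard normal distribution function:

\[
P\bigl(\lvert \bar{X} - \mu \rvert > 3\,\mathrm{SE}\bigr) = 2\bigl[1 - \Phi(3)\bigr] \approx 2(0.00135) = 0.0027.
\]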

To Shewhart’s way of thinking, the probability was not nearly as important as the action to be taken when the control chart signaled the presence of an assignable cause of variation.

Get busy and remove the cause, and work preventively to ensure the cause does not reappear.

Eight rules

During the early 1950s, engineers at Western Electric augmented Shewhart’s three standard error rules. Their additional rules were collected into a set of tests for assignable causes by Lloyd Nelson and published in one of his Technical Aids columns in the Journal of Quality Technology in 1984. The paraphrased rules can be found in the sidebar, "Eight Nelson Rules."

Eight Nelson Rules

  1. One point beyond three standard errors.
  2. Nine consecutive points on the same side of the center line.
  3. Six consecutive points increasing or decreasing.
  4. Fourteen consecutive points alternating up and down.
  5. Two of three consecutive points beyond two standard errors from the center line, on the same side.
  6. Four of five consecutive points beyond one standard error from the center line, on the same side.
  7. Fifteen consecutive points within one standard error of the center line.
  8. Eight consecutive points on either or both sides of the center line— with none within one standard error of the center line.
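A minimal sketch of the eight rules, written here in Python with NumPy purely for illustration, appears below. The function name nelson_flags, the standardization step and the convention of flagging the last point of each completed pattern are conveniences of the sketch, not features of any particular control charting package.

    import numpy as np

    def nelson_flags(x, center, se):
        """Return {rule number: boolean array}; True marks the point at
        which that rule's pattern is completed."""
        z = (np.asarray(x, dtype=float) - center) / se
        n = len(z)
        flags = {k: np.zeros(n, dtype=bool) for k in range(1, 9)}
        flags[1] = np.abs(z) > 3                   # Rule 1: beyond 3 SE
        diffs = np.sign(np.diff(z))                # +1 up, -1 down, 0 tie
        for i in range(n):
            w = z[max(0, i - 8): i + 1]            # last 9 points
            if len(w) == 9 and (np.all(w > 0) or np.all(w < 0)):
                flags[2][i] = True                 # Rule 2: 9 on one side
            d = diffs[max(0, i - 5): i]            # last 5 differences
            if len(d) == 5 and (np.all(d > 0) or np.all(d < 0)):
                flags[3][i] = True                 # Rule 3: 6 steadily up or down
            d = diffs[max(0, i - 13): i]           # last 13 differences
            if len(d) == 13 and np.all(d[:-1] * d[1:] < 0):
                flags[4][i] = True                 # Rule 4: 14 alternating
            w = z[max(0, i - 2): i + 1]            # last 3 points
            if len(w) == 3 and (np.sum(w > 2) >= 2 or np.sum(w < -2) >= 2):
                flags[5][i] = True                 # Rule 5: 2 of 3 beyond 2 SE, same side
            w = z[max(0, i - 4): i + 1]            # last 5 points
            if len(w) == 5 and (np.sum(w > 1) >= 4 or np.sum(w < -1) >= 4):
                flags[6][i] = True                 # Rule 6: 4 of 5 beyond 1 SE, same side
            w = z[max(0, i - 14): i + 1]           # last 15 points
            if len(w) == 15 and np.all(np.abs(w) < 1):
                flags[7][i] = True                 # Rule 7: 15 within 1 SE
            w = z[max(0, i - 7): i + 1]            # last 8 points
            if len(w) == 8 and np.all(np.abs(w) > 1):
                flags[8][i] = True                 # Rule 8: 8 with none within 1 SE
        return flags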

To encourage focus on corrective action and de-emphasize concentration on the rules, Nelson had large cards made to display the rules—complete with illustrative diagrams on one side and brief explanations on the other.

"One of the main objectives was to standardize on this schedule of tests so that discussion would be focused on the behavior of the process rather than on what test should be used," he said. It is not about the test. It is all about the process.

The explanatory notes on the reverse side of Nelson’s cards were about the process, not about rule probabilities. Note five, for example, teaches that rules five and six should be used if an early warning is desired, and note six teaches that rules seven and eight are tests for stratification.

If rules seven or eight raise flags, there are two or more sources of variation in the process. Workers are encouraged to figure out what those sources might be so that their effects can be removed from the process.
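To see how such a flag can arise, consider a toy example built on the nelson_flags sketch above: two streams sitting 1.5 standard errors on opposite sides of the center line and plotted alternately trip rule eight long before rule one says anything. The offsets, seed and sample size are arbitrary illustrations, not data from any real process.

    import numpy as np

    rng = np.random.default_rng(7)                 # arbitrary seed
    n = 200

    # Two streams 1.5 standard errors above and below the center line,
    # plotted alternately: almost no point lands within one standard
    # error of the center line.
    offsets = np.where(np.arange(n) % 2 == 0, 1.5, -1.5)
    mixed = offsets + 0.5 * rng.standard_normal(n)

    flags = nelson_flags(mixed, center=0.0, se=1.0)   # sketch defined earlier
    print("Rule 8 flags:", int(flags[8].sum()))       # typically many
    print("Rule 1 flags:", int(flags[1].sum()))       # typically few or none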

Clearly, the notes help with the detective work necessary for process improvement via the elimination of assignable sources of variation.

Operator error?

Sadly, in many applications, that message has been lost. Any rule violation has become a trigger for process adjustment on the part of an operator, who is burdened with many responsibilities in addition to control chart care and feeding. These adjustments may or may not be appropriate to the cause, and operators are given little training in appropriate corrective action.

Operators are not to blame. We must lay that at the feet of their management, whose ignorance of the lost opportunities for process improvement results from not understanding the value of process variation reduction.

In a recent application, I had discussions with chemists responsible for tracking a compound’s assays from one production lot to the next. Control chart instructions written into their standard operating procedure (SOP) required strict adherence to the eight Nelson Rules. Any rule violation triggered an expensive, time-consuming investigation. Because the SOP said nothing about interpreting the Nelson Rules, the chemists had little information to aid their investigations. What kind of relief could I provide?

Well, change the SOP, of course. But in the interim, perhaps the Nelson Rules could be relaxed. After all, there were hints that when rules two through eight were found to be inconvenient, they were sometimes ignored. Shhhhh. And given the way the rules’ process improvement messages were ignored, why bother using them?

I wondered what the chance of engaging in an unnecessary investigation would be when all rules are applied compared to the chance when only rule one is applied. We know the chance when only rule one is applied: it is 0.27%, as stated earlier. The chance when all rules are applied is difficult to compute because the rule events are correlated. But some popular software packages allow users to create control charts with rules of various types applied automatically.

Applying the eight Nelson Rules to a column of 100,000 random normal numbers, I found that rule one was violated 0.25% of the time, in close agreement with the theoretical value.

The combination of rules one through eight was violated 1.97% of the time, counting simultaneous multiple rule violations as a single violation. This suggests you are roughly eight times as likely to launch an unnecessary investigation when all eight Nelson Rules are applied.
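A rough way to repeat that exercise, reusing the nelson_flags sketch above, appears below. The seed and the per-point counting convention are illustrative choices, so the percentages will not match any given software package exactly; the point is the order of magnitude.

    import numpy as np

    rng = np.random.default_rng(2019)              # arbitrary seed
    z = rng.standard_normal(100_000)               # an in-control "process"

    flags = nelson_flags(z, center=0.0, se=1.0)    # sketch defined earlier

    rule1_rate = flags[1].mean()                   # theory: about 0.27%
    any_rule = np.zeros_like(flags[1])
    for k in range(1, 9):
        any_rule |= flags[k]                       # simultaneous hits count once
    combined_rate = any_rule.mean()                # compare: 1.97% reported above

    print(f"Rule 1 alone: {rule1_rate:.2%}")
    print(f"Rules 1-8:    {combined_rate:.2%}")
    print(f"Ratio:        {combined_rate / rule1_rate:.1f}x")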

Wouldn’t you conclude that ignoring the messages of the Nelson Rules can result in missing opportunities for process improvement and seeking sources of assignable cause variation unnecessarily?

Rest assured, the Western Electric staff who first considered these rules—and Lloyd Nelson, who assembled them in cohesive form—all knew what they were doing. The problem is, all too often, the rationale is unknown and the messages are ignored.

What a waste.


Bibliography

  • Nelson, Lloyd S., "The Shewhart Control Chart—Tests for Special Causes," Technical Aids, Journal of Quality Technology, Vol. 16, No. 4, October 1984, pp. 238-239.

Lynne B. Hare is a statistical consultant. He holds a doctorate in statistics from Rutgers University in New Brunswick, NJ. He is past chairman of the ASQ Statistics Division and a fellow of both ASQ and the American Statistical Association.

