Q: In Six Sigma, what is the significance of the 1.5 sigma shift? How do you arrive at this shift? How can you explain this shift?
Krishnan P M
A: It is generally accepted that Six Sigma capability requirements began with the idea that a process should be so capable as to allow 50% margins between process limits and specification limits.
This idea is attributed to Bill Smith around 1984. Up until that time, Motorola was operating in such a way that process limits were roughly equal to specification limits, resulting in a 0.27% defective rate. Smith noted that sometimes there were shifts in process levels that caused higher defective rates. By requiring a larger margin between process limits and specification limits, a process would be more resistant to these shifts.
Early in its Six Sigma history, Motorola developed the recommendation that a process characteristic should be centered within its specification range, and that its variation should be so small that upper and lower specification limits each fall six standard deviations from the process mean (see Figure 1). In other words, sigma should be so small that the process variation takes up only half of the specification range.
This was predicated on the idea that the process data falls within a normal distribution. If a process characteristic has a normal distribution with the mean centered between its specification limits and a standard deviation equal to one-twelfth of the specification range, then the proportion of parts falling outside the specification limits is 0.002 per million.
When Motorola launched its Six Sigma Challenge in January 1987, a document was issued introducing the idea that the process should be centered and that its variation should occupy no more than half of the specification width, so that the capability is greater than or equal to two. It also acknowledged that if the process mean happens to shift by as much as 1.5 sigma in one direction, then defective product occurs at a rate not exceeding 3.4 parts per million (see Figure 2). This value can be calculated by finding the probability that a standard normal variable exceeds 4.5.
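Both tail probabilities quoted above are easy to verify numerically. The following is a minimal sketch (not part of the original column) using only Python's standard math library; the centered process yields roughly 0.002 defects per million, and the 1.5 sigma shifted process roughly 3.4 per million:

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Centered Six Sigma process: each specification limit sits 6 sigma from the mean.
centered_ppm = 2 * upper_tail(6.0) * 1e6        # about 0.002 defects per million

# Mean shifted 1.5 sigma toward one limit: the near limit is now 4.5 sigma away,
# the far limit 7.5 sigma away (its contribution is negligible).
shifted_ppm = (upper_tail(4.5) + upper_tail(7.5)) * 1e6   # about 3.4 defects per million
```

The shifted figure is dominated entirely by the 4.5 sigma tail, which is why the column describes the calculation as simply finding the probability that a standard normal variable exceeds 4.5.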
A process with a capability of two is said to have a Six Sigma quality level, and the defective rate of 3.4 parts per million is the Six Sigma definition of world-class performance. The process capability is usually established by checking stability over a trial period using an XBar-R or XBar-S chart. If the process is stable, sigma is estimated from the short-term, within-subgroup variation given by the R or S chart.
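To illustrate that estimation step, here is a sketch using made-up subgroup data and hypothetical specification limits. It uses the standard unbiasing constant c4 (0.9213 for subgroups of size four) to convert the average subgroup standard deviation into a short-term sigma estimate:

```python
import statistics

# Hypothetical subgroup data: five subgroups of size four from a stable process.
subgroups = [
    [10.1, 9.9, 10.0, 10.2],
    [9.8, 10.0, 10.1, 9.9],
    [10.0, 10.3, 9.9, 10.1],
    [9.9, 10.0, 10.2, 10.0],
    [10.1, 9.8, 10.0, 9.9],
]

# Short-term sigma estimate: average within-subgroup standard deviation,
# corrected by the unbiasing constant c4 (0.9213 for n = 4).
c4 = 0.9213
s_bar = statistics.mean(statistics.stdev(g) for g in subgroups)
sigma_hat = s_bar / c4

# Capability: specification width divided by six times the short-term sigma.
# These specification limits are illustrative only.
LSL, USL = 9.0, 11.0
Cp = (USL - LSL) / (6 * sigma_hat)
```

With this invented data, sigma_hat comes out near 0.15 and Cp near 2.2, which would qualify as a Six Sigma quality level under the definition above.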
Does a capability of two guarantee a process will produce no more than 3.4 defects per million? Of course not. But, with proper process monitoring, a shift of 1.5 sigma can be detected in a timely fashion and corrected. If the process is monitored by an XBar-S chart with subgroups of size four, then a shift of 1.5 sigma in one direction will be detected, on average, within two subgroups. There’s a probability exceeding 90% the shift will be detected within four subgroups.
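The detection figures quoted above can be checked with a short calculation. Assuming only the standard normal model and conventional three-sigma control limits on the XBar chart, a 1.5 sigma shift with subgroups of size four moves the plotted averages by 1.5 times the square root of four, or three standard errors:

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable Z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

n = 4          # subgroup size
shift = 1.5    # shift of the process mean, in units of sigma

# On the XBar chart the shift is shift * sqrt(n) = 3 standard errors, so a
# subgroup average now sits right at the near three-sigma control limit.
p_signal = upper_tail(3 - shift * math.sqrt(n)) + upper_tail(3 + shift * math.sqrt(n))

arl = 1 / p_signal                      # average run length: about 2 subgroups
p_within_4 = 1 - (1 - p_signal) ** 4    # about 0.94: detected within 4 subgroups
```

The signal probability per subgroup is essentially one-half, giving an average run length of two subgroups and a better than 90% chance of detection within four, matching the figures in the answer.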
Having said all of this, the true origin of the choice of 1.5 as the allowable sigma shift remains somewhat of a mystery. Some accounts suggest that, based on experience, Smith found that a safety cushion of 50% on either side of the mean was necessary to absorb likely variations of the process mean. Other accounts trace it to a misinterpretation of the 1.5 inflation factor suggested by Arthur Bender for tolerance stacking.
Whatever its true origin, the 1.5 sigma factor can be thought of as an empirically based factor that prevents an overestimation of lifetime capability. Today, many professionals think of it as a safety factor that captures unexpected sources of variation or that compensates for lack of conformity to a normal model.
We strive to eliminate all sources of special cause variation, and ideally our processes should never drift or shift levels. But the reality is they sometimes do. If the short-term variation is such that the process has a Six Sigma quality level—namely, a capability of two—then we are fairly certain we can catch drifts and shifts before they degrade the process too drastically.
If the process output is moderately non-normal, the 1.5 sigma shift factor might also be useful in bounding the overall defective rate. If the characteristic of interest has a distribution that is strikingly non-normal, the analyst should abandon normal-based theory and deal with the non-normal distribution directly.
Marie A. Gaudard and Philip J. Ramsey
North Haven Group LLC
Q: What is the difference between process sigma and standard deviation?
A: Process sigma is a measure of process performance, usually expressed relative to customer specifications. It indicates the process capability and the likelihood of meeting customer expectations. The process sigma is calculated from process sample subgroups, which give a clear picture of how a particular production lot is performing.
There are a few assumptions here: The process is stable, meaning all variation comes from common causes within the process rather than from external special causes, and the observations are independent and not correlated with any other parameter.
Standard deviation of a probability distribution is a measure of the spread of its values around the mean and is measured in the same units as the data. If many data points are close to the mean, the standard deviation is small. If many data points are far from the mean, the standard deviation is large. If all data values are equal, the standard deviation is zero. Interpreting the standard deviation in terms of specific coverage proportions (for example, that about 99.73% of values fall within three standard deviations of the mean) is valid only when the process output is normally distributed.
The process sigma describes what we want the stable process to achieve relative to the specifications, while the standard deviation describes how the data actually spread. The standard deviation should be small enough to support our process sigma target; if the spread is wide relative to the specifications, the variability of our results is going to be high.
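One common convention, assumed here rather than stated in the answer above, reports the process sigma level by converting a long-term defect rate to a standard normal quantile and adding back the 1.5 sigma shift allowance discussed in the previous question. A minimal sketch:

```python
from statistics import NormalDist

def process_sigma(dpmo, shift=1.5):
    """Process sigma level from long-term defects per million opportunities.

    The normal quantile of the long-term defect rate, plus the conventional
    1.5 sigma shift allowance, gives the reported sigma level.
    """
    p_defect = dpmo / 1e6
    z_long_term = NormalDist().inv_cdf(1 - p_defect)
    return z_long_term + shift

six_sigma = process_sigma(3.4)      # about 6.0: 3.4 DPMO is Six Sigma quality
three_sigma = process_sigma(66807)  # about 3.0: a typical three-sigma process
```

Under this convention, the 3.4 defects per million of a Six Sigma process maps back to a sigma level of six, tying the process sigma metric to the standard deviation through the normal model.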
David Bonyuet, CQA
Q: Help settle a bet. In reference to the ISO quality standard, my manager says ISO stands for the International Organization for Standardization. I told him that doesn’t make sense because the letters are in the wrong sequence. He said that in French (since the organization is in Switzerland), the abbreviation for the name of the organization comes out as ISO instead of IOS. I say it’s the International Standards Organization. Who’s right?
A: Your manager is partially correct. The organization to which you are referring is called the International Organization for Standardization.
According to its website, the organization’s name would have a different acronym depending on which language was being used (IOS in English, and OIN in French for Organisation Internationale de Normalisation). To avoid any confusion, its founders decided to give it a short, all-purpose name. They chose ISO, derived from the Greek isos, meaning equal. Whatever the country, whatever the language, the short form of the organization’s name is always ISO.
In addition to the ISO 9001 quality management standard and the ISO 14001 environmental standard, the International Organization for Standardization has developed more than 16,500 standards that cover a variety of subjects. Its catalog of offerings can be found at www.iso.org/iso/iso_catalogue.htm.
Sr. manager, performance management