The Shainin System™ (SS) for quality improvement was developed over many years under the leadership of the late Dorian Shainin.
SS is also called Statistical Engineering by the consulting firm Shainin LLC, which holds the trademark, and the Red X® strategy in parts of the automotive sector, where SS is popular. The overall methodology has not been subject to critical review, although some of its components have been discussed extensively.
The Shainin System was developed for and is best suited to problem solving on operating, medium to high volume processes where data are cheaply available, statistical methods are widely used and intervention into the process is difficult. It has been mostly applied in parts and assembly operations.
The underlying principles of SS can be placed in two groups. The first group follows from the idea that there are dominant causes of variation. This idea appears in Juran and Gryna [1], but it is Shainin who fully exploits this concept. The second group of principles is embedded in the algorithm, the Shainin System™, shown in Figure 1.
A fundamental tenet of SS is that, in any problem, there is a dominant cause of variation in the process output that defines the problem. This presumption is based on an application of the Pareto principle to the causes of variation.
Juran and Gryna [1] define a dominant cause as ‘‘a major contributor to the existence of defects, and one which must be remedied before there can be an adequate solution.’’ In SS, the dominant cause is called the Red X®. The emphasis on a dominant cause is justified since ‘‘the impact of the Red X is magnified because the combined effect of multiple inputs is calculated as the square root of the sum of squares’’ [2]. To clarify, if the effects of causes (i.e., process inputs that vary from unit to unit or time to time) are independent and roughly additive, we can decompose the standard deviation of the output that defines the problem as:

stdev(output) = sqrt( stdev(due to cause 1)² + stdev(due to cause 2)² + … )   (1)
A direct consequence of (1) is that the output standard deviation cannot be reduced substantially by identifying and removing or reducing the contribution of a single cause, unless that cause has a large effect.
For example, if stdev(due to cause 1) is 30 percent of stdev(output), eliminating the contribution of this cause completely reduces stdev(output) by only about 5 percent. The assumption that there is a dominant cause (possibly because of an interaction between two or more varying process inputs) is unique to SS and has several consequences in its application.
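The arithmetic behind this example can be checked directly from the root-sum-of-squares relationship in (1). The sketch below uses illustrative numbers (total stdev scaled to 1.0, a single cause contributing 30 percent of it) rather than data from any real process.

```python
import math

# Equation (1): with independent, roughly additive causes, the output
# standard deviation is the root sum of squares of the contributions.
def combined_stdev(cause_stdevs):
    return math.sqrt(sum(s ** 2 for s in cause_stdevs))

# Illustrative values: total output stdev 1.0; cause 1 contributes 30% of it.
total = 1.0
cause1 = 0.3 * total
rest = math.sqrt(total ** 2 - cause1 ** 2)  # combined contribution of all other causes

# Eliminating cause 1 entirely leaves only the remaining causes.
after = combined_stdev([rest])
reduction = 1 - after / total
print(round(reduction, 3))  # ~0.046, i.e. only about a 5 percent reduction
```

This is why, under (1), a cause must be dominant before removing it is worth the effort: the squares discount the smaller contributors heavily.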
Within SS, there is recognition that there may be a second or third large cause, called the Pink X™ and Pale Pink X™ respectively [3], that makes a substantial contribution to the overall variation and must be dealt with in order to solve the problem. Note that if there is not a single dominant cause, reducing variation is much more difficult, since, in light of (1), several large causes would have to be addressed to substantially reduce the overall output variation.
To simplify the language, we refer to a dominant cause of the problem, recognizing that there may be more than one important cause.
There is a risk that multiple failure modes contribute to a problem, and hence result in different dominant causes for each mode.
In one application, a team used SS to reduce the frequency of leaks in cast iron engine blocks. They made little progress until they realized that there were three categories of leaks, defined by location within the block. When they considered leaks at each location as separate problems, they rapidly determined a dominant cause and a remedy for each problem.
SS uses a process of elimination [3], called progressive search, to identify the dominant cause. Progressive search works much like a successful strategy in the game ‘‘20 questions,’’ where users attempt to find the correct answer using a series of (yes/no) questions that divide the search space into smaller and smaller regions.
To implement the process of elimination, SS uses families of causes of variation. A family of variation is a group of varying process inputs that act at the same location or in the same time span. Common families include within-part, part-to-part (consecutive), hour-to-hour, day-to-day, cavity-to-cavity and machine-to-machine.
At any point in the search, the idea is to divide the inputs remaining as possible dominant causes into mutually exclusive families, and then to carry out an investigation that will eliminate all but one family as the home of the dominant cause.
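One common way such an investigation plays out is a comparison of the within-part and part-to-part families. The sketch below is hypothetical (the data are simulated, and the sampling plan is invented for illustration): if the part-to-part variation is much larger than the within-part variation, the within-part family can be eliminated and the search narrowed to inputs that vary from part to part.

```python
import random
import statistics

random.seed(1)

# Simulated measurements: 30 consecutive parts, 4 locations per part.
# In this illustration the part-to-part family dominates (stdev 1.0)
# over the within-part family (stdev 0.2).
parts = []
for _ in range(30):
    part_level = random.gauss(0, 1.0)  # part-to-part family
    parts.append([part_level + random.gauss(0, 0.2) for _ in range(4)])  # within-part

# Estimate the variation contributed by each family.
within_sd = statistics.mean(statistics.stdev(p) for p in parts)
part_to_part_sd = statistics.stdev(statistics.mean(p) for p in parts)

# A much larger part-to-part estimate eliminates the within-part family
# as the home of the dominant cause.
print(within_sd < part_to_part_sd)
```

Each such comparison rules out one family, mirroring the 20-questions strategy of halving the remaining search space with every investigation.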
This excerpt is from Steiner, Stefan H., MacKay, R. Jock and Ramberg, John S. (2008) 'An Overview of the Shainin System™ for Quality Improvement', Quality Engineering, 20:1, 6–19.