What Is the Future of Six Sigma?



Kenneth S. Stephens
Retired professor of quality management, Southern College of Technology

First, I will assume Six Sigma is being used in the majority of places and applications as a system, an approach or a process for investigation to achieve better design, control and improvement of operations, services and products. I assume it utilizes any and all of the tools, techniques, principles and philosophies of quality (and other) disciplines.

That being the case, I am not too concerned with what you call it. But if the past is a predictor of the future, the name Six Sigma will be replaced before long. As with total quality management (TQM), a name can mean different things to different people. Over the years, names given to these programs have been merely handles (buzzwords, if you will) for discussion, communication, promotions, developments, teaching and applications. Programs with the Six Sigma and TQM labels have met with success. Others with the same labels have met with failure.

Programs of similar content have also gone under many different labels (often because of the “not invented here” syndrome). Labels such as integrated quality management, strategic quality management, total quality, reengineering, whole system architecture and even quality circles (never really understood by American management), statistical process control (SPC) and statistical quality control have been around and still linger in places.

Is it possible American management is fickle? Managers and corporations seem to tire of a name even though the one that replaces it embraces the same philosophies, principles, tools and techniques. Is it any wonder the name Six Sigma has now emerged? To me, it is unfortunate we use a label that otherwise represents a specific statistical phenomenon to describe a whole sphere of statistical, managerial, engineering and other procedures.

As a quality professional, I am disturbed by an aspect of Six Sigma, as presently practiced, that goes beyond the name. The quality profession for many years (since 1946 for ASQ) has endeavored to maintain accuracy of principles, tools and techniques. This, unfortunately, is not being maintained by some practices of Six Sigma. I refer to this as the Six Sigma dilemma.

The Dilemma

A component or finished product has an important variable quality characteristic for which two specification limits—upper and lower—have been specified. The limits are based on sound engineering principles, considering the use of the component alone and in combination with others, as well as customer agreement. The process to produce this component and the associated quality characteristic has been designed, implemented and confirmed by process capability studies to have a variability following the normal distribution, centered between the specification limits, with each limit 6 standard deviations from the mean (that is, 12 standard deviations between the limits).
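In process capability terms, a centered process with each limit 6 standard deviations from the mean corresponds to Cp = Cpk = 2.0. A minimal sketch in Python (the specification limits and sigma below are illustrative numbers, not from the article):

```python
def cp(usl, lsl, sigma):
    """Process capability: spec width over six standard deviations."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Capability index: distance from mean to nearest limit over 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Illustrative process: limits at 0 and 12, sigma = 1, mean centered at 6.
print(cp(12.0, 0.0, 1.0))        # 2.0
print(cpk(12.0, 0.0, 6.0, 1.0))  # 2.0 (centered)
print(cpk(12.0, 0.0, 7.5, 1.0))  # 1.5 (mean shifted 1.5 sigma)
```

A 1.5 sigma shift of the mean drops Cpk from 2.0 to 1.5, which is exactly the shifted-mean scenario discussed later in this response.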

Hence, the expected proportion of components outside the upper or lower specification limit is 3.4 parts per million (ppm)—true or false?

This should be an obvious “false.” It can be readily confirmed by entry into a table of the normal distribution carried out to the 6 sigma level or with the use of a calculator with a normal distribution algorithm, such as the Hewlett-Packard (HP) 48x. This yields a figure of 9.865876 E-10, or 9.865876 x 10^-10, or 0.000000000987, or 0.000987 ppm (0.987 parts per billion) beyond one limit. Even doubled to cover both tails, that is roughly 0.002 ppm, nowhere near 3.4 ppm.
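The figure is easy to reproduce without special tables. A minimal Python check using only the standard library (the helper name is mine):

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

tail = upper_tail(6.0)
print(tail)            # ~9.8659e-10 beyond one 6 sigma limit
print(2 * tail * 1e6)  # ~0.00197 ppm outside both limits combined
```

The same one-liner at z = 4.5 reproduces the 3.4 ppm figure discussed below, which is where the confusion originates.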

Unfortunately, an alarming number of people (even some trained in Six Sigma methodology) would answer the true or false question above with an erroneous “true.”

Equally unfortunate is the fact that too many textbooks and papers on related subjects lead some people to the wrong answer to the true or false question. For example, take the text Integrating Reengineering With Total Quality, by Joseph N. Kelada (Milwaukee: ASQ Quality Press, 1996). The book contains a table (10.1) on p. 252 listing 3.4 as the number of defects per million opportunities for error and 6 as the corresponding associated sigma level. The text contains absolutely no explanation of the conditions or assumptions associated with these values.

Is it any wonder we have a proliferation of erroneous statements and misunderstanding concerning application of Six Sigma principles?

Where does 3.4 ppm come from, then, in its association with the Six Sigma methodology? If we take the component example and further allow a possible shift in the mean of ±1.5 sigma (standard deviations)—perhaps owing to longer-term variation with different sources of supply and other conditions that cause the mean to shift—a minimum of 4.5 sigma will remain between the process mean and the nearer specification limit. The mean would then be 4.5 sigma from the limit in the direction of the shift and 7.5 sigma from the limit on the opposite side.

Returning to the table of the normal distribution or the HP 48x calculator, you can see the proportion beyond 4.5 sigma from the mean of the normal distribution is 3.397673 E-6, or 0.000003397673, or 3.397673 ppm. This is the proportion outside one side of the normal curve. If process variation were such that the specification limits were at ±4.5 sigma with the mean centered between them, there would be a (3.4 x 2), or 6.8, ppm total defect rate.
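The shifted-mean arithmetic can also be checked numerically. A short Python sketch (standard library only; the function name is mine) that reproduces the 3.4 ppm figure:

```python
import math

def phi(z):
    """Standard normal CDF, P(Z <= z)."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

# Mean shifted 1.5 sigma toward the upper limit of a +/-6 sigma process:
near_tail = 1 - phi(4.5)  # beyond the limit now 4.5 sigma away
far_tail = phi(-7.5)      # beyond the limit now 7.5 sigma away (negligible)
print((near_tail + far_tail) * 1e6)  # ~3.4 ppm, essentially all from the near tail
```

The far tail contributes on the order of 10^-14 and can be ignored, which is why the 3.4 ppm convention quotes only the one-sided 4.5 sigma tail.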

When you take an attribute, such as count of defects or defectives, rather than a variables measurement, and associate 3.4 ppm with 6 sigma, a very serious distortion of statistics takes place—true or false?

This is a resounding “true.” Yet too many quality practitioners are letting this distortion occur, and even people who know better are turning a blind eye. To quote a quality professional with whom this concern was shared, “This is one of the more elusive areas of Six Sigma. I believe Motorola did all a great disservice by miscommunicating from the start and perhaps thereby hopelessly confusing the masses (and even some of us who are not in the masses).”

In the long run this dilemma is not at all beneficial to the quality profession or, for that matter, even to the name of Six Sigma, whatever it entails. The situation needs immediate correction to curb the trend of the “dumbing” of quality. I see and hear too many erroneous statements pertaining to capability indices, zero defects, zero acceptance numbers, control charts, designed experiments, gage repeatability and reproducibility, and other quality tools.


Charles A. Aubrey
Director, quality and process management, Sears Customer Care Network

I’m sure many of you have participated in a number of quality strategies and approaches. These include SPC, ISO 9000, quality circles, self-assessment, benchmarking, Malcolm Baldrige National Quality Award criteria for performance excellence, Hoshin planning, TQM and Six Sigma, to name some significant ones.

All these strategies and approaches are alive and well today in many organizations worldwide. However, many more organizations have “been there and tried that” unsuccessfully, and for them the strategies quickly became a fad. It is often reported that more than 90% of TQM programs have failed.

I believe there are three root causes for such a great number of failures, which unfortunately may be repeated in many organizations using Six Sigma. I have witnessed these failures firsthand during my own experiences and observations of other attempts to implement these strategies. I have also had the opportunity to successfully implement Six Sigma in many organizations while at the Juran Institute, American Express and Sears, Roebuck and Co.

The first and foremost root cause is that the implementers do not have the proper knowledge of how the strategy or approach works and what needs to be done to make it successful. The person tapped to make this happen could be someone integral to the company. The individual may do his or her best, but the implementation fails because of a lack of training or experience, or the resistance to seeking qualified advice. It could also be the result of hiring a consultant who does not have the appropriate knowledge or experience.

Joseph M. Juran draws the parallel of quality to finance. Failure to have educated, experienced and certified financial experts (like certified public accountants) managing finances has led to organizational failure. Assigning a quality role to an individual without the proper education or training will lead to the same kind of failure. This analogy also applies to the process of recruiting an employee or a consulting company. Always check references thoroughly to confirm demonstrated or sustained success in implementing and carrying out the strategy or approach you want to use.

The second reason for failure is related to the knowledge problem but is worth noting separately. Organizations and their senior management teams must see and be able to touch the economic benefits of a quality initiative to continue to believe in and support it. Once the strategy or approach is implemented, it must clearly demonstrate its economic impact. For example, with Six Sigma we may complete a project that significantly reduces defects, which means partial success. However, to achieve full success we must determine the actual cost reduction and revenue increase caused by the project.

I believe the last root cause is a lack of senior management involvement and support. Top management may start out believing in the potential success of a new process. However, if either of the first two failure modes occurs, senior management often bails out. In many cases top management has already experienced a failure with one of the previous strategies or approaches and will resist Six Sigma because of the philosophy of “been there and done that.” If this is the case and you know you can avoid the first two failure modes, then try a pilot Six Sigma process and prove to top management Six Sigma can work in your organization.

I hope my words of caution may help some of you avoid these pitfalls. The past pattern of unsuccessful implementation is clear, and the challenge is before us. It is up to each of us who lead our organizations’ Six Sigma strategies to bring the right knowledge and experience resources to the process to assure success. If you’ve been tapped for the Six Sigma job, get the proper education. Partner with your financial experts to determine the economic impact. Be sure the results are demonstrated to senior management. Keep them involved, and they will continue to support Six Sigma.

We will only know for sure in the next three to five years whether Six Sigma will be a sustained strategy or a fad.


Larry R. Smith
Manager, quality deployment and training, Ford North American Truck

I believe the future of Six Sigma involves growth in three areas:

  1. Building Six Sigma capability into the product development process.
  2. Improving and enhancing the methodology.
  3. Adding concepts associated with eco-effective design.

The first transition involves moving the concepts of Six Sigma from the back end of the product development process to the front with implementation of Design for Six Sigma (DFSS). When Ford began looking at Six Sigma, we had the privilege of talking with management at companies like GE Aircraft Engines and Bombardier. When they incorporated DFSS into their product development processes, not only did they find payoffs in quality, cost and customer satisfaction, but productivity also improved. Both companies found they could use 50% fewer engineering resources if they designed the product right the first time.

The Six Sigma methodology will also be enhanced with tools such as Genichi Taguchi’s concepts of parameter and tolerance design, Genrich Altshuller’s TRIZ (a Russian acronym for “theory of inventive problem solving”) and Nam P. Suh’s axiomatic design. (For more information, see “Six Sigma and the Evolution of Quality in Product Development” on p. 28 of this issue.) Numerous case studies show these methods can make order-of-magnitude improvements in results. Companies that do not use the techniques will simply not be able to compete with others that do.

Eventually I believe Michael Braungart and Bill McDonough’s idea of eco-effective design will be incorporated into the DFSS process. This concept means designing in a way that fosters healthy and prosperous conditions for humans and ecological systems by reusing materials and components in natural biological or technical cycles. It only makes sense to design products that improve the health of both your customers and their environment.


Do you agree or disagree with these comments? Or would you like to recommend a question for a future issue? Please send your suggestions to the editor at godfrey@asq.org.

Six Sigma Forum Magazine Cover
