Six Sigma, Innovation Can and Do Co-exist
A very important question was raised in the “Up Front” column (August, p. 6)—one that is misunderstood by many people who overzealously adopt a single improvement process, tool or other technique. The editor asked, “Can Six Sigma and innovation work within the same four walls?” I believe the answer is unequivocally, “Yes.”
This discussion, which is far from new, always reminds me of a pamphlet that was written by former ASQ President Dana Cound in 1986 for National Quality Month. The theme was that “control is the antithesis of improvement.” Although this pamphlet is far from the only publication available that addresses this balance, I always have considered it one of the best sources for a brief discussion of this all-important topic for quality practitioners. The writings of Dr. Joseph Juran, in particular, present much more in-depth information on this concept.
Fundamentally, it is important for us to understand that sustainable business success requires an ongoing cycle. Periods of breakthrough, where the focus is on changing the process mean to achieve more effective and efficient performance (for customers, for the business, or for both), must be followed by periods of incremental improvement and control, where the focus is on stabilization and variability reduction.
Lean and Six Sigma methods primarily are associated with the incremental improvement and control phase of the process’s life cycle. Although there are adaptations of the Six Sigma process intended to be used in association with the design of new products, services and processes, which often involve innovation, these methods certainly are not intended to be major factors in driving innovation. On the other hand, the Six Sigma methods certainly do not stand in the way of innovation, either.
The key, of course, is to use Six Sigma methods in conjunction with other approaches to provide a comprehensive set of tools that create an upward spiral of breakthrough and incremental improvement, as described by Dr. Juran. It appears that the BusinessWeek author, and quite possibly the researchers and scientists from 3M, had fallen into the famous trap described by noted industrial psychologist Abraham Maslow, “He that is good with a hammer tends to think everything is a nail.” Hopefully, they’ll consider adding some tools to their kit, so they can work with nuts, bolts and other hardware, too.
Deborah Hopen Associates, Inc.
Federal Way, WA
Innovation Must Address Business Goals
I have lived on both sides of the Six Sigma-for-innovation fence. One thing is certain: the discipline of design for Six Sigma (DFSS) and the creativity of raw innovation both have a rightful place in many corporate environments. The problem arises when the two are mixed in the wrong manner. DFSS essentially focuses on doing the correct projects correctly; there are scores of books describing great ideas that enjoyed no sales and crummy ideas that were executed flawlessly.
Conversely, raw innovation is very important as well, but eventually must be tied back to a business goal (or two). One leaves invention for the sake of invention to the universities. Corporate innovation must have at least some semblance of accountability; the balance is difficult—but rewarding—to achieve.
Proper (for-profit) innovation rests not on any one "methodology"; one must take a holistic view of product development.
The writer of the BusinessWeek article was a typical journalist: seeing what he wanted to see through his blinders.
Six Sigma Can Impede Innovation
I agree that Six Sigma can slow the pace of innovation when a company has systemic weaknesses in evaluating and implementing the riskier ideas that are essential for quick breakthroughs. If a risk-averse corporate culture implements process improvements only through the deep-diving, consensus-building exercise Six Sigma entails, innovation will be slower. If companies depend on quick, high-impact fixes, they may produce some brilliant innovations, but the risk can be ruinous to their long-term business plans. The most innovative and successful corporations must have systems that separately analyze and accept the risk each of these two disparate methods involves. The foundation for business success usually comes down to how much risk a corporate culture is willing to accept and how it manages that risk, not to a single methodology in its tool box.
Practical Advice On Youden Plot
Kudos to Lynne B. Hare for writing an excellent and simple-to-understand explanation of the Youden plot (Statistics Roundtable, August 2007, p. 64). In the spirit of the article "It's Not Always What You Say, But How You Say It," I would suggest the term "expected value" (the center of the Youden circle) be replaced with "consensus value" for interlaboratory data, because that term is easily understood and commonly accepted by industrial chemists, as opposed to a statistical expectation. Of course, strictly speaking, the mode and not the median should be used to arrive at the "consensus" value. However, the assumption that the data aren't bimodal or multimodal is reasonable, so the median will likely suffice in more than 95% of the situations encountered.
Besides, use of the median will also alert the analyst to bimodal data, because the plot will show two distinct clusters lying along the parity line.
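The median-based consensus value described above can be sketched in a few lines. This is a minimal illustration, not the ASTM or QP procedure; the lab names and measurement values are invented, and the split of each lab's deviation into a systematic component (along the 45-degree parity line) and a random component (across it) follows the standard Youden-plot interpretation.

```python
# Hedged sketch: a median "consensus value" for a Youden-style
# interlaboratory comparison. Labs and numbers are illustrative only.
from statistics import median

# Each lab measures the same two similar samples (A and B).
results = {
    "lab1": (10.2, 10.4),
    "lab2": (9.8, 10.0),
    "lab3": (10.1, 10.3),
    "lab4": (10.6, 10.9),
    "lab5": (9.9, 10.1),
}

a_vals = [a for a, _ in results.values()]
b_vals = [b for _, b in results.values()]

# The median across labs serves as the consensus value for each sample;
# the pair (consensus_a, consensus_b) is the center of the Youden circle.
consensus_a = median(a_vals)
consensus_b = median(b_vals)

# Distance along the parity line reflects systematic (between-lab) error;
# distance across it reflects random (within-lab) error.
for lab, (a, b) in results.items():
    systematic = ((a - consensus_a) + (b - consensus_b)) / 2
    random_part = ((a - consensus_a) - (b - consensus_b)) / 2
    print(f"{lab}: systematic={systematic:+.2f}, random={random_part:+.2f}")
```

If the printed systematic components fall into two well-separated groups, that is the two-cluster signature of bimodal data mentioned above.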
I always enjoy reading Lynne’s articles because they’re so down to earth and practical.
ASTM D02.94 Coordinating Subcommittee on Quality and Statistics
I agree. In the specific case of the data in the example, though, I chose to use the “known” value of the samples, which were made up to a specific Vitamin A content. Of course, in the world of statistics, nothing is really known. And in actuality, these samples were subject to drift in value depending on the date of analysis, which we tried to keep constant. It is correct that the “expected value” is not the best language to use because it will be confused with mathematical expectation. That, of course, was not my intent.
Lynne B. Hare
East Hanover, NJ
Case Study Confusing
Most statistical material presented in QP is quite good, but I found the case study article in the August issue (“Retrospective Analysis of a Designed Experiment,” p. 23) very confusing. How can Figure 4 (interactions) have any meaning if only eight runs were conducted?
Or were additional runs carried out—as implied by Table 3—to augment the original design in Table 2? If so, what were the average F-50 values? And it seems highly unlikely the stated results are “optimal.”
More careful proofreading should also have addressed a number of quality issues: for example, R is never defined (same as F-50?), “experiment” is frequently used instead of “runs,” etc.
These observations are relevant from a theoretical perspective, or for an experiment conducted in a perfect laboratory setting. The whole issue must be analyzed against the backdrop of the knowledge available at that time and place. The article is a retrospective analysis of an experiment conducted in 1995 in India, when the available methods were orthogonal arrays and empirical local knowledge. Interactions between the variables are always present, even in a single run; the issue is that with only eight runs we cannot fully optimize. Yes, the resolution of the design and the reliability of the outcome are debatable. The experiment never claimed to achieve an ideal design of experiments (DoE) solution, and the concluding part of the article makes that clear.
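The concern about interpreting interactions from only eight runs can be made concrete with a small sketch. This is a generic illustration, not the article's actual design: it builds a 2^(4-1) fractional factorial (eight runs, defining relation I = ABCD, factor names invented) and shows that the A×B interaction column is identical to the C×D column, so the two interaction effects cannot be separated.

```python
# Hedged sketch: aliasing in an eight-run fractional factorial.
# Factors A-D and the generator D = A*B*C are illustrative assumptions.
from itertools import product

runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c  # generator D = ABC, i.e., defining relation I = ABCD
    runs.append((a, b, c, d))

# The contrast column for the A*B interaction...
ab = [a * b for a, b, c, d in runs]
# ...is identical to the column for C*D, so the two are aliased:
cd = [c * d for a, b, c, d in runs]
assert ab == cd
```

This is why an interaction plot from such a design is ambiguous: the estimated "A×B effect" is really the sum of the A×B and C×D effects, and only subject-matter knowledge (or additional runs) can attribute it to one or the other.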