
INBOX

At what cost?

In the article "Raising the Bar" (July 2008, p. 22), A.V. Feigenbaum describes a business failure cost structure that seems to include prevention and appraisal costs as part of failure costs. But are prevention and appraisal costs indeed part of failure costs?

It is my understanding that costs associated with prevention and appraisal are part of the total cost in obtaining quality, but they are not part of failure costs.

In my mind, failure costs are to be minimized to the extent deemed suitable by the organization (potentially to zero). As a result of failure costs being minimized, appraisal costs can be reduced, but appraisal efforts will likely not go away. Also, as failure costs are minimized, the effectiveness of the prevention efforts is evident.

I’m likely misunderstanding a key point. Please help me understand.

Wallace Robinson
STRS Ohio
Columbus, OH

Author’s response

Since the origination of the quality cost discipline, appraisal costs and prevention costs have been designated as the quality cost improvement drivers of the business failure cost structure, and they have performed effectively in reducing its external failure and internal failure components.

As stated on p. 24 of the article, the two terms are defined as follows:

Appraisal costs are associated with key systems, processes and internal functions that provide controls to ensure complete customer satisfaction. Prevention costs are associated with initiatives for forestalling quality problems, and the disconnects and backward creep that can affect complete customer satisfaction.
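For readers untangling the terminology, here is a minimal sketch of the conventional prevention-appraisal-failure (PAF) accounting that underlies this exchange. It is an editorial illustration only; the dollar figures are hypothetical and are not drawn from the article.

```python
# Sketch of the conventional prevention-appraisal-failure (PAF) view of
# quality costs. All dollar figures are hypothetical, for illustration only.
costs = {
    "prevention": 40_000,         # training, planning, process design
    "appraisal": 60_000,          # inspection, testing, audits
    "internal_failure": 120_000,  # scrap and rework caught before shipment
    "external_failure": 180_000,  # warranty claims, returns, complaints
}

failure_cost = costs["internal_failure"] + costs["external_failure"]
total_cost_of_quality = sum(costs.values())

print(f"Failure cost:          ${failure_cost:,}")
print(f"Total cost of quality: ${total_cost_of_quality:,}")
# In this accounting, prevention and appraisal spending counts toward the
# total cost of quality but not toward the failure cost it is meant to reduce.
```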

Val Feigenbaum

Understanding the shift

Marie A. Gaudard and Phillip J. Ramsey’s explanation of the 1.5-sigma shift (Expert Answers, July 2008, p. 14) is like many I have heard before, and one I believe is as much Motorola folklore as actual quality history. I wasn’t inside the organization to confirm the story, but I have heard from third-party sources that the 1.5-sigma shift was more marketing than statistics.

At the time Motorola started its Six Sigma challenge, the target Cpk value in industry was typically 1.33 (4-sigma). Motorola, wanting to up the ante, made the decision to increase the target to 1.5 (4.5-sigma).

At the same time, looking for a way to reference the new effort, someone happened to notice that the number six, when on its side, was nearly a sigma in appearance, and together they made "a cool logo."

Hence, the 1.5-sigma shift was needed to explain why a 4.5-sigma value suddenly became a 6-sigma value (and with it came the distinction between short-term and long-term capability, which causes more confusion than anything else).

To me, it is all a moot point, because it is really just an operational definition question for capability—measure things consistently and focus on improvement. Having been in quality for a number of years, I was always led to believe capability was based on the assumption of statistical control. Hence, in a stable process, wouldn’t the short-term and long-term variations be equal, resulting in no shift at all?

I would love for someone who was within the Motorola organization at the time to confirm the true history of the Six Sigma term (and the corresponding 1.5-sigma shift). I am truly tired of teaching people why they need to look for a number other than six when looking for 3.4 defects per million opportunities.
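For anyone who wants to check the arithmetic behind the letter, here is a minimal sketch, assuming SciPy’s normal distribution and the conventional one-sided calculation, of how a 1.5-sigma shift turns a 6-sigma specification limit into roughly 3.4 defects per million opportunities.

```python
# Sketch of the 1.5-sigma shift arithmetic (editorial illustration).
# A Cpk of 1.5 puts the nearest spec limit 4.5 sigma from the mean (1.5 x 3);
# adding the assumed 1.5-sigma drift is what lets it be labeled "six sigma."
from scipy.stats import norm

spec_limit_sigma = 6.0  # nearest spec limit, in sigmas, from the target
shift = 1.5             # assumed long-term drift of the process mean

# One-sided defect probability once the mean has drifted toward the limit
p_defect = norm.sf(spec_limit_sigma - shift)  # tail area beyond 4.5 sigma
print(f"Shifted DPMO:   {p_defect * 1e6:.1f}")                   # ~3.4
print(f"Unshifted DPMO: {norm.sf(spec_limit_sigma) * 1e6:.4f}")  # ~0.001
```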

Craig Tickel
Global quality director, Stepan Co.
Northfield, IL

Quality education

Douglas C. Wood’s "Blurred Vision" article in the July 2008 edition of QP (p. 28) was excellent! The eight myths are something we ought to reread every day because we often hear them couched in our day-to-day business activities.

Often, I see examples of another myth: making each individual a better performer will automatically make the results better. W. Edwards Deming and others showed this to be wrong so many times, but these lessons are largely forgotten.

I think it’s up to us as quality professionals to teach others and speak up when we see examples of people trying to revive old management theories that are ineffective or counterproductive. There should not be any controversy; what’s right is right.

Another big concern is organizations that supposedly support quality yet still promote these old management theories. Apparently, these people were not exposed to quality education as noted in myth No. 8.

This type of activity is sure to give quality a bad name, because at many companies we now have a host of people who claim to be quality practitioners but know little to nothing about it. They might learn a little about Six Sigma or ISO 9001, and then be dubbed a "quality professional." What a sad state of affairs.

I hope we can finally come up with some way to address myth No. 8, because if we don’t, the quality profession might be driven out by a huge wave of incompetency.

Mike Harkins
Round Rock, TX

Destroying the illusion

As an avid reader of all things regarding continuous improvement, I am amazed at the number of individuals in that arena who do not display even a basic knowledge of Six Sigma. The author of "Six Sigma Illusions" (InBox, July 2008, p. 10) provides a great example of this lack of understanding.

Six Sigma is a problem-solving method; it is not a quality system. With all of the excellent articles that exist in your magazine and others, I find it disheartening when a self-proclaimed quality professional displays a basic lack of understanding of what Six Sigma is all about.

Yes, Joseph Juran, Phil Crosby and others all played a great role in introducing a reluctant U.S. industrial community to the benefits of having a quality system in place. Unfortunately, most U.S. manufacturers only provided lip service to the message these individuals were preaching. Hopefully, the same organizations have a decent quality system in place today. Inspection systems do not fix the problem. They only find the problem after the fact. Six Sigma offers the toolkit to solve the problem once and for all.

To those of you who have not had the opportunity to invest four weeks of your life (and months of painstaking work performing Six Sigma projects), I would suggest you seek out the chance to attend a professional Six Sigma training class and learn the method involved. The tools are not new, but the method is.

R.R. (Ron) Kunes
Master Black Belt
Clover, SC

Correction

The software offerings of two companies in June’s Software Directory were incomplete.

IBS America’s list of software categories is: auditing, calibration, corrective action, customer service, customized software development, data acquisition, document management, e-commerce, gauge repeatability and reproducibility, inspection, ISO 9000, ISO 14000, management, measurement, preventive action, problem solving, process documentation/mapping, QS-9000/TS 16949, quality assurance, Six Sigma, software, statistical process control, supplier quality assurance, training and other.

Minitab’s list of software categories is: capability studies, consulting, customer service, design of experiments, gauge repeatability and reproducibility, preventive action, problem solving, process documentation/mapping, quality function deployment, reliability, Six Sigma, software, statistical methods, statistical process control, Taguchi techniques, training and other.

