
QP MAILBAG

Pareto Head Hit the Mark

Mike Crossen’s March “Mr. Pareto Head” (p. 10) really hit the mark—enough so that when I posted it on a communication board, I was fired from my job within hours.

I don’t blame Crossen, QP or anyone at ASQ. This job action was the result of my being coerced to fictionalize and “extrapolate” quality data to support salable production and then having the gall to challenge the ethics of such behavior. This resulted, as you can imagine, in extreme stress and conflict in my organization and did little for product quality or customer satisfaction. For our upper management to ignore customer specifications and force staff to falsify data is unforgivable.

All in all, I feel vindicated though unemployed. It appears the truth really is painful to the ethically challenged. Keep up the good work.

ANONYMOUS FORMER QUALITY MANAGER

Editor’s Note: The author of this letter asked us to omit his or her name. While it is typically our policy not to print anonymous letters, we did in this case because of the unique circumstances.

Audit Advice Improved Interviews

I enjoyed reading “Improve Your Audit Interviews” (ASQ’s Quality Audit Division and J.P. Russell, March 2006, p. 20). I recently used the advice on one of my supplier audits. I gave the conversational-style audit a test run, and it went very well. The auditees seemed comfortable, and we were able to accomplish a lot in two days. I especially thought Table 1 (“Open-ended vs. Leading Questions”) was useful. Thank you.

MIKE SINARD
Sonopress
Weaverville, NC
mike.sinard@sonopress.com

Faster Turnaround Time Figures Don’t Add Up

I found the article “Faster Turnaround Time” (Angelo Pellicone and Maude Martocci, March 2006, p. 31) interesting. I’m pleased to see statistics applied to nonmanufacturing problems. However, I have a few comments.

The time it takes to do things has a natural boundary of near zero. Thus, when calculating time, the data should follow a positively skewed distribution. The fitted curves in Figure 5 and the lower control limits in Figures 6 and 7 show negative values. I don’t believe the total turnaround time (TAT) can be negative unless someone is traveling faster than the speed of light.

Quite often, a log transform can change a positively skewed distribution nearly into a bell shape. There are statistical equations relating the mean and standard deviation on the original scale to the mean, median, mode and standard deviation on the transformed scale. Some interesting calculations follow.

For the upper histogram in Figure 5, the mean is 226, and the standard deviation is 170. The calculated median is 181 minutes, and the calculated mode is 115 minutes. Note the highest bar is near 100. The calculated standard deviation on the natural log scale is 0.670.

Calculations for the lower histogram in Figure 5 yield a median of 71 minutes, a mode of 44 minutes and a standard deviation on the natural log scale of 0.696. For Figure 7, the calculated results are a median of 57 minutes, a mode of 38 minutes and a standard deviation on the natural log scale of 0.628.
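For readers who want to reproduce these numbers, here is a minimal sketch of the lognormal relationships involved (Python is my stand-in here; any statistics package will do). Given the mean m and standard deviation s on the original scale, the log-scale variance is σ² = ln(1 + s²/m²) and the log-scale mean is μ = ln(m) − σ²/2, from which the median is exp(μ) and the mode is exp(μ − σ²):

```python
import math

def lognormal_from_moments(mean, sd):
    """Back-calculate lognormal parameters from the original-scale
    mean and standard deviation, then return the implied median,
    mode and log-scale standard deviation."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)  # variance on the log scale
    mu = math.log(mean) - sigma2 / 2.0         # mean on the log scale
    return {
        "median": math.exp(mu),
        "mode": math.exp(mu - sigma2),
        "log_sd": math.sqrt(sigma2),
    }

# Upper histogram in Figure 5: mean 226 minutes, standard deviation 170
print(lognormal_from_moments(226, 170))
# median ~181, mode ~115, log-scale standard deviation ~0.670
```

Running the same function with a mean of 90 and a standard deviation of 71 for the lower histogram reproduces the median of 71, the mode of 44 and the log-scale standard deviation of 0.696 quoted above.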

Note these standard deviations are very similar. As I understand the text, the sample size for the upper histogram in Figure 5 is 195. I estimate the sample size for Figure 6, whose data also appear in the lower histogram in Figure 5, to be ~365.

Likewise, I estimate the sample size in Figure 7 to be ~65. Based on these sample sizes, the three standard deviations on the natural log scale are equivalent.
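One way to check that equivalence claim (my own sketch, not a calculation from the article) is to put chi-square confidence intervals around each log-scale standard deviation using the estimated sample sizes; the three intervals overlap comfortably:

```python
from math import sqrt
from scipy.stats import chi2

def sd_ci(s, n, conf=0.95):
    """Two-sided confidence interval for a standard deviation,
    assuming approximate normality on the log scale."""
    df = n - 1
    alpha = 1 - conf
    lower = s * sqrt(df / chi2.ppf(1 - alpha / 2, df))
    upper = s * sqrt(df / chi2.ppf(alpha / 2, df))
    return lower, upper

for s, n in [(0.670, 195), (0.696, 365), (0.628, 65)]:
    print(s, n, sd_ci(s, n))
# All three intervals overlap, consistent with equivalent
# underlying variability on the log scale.
```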

I conclude the underlying variability has not changed, but there has been a significant decrease in TATs. Since the mean is greatly influenced by the large TATs, the median might be a better way to show the improvement.

Likewise, I question why the modes are where they are in these three sets of data.

RUDY KITTLITZ
Retired statistician
Alpine, TX
rgke300@direcway.com

Black Belts Should Ask Statisticians To Review

I write this in the spirit of constructive criticism. I’ve just finished reading the article “Faster Turnaround Time” and have a few comments. I don’t want to detract from the obvious improvements touted in the article. However, I’m deeply concerned with the presentation of the data, specifically Figures 5, 6 and 7.

In Figure 5, two histograms are presented, each with two normal curves superimposed. Here’s the problem: There’s a false assumption of normality in the data. Unless we are capable of bending the laws of physics, we should expect no time values to be less than zero. Yet, according to the superimposed normal curve, about 9% of the population is less than zero. This is based on the current mean of 226 and sigma of 170, which is also misleading for the same reason.

There are similar circumstances for the second histogram, summarizing total turnaround time (TAT) data. In this case, according to the normal curve, about 10% of the TAT values fall below zero, again based on the given average and standard deviation of 90 and 71, respectively.
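Those percentages are easy to verify. As a quick sketch (assuming the reported means and sigmas), the below-zero fraction under each superimposed normal curve is:

```python
from scipy.stats import norm

# Fraction of a normal(mean, sd) population falling below zero
for mean, sd in [(226, 170), (90, 71)]:
    print(norm.cdf(0, loc=mean, scale=sd))  # ~0.092 and ~0.102
```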

These are important considerations when summarizing data into statistics such as average and standard deviation. If we’re concerned with the central tendency of the data, perhaps referencing the median or mode of the population would be better, especially in skewed distributions.

It is also not uncommon to transform the data to fit a normal distribution. Looking at the distribution in the second histogram, it appears the central tendency is closer to 40 than the average of 90.

Figures 6 and 7 have similar problems. Control limits traditionally have been used to define the bounds of a process’s normal or expected behavior and to identify special cause variation. They’re important because they single out only those situations that most deserve our resources for correction.

Figure 6 leads us to assume we can expect values between the upper and lower control limits of 260.3 and -79.47, respectively, but getting anything between 0 and -79.47 is impossible. So what are the natural bounds of the process? It’s a safe bet zero is the lower control limit and not -79.47.

Figure 7 has similar problems, with upper and lower control limits of 184.8 and -46.53, respectively. I suggest using an attribute chart for summarizing this type of data. The resulting control limits would be more meaningful.
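For readers who want to see where such limits come from, here is a minimal sketch of the standard individuals-chart calculation (Python shown for illustration; the estimator assumed is the usual average moving range divided by d2 = 1.128), along with the flooring at zero suggested above:

```python
import numpy as np

def individuals_limits(x):
    """Standard individuals-chart limits: mean +/- 3 * MRbar / 1.128,
    where MRbar is the average moving range of consecutive points."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))
    sigma_hat = mr_bar / 1.128  # d2 constant for moving ranges of 2
    center = x.mean()
    ucl = center + 3 * sigma_hat
    lcl = max(center - 3 * sigma_hat, 0.0)  # floor at the physical bound
    return center, ucl, lcl

# Simulated skewed turnaround times (minutes): the unfloored LCL
# from such data is typically negative, as in Figures 6 and 7.
rng = np.random.default_rng(1)
tat = rng.lognormal(mean=4.26, sigma=0.70, size=200)
print(individuals_limits(tat))
```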

This is not the first time I’ve seen data summarized this way. With the increased popularity of Six Sigma programs and statistical software, we are seeing more examples of this. One of my statistics professors likened distributing statistical software to the masses to giving your car keys to an infant.

Anyone can generate summary statistics and spiffy control charts. However, they need to be aware of what the data (and charts) are saying. They need to ask questions of the charts such as, “Do the control limits presented here look realistic?”

The statistics built into these programs are rife with assumptions. Unless the analyst is familiar with these assumptions, he or she can make wrong conclusions or present data in a misleading manner.

I encourage these analysts, even the Black Belts, to use statisticians, if only to review the data or reports, before committing vast resources or their reputations.

JIM HEIMBACH
Lonza Biologics
Portsmouth, NH
james.heimbach@lonza.com

Authors’ Response: I thank Rudy Kittlitz and Jim Heimbach for their observations.

When we do Six Sigma in our organization, we teach our Black Belts (BBs) not to show transformed data in a presentation. One of the first tests a BB does is for normality. If the data are not normal, then they are transformed for statistical analysis, which was the case for this project.

Like General Electric (GE), we have found we totally lose the audience with transformed data (the numbers mean nothing), so we have the BB work with transformed data but present the nontransformed data for process capability. It is a visual representation staff can relate to.

The defects per million opportunities measure is not determined from process capability. The BB takes the within (short-term) parts per million and uses a Six Sigma conversion table, which is also the GE method.
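For readers without the table at hand, the conversion is equivalent to the formula below, assuming the conventional 1.5-sigma shift built into GE-style Six Sigma tables:

```python
from scipy.stats import norm

def sigma_level(dpmo):
    """Convert defects per million opportunities to a sigma level
    using the conventional 1.5-sigma long-term shift."""
    return norm.isf(dpmo / 1_000_000) + 1.5

print(sigma_level(66_807))  # ~3.0, the classic table entry
```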

To determine whether there has been a significant improvement in the process, the team uses a two-sample t-test on transformed data. With many of our projects, due to the nature of the data, we are unable to transform the data and must resort to Mood’s median test.
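As an illustration of that sequence, here is a minimal sketch with simulated placeholder data (Python shown as a stand-in for Minitab, which our staff actually use): a normality check, a log transform with a two-sample t-test, and Mood’s median test as the fallback:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
before = rng.lognormal(mean=5.2, sigma=0.67, size=195)  # baseline TATs (min)
after = rng.lognormal(mean=4.0, sigma=0.63, size=65)    # improved TATs (min)

# 1. Normality check on the raw data (skewed data should fail this)
w, p_raw = stats.shapiro(before)
print("raw-data normality p-value:", p_raw)

# 2. Log-transform, re-check, then compare means with a two-sample t-test
log_before, log_after = np.log(before), np.log(after)
w, p_log = stats.shapiro(log_before)
print("transformed normality p-value:", p_log)
print(stats.ttest_ind(log_before, log_after, equal_var=False))

# 3. Fallback when no transform normalizes the data: Mood's median test
stat, p, grand_median, table = stats.median_test(before, after)
print("Mood's median test p-value:", p)
```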

The assumptions made by Kittlitz are incorrect and—coming from a statistician without any supporting data—inappropriate.

Control charts generated by Minitab take all the values and use three standard deviations to determine the upper and lower control limits. In this process, because there is no lower control limit, the charts should have been generated without one. Again, our staff is not made up of statisticians, and Minitab has its limitations.

We are teaching staff to use control charts to look at the variation in the process: Do they see a trend (seven points in one direction)? Do they see outliers (special cause variation)? If so, can they speak to it? By looking at the chart, do they see less variation? We stress a process can be in control but not meet the customers’ expectations.
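As a minimal sketch of the first of those checks, the trend rule described above (seven consecutive points moving in the same direction) can be written in a few lines; Python is used here purely for illustration:

```python
import numpy as np

def seven_point_trend(x):
    """Flag a run of seven or more consecutive points moving in the
    same direction, i.e., six consecutive same-sign differences."""
    signs = np.sign(np.diff(np.asarray(x, dtype=float)))
    run = 1
    for i in range(1, len(signs)):
        run = run + 1 if signs[i] == signs[i - 1] != 0 else 1
        if run >= 6:  # six rising (or falling) steps = seven points
            return True
    return False

print(seven_point_trend([10, 12, 15, 18, 20, 24, 30]))  # True
print(seven_point_trend([10, 12, 9, 14, 11, 13, 10]))   # False
```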

The role of Six Sigma in our organization is to teach staff how to use metrics to assess operational performance improvement, not to create statisticians.

NANCY B. RIEBLING
Colleague of authors
North Shore-LIJ Health System
Lake Success, NY
riebling@nshs.edu

New Mexico 9000 Is The Benchmark

I was very pleased to see Ken Manicki’s article (“State Program Offers Hand to Small Business,” March 2006, p. 37) about the New Mexico 9000 program.

The company I work for, Sennheiser New Mexico, participated in New Mexico 9000 in 2001. The program helped us become ISO 9001 certified in 16 months—from the start of the classes to final registration—for $500. It also helped us pass our registration audit with zero nonconformances and zero observations. Since certification, Sennheiser New Mexico has grown and added more than 80 employees.

Ken Manicki was far too modest to mention the exceptionally good instruction given by Lori Hunt of Honeywell FMT in the classes. She is a member of the U.S. technical advisory group to ISO technical committee 176, the body that writes the ISO 9000 family of standards. There could be no better resource to teach this class.

Manicki also does not mention his quiet encouragement to all the participating companies. It is his dedication, hard work and exceptional resources that have made this program work so well.

New Mexico 9000 is the benchmark for any state that wants to encourage ISO 9001 certification of local organizations.

DONALD A. HANSON
Sennheiser New Mexico
Albuquerque, NM
dhanson@sennheisernm.com

New Mexico 9000—Are Other States Doing This?

New Mexico 9000 is certainly a great program to assist small businesses in achieving ISO 9000 certification.

I was wondering whether this is happening in other states in which small business is becoming more prevalent. I recently was hired as a quality engineer by a small business to implement a quality system, which the company currently does not have. I bring many years of experience with me from a large corporate structure where cost was not an issue and many resources were available to achieve ISO 9000 certification.

In my current role, I wear several hats, some of which are supplier quality, quality system management and quality control inspection. Resources are very limited, and the huge cost of ISO 9000 certification definitely puts it out of reach for our organization.

SCOTT FAVREAU
Innovative Surveillance Technology
Coral Springs, FL
scott.favreau@teamist.net

Editor’s Note: Readers who know of similar programs in other states should e-mail editor@asq.org.

Lean Important, Systematic Innovation the Future

This is in response to a letter from Dennis Sowards published in the March “QP Mailbag” concerning an article written by us (“After Six Sigma, What’s Next?” Søren Bisgaard and Jeroen De Mast, January 2006, p. 30).

We thank Sowards for reminding us lean is important. We could not agree more. Indeed, the most significant current quality issue is the integration of lean and Six Sigma. It is an unfortunate oversight that lean was left out of Six Sigma’s original conception, as it is counterproductive for companies to run parallel lean and Six Sigma programs that compete for resources and attention. Back in the late 1980s, lean (at that time called just-in-time) was clearly part of total quality management.

In our article, we were so focused on what is coming next that we may not have devoted sufficient space to current issues. Our goal was to describe how principles, methods and techniques may appear to come and go and how the general framework is evolving. Economic circumstances change, so the role of quality needs to change, too. Our main point was that systematic innovation is a better umbrella concept for this new role than continuous quality improvement or, for that matter, lean.

Our concern is with the future. In particular, we should be concerned about forces that claim lean and Six Sigma are no longer sources of meaningful competitive advantage. That is obviously misguided. To protect and defend our current standard of living, we need systematic innovation—breakthrough as well as incremental—including lean and Six Sigma.

SØREN BISGAARD
bisgaard@som.umass.edu
JEROEN DE MAST
jdemast@science.uva.nl

