
Harley Didn't Grow Solely Because of Quality Crusade

The article "Is Quality Free or Profitable?" (Jon R. Miller and John S. Morris, January 2000, p. 50) failed to account for a significant event. The authors implied that the Harley-Davidson Motor Co. grew its market share from 20% in 1983 to 46% in 1988 solely by embarking on a quality crusade. They made this statement based on a passage from a book they referenced in the article.

In truth, in 1983 the U.S. International Trade Commission (USITC) found, via a 2-to-1 vote, that Japanese motorcycle imports were damaging the U.S. motorcycle industry. This came in response to a petition filed by Harley-Davidson in 1982. The USITC recommended that tariffs on imported motorcycles be increased by 125% over five years. Legislation enacting the tariffs was later passed, making them a significant factor in Harley-Davidson's ability to increase market share.

JOE LOKAY, CQE
Bokchito, OK 
joe.lokay@abbott.com
 



Author's Response
In his letter, Joe Lokay suggests that a protectionist act by the U.S. Congress, which increased tariffs on Japanese motorcycles, was a significant factor in the turnaround of Harley-Davidson Motor Co. in the 1980s. He says that we overlooked this factor by saying that the turnaround was due to quality improvement.

First, we don't want to imply that quality improvement was the only reason for the turnaround at Harley-Davidson. Other factors may have been causal, but the tariffs on motorcycles with engines larger than 700 cc were largely circumvented by the Japanese exporting 699 cc motorcycles. Harley-Davidson even petitioned to have the sanctions lifted a year before they were due to expire.

We suggest that the proper view is still that quality improvement was the primary reason for the turnaround at Harley-Davidson.

JACK MORRIS
Moscow, ID 
jmorris@uidaho.edu
 
JON MILLER
Moscow, ID 
jrmecon@uidaho.edu
 



Mr. Pareto Head Is a Hit
Wow! "Mr. Pareto Head" (Mike Crossen, March 2000, p. 16) is an awesome idea! I loved reading it; it's the first article that I cut out and placed on my office door. The cartoon hits home and breaks up the monotonous tone of the magazine.

DOREEN PALMERT
Indianapolis, IN 
dpalmert@divys.com
 

I love "Mr. Pareto Head." It sounds like the boss ought to take the Friday classes. I can't wait for the next one!

SANDY BOWMAN 
sbowman@edutech.org 

I just received my March issue of Quality Progress, and I came across "Mr. Pareto Head." I'm still laughing. It's absolutely great!

JOHN DEW
Tuscaloosa, AL 
jdew@aalan.ua.edu
 

It was very refreshing to laugh out loud while reading Quality Progress. I am looking forward to more from Mike Crossen and his cast of characters.

We put a copy of "Mr. Pareto Head" on our Company Quality Update Board to add some humor to our otherwise tense ISO 9000 surveillance audit month.

LARA LORE
Dearborn, MI 
laral@campbellco.com
 

"Mr. Pareto Head" is great! I can't wait until next month. Keep it coming.

BONNIE CONNELLY
North Little Rock, AR 
bonnie.connelly@deluxevideo.com
 

I just wanted to drop a quick line to say thanks for "Mr. Pareto Head." A little humor never hurt anyone.

DONNA NEWELL
Rochester, NH 
dnewell@primetanning.com
 

Bravo! "Mr. Pareto Head" is a "Dilbert" for the quality world. It's a refreshing addition to the excellent information presented in the magazine.

ADRIENNE C. ALTON-GUST
Lake Forest, IL 
aalton@packagingcorp.com
 

I think the comic strip is a great idea. It shows the many obstacles quality professionals have to face each day.

JOHN L. LORETTO
Angola, NY 
jloretto@flexovitusa.com
 

"Mr. Pareto Head" is a refreshing idea. Is he going to be at the ISO 9000 conference in Dallas? It would be a hoot to have something like that at those events.

Breaking paradigms can be fun!

LARRY HILLYER
Bossier City, LA 
hillyer@jlcompanies.com
 

I think "Mr. Pareto Head" is excellent! I have lived this scenario many times in the past, so it immediately struck a nerve. Please keep the comic strips coming.

KEVIN R. CHANDLER
Orlando, FL 
kchandler@leesburghregional.com
 



Everyone Should Be Part of the Quality Commitment

It was a pleasure to read "Quality for the Long Haul at Gerber" (Mark R. Hagen, February 2000, p. 28). The article showed that being able to provide what the customer is owed does not happen overnight through the application of some tool or management process. The vision of what the customer is owed led to the integration of quality in all aspects of Gerber's enterprise. Gerber recognized that everyone should be a part of the commitment to provide a safe and effective product for its customers.

I was particularly interested in the expansion of quality responsibilities to the front-line operators. When trained people know what is required and what their individual responsibilities are, it is no longer necessary to have an army of inspectors.

This is achieved by "building trust through commitment." As with other companies whose people behave like the Gerber employees ("It's my job to protect the Gerber baby"), Gerber kept ahead of the quality curve. Because regulations follow good practices, it was not necessary to engage in frantic efforts to comply as existing regulations changed or new ones came along.

I surmise that Dan Gerber has projected his leadership on all of his managers, supervisors and employees. Leaders stress relationships. This leads to the recognition that everyone wants to do a good job and does so when provided with the proper tools and training. Necessary changes and improvements are readily introduced when needed. All employees come to understand that the customer is owed a safe and effective product.

The story describes the implementation of a long-term vision that resulted in delivering the quality that is owed to customers. The reward has been growth and leadership in Gerber's field.

MORT LEVIN
Natick, MA 
mlevin3585@aol.com 



Where's ISO's Annual Audit Frequency?

In James Bolek's article, "You Just Print Checks, Right?" (March 2000, p. 101), in section 4.17, he states, " ... auditors must fully audit all 20 elements, at least annually." I read and reread section 4.17 of the 1994 ISO standard and found no mention of the annual requirement. The closest the standard comes to specifying a schedule is to say, " ... audits shall be scheduled on the basis of the status and importance of the activity." Where does the annual audit frequency requirement come from?

ART GEIST
Norcross, GA


Author's Response

The audit requirement actually comes from our registrar and our own need for comfort. The registrar audits different elements on an annual basis. We hired a consultant to come in and perform two internal audits a year. We did this because:

* We believe a truly living system needs constant monitoring by a third party to maintain its effectiveness.

* Our newness to the whole standard made us vulnerable to regression. The constant monitoring of conformance ensures that we will continue to offer a quality product.

The uniqueness of our business requires that we use all of the elements, and each element feeds off the others. It only makes sense for us to audit each one that often.

JAMES BOLEK
Wyoming, MI 
jimb@dominionsystems.com
 



Convert Soft Data Into Hard Bottom-Line Results

The article "Turning CFOs Into Quality Champions" by John Goodman, Pat O'Brien and Eden Segal (March 2000, p. 47) provides an excellent blueprint for converting soft customer satisfaction data into hard bottom-line results. The authors' approach is rigorous, logical, simple and intuitively appealing.

Congratulations to the authors and to Quality Progress for a difficult job well-done.

TOM PYZDEK
Tucson, AZ 
tom@pyzdek.com 



TQM Study Did Not Meet Its Objectives

As I was reading the article "TQM's Human Resource Component" (Christopher M. Lowery, Nicholas A. Beadles II and James B. Carpenter, February 2000, p. 55), I couldn't stop asking myself, "What did I learn from this study? Did it benefit me? Did it enlarge my knowledge?" The answers are, "Nothing. No. No."

I understand that Quality Progress is no scientific journal, and I shouldn't expect to find all the science behind an article just by reading it. However, I expect the conclusions of an article to be confident and meaningful and to provide useful information from which all quality practitioners can benefit.

My opinion is that the study didn't meet its objectives. One objective was to determine how many manufacturers implemented total quality management (TQM). Because only 26% of the questionnaires were returned, the authors couldn't assume anything about the other 74%.

By the way, questionnaire-based approaches such as this one allow self-selection into the sample and rely on respondents' perceptions without critical evaluation. The authors cannot be sure that the 35 firms that say they're using TQM are actually doing so.

Another objective of the article was to assess the organizational outcomes attributed to TQM implementation. The survey tried to examine the association between performance and the reported use of various practices. The study didn't attempt to determine when the practices were initiated or to examine if performance changes were associated with the actual implementation.

The failure to focus specifically on performance changes associated with the actual changes in management practices greatly increases the possibility of confounding factors. Further, the study provides weak evidence concerning causality.

It is important to note that no observational study can prove a causal relationship. It is difficult in survey-based research to address the large variation in interpretation of terminology in different companies, and it is frequently unclear how respondents actually operationalize the questions. As a result, most questionnaire-based research is fairly superficial.

Leaving the critique of the scientific method aside, I come back to look for the article's benefits. The results of this study cannot be generalized. First, the finding that performance improves as a result of TQM implementation is based solely on respondent perception. Second, even if the finding is true, it cannot necessarily be generalized to a prescription that the companies that implement TQM will also improve performance. I didn't find the article meaningful or useful.

CATALIN RISTEA
Vancouver 
ristea@interchange.ubc.ca
 



Author's Response

We appreciate the concerns of the letter writer, but note that the criticisms of our methodology would apply to virtually all survey research, so readers must bear these limitations in mind.

For instance, return rates of around 30% are considered satisfactory, so our rate of 26% would be acceptable. Also, we stated in the article that self-selection is a potential limitation. Most educated readers are aware that self-reported data are subject to misrepresentation by respondents, and the respondents may possibly misinterpret some of the questions.

Most readers should also be aware that causal relationships among variables cannot be proven by any research methodology, including survey research. The statement in the letter that "most questionnaire-based research is fairly superficial" indicates an obvious bias against this methodology.

However, many people who are trained in the techniques of basic and applied research would disagree with the letter writer's assessment of survey-based research. This is often the only methodology that can be used to study organizational issues.

So, should we simply ignore these issues, or should we recognize the limitations of the methodology and then use the results to make a more informed decision?

Lastly, putting aside the criticism of the methodology, the letter writer misconstrued our conclusions. It should be clear to readers of the article that the primary conclusion of the study did not concern the effects of TQM on performance, but rather the need to modify human resource practices if TQM is adopted.

CHRISTOPHER M. LOWERY
Tuscaloosa, AL 
clowery@mail.gcsu.edu
 
NICHOLAS A. BEADLES II
Tuscaloosa, AL 
nbeadles@mail.gcsu.edu
 



Don't Confuse Kindness With Poor Management

The article "Killing Quality With Kindness" by Susan Hake Surplus (February 2000, p. 60) was disappointing. Surplus is confusing kindness with ineffective management as the killer of quality. All of her examples cite instances where management was all too eager to relax the rules for the sake of smoothing over hurt feelings.

The sad part of this article is that it perpetuates the myth that the business world has been trying to overcome since the 1950s: You can be an effective and/or successful manager, or you can be a kind and/or compassionate manager, but you cannot be both. This is a disservice not only to the managers who may be reading this article, but also to those employees working under managers struggling to do their best in an atmosphere of rule bending and value compromise.

All of the behaviors Surplus discusses as "well-intentioned acts of managerial kindness ... to placate, soothe and bandage the one wound over which they have most control" are not really acts of kindness. The acts are known in other circles as enabling reactions that keep people locked in unproductive and ineffective behaviors. This is not an effective, honest, kind or healthy way to manage people.

It's not kindness that's killing quality. It's managers who are too eager to bend and relax rules to soothe and placate rather than honestly and compassionately hold their charges accountable for doing the job right the first time. Kindness and being an effective manager are not mutually exclusive, and I believe this is the message we should be promoting in our effort to rise as world class companies.

RODGER LOW
Northridge, CA 
rlow@harman.com
 



Employees Should Be Held Accountable

The article "Killing Quality With Kindness" (Susan Hake Surplus, February 2000, p. 60) makes some good points about holding people accountable for performance and not backing down on performance standards. However, I would like to make some comments about the statement, "People will do best at what's measured and at tasks for which they are held accountable."

1. Unless performance measures directly reflect the activities of individuals or their work group, people will not respond well to what's being measured. People are interested in what their company does, but they are much more interested in their own performance and achievements.

2. People can only be held accountable for what they can control or change. Trying to hold managers or front-line employees accountable for matters beyond their control will not work. Accountability cannot be mandated; it must be accepted.

3. There is no such thing as joint accountability. If more than one person is declared responsible for a performance measure, no one will feel accountable. Specific feedback about work group and individual performance is more effective in improving performance than feedback about aggregate groups, such as divisions or companies.

4. If managers or individuals are to be held accountable for their performance, measures that accurately reflect their responsibilities and customers' requirements must be established. To produce these measures, the measurement system must be capable of determining what business function or operation is responsible for each reported quality problem.

Upper level performance measures are fine for keeping score, but they are inadequate for increasing the score because they don't identify the sources of deviations in performance. This can only be accomplished by lower level measures that go all the way down to front-line activities.

Accountability for performance cannot exist without valid measures of a business function's performance. Consequently, in most companies relatively few managers are truly accountable because the line where meaningful measurement begins is usually relatively high on a company's organization chart.

Making select groups accountable for performance will create morale and other performance problems. Those being held accountable will feel persecuted, and those not being held accountable will have little reason to achieve high levels of performance. Everyone's performance should be measured because everyone should be held accountable for his or her performance.

WILL KAYDOS
Charlotte, NC 
willk@decisiongroup.com
 



Complete Randomization Yields Independent Errors

In his article "Randomization Is the Key to Experimental Design Structure," Richard F. Gunst (February 2000, p. 72) makes the important point that "without knowledge of how an experiment is conducted, one cannot know for certain how to analyze the resulting data properly." It is important for authors to give a more complete discussion than is customary regarding how the data were collected.

Complete randomization is important because it achieves independent errors. That independence allows for a simple analysis. If data are "Observation = Model + Error," then a completely randomized design is achieved by using a process that makes the errors independent. This gives an operational definition of a completely randomized design. An operational definition is useful.

The conventional definition given by Gunst that, "Completely randomized designs are designs in which the assignment of factor-level combinations to a test-run sequence or to experimental units (physical entities on which measurements are taken) is made by a random process where all assignments are equally likely" is both more complicated and inadequate.

Gunst's definition is adequate for much agricultural experimentation where experimental design was originally developed; however, it is inadequate for many industrial and scientific experiments because it does not address essential elements of such experiments.

1. It does not address how industrial and scientific experiments, especially those using equipment such as an extruding machine, are conducted. In industrial and scientific experiments, factors are usually not reset when successive runs have the same factor setting. Resetting each factor on each run is required, in addition to a random assignment of factor-level combinations, because not resetting causes correlated errors among adjacent runs with the same factor levels.

2. It does not address the inferences desired from the experiment. I will use Gunst's experiment shown in Tables 1 and 2 to illustrate the problem. Table 1 shows an array of 24 cutoff times from three lawnmowers, two manufacturers, two speeds and two repeats. Gunst states: "The lawnmower cutoff time experiment is a completely randomized design under the following conditions. Suppose the lawnmowers represent three different models: push mowers, self-propelled mowers and small tractor mowers. Suppose further that all 24 combinations of lawnmowers (three from each manufacturer) and speeds, including repeats, were tested in a random sequence. Under these conditions the study would be a complete factorial experiment conducted in a completely randomized design."

Gunst's statement is correct if the inference about cutoff speed only applies to the six lawnmowers in the test; for this limited inference, a completely randomized design is achieved. For the more common case when we wish to make inferences about all the lawnmowers of the three types produced by the two manufacturers, a completely randomized design is not achieved "because of the known, often sizeable, variability of different lawnmowers of the same model."

The four measurements made on each lawnmower are correlated because inferences about all lawnmowers of the three types are desired. An experimenter cannot get 12 degrees of freedom for the appropriate error for testing lawnmower differences when there are only six lawnmowers in the experiment. The "known, often sizeable, variability of different lawnmowers of the same model" is not in the error component, and the analysis shown in Table 2 is incorrect when the broader inference is desired. Alternatively, this is a split-plot experiment, and the lawnmowers are whole plots.

This discussion also shows that the independent errors in the operational definition are independent only as far as the desired inference is concerned; however, they are not necessarily physically independent. For example, the process defined by the classical definition will remove an unknown fertility gradient in a field from the inference space for a field trial even though there will still be unknown correlations in the errors because of the fertility gradient.

It is easy to get the correct analysis when a broader inference is desired. Simply average the four measurements taken on each lawnmower, and then analyze the cutoff times for the six lawnmowers. This trick gives the correct analysis for the whole-plot effects in any balanced split-plot experiment. The analysis gives, apart from a divisor of four, the lawnmower, manufacturers and L x M lines from Table 2 with different F and p values. The L x M line is the best available error estimate. The F values increase by the factor 85/50, and both p values are > 0.01 because the correct test for the wider inference is significant at the 95% level, but not at the 99% level. These p values are different, by more than an order of magnitude, from the p values shown in Table 2.
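The averaging shortcut described above can be sketched numerically. The figures below are invented for illustration (they are not the data from Gunst's Table 1, which is not reproduced here); the sketch simply shows how the six whole-plot averages, one per lawnmower, yield the lawnmower, manufacturer and L x M sums of squares, with the L x M mean square serving as the whole-plot error term:

```python
# Hypothetical averaged cutoff times (minutes), one value per lawnmower:
# rows = models (push, self-propelled, tractor), cols = manufacturers (A, B).
# Each entry stands for the mean of that mower's four measurements.
avg = [
    [4.0, 5.0],   # push
    [6.0, 7.5],   # self-propelled
    [8.0, 8.5],   # tractor
]

rows, cols = 3, 2
grand = sum(sum(r) for r in avg) / (rows * cols)
row_means = [sum(r) / cols for r in avg]
col_means = [sum(avg[i][j] for i in range(rows)) / rows for j in range(cols)]

# Sums of squares for the 3 x 2 table of whole-plot averages
ss_model = cols * sum((m - grand) ** 2 for m in row_means)
ss_mfr = rows * sum((m - grand) ** 2 for m in col_means)
ss_total = sum((avg[i][j] - grand) ** 2
               for i in range(rows) for j in range(cols))
ss_lxm = ss_total - ss_model - ss_mfr  # interaction = whole-plot error (2 df)

# Whole-plot F ratios: the L x M mean square is the error term,
# not the within-mower residual from the full 24-observation table
f_model = (ss_model / 2) / (ss_lxm / 2)
f_mfr = (ss_mfr / 1) / (ss_lxm / 2)
```

With real data, these F ratios reproduce the whole-plot lines of a full balanced split-plot analysis, which is the point of the shortcut: the repeat-to-repeat variation within a mower never enters the test of mower or manufacturer differences.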

I have concentrated on where Gunst and I differ because, except for the speed effect and the interactions with speed, the analysis given by Gunst is correct. Randomization is an important part of the foundation for many of the tools that we use in quality and statistics. Gunst's article gives a worthwhile discussion of this important topic.

JAMES M. LUCAS
Wilmington, DE 
jamesmlucas@worldnet.att.net
 



Author's Response

James Lucas is correct in asserting that the examples presented in the article do not pertain to extrusion processes--they were not intended to do so. I selected one example and chose to present three of many possible illustrations of how the data could be analyzed based on the stated assumptions.

Lucas' extrusion example imposes different conditions on the experimental setting, and consequently different assumptions are needed to ensure the desired features of a completely randomized design. I welcome Lucas' discussion of these assumptions because it reinforces the primary theme of the paper: One cannot tell solely from an examination of a data display how to properly analyze a data set.

Lucas confuses randomization and independence. Randomization is a feature of the design. Independence is a feature of the data. No matter how one defines a completely randomized design, the resulting data can have correlated errors--for example, because of unknown, uncontrollable factors that affect the response. Randomization by itself cannot guarantee independence.

Lucas' criticism following his two points arises because he chooses to change and mix the assumptions stated in the illustrations. He admits that there are conditions under which each is appropriate--precisely the purpose of the article. He then quotes an assumption that is clearly stated for the second and third illustrations in his criticism of the first illustration. He cites assumptions for which a split-plot analysis is the correct analysis. Hence, he confirms the third analysis, the one that is appropriate for his stated assumptions.

Lucas' last paragraph is incomprehensible. After arguing that the correct analysis is that of a split-plot, he returns to Table 2, which is clearly not correct for his assumptions. The L x M interaction line may be the "best available error estimate" for his shortcut, but it is not the correct error estimate for his assumptions.

Either the appropriate analysis is that of a split-plot, as he asserts, in which case Table 4 provides the correct analysis, or he is again changing the assumptions in some unstated way. In this paragraph Lucas appears to be trading a correct analysis for the simplicity of a shortcut analysis, which is precisely one of the motivating factors in writing this article. See the section "Simplicity is not paramount."

I concur with Lucas that there are many ways to analyze the data in Table 1, depending on the assumptions that are appropriate. I also concur with Lucas that the conditions for completely randomized designs can be expanded to cover other experimental settings that are not the focal point of this article.

RICHARD F. GUNST
Dallas, TX 
rgunst@mail.smu.edu
 

We welcome your letters. Send them to EDITOR, ASQ/QUALITY PROGRESS, 611 E. WISCONSIN AVE., PO BOX 3005, MILWAUKEE, WI 53201-3005; or e-mail them to editor@asq.org.  Please include address, daytime phone number and e-mail address. Whenever possible, the e-mail addresses will be included with published letters. Due to space restrictions, Quality Progress will publish a selection of letters in the magazine. All letters will be published on QP Forum, or you can post your comment on QP Forum directly at www.asqnet.org. We reserve the right to edit letters for space and clarity.

