
Hans Bajaria Will Be Greatly Missed

A dear friend of the quality profession passed away in September. I met Hans Bajaria as a young engineer, and I immediately recognized he was special. Our relationship began as a student-teacher relationship and eventually grew into a genuine friendship.

When his wife called to tell me about the sudden loss, I was shocked. I also felt cheated. You see, whenever I could not research a solution or find an adequate explanation in a book, there was always one man I could turn to for the answer. His insight was always crystal clear and amazingly accurate.

Over the years, his answers consistently provided the best and most practical solutions. I eventually grew to accept his advice without hesitation. If Hans said it, it must be true. How could it be any other way?

This is an unusual feeling for a quality professional; we tend to be opinionated and like to challenge everything. I would always try to include Hans in my assignments because I knew that with him on the team, we would surely succeed. My practical knowledge of quality and reliability grew exponentially during our assignments together.

In addition to all of his qualities and awards, Bajaria was a great communicator. He could explain abstract and challenging theories with a unique brand of subtle humor and common sense. He would take time out of his busy schedule to come and speak to my graduate students. In fact, he would always find time for anyone seeking his advice. He could stimulate your mind, awaken your conscience and put a smile on your face.

Goodbye, my friend. I will miss you.

PAUL PALADY
Wayne State University
Detroit, MI 
paul.palady@gm.com
 

Is Performance Based On Random Chance?

I found the juxtaposition of the articles on Hillerich & Bradsby (March Laree Jacques, "Big League Quality," p. 27) and player game percentage (Jay M. Bennett, "The Game of Statistics," p. 43) in the August 2001 issue thought-provoking.

As a practicing statistician with a modest interest in baseball statistics, I was intrigued by Bennett's player game percentage calculation based on the player's effect on his team's odds of winning the game.
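For readers who want to experiment with the idea, here is a minimal sketch of a win-probability-swing calculation in Python. The win_prob model below is invented for illustration; it is not Bennett's actual PGP formula or his win-probability tables.

    import math

    def win_prob(run_diff, innings_left):
        """Toy win-probability model (illustrative only): a team's chance
        of winning grows with its lead, and a given lead matters more
        when fewer innings remain."""
        return 1 / (1 + math.exp(-run_diff / (0.5 + 0.3 * innings_left)))

    # Credit the batter with the swing in his team's win probability.
    before = win_prob(run_diff=0, innings_left=3)  # tie game, 3 innings left
    after = win_prob(run_diff=2, innings_left=3)   # his homer puts the team up 2
    print(f"win-probability added: {after - before:+.3f}")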

But in reading March Laree Jacques' article describing H&B's decision to follow Deming's precepts and eliminate employee evaluations, I was reminded how Deming used to say that much of the difference in performance between employees is due to random variation beyond their control.

The sportswriter and historian Leonard Koppett interpreted the (in)famous Sports Illustrated cover curse--a player whose picture appears on the cover of Sports Illustrated will perform poorly in the following weeks--as simply a case of regression to the mean. A player's picture would appear on the cover as a result of a random streak of outstanding performance, and after the cover appeared, the player's performance would return to something like his or her normal average, making it look as though the player had declined.
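Koppett's explanation is easy to demonstrate with a simulation (the numbers below are made up for illustration): every player has identical .270 ability, yet the players selected for a hot streak collectively "decline" afterward.

    import random

    random.seed(1)
    TALENT = 0.270                       # identical true ability for everyone
    hot, next_period = [], []
    for _ in range(500):
        first = sum(random.random() < TALENT for _ in range(100)) / 100
        second = sum(random.random() < TALENT for _ in range(100)) / 100
        if first >= 0.330:               # a "cover-worthy" streak
            hot.append(first)
            next_period.append(second)

    print(f"streak average:      {sum(hot) / len(hot):.3f}")
    print(f"next-period average: {sum(next_period) / len(next_period):.3f}")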

The question then arises: How much of the performance of a most valuable player (MVP)--both in sports and industry--is due to random chance and external influences, and how much is intrinsic to the player or employee?

PAUL KUCKEIN
Quality Metrics Program Manager
Sun Microsystems Inc.
Menlo Park, CA
paul.kuckein@sun.com
 

Statistical Measurements Subject to Argument

I read Jay M. Bennett's analysis of the MVP award during the 2000 World Series ("The Game of Statistics," August 2001, p. 43) with delight and skepticism. Calculated statistical measurements derived from game statistics are subject to friendly argument, and Bennett's player game percentage (PGP) is no exception.

Mariano Rivera is by definition a "closer." He appears only in games where the Yankees are leading and there is a chance for a save. So if Derek Jeter, who plays all nine innings of most games, had not hit .409, with two homers, two doubles and a triple, ultimately scoring six runs, would Rivera have even had the chance to get into the game?

Under Bennett's system, Jeter is statistically punished for every out he makes, in a sport where a player who gets three hits in 10 is a star, and Rivera is statistically rewarded for working half an inning in which he performed poorly, just because he didn't lose the game.

The victory in game one required Jose Vizcaino to reach first base safely without forcing out one of the other base runners. To do this, he would have had to get a hit, walk, get hit by a pitch or have one of the Mets commit an error. In 2000, Vizcaino's on-base percentage was .319, meaning he reached base safely 32% of the time. How, then, can the "chance of victory" be double that?

RICHARD A. MINOR
Greenwood, IN
 

Weak Signals May Be Vanguard of Innovation

There is a fundamental flaw with "how a disciplined metrics approach to quality of performance can replace subjective judgment with logic and order," whether it is baseball or business (Jay M. Bennett, "The Game of Statistics," August 2001, p. 43). The underlying assumption is that performance value in baseball and, by implication, in any system is quantifiable, orderly, linear and, therefore, objectively measurable.

This thinking dismisses the importance of the subjective and nonlinear nature of groups and systems: the very things sportswriters, players and fans know are the essence of a great team or MVP and what must also be understood in business.

How do you measure the value of the impassioned roar that ignites Yankee Stadium when a fan favorite blasts a home run--regardless of whether his team has a lead or not? Bennett's approach says this is less contributory than a relief pitcher's entering a game with a 6-2 lead and ending it with a 6-5 victory. Tell that to the fans, players and coaches still holding their collective breath.

This is the danger of overreliance on reductionism, linear thinking and objectivity. Accountants' spreadsheets and Wall Street numbers don't reveal the world's best-performing companies. Six Sigma programs or Baldrige Award achievements don't tell which enterprises create the best products.

Yes, we need to quantitatively measure what we can, but innovative enterprises also focus on the qualitative and the subjective. Their leaders know they need to feel the difference in their organizations--creating a climate conducive to breakthrough is at least as important as lowering scrap rates. They also know weak signals, dismissed as irrelevant or ignored by many linear and quantifiable processes, may be the vanguard of innovation.

STEVEN ZEISLER
Zeisler Associates Inc.
Hockessin, DE
steven@zeislerassociates.com
 

Don't Automatically Adjust the Process

In Lynne B. Hare's August 2001 "Statistics Roundtable" article ("Chicken Soup for Processes," p. 76), the sidebar, "Wide or Narrow Specifications?" says, "A rule I've used successfully says specifications can never be narrower than capability."

He says if this rule is violated, the process will be adjusted unnecessarily, and process variation will be increased. However, when a control chart signals, we are supposed to look for assignable causes, not automatically adjust the process. If the process is stable but incapable, the search will be fruitless and we should not adjust the process. Control limits, not specification limits, should be used to determine the need for adjustment. This is why authors warn practitioners not to put specification limits on control charts.

Further, the rule might be misinterpreted to imply that specifications should be adjusted until they satisfy Hare's criteria. That's not true. Specifications should represent the values required to make the product meet customer or design requirements throughout its expected life.

This is irrespective of the capability of the process that produces the product. If the specifications are narrower than the capability, you should validate the assumptions and computations underlying the specifications. If the specifications are validated, you should try to improve the capability of the process until it satisfies some acceptable criteria.

A formal expression of such criteria is that the process interval be a subset of the specification interval. This is certainly a desirable situation, but you need to think about how the control limits are computed: Are they three sigma limits or some other number? The standard is three sigma, but Walter A. Shewhart said control limits should be based on economic considerations (the cost of looking vs. the cost of failing to look). So in cases where the cost of looking is small and the cost of failing to look is high, it might be reasonable to use two sigma limits.
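Flaig's distinction can be made concrete. The sketch below computes k-sigma limits from subgroup means; it is an illustration only (a textbook X-bar chart would estimate sigma from within-subgroup ranges via R-bar/d2 rather than from the means themselves), and note that specification limits never enter the calculation.

    import statistics

    def xbar_limits(subgroup_means, k=3.0):
        """Center line and k-sigma control limits from subgroup means.

        k=3 is the Shewhart standard; k=2 may fit cases where the cost
        of looking is small and the cost of failing to look is high.
        """
        center = statistics.mean(subgroup_means)
        sigma = statistics.stdev(subgroup_means)
        return center - k * sigma, center, center + k * sigma

    means = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
    lcl, center, ucl = xbar_limits(means, k=3.0)
    print(f"LCL = {lcl:.2f}, CL = {center:.2f}, UCL = {ucl:.2f}")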

You also need to think about how the specification limits are computed. One test of the limits' reasonableness is to compare the proposed limits with the limits derived using economic analysis methods such as Genichi Taguchi's quadratic loss function.
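For reference, the quadratic loss function Flaig cites is commonly written as follows (a standard form, not reproduced from the column):

    L(y) = k\,(y - T)^2, \qquad E[L] = k\left(\sigma^2 + (\mu - T)^2\right)

where T is the target value, k is a cost constant, and mu and sigma are the process mean and standard deviation. The expected-loss form shows why both off-target centering and variation carry economic cost.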

JOHN J. FLAIG
Applied Technology
johnflaig@aol.com

Author's Response: John Flaig's caution is right on the mark. I do have one quibble: The quadratic loss function is attributable to Abraham de Moivre and then Carl Friedrich Gauss. It was popularized by Taguchi, but the quality community should give credit where it is due.

LYNNE B. HARE
Kraft Foods Research
East Hanover, NJ 
harel@nabisco.com
 

An Equation Can Help Explain Quality

The July 2001 issue of Quality Progress featured excerpts from several experts regarding quality and its definition and meaning. The authors certainly identified the major factors of quality: customer satisfaction, the cost to produce a quality item and the user's or company's profits. I have used these same factors to explain quality to my students and customers. I expressed them in the form of an equation so they could see how these factors relate to achieving quality:

quality = (customer satisfaction x profits) / (costs to produce)

Customer satisfaction (actual or in-house for a company, department or individual) can be expressed as (S + R + U). S is safety (the preservation of life and health), R is reliability (the degree of dependability, trustworthiness), and U is the utility (the versatility, capability) of the product or service. Each should be rated on a scale of one to 10. Profits and costs are both expressed in money, so the monetary units cancel, leaving a dimensionless ratio.
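Duke's equation is simple enough to compute directly. Here is a minimal sketch, with illustrative names and made-up figures (profits and costs in the same currency so the units cancel):

    def quality_score(s, r, u, profits, costs):
        """Duke's equation: quality = (S + R + U) x profits / costs."""
        satisfaction = s + r + u        # each factor rated 1 to 10
        return satisfaction * profits / costs

    # Example: strong safety and reliability, modest margin
    print(quality_score(s=8, r=9, u=7, profits=2_000_000, costs=1_500_000))  # 32.0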

This equation could also be used to measure the quality performance of a company or its product or service. I never did because I never had the need or opportunity, but it was certainly helpful in explaining quality to others.

EDWARD M. DUKE
San Jose, CA
Marge_ed@netzero.net
 

Most Engineering Degrees Are Watered Down

Degrees in engineering (not including the watered-down versions that don't offer a real start for engineers without a GPA of 3.6 or above) are made for people who want to design hardware or software. Quality and manufacturing disciplines, for instance, do not really require a four-year engineering degree, and quality is a prime example. Basic statistics are all that 90% of manufacturing companies have ever applied.

Today, statistics are a real joke in the manufacturing world except for basic statistical process control charts. So how can somebody really have a life doing watered-down, high school-level engineering? What happened is that a lot of us bought into our universities' marketing promotions and got either watered-down engineering degrees or bogus job descriptions from the career center. Guess what? Any job other than design does not require a degree and will lead you to the unemployment line when downsizing occurs at your company (this does not include sales engineering).

I graduated with a bachelor's degree in mechanical engineering technology, and the closest I ever came to an engineering job was in drafting. I always knew everything else could have been performed by someone with a high school, or even junior high school, education and a 950 on the SAT.

Remember, you are in control in today's economy because you are making a real contribution to a company based on your knowledge from school. However, that kind of skill takes hard work.

RICH PECONE
Honeywell Technology Solutions
Columbia, MD 
richpecone@netzero.net
 

Figure Not Explained Thoroughly in Article

Did I miss something? The discussion of Pareto charts in the September 2001 issue (Melissa G. Hartman, "Separate the Vital Few From the Trivial Many," p. 120) was, of course, fundamental.

However, there was only a short sentence explaining Figure 2. Plotting a Pareto chart based on total cost per type of incident is worthy of consideration and discussion, but there was no explanation of how the cost of each complaint was determined. I think the subject of cost per incident and total cost per type of incident is worthy of more than one short sentence.

DAN EPSTEIN
Quality Management Consulting Services Inc.
Oceanside, NY
qmcsinc@aol.com

Author's Response: I agree completely with Epstein. Determination of costs is an important topic that deserves far more than one short sentence. However, I think that discussion would be better addressed in an article devoted to quality costs than in one devoted to Pareto diagrams.

My discussion of Pareto diagrams was, indeed, fundamental and was intended to demonstrate one tool for organizing information. You have correctly identified a need for an article that addresses how these cost data are gathered. I appreciate your careful eye and encourage you to write that article!

MELISSA G. HARTMAN
Baker University School of Professional and Graduate Studies
Overland Park, KS
mhartman@kscable.com

Note
Nicolae-George Dragulanescu, who authored a "World View" column in Quality Progress ("Quality Management Challenges in Romania," August 2001, p. 102), is interested in establishing contact with QP readers. His address is nicucal@yahoo.com.

Correction
In the section on Spicer Driveshaft in "Journey to the Baldrige" (Debbie Phillips-Donaldson, September 2001, p. 51), the excellence in manufacturing and Six Sigma quality challenge programs did not fail, as stated. The company changed the name from Six Sigma quality challenge to the Dana quality leadership award and then to the Dana quality leadership process to emphasize the process over the award.

Also, employees aren't required to take six hours of courses to become a supervisor. Instead, supervisors are strongly encouraged to pursue that training and education before and after they accept a leadership role.

