
EXPERT ANSWERS

Dealing with deviation

Q: We have two sites manufacturing the same product. There is concern, however, that the new site has deviated from the original design intent. What is the best tool or strategy to use to perform a gap assessment of the new site’s process to determine whether there is any gap from a design transfer perspective?

John Surfus
San Mateo, CA

A: You have every right to be concerned. Any deviation from the original design intent introduces risk and uncertainty. The original manufacturing site has experience with the process, and it knows how well the product performs in the field.

Will changes to the proven process make the product better, or will they produce unintended consequences? It is very important to proceed with caution, because it is much easier and cheaper to prevent problems than it is to fix them once they are in the field.

The first question to consider is whether the change is with respect to the product design intent or the process design intent. Let’s consider the process intent first. The process intent specifies the sequence of operations, the selection of equipment and tooling, the machine settings (such as temperatures and speeds), and procedures for measuring and controlling the process.

Deviations from the approved process (such as faster feed rates) may actually improve productivity and quality. For example, some machining operations can be improved by increasing the speed of the equipment, which reduces tool chatter and results in a smoother finish and less tool wear. If this is the case, the new site should provide detailed evidence showing that the change improves the process. Management can review the evidence and decide whether the expected benefits justify taking a risk.

If possible, the risk should be quantified through a formal risk assessment. One common method of quantifying risk is to create a process failure mode and effects analysis (FMEA), which lists potential failure modes. Each failure mode has associated numerical estimates for severity, occurrence and detection. These three numbers are multiplied to create a risk priority number (RPN).
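
To make the arithmetic concrete, here is a minimal sketch in Python of how RPNs are computed from the three ratings. The failure modes and 1-10 scores shown are hypothetical; a real FMEA uses rating scales defined by your organization.

```python
# Hypothetical process FMEA entries:
# (description, severity, occurrence, detection), each on an assumed 1-10 scale.
failure_modes = [
    ("Feed rate too high causes undersized bore", 7, 3, 4),
    ("Tool wear produces rough surface finish",   5, 6, 2),
    ("Wrong fixture allows part misalignment",    9, 2, 8),
]

for description, severity, occurrence, detection in failure_modes:
    rpn = severity * occurrence * detection  # risk priority number
    print(f"RPN = {rpn:3d}  {description}")
```

Teams then sort by RPN and focus mitigation effort on the highest-scoring failure modes.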

Failure modes with high RPN values should be avoided if possible. If the RPN is low, then the benefit of the proposed change may be worth the risk. The proof is in the results. The new site must provide documented evidence that the output of the revised process meets all of the product requirements. The product requirements always include visual and dimensional specifications but may also include performance criteria, such as corrosion resistance and durability tests.

If the deviation is with respect to the product design intent, then the risk is considerably greater. A product FMEA works the same way as a process FMEA, except that a failure now indicates a design flaw because the product no longer meets the design intent. These flaws can include a feature that does not work or the product wearing out too soon.

Suppose the new site has conducted studies that show its equipment is not capable of meeting the specifications. Management does not want to perform 100% inspection and carry the burden of scrap or rework, so it requests a deviation to widen a particular tolerance. Although this type of request is common and usually harmless, we have all suffered at some point from unintended consequences.
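
Capability studies of this kind usually reduce to comparing the process spread to the tolerance. Here is a minimal sketch, with hypothetical specification limits and simulated measurements, of the Cpk calculation that would back up such a claim:

```python
import numpy as np

# Hypothetical spec limits and a simulated sample of 50 measurements.
rng = np.random.default_rng(1)
lsl, usl = 9.90, 10.10
measurements = rng.normal(loc=10.02, scale=0.04, size=50)

mean = measurements.mean()
sd = measurements.std(ddof=1)
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"mean = {mean:.4f}, sd = {sd:.4f}, Cpk = {cpk:.2f}")
# A Cpk well below the common 1.33 benchmark indicates the equipment
# cannot reliably hold the tolerance, which is what prompts the
# deviation request.
```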

In this case, a formal cross-functional review is needed. Involve personnel from design engineering, quality, reliability and maybe even marketing. Will this change affect performance? Durability? Customer satisfaction? A formal risk assessment is crucial to answering those questions.

Andy Barnett
Consultant/Master Black Belt
Houston

For more information

  1. Borror, Connie M., ed., The Certified Quality Engineer Handbook, third edition, ASQ Quality Press, 2009.
  2. Ramu, Govind, "FMEA Minus the Pain," Quality Progress, March 2009, pp. 36-42.

Capable approach?

Q: I am an avid reader of QP but was never moved to contact you until I read the article "Calculated Decision" in the January issue. The article was very well written: thorough, yet presented in a format that was easy to follow.

I have two follow-up questions. The first relates to the choice of using a statistical process control (SPC) chart, which indicated three out-of-control points. The authors convincingly demonstrated that these points were false triggers caused by the non-normal process data.

I was thinking an X-bar chart would have been a more robust way to determine whether the process was in control, because the central limit theorem would apply and thus provide a workaround for the non-normal data. Would you agree? If so, what would be a good rule of thumb for subgroup size with non-normal data?

My second question is a personal sore point. It relates to the 1.5-sigma shift assumption for short-term versus long-term capability studies. My understanding of the logic behind this assumption is as follows: If you perform a short-term capability study, you can assume the process will be less capable if studied over a longer period, which makes sense. By convention, that loss of capability is taken to be a 1.5-sigma shift.

I can live with that, but my personal issue comes with the application of this 1.5-sigma shift assumption. It appears to me that the application goes the opposite way. For example, a Six Sigma table or calculator might report 3.14 as the sigma value, but I understand that proper application of the 1.5-sigma shift should result in 0.14 being reported as the sigma value.

I favor reporting 1.64 as the sigma value (and therefore reporting no shifting), whether we’re talking about a short-term or a long-term study. Can you help provide me with a sanity check?

Andrew McDermott
Lansdale, PA

A: You have some interesting questions. Let’s take them one at a time:

First, for the data scenario in the article—and many others—I would not agree with taking subgroups and counting on the central limit theorem to assure the normality of the averages, which would allow the use of a standard X-bar chart.

The reason I disagree is that the samples constituting a subgroup are assumed to contain only common-cause variation. As a result, the pooled standard deviation from all the subgroups can be taken as an estimate of the short-term, or common-cause, variability in the response. Such subgroups are commonly referred to as rational subgroups.

This assumption about subgroups is what allows you to use the pooled standard deviation to estimate the control limits for an effective X-bar chart. The data points in the article, however, are individual readings with a relatively long time between them. As a result, we could not assume that a subgroup of five or more consecutive readings contained only common-cause variation.

Combining individual data points into subgroups to generate an X-bar chart therefore risks inflating the estimate of the within-subgroup standard deviation, resulting in wide control limits and an ineffective control chart. In a different data scenario, in which a sufficient number of measurements are taken during a relatively short timeframe, simulations have shown that for a subgroup size of five or more, the high rate of false alarms caused by assuming normality for non-normal data decreases to a negligible level when using the standard X-bar chart calculations. So I would guess that five is the number you are looking for.
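
To illustrate the direction of that effect, here is a minimal simulation, assuming heavily skewed exponential data rather than the article's data set. Averaging into subgroups of five moves the false-alarm rate of 3-sigma limits back toward the nominal 0.27%; with milder skew, the rate gets essentially all the way there:

```python
import numpy as np

# Simulated skewed process data: 100,000 subgroups of size 5.
rng = np.random.default_rng(7)
n_subgroups, n = 100_000, 5
data = rng.exponential(scale=1.0, size=(n_subgroups, n))

mu, sigma = 1.0, 1.0  # true mean and standard deviation of the exponential

# Individuals chart: every reading against mu +/- 3*sigma.
x = data.ravel()
fa_individuals = np.mean((x > mu + 3 * sigma) | (x < mu - 3 * sigma))

# X-bar chart: subgroup means against mu +/- 3*sigma/sqrt(n).
xbar = data.mean(axis=1)
fa_xbar = np.mean((xbar > mu + 3 * sigma / n**0.5) |
                  (xbar < mu - 3 * sigma / n**0.5))

print(f"false alarms, individuals:    {fa_individuals:.4f}")  # ~0.018
print(f"false alarms, x-bar (n = 5):  {fa_xbar:.4f}")         # ~0.009
print("nominal rate under normality:  0.0027")
```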

I should warn you, however, about forming subgroups from autocorrelated data, in which readings taken close together in time are very similar. The variation within each subgroup will be very small, causing tight control limits and an X-bar chart with many data points appearing to be out of control.
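
A minimal simulation of that failure mode, assuming AR(1) data in which consecutive readings are strongly correlated:

```python
import numpy as np

# Simulated AR(1) process: each reading is 0.9 times the previous one
# plus noise, so consecutive readings are very similar.
rng = np.random.default_rng(42)
phi, n_obs, n = 0.9, 10_000, 5

x = np.empty(n_obs)
x[0] = rng.normal()
for t in range(1, n_obs):
    x[t] = phi * x[t - 1] + rng.normal()

subgroups = x.reshape(-1, n)                      # consecutive readings per subgroup
within_sd = subgroups.std(axis=1, ddof=1).mean()  # what sets the control limits
overall_sd = x.std(ddof=1)                        # the variation actually present

print(f"average within-subgroup sd: {within_sd:.2f}")  # ~1.0, small
print(f"overall sd:                 {overall_sd:.2f}")  # ~2.3, much larger
# Limits built from the small within-subgroup sd are far too tight, so
# many subgroup means land outside them and appear out of control.
```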

As for your second question, this is a highly debated topic with several lines of thinking. Here is mine: The logic behind the 1.5-sigma shift is not wrong; it is just confusing in its application.

In your example, I believe the assumption is that the original defects-per-million-opportunities figure was calculated from long-term data. Thus, 1.64 is a long-term capability estimate. Because empirical evidence suggests the process mean shifts by approximately 1.5 standard deviations over the long term, and because sigma is supposed to be an estimate of short-term or potential capability, sigma should be 1.64 + 1.5 = 3.14.

So, 3.14 is correct only if you assume the original estimate was arrived at using data taken over the long term, if you agree with the 1.5-sigma shift estimate, and if you understand sigma to be a short-term process capability index. How to incorporate the 1.5-sigma shift in your calculation is confusing, but it does not lack a logical backing.
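
As a quick sanity check on that bookkeeping, here is a short sketch using scipy. The DPMO value is hypothetical, chosen so the long-term Z works out to 1.64:

```python
from scipy.stats import norm

# Assumed long-term defect rate: 50,500 defects per million opportunities.
dpmo = 50_500
z_long_term = norm.ppf(1 - dpmo / 1e6)  # long-term capability, ~1.64
sigma_level = z_long_term + 1.5         # add the assumed 1.5-sigma shift
print(f"long-term Z = {z_long_term:.2f}, "
      f"reported sigma level = {sigma_level:.2f}")
# long-term Z = 1.64, reported sigma level = 3.14
```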

Louis Johnson
Senior technical trainer, Minitab
State College, PA

For more information

  1. Breyfogle, Forrest W., Implementing Six Sigma, second edition, John Wiley & Sons, 2003, pp. 12-15.
