Q: I would like to know the best way of monitoring a process using an X-bar and R chart when you combine three processes.
Let’s say, for example, that for a validation, I need to perform three runs: one at the nominal setting (pressure), one at the high setting (+10% pressure) and one at the low setting (-10% pressure).
For each process, I collect 75 shots, sampling three consecutive shots at X frequency. For each set of three consecutive shots, I measure a dimension on a specific cavity and end up with 25 subgroups, each of which has three measurements. Then, I calculate the control limits for an X-bar and R chart with a subgroup size of three.
For the nominal run, the results are UCLx = U1, mean x̄ = u1 and LCLx = L1. For the high process, the results are UCLx = U2, mean x̄ = u2 and LCLx = L2. For the low process, the results are UCLx = U3, mean x̄ = u3 and LCLx = L3.
Can you tell me the best way of calculating limits for an X-bar and R chart combining the three processes or recommend a tool for this situation?
A: Because the question involves a validation with three runs, it sounds like a problem with demonstrating process stability and capability for a manufacturer regulated by the Food and Drug Administration. If so, I think I can help.
First, the validation process traditionally requires at least three runs: one run at the minimum process setting, one at the nominal setting and one at the maximum setting. If the process factor being validated is pressure, the manufacturer would conduct three manufacturing runs at the minimum, nominal and maximum pressure settings.
The purpose of each run is to demonstrate that the process will be stable and that all of the output will be acceptable as long as the process setting (pressure) stays within its specified tolerance.
As you suggest in your question, monitoring the output with an X-bar and R chart is simple if there is only one process. Each run of 25 subgroups would be plotted on the same control chart, perhaps in the order of the low, nominal and high pressure settings. The final control chart would have 75 subgroups, and you would evaluate the process output.
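As a sketch of the limit calculation itself, the following Python snippet computes X-bar and R chart limits for subgroups of three. The Shewhart constants for n = 3 (A2 = 1.023, D3 = 0, D4 = 2.574) are standard table values; the measurement data below are hypothetical stand-ins for one 25-subgroup run.

```python
# Shewhart constants for subgroup size n = 3 (standard table values)
A2, D3, D4 = 1.023, 0.0, 2.574

def xbar_r_limits(subgroups):
    """Return (LCLx, grand average, UCLx, LCLr, R-bar, UCLr) for a list
    of equal-sized subgroups (here, three shots each)."""
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar = sum(xbars) / len(xbars)   # grand average (center line)
    rbar = sum(ranges) / len(ranges)    # average range
    return (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar,
            D3 * rbar, rbar, D4 * rbar)

# Hypothetical nominal-pressure run: 25 subgroups of 3 measurements
subgroups = [[10.0 + 0.01 * i, 10.02 + 0.01 * i, 9.98 + 0.01 * i]
             for i in range(25)]
lclx, center, uclx, lclr, rbar, uclr = xbar_r_limits(subgroups)
```

Running the same function on each of the three runs would produce the U1/u1/L1, U2/u2/L2 and U3/u3/L3 values described in the question.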
If any of the final product is out of specification, the validation failed. If the pressure setting is determined to be the root cause of the failure, the pressure settings must be revised, and the validation should be repeated with tighter limits for pressure.
If some other factor caused the failure, appropriate corrective actions must be implemented and added to the process control scheme to prevent future failures. After the corrective actions are in place, repeat the validation runs.
Next, evaluate the control chart for out-of-control signals, including points beyond the control limits and shifts in the average. Out-of-control signals indicate the process is unstable and unpredictable. If pressure has a significant effect on the process, shifts in the average should correspond to changes in the pressure settings. Large shifts in the process average may indicate the pressure settings need to be more tightly controlled to ensure the output is acceptable.
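The two signals mentioned above, points beyond the control limits and shifts in the average, can be sketched as simple scans of the subgroup averages. The eight-point run rule used here is one common convention for detecting a shifted average; chart software may apply additional rules.

```python
def beyond_limits(xbars, lcl, ucl):
    """Indices of subgroup averages outside the control limits."""
    return [i for i, x in enumerate(xbars) if x < lcl or x > ucl]

def shift_runs(xbars, center, run_length=8):
    """Starting indices of any run of `run_length` consecutive points
    on one side of the center line -- a classic shifted-average signal."""
    signals = []
    for i in range(len(xbars) - run_length + 1):
        window = xbars[i:i + run_length]
        if all(x > center for x in window) or all(x < center for x in window):
            signals.append(i)
    return signals
```

For example, if the averages sit near 10.0 during the low-pressure run and jump to 10.5 at the high setting, `shift_runs` flags the windows on either side of the jump, which is exactly the kind of pressure-driven shift described above.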
Manipulating the pressure settings to stress the process will not happen during typical production operations, so some manufacturers will create a separate control chart for each pressure setting. This would result in three control charts with 25 runs each. If each chart is in control and all of the output is good, the validation is acceptable.
But what happens when there are multiple process factors and responses? First, each response should be evaluated separately. X-bar charts are a univariate statistical tool, so it is not appropriate to combine responses. Multivariate tools are available, but for the purpose of process validation, they do not offer enough of an advantage over univariate tools to be worth the trouble.
Process factors, on the other hand, can and should be combined. Suppose you have five process factors that are critical to ensure quality output. If you test each process setting separately, you will quickly discover how some consultants have carved out a career for themselves. Five factors, three runs per factor (low, nominal and high), 25 subgroups per run and three observations per subgroup add up to 1,125 observations. Classic overkill.
One option for combining factors is to run all factors simultaneously at the low setting, followed by a run with all factors at the nominal setting and, finally, a run with all factors at the high setting. This approach is a worst-case scenario because you may never need to run more than one or two factors simultaneously at their extreme settings.
In some cases, the factors can interact and cause the output to change dramatically. So do not pursue this approach unless you are confident the process is robust and the factor settings are tight enough to prevent unexpected failures.
Another option I have used frequently when performing characterization and validation studies is designed experiments, which provide a structure to systematically test various combinations of factor settings.
These experiments can be used to identify which factor settings influence the response. Therefore, the experiment can be used to validate the process tolerances and demonstrate the output is acceptable. Designed experiments are extremely efficient, so a validation effort with five factors could be completed in as few as 48 runs.
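As a rough illustration of the structure a designed experiment provides, this sketch enumerates a two-level full factorial for five hypothetical factors, plus a few center points at the nominal settings. The factor names, the choice of a full (rather than fractional) factorial and the number of center points are all assumptions; the actual design should be chosen to fit your risk and resource constraints.

```python
from itertools import product

# Hypothetical process factors; -1 = low, 0 = nominal, +1 = high setting
factors = ["pressure", "temperature", "speed", "hold_time", "cooling"]

# Full 2^5 factorial: every combination of low/high across five factors
corner_runs = [dict(zip(factors, levels))
               for levels in product([-1, +1], repeat=len(factors))]

# A few replicated runs at the nominal settings
center_points = [dict.fromkeys(factors, 0) for _ in range(4)]

design = corner_runs + center_points
```

The 32 corner runs plus center points cover every combination of extreme settings systematically, which is how a designed experiment reaches conclusions in far fewer observations than testing each factor separately.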
There are potential pitfalls with the sampling plan you propose. Because three consecutive shots are used to create each subgroup, the subgroup range will capture only the smallest possible short-term variation. Because the average range is used to estimate the control limits on the X-bar chart, the control limits may be artificially tight, and the control chart will appear to show an extreme lack of control. Use rational subgrouping to capture some of the routine variation within each subgroup.
Carefully select the samples so the control chart is more likely to detect the type of variation you want to monitor. If the raw data are not normally distributed, it may be prudent to use a larger subgroup size. A sample size of five is adequate in most cases to ensure the averages are normally distributed.
If necessary, the subgroups can be combined into a single, long control chart. The limits can be estimated using the first 25 subgroups and applied or extended to the remaining runs. You could also wait until all the data are collected and estimate the limits using all 75 subgroups. If you wait to calculate the limits, however, you might miss an important signal during the run and end up throwing away more product. It is always better to plot in real time.
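Extending frozen limits in real time might look like the following sketch: the limits are estimated from the first 25 subgroups, and each later subgroup is judged against them as it arrives. The constant and data are hypothetical.

```python
A2 = 1.023  # X-bar chart constant for subgroups of 3 (standard table value)

def frozen_limits(first_subgroups):
    """Estimate X-bar limits from an initial set of subgroups,
    to be extended unchanged to the remaining runs."""
    xbars = [sum(s) / len(s) for s in first_subgroups]
    ranges = [max(s) - min(s) for s in first_subgroups]
    center = sum(xbars) / len(xbars)
    rbar = sum(ranges) / len(ranges)
    return center - A2 * rbar, center, center + A2 * rbar

def plot_point(subgroup, lcl, ucl):
    """Evaluate one new subgroup against the frozen limits in real time.
    Returns the subgroup average and whether it is within the limits."""
    xbar = sum(subgroup) / len(subgroup)
    return xbar, lcl <= xbar <= ucl
```

Checking each point as it is collected, rather than after all 75 subgroups are in hand, is what allows you to react to a signal mid-run instead of scrapping the product afterward.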
Ideally, there will be no difference in the range or the average between the low, nominal and high groups. If this is the case, the subgroups should fall randomly between the limits, with no out-of-control points or long runs above or below the average. If there are differences in the average between the groups, a combined control chart will probably indicate out-of-control signals. This does not mean the validation failed.
Separate the data into the three groups and assess the control chart for each group. If each chart is in control, estimate the capability within each group. Is the capability index (Cpk, for example) greater than 1 at the low setting? How about at the nominal or high settings? If the answer is no, more work is needed. Perhaps the process tolerance needs to be tightened or shifted to ensure the output will be good, regardless of the setting.
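The within-group capability check can be sketched using the short-term sigma estimate R-bar / d2, where d2 = 1.693 is the standard constant for subgroups of three. The specification limits and data below are hypothetical.

```python
D2 = 1.693  # d2 constant for subgroup size n = 3 (standard table value)

def cpk(subgroups, lsl, usl):
    """Within-group Cpk from subgroup ranges: short-term sigma is
    estimated as R-bar / d2, and Cpk takes the worse of the two sides."""
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    mean = sum(xbars) / len(xbars)
    sigma = (sum(ranges) / len(ranges)) / D2
    return min(usl - mean, mean - lsl) / (3 * sigma)
```

Running this on each of the three groups (low, nominal, high) answers the question above: if any group's index falls below 1, that setting cannot reliably produce good output and the tolerance needs attention.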
Understand that validation is an intentional effort to prove the output will be good, even if the process settings are pushed to the extreme limits of the tolerance. But this is not the normal operating condition. Normally, the process will be run at the nominal setting.
If the long chart is in control, you can use the limits for ongoing process control. If you need to use separate charts to demonstrate capability and control, use the limits from the nominal run for ongoing production.
Consultant, Master Black Belt