Attribute Gage R&R

By Samuel E. Windsor, Delta Sigma Solutions LLC

SIMPLE BUT ROBUST TOOL SAVED ONE COMPANY $400,000 A YEAR.

Measurement systems are routinely analyzed using traditional gage repeatability and reproducibility (R&R) studies. These studies use analysis of variance (ANOVA) methods to quantify the variation attributable to gage repeatability and operator reproducibility.

The gage R&R study as it applies to continuous data is widely used and written about. But another form of this tool—the attribute gage R&R—can improve process yields and reduce costs dramatically.

Most processes require at least some form of subjective inspection or validation. It could be a check for blemishes on a painted or plated finish, or a judgment about the color, taste or smell of a product. In some cases, measuring equipment is available to assess the acceptability of such characteristics.

Often, however, test equipment is not used because of its cost, or it is simply not available. For example, although profilometers may be available for inspecting a machined surface finish, the finish may instead be judged with a fingernail test.

This fingernail type of inspection is prone to variability among inspectors and even to variability within the same inspector over time. Any variability in the measurement system affects the measured process variability and, in turn, the measure of process capability.

Although the math is different, the effect of misreporting process capability is the same for both continuous and attribute gage studies.
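For variables data, this effect is often summarized with the additive variance model, which is standard gage R&R background rather than a result from this case study: σ²(observed) = σ²(process) + σ²(gage). An inflated gage component inflates the observed variation and understates the reported capability. For attribute data, the same effect appears as misclassification: acceptable parts rejected and discrepant parts accepted, which distorts the reported defect rate rather than the reported spread.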

One advantage of the attribute gage study is that, unlike the variable gage study, it can easily be applied to transactional processes. For example, a study could be performed on how customer service representatives interpret a customer complaint or the way a customer requirement is converted into an internal order.

In the following case study, the simplest form of the attribute gage R&R (the short method), applied during the measure phase of a Black Belt project, is credited with saving a company more than $400,000 annually.

Background
An electroplating company supplying silver-plated machined parts to a telecommunications company was experiencing a rejection rate of just over 16,000 parts per million (ppm) at the customer's facility.

The parts were 100% visually inspected at the silver plating facility for defects consisting of pits, blisters, voids and rough surfaces. Accepted parts were wrapped and shipped to the customer, where they were sample inspected. Rejected parts were returned to the supplier for a process referred to as strip and replate.

In this process, the existing silver was removed, the part cleaned and new silver applied. The strip and replate process cost the plater twice the cost of the initial plating because of the combined expense of removing the existing silver and applying new silver.

For example, a part with a surface area of 100 square inches would cost 10 cents per square inch to plate, at a total cost of $10 per part. The cost of the strip and replate process would be 20 cents per square inch. This means the rework cost $20 per unit.
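To make the cost model concrete, the following Python sketch ties the per-part numbers to the rejection rate. The plating rate, part area and rework multiplier come from the example above; the annual volume is a hypothetical placeholder, not a figure reported in the study.

# Rough cost model for the strip and replate rework described above.
# Plating rate, part area and the 2x rework multiplier come from the
# article's example; the annual volume is a hypothetical placeholder.

plating_rate = 0.10                 # dollars per square inch, initial plating
part_area = 100                     # square inches, example part
rework_multiplier = 2               # strip and replate costs twice the initial plating
reject_rate = 16_000 / 1_000_000    # just over 16,000 ppm at the customer

plating_cost = plating_rate * part_area          # $10 per part
rework_cost = rework_multiplier * plating_cost   # $20 per reworked part

annual_volume = 1_000_000           # hypothetical parts shipped per year
annual_rework_cost = annual_volume * reject_rate * rework_cost

print(f"Rework cost per part: ${rework_cost:.2f}")
print(f"Annual rework cost at {annual_volume:,} parts: ${annual_rework_cost:,.0f}")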

Investigation
An investigation was conducted to determine what specifications were used by both the customer and supplier. In both cases the specification required the parts to be free of blisters, voids, scratches and roughness.

Even with identical specifications, nearly 2% of parts plated were rejected at the customer’s facility. Further investigation revealed no part was or could be expected to be completely free of blisters, voids, scratches and roughness. In addition, there was no real reason for the parts to be perfect.

There was a need, however, for the blemishes and other defects to be minimal. The difficulty became defining “minimal.”

The attribute gage R&R was employed to investigate how well both the customer's and supplier's attribute measurement systems actually performed. Because the project was initiated by the customer, the initial gage R&R was performed at the customer's location using a slight variation of the short-method attribute gage R&R study referenced in the Automotive Industry Action Group's (AIAG) Measurement System Analysis text.(1)

Results
The results were analyzed using a simple spreadsheet (many statistical software packages also have this capability). A spreadsheet was not strictly required, however, because the data can also be analyzed manually.

The study was conducted with a sample of 30 parts selected for their degree of compliance with the actual engineering requirement. Eight of the 30 parts were considered unacceptable to varying degrees, and 22 were considered acceptable, some only marginally.

Acceptability was determined by the agreement of two of the customer’s product engineers. Each part was numbered and the engineering decision recorded for each part as the standard.

Two experienced inspectors from the customer’s receiving department were chosen to participate. Each inspector would evaluate each part in the morning and afternoon of the same day, yielding a total of 120 inspection data points. As the study was conducted, the results were recorded for each piece next to the appropriate number on the data collection sheet. The results are shown in Table 1.

The data analysis indicates inspector one agreed with himself in 83% of the cases and with the standard in 53% of the cases. Inspector two agreed with his own results 90% of the time and with the standard 23% of the time.

In total, the percentage of time both inspectors agreed with their own results and with the standard was 13%. In 33% of the cases, the inspectors agreed with each other on both trials but not necessarily with the standard.

An identical experiment with the same 30 parts was conducted at the supplier using experienced final inspectors, with the results shown in Table 2.

Comparisons of the study results indicate the supplier’s inspection is actually more consistent with the customer’s engineering requirements. (Instances in which both inspectors agreed with each other and with the standard were 13% for the customer vs. 70% for the supplier).

As noted earlier, inspector two in the initial customer study agreed with himself 90% of the time but with the standard only 23% of the time.

A detailed review of the results, shown in Tables 3 and 4, also indicates customer inspector two is much more likely to reject a part than customer inspector one. Customer inspector one rejected acceptable parts (a type one error) in both trials on five occasions, while accepting discrepant parts (a type two error) in both trials on four occasions. Inspector two consistently rejected acceptable parts on 19 of the 30 occasions.

Explanation of Results
The “agreed with own results” figure was calculated as the percentage of parts on which an inspector's decision was the same in both trials. In the initial customer study, inspector one agreed with his own result on 25 of the 30 parts (83.3%).

The “agreement with standard” percentage shows how often an inspector agreed both with himself and with the standard. In this case, inspector one had 16 of 30 cases in which the results were consistent between the two trials and with the standard (53.3%).

The overall effectiveness, or “inspector one vs. inspector two vs. the standard,” is the percentage of parts on which both inspectors agreed with their own results, with each other and with the standard, in this case four of 30 parts (13.3%).
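For readers who prefer to script the tally, the following Python sketch reproduces these three calculations from an accept/reject log. The five-part data set shown is illustrative only; it is not the Table 1 data.

# Short-method attribute gage R&R tallies: two inspectors, two trials each,
# decisions recorded as "A" (accept) or "R" (reject) against an engineering
# standard. The data below are illustrative, not the study data.

standard     = ["A", "A", "R", "A", "R"]
insp1_trial1 = ["A", "R", "R", "A", "A"]
insp1_trial2 = ["A", "R", "R", "A", "R"]
insp2_trial1 = ["R", "A", "R", "A", "R"]
insp2_trial2 = ["R", "A", "R", "A", "R"]

n = len(standard)

def pct(count):
    return 100.0 * count / n

# Agreed with own results: the inspector's decisions match across the two trials.
insp1_self = sum(a == b for a, b in zip(insp1_trial1, insp1_trial2))
insp2_self = sum(a == b for a, b in zip(insp2_trial1, insp2_trial2))

# Agreement with standard: both of the inspector's trials match the standard.
insp1_std = sum(a == b == s for a, b, s in zip(insp1_trial1, insp1_trial2, standard))
insp2_std = sum(a == b == s for a, b, s in zip(insp2_trial1, insp2_trial2, standard))

# Overall effectiveness: both inspectors agree with their own results,
# with each other and with the standard on the same part.
overall = sum(
    a1 == a2 == b1 == b2 == s
    for a1, a2, b1, b2, s in zip(insp1_trial1, insp1_trial2,
                                 insp2_trial1, insp2_trial2, standard)
)

print(f"Inspector 1 agreed with own results: {pct(insp1_self):.1f}%")
print(f"Inspector 1 agreed with standard:    {pct(insp1_std):.1f}%")
print(f"Inspector 2 agreed with own results: {pct(insp2_self):.1f}%")
print(f"Inspector 2 agreed with standard:    {pct(insp2_std):.1f}%")
print(f"Overall effectiveness vs. standard:  {pct(overall):.1f}%")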

What the Data Say
The most important part of any such exercise is to turn the raw data into either a validation of the system or an action plan to fix the system. In this case the measurement system is in clear need of repair or replacement.

Replacement is not an option because there is no known alternative to human visual inspection for this process. The challenge is to correct the present system to provide consistency between the customer's and supplier's results.

The data show customer inspector two rejected many more parts than necessary, indicating he did not understand the requirements, took an overly critical view of them or was simply afraid to accept any part with a small, inconsequential defect. Inspector one accepted more parts but still made the correct decision only about 53% of the time.

To address the problem, the customer’s engineering and quality representatives worked with the supplier’s quality group to create a standard for the most common defect types along with minimum/maximum type photos.

This information was included with the specification for silver plating at both the customer and supplier facilities. All inspectors were trained in this specification, and the actual requirements were discussed in great detail.

In the weeks after the training session, the gage R&R study was performed again using the original parts, with the results shown in Tables 5 and 6. Original data for the last three studies are shown in Tables 7 and 8.

Savings
In the measure phase of any Six Sigma Black Belt project, you must evaluate all forms of measurement, not only the measurement systems with variable outputs. You have to ask whether a fingernail-type test is repeatable within a given inspector and reproducible between inspectors at both the customer's and supplier's facilities.

While work continues to further improve the measurement system used in the case study (according to AIAG’s Measurement System Analysis,(2) agreement should be 100%), the attribute gage R&R yielded significant results by dramatically improving the agreement between the customer and supplier and among inspectors within the two organizations.

In this case, the gage R&R resulted in an annualized savings of nearly $400,000 on just the cost of the strip and replate operation. If the time lost and transportation costs of returning the material to the supplier had been tracked, the reported savings would have been even greater.

The application of attribute gage R&R demonstrates the variation in inspection methods between experts when inspection standards are not used. In a project involving visual repeatability and reproducibility, the control phase is an important consideration: publication and ongoing document control of visual standards, along with periodic training, are critical to keeping visual inspection methods consistent.

The tool can be used in this very simple form or expanded to include confidence intervals and probabilities of defects within particular ranges, as explained in AIAG’s attribute gage R&R long method.(3)
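As a generic illustration of how a confidence interval can be attached to an observed agreement rate, the short Python function below computes a Wilson score interval for a proportion. This is standard binomial statistics offered as a sketch, not the AIAG long-method procedure itself.

# Wilson score interval for an observed agreement proportion.
# Standard binomial statistics, shown for illustration; the AIAG long
# method adds further analyses beyond a simple interval.
import math

def wilson_interval(successes, trials, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p_hat = successes / trials
    denom = 1 + z ** 2 / trials
    center = (p_hat + z ** 2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / trials + z ** 2 / (4 * trials ** 2)
    )
    return center - margin, center + margin

# Example: the supplier study, in which both inspectors agreed with the
# standard on 21 of 30 parts (70%).
low, high = wilson_interval(21, 30)
print(f"Observed 70% agreement, 95% CI: {low:.2f} to {high:.2f}")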

The case study in this article shows how the tool can be applied without software, but the data can also be analyzed with popular statistical software packages or with a simple spreadsheet that performs the calculations for you.

An attribute gage R&R can normally be performed at very low cost with little impact on the process. Significant benefits can be gained from looking at even our most basic processes.

References

  1. Automotive Industry Action Group, Measurement System Analysis, www.aiag.org.
  2. Ibid.
  3. Ibid.
