Decreasing quality costs while increasing measurement confidence
by Philip Stein
I frequently get questions about whether a particular process or measurement requires calibration. For the most part, these questions are quite similar. For example, a quality engineer from a printing company recently wrote me, asking whether his densitometer (used to check colors on a print job) needed to be included in his ISO 9000 paragraph 4.11 activities.
First, I answered by quoting the 4.11.1 clause.
The supplier shall establish and maintain documented procedures to control, calibrate and maintain inspection, measuring and test equipment (including test software) used by the supplier to demonstrate the conformance of product to the specified requirements.1
If a measurement is not used to demonstrate conformance to requirements, it's not covered by the 4.11 clause. In the printing example, if a subsequent inspection or test procedure is used as the final arbiter of whether the color of a job is right, the densitometer need not be calibrated--at least according to the requirements of the standard.
But does it need to be calibrated for other reasons? My final answer to this correspondent, and to you, the reader, is to suggest this simple test:
Ask the question: Does it matter whether the answer from this measurement is correct?
If it does matter, then calibration is needed.
If it doesn't matter, then why is the measurement being made in the first place?
I've seen some cases where the value of a measurement is not important, but the trend is. In lens grinding, for example, a dial indicator is used to enable the operator to see the depth of penetration of the lap into the blank. When progress stops, or nearly so, the grinding is finished. The value of the measurement on the dial is not important. The only thing that matters is the rate of change of the pointer's position. It doesn't matter whether the answer from the measurement is correct, but there is still a reason to make the measurement. This instrument, therefore, does not need calibration and can be marked "for reference only."
Saving costs: a case study
Of course, the engineer with the densitometer did not send me his question in idle curiosity. Someone, probably his boss, wanted to save money by eliminating a few costly calibrations. With our test--asking whether it matters if the densitometer's measurement is accurate--the engineer can see that the measurement does indeed matter. Therefore, this is the wrong place to cut corners.
But while cutting corners may not always be a good idea, that doesn't mean money can't be saved. Too often, calibration is not done economically, and costs can be reduced without compromising results. In my last Quality Progress column, I said that calibration, and the choice of how often to calibrate, depends on economic risk. The calibration interval is an economic tradeoff in which you balance the frequency of calibration against the cost of calibration and against the potential consequences and costs of not calibrating. In some cases, it might be smart to calibrate more frequently if the costs of being wrong are enormous.
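This tradeoff can be made concrete with a little arithmetic. The sketch below is purely illustrative--the costs, failure probability and cost model are hypothetical numbers, not figures from any real calibration program--but it shows how expected annual cost shifts as the interval changes.

```python
# Minimal sketch of the calibration-interval tradeoff.
# All numbers are hypothetical illustrations.

def annual_cost(interval_months, cal_cost, fail_prob_per_month, fail_cost):
    """Expected yearly cost = calibration spending + risk of a bad tool.

    fail_prob_per_month: chance the tool drifts out of tolerance in a month.
    fail_cost: expected cost of product made with an out-of-tolerance tool.
    """
    cals_per_year = 12 / interval_months
    # Probability the tool goes out of tolerance within one interval.
    p_bad = 1 - (1 - fail_prob_per_month) ** interval_months
    return cals_per_year * (cal_cost + p_bad * fail_cost)

for months in (1, 3, 6, 12):
    cost = annual_cost(months, cal_cost=50,
                       fail_prob_per_month=0.01, fail_cost=2000)
    print(f"{months:>2}-month interval: ${cost:,.2f}/year")
```

With these particular (made-up) numbers, the longer interval wins; raise `fail_cost` enough and the ranking reverses, which is exactly the "enormous costs of being wrong" case above.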
Here's a real-life situation in which big bucks were saved just by being smart and knowing something about measuring tools. One of my clients makes rubber products. Some dimensions of these products are held to tight tolerances and, therefore, the correctness of readings from handheld micrometers is important.
To start the money saving process, we needed to understand how a micrometer works and, more important, why it is likely to read incorrectly and need calibration. When a micrometer is new (or when an old micrometer is introduced into a system), several of its characteristics need to be verified. We must learn how those characteristics behave, understand the wear they endure and determine an appropriate calibration interval.
Consider the pitch of a micrometer's screw. Unless the tool is used hundreds of times a day to check exactly the same dimensions, screw wear doesn't happen very fast and doesn't have to be checked often. The parts that do wear on most micrometers, however, are the faces of the spindle and anvil. Measuring abrasive materials such as titanium or teak can significantly wear the tool in a matter of weeks. Measuring rubber, by contrast, may never make a measurable difference.
This client was calibrating micrometers monthly, regardless of how often the devices were used. There are a few unusual circumstances in which I would have said that this wasn't often enough, but in a rubber factory, it's far too often. The calibration interval for micrometers used on rubber was changed to one year. This saved a lot of money and, with more experience, we may be able to comfortably increase the interval to two years.
The same company used separate but identical micrometers in general machining work, and these micrometers wore faster. Their calibration intervals were extended only to three or six months at first and will be adjusted further if the data indicate that this is wise. Note that although we saved money, we now have to be much smarter, treating each tool according to how it is used. This is not as easy as having the same interval for all like tools.
Even more interesting, some of the company's tools stayed in the instrument cabinet for months at a time, being removed only for calibration. Scheduling maintenance according to how much a tool has actually been used, rather than by the calendar, also requires more thought, but the savings can be immense.
Conduct checks to meet the real goal
Like most other tools, micrometers are subject to abuse--being dropped and so forth--and they may not last for two years without needing repair. It would be wrong to set an interval for two years if tools broke or bent more frequently than that. This leads us to a further consideration--not only deciding when to calibrate, but what to do each time you calibrate.
If you really care about the results of a measurement, you want to be confident in the stability and reliability of the tool you use. This confidence can be had at the cost of frequent calibrations, but it can be had more cheaply as well.
Continuing our exposition of the micrometer, we determined that under normal use, the device wears slowly and needs calibration infrequently. But at any time it could catastrophically break or become damaged. How do we guard against this while keeping costs to a minimum? The answer is to perform checks.
Let's use our understanding of how a micrometer works to develop an inexpensive procedure that will verify its operation without a full calibration. The simplest check is to run the spindle out to the end of its travel, then back to zero. If the motion feels smooth, without catching, binding or rattling loosely, the instrument has passed. A second check is to close the jaws and see if the instrument reads zero (close the jaws of a larger tool onto a gage block). A digital mike can be forced to read zero at this point at the push of a button, masking any offset, so you have to look at the mechanical scale on the spindle instead.
These two checks can be made weekly or even before every shift; they take only a minute or less. If the instruments pass muster, it's likely that everything is okay. If not, send them for repair and a full recalibration.
The last act of the check is to document that it has been completed. Since no numerical data are taken, a control chart would probably be overkill. Marking the date on a calendar or wall sheet that covers many instruments will produce a permanent quality record.
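The check-and-record routine above is simple enough to sketch in a few lines. The instrument names, field names and data structures below are my own illustration, not anything from a real quality system; the point is just that each check is logged and a failed tool is pulled from service immediately.

```python
# Minimal sketch of a quick-check log for the two checks described
# above (smooth spindle travel, zero reading). All names are illustrative.
from datetime import date

check_log = []          # permanent quality record, one row per check
out_of_service = set()  # instruments pulled for repair and recalibration

def record_check(instrument_id, travel_smooth, reads_zero):
    """Log a quick check; pull the tool from service if either check fails."""
    passed = travel_smooth and reads_zero
    check_log.append({"instrument": instrument_id,
                      "date": date.today().isoformat(),
                      "passed": passed})
    if not passed:
        # Failed tools go out for repair and a full recalibration.
        out_of_service.add(instrument_id)
    return passed

record_check("mic-007", travel_smooth=True, reads_zero=True)
record_check("mic-012", travel_smooth=True, reads_zero=False)
```

A dated list like `check_log`, whether kept in software or on a wall sheet, serves the same purpose either way: a permanent record that the checks were done.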
Remember that while measuring the reliability of the tool in this way is possible, it isn't the main point. The point is to keep the process simple and cheap, and just monitor to see that nothing catastrophic has happened. If your check finds a problem, the tool will be taken out of service, so it's unlikely that a broken tool will ever be used. In this way you have decreased your cost, decreased your calibration frequency and increased your confidence at the same time.
1. ANSI/ISO/ASQC Q9001-1994, Quality Systems--Model for Quality Assurance in Design, Development, Production, Installation and Servicing (Milwaukee: ASQ Quality Press, 1994).
PHILIP STEIN is a metrology and quality consultant in private practice in Pennington, NJ. He holds a master's degree in measurement science from The George Washington University, in Washington DC, and is an ASQ Fellow. For more information, go to www.measurement.com.