2019

STATISTICS ROUNDTABLE

Complaint Department

Statistical engineering expands sphere of influence

by Lynne B. Hare

George: How are you doing, John?
John: I can’t complain.
George: Sounds like a complaint to me.

Perhaps you know what goes on in most corporate complaint departments. Given euphemistic names such as "consumer affairs" and "consumer response," their business is still the same. In most of them, telephone operators enjoy the happy task of answering phone calls from often-irate customers who are upset with what they’re getting for their money. Complaints come in from other sources as well—mostly letters or emails. With the publication of 800 numbers on packages, websites and other points of consumer contact, however, most complaints are received by phone.

Trained as models of sympathy, operators make inquiries regarding the nature of the complaint. The first step is to douse the flames by apologizing. The real source of the irritation could be high heat and humidity, but they apologize anyway. Then the operators gather relevant information such as UPC, item size, color, SKU and the nature of the complaint, such as package damage or disappointing product performance.

Operators cannot be expected to be technical experts in all product-related matters, so to carry out the consumer interface, they are often prompted by computer screens that guide inquiry while logging relevant data. Successful sessions end in soothed customer nerves and valuable corporate information, assuming the data are used properly. Doubtless, at day’s end, the operators are not in the mood to listen to their children’s complaints about homework.

Accumulated complaint data are used in various ways, depending on the corporation and the unit within it. Marketing staff want to know about product negatives, while manufacturing staff want to isolate quality problems so they can be reduced or eliminated.

Some organizations publish internal tabulations arranged by product, manufacturing facility and type of complaint. Such tabulations can cause brain cramps. For most of us, eyes glaze over after the second page. When questioned about one such report’s use, a senior vice president was heard to confess that he circled the large numbers and threw the report in the waste basket. You just thought of a way to save a step, didn’t you?

Putting data to work

How can data of this nature best be put to corporate advantage? A first concern should be data quality control. Are the data reasonably representative of the complaining population to which inference is being made? If not, any effort to make sense out of them is doomed.

Not realizing the sweeping implications of its actions, one organization that was short of operators arranged for its telephone system to hang up on potential complainers after the 10th ring. Such practice renders summary statistics useless for the purposes of assessing consumer dissatisfaction, to say nothing about what it does to blister an already aggravated customer.

After issues of data representation are resolved, you also might examine how the data are categorized and whether the categories are mutually exclusive. Operators should be in sync with category definitions and boundaries, which, in turn, should cover the things likely to go wrong with the product and process and should be specific enough to capture differing product characteristics from one SKU to another.

With that stage set, the ground becomes more fertile for the sensible application of statistical thinking and methods directed toward greater customer satisfaction and productivity improvement.

Make no mistake: The messages from consumers arrive much too late to play a major role in the quality and productivity improvement processes. Certainly they are important, but other quality control techniques closer to the process are much sharper tools. Still, the data cannot and should not be ignored: They are often placed under the nose of the CEO, and that might be all the inspiration you need to take them seriously.

Consumer complaint data sets can be enormous and highly varied. There is enough work in their care and feeding to occupy the time and talents of at least one statistician, but there are not—nor will there ever be—enough statisticians to go around, given all the other expectations placed on them.

Enter statistical engineering

What to do? The principles of statistical engineering1 come to the rescue. Statisticians can and should work with those who own the problem to establish systems for complaint handling. A team might be composed of the head of the consumer complaint department, someone with strong programming skills, stakeholders from departments dependent on complaint reporting and, of course, a statistician.

Together, they plan the generation of informative reports tailored to fit organizational needs. Typical among these needs are:

  • The need to detect important changes and trends.
  • The need to recognize improvement when it occurs.
  • The need to avoid inundation by tables of data containing no important information.

Given these needs, an exception report—a document that alerts users to important events and doesn’t bother them otherwise—would seem appropriate. A report of this nature might be generated by tracking complaints over time, modeling them to learn of their baseline variability and informing users whenever complaint numbers wander beyond the bounds of expectation.

Those well versed in tools common to statistical quality control might be tempted to draw C-charts to track complaints. Put simply, the center line is the average number of complaints, c̄, per reporting period (usually months), while the upper and lower limits are c̄ ± 3√c̄.
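As a sketch only, the C-chart arithmetic takes just a few lines of Python. The monthly counts below are hypothetical, not data from any actual complaint system:

```python
# Hypothetical monthly complaint counts (illustrative only).
complaints = [12, 15, 9, 14, 11, 18, 13, 10, 16, 12, 14, 11]

c_bar = sum(complaints) / len(complaints)   # center line: average count per month
sigma = c_bar ** 0.5                        # Poisson: variance equals the mean
ucl = c_bar + 3 * sigma                     # upper control limit
lcl = max(0.0, c_bar - 3 * sigma)           # lower limit, floored at zero

# Months whose counts wander beyond the limits would trigger the exception report.
out_of_control = [c for c in complaints if c > ucl or c < lcl]
```

With these made-up counts, the chart stays quiet—exactly the behavior an exception report should exhibit on a stable process.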

We don’t say this very much in polite company, but the assumption here is that complaints are Poisson distributed, which in turn assumes that:

  • The probability of a complaint is proportional to the length of the reporting period.
  • The probability of multiple complaints occurring at the same time is negligible.
  • The probability of a complaint is consistent from interval to interval.
  • The probability of a complaint in one time period is independent of the probability of a complaint in another time period.

Right off the bat, you can see the first assumption is not likely to be true because product exposure to the market changes from month to month. One month, sales might be high; the next, low. An effective solution is to replace the C-chart with a U-chart to track complaints per million customer units sold.

The U-chart (Figure 1) sets as its center line the average number of complaints over a reasonably long reporting period, such as 24 or 36 months, divided by the average monthly sales for that same period. Then, control limits vary depending on the sales for the month in question. They are calculated as ū ± 3√(ū/nᵢ), in which nᵢ is the number of millions of customer units sold during the ith month.
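A minimal sketch of that calculation, again with made-up complaint counts and sales figures, shows how the limits respond to sales volume:

```python
# Hypothetical monthly complaints and sales (millions of customer units, n_i).
complaints = [120, 95, 140, 110, 130, 105]
units = [2.0, 1.5, 2.5, 1.8, 2.2, 1.7]

u_bar = sum(complaints) / sum(units)      # center line: complaints per million units

limits = []
for c, n in zip(complaints, units):
    u = c / n                             # this month's complaint rate
    half_width = 3 * (u_bar / n) ** 0.5   # width depends on that month's sales
    limits.append((u, max(0.0, u_bar - half_width), u_bar + half_width))
```

Because nᵢ sits under the square root, a slow sales month gets wider limits, so a modest bump in complaint counts is not mistaken for a real shift.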

Figure 1

The U-chart is an improvement over the C-chart for this application, but it is not without its problems. First, it is almost certainly true that the denominator—the sales data—represents shipments from a distribution center or a manufacturing facility. Unless the sales tracking system is up to date, perhaps representing cash-register sales, it does not actually reflect a given month’s sales.

To overcome the problem of the lag between product distribution measured this way and actual consumption, some complaint tracking routines are written to lag and smooth the sales series. That manipulation may help stabilize the complaint rate series, but it also may cause problems with the assumption of independence (point No. 4 mentioned earlier).
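One such manipulation might look like the following sketch. The one-month lag and three-month window here are assumptions chosen for illustration, not a prescription:

```python
# Hypothetical monthly shipments (millions of units) from a distribution center.
shipments = [2.0, 1.5, 2.5, 1.8, 2.2, 1.7, 2.1]

lag = 1      # assume roughly a one-month pipeline between shipment and consumption
window = 3   # three-month moving average to smooth shipment lumpiness

lagged = shipments[:-lag]   # treat month t's consumption as month t-1's shipment
smoothed = [
    sum(lagged[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(lagged))
]
```

Note that each smoothed value shares two of its three raw months with its neighbor—precisely the induced correlation that threatens the independence assumption.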

There are other charting and modeling techniques that might be considered to overcome violations of underlying assumptions. Among these are the use of exponentially weighted moving average control charts and cumulative sum charts. Modeling could be generalized to autoregressive integrated moving average (ARIMA) models, seasonally adjusted as needed. For example, you might need seasonality for barbecue sauce and allergy medication complaints.
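An exponentially weighted moving average, for example, reduces to a one-line recursion. The complaint-rate series and the weight of 0.2 below are assumptions for illustration only:

```python
# Hypothetical complaint rates (per million units); lam is an assumed weight.
rates = [60.0, 63.3, 56.0, 61.1, 59.1, 61.8]
lam = 0.2                 # small weights favor history, large ones recent data

ewma = [rates[0]]         # start the statistic at the first observation
for r in rates[1:]:
    ewma.append(lam * r + (1 - lam) * ewma[-1])
```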

The downside of ARIMA modeling is that the number of observations required for precise coefficient estimates is large—at least 100. Many complaint applications do not have that much history captured, and when they do, the history is not stable because of numerous market changes.

Trial run

Regardless of the charting method used, there is always a need for trial runs before a formal system is put in place. The trial runs should record what proportion of the charts trigger out-of-control messages. If there are too few, the system probably won’t provide information on important consumer issues. Too many, and the complaint department will be perceived as crying wolf. A healthy balance must be established so charts trigger action on the most important consumer issues. Control limit action rules can be adjusted to alter control chart vigilance.
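One way to run such a trial is to simulate a stable complaint process and count false alarms. This sketch assumes Poisson-distributed counts at a made-up baseline of 12 complaints per month:

```python
import math
import random

random.seed(1)                       # reproducible trial run

c_bar = 12.0                         # assumed stable baseline rate
ucl = c_bar + 3 * c_bar ** 0.5
lcl = max(0.0, c_bar - 3 * c_bar ** 0.5)

def poisson_draw(mu):
    # Knuth's multiplication method; adequate for small means like this one.
    threshold = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

months = [poisson_draw(c_bar) for _ in range(1000)]
alarms = sum(1 for c in months if c > ucl or c < lcl)
false_alarm_rate = alarms / len(months)
```

If the simulated rate strays far from the few-per-thousand a 3-sigma chart should yield on a stable process, the limits or action rules need adjusting before the system goes live.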

There also will be a temptation to examine control charts at higher levels in the reporting hierarchy. For example, you might be interested in all complaints relating to a particular manufacturing facility.

Users should be cautioned that such charts lack the power to detect differences that might be important to the organization because low complaint numbers will average out high numbers. The most informative and valid charts will be at the low levels of the data hierarchy because the assumption of a single Poisson mean is more likely to hold there.

In complaint handling, as in most other potential statistical applications areas, organizational considerations extend far beyond statistics. Yet much of the underpinning technology is statistical.

Successful complaint handling systems depend on team efforts supported by the statistician. There is strong synergy to statistical engineering enterprises of this kind. Neither the statisticians nor the other team members can develop successful systems alone. But working together, they can create powerful management tools.


Note

  1. For more about statistical engineering, see "Closing the Gap," by Roger W. Hoerl and Ronald D. Snee, Quality Progress, May 2010, pp. 52–53.

Lynne B. Hare is a statistical consultant. He holds a doctorate in statistics from Rutgers University in New Brunswick, NJ. He is past chairman of the ASQ Statistics Division and a fellow of ASQ and the American Statistical Association.

