Q: Our company is redesigning its customer service surveys. To date, the question of whether the customer was satisfied with the service gave the option of answering yes or no, with a space provided for the customer to provide additional comments.
On the new survey, the company wants to start using a Likert scale of 1 to 10 so customers can indicate their level of satisfaction with the service, but it also wants to leave the yes-or-no question on the survey. It would be structured so customers would indicate yes or no as to whether they were satisfied, and then the company would have the customer indicate the extent of satisfaction or dissatisfaction by rating on a scale of 1 to 10. The ability to write additional comments would still be provided.
Is it redundant to have the yes-or-no aspect of the question in addition to the scale? I feel it is, but maybe I’m wrong. Also, if the yes-or-no question is eliminated, how should research and reporting be done across that dividing line between the historical data that has the yes-or-no question and the new data that will not have it?
A: I strongly suggest you not use two ways to answer the same question. While the proposed approach seems simple and promises more information than you would get by using only one type of scale, there are two problems:
- Anything that seems even a little complicated will dissuade some people from completing the survey.
- A fraction of people will be confused, and you will end up with questionable data.
Imagine your confusion if a respondent checked the "no" box, and then indicated a rating of 7 on the satisfaction scale. Does this suggest he or she wasn’t sure how to use the scale and thought the 7 was the degree of dissatisfaction? Or maybe the respondent just wanted to send you a mixed message—that he or she was generally satisfied but was dissatisfied with some portion of your service and couldn’t answer yes. The problem is that you’ll suspect some of your respondents were confused, but you won’t know which ones.
As for which type of scale to use, I would definitely recommend the Likert scale. Just looking at the distribution of scores from a questionnaire that employs the Likert approach will give you a great deal more information than the percentage you will get from the same number of responses to a yes-or-no survey.
If you want to do more sophisticated statistical analysis, you will find you can detect significant patterns more quickly when your data is captured on a Likert scale. The average of the scores from Likert questions, together with their distribution, gives you more information than the single percentage you would get from yes-or-no responses.
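To illustrate the point (the data here is hypothetical, not from the column), consider two surveys that would report the identical "percent satisfied" on a yes-or-no question, yet tell very different stories once you look at the Likert distribution:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical 1-5 Likert responses from two surveys. Treating 4-5 as
# "yes" (satisfied), both collapse to the same 60% yes-rate, but the
# distributions differ sharply: A is polarized, B clusters near neutral.
survey_a = [5, 5, 5, 4, 4, 4, 2, 1, 1, 1]
survey_b = [4, 4, 4, 4, 4, 4, 3, 3, 3, 3]

for name, scores in [("A", survey_a), ("B", survey_b)]:
    yes_rate = sum(1 for s in scores if s >= 4) / len(scores)
    print(f"Survey {name}: yes-rate={yes_rate:.0%} "
          f"mean={mean(scores):.1f} stdev={stdev(scores):.2f} "
          f"counts={dict(sorted(Counter(scores).items()))}")
```

A yes-or-no question would score both surveys as 60% satisfied; the mean and spread of the Likert scores reveal that Survey A has a dissatisfied bloc the percentage alone would hide.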
There is one exception to my preference for the Likert scale: It’s obviously of little use for questions that can only have a yes-or-no response. For example, most political polls offer only two very distinct choices. Asking people to choose a candidate by using a Likert scale would not make much sense.
Experts hold many opinions on how broad a scale to use, each with reasons for preferring a particular range. You indicated that your company is considering a scale of 1 to 10. I have always preferred 1 to 5, or at most 1 to 7, because it is easy to fill out and still gives the survey provider good information.
Years ago, I took a class from Joseph Juran, and he used a scale from 1 to 20 on his course evaluation survey. Perhaps his grades were so consistently high that he needed the broader scale to look for subtle changes.
In comparing new data to previously collected data, if you use a Likert scale from 1 to 10, the simplest approach is to consider all responses that fall between 1 and 5 as "no" and all responses from 6 to 10 as "yes." There may be some sophisticated rationale for doing it otherwise, but I’m not aware of what it is.
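The recoding rule above is mechanical enough to automate. A minimal sketch (the function name and sample scores are illustrative, not from the column) that maps new 1-to-10 responses onto the historical yes/no categories:

```python
def to_yes_no(score: int) -> str:
    """Map a 1-10 Likert score to the historical yes/no categories:
    1-5 counts as "no", 6-10 as "yes", per the simple split above."""
    if not 1 <= score <= 10:
        raise ValueError(f"score out of range: {score}")
    return "yes" if score >= 6 else "no"

# Recode a batch of new-survey responses for comparison with old data.
new_scores = [7, 3, 10, 5, 6]
recoded = [to_yes_no(s) for s in new_scores]
yes_pct = recoded.count("yes") / len(recoded)
print(recoded, f"{yes_pct:.0%}")
```

The resulting yes-rate can then be trended directly against the percentages computed from the historical yes-or-no responses.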
Some people insist on using a scale that has an even number of choices—for example, 1 to 6 or 1 to 10, thus forcing the respondent to choose an answer that leans toward a positive or negative direction. I prefer an odd number, which gives the respondent an opportunity to indicate that he or she is neutral on the question. Why force an answer that really doesn’t mean anything?
In addition to the Likert scale, you should offer a "no opinion" choice. This lets respondents who lack enough information to rate their satisfaction say so while still completing the question, and it keeps your data from being skewed by responses that should not count.
Finally, if space on your form permits, provide space after each question for open-ended comments. While these might not have much statistical validity, they would provide insight that otherwise could be missed.
Fort Collins, CO
For more information
- Allen, I. Elaine and Christopher A. Seaman, "Likert Scales and Data Analysis," Quality Progress, July 2007, pp. 64-65.
- Fontenot, Gwen, Lucy Henke and Kerry Carson, "Take Action on Customer Satisfaction," Quality Progress, July 2005, pp. 40-47.
- Hayes, Bob E., "The True Test of Loyalty," Quality Progress, June 2008, pp. 20-26.
Q: Is there a maturity model that shows how to gauge an organization’s progress toward auditing the elements of a quality management system (QMS) or a standard from the International Organization for Standardization (ISO)?
Reviewing our audit-findings data from the last couple of years, document and record control (elements 4.2.3 and 4.2.4 in ISO 9001:2008) tend to be the most common areas of nonconformance. As a QMS process matures, I believe that findings would tend to move toward operational issues, such as ISO 9001 element 7.5.
Quality assurance manager
A: The short answer to your question of whether such a maturity model exists is no. However, you touch on an important issue relating to the purpose and evolution of a management system audit.
Like any process, an audit must add value. Any value an audit adds will change as the management system being audited matures. In a newly implemented management system, the focus of the internal audit program is conformance.
As the management system matures, however, conformance issues (including document control) should become less of an issue. Instead, the focus should shift toward the effectiveness of the system. For example, the audit should ask whether the processes accomplish planned results. If not, it should then ask whether actions are being taken to resolve the issues.
Auditors should spend less time focusing on issues such as document and record control and more time evaluating process performance and metrics. That doesn’t mean conformance issues such as document and record control should be ignored. It’s just my opinion that less time should be spent looking for problems in these areas in the absence of serious historical issues relating to these processes.
Unfortunately, many auditors never leave the comfort zone of auditing for conformance, which is relatively easy and painless. Being able to audit for effectiveness takes quite a bit of skill and a good measure of perspective, and it often surfaces some sensitive (but quite important) issues.
For the audit program to continue to provide value in driving process performance, it must be performed thoroughly.
Manager, operational excellence
Rio Tinto Minerals
For more information