Quality Management Journal


July 2000
Volume 7 • Number 3


An Instrument for Measuring Quality Practices in Education

This paper reports the results of three studies designed to create and validate a survey instrument for measuring quality management/continuous improvement practices in educational settings. The intent was to develop an instrument that would both complement and enhance the Baldrige Award audit while overcoming some of its time and length limitations, namely one that would be efficient, provide timely feedback, and be representative of the entire organization. Measures of fit and interpretability suggest the Quality Practices in Education survey achieves this purpose. The process used in the development of this instrument included both exploratory and confirmatory factor analytic techniques, the use of qualitative data to provide insights into the development of the quantitative instrument, and built-in replication. This strategy, and the discussion of the tools for carrying it out, should be useful to researchers in any substantive area.

Key words: K-12 education, quality practices survey

by James R. Detert, Harvard University, and Roger Jenni, Northfield Public Schools and University of Minnesota

INTRODUCTION

The philosophy and principles of total quality management (TQM) have been extensively documented over the past 15 years (Aguayo 1990; Creech 1994; Deming 1986). The names of leading quality gurus like Deming, Crosby, and Juran are now ubiquitous in industry circles. Recently, scholars and practitioners from education, health care, and other nonindustrial settings have begun publishing articles and books that translate TQM into the dialect of these settings. For example, California school superintendent Lee Jenkins’ recent work is replete with examples of teachers using quality tools to tackle classroom problems and improve student learning (Jenkins 1997), while Harvard Medical School pediatrician Donald Berwick and his colleagues’ book outlines the use of quality management to “cure” health care (Berwick, Godfrey, and Roessner 1990). While the philosophy of TQM remains the same whether applied to industry or education, these “translations” are useful in creating a wider audience of potential adopters, since leaders and employees in most settings become more interested in investing time and money in a program once they see its direct applicability to their circumstances.

If the quality paradigm is to serve as a useful vehicle for improvement in these new settings, and be documented as such, valid measures of the implementation of these practices must be developed. A number of survey instruments have been developed for measuring the use of quality practices in industrial settings, but these instruments have limited applicability in other settings. For example, questions measuring new product quality or interfunctional design processes cannot be used in education or health care settings (Flynn, Sakakibara, and Schroeder 1994). Without valid survey instruments at their disposal, most organizations attempt to measure their use of quality management practices by conducting a Baldrige Award audit, which is a written self-assessment based on the Malcolm Baldrige National Quality Award (NIST 1999a). Although the Baldrige Award audit is quite thorough in its coverage, it has several major limitations. First, because the audit requires written answers and documentation for dozens of questions, its completion requires hundreds of hours (usually from key personnel). It is often such a fatiguing process that organizational leaders say they cannot imagine conducting another audit for years. (In their national study of high schools employing TQM, the authors were told by several principals or superintendents that the Baldrige Award audit was “a one-time deal.”) Second, because the process of conducting and writing the audit takes months to complete, and the auditor needs a few more months to review it and provide feedback, the feedback aspect of the audit is often not timely. In fact, the data reported in an audit are often more than one year old by the time feedback is received. Third, the audit is often completed by several key personnel, and is therefore not necessarily an accurate representation of all employees’ practices or beliefs. For example, the authors and their research team have visited several high schools that have won state quality awards based on self-conducted audits, and found that the majority of staff members cannot articulate a single quality principle or state a single personal use of a quality practice.

For these reasons, the process of creating a valid survey instrument for measuring quality practices in educational settings (specifically secondary education environments) was undertaken. The intent was to develop an instrument that would both complement and enhance the Baldrige Award audit while overcoming some of its time and length limitations, namely one that would be efficient, provide timely feedback, and be representative of the entire organization. This article reports the results of three studies designed to achieve this purpose, and presents a valid instrument for measuring quality in educational settings: the Quality Practices in Education survey. Furthermore, this work outlines the process used in the development of this instrument, which included both exploratory and confirmatory factor analytic techniques, the use of qualitative data to provide insights into the development of the quantitative instrument, and built-in replication.
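
To make this analytic sequence concrete, the sketch below illustrates how an exploratory-then-confirmatory factor analysis with a built-in replication split might be carried out using modern open-source tools (the factor_analyzer and semopy Python packages). The libraries, file name, item names, factor labels, and the choice of five factors are illustrative assumptions for demonstration only; they are not the tools or constructs used in the original studies.

    import pandas as pd
    import semopy                                 # confirmatory factor analysis (CFA)
    from factor_analyzer import FactorAnalyzer    # exploratory factor analysis (EFA)

    # Hypothetical item-level responses: one row per staff member, one column per survey item.
    responses = pd.read_csv("quality_practices_items.csv")

    # Built-in replication: split the sample into a calibration half and a replication half.
    calibration = responses.sample(frac=0.5, random_state=0)
    replication = responses.drop(calibration.index)

    # Exploratory step: extract factors from the calibration sample and inspect item loadings.
    efa = FactorAnalyzer(n_factors=5, rotation="oblimin")  # five factors is an assumption
    efa.fit(calibration)
    print(pd.DataFrame(efa.loadings_, index=calibration.columns))

    # Confirmatory step: test a hypothesized measurement model on the replication sample.
    # Factor and item names below are placeholders, not the paper's constructs.
    cfa_spec = """
    Leadership =~ item1 + item2 + item3
    ContinuousImprovement =~ item4 + item5 + item6
    """
    cfa = semopy.Model(cfa_spec)
    cfa.fit(replication)
    print(semopy.calc_stats(cfa))  # fit measures such as CFI and RMSEA

In this pattern the exploratory step lets the data suggest a factor structure, while the confirmatory step tests that structure against fresh data, with the reported fit measures indicating how well the hypothesized model reproduces the observed covariances.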