Baldrige Assessment and Organizational Learning: The Need for Change Management

July 2001
Volume 8 • Number 3



by Matthew W. Ford, University of Cincinnati, James R. Evans, University of Cincinnati

Self-assessment using the Malcolm Baldrige Award Criteria for Performance Excellence (CPE) has become a widespread practice among all types of organizations. Empirical evidence suggests that Baldrige Award-based assessment typically results in improvements to managerial processes. Although the notion of process change is embedded within the CPE framework, the criteria do not explicitly address how an organization manages such change. In this article, the linkage between the criteria and change management is described. It is suggested that an effective process change management model can be derived from the framework of the criteria for performance excellence. Using concepts from the literatures on organizational change, assessment, and learning, a model for managing change in the context of the criteria is generated. The change process model parallels the model of strategic change that has historically been well specified by the CPE and refines the criteria’s notion of learning. These two models are linked by the exchange of information between the organizational performance review item and diagnostic self-assessment activities. Suggestions are made for expanding the organizational performance review item in the criteria to incorporate change management as an explicit area to address.

Key words: evaluation, organizational change, quality management, self-assessment

Self-assessment using the Malcolm Baldrige Award Criteria for Performance Excellence (CPE) has become increasingly common in organizations (for example, Jordan 1994; Knutton 1994; Zaremba and Crew 1995; Brereton 1996; Wu, Wiebe, and Politi 1997; Caravatta 1997; Fountain 1998). Data suggest that many firms are using self-assessment as the basis for improvement. For example, a study of 70 companies in the United Kingdom revealed that approximately two-thirds had at least experimented with self-assessment using a quality management framework similar to the CPE (Finn and Porter 1994). Here, self-assessment using the CPE means an evaluation of an organization’s performance management processes against the criteria, with the objective of identifying key strengths and opportunities for improvement. Key strengths can be leveraged and refined, while opportunities for improvement are intended to lead to actions that will take the organization to higher levels of performance. Self-assessment activities are governed primarily by the organization’s managers, although external consultants—often current or former Baldrige Award examiners—may be included to enhance objectivity.

The increase in self-assessment coincides with the diffusion of the CPE as a basis for evaluating performance management system effectiveness. In addition to the national award program for manufacturing, service, and small business, which has been in existence since 1988 (nonprofit, education, and health care sectors were added in 1999), many state and local quality award programs based on the CPE have been initiated. NIST (1998) reported that Baldrige Award-based state quality award programs increased from 6 in 1991 to 43 in 1997. Many of these programs also allow nonprofit service organizations and government agencies to apply. The CPE have spread internationally as well, as some 25 countries have developed Baldrige Award-based award programs (Nabitz, Quaglia, and Wangen 1999).

As the CPE becomes the basis for more formal award programs, applicants are motivated to conduct internal assessments to prepare for the formal evaluation process (Meyers and Heller 1995; Blazey 1998). Regardless of whether firms enter the formal award process, increased diffusion of the criteria exposes more managers to the framework, which, in turn, encourages increased use of the CPE for organizational evaluation or assessment (Herrington 1994).

Few formal studies have investigated the consequences of Baldrige Award-based self-assessment. In an investigation of self-assessment that included organizations using criteria other than the CPE, van der Wiele et al. (2000) found that self-assessment generally led to better agreement on the organization’s strengths and improvement opportunities and to better planning. Organizations that practiced self-assessment also appeared to realize greater improvements in market share and profitability than organizations that did not.

Anecdotal evidence suggests that Baldrige Award-based self-assessment frequently leads to organizational change—stemming from managerial actions to improve management processes and practices based on assessment findings (Meyers and Heller 1995; Markels 1999; Prybutok and Spink 1999). In the criteria, such “process” change is supported by the core value of organizational learning. Organizational learning “refers to continuous improvement of existing approaches and processes and adaptation to change, leading to new goals and/or approaches” (NIST 2000, 2), and is described as a four-stage “learning cycle” that includes planning, execution of plans, assessment of progress, and revision of plans (NIST 2000, 7). Indeed, the CPE scoring guidelines clearly seek evidence of “systematic evaluation and improvement” and “organizational learning/sharing” as key management tools for strengthening the effectiveness of these processes (NIST 2000, 45). Although the framework encourages process change, the criteria do not explicitly address how an organization manages such change. The authors’ premise is that managing the process change resulting from self-assessment should be viewed as a critical leadership process in the criteria.

As Baldrige Award-based self-assessments represent a catalyst for change, organizations need a clearly defined approach for effecting the changes to their key management processes that result from self-assessments. In this article, the link between the CPE and change management is described. It is suggested that an effective process change management model can be derived from the framework of the criteria for performance excellence. Using concepts from the literatures on organizational change, assessment, and learning, a model for managing change in the context of the criteria is generated. The implications of this work, including how the CPE might be expanded to incorporate process change management as an explicit area to address, are also discussed.


It is important to differentiate between organizational changes resulting from strategy development and implementation (that is, “strategic change”), and organizational changes resulting from self-assessment (that is, “process change”). The theoretical aspects of strategic change are well captured by the CPE (Ford and Evans 2000). Indeed, a key objective of the CPE is to provide a framework for performance management excellence based on the development and implementation of strategy (NIST 2000, 6). Strategic change stems from strategic objectives, which the CPE defines as follows:

An organization’s major change (authors’ emphasis) opportunities and/or the fundamental challenges the organization faces. Strategic objectives are generally externally focused, relating to significant customer, market, product/service, or technological opportunities and challenges. Broadly stated, they are what an organization must change (authors’ emphasis) or improve to remain or become competitive. Strategic objectives set an organization’s longer-term directions and guide resource allocations and redistributions. (NIST 2000, 29)

Strategy might be built around or lead to any or all of the following: new products, services, and markets; revenue growth; cost reduction; business acquisitions; and new partnerships and alliances. Strategy might be directed toward becoming a preferred supplier, a low-cost producer, a market innovator, and/or a high-end or customized service provider. (NIST 2000, 12)

This empirical view aligns closely with the theoretical notion of strategic change (for example, Ansoff 1965; Andrews 1971; Hofer and Schendel 1978; Nadler and Tushman 1989). Strategic change is broad in scope, is driven by environmental forces, and is tied closely to the organization’s ability to achieve its goals.

In contrast, change resulting from self-assessment activities might be viewed as more of an operational exercise. These changes result from an examination of organizational processes and aim at changing organizational “infrastructures”—the organization’s processes for achieving results. Evans (1997) noted that the CPE requirements addressed a number of infrastructural processes; he identified more than 40 managerial processes in the 1996 version of the Baldrige Award criteria. These managerial processes are often the focus of self-assessment activities.

Empirical evidence supports the notion that Baldrige Award-based assessments result in process change. Solectron’s first feedback report in 1989 suggested a lack of customer focus and long-range planning. The company subsequently initiated a “customer executive survey” to identify long-term technology and production needs and established a hoshin kanri strategic planning process (Markels 1999). When a Baldrige Award-based self-assessment of a health care organization revealed weaknesses in the organization’s ability to collect and analyze information, management subsequently approved a $50 million information system upgrade (Prybutok and Spink 1999). Self-assessment findings at an AT&T division suggested that many employees did not recall the division’s strategic vision, which prompted managers to increase meetings and interactions with employees to improve communication (Meyers and Heller 1995). One of the 1999 Baldrige Award winners, BI, a consulting firm in Minneapolis, did not receive the award until its tenth try—clear evidence of continual process change in the face of evolving criteria.

Although change to a business process tends to have lasting effects, the change tends to be narrow in scope. Unlike strategic change, which motivates organizationwide changes in behavior, process change is often confined to a particular unit, division, or function of the organization. For example, changing an organization’s process for measuring customer satisfaction usually requires substantive adjustment to a limited number of functional areas, such as marketing or information systems. Nadler and Tushman (1989) labeled such change as “tuning”—incremental change of narrow scope to increase organizational efficiency, but usually not in response to an immediate problem. In Table 1, the characteristics of strategic change are separated from those of process change.

The processes for developing and implementing strategic change have been addressed by scholars in strategic management. Recently, Ford and Evans (2000) demonstrated that the criteria’s processes for managing strategic change, centered on the CPE’s strategic planning framework, aligned considerably with scholarly theory. However, the link between the notion of process change and theory is not as direct. The characteristics of process change just noted resemble the change construct often addressed by organizational scholars, particularly those researchers who have defined the organizational development (OD) literature (see, for example, Bennis, Benne, and Chin 1962; Leavitt 1965, 1144–1170; Beckhard and Harris 1977; Beer 1980; Dyer 1981; Harrison 1989; McLennan 1989). Consider the following definition by Beer (1980, 10).

Organizational development is a system-wide process of data collection, diagnosis, action planning, intervention, and evaluation aimed at: (1) enhancing congruence between organizational structure, process, strategy, people, and culture; (2) developing new and creative organizational solutions; and (3) developing the organization’s self-renewing capacity.

Although the notion of change in the OD context displays some of the general character of strategic change (for example, change determination followed by change implementation), there are some important differences as well. First, unlike the notion of strategic change, change in the OD context tends to be driven by internal motivation to improve the existing relationship between various organizational elements (revisit Beer’s definition above). Usually, there is little analysis of the organization inside the larger market and competitive environment. Consequently, the view of organizational change in the OD context usually assumes a narrow perspective (Lundberg 1989, 61–82; Woodman 1989). Second, consistent with the clinical groundings of OD in behavioral and organizational psychology, change in the OD context usually follows a “diagnosis” of the current, usually dysfunctional, organizational situation (for example, Levinson 1972). The data generated from diagnosis provide the organization’s members with a better understanding of the organizational system (Alderfer 1976). Finally, once the pathology has been diagnosed, a change is determined and implemented through what is commonly termed an “intervention” (for example, Kotter 1978). The intervention changes organizational work and behavior so that the organization is better able to achieve its objectives.

Effective organizational diagnosis and intervention promote a state of congruence (Nadler and Tushman 1980) between organizational objectives and the structural variables for achieving those objectives. Consider, for example, an organization that is pursuing an external market focus. An organizational diagnosis might reveal that particular aspects of the organization’s structure do not support a market-focused orientation. For example, supervisors might be practicing command-and-control behavior that inhibits individuals from making fast, customer-oriented decisions. The reward system might encourage cost reduction rather than customer satisfaction. Service representatives may not know how to effectively interact with customers when a problem arises. Effective intervention would eliminate these dysfunctions and promote better congruence between the organization and its objectives.

In this context, it is possible to view Baldrige Award-based self-assessment as a process for achieving effective organizational diagnosis and intervention. Indeed, self-assessment appears to contribute to what has been labeled in the OD literature as “implementation theory” (Bennis 1966; Argyris 1970; Porras and Robertson 1987, 1–57). Implementation theory focuses on specific intervention activities necessary to carry out a change initiative.

The CPE can be viewed as supporting processes to manage both strategic change and process change. The processes for managing strategic change in the CPE have been well defined historically and align considerably with scholarly work (Ford and Evans 2000). A framework for managing process change is also present in the criteria, although it is less obvious. It can be articulated by coupling elements of the CPE to theoretical notions from organizational research. In the following sections a CPE-based framework for managing process change is developed.


The CPE are built upon a set of 11 core values and concepts (Table 2). The core values serve as the foundation for substantive requirements of the criteria (NIST 2000, 2). Evans and Ford (1997) found evidence to support this claim. Using an earlier version of the CPE, the researchers found that the criteria’s substantive content significantly reflected the core values and concepts. The core value of managing for innovation, introduced in the 2000 criteria, suggests a strong change management orientation. In the Baldrige Award context, innovation relates to “making meaningful change to improve an organization’s products, services, and processes… Organizations should be structured in such a way that innovation becomes part of the culture and daily work” (NIST 2000, 4). The criteria also note that, although many people associate the notion of innovation with technology, “[innovation] is applicable to all key organizational processes that would benefit from breakthrough improvement and/or change” (NIST 2000, 4). This core value supports the practice of Baldrige Award-based self-assessment, since self-assessments frequently drive improvement of key managerial processes that affect organizational performance. As noted, this type of change is labeled “process” change.

How do the substantive requirements of the CPE encourage or specify process change? The criteria are focused primarily on strategic change. Since their inception, the CPE have gradually evolved from a tactical focus on quality improvement toward a focus on developing and implementing strategic change, which is central to the CPE framework (Ford and Evans 2000). Although process changes may be realized in the CPE model via the deployment of action plans, strategic change processes seem inappropriate for managing the development and implementation of most process changes. Process changes are narrow in scope (Table 1); they lack the broad, organizationwide imperative that befits change installed via the CPE’s strategic planning process. Nevertheless, process changes are important to an organization’s future growth and effectiveness.

A different CPE requirement provides structure for managing process change. The requirement, termed “organizational performance review,” appears under the leadership category of the criteria. The intent of the organizational performance review is

To cover all areas of performance, thereby providing a picture of the “state of health” of your organization. This includes not only how well you are currently performing, but also how well you are moving towards the future. It is anticipated that the review findings will provide a reliable means to guide both improvement and change. (NIST 2000, 30)

According to the CPE, the organization’s senior leadership is expected to conduct the performance review. Leadership involvement in organizational processes has been a historical theme of the CPE, and it reflects a long-standing core value of the framework (Table 2). The substantive questions that organizations must answer in response to this item appear in Table 3.

A distinguishing feature of the organizational performance review is its “big picture” perspective. Rather than focusing on the status of a particular strategic initiative, the organizational review encourages managers to step back from operations and assess the overall state of the organization and its future viability. The broad perspective of the organizational review sets it apart from the conventional diagnostic managerial control systems (Anthony 1965; Merchant 1985) that contribute to the effective achievement of strategic change.

The organizational performance review item provides an outlet for using the findings of Baldrige Award-based self-assessments to initiate change. Self-assessments generate process-oriented feedback that enables managers to obtain a snapshot of how well organizational processes are functioning and of where process improvements can be made. Such feedback should be valuable in the conduct of the organizational performance review, since the intent of the organizational review is to provide a reliable guide for both improvement and change (NIST 2000, 30).

The CPE specify that the data used in the organizational performance review stem from information that comprises the business results category of the criteria (NIST 2000, 10). Business results encompass a number of areas, including those related to customers, financial performance, human resources, suppliers/partners, and other operational results indicative of organizational effectiveness (NIST 2000, 24–26). Since interventions stemming from Baldrige Award-based self-assessments are focused on improving management processes, it is likely that the business results category related to organizational effectiveness would be highly reflective of self-assessment activity. Organizational effectiveness results are intended to capture measures or indicators of production and support processes, as well as progress toward accomplishing key organizational performance goals (NIST 2000, 26). Organizational effectiveness results would likely include key measures of accomplishing the process change resulting from self-assessment.

The information and analysis category provides the conduit for connecting business results with the organizational performance review. The information and analysis category examines the organization’s performance measurement system and how the organization utilizes performance data and information (NIST 2000, 16). Analysis provides a basis for effective decisions and helps managers prioritize and select opportunities for improvement, as well as monitor the effectiveness of change relative to strategic objectives (NIST 2000, 27). The criteria require the provision of information and analysis to support the senior executives’ performance review, particularly the effectiveness of the analysis in addressing the overall health of the organization (NIST 2000, 17). Baldrige Award-based self-assessments can be viewed as the analytical tool that generates the process-based findings used in the conduct of the organizational performance review.

To summarize, the CPE, through the organizational performance review, information and analysis, and business results items, provide a general framework for managing process change. The criteria, however, lack a specific approach for managing diagnostic activity and ensuring the successful implementation of interventions. The authors suggest that Baldrige Award-based self-assessments provide a substantive, diagnostic component for the organizational performance review. Self-assessments generate the findings with which senior managers conduct the organizational review. Before specifying a Baldrige Award-based model for managing process change, theoretical perspectives are sought to lend content validity to the prospective model.


Three literature streams provide useful theoretical insight upon which a model based on self-assessment and the CPE can be based. These literatures relate to organizational assessment, organizational learning, and organizational change.

Organizational Assessment

In the organizational context, assessment refers to the process of measuring the effectiveness of an organization from the behavioral or social-system perspective (Lawler, Nadler, and Cammann 1980, 9). A distinguishing feature of organizational assessments is a holistic perspective. True to the tenets of systems theory (Forrester 1958; Churchman 1968; Ackoff and Emery 1974) and to the widely noted linkage between the organization and its environment (Lawrence and Lorsch 1967), the unit of analysis is the organizational system and its relationship to performance, rather than myopic focus on the system’s individual moving parts.

An important factor in organizational assessments is the model (or models) upon which the evaluation is based. Models help answer an important question: What gets assessed? Hausser (1980, 132–161) noted that all organizational assessments employ models, regardless of whether the model is explicit or not. Models used in organizational assessments are primarily models of organizational behavior and functioning (Nadler 1980, 119–131). An assessment model includes constructs that reflect aspects of organizational behavior, and at least one construct that reflects an outcome or “effectiveness” measure.

Many aspects of organizations can be assessed. Examples include culture (Schein 1985; Kilmann 1985, 351–369; Kotter and Heskett 1992), work and group behavior (Hackman and Oldham 1975; Hackman 1987, 315–342; Ancona 1990), and politics (Pfeffer 1981; Enz 1989). Each organizational aspect, of course, may require a different model.

In the context of this work, the authors are primarily interested in models used in the assessment of overall organizational effectiveness. Drawing from the intent of the organizational performance review item defined in the criteria, the managerial questions may be several, including the following:

    • How healthy are we as an organization?
    • Are our organizational processes capable of achieving long-term objectives?
    • How can we improve the structure of our organization?

An effective model that assesses organizational effectiveness helps answer such questions. Scholars have developed a variety of organizational effectiveness models. (See Cameron and Whetten 1983 for some frameworks.) The variation among these models is considerable; the model’s content depends mainly on the developer’s perspective of just what constitutes “organizational effectiveness” (Cameron 1986). Consider, for example, the number of possible ways that organizational effectiveness can be measured. Dozens of organizational effectiveness indicators have been observed or proposed. (See Goodman and Pennings 1977 for a review.)

Consistent with Nadler’s (1980, 119–131) general depiction of an assessment model, the CPE framework reflects a model of organizational functioning. The criteria contain several constructs that reflect processes of organizational behavior (leadership, strategic planning, customer and market focus, information and analysis, human resource focus, and process focus), and a construct that reflects the output of those processes (business results). Although the CPE framework has been empirically derived from the collective wisdom of practitioners, it resembles some of the organizational effectiveness frameworks found in the literature. For example, Day and Wensley (1988) proposed an organizational effectiveness model based on concepts of customer focus and sustainable competitive advantage that closely parallels the CPE model. Thus, the CPE framework can be viewed as an empirically derived model for assessing organizational effectiveness.

Organizational Learning

Self-assessment encourages organizational learning. In its basic form, organizational learning can be viewed as the organization’s detection and correction of error, where error is a mismatch between the organization’s intentions and what really happened (Argyris 1989, 5). Learning that results in improved organizational performance usually derives from experience or action (Fiol and Lyles 1985; Huber 1991; Nevis, DiBella, and Gould 1995, 73). In this sense, organizations learn by doing. Although recent work appears to have elevated the status of the “learning organization” (Senge 1990), many believe that all organizations engage in collective learning as work progresses (Child and Kieser 1981, 28–64; Schein 1993).

A fundamental concept in the literature is the distinction between single- and double-loop levels of learning (Argyris and Schon 1978). Single-loop learning is the most common level of learning, and encompasses the organization’s ability to detect deviations from expected performance and “fix” them. Single-loop learning occurs, for example, when managers detect a problem in implementing a specific strategic initiative and then take action to correct the deviation. Most diagnostic management control systems (Anthony 1965; Lorange and Scott Morton 1974; Otley and Berry 1980; Merchant 1985; Simons 1995) exhibit single-loop learning characteristics.

Double-loop learning is more sophisticated, since the organization must review the underlying assumptions that created the problem to be “fixed” in the first place, and adopt a better set of assumptions to support future performance. Extending the earlier example, in addition to correcting the immediate deficit from plan, managers might also step back and question the assumptions and policies that led them to believe that the strategic initiative was effective and could be implemented. Such questioning might cause them to revise some key organizational processes, such as leadership processes or processes for understanding market behavior, so that future strategic initiatives will be more effectively implemented. Diagnosis of assumptions and processes, and subsequent improvements, characterize double-loop learning.

Organizational learning is enhanced when people gather for dialogue, which is defined as a sustained collective inquiry into the processes, assumptions, and certainties that compose everyday experience (Isaacs 1993). An important consequence of effective organizational dialogue is the reduction of “defensive routines” (Argyris 1985). A defensive routine is a policy, practice, or action that prevents people involved in a group activity from being embarrassed or threatened, and, at the same time, prevents people from learning how to reduce the causes of embarrassment or threat.

Defensive routines can adversely impact the organization’s ability to implement large-scale strategic change (Janis 1989). Argyris (1989) reported the results of an experiment where executives learned to overcome defensive routines by engaging in dialogue that forced the managers to articulate their underlying assumptions and reservations about particular strategic decisions to be implemented in their organizations. After the experiment, most managers reported that their strategies were being implemented more effectively.

The organizational performance review item in the CPE encourages double-loop learning. The organizational performance review requires managers to assess the organization from a holistic perspective, where the focus is on the evaluation of processes that produce results. The review process encourages dialogue about the assumptions behind current structural decisions. Consequently, defensive routines that hinder good decision making are better understood and likely reduced.

Note that Baldrige Award-based self-assessments enhance the double-loop learning process, since the CPE framework provides specific guidance on problematic processes that require improvement.

Organizational Change

How do self-assessment and organizational learning translate into organizational change? The linkage between self-assessment and organizational change is not well understood (Ford 2000); however, some general concepts of change developed in the organizational literature can be drawn upon.

Change has been commonly portrayed as moving through three phases. Lewin (1958) called these phases: unfreezing, movement, and refreezing. Beckhard and Harris (1977) proposed a conceptually similar model, referring to three organizational states: the current state, the transition state, and the future state.

A large portion of change management research addresses the initial phase. In the organizational literature, much has been written about diagnosing the current state of the organization and determining the appropriate change based on the diagnosis (Levinson 1972; Alderfer 1976; Beckhard and Harris 1977; Harrison and Shirom 1999). Considerably less is known about the other two phases of change, which deal primarily with implementation.

Scholars have developed a wealth of frameworks to model the organizational change process. Some of the more prominent organizational models used in the context of change management include the following: Weisbord’s (1976) six box model; Nadler and Tushman’s (1980) congruence model; the McKinsey seven-S model (Peters and Waterman 1982); Tichy’s (1983) change framework and TPC (technical, political, cultural) matrix; and the Burke-Litwin model (Burke and Litwin 1992).

Few change models have been rigorously validated (Nadler 1980, 119–131). Rather, the validity of most change models has been tested less formally, usually through consultant-based applications with clients and through comparison to empirical observation (for example, Weisbord 1976; Tushman and O’Reilly 1997; Burke and Litwin 1992; Kotter 1995). The plethora of change models in use suggests that there are many viewpoints of how organizational change is achieved. The success with which a number of these models have been employed by scholars and consultants in empirical situations suggests that a number of these viewpoints are valid.

One commonality among these change models is a teleological perspective of organizational change. Van de Ven and Poole (1995) suggested the teleological perspective as one of four categories of change process theory. Teleological change theory posits that organizations change through an iterative process of goal setting, implementation, evaluation, and revision. The unit of analysis is the individual organization. Unlike other theories of change that cast the environment as the dominant change force (Alchian 1950; Hannan and Freeman 1977; Aldrich 1979), teleology represents change as a deliberate undertaking by individuals affiliated with the organization.
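The teleological cycle described above can be sketched as a simple iterative loop. The following minimal sketch is illustrative only; the function names, numbers, and stopping rule are the authors-of-this-sketch's assumptions, not part of teleological theory or the CPE.

```python
# Illustrative sketch of the teleological change cycle: goal setting,
# implementation, evaluation, and revision. All names and numbers are
# hypothetical; only the loop structure mirrors the theory.

def teleological_cycle(goal, perform, evaluate, revise, max_cycles=5):
    """Iterate goal -> implement -> evaluate -> revise until the goal is met."""
    history = []
    for _ in range(max_cycles):
        result = perform(goal)        # implementation
        gap = evaluate(goal, result)  # evaluation against the goal
        history.append((goal, result, gap))
        if gap <= 0:                  # goal achieved; deliberate change ends
            break
        goal = revise(goal, gap)      # revision by organizational actors
    return history

# Toy usage: close a performance gap of 10 units, improving up to 4 per cycle.
log = teleological_cycle(
    goal=10,
    perform=lambda g: min(g, 4),   # each cycle delivers at most 4 units
    evaluate=lambda g, r: g - r,   # remaining gap after implementation
    revise=lambda g, gap: gap,     # next cycle targets the remaining gap
)
# After three cycles the gap reaches zero.
```

The point of the sketch is that change is driven by deliberate goal setting and revision inside the loop, not by environmental selection acting on the organization from outside.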

The CPE framework contains components of teleological change theory. Most visibly, these components are reflected in the CPE’s overall orientation as a framework for developing and implementing strategic objectives (see Ford and Evans 2000 for an overview). From the teleological perspective, the management of large-scale strategic change can be viewed as a two-step model of moving from strategy development to strategy implementation. The narrower case of process change can be viewed in a similar fashion: in the organizational development (OD) paradigm, a change is determined through diagnosis and then implemented via an intervention.

At a fundamental level, managing change requires two basic categories of structured activities: one set for determining the change and one set for implementing it. Depending on the type of change, for example, a strategic change versus a process change (Table 1), the structures that embody these two general constructs likely differ. Precisely how the constructs would differ depending on the scope of the change is an opportunity for future research.

Self-assessment can contribute to both the change determination and change implementation categories of change process. Regarding change determination, self-assessment provides information that leads to effective diagnosis of organizational problems. If the self-assessment is done using the Baldrige Award criteria, the information largely points to key managerial processes that can be improved via process change. The appropriate changes, or interventions, can then be determined.

Regarding change implementation, self-assessment can be viewed as enhancing managerial control (Ford 2000). One problem that hinders the effectiveness of diagnostic managerial control systems is that many processes requiring control are difficult to measure or quantify (Simons 1995). Monitoring and controlling a change in culture, for example, has been notoriously difficult, due in part to the lack of objective—particularly quantitative—data that can be generated about the change’s status. Self-assessment provides objective information on the status of current process interventions, perhaps initiated from a previous assessment cycle, which may be difficult to capture using other means. For example, routine Baldrige Award-based assessments can update managers on the status of an initiative to implement remedial training programs for service personnel, or to realign the reward system with the organization’s new strategy. If the assessment findings suggest that the implementation is not proceeding effectively, then managers can follow up. Effectively following up on useful information allows managers to “keep things on track,” which is the hallmark of diagnostic managerial control systems (Merchant 1985; Simons 1995).

Self-assessment findings might also lead to longer-term improvement in change implementation by providing managers with insight on the effectiveness of the organization’s change process itself. For example, a Baldrige Award-based self-assessment might reveal that the various functional areas, such as production, research and development, and customer service, have varied views on the objectives of the organization and the importance of the current strategic thrust. Such feedback might encourage managers to deeply examine the organization’s general processes for communicating change initiatives; substantive changes made to the general communication process will affect the implementation of not only the current change initiative, but future initiatives as well.


It has been suggested that the CPE framework specifies both strategic change (well defined) and process change (weakly defined) in its substantive requirements. The two fundamental constructs of change determination and change implementation allow elaboration on the relationships between strategic change and process change in the context of the criteria.

In Table 4, key points from the literature are highlighted and connected to relevant elements of the CPE. The CPE requirements linked to strategic planning describe a model for creating, deploying, and monitoring strategic change in organizations, as shown in Figure 1. Organizational performance review provides the initial direction and catalyst; strategic planning and deployment provide the implementation process; and measurement and analysis of performance provide the means of monitoring and control. Organizational performance review is linked to both strategic planning and deployment, since findings from the performance review can be applied both to change the trajectory of strategy implementation and to revise strategic plans. The strategic planning category represents the core process for developing and implementing a large-scale strategic initiative in the organization. For example, the strategic planning category and its linkages to other CPE categories provide the structure for installing a particular change. This structure facilitates single-loop learning, since the change implementation structure provides for the detection of a deviation from plan and its subsequent correction.

Although the organizational performance review item provides a useful element of the single-loop, change-specific strategic change process portrayed in Figure 1, the real value of this item may lie in its potential to provide a “big picture” assessment of the performance management system as a whole and its ongoing capability to achieve results. In effect, it requires managers to ask, “How do we change in general?” Such inquiry facilitates the determination of whether the organization has the processes in place to adapt sufficiently to current and, perhaps, future environmental demands.

Figure 2 presents a conceptual model of a process change management structure, using items derived from an analogy with Figure 1. This model refines the “learning cycle” concept in the criteria, links it more clearly to areas to address (item 1.1b), and provides a construct suitable for further study and empirical research. Specifically, organizational performance results, particularly those related to organizational effectiveness, are used by senior managers in a self-assessment mode. The self-assessment is a “big picture” evaluation of how the organization, through its management processes, is achieving its long-term objectives. Managers learn about any assumptions and design flaws in current processes that are impeding the organization from achieving success. For the organization to better adapt in the future, the fundamental processes that derive and implement change must be adjusted.

The answers obtained from self-assessment can facilitate double-loop learning. Managers engage in dialogue as part of the organizational review to understand fundamental organizational processes and the impediments to achieving long-term results. Changes to these processes, such as improved leadership activities, better customer listening posts, or a more efficient value chain, reflect a revised managerial perspective—a change in the assumptions and policies that underpin the design of processes that produce results.
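The contrast between the single-loop correction described for Figure 1 and the double-loop revision described here can be illustrated with a thermostat-style sketch in the spirit of Argyris and Schon (1978). The class names, targets, and revision rule below are hypothetical illustrations, not a specification from the CPE or the learning literature.

```python
# Illustrative contrast between single-loop and double-loop learning.
# Single loop: detect a deviation from a fixed target and correct it.
# Double loop: also question and revise the target (the governing assumption).
# All classes and numbers are hypothetical.

class SingleLoopController:
    """Single-loop learning: the target itself is never questioned."""
    def __init__(self, target):
        self.target = target

    def corrective_action(self, observed):
        # "Keep things on track": act in proportion to the deviation from plan.
        return self.target - observed

class DoubleLoopController(SingleLoopController):
    """Double-loop learning adds a review that can revise the assumption."""
    def big_picture_review(self, long_term_results):
        # Persistent shortfalls prompt revision of the target itself,
        # here (arbitrarily) to the mean of observed results.
        if all(r < self.target for r in long_term_results):
            self.target = sum(long_term_results) / len(long_term_results)

single = SingleLoopController(target=100)
assert single.corrective_action(90) == 10  # close the gap; target unquestioned

double = DoubleLoopController(target=100)
double.big_picture_review([70, 75, 65])    # review revises the target to 70.0
```

The design point is that the double-loop controller contains the single loop; the organizational performance review adds the outer loop that can change the assumptions the inner loop runs on.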

The models in Figures 1 and 2 are linked by the relationship between the organizational performance review item and diagnostic self-assessment. Findings from the organizational performance review may provide insight into the organization’s ability to manage change, which can inform the diagnostic self-assessment effort. Self-assessment, in turn, provides information on the status of change initiatives and on the effectiveness of the organization’s change process in general, information that is useful for effective strategic management. This change management information could be made available to the strategic management process if self-assessment findings were reviewed as part of the organizational performance review. Such information exchange between the organizational performance review item and diagnostic self-assessment of the change process helps connect processes for managing strategic change with those for managing process change.

Such a framework offers some advantages over other models of change. Unlike most of the abstract models proposed in the literature, the constructs of the CPE model have been detailed by the substantive CPE requirements. Since the CPE represent the collective wisdom of practitioners, the model garners considerable face validity among managers. Specificity and face validity are important characteristics of useful assessment models (Nadler 1980, 119–131).

A unique feature of a CPE-based change model is that it considers change management at three levels of analysis. The measurement and analysis of organizational performance provides a framework to assess priorities for change in terms of their relevance to, and impact on, the organization. The substantive requirements for strategy development and deployment encourage managers to evaluate activities during a specific change. Such strategic controls help managers to monitor the implementation of large-scale change, to diagnose specific problems during installation, and to effectively correct significant deviations between performance and plan. The organizational performance review prompts a higher-level assessment—managers must evaluate the organization’s overall ability to achieve change. This “big picture” evaluation is consistent with the classic perspective of organizational assessment (Lawler, Nadler, and Cammann 1980).


In this investigation, the link between the Baldrige Award criteria and organizational structure for managing change was explored. It was concluded that the CPE encourages managers to engage in a self-assessment of the organization’s ability to change and adapt. Self-assessment encourages organizational learning, particularly of the double-loop variety, which results in the adjustment of the processes used by the firm to develop and implement strategic change. Indeed, self-assessment contributes to Adler’s (1999) notion of “enabling bureaucracy,” since self-assessment provides structure that supports and improves organizational processes.

While organizational learning is a core value whose importance is clearly reflected in the criteria scoring guidelines, the criteria do little to explicitly require an effective approach and deployment. Change management capabilities embedded in the Baldrige Award criteria should be made more explicit at the process-change level. Although the organizational performance review item prompts managerial assessment of overall performance, the language of the item could be oriented more toward evaluating the organization’s ability to change. The question “How are [performance review findings] deployed throughout your organization, and, as appropriate, to your suppliers/partners and key customers to ensure organizational alignment?” (NIST 2000, 10) does not clearly articulate the importance of change management. To strengthen the criteria in this respect, the authors suggest changing the title of item 1.1b from Organizational Performance Review to Organizational Performance Review and Managing Change, and adding the following diagnostic question:

How do senior leaders effectively manage organizational change to promote organizational learning and appropriate improvements to the organization’s management infrastructure?

In the accompanying notes to the criteria item, the authors also suggest a reference to self-assessment as one of the possible means for generating diagnostic information that could contribute to developing effective change initiatives and for monitoring the implementation of those initiatives. These additions would strengthen the CPE’s application as a diagnostic self-assessment tool, and provide clearer guidance to senior leadership toward improvement efforts.

From a research standpoint, scholars have been slow to embrace both the CPE and the phenomenon of self-assessment. Van der Wiele et al.’s (2000) recent investigation of self-assessment practices and performance constitutes a rare formal study of these two areas. Their findings suggested that self-assessment helps managers focus on process strengths and areas for improvement, and that performance benefits may accrue to organizations that practice self-assessment routinely. But little is known about how self-assessment actually focuses management’s attention or how subsequent improvements occur. Latham’s (1997) findings suggested that the characteristics of the feedback report to the client organization are an important factor in the effectiveness of a Baldrige Award-based self-assessment. Ford (2000) proposed that the defining element of the self-assessment process is the venue, usually a meeting, where managers gather to analyze and discuss self-assessment findings. Such a meeting might encourage information sharing, consensus building, and organizational learning, perhaps by reducing the defensive routines (Argyris 1985) that impede effective execution of planned change. Since organizations vary widely in how they conduct Baldrige Award-based self-assessments (for example, some with feedback reports and some without; some with review meetings and some without), comparative studies should be possible to better pin down the salient variables of the self-assessment process.

Early research and anecdotal evidence suggest that Baldrige Award-based self-assessment findings can be applied toward improving managerial processes, toward planning future change initiatives, or perhaps even toward correcting the trajectory of change initiatives in progress. These plausible outcomes of self-assessment require elaboration and verification. Case studies that identify outcomes of self-assessment events and follow those outcomes through the organization would be useful in shedding light on the mechanics of the self-assessment process and on how its outcomes feed into other organizational processes for managing change.

Finally, the conditions under which self-assessment might be practiced should be investigated. Van der Wiele et al.’s (2000) findings suggest, for instance, that smaller organizations may not practice self-assessment to the degree that larger organizations do. Ford (2000) suggested other possible antecedents, such as the perceived value of the assessment model, factors present in the organization that promote learning, and management involvement. Research that studies the salient properties of the self-assessment process, its downstream consequences, and the conditions that encourage the practice of self-assessment should put scholars and managers on a path toward better theory and practice.


While this paper was being prepared, the Baldrige Award criteria were undergoing their annual revision. Coincidentally, the following question was added to the new Organizational Profile (formerly called the Business Overview) in the 2001 Criteria:

How do you maintain an organizational focus on performance improvement? Include your approach to systematic evaluation and improvement of key processes and to fostering organizational learning and knowledge sharing.

This question is intended to help the applicant and examiners set a context for the applicant’s approach to performance improvement, which is an assessment dimension used in the scoring system to evaluate the maturity of organizational approaches and their deployment. However, it is not part of the actual criteria and hence is not directly included in the scoring or comment evaluation, as the authors have suggested here.


Ackoff, R. L., and F. Emery. 1974. On purposeful systems. Chicago: Aldine-Atherton.

Alchian, A. A. 1950. Uncertainty, evolution, and economic theory. Journal of Political Economy 58, no. 2:211–221.

Adler, P. S. 1999. Building better bureaucracies. Academy of Management Executive 13, no. 4:36–47.

Alderfer, C. P. 1976. Boundary relations and organizational diagnosis. In Humanizing organizational behavior, edited by L. Meltzer and F. Widkert. Springfield, Ill: Charles C. Thomas.

Aldrich, H. E. 1979. Organizations and environments. Englewood Cliffs, N.J.: Prentice-Hall.

Ancona, D. G. 1990. Outward bound: Strategies for team survival in organizations. Academy of Management Journal 33, no. 2:334–365.

Andrews, K. 1971. The concept of corporate strategy. Homewood, Ill.: Irwin.

Ansoff, H. I. 1965. Corporate strategy: An analytical approach to business policy for growth and expansion. New York: McGraw Hill.

Anthony, R. N. 1965. Planning and control systems: A framework for analysis. Boston: Division of Research, Graduate School of Business Administration, Harvard University.

Argyris, C. 1970. Intervention theory and method: A behavioral science view. Reading, Mass.: Addison-Wesley.

—. 1985. Strategy, change, and defensive routines. Boston: Pitman.

—. 1989. Strategy implementation: An experience in learning. Organizational Dynamics 18, no. 2:5–15.

Argyris, C., and D. A. Schon. 1978. Organizational learning: A theory of action perspective. Reading, Mass.: Addison-Wesley.

Beckhard, R., and R. Harris. 1977. Organizational transitions: Managing complex change. Reading, Mass.: Addison-Wesley.

Beer, M. 1980. Organization change and development: A systems view. Santa Monica, Calif.: Goodyear Publishing Company.

Bennis, W. G. 1966. Changing organizations. New York: McGraw Hill.

Bennis, W. G., K. D. Benne, and R. Chin. 1962. The planning of change. New York: Holt, Rinehart & Winston.

Blazey, M. L. 1998. Insights into organizational self-assessments. Quality Progress 31, no. 10:47–52.

Brereton, M. 1996. Introducing self-assessment—One of the keys to business excellence. Management Services 40, no. 2:22–23.

Burke, W. W., and G. H. Litwin. 1992. A causal model of organizational performance and change. Journal of Management 18, no. 3:523–545.

Cameron, K. S. 1986. Effectiveness as paradox: Consensus and conflict in conceptions of organizational effectiveness. Management Science 32, no. 5:539–553.

Cameron, K. S., and D. A. Whetten, editors. 1983. Organizational effectiveness: A comparison of multiple models. New York: Academic Press.

Caravatta, M. 1997. Conducting an organizational self-assessment using the 1997 Baldrige Award criteria. Quality Progress 30, no. 10:87–91.

Child, J., and A. Kieser. 1981. Development of organizations over time. In Handbook of organizational design, edited by N. C. Nystrom and W. H. Starbuck. Oxford: Oxford University Press.

Churchman, C. W. 1968. The systems approach. New York: Delacorte Press.

Day, G. S., and R. Wensley. 1988. Assessing advantage: A framework for diagnosing competitive superiority. Journal of Marketing 52, no. 2:1–20.

Dyer, W. G. 1981. Selecting an intervention for organization change. Training and Development Journal 35, no. 4:62–68.

Enz, C. 1989. The measurement of perceived intraorganizational power: A multi-respondent perspective. Organization Studies 10, no. 2:241–251.

Evans, J. R. 1997. Critical linkages in the Baldrige Award criteria: Research models and educational challenges. Quality Management Journal 5, no. 1:13–30.

Evans, J. R., and M. W. Ford. 1997. Value-driven quality. Quality Management Journal 4, no. 4:19–31.

Finn, M., and L. J. Porter. 1994. TQM self-assessment in the UK. TQM Magazine 6, no. 4:56–61.

Fiol, C. M., and M. A. Lyles. 1985. Organizational learning. Academy of Management Review 10, no. 4:803–813.

Ford, M. W. 2000. A model of change process and its use in self-assessment. Unpublished doctoral dissertation. University of Cincinnati.

Ford, M. W., and J. R. Evans. 2000. Conceptual foundations of strategic planning in the Malcolm Baldrige Award criteria for Performance Excellence. Quality Management Journal 7, no. 1:8–26.

Forrester, J. W. 1958. Industrial dynamics: A major breakthrough for decision makers. Harvard Business Review 36, no. 4:37–66.

Fountain, M. 1998. The target assessment model as an international standard for self-assessment. Total Quality Management 9, no. 4:S95–S99.

Goodman, P. S., and J. M. Pennings, editors. 1977. New perspectives on organizational effectiveness. San Francisco: Jossey-Bass.

Hackman, J. R. 1987. The design of work teams. In Handbook of organizational behavior, edited by J. Lorsch. Englewood Cliffs, N.J.: Prentice Hall.

Hackman, J. R., and G. R. Oldham. 1975. Development of the job diagnostic survey. Journal of Applied Psychology 60, no. 2:159–170.

Hannan, M. T., and J. Freeman. 1977. The population ecology of organizations. American Journal of Sociology 82, no. 5:929–964.

Harrison, M. I. 1989. Diagnosis and planned organizational change. Journal of Management Consulting 5, no. 4:34–42.

Harrison, M. I., and A. Shirom. 1999. Organizational diagnosis and assessment: Bridging theory and practice. Thousand Oaks, Calif.: Sage Publications.

Hausser, D. L. 1980. Comparison of different models for organizational analysis. In Organizational assessment: Perspectives on the measurement of organizational behavior and the quality of working life, edited by E. E. Lawler III, D. A. Nadler, and C. Cammann. New York: Wiley.

Herrington, M. 1994. Why not a do-it-yourself Baldrige Award? Across the Board 31, no. 9:34–38.

Hofer, C., and D. Schendel. 1978. Strategy formulation: Analytic concepts. St. Paul, Minn.: West.

Huber, G. P. 1991. Organizational learning: The contributing processes and the literatures. Organization Science 2, no. 1:88–115.

Isaacs, W. N. 1993. Taking flight: dialogue, collective thinking, and organizational learning. Organizational Dynamics 22, no. 2:24–39.

Janis, I. L. 1989. Crucial decisions. New York: The Free Press.

Jordan, D. W. 1994. Using the Baldrige Award criteria for self-assessment. Engineering Management Journal 6, no. 2:16–19.

Kilmann, R. 1985. Five steps for closing culture gaps. In Gaining control of corporate culture, edited by R. Kilmann, M. Saxton, R. Serpa & Associates. San Francisco: Jossey-Bass.

Knutton, P. 1994. A model approach to self-assessment. Works Management 47, no. 12:12–16.

Kotter, J. P. 1978. Organization dynamics: Diagnosis and intervention. Reading, Mass.: Addison-Wesley.

—. 1995. Leading change: Why transformation efforts fail. Harvard Business Review 73, no. 2:59–67.

Kotter, J. P., and J. L. Heskett. 1992. Corporate culture and performance. New York: The Free Press.

Latham, J. R. 1997. From strategy to results: Understanding, evaluating, and improving organizations through self-assessment using non-prescriptive criteria. Unpublished doctoral dissertation. Walden University.

Lawler, E. E. III, D. A. Nadler, and C. Cammann, editors. 1980. Organizational assessment: Perspectives on the measurement of organizational behavior and the quality of work life. New York: John Wiley & Sons.

Lawrence, P. R., and J. W. Lorsch. 1967. Organization and environment. Boston: Division of Research, Graduate School of Business Administration, Harvard University.

Leavitt, H. J. 1965. Applied organizational change in industry. In Handbook of organizations, edited by J. G. March. New York: Rand McNally.

Levinson, H. 1972. Organizational diagnosis. Cambridge, Mass.: Harvard University Press.

Lewin, K. 1958. Group decisions and social change. In Readings in Social Psychology, edited by E. E. Maccoby, T. M. Newcomb, and E. L. Hartley. New York: Holt, Rinehart & Winston.

Lorange, P., and M. S. Scott Morton. 1974. A framework for management control systems. Sloan Management Review 16, no. 1:41–56.

Lundberg, C. C. 1989. On organizational learning: Implications and opportunities for expanding organizational development. In Research in organizational change and development, vol. 3, edited by R. W. Woodman, and W. A. Pasmore. Greenwich, Conn.: JAI Press.

Markels, A. 1999. The wisdom of Chairman Ko. Fast Company Magazine (November): 259–276.

McLennan, R. 1989. Managing organizational change. Englewood Cliffs, N.J.: Prentice Hall.

Merchant, K. A. 1985. Control in business organizations. Marshfield, Mass.: Pitman.

Meyers, D. H. and J. Heller. 1995. The dual role of AT&T’s self-assessment process. Quality Progress 28, no. 1:79–83.

Nabitz, U., G. Quaglia, and P. Wangen. 1999. EFQM’s new excellence model. Quality Progress 32, no. 10:118–120.

Nadler, D. A. 1980. Role of models in organizational assessment. In Organizational assessment: Perspectives on the measurement of organizational behavior and the quality of working life, edited by E. E. Lawler III, D. A. Nadler, and C. Cammann. New York: Wiley.

Nadler, D. A., and M. L. Tushman. 1989. Organizational frame bending: Principles for managing reorientation. Academy of Management Executive 3, no. 3:194–204.

—. 1980. A model for diagnosing organizational behavior: Applying the congruence perspective. Organizational Dynamics 9, no. 2:35–51.

Nevis, E. C., A. J. DiBella, and J. M. Gould. 1995. Understanding organizations as learning systems. Sloan Management Review (winter): 73–85.

National Institute of Standards and Technology (NIST). 1998. State quality award statistics.

—. 2000. Criteria for performance excellence. Gaithersburg, Md.: U.S. Department of Commerce.

Otley, D. T., and A. J. Berry. 1980. Control, organization, and accounting. Accounting, Organizations and Society 5, no. 2:231–246.

Peters, T. J., and R. H. Waterman, Jr. 1982. In search of excellence. New York: Harper & Row.

Pfeffer, J. 1981. Power in organizations. Marshfield, Mass.: Pitman.

Porras, J. I., and P. J. Robertson. 1987. Organization development theory: A typology and evaluation. In Research in organizational change and development, vol. 1, edited by R. W. Woodman and W. A. Pasmore. Greenwich, Conn.: JAI press.

Prybutok, V. R., and A. Spink. 1999. Transformation of a health care information system: A self-assessment survey. IEEE Transactions on Engineering Management 46, no. 3:299–310.

Schein, E. H. 1985. Organizational culture and leadership. San Francisco: Jossey-Bass.

—. 1993. How can organizations learn faster? The challenge of entering the green room. Sloan Management Review 34, no. 2:84–92.

Senge, P. M. 1990. The fifth discipline. New York: Doubleday.

Simons, R. 1995. Levers of control: How managers use innovative control systems to drive strategic renewal. Boston: Harvard Business School Press.

Tichy, N. M. 1983. Managing strategic change: Technical, political, and cultural dynamics. New York: Wiley.

Tushman, M. L., and C. A. O’Reilly, III. 1997. Winning through innovation: A practical guide to leading organizational change and renewal. Boston: Harvard Business School Press.

Van de Ven, A. H., and M. S. Poole. 1995. Explaining development and change in organizations. Academy of Management Review 20, no. 3:510–540.

Van der Wiele, T., A. Brown, R. Millen, and D. Whelan. 2000. Improvement in organizational performance and self-assessment practices by selected American firms. Quality Management Journal 7, no. 4:8–22.

Weisbord, M. R. 1976. Organizational diagnosis: Six places to look for trouble with or without a theory. Group and Organization Studies 1, no. 4:430–447.

Woodman, R. W. 1989. Organizational change and development: New arenas for inquiry and action. Journal of Management 15, no. 2:205–228.

Wu, H., H. A. Wiebe, and J. Politi. 1997. Self-assessment of total quality management programs. Engineering Management Journal 9, no. 1:25–31.

Zaremba, D., and T. Crew. 1995. Increasing involvement in self-assessment: The Royal Mail approach. TQM Magazine 7, no. 2:29–32.


Matthew W. Ford is a visiting assistant professor of operations management and entrepreneurship at the University of Cincinnati. He received his Ph.D. in operations management from the University of Cincinnati. His research interests include strategic operations, quality management, corporate entrepreneurship, and the implementation and control of change. Prior to receiving his doctorate, Ford served as corporate quality systems manager for a large U.S. manufacturer, where his duties included the design and management of a Baldrige Award-based self-assessment program for the company’s operating divisions.

James R. Evans is professor of quantitative analysis and operations management and director of the Total Quality Management Center in the College of Business Administration at the University of Cincinnati. He holds a Ph.D. in industrial and systems engineering from Georgia Tech, has published over 70 papers in academic research journals, and has served on numerous journal editorial boards, including, currently, Quality Management Journal. He is lead author of The Management and Control of Quality (5th edition) and Total Quality: Management, Organization, and Strategy (2nd edition), as well as other books on management science, operations management, simulation and risk analysis, statistics, and creative thinking. Evans served as an examiner for the Malcolm Baldrige National Quality Award from 1994–1996, senior examiner from 1997–1999, and alumni examiner in 2000; is a senior examiner and member of the training team for the Ohio Award for Excellence; and is a judge for the Greater Cincinnati Chamber of Commerce Small Business of the Year Award.

The authors may be contacted as follows: College of Business Administration, University of Cincinnati, PO Box 210130, Cincinnati, Ohio 45221-0130; 513-556-7052; Fax: 513-556-5499; E-mail:
