Experiences Implementing a Software Project Measurement Methodology

December 1999
Volume 2 • Number 1

This article discusses practical experiences gained after three years of implementing a proven software measurement process that emphasizes the use of quantitative data to manage software projects. The authors have helped commercial businesses and government agencies implement the process known as Practical Software Measurement: A Foundation for Objective Project Management (PSM). A brief description of the process, project measurement roadblocks to avoid, and advice for institutionalizing project measurement are discussed.

Key words: decision making, information needs, process improvement, project analysis, software measurement

by Beth Layman, TeraQuest Metrics, Inc., and Sharon Rohde, Lockheed Martin Mission Systems

INTRODUCTION

A software project manager’s worst nightmare is having his or her project canceled. Unfortunately, studies have shown that as many as one in 10 software projects is canceled, often due to excessive cost or schedule overruns, unmanageable scope creep, or unmet technical objectives. Many software organizations measure schedule delays in years, not months or weeks. The industry is aware of the many contributors to this software project crisis: unrealistic estimates, poor planning, poor risk management, lack of information to support decision making, and so on. What can the industry do about it (Layman and Rohde 1998)?

There is a growing awareness in the software industry that measurement plays an important role in solving these problems. Measurement, when integrated into the overall project management process, provides the information necessary to identify and manage the issues inherent in software projects. Measurement at the project level can be used to objectively validate estimates and plans, track progress, and even anticipate potential problems such as schedule slippage and cost overruns. The goal of project-level measurement is to provide project managers with sufficient insight into the project to support decision making and positively influence project outcomes.

A few years ago, the U.S. Department of Defense, as a major acquirer of software, identified this software project crisis as a major problem and recognized that better use of quantitative techniques was needed on its programs. It initiated a program called Practical Software Measurement. Practical Software Measurement: A Foundation for Objective Project Management (PSM) (McGarry et al. 1998) is one of the program’s primary products. It is a guidebook that presents a systematic measurement approach and explains techniques for using measurement to make project decisions in time to affect the outcome of a software project. PSM is unique in that it was developed by a working group of measurement experts from both government and industry, and has received endorsements throughout the international software measurement community.

One of the authors of this article helped write the guidebook and both authors are qualified PSM trainers. Both authors have been working to transition the PSM guidance to software-intensive commercial information technology, government, and aerospace software projects. This article provides an overview of the PSM guidance and then describes the lessons the authors have learned through their experiences implementing PSM during the last three years. These lessons should be useful to anyone implementing PSM or any other project measurement approach.

PSM OVERVIEW

PSM is a guidebook designed to help software project managers: 1) identify issues and objectives that are important to their project’s success; 2) implement a measurement program focused on those issues; and 3) gain objective insight into those issues throughout the project’s life cycle. PSM represents a practical, easy-to-use set of best practices for software measurement. Because PSM presents a flexible measurement process vs. a fixed set of software measures, it can be applied to virtually any software project. Information on how to obtain a complete copy of the PSM guidance is provided at the end of this article.

[Figure 1: The three major activities of the PSM measurement process (layman_fig1.gif)]

PSM characterizes the key elements or principles of a successful measurement program, then describes a comprehensive measurement process based on those principles. The process consists of three major activities, as shown in Figure 1. The first activity describes how to tailor the measurement program to address project-specific issues, risks, and objectives. The second activity describes a systematic process for applying, or using, measurement to gain insight into the project’s issues and to aid in decision making. The third activity, implementing measurement, explains how to put measurement into practice within an organization. In addition to the process guidance, PSM includes detailed selection and specification guidelines for proven software measures, sample indicators, measurement case studies from real-life software projects, and guidance for putting measurement on contract.

DEVELOPING A MEASUREMENT PLAN

During the tailoring phase, measurement requirements for the project are identified. PSM’s issue-driven approach stipulates that the project’s unique issues and objectives drive the identification of measurement requirements. This is because the purpose of measurement is, first and foremost, to help the project achieve its objectives, identify and track risks, satisfy constraints, and recognize problems early. PSM defines the following common types of project issues:

  • Schedule and progress
  • Resources and cost
  • Growth and stability
  • Product quality
  • Development performance
  • Technical adequacy

[Figure 2: Example mapping of project-specific issues to PSM common issues (layman_fig2.gif)]

PSM emphasizes identifying project issues at the start of a project and then using the measurement process to provide insight into those issues. While some issues are common to most or all projects, each project typically has some unique issues. Examples of project-specific issues might be lack of available object-oriented expertise/resources or concerns about the implementation of a particular software package. Figure 2 shows an example of how project issues can be mapped to PSM common issues in order to further use the tailoring guidance to help identify useful measures and apply them.
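
For illustration only, the following minimal Python sketch shows one way a project team might record how its project-specific issues roll up to PSM’s common issue areas, in the spirit of Figure 2. The project-specific issues named here are hypothetical examples, not part of the PSM guidance.

    # A minimal, illustrative mapping of project-specific issues to PSM's
    # common issue areas. The specific issues listed are hypothetical.
    ISSUE_MAPPING = {
        "Lack of available object-oriented expertise": "Resources and cost",
        "Uncertain behavior of a purchased software package": "Technical adequacy",
        "Frequent requirements changes": "Growth and stability",
        "Aggressive delivery date": "Schedule and progress",
    }

    def common_issue_for(project_issue: str) -> str:
        """Return the PSM common issue area a project-specific issue maps to."""
        return ISSUE_MAPPING.get(project_issue, "Unmapped - revisit tailoring")

    for issue, area in ISSUE_MAPPING.items():
        print(f"{issue:50s} -> {area}")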

Also, issue priority usually varies from project to project. Moreover, it is important to note that many of these issues are interrelated. For example, while an incremental development approach may help uncover or clarify requirements, it may also lead to more schedule slippage. Lack of available object-oriented expertise may not only contribute to additional costs and schedule delays but may also jeopardize software quality. The relationships between issues must be considered when prioritizing.

Once project-specific issues are identified and prioritized, measures can be selected that will provide insight into those issues. A measure is a quantification of an attribute of a software process or product. A variety of measures may be needed to provide insight into a single issue. Measurement selection will be driven by a number of factors including:

  • The cost to collect the measure
  • The availability of the measurement data
  • The timeliness, accuracy, and validity of the measure
  • The measure’s “fit” given relevant project/organizational characteristics

For example, if requirements growth and stability is an issue, then a functional size measure will be needed to track it. The appropriate measure will depend on the nature of the project. Application domain and organizational history/experience will influence the choice of a functional size measure. Information technology organizations may already use function points. Contract software development organizations with a history of tight requirements management techniques, however, may be more comfortable using a requirements-counting schema. An important principle to consider at this stage of the tailoring process is whether the measures selected can realistically be integrated into the project’s day-to-day operating procedures.

PSM provides a list of approximately 40 candidate measures. These measures have been used successfully by members of PSM’s technical working group, which is composed of measurement practitioners from more than 40 different software-producing organizations. While this list of measures is by no means complete, it represents a starting place for identifying and specifying measurement requirements. For each measure identified, the guidance includes helpful information such as data items and useful attributes to collect; recommended unit of measure, collection level, and reporting level; applicability to various domains; sample indicators; and so on.

The measurement plan documents the measurement requirements for the project, starting with the project’s issues and ending with a complete specification of each measure selected. It does not have to be a lengthy document, but should document the following (a minimal specification sketch appears after this list):

  • Issues
  • Measures (data elements to be collected, data definitions, data sources/tools, data collection level, data collection frequency, and access mechanisms)
  • Aggregation strategy (how low-level data will be summarized)
  • Frequency of analysis and reporting
  • Reporting roles and responsibilities
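
As a minimal sketch, the plan elements listed above might be captured per measure in a simple data structure such as the one below. The field names and sample values are assumptions for illustration; PSM does not prescribe a plan format.

    from dataclasses import dataclass
    from typing import List

    # A per-measure specification covering the plan elements listed above.
    # Field names and sample values are illustrative assumptions.
    @dataclass
    class MeasureSpec:
        issue: str                   # project issue the measure addresses
        name: str                    # name of the measure
        data_elements: List[str]     # raw data items to collect
        data_source: str             # tool or artifact the data come from
        collection_level: str        # e.g., component, build, project
        collection_frequency: str    # e.g., weekly
        aggregation: str             # how low-level data will be summarized
        reporting_frequency: str     # how often indicators are reported
        responsible: str             # who collects and reports

    requirements_stability = MeasureSpec(
        issue="Growth and stability",
        name="Requirements count",
        data_elements=["total requirements", "added", "changed", "deleted"],
        data_source="requirements management tool",
        collection_level="software component",
        collection_frequency="weekly",
        aggregation="sum counts to project level",
        reporting_frequency="monthly",
        responsible="project measurement lead",
    )
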
APPLYING AND USING MEASUREMENT

Once a project gets under way, measurement data are regularly collected according to the measurement plan. Measurement tools, databases, and spreadsheets are often used to collect, store, and process raw measurement data. Once collected, the raw data are turned into information. As data are aggregated, compared, and analyzed within the context of recent project events, information emerges that can be used to help manage the project. PSM advocates using a flexible and dynamic analysis process that promotes the use of measurement as primarily an investigative activity. The key building blocks of this activity are indicators. An indicator is a measure or combination of measures that provides insight into a project issue. Indicators often compare actual project performance data to a plan or baseline and are often portrayed graphically.
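
To make the idea of an indicator concrete, the following minimal sketch compares cumulative actuals to plan and flags the variance. The data values and the 10 percent investigation threshold are assumptions chosen for illustration; a real project would set its own.

    # A minimal progress indicator: compare cumulative actuals to plan
    # and flag the variance. All numbers are made-up examples.
    def schedule_variance(planned: float, actual: float) -> float:
        """Relative variance of actual progress against plan (negative = behind)."""
        return (actual - planned) / planned if planned else 0.0

    planned_units = [10, 25, 45, 70, 100]   # cumulative planned work units
    actual_units = [8, 20, 38, 55]          # cumulative actuals to date

    for period, (plan, act) in enumerate(zip(planned_units, actual_units), start=1):
        v = schedule_variance(plan, act)
        flag = "INVESTIGATE" if v < -0.10 else "ok"
        print(f"period {period}: planned={plan} actual={act} variance={v:+.0%} {flag}")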

One of the unique aspects of the PSM guidance is the detail provided regarding the measurement analysis process. PSM 3.1 describes three types of analysis that are performed on data:

  1. Estimation. The development of targets based on historical data and project assumptions.

  2. Feasibility analysis. The analysis of the feasibility of initial and subsequent project plans that use the estimates as a basis.

  3. Performance analysis. The analysis of actual performance compared to project plans.

PSM also describes a four-step process that can be applied whenever data are analyzed: 1) identify the problem; 2) assess problem impact; 3) project possible outcomes; and 4) evaluate alternatives.

[Figure 3: Example analysis model showing leading-indicator relationships among project issues (layman_fig3.gif)]

Finally, the importance of understanding the relationships between project issues and the data that represent them is stressed. PSM prescribes using an analysis model (see Figure 3), which shows how some indicators can serve as leading indicators for a particular issue, because they provide insight into something that contributes to the emergence of the issue. The plusses and minuses show whether an increase in the contributor results in an increase (+) or decrease (-) in the resulting issue. For example, an increase in functional size (due to requirements growth) can result in an increase in product size, and an increase in product size can result in an increase in the effort required to complete the project. Therefore, requirements growth could be viewed as a leading indicator of effort overruns; this means that this relationship and the possible resulting outcomes should be considered during the analysis process.
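
The chain just described (requirements growth drives product size, which drives effort) can be sketched with a simple proportional model, as below. The baseline size, productivity, and growth figures are illustrative assumptions, not values from the PSM guidance, and a real analysis would use the project’s own historical relationships.

    # A minimal sketch of the analysis-model chain: requirements growth (+)
    # drives product size (+), which drives effort (+). All numbers are
    # made-up examples; linear scaling is assumed for simplicity.
    baseline_requirements = 400
    baseline_size_loc = 40_000            # estimated lines of code
    productivity_loc_pm = 250             # lines of code per person-month

    observed_requirements = 460           # current count from the requirements tool
    growth = observed_requirements / baseline_requirements

    projected_size = baseline_size_loc * growth
    projected_effort = projected_size / productivity_loc_pm
    baseline_effort = baseline_size_loc / productivity_loc_pm

    print(f"Requirements growth: {growth - 1:.0%}")
    print(f"Projected size: {projected_size:,.0f} LOC")
    print(f"Projected effort: {projected_effort:.0f} person-months "
          f"(+{projected_effort - baseline_effort:.0f} over baseline)")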

The last step in applying measurement on a project is to actually use the insight gained from measurement analysis to make decisions. This involves communicating the results of the analysis (current problems, impact, outcomes, and alternatives) to the decision makers and taking action. PSM provides guidance on how to clearly communicate results and how to track the results of actions taken.

Because new issues and problems can emerge at any time throughout a software project, PSM advocates that the measurement process implemented be flexible and responsive to change as the project evolves. This means revisiting the tailoring activities and modifying measurement plans as needed throughout the project life cycle. The issues, measures, and analysis techniques must be changed over time to best meet the project’s information needs.

LESSONS LEARNED IMPLEMENTING PSM

The authors have provided training, conducted measurement planning workshops, and assisted with full-scale implementation of PSM on a number of projects–large and small–in both development and maintenance groups. The PSM process has received a very favorable response from the project teams they have worked with because the focus is on meeting their project’s information needs vs. meeting some requirement to provide “outsiders” with project data.

The authors have also encountered a number of difficulties. These potential problems must be understood and resolved before the process can be successfully deployed. The authors’ experiences described in this section represent common or recurring implementation issues in the organizations they have consulted with.

Lesson 1: There is often a disconnect between the measures currently collected and the issues “real projects” face.

One way the authors help organizations implement PSM is through one- to two-day facilitated workshops. Using PSM’s issue-driven approach, they typically lead project teams through an issue identification process early in the workshop. Here, they use brainstorming and project team synergy to identify existing project issues and constraints, and potential issues/risks that may affect future phases. Next, the authors identify the type of information that would provide insight into the highest priority issues. Only then do they look at data currently collected and measures/metrics requirements, if any. A disconnect between what is currently being measured and what information is needed to help the project address their issues often becomes apparent at this stage.

Many of the organizations the authors consult with already have measurement initiatives under way. Typically, a measurement group has been established, and that group has developed a list of metrics that all projects are required to produce on a regular schedule. Project staff members often say that the required metrics are of no value to them. Also, because their project’s process does not support, as a natural by-product, the collection of the data or the analysis (which should be an integral part of the project management and decision-making process), they often just meet the requirements without concern for data validity or accuracy (that is, they report fudged numbers).

Often, PSM’s flexible-at-the-project-level, issue-driven approach is counter to existing measurement practices. This usually becomes apparent during the workshop–just as projects are starting to see that measurement might be of value in managing their projects and making real-time decisions, the measurement group begins raising objections about losing control of the measurement process. Another concern is losing the ability to capture common measures and compare status and performance across projects. While this seems like a big stumbling block, it usually is not. The concerns are usually overcome with one or more of the following:

  • Recognize that organizational reporting requirements are simply a subset of the project measurement process, and learn to make better use of what is required on the project. If properly implemented, the measurement data collected can often be used to gain insight into a variety of different issues. The authors try to get everyone to realize that projects can provide a static set of graphs for organizational use while, at the same time, dynamically analyzing the same raw data and generating other graphs to get insight into their real-time issues.
  • Encourage the measurement group to look at the issues driving the organizational reporting requirements and streamline the requirements based on the highest priority organizational information needs. Sometimes closer examination reveals that the required measures are not really being used to make organizational decisions or drive future estimates/plans, or that the return on investment of collecting certain information is questionable. Sometimes, a more aggregated view of the data is all that is really needed (for example, planned and actual work effort by phase to use in future estimates vs. detailed effort reporting by person, time period, and so on); a simple roll-up sketch of this idea appears after this list. Other times, it becomes clear that different measures for different types of projects are more appropriate (for example, function-point sizing for new development vs. sizing based on change requests for maintenance).
  • Differentiate the information needed to make organizational decisions from information needed to provide senior management oversight into key projects. If senior management wants oversight into key projects, this can usually be accomplished with regular project briefings where project-specific measures are presented. In fact, this approach is superior to the same-status-report-for-every-project approach, because it makes visible the things that are really impacting the project and forces management to help the project staff remove any real-time roadblocks. It does mean that senior managers may be slightly more taxed because they may now have to view different graphic indicators for each project.
  • Resolve to “walk first.” Recognize that, without effective use of measurement at the project team level, the measurement program within an organization will be weak. This is because most data used within a software-intensive organization come from the performance of project work. It is easier to get buy-in for collecting a common set of measures across projects after individuals buy in to the value of measurement.
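
The roll-up mentioned in the second bullet above might look like the following minimal sketch: detailed effort records summed to planned vs. actual effort by phase. The records, phase names, and hours are made-up examples.

    from collections import defaultdict

    # Roll detailed effort records (per person, per week) up to planned vs.
    # actual effort by phase. All data are made-up examples.
    effort_records = [
        # (phase, person, week, hours)
        ("design", "analyst A", "1999-W10", 32),
        ("design", "analyst B", "1999-W10", 40),
        ("code",   "dev C",     "1999-W14", 38),
        ("code",   "dev C",     "1999-W15", 42),
    ]
    planned_hours = {"design": 320, "code": 560, "test": 400}

    actual_hours = defaultdict(int)
    for phase, _person, _week, hours in effort_records:
        actual_hours[phase] += hours

    for phase, planned in planned_hours.items():
        actual = actual_hours.get(phase, 0)
        print(f"{phase:6s} planned={planned:4d}h actual={actual:4d}h")
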
[Figure 4: Example work unit progress chart (layman_fig4.gif)]

The disconnect between organizational and project needs can be seen in the following example involving commonly required schedule data. While monthly Gantt or schedule variance charts are often required for each project, they provide little insight into the cause of schedule slips and often do not indicate a problem until it has reached major proportions. Once projects identify the nature of the schedule issue, simple work unit progress charts, like the one shown in Figure 4, can be used to augment or even replace the Gantt chart. Project leads can use these progress charts to pinpoint schedule problems long before they appear on a Gantt chart.
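
A work unit progress chart like Figure 4 needs only two series: cumulative units planned and cumulative units actually completed per period. The minimal sketch below renders such a chart as text; the unit counts are made-up examples and the "work unit" (for example, modules through code and unit test) is an assumption.

    # A minimal text rendering of a work unit progress chart like Figure 4:
    # cumulative work units planned vs. actually completed each month.
    planned = [4, 9, 15, 22, 30, 40]     # cumulative planned completions
    actual = [4, 8, 12, 16]              # cumulative actual completions so far

    for month, plan in enumerate(planned, start=1):
        done = actual[month - 1] if month <= len(actual) else None
        shown = f"{done:3d}" if done is not None else "  -"
        bar = "#" * (done or 0)
        print(f"month {month}: plan {plan:3d}  actual {shown}  {bar}")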

Lesson 2: Making people need measurement is the best first step toward institutionalizing it.

The authors have learned that the best philosophy to adopt is this: Rather than using logic to fight the many and varied objections project teams may raise against project measurement, get them “hooked” on measurement instead. Measurement planning workshops are a good way to do this. The authors isolate the project team for one to two days, immerse them in PSM, change their perception of measurement, help them realize their need for information, and make them feel their “data deprivation.” The authors show them how the information they need could be derived from their existing process and how it could be used during a project. This often transforms resistant types into chomping-at-the-bit advocates. A key to this transformation process is getting project teams to take ownership of their issues and recognize their responsibility for making visible the things that are happening on the project–things both within and outside their control.

Once people feel they need specific data or indicators, they find ways to use them. After they begin using them, they often find ways to build on what they have in order to meet other needs and build more measurement into their process. To get this cycle going and achieve real institutionalization (for example, where measurement is a natural by-product of the process and represents the “way we do things here”), startup assistance and ongoing consulting services are usually needed. This is where the measurement group can be of service. Many measurement groups the authors have encountered have traditionally “owned” the analysis of measurement data. They collect the data from the project and produce various charts and graphs. They interpret the data and often report results within the organization. With PSM, measurements are analyzed and used by the project team. The authors work with these measurement groups to help them transition from measurement “doers” into measurement “consultants.” They offer their expertise to projects and help them build simple spreadsheets and collection systems to collect the data they need. They help project members learn how to generate graphs from spreadsheets. They share these simple tools across projects. And finally, they advise projects in the proper use of measurement and in this way, help establish true quantitative software project management within their organizations.

[Figure 5: Example planned vs. actual staffing chart (layman_fig5.gif)]

Many of the authors’ clients with maintenance or sustaining projects have a recurring information need: making visible the impact of a “trickling resource drain.” Typically, the project staff is constantly being tagged for nonproject work, yet this drain is never accounted for in project estimates, nor is its impact quantified or even given visibility at the program management/customer level. A fixed staffing level is usually assumed. Project staff members identify the resource drain as a major issue, construct simple charts like the one in Figure 5 showing a gap between planned and actual staffing, and present it along with schedule data. Management and customers often say, “Wow–I had no idea!” and are willing to take corrective action to curb nonproject activities. A very simple measurement often helps solve a very big problem and gets project teams hooked on metrics.
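
The staffing gap in Figure 5 can be quantified with nothing more than the assumed staffing level and the hours actually charged to the project, as in the minimal sketch below. The staffing level, hours-per-month figure, and monthly actuals are illustrative assumptions.

    # A minimal sketch of making the "trickling resource drain" visible:
    # planned full-time staffing vs. actual project effort from time
    # reporting. All numbers are made-up examples.
    planned_staff = 6.0                       # full-time equivalents assumed in the plan
    hours_per_month_per_fte = 160

    actual_project_hours = [880, 850, 790, 760]   # hours charged to the project per month

    print("month  planned FTE  actual FTE  gap")
    for month, hours in enumerate(actual_project_hours, start=1):
        actual_fte = hours / hours_per_month_per_fte
        gap = planned_staff - actual_fte
        print(f"{month:5d}  {planned_staff:11.1f}  {actual_fte:10.1f}  {gap:4.1f}")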

Lesson 3: The project culture will impact the implementation of measurement.

One of the biggest roadblocks to implementing effective project measurement is the prevailing project culture, which is difficult and slow to change. Unfortunately, some organizations still suffer from one or both of the following:

  • Do not make bad news visible or be prepared to get blamed. One workshop attendee lamented: “If this type of data becomes available, management will know where the project actually stands!” Of course, that is precisely what PSM is advocating–identify a problem as early as possible in order to fix it before it becomes catastrophic or unsolvable. Management that reacts negatively rather than constructively to less-than-stellar performance creates a culture that finds itself constantly in “crisis” mode. The authors have seen strong quality advocates and management consultants within developer organizations successfully alter this climate by educating and coaching management on the need to personally change from “blamers” to “helpers,” but it is not a fast or easy transition.
  • Do not give customers insight into the development process–they do not understand these things. PSM advocates sharing measurement information with the customer. This is particularly important when the customer is internal (MIS/IT) and/or when a single customer is paying the bill (MIS/IT or contractual situations). In these cases, customers play an important role in the project’s success. Their inputs drive the development process and sometimes their decisions directly affect the project outcome. Because of this, they need to understand how the project is performing and why.

The bottom line is that both management and customers must be willing to listen and respond constructively to bad news. If this type of maturity is not present in an organization’s current project culture and no attempt is being made to change it, measurement will only be useful within the project’s limited scope of control.

OTHER SUGGESTIONS FOR GETTING STARTED

Based on their experiences with implementing PSM to date, the authors put forth the following additional suggestions for getting started with project measurement:

  • Market the approach. Tie quantitative project management to current software process improvement efforts and show how measurement is integral to risk management, meeting project commitments, and improving process maturity. For example, the measurement approach described in PSM is consistent with, and can be easily linked to, ISO and Capability Maturity Model (CMM)-based improvement initiatives (CMM’s repeatable level requires planning and tracking of project costs, schedule, effort, size, and quality). This type of marketing must occur continuously.
  • Provide education. Ensure that all levels of the organization understand the benefits of measurement and are taught the basic measurement principles, steps, and techniques, such as those outlined in PSM. Management, in particular, needs to understand its special role in supporting the process and its responsibility to actively participate in analyzing/interpreting measurement results and taking corrective actions when needed. The authors have found that some managers do not understand how to properly analyze and interpret charts and graphs, so be sure to reinforce the use of a systematic analysis process like the one discussed in this article.
  • Conduct measurement planning workshops. Consider these workshops as an alternative to traditional training courses for introducing people to project-level measurement concepts. This enables projects to develop their own measurement plan while learning the process.
  • Focus on a few measures. For projects just starting to perform measurement, ensure that a feasible measurement plan is developed. This may translate into a very small subset of measures. Stress the notion that what one measures can change throughout the process as the project’s issues evolve and as the project gains more experience with measurement.
CONCLUSION

The PSM guidance has served as a useful framework for introducing software organizations to the concept of an adaptable measurement approach that can be tailored to fit unique project needs. It has been used to establish measurement approaches on projects with little or no measurement, and to hone comprehensive, formalized measurement programs. The authors hope that, by introducing the PSM approach and sharing some of the implementation challenges they have encountered, readers will consider this approach to software project measurement and will be prepared to deal with these common implementation difficulties.

Obtaining a Copy of the PSM Guidebook

At the time of this writing, version 3.1 of the PSM Guidebook (April 13, 1998) was available on the PSM Web site at www.psmsc.com and version 4 was under development.

ACKNOWLEDGEMENTS

The authors of this article would like to acknowledge the other authors of PSM: John McGarry, Cheryl Jones, David Card, Betsy Bailey, Joseph Dean, Fred Hall, and George Stark; the PSM Support Center; and the many interesting (but unnamed!) projects and project team members whose experiences have served as input for the writing of this article.

REFERENCES

Layman, Beth, and Sharon Rohde. 1998. Experiences implementing software measurement. Presented at Pacific Northwest Software Quality Conference and the 8th International Conference on Software Quality, Portland, Ore.

Layman, Beth. 1998. Lesson learned implementing a best practice: Practical software measurement. Presented at the Lockheed Martin Systems Engineering and Software Symposium, New Orleans, La.

McGarry, Jack et al. 1998. Practical software measurement: A foundation for objective project management, version 3.1. Available at www.psmsc.com.

BIOGRAPHIES

Beth Layman is a senior associate at TeraQuest Metrics, Inc. She has more than 20 years of software industry experience with a system development background and specialization in quality and process management. She provides consulting support to TeraQuest clients in the areas of software measurement, process improvement, and quality management. Prior to joining TeraQuest, Layman worked for Lockheed Martin and served as research director at the Quality Assurance Institute.

Layman is a CMM-based lead assessor, one of the principal authors of Practical Software Measurement: A Foundation for Objective Project Management (PSM), and is an associate editor for Software Quality Professional. Layman can be contacted at TeraQuest Metrics, 5523 Cord Grass Ln., Melbourne Beach, FL 32951 or by e-mail at blayman@teraquest.com.

Sharon Rohde is a senior consultant at Lockheed Martin Mission Systems. She has more than eight years of experience in software engineering, program management, and applied engineering research and development, with three years specifically in software and systems engineering processes and measurement. Other areas of expertise include software methodologies, tools, testing, reuse, quality, and risk management. Prior to joining Lockheed Martin, Rohde developed and implemented a measurement plan for a major government agency and contributed to the Practical Software Measurement’s Product Engineering Working Group. Rohde has authored journal articles dealing with systems engineering automation, risk management, and software reuse. She can be reached by e-mail at sharon.l.rohde@lmco.com.
