A Testing Maturity Model for Software Test Process Assessment and Improvement

September 1999
Volume 1 • Number 4

This article reports on the development of a testing maturity model (TMM) designed to support software development organizations in assessing and improving their software testing processes. The internal structure of the TMM is described, as well as the model framework of maturity goals, subgoals, and activities, tasks, and responsibilities that support the incremental growth of test process maturity. This article also addresses the TMM Assessment Model, which allows organizations to determine the current state of their testing processes and provides guidance for implementing actions to support improvements. Results from a trial evaluation of the TMM questionnaire in industry are discussed, and feedback received from the software industry regarding the TMM and maturity model integration issues is presented.

Key words: inspections, maturity models, process assessment, software processes, software quality management

by Ilene Burnstein, Ariya Homyen, Taratip Suwanassart, Gary Saxena, and Rob Grom

INTRODUCTION

Software systems are becoming increasingly important in modern society. They have a strong impact on vital operations in domains such as the military, finance, and telecommunications. For this reason, it is imperative to address quality issues that relate to both the software development process and the software product. The authors’ research focuses on process and its impact on quality issues. The authors are developing a testing maturity model (TMM) designed to assist software development organizations in evaluating and improving their testing processes (Burnstein, Suwanassart, and Carlson 1996a, 1996b, 1996c). The TMM complements the Capability Maturity Model (CMM) by specifically addressing those issues important to test managers, test specialists, and software quality professionals. Testing as defined in the TMM is applied in its broadest sense to encompass all software quality-related activities. The authors believe that applying the TMM maturity criteria will improve the testing process and have a positive impact on software quality, software engineering productivity, and cycle-time reduction efforts.

APPROACH TO MODEL DEVELOPMENT

The TMM is designed to support assessment and improvement drives from within an organization. It is to be used by:

  • An internal assessment team to identify the current testing capability state
  • Upper management to initiate a testing improvement program
  • Software quality assurance engineers to develop and implement process improvement plans
  • Development teams to improve testing effectiveness
  • Users/clients to define their role in the testing process
There are several existing process evaluation and assessment models, including the Software Capability Maturity Model (SW-CMM) (Paulk et al. 1995, 1993a, 1993b), ISO 9001 (Coallier 1994), BOOTSTRAP (Bicego and Kuvaja 1993), and SPICE (Paulk and Konrad 1994). None of these models, however, focuses primarily on the testing process. The widely used SW-CMM does not adequately address testing issues. For example, in the SW-CMM:
  • The concept of testing maturity is not addressed.
  • Inadequate attention is paid to the role of high-quality testing as a process improvement mechanism.
  • Testing issues are not adequately addressed in the key process areas.
  • Quality-related issues such as testability, test adequacy criteria, test planning, and software certification are not satisfactorily addressed.
Because of the important role of testing in software process and product quality and the limitations of existing process assessment models, the authors have focused their research on developing a TMM. The following components support their objectives:
  • A set of levels that define a testing maturity hierarchy. Each level represents a stage in the evolution to a mature testing process. Movement to a higher level implies that lower-level practices continue to be in place.
  • A set of maturity goals and subgoals for each level (except level 1). The maturity goals identify testing improvement goals that must be addressed to achieve maturity at that level. The subgoals define the scope, boundaries, and needed accomplishments for a particular level. There are also activities, tasks, and responsibilities (ATRs) associated with each maturity goal that are needed to support it.
  • An assessment model consisting of three components. These components include: 1) a set of maturity goal-related questions designed to assess current test process maturity; 2) a set of guidelines designed to select and instruct the assessment team; and 3) an assessment procedure with steps to guide the assessment team through test process evaluation and improvement.
The general requirements for TMM development are as follows. The model must be acceptable to the software development community and be based on agreed-upon software engineering principles and practices. At the higher maturity levels it should be flexible enough to accommodate future best practices. The model must also allow for the development of testing process maturity in structured step-wise phases that follow natural process evolution. There must also be a support mechanism for test process assessment and improvement. To satisfy these requirements, the following four sources served as the principal inputs to TMM development:
  1. The CMM. The SW-CMM is a comprehensive process evaluation and improvement model developed by the Software Engineering Institute that has been widely accepted and applied by the software industry (Paulk et al. 1995, 1993a, 1993b). Like the SW-CMM, the TMM uses the concept of maturity levels as a script for testing process evaluation and improvement. The TMM levels have a structural framework as do those in the SW-CMM. The authors have added a component called “critical views” to their framework to formally include the key groups necessary for test process maturity growth. Both models require that all of the capabilities at each lower level be included in succeeding levels. To support the self-assessment process, the TMM also uses the questionnaire/interview evaluation approach of the SW-CMM. Besides these structural similarities, the TMM is visualized as a complement to the SW-CMM. This view is essential, since a mature testing process is dependent on general process maturity, and organizational investment in assessments can be optimized if assessments in several process areas can be carried out in parallel. TMM/SW-CMM relationships are discussed in more detail later.

    The TMM reflects the evolutionary pattern of testing process maturity growth documented over the last several decades. This model design approach will expedite movement to higher levels of the TMM as it will allow organizations to achieve incremental test process improvement in a way that follows natural process evolution. Designers of the SW-CMM also considered historical evolution an important factor in process improvement model development. For example, concepts from Philip B. Crosby’s quality management maturity grid, which describes five evolutionary stages in the adaptation of quality practices, were adjusted for the software process and used as input for developing the SW-CMM maturity levels (Paulk et al. 1995).

  2. Gelperin and Hetzel’s Evolutionary Testing Model. The authors used the historical model provided in a paper by Gelperin and Hetzel (1988) as the foundation for historical-level differentiation in the TMM. The Gelperin and Hetzel model describes phases and test goals for the 1950s through the 1990s. The initial period is described as “debugging oriented,” during which most software development organizations had not clearly differentiated between testing and debugging. Testing was an ad hoc activity associated with debugging to remove bugs from programs. Testing has since progressed to a “prevention-oriented” period, which describes best current testing practices and reflects the optimizing level 5 of both the SW-CMM and the TMM.

  3. Current industrial testing practices. A survey of industrial practices also provided important input to TMM level definition (Durant 1993). It illustrated the best and worst testing environments in the software industry at that time, and has allowed the authors to extract realistic benchmarks by which to evaluate and improve testing practices.

  4. Beizer’s Progressive Phases of Tester’s Mental Model. The authors have also incorporated concepts associated with Beizer’s evolutionary model of the individual tester’s thinking process (1990). Its influence on TMM development is based on the premise that a mature testing organization is built on the skills, abilities, and attitudes of the individuals who work within it.
At the time the TMM was being developed, two other models that support testing process assessment and improvement were reported. Gelperin and Hayashi’s model (1996), the Testability Maturity Model, uses a staged architecture for its framework. Three maturity levels are described, along with six key support areas that are reported to be analogous to key process areas in the SW-CMM. The three levels are loosely defined as weak, basic, and strong. The internal level structure is not described in detail in the report, nor is it clear where the six key support areas fit into the three-level hierarchy. A simple score card that covers 20 test-process-related issues is provided to help an organization determine its Testability Maturity Model level (Gelperin 1996). No formal assessment process is reported.

Koomen and Pol (1998) describe what they call a Test Process Improvement Model (TPI), which does not follow a staged architecture. Their model contains 20 key areas, each with different maturity levels. Each level contains several checkpoints that are helpful for determining maturity. In addition, improvement suggestions for reaching a target level are provided with the model, which are helpful for generating action plans.

In contrast to these researchers, the authors have used a systematic approach to developing their TMM based on the four sources described, allowing them to satisfy the requirements for TMM development. The authors believe that their developmental approach has resulted in a TMM that is:

  • More comprehensive and fine-grained in its level structure
  • Supported by a well-defined assessment model
  • Well defined and easier to understand and use
  • Able to provide greater coverage of test-related issues
  • Better suited to support incremental test process maturity growth.
The TMM as described in this article is also more compatible with, and conceptually more similar to, the SW-CMM. This is beneficial to organizations involved in SW-CMM assessment and improvement drives.

THE MODEL STRUCTURE: A FRAMEWORK FOR THE LEVELS

The TMM is characterized by five testing maturity levels within a framework of goals, subgoals, activities, tasks, and responsibilities. The model framework is shown in Figure 1 (Burnstein, Suwanassart, and Carlson 1996a, 1996b, 1996c). Each level implies a specific testing maturity. With the exception of level 1, several maturity goals, which identify key process areas, are indicated at each level. The maturity goals identify testing improvement goals that must be addressed to achieve maturity at that level. To be placed at a level, an organization must satisfy that level’s maturity goals.

Each maturity goal is supported by one or more maturity subgoals, which specify less-abstract objectives and define the scope, boundaries, and needed accomplishments for a particular level. The maturity subgoals are achieved through a set of ATRs. The ATRs address implementation and organizational adaptation issues at a specific level. Activities and tasks are defined in terms of actions that must be performed at a given level to improve testing capability; they are linked to organizational commitments. Responsibility for these ATRs is assigned to the three groups that the authors believe are the key participants in the testing process: managers, developers/testers, and users/clients. In the model they are referred to as “the three critical views.” The manager’s view involves commitment and the ability to perform activities and tasks related to improving testing process maturity. The developer/tester’s view encompasses the technical activities and tasks that, when applied, constitute mature testing practices. The user/client’s view is defined as a cooperating, or supporting, view. The developers/testers work with user/client groups on quality-related activities and tasks that concern user-oriented needs. The focus is on soliciting user/client support, consensus, and participation in activities such as requirements analysis, usability testing, and acceptance test planning. Examples of ATRs are found in the sidebar “Sample ATRs.”
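The containment relationships just described (levels contain maturity goals, goals contain subgoals, and subgoals are achieved through ATRs assigned to the three critical views) can be pictured as a simple data structure. The following Python sketch is illustrative only; the class names, fields, and the level-2 fragment shown are the authors' concepts rendered in hypothetical code, not part of the TMM itself.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class CriticalView(Enum):
        """The three groups that share responsibility for each ATR."""
        MANAGER = "manager"
        DEVELOPER_TESTER = "developer/tester"
        USER_CLIENT = "user/client"

    @dataclass
    class ATR:
        """An activity, task, or responsibility assigned to one or more views."""
        description: str
        views: List[CriticalView]

    @dataclass
    class MaturitySubgoal:
        description: str
        atrs: List[ATR] = field(default_factory=list)

    @dataclass
    class MaturityGoal:
        name: str
        subgoals: List[MaturitySubgoal] = field(default_factory=list)

    @dataclass
    class TMMLevel:
        number: int                  # 2 through 5; level 1 has no maturity goals
        name: str
        goals: List[MaturityGoal] = field(default_factory=list)

    # A fragment of level 2, "Phase Definition," using one goal, one subgoal,
    # and one manager-view ATR taken from the examples later in this article.
    planning_goal = MaturityGoal(
        name="Initiate a test planning process",
        subgoals=[
            MaturitySubgoal(
                description="A test-plan template is developed, recorded, and distributed",
                atrs=[
                    ATR(
                        description="Upper management provides support, resources, "
                                    "and training for test-planning policies",
                        views=[CriticalView.MANAGER],
                    )
                ],
            )
        ],
    )

    level2 = TMMLevel(number=2, name="Phase Definition", goals=[planning_goal])
    print(level2.name, "-", level2.goals[0].name)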

MATURITY GOALS AT THE TMM LEVELS

The operational framework of the TMM provides a sequence of hierarchical levels that contain the maturity goals, subgoals, and ATRs that define the state of testing maturity of an organization at a particular level, and identify areas that an organization must focus on to improve its testing process. The hierarchy of testing maturity goals is shown in Figure 2 (Burnstein, Suwanassart, and Carlson 1996a, 1996b, 1996c). Following is a brief description of the maturity goals for all levels (except level 1, which has no maturity goals).

Level 2: Phase Definition

At TMM level 2 an organization begins to address both the technical and managerial aspects of testing in order to mature. A testing phase is defined in the software life cycle. Testing is planned, is supported by basic testing techniques and tools, and is repeatable over all software projects. Testing is also separated from debugging, which is difficult to plan. Following are the level-2 maturity goals:

  1. Develop testing and debugging goals. This calls for a clear distinction between testing and debugging. The goals, tasks, activities, and tools for each must be identified and responsibilities must be assigned. Management must accommodate and institutionalize both processes. Separating these two processes is essential for testing maturity growth since they are different in goals, methods, and psychology. Testing at TMM level 2 is now a planned activity and therefore can be managed. Managing debugging, however, is more complex, because it is difficult to predict the nature of the defects that will occur and how long it will take to repair them. To reduce the process unpredictability often caused by large-scale debugging-related activities, the project manager must allocate time and resources for defect localization, repair, and retest. At higher levels of the TMM this will be facilitated by the availability of detailed defect and repair data from past projects.

  2. Initiate a test planning process. Planning is essential for a process that is to be repeated, defined, and managed. Test planning requires stating objectives, analyzing risks, outlining strategies, and developing test design specifications and test cases. Test planning also involves documenting test-completion criteria to determine when the testing is complete. In addition, the test plan must address the allocation of resources, the scheduling of test activities, and the responsibilities for testing on the unit, integration, system, and acceptance levels.

  3. Institutionalize basic testing techniques and methods. To improve testing process maturity, basic testing techniques and methods must be applied across the organization. How and when these techniques and methods are to be applied and any basic tool support for them should be clearly specified. Examples of basic techniques and methods include black-box and white-box (glass-box) testing strategies; use of a requirements validation matrix; and the division of execution-based testing into subphases such as unit, integration, system, and acceptance testing.
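The requirements validation matrix mentioned under goal 3 is, in its simplest form, a table that traces each requirement to the test cases exercising it, so that coverage gaps are visible. A minimal sketch follows; the requirement and test-case identifiers are hypothetical.

    from typing import Dict, Set

    # Hypothetical traceability data: which test cases exercise which requirements.
    coverage: Dict[str, Set[str]] = {
        "REQ-001": {"TC-01", "TC-04"},
        "REQ-002": {"TC-02"},
        "REQ-003": set(),              # not yet covered by any test case
    }

    def uncovered(matrix: Dict[str, Set[str]]) -> Set[str]:
        """Requirements that no test case traces back to."""
        return {req for req, cases in matrix.items() if not cases}

    def print_matrix(matrix: Dict[str, Set[str]]) -> None:
        """Print the requirements validation matrix with an X for each trace."""
        cases = sorted({c for s in matrix.values() for c in s})
        print("requirement ".ljust(13) + "  ".join(cases))
        for req in sorted(matrix):
            row = "  ".join("X".center(len(c)) if c in matrix[req] else ".".center(len(c))
                            for c in cases)
            print(req.ljust(13) + row)

    print_matrix(coverage)
    print("Uncovered requirements:", uncovered(coverage))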
Level 3: Integration

Testing at TMM level 3 is expanded into a set of well-defined activities that are integrated into all phases of the software life cycle. At this level management also supports the formation and training of a software test group. These are specialists who are responsible for all levels of testing and, along with software quality assurance professionals, serve as liaisons with the users/clients to ensure their participation in the testing process. Following are the level-3 maturity goals:

  1. Establish a software test organization. Since testing in its fullest sense has a great influence on product quality and consists of complex activities that are usually done under tight schedules and high pressure, it is necessary to have a well-trained and dedicated group in charge of this process. The test group formed at TMM level 3 oversees test planning, test execution and recording, defect tracking, test metrics, the test database, test reuse, test tracking, and evaluation.

  2. Establish a technical training program. A technical training program ensures that a skilled staff is available to the testing group. At level 3, the staff is trained in test planning, testing methods, standards, techniques, and tools. The training program also prepares the staff for the review process and for supporting user participation in testing and review activities.

  3. Integrate testing into the software life cycle. Management and technical staff now realize that carrying out testing activities in parallel with all life-cycle phases is critical for test process maturity and software product quality. Support for this integration may come from application of a development model that supports integration of test-related activities into the life cycle. As a result of the integration efforts, test planning is now initiated early in the life cycle. User input to the testing process is solicited through established channels during several of the life-cycle phases.

  4. Control and monitor the testing process. Monitoring and controlling the testing process provides visibility to its associated activities and ensures that the testing process proceeds according to plan. Test progress is determined by comparing the actual test work products, test effort, costs, and schedule to the test plan. Support for controlling and monitoring comes from standards for test products, test milestones, test logs, test-related contingency plans, and test metrics that can be used to evaluate test progress and test effectiveness. Configuration management for test-related items also provides essential support for this maturity goal.
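Goal 4's comparison of actual test progress with the test plan can be supported by a few simple derived measures. The sketch below is illustrative; the metric names, fields, and sample figures are assumptions, not measures prescribed by the TMM.

    from dataclasses import dataclass

    @dataclass
    class TestStatus:
        """Planned versus actual figures for one test milestone (hypothetical fields)."""
        planned_cases: int
        executed_cases: int
        passed_cases: int
        planned_effort_hours: float
        actual_effort_hours: float

    def progress_report(s: TestStatus) -> dict:
        """Compare actual test progress, effort, and results to the test plan."""
        return {
            "execution_progress": s.executed_cases / s.planned_cases,
            "pass_rate": s.passed_cases / s.executed_cases if s.executed_cases else 0.0,
            "effort_variance": (s.actual_effort_hours - s.planned_effort_hours)
                               / s.planned_effort_hours,
        }

    status = TestStatus(planned_cases=120, executed_cases=90, passed_cases=81,
                        planned_effort_hours=200.0, actual_effort_hours=230.0)
    print(progress_report(status))
    # {'execution_progress': 0.75, 'pass_rate': 0.9, 'effort_variance': 0.15}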
Level 4: Management and Measurement

At TMM level 4 the testing process becomes fully managed; that is, it is now planned, directed, staffed, organized, and controlled (Thayer 1998). Test-related measurements are defined, collected, analyzed, and used by managers, software quality assurance staff members, and testers. The definition of a testing activity is expanded to formally include inspections at all phases of the life cycle. Peer reviews and inspections serve as complements to execution-based testing. They are viewed as quality control procedures that can be applied to remove defects from software artifacts. Following are the level-4 maturity goals:

  1. Establish an organizationwide review program. At TMM level 3 an organization integrates testing activities into the software life cycle. The emphasis is on developing test plans early in the development process. At level 4 this integration is augmented by the establishment of a formal review program. Peer reviews, in the form of inspections and walkthroughs, are considered testing activities and are conducted at all phases of the life cycle to identify, catalog, and remove defects from software work products and test work products early and effectively. An extended version of the V-model as shown in Suwanassart (1996) can be used to support the integration of peer-review activities into the software life cycle. Other means for integration of review/test activities can also be used.

  2. Establish a test measurement program. A test measurement program is essential for evaluating the quality and effectiveness of the testing process, assessing the productivity of the testing personnel, and monitoring test process improvement. A test measurement program must be carefully planned and managed. Test data to be collected must be identified and decisions made on how they are to be used and by whom.

  3. Software quality evaluation. One purpose of software quality evaluation at this level of the TMM is to relate software quality issues to the adequacy of the testing process. Software quality evaluation requires that an organization define measurable quality attributes and quality goals for evaluating each type of software work product. Quality goals are tied to testing process adequacy since a mature testing process should lead to software that is reliable, usable, maintainable, portable, and secure.
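Goals 2 and 3 together imply a simple evaluation loop: measured values for each work product are compared against its stated quality goals. The following sketch makes that comparison concrete; the attributes, thresholds, and measured values are hypothetical examples, not values taken from the TMM.

    # Hypothetical quality goals for one software work product.
    quality_goals = {
        "defect_density_per_kloc": (2.0, "max"),    # at most 2 defects per KLOC
        "statement_coverage":      (0.85, "min"),   # at least 85 percent coverage
        "mean_time_to_failure_h":  (500.0, "min"),  # at least 500 hours MTTF
    }

    # Hypothetical measured values from the test measurement program.
    measured = {
        "defect_density_per_kloc": 1.4,
        "statement_coverage": 0.78,
        "mean_time_to_failure_h": 620.0,
    }

    def evaluate(goals, values):
        """Report whether each measured attribute meets its quality goal."""
        report = {}
        for attr, (threshold, kind) in goals.items():
            value = values[attr]
            met = value <= threshold if kind == "max" else value >= threshold
            report[attr] = ("met" if met else "NOT met", value, threshold)
        return report

    for attr, (status, value, threshold) in evaluate(quality_goals, measured).items():
        print(f"{attr}: {status} (measured {value}, goal {threshold})")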
Level 5: Optimization/Defect Prevention and Quality Control

There are several test-related objectives at the highest level of the TMM. At this level, testing is performed to ensure that the software satisfies its specification, that it is reliable, and that a stated level of confidence in its reliability can be established. Testing is also done to detect and prevent defects; the latter is achieved by collecting and analyzing defect data.

Since the testing process is now repeatable, defined, managed, and measured, it can be fine-tuned and continuously improved. Management provides leadership and motivation and supports the infrastructure necessary for continuously improving product and process quality. Following are the level-5 maturity goals:

  1. Application of process data for defect prevention. Mature organizations are able to learn from their past. Following this philosophy, organizations at the highest level of the TMM record defects, analyze defect patterns, and identify root causes of errors. Action plans are developed, and actions are taken to prevent defect recurrence. There is a defect-prevention team that is responsible for defect-prevention activities. Team members interact with developers to apply defect-prevention activities throughout the life cycle.

  2. Quality control. At level 4 of the TMM, organizations focus on testing for a group of quality-related attributes, such as correctness, security, portability, interoperability, usability, and maintainability. At level 5 organizations use statistical sampling, measurements of confidence levels, trustworthiness, and reliability goals to drive the testing process. The testing group and the software quality assurance group are quality leaders; they work with software designers and implementers to incorporate techniques and tools to reduce defects and improve software quality. Automated tools support the running and rerunning of test cases and defect collection and analysis. Usage modeling, based on a characterization of the population of intended uses of the software in its intended operational environment, is used to perform statistical testing (Walton, Poore, and Trammell 1995); a sketch of this technique appears after this list.

  3. Test process optimization. At the highest level of the TMM the testing process is subject to continuous improvement across projects and across the organization. The test process is quantified and can be fine-tuned so that maturity growth is an ongoing process. An organizational infrastructure consisting of policies, standards, training, facilities, tools, and organizational structures that has been put in place by progressing up the TMM hierarchy supports this continuous maturity growth.
Optimizing the testing process involves: 1) identifying testing practices that need to be improved; 2) implementing the improvements; 3) tracking progress; 4) evaluating new test-related tools and technologies for adaptation; and 5) supporting technology transfer.
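The usage modeling mentioned under the quality-control goal is often realized as a Markov-chain model of expected operational use, from which test cases are drawn at random so that test results support statistical claims about reliability (Walton, Poore, and Trammell 1995). The sketch below assumes a small, hypothetical usage model for an imaginary application; the states and transition probabilities are illustrative only.

    import random

    # Hypothetical Markov-chain usage model: for each state, the probability of
    # the next user action, derived from expected operational use.
    usage_model = {
        "Start":    [("Login", 1.0)],
        "Login":    [("Browse", 0.7), ("Search", 0.3)],
        "Browse":   [("Search", 0.4), ("Checkout", 0.3), ("Exit", 0.3)],
        "Search":   [("Browse", 0.5), ("Checkout", 0.2), ("Exit", 0.3)],
        "Checkout": [("Exit", 1.0)],
    }

    def generate_test_case(model, start="Start", end="Exit", rng=random):
        """One random walk through the usage model is one statistical test case."""
        path, state = [start], start
        while state != end:
            r, cumulative = rng.random(), 0.0
            for next_state, prob in model[state]:
                cumulative += prob
                if r <= cumulative:
                    break
            state = next_state          # falls back to the last transition on rounding
            path.append(state)
        return path

    random.seed(1)
    for _ in range(3):
        print(" -> ".join(generate_test_case(usage_model)))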

THE TMM ASSESSMENT MODEL: AN APPROACH TO DEVELOPMENT

The TMM Assessment Model (TMM-AM) is necessary to support self-assessment of the testing process. It uses the TMM as its reference model. The authors’ research objectives for the TMM-AM were to: 1) provide a framework based on a set of principles in which software engineering practitioners could assess and evaluate their software testing processes; 2) provide a foundation for test process improvement through data analysis and action planning; and 3) contribute to the growing body of knowledge in software process engineering. The TMM-AM is not intended to be used for certification of the testing process by an external body.

The SW-CMM and SPICE Assessment Models were used to guide development of the TMM-AM (Paulk et al. 1995, 1993a, 1993b; ISO 1995; Zubrow et al. 1994). The goal was for the resulting TMM-AM to be compliant with the CMM Appraisal Framework so that, in the future, organizations would be able to perform parallel assessments in multiple process areas (Masters and Bothwell 1995). A set of 16 principles has been developed to support TMM-AM design (Homyen 1998). Based on the 16 principles, the SW-CMM Assessment Model, SPICE, and the CMM Appraisal Framework, the authors have developed a set of components for the TMM-AM.

THE TMM-AM COMPONENTS

The TMM-AM has three major components: the assessment procedure, the assessment instrument (a questionnaire), and team training and selection criteria. A set of inputs and outputs is also prescribed for the TMM-AM (Homyen 1998). The relationship among these items is shown in Figure 3.

The Assessment Procedure

The TMM-AM assessment procedure consists of a series of steps that guide an assessment team in carrying out testing process self-assessment. The principal goals for the TMM assessment procedure are: 1) to support the development of a test process profile and the determination of a TMM level; 2) to guide the organization in developing action plans for test process improvement; 3) to ensure the assessment is executed with efficient use of the organization’s resources; and 4) to guide the assessment team in collecting, organizing, and analyzing the assessment data. A brief summary of the steps in the assessment procedure follows.

  1. Preparation. This step includes selecting and training the assessment team, choosing the team leader(s), developing the assessment plan, selecting the projects, and preparing the organizational units participating in the assessment.

  2. Conducting the assessment. In this step the team collects and records assessment information from interviews, presentations, questionnaires, and relevant documents. A test management support system as described by Hearns and Garcia (1998) is very helpful for automatically collecting and organizing test-process-related data and for use in cross-checking data from multiple sources. The TMM traceability matrix, as described later, can also be used to check data accuracy, consistency, and objectivity. This helps ensure that assessment results will be reliable and reproducible. An organization’s TMM level, which is a measure of its current testing maturity level, is determined by analyzing the collected data and using a ranking algorithm. The ranking algorithm developed for the TMM-AM is similar to the algorithm described by Masters and Bothwell (1995) in their work on the CMM Appraisal Framework. The TMM ranking algorithm requires first a rating of the maturity subgoals, then the maturity goals, and finally the maturity level (Homyen 1998); a sketch of this bottom-up rating appears after this list.

  3. Reporting the assessment outputs. The TMM-AM outputs include a process profile, a TMM level, and the assessment record. The assessment team prepares the process profile, which gives a summary of the state of the organization’s testing process. The profile also includes a summary of test process strengths and weaknesses, as well as recommendations for improvements. The TMM level is a value from 1 to 5 that indicates the current testing process maturity level of the organization. Level values correspond to the testing maturity hierarchy shown in Figure 2. The assessment record is also completed in this step. It is a written account of the actual assessment that includes: names of assessment team members, assessment inputs and outputs, actual schedules and costs, tasks performed, task duration, persons responsible, data collected, and problems that occurred.

  4. Analyzing the assessment outputs. The assessment team, along with management and software quality engineers, now use the assessment outputs to identify and prioritize improvement goals. An approach to prioritization is described in (Homyen 1998). Quantitative test process improvement targets need to be established in this phase. The targets will support the action plans developed in the next step.

  5. Action planning. An action-planning team develops plans that focus on high-priority improvements identified in the previous step. This team can include assessors, software engineering process group members, software quality assurance staff, and/or opinion leaders chosen from the assessment participants (Puffer and Litter 1997). The action plan describes specific activities, resources, and schedules needed to improve existing practices and add missing practices so the organization can move up to the next TMM level.

  6. Implementing improvement. After the action plans have been developed and approved, they are applied to selected pilot projects. The pilot projects need to be monitored and tracked to ensure task progress and goal achievement. Favorable results with the pilot projects set the stage for organizational adaptation of the new process.
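As noted in step 2, ranking proceeds bottom up: subgoals are rated first, then maturity goals, then the maturity level. The sketch below shows the general shape of such a bottom-up rating; the 0.80 satisfaction threshold, the all-subgoals-satisfied rule, and the sample ratings are assumptions made for illustration, since the actual rating rules are defined in Homyen (1998).

    from typing import Dict, List

    SATISFIED = 0.80    # assumed threshold for a subgoal rating (0.0 to 1.0)

    # Hypothetical subgoal ratings, grouped by TMM level and maturity goal,
    # e.g., the fraction of positive, corroborated responses for each subgoal.
    subgoal_ratings: Dict[int, Dict[str, List[float]]] = {
        2: {
            "Develop testing and debugging goals": [0.90, 0.85],
            "Initiate a test planning process": [0.95, 0.90, 0.80],
            "Institutionalize basic testing techniques and methods": [0.90, 0.88],
        },
        3: {
            "Establish a software test organization": [0.40, 0.30],
            "Establish a technical training program": [0.70],
            "Integrate testing into the software life cycle": [0.60, 0.50],
            "Control and monitor the testing process": [0.55],
        },
    }

    def goal_satisfied(ratings: List[float]) -> bool:
        """A maturity goal is satisfied only if every one of its subgoals is satisfied."""
        return all(r >= SATISFIED for r in ratings)

    def level_satisfied(goals: Dict[str, List[float]]) -> bool:
        """A level is satisfied only if all of its maturity goals are satisfied."""
        return all(goal_satisfied(r) for r in goals.values())

    def tmm_level(data: Dict[int, Dict[str, List[float]]]) -> int:
        """The TMM level is the highest k such that levels 2 through k are all satisfied."""
        level = 1
        for k in sorted(data):
            if k == level + 1 and level_satisfied(data[k]):
                level = k
            else:
                break
        return level

    print(tmm_level(subgoal_ratings))   # prints 2 for the hypothetical data above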

The TMM Assessment Questionnaire

Assessment instruments, such as the questionnaire used by the authors, are needed to support the collection and analysis of information from an assessment, maintain a record of results, and provide information for assessment post mortem analysis. Use of a questionnaire supports CMM Appraisal Framework compliance (Masters and Bothwell 1995), facilitates integration with other process assessment instruments (Zubrow et al. 1994), ensures assessment coverage of all ATRs identified in each maturity goal for each level of the TMM, provides a framework in which to collect and store assessment data, and provides guidelines for the assessors as to which areas should be the focus of an interview.

It should be noted that the TMM questionnaire is not the sole source of input for determining TMM rank and generating testing assessment results. The data from completed questionnaires must be augmented and confirmed using information collected from interviews and presentations, as well as by inspection of relevant documents.

The TMM questionnaire consists of eight parts: 1) instructions for use; 2) respondent background; 3) organizational background; 4) maturity goal and subgoal questions; 5) testing tool use questions; 6) testing trends questions; 7) recommendations for questionnaire improvement; and 8) a glossary of testing terms (Homyen 1998; Grom 1998).

Parts 2 and 3 of the questionnaire are used to gather information about the respondent, the organization, and the units that will be involved in the TMM assessment. The maturity goal and subgoal questions in part 4 are organized by TMM version 1.0 levels and include a developer/tester, manager, and user/client view. The questions are designed to determine the extent to which the organization has mechanisms in place to achieve the maturity goals and resolve maturity issues at each TMM level. The testing tool component records the type and frequency of test-tool use, which can help the team make recommendations for the future. The authors added the testing-trends section to provide a perspective on how the testing process in the organization has been evolving over the last several years. This information is useful for preparing the assessment profile and the assessment record. The recommendations component allows respondents to give TMM-AM developers feedback on the clarity, completeness, and usability of the questionnaire.

Assessment Training and Team Selection Criteria

The authors have designed the TMM-AM to help an organization assess its testing process (the assessment is internal to the organization, which has initiated the drive toward test process improvement and will be the sole possessor of the assessment data and results). Upper management must support the self-assessment and improvement efforts, ensure that proper resources will be available for conducting the assessment, and ensure that recommendations for improvements will be implemented.

A trained assessment team made up of members from within the organization is needed. Assessment team members should understand assessment goals, have the proper knowledge, experience, and skills, have strong communication skills, and be committed to test process improvement. Assessment team size should be appropriate for the purpose and scope of the assessment (Homyen 1998).

The authors have adapted SPICE guidelines for selecting and preparing an effective assessment team (ISO 1995). Preparation, which is conducted by the assessment team leader, includes topics such as an overview of the TMM, interviewing techniques, and data analysis. Training activities include team-building exercises, a walkthrough of the assessment process, filling out a sample questionnaire and other assessment-related forms, and learning to prepare final reports.

FORMS AND TOOLS FOR ASSESSMENT SUPPORT

To support an assessment team, the authors have developed several forms and a tool that implements a Web-based version of the TMM questionnaire (Grom 1998). These forms and tools are important to ensure that assessments are performed in a consistent, repeatable manner, to reduce assessor subjectivity, and to ensure the validity, usability, and comparability of the assessment results. The tools and forms include the process profile and assessment record forms, whose roles have been described previously, as well as the following:

  1. Team training data recording template. This template allows the team leader to record and validate team training data. Completed instances of the template can be used in future assessments to make necessary improvements to the assessment training process.

  2. Traceability matrix. The traceability matrix, in conjunction with the assessment team training procedure, the team data recording template, and the traceability matrix review, is introduced to address the issue of interrater agreement and general assessment reliability (El Emam et al. 1996). The matrix, which is completed as data are collected, allows the assessors to identify sources of data, cross-check the consistency and correctness of the data, and resolve any data-related issues. Review of the matrix data by the assessment team helps eliminate biases and inaccuracies. The matrix and the matrix review process help ensure data integrity and reproducibility of results. A small illustrative sketch of such a matrix appears after this list.

  3. Web-based questionnaire. A complete version of the TMM-AM questionnaire (version 1.1) appears at www.cs.iit.edu/~tmm. The Web-based questionnaire was designed so assessment data could easily be collected from distributed sites and organized and stored in a central data repository that could be parsed for later analysis. Tool design also allows it to run on multiple operating systems and collect data from assessment teams around the world, thus providing support for test process assessment to local and global organizations. A detailed description of tool development is given in (Grom 1998).
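The traceability matrix described in item 2 above can be pictured as a table mapping each assessment finding to the data sources that support it, so that single-source or conflicting evidence is flagged for resolution before rating. The following sketch is illustrative; the findings, sources, and answers are hypothetical.

    from typing import Dict, List, Tuple

    # Hypothetical traceability matrix: for each finding, the answer obtained
    # from each data source (questionnaire, interview, document review).
    matrix: Dict[str, List[Tuple[str, str]]] = {
        "Test plans exist for all projects": [
            ("questionnaire", "yes"), ("interview", "yes"), ("document review", "yes"),
        ],
        "Testing and debugging have separately documented goals": [
            ("questionnaire", "yes"), ("interview", "no"),   # conflicting evidence
        ],
        "A test-plan template is distributed to project managers": [
            ("questionnaire", "yes"),                        # single source only
        ],
    }

    def review(m: Dict[str, List[Tuple[str, str]]]) -> Dict[str, str]:
        """Flag findings whose evidence is single-source or inconsistent."""
        issues = {}
        for finding, evidence in m.items():
            answers = {answer for _, answer in evidence}
            if len(evidence) < 2:
                issues[finding] = "needs corroboration from a second source"
            elif len(answers) > 1:
                issues[finding] = "sources disagree; resolve before rating"
        return issues

    for finding, issue in review(matrix).items():
        print(f"{finding}: {issue}")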

PRELIMINARY DATA FROM TRIAL QUESTIONNAIRE USAGE

Software engineers from two software development organizations evaluated the TMM questionnaire (Homyen 1998). The questionnaire evaluation for this study focused on: 1) clarity; 2) organization; 3) ease of use; and 4) coverage of TMM maturity goals and subgoals. Feedback from the evaluation made it possible to revise and reorganize the TMM questions for better understandability and sequencing. The glossary of terms was also upgraded. These revisions resulted in version 1.1 of the TMM questionnaire, which is displayed on the web site described earlier.

Trial usage of the TMM questionnaire focused on applying the questionnaire to software development and maintenance groups in actual industrial settings. The purpose of the trial usage was to further evaluate the usability of the questionnaire, experiment with the ranking algorithm using actual industrial data, generate sample action plans, and study problems of testing process improvement in real-world environments. One interesting result of the experiment was that although both organizations were evaluated at TMM level 1, the strengths and weaknesses of each were quite different. One of the organizations did satisfy several of the maturity goals at the higher levels of the TMM. Given the state of the existing test process for the latter, it should be able to reach TMM level 2 in a relatively short time period. More details concerning these experiments can be found in (Homyen 1998).

It must be emphasized that a complete TMM assessment was not done in these experiments; a TMM level was determined only with the questionnaire data. In a formal TMM assessment, documents, interviews, and measurement data would also help determine TMM level. In addition, data integrity would be confirmed using the traceability matrix, and a more comprehensive view of strengths and weaknesses would be obtained for the final test process profile. While these small-scale experiments are promising with respect to the usability of the TMM questionnaire and the ranking algorithm, more industry-based experiments are needed to further evaluate the TMM with respect to the organization of the levels, the distribution of the maturity goals over the levels, and the appropriateness of the ATRs. The usefulness and effectiveness of the TMM for large-scale test process assessment and improvement must also be evaluated. The authors are now engaged in planning for these experiments and identifying organizations that are willing to participate in case studies.

TMM EVALUATION AND FEEDBACK

Throughout the development of the TMM the authors received feedback from software engineers, software testers, managers, and software quality professionals from more than 35 organizations around the world. Comments confirmed the need for a TMM since most correspondents believe that existing process improvement models do not adequately support the concept of testing process maturity and do not sufficiently address the special issues relevant to testing process assessment and improvement. An important issue for many practitioners was integration of maturity models and process assessments that would result in: 1) a common architecture and vocabulary; 2) common training requirements; and 3) support for performance of parallel assessments in multiple process areas. Fulfilling these requirements would ensure effective use of organizational resources, both for the assessment and the process improvement efforts.

Initially the authors viewed the TMM as a complement to the SW-CMM. They believed it would simplify parallel process improvement drives in industry if both the SW-CMM and TMM had corresponding levels and goals. In addition, they believed (and still believe) that test process maturity is supported by, and benefits from, general process maturity. Therefore, as part of the initial TMM development effort they identified relationships between TMM/SW-CMM levels and supporting key process areas. A matrix showing these relationships is shown in Figure 4 (Burnstein, Suwanassart, and Carlson 1996b).

In the course of their research, however, the authors realized that maturity model integration issues and intermodel support relationships are more complex than simple level correspondences. Meeting industry requirements for maturity model integration required focusing research efforts in a new direction. These efforts have resulted in the development of a framework for building and integrating process maturity models for software development subdomains such as design and testing. The framework includes procedures, templates, and checklists to support maturity model development and integration in a systematic way (Saxena 1999). A publication on the work accomplished in this project is currently being prepared.

SUMMARY

The authors have been developing a TMM to help organizations assess and improve their software testing processes. Feedback from industry concerning the TMM shows a need for a specific focus on testing process maturity and a need for a specialized test process assessment and improvement model.

Now that the complete model has been developed and trial tested, there must be wider industrial application of the TMM-AM. This will provide the additional data necessary to further evaluate and adapt the TMM so that it becomes an accepted and effective tool for test process improvement. Plans for these case studies are now being developed.

The authors’ future plans also include the development of a tester’s workbench that will recommend testing tools to support achievement of the maturity goals at each TMM level, as well as refinement of the TMM to include additional testing process concepts, such as certification of components. Research on integration mapping of the TMM with other maturity models also continues. The latter is especially important since success in this area will allow organizations to carry out parallel assessment and improvement efforts in multiple process domains, thus making optimal use of organizational resources.

Sample ATRs

The structure of the TMM is such that each maturity goal is supported by several maturity subgoals, which are achieved by a set of activities, tasks, and responsibilities (ATRs) assigned to the three groups that play key roles in the testing process: the managers, developers/testers, and user/client groups. Managers and developers/testers (including software quality professionals) are responsible for development, implementation, and organizational adaptation of the policies, plans, standards, practices, and organizational structures associated with the testing process. They receive support and/or consensus for these tasks and responsibilities from user/client groups. The following paragraphs describe an example of one set of ATRs. The complete set is described in (Suwanassart 1996).

This example comes from level 2 of the TMM, Phase Definition. One of the maturity goals at this level is “Initiate a test-planning process.” Examples of maturity subgoals associated with this goal are:

  1. A framework for organizationwide test planning policies must be established and supported by upper management.
  2. A test-plan template must be developed, recorded, and distributed to project managers and developers.
  3. Test work products must be defined, recorded, and documented.
  4. A mechanism must be put in place to integrate user-generated requirements as inputs into the test plan.
  5. Basic planning tools must be evaluated and recommended, and usage must be supported by management.
Example ATRs for these subgoals are described here. Note that the authors refer only to developers in the ATRs below, since at level 2, a testing group has not yet been established.

The Manager’s View

  • Upper management provides support, resources, and training for test-planning policies and policy implementation.
  • The project manager is responsible for negotiating commitments and assigning responsibilities to developers in order to develop the test plans and test work products.
  • The project manager ensures that there is input and/or consensus from users for the acceptance test plan. Users’ requirements and concerns are recorded.
The Developer/Tester’s View

  • Developers/testers provide input to the test plan; they help determine test goals, approaches, methods, procedures, schedules, and tools.
  • Developers/testers are responsible for developing test specifications, test designs, test cases, and pass/fail criteria for each level in the test plan.
  • Developers/testers plan for the test environment, including what hardware, laboratory space, and software tools are required.
The User/Client’s View

  • The users/clients meet with team representatives to give input to the acceptance test plans. Inputs are recorded and documented.
  • The users describe functional requirements, performance requirements, and other quality attributes such as security, portability, interoperability, and usability. These are recorded and documented.

REFERENCES

Beizer, B. 1990. Software testing techniques, second edition. New York: Van Nostrand Reinhold.

Bicego, A., and D. Kuvaja. 1993. Bootstrap, Europe’s assessment method. IEEE Software 10, no. 3: 93-95.

Burnstein, I., T. Suwanassart, and C. Carlson. 1996a. The development of a testing maturity model. In Proceedings of the Ninth International Quality Week Conference. San Francisco: The Software Research Institute.

–––. 1996b. Developing a testing maturity model: Part 1. CrossTalk, Journal of Defense Software Engineering 9, no. 8: 21-24.

–––. 1996c. Developing a testing maturity model: Part 2. CrossTalk, Journal of Defense Software Engineering 9, no. 9: 19-26.

Coallier, F. 1994. How ISO 9001 fits into the software world. IEEE Software 11, no. 1: 98-100.

Durant, J. 1993. Software testing practices survey report (TR5-93). Software Practices Research Center.

El Emam, K., D. Goldenson, L. Briand, and P. Marshall. 1996. Interrater agreement in SPICE-based assessments: Some preliminary reports. In Proceedings of the Fourth International Conference on the Software Process. Los Alamitos, Calif.: IEEE Computer Society Press.

Gelperin, D., and B. Hetzel. 1988. The growth of software testing. Communications of the Association for Computing Machinery 31, no. 6: 687-695.

Gelperin, D., and A. Hayashi. 1996. How to support better software testing. Application Trends (May): 42-48.

Gelperin, D. 1996. What’s your testability maturity? Application Trends (May): 50-53.

Grom, R. 1998. Report on a TMM assessment support tool. Chicago: Illinois Institute of Technology.

Hearns, J., and S. Garcia. 1998. Automated test team management – it works! In Proceedings of the 10th Software Engineering Process Group Conference. Pittsburgh: Software Engineering Institute.

Homyen, A. 1998. An assessment model to determine test process maturity. Ph.D. thesis, Illinois Institute of Technology.

International Organization for Standardization. 1995. ISO/IEC Software process assessment working draft-Part 3: Rating processes, version 1.0, Part 5: Construction, selection and use of assessment instruments and tools, version 1.0, Part 7: Guide for use in process improvement, version 1.0. Geneva, Switzerland: International Organization for Standardization.

Koomen, T., and M. Pol. 1998. Improvement of the test process using TPI. Available at www.iquip.nl.

Masters, S., and C. Bothwell. 1995. A CMM appraisal framework, version 1.0 (CMU/SEI-95-TR-001). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

Paulk, M., C. Weber, B. Curtis, and M. Chrissis. 1995. The capability maturity model: Guidelines for improving the software process. Reading, Mass.: Addison-Wesley.

Paulk, M., and M. Konrad. 1994. An overview of ISO’s SPICE project. American Programmer 7, no. 2: 16-20.

Paulk, M., B. Curtis, M. Chrissis, and C. Weber. 1993a. Capability maturity model, version 1.1. IEEE Software 10, no. 4: 18-27.

Paulk, M., C. Weber, S. Garcia, M. Chrissis, and M. Bush. 1993b. Key practices of the capability maturity model, version 1.1 (CMU/SEI-93-TR-25). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

Puffer, J., and A. Litter. 1997. Action planning. IEEE Software Engineering Technical Council Newsletter 15, no. 2: 7-10.

Saxena, G. 1999. A framework for building and evaluating software process maturity models. Ph.D. thesis, Illinois Institute of Technology.

Suwanassart, T. 1996. Towards the development of a testing maturity model. Ph.D. thesis, Illinois Institute of Technology.

Thayer, R., ed. 1998. Software engineering project management, second edition. Los Alamitos, Calif.: IEEE Computer Society Press.

Walton, G., J. Poore, and C. Trammell. 1995. Statistical testing of software based on a usage model. Software–Practice and Experience 25, no. 1: 97-108.

Zubrow, D., W. Hayes, J. Siegel, and D. Goldenson. 1994. Maturity questionnaire (CMU/SEI-94-SR-7). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

* The CMM and SW-CMM are service marks of Carnegie Mellon University

BIOGRAPHIES

Ilene Burnstein is an associate professor of computer science at the Illinois Institute of Technology. She teaches both undergraduate and graduate courses in software engineering. Her research interests include: software process engineering, software testing techniques and methods, automated program recognition and debugging, and software engineering education. Burnstein has a doctorate from the Illinois Institute of Technology. She can be reached at Illinois Institute of Technology, Computer Science Department, 10 West 31st St., Chicago, IL 60616 or e-mail at csburnstein@minna.iit.edu.

Ariya Homyen holds a research position at the Ministry of Science, Technology, and Energy in Thailand. She has a doctorate in computer science from the Illinois Institute of Technology. Her research interests include: test process improvement, test management, and process reuse.

Taratip Suwanassart is a faculty member at Chulalongkorn University in Thailand. She has a doctorate in computer science from the Illinois Institute of Technology. Her research interests include: test management, test process improvement, software metrics, and data modeling.

Gary Saxena is a member of the technical staff in the Telematics Communications Group at Motorola. He has a doctorate in computer science from the Illinois Institute of Technology. His research interests include: software architecture, software development and system development processes, and software process maturity modeling.

Robert Grom is manager of data collection for SAFCO Technologies. He has worked as a hardware engineer and now designs software. He has a master’s degree from the Illinois Institute of Technology. Grom’s research interests include software testing and test process improvement.
