Volume 1 • Number 4
This article reports on the development of a testing maturity model (TMM) designed to support software development organizations in assessing and improving their software testing processes. The internal structure of the TMM is described, as well as the model framework of maturity goals, subgoals, and activities, tasks, and responsibilities (ATRs) that support the incremental growth of test process maturity. This article also addresses the TMM Assessment Model, which allows organizations to determine the current state of their testing processes and provides guidance for implementing actions to support improvements. Results from a trial evaluation of the TMM questionnaire in industry are discussed, and feedback received from the software industry regarding the TMM and maturity model integration issues is presented.
Key words: inspections, maturity models, process assessment, software processes, software quality management
by Ilene Burnstein, Ariya Homyen, Taratip Suwanassart, Gary Saxena, and Rob Grom
Software systems are becoming increasingly important in modern society. They have a strong impact on vital operations in domains such as the military, finance, and telecommunications. For this reason, it is imperative to address quality issues that relate to both the software development process and the software product. The authors' research focuses on process and its impact on quality issues. The authors are developing a testing maturity model (TMM) designed to assist software development organizations in evaluating and improving their testing processes (Burnstein, Suwanassart, and Carlson 1996a, 1996b, 1996c). The TMM complements the Capability Maturity Model (CMM) by specifically addressing those issues important to test managers, test specialists, and software quality professionals. Testing as defined in the TMM is applied in its broadest sense to encompass all software quality-related activities. The authors believe that applying the TMM maturity criteria will improve the testing process and have a positive impact on software quality, software engineering productivity, and cycle-time reduction efforts.

APPROACH TO MODEL DEVELOPMENT
The TMM is designed to support assessment and improvement drives from within an organization. It is to be used by:
The TMM reflects the evolutionary pattern of testing process maturity growth documented over the last several decades. This model design approach will expedite movement to higher levels of the TMM, as it will allow organizations to achieve incremental test process improvement in a way that follows natural process evolution. Designers of the SW-CMM also considered historical evolution an important factor in process improvement model development. For example, concepts from Philip B. Crosby's quality management maturity grid, which describes five evolutionary stages in the adaptation of quality practices, were adjusted for the software process and used as input for developing the SW-CMM maturity levels (Paulk et al. 1995).
Koomen and Pol (1998) describe what they call a Test Process Improvement Model (TPI), which does not follow a staged architecture. Their model contains 20 key areas, each with different maturity levels. Each level contains several checkpoints that are helpful for determining maturity. In addition, improvement suggestions for reaching a target level are provided with the model, which are helpful for generating action plans.
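The continuous, per-key-area style of the TPI can be made concrete with a small sketch. The level names and checkpoints below are hypothetical illustrations, not the actual TPI key areas; Koomen and Pol (1998) define the real ones.

```python
def key_area_level(checkpoints: dict) -> str:
    """For one key area, return the highest consecutive level whose
    checkpoints are all met. Levels must be reached in order, so a gap
    at a lower level blocks credit for higher ones.

    checkpoints maps a level name (sortable, e.g. "A" < "B" < "C") to a
    list of booleans, one per checkpoint at that level.
    """
    reached = "none"
    for level in sorted(checkpoints):
        if all(checkpoints[level]):
            reached = level
        else:
            break  # higher levels cannot count until this one is met
    return reached

# Hypothetical key area: level A fully met, level B only partially met.
print(key_area_level({"A": [True, True], "B": [True, False]}))  # A
```

Because each key area is rated independently, a TPI-style profile is a vector of such ratings rather than a single staged level, which is the main architectural contrast with the TMM.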
In contrast to these researchers, the authors have used a systematic approach to developing their TMM based on the four sources described, allowing them to satisfy the requirements for TMM development. The authors believe that their developmental approach has resulted in a TMM that is:
The TMM is characterized by five testing maturity levels within a framework of goals, subgoals, activities, tasks, and responsibilities. The model framework is shown in Figure 1 (Burnstein, Suwanassart, and Carlson 1996a, 1996b, 1996c). Each level implies a specific testing maturity. With the exception of level 1, several maturity goals, which identify key process areas, are indicated at each level. The maturity goals identify testing improvement goals that must be addressed to achieve maturity at that level. To be placed at a level, an organization must satisfy that level's maturity goals.
Each maturity goal is supported by one or more maturity subgoals, which specify less-abstract objectives and define the scope, boundaries, and needed accomplishments for a particular level. The maturity subgoals are achieved through a set of activities, tasks, and responsibilities (ATRs). The ATRs address implementation and organizational adaptation issues at a specific level. Activities and tasks are defined in terms of actions that must be performed at a given level to improve testing capability; they are linked to organizational commitments. Responsibility for these ATRs is assigned to the three groups that the authors believe are the key participants in the testing process: managers, developers/testers, and users/clients. In the model they are referred to as the three critical views. The manager's view involves commitment and the ability to perform activities and tasks related to improving testing process maturity. The developer/tester's view encompasses the technical activities and tasks that, when applied, constitute mature testing practices. The user/client's view is defined as a cooperating, or supporting, view. The developers/testers work with user/client groups on quality-related activities and tasks that concern user-oriented needs. The focus is on soliciting user/client support, consensus, and participation in activities such as requirements analysis, usability testing, and acceptance test planning. Examples of ATRs are found in the sidebar "Sample ATRs."

MATURITY GOALS AT THE TMM LEVELS
The operational framework of the TMM provides a sequence of hierarchical levels that contain the maturity goals, subgoals, and ATRs that define the state of testing maturity of an organization at a particular level, and identify areas that an organization must focus on to improve its testing process. The hierarchy of testing maturity goals is shown in Figure 2 (Burnstein, Suwanassart, and Carlson 1996a, 1996b, 1996c). Following is a brief description of the maturity goals for all levels (except level 1, which has no maturity goals).
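The level/goal/subgoal/ATR hierarchy described above can be sketched as a simple data model. The class names below are illustrative, and the example subgoal and ATR text is hypothetical; only the goal name "Initiate a test-planning process" comes from the model itself (it is discussed in the sidebar).

```python
from dataclasses import dataclass, field
from enum import Enum

class View(Enum):
    """The three critical views that share responsibility for each ATR."""
    MANAGER = "manager"
    DEVELOPER_TESTER = "developer/tester"
    USER_CLIENT = "user/client"

@dataclass
class ATR:
    """An activity, task, or responsibility, assigned to one or more views."""
    description: str
    views: set

@dataclass
class MaturitySubgoal:
    """A less-abstract objective achieved through a set of ATRs."""
    description: str
    atrs: list = field(default_factory=list)

@dataclass
class MaturityGoal:
    """A key process area that must be satisfied to reach a level."""
    name: str
    subgoals: list = field(default_factory=list)

@dataclass
class MaturityLevel:
    number: int   # 1..5; level 1 has no maturity goals
    name: str
    goals: list = field(default_factory=list)

# One level-2 goal, with a hypothetical subgoal and ATR for illustration:
plan_goal = MaturityGoal(
    name="Initiate a test-planning process",
    subgoals=[MaturitySubgoal(
        description="Establish organizational test-planning policies (hypothetical)",
        atrs=[ATR("Develop a test-plan template (hypothetical)",
                  {View.MANAGER, View.DEVELOPER_TESTER})],
    )],
)
level2 = MaturityLevel(2, "Phase Definition", goals=[plan_goal])
```

The point of the sketch is the containment relationship: an organization is placed at a level only when every goal at that level, and hence every supporting subgoal and its ATRs, is addressed.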
Level 2: Phase Definition
At TMM level 2 an organization begins to address both the technical and managerial aspects of testing in order to mature. A testing phase is defined in the software life cycle. Testing is planned, is supported by basic testing techniques and tools, and is repeatable over all software projects. It is separated from debugging, the latter of which is difficult to plan. Following are the level-2 maturity goals:
Level 3: Integration

Testing at TMM level 3 is expanded into a set of well-defined activities that are integrated into all phases of the software life cycle. At this level management also supports the formation and training of a software test group. These are specialists who are responsible for all levels of testing, and along with software quality assurance professionals, serve as liaisons with the users/clients to ensure their participation in the testing process. Following are the level-3 maturity goals:
Level 4: Management and Measurement

At TMM level 4 the testing process becomes fully managed; that is, it is now planned, directed, staffed, organized, and controlled (Thayer 1998). Test-related measurements are defined, collected, analyzed, and used by managers, software quality assurance staff members, and testers. The definition of a testing activity is expanded to formally include inspections at all phases of the life cycle. Peer reviews and inspections serve as complements to execution-based testing. They are viewed as quality control procedures that can be applied to remove defects from software artifacts. Following are the level-4 maturity goals:
Level 5: Optimization, Defect Prevention, and Quality Control

There are several test-related objectives at the highest level of the TMM. At this level one tests to ensure the software satisfies its specification, that it is reliable, and that one can establish a certain level of confidence in its reliability. Testing is also done to detect and prevent defects. The latter is achieved by collecting and analyzing defect data.
Since the testing process is now repeatable, defined, managed, and measured, it can be fine-tuned and continuously improved. Management provides leadership and motivation and supports the infrastructure necessary for continuously improving product and process quality. Following are the level-5 maturity goals:
THE TMM ASSESSMENT MODEL: AN APPROACH TO DEVELOPMENT
The TMM Assessment Model (TMM-AM) is necessary to support self-assessment of the testing process. It uses the TMM as its reference model. The authors' research objectives for the TMM-AM were to: 1) provide a framework based on a set of principles in which software engineering practitioners could assess and evaluate their software testing processes; 2) provide a foundation for test process improvement through data analysis and action planning; and 3) contribute to the growing body of knowledge in software process engineering. The TMM-AM is not intended to be used for certification of the testing process by an external body.
The SW-CMM and SPICE Assessment Models were used to guide development of the TMM-AM (Paulk et al. 1995, 1993a, 1993b; ISO 1995; Zubrow et al. 1994). The goals were for the resulting TMM-AM to be compliant with the CMM Appraisal Framework so that in the future, organizations would be able to perform parallel assessments in multiple process areas (Masters and Bothwell 1995). A set of 16 principles has been developed to support TMM-AM design (Homyen 1998). Based on the 16 principles, the SW-CMM Assessment Model, SPICE, and the CMM Appraisal Framework, the authors have developed a set of components for the TMM-AM.
The TMM-AM has three major components: the assessment procedure, the assessment instrument (a questionnaire), and team training and selection criteria. A set of inputs and outputs is also prescribed for the TMM-AM (Homyen 1998). The relationship among these items is shown in Figure 3.
The Assessment Procedure
The TMM-AM assessment procedure consists of a series of steps that guide an assessment team in carrying out testing process self-assessment. The principal goals for the TMM assessment procedure are: 1) to support the development of a test process profile and the determination of a TMM level; 2) to guide the organization in developing action plans for test process improvement; 3) to ensure the assessment is executed with efficient use of the organization's resources; and 4) to guide the assessment team in collecting, organizing, and analyzing the assessment data. A brief summary of the steps in the assessment procedure follows.
The TMM Assessment Questionnaire
Assessment instruments, such as the questionnaire used by the authors, are needed to support the collection and analysis of information from an assessment, maintain a record of results, and provide information for assessment post mortem analysis. Use of a questionnaire supports CMM Appraisal Framework compliance (Masters and Bothwell 1995), facilitates integration with other process assessment instruments (Zubrow et al. 1994), ensures assessment coverage of all ATRs identified in each maturity goal for each level of the TMM, provides a framework in which to collect and store assessment data, and provides guidelines for the assessors as to which areas should be the focus of an interview.
It should be noted that the TMM questionnaire is not the sole source of input for determining TMM rank and generating testing assessment results. The data from completed questionnaires must be augmented and confirmed using information collected from interviews and presentations, as well as by inspection of relevant documents.
The TMM questionnaire consists of eight parts: 1) instructions for use; 2) respondent background; 3) organizational background; 4) maturity goal and subgoal questions; 5) testing tool use questions; 6) testing trends questions; 7) recommendations for questionnaire improvement; and 8) a glossary of testing terms (Homyen 1998; Grom 1998).
Parts 2 and 3 of the questionnaire are used to gather information about the respondent, the organization, and the units that will be involved in the TMM assessment. The maturity goal and subgoal questions in part 4 are organized by TMM version 1.0 levels and include a developer/tester, manager, and user/client view. The questions are designed to determine the extent to which the organization has mechanisms in place to achieve the maturity goals and resolve maturity issues at each TMM level. The testing tool component records the type and frequency of test-tool use, which can help the team make recommendations for the future. The authors added the testing-trends section to provide a perspective on how the testing process in the organization has been evolving over the last several years. This information is useful for preparing the assessment profile and the assessment record. The recommendations component allows respondents to give TMM-AM developers feedback on the clarity, completeness, and usability of the questionnaire.
Assessment Training and Team Selection Criteria
The authors have designed the TMM-AM to help an organization assess its testing process (the assessment is internal to the organization: it has initiated the drive toward test process improvement, and it will be the sole possessor of the assessment data and results). Upper management must support the self-assessment and improvement efforts, ensure that proper resources will be available for conducting the assessment, and ensure that recommendations for improvements will be implemented.
A trained assessment team made up of members from within the organization is needed. Assessment team members should understand assessment goals, have the proper knowledge, experience, and skills, have strong communication skills, and be committed to test process improvement. Assessment team size should be appropriate for the purpose and scope of the assessment (Homyen 1998).
The authors have adapted SPICE guidelines for selecting and preparing an effective assessment team (ISO 1995). Preparation, which is conducted by the assessment team leader, includes topics such as an overview of the TMM, interviewing techniques, and data analysis. Training activities include team-building exercises, a walk-through of the assessment process, filling out a sample questionnaire and other assessment-related forms, and learning to prepare final reports.
FORMS AND TOOLS FOR ASSESSMENT SUPPORT
To support an assessment team, the authors have developed several forms and a tool that implements a Web-based version of the TMM questionnaire (Grom 1998). These forms and tools are important to ensure that the assessments are performed in a consistent, repeatable manner to reduce assessor subjectivity and to ensure the validity, usability, and comparability of the assessment results. The tools and forms include the process profile and assessment record forms, whose roles have been described previously, as well as the following:
Software engineers from two software development organizations evaluated the TMM questionnaire (Homyen 1998). The questionnaire evaluation for this study focused on: 1) clarity; 2) organization; 3) ease of use; and 4) coverage of TMM maturity goals and subgoals. Feedback from the evaluation made it possible to revise and reorganize the TMM questions for better understandability and sequencing. The glossary of terms was also upgraded. These revisions resulted in version 1.1 of the TMM questionnaire, which is displayed on the web site described earlier.
Trial usage of the TMM questionnaire focused on applying the questionnaire to software development and maintenance groups in actual industrial settings. The purpose of the trial usage was to further evaluate the usability of the questionnaire, experiment with the ranking algorithm using actual industrial data, generate sample action plans, and study problems of testing process improvement in real-world environments. One interesting result of the experiment was that although both organizations were evaluated at TMM level 1, the strengths and weaknesses of each were quite different. One of the organizations did satisfy several of the maturity goals at the higher levels of the TMM. Given the state of the existing test process for the latter, it should be able to reach TMM level 2 in a relatively short time period. More details concerning these experiments can be found in Homyen (1998).
It must be emphasized that a complete TMM assessment was not done in these experiments; a TMM level was determined only with the questionnaire data. In a formal TMM assessment, documents, interviews, and measurement data would also help determine TMM level. In addition, data integrity would be confirmed using the traceability matrix, and a more comprehensive view of strengths and weaknesses would be obtained for the final test process profile. While these small-scale experiments are promising with respect to the usability of the TMM questionnaire and the ranking algorithm, more industry-based experiments are needed to further evaluate the TMM with respect to the organization of the levels, the distribution of the maturity goals over the levels, and the appropriateness of the ATRs. The usefulness and effectiveness of the TMM for large-scale test process assessment and improvement must also be evaluated. The authors are now engaged in planning for these experiments and identifying organizations that are willing to participate in case studies.
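The staged ranking step mentioned above can be illustrated with a simplified sketch. The authors' actual ranking algorithm is specified in Homyen (1998); the rule below (an organization sits at the highest level whose goals, and all lower levels' goals, are satisfied) and the 50 percent satisfaction threshold are assumptions made for illustration only.

```python
def goal_satisfied(yes_answers: int, total: int, threshold: float = 0.5) -> bool:
    """Treat a maturity goal as satisfied when the fraction of affirmative
    questionnaire responses meets a hypothetical cutoff."""
    return total > 0 and yes_answers / total >= threshold

def tmm_level(goal_results: dict) -> int:
    """Staged ranking sketch: return the highest level L such that every
    maturity goal at levels 2..L is satisfied. Level 1 is the floor, since
    it has no maturity goals.

    goal_results maps level number -> list of (yes, total) tuples, one per goal.
    """
    level = 1
    for lvl in sorted(goal_results):
        if all(goal_satisfied(y, t) for (y, t) in goal_results[lvl]):
            level = lvl
        else:
            break  # a failed goal blocks placement at this and higher levels
    return level

# All level-2 goals satisfied, but one level-3 goal falls short:
results = {2: [(8, 10), (7, 10)], 3: [(3, 10), (9, 10)], 4: [(1, 10)]}
print(tmm_level(results))  # 2
```

This also shows why the trial organizations ranked at level 1 despite satisfying some higher-level goals: in a staged model, isolated higher-level strengths do not raise the overall level until every lower-level goal is met.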
TMM EVALUATION AND FEEDBACK
Throughout the development of the TMM the authors received feedback from software engineers, software testers, managers, and software quality professionals from more than 35 organizations around the world. Comments confirmed the need for a TMM since most correspondents believe that existing process improvement models do not adequately support the concept of testing process maturity and do not sufficiently address the special issues relevant to testing process assessment and improvement. An important issue for many practitioners was integration of maturity models and process assessments that would result in: 1) a common architecture and vocabulary; 2) common training requirements; and 3) support for performance of parallel assessments in multiple process areas. Fulfilling these requirements would ensure effective use of organizational resources, both for the assessment and the process improvement efforts.
Initially the authors viewed the TMM as a complement to the SW-CMM. They believed it would simplify parallel process improvement drives in industry if both the SW-CMM and TMM had corresponding levels and goals. In addition, they believed (and still believe) that test process maturity is supported by, and benefits from, general process maturity. Therefore, as part of the initial TMM development effort they identified relationships between TMM/SW-CMM levels and supporting key process areas. A matrix showing these relationships appears in Figure 4 (Burnstein, Suwanassart, and Carlson 1996b).
In the course of their research, however, the authors realized that maturity model integration issues and intermodel support relationships are more complex than simple level correspondences. Meeting industry requirements for maturity model integration required focusing research efforts in a new direction. These efforts have resulted in the development of a framework for building and integrating process maturity models for software development subdomains such as design and testing. The framework includes procedures, templates, and checklists to support maturity model development and integration in a systematic way (Saxena 1999). A publication on the work accomplished in this project is currently being prepared.
The authors have been developing a TMM to help organizations assess and improve their software testing processes. Feedback from industry concerning the TMM shows a need for a specific focus on testing process maturity and a need for a specialized test process assessment and improvement model.
Now that the complete model has been developed and trial tested, there must be wider industrial application of the TMM-AM. This will provide the additional data necessary to further evaluate and adapt the TMM so that it becomes an accepted and effective tool for test process improvement. Plans for these case studies are now being developed.
The authors' future plans also include the development of a tester's workbench that will recommend testing tools to support achievement of the maturity goals at each TMM level, as well as refinement of the TMM to include additional testing process concepts, such as certification of components. Research on integration mapping of the TMM with other maturity models also continues. The latter is especially important since success in this area will allow organizations to carry out parallel assessment and improvement efforts in multiple process domains, thus making optimal use of organizational resources.
SAMPLE ATRs

The structure of the TMM is such that each maturity goal is supported by several maturity subgoals, which are achieved by a set of activities, tasks, and responsibilities (ATRs) assigned to the three groups that play key roles in the testing process: the managers, developers/testers, and user/client groups. Managers and developers/testers (including software quality professionals) are responsible for development, implementation, and organizational adaptation of the policies, plans, standards, practices, and organizational structures associated with the testing process. They receive support and/or consensus for these tasks and responsibilities from user/client groups. The following paragraphs describe an example of one set of ATRs. The complete set is described in Suwanassart (1996).
This example comes from level 2 of the TMM, Phase Definition. One of the maturity goals at this level is "Initiate a test-planning process." Examples of maturity subgoals associated with this goal are:
The Manager's View
Beizer, B. 1990. Software system testing techniques, second edition. New York: Van Nostrand Reinhold.
Bicego, A., and D. Kuvaja. 1993. Bootstrap, Europe's assessment method. IEEE Software 10, no. 3: 93-95.
Burnstein, I., T. Suwanassart, and C. Carlson. 1996a. The development of a testing maturity model. In Proceedings of the Ninth International Quality Week Conference. San Francisco: The Software Research Institute.
Burnstein, I., T. Suwanassart, and C. Carlson. 1996b. Developing a testing maturity model: Part 1. CrossTalk, Journal of Defense Software Engineering 9, no. 8: 21-24.
Burnstein, I., T. Suwanassart, and C. Carlson. 1996c. Developing a testing maturity model: Part 2. CrossTalk, Journal of Defense Software Engineering 9, no. 9: 19-26.
Coallier, F. 1994. How ISO 9001 fits into the software world. IEEE Software 11, no. 1: 98-100.
Durant, J. 1993. Software testing practices survey report (TR5-93). Software Practices Research Center.
El Emam, K., D. Goldenson, L. Briand, and P. Marshall. 1996. Interrater agreement in SPICE-based assessments: Some preliminary reports. In Proceedings of the Fourth International Conference on the Software Process. Los Alamitos, Calif.: IEEE Computer Society Press.
Gelperin, D., and B. Hetzel. 1988. The growth of software testing. Communications of the Association for Computing Machinery 31, no. 6: 687-695.
Gelperin, D., and A. Hayashi. 1996. How to support better software testing. Application Trends (May): 42-48.
Gelperin, D. 1996. What's your testability maturity? Application Trends (May): 50-53.
Grom, R. 1998. Report on a TMM assessment support tool. Chicago: Illinois Institute of Technology.
Hearns, J., and S. Garcia. 1998. Automated test team management: It works! In Proceedings of the 10th Software Engineering Process Group Conference. Pittsburgh: Software Engineering Institute.
Homyen, A. 1998. An assessment model to determine test process maturity. Ph.D. thesis, Illinois Institute of Technology.
International Organization for Standardization. 1995. ISO/IEC Software process assessment working draft-Part 3: Rating processes, version 1.0, Part 5: Construction, selection and use of assessment instruments and tools, version 1.0, Part 7: Guide for use in process improvement, version 1.0. Geneva, Switzerland: International Organization for Standardization.
Koomen, T., and M. Pol. 1998. Improvement of the test process using TPI. Available at www.iquip.nl.
Masters, S., and C. Bothwell. 1995. A CMM appraisal framework, version 1.0 (CMU/SEI-95-TR-001). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Paulk, M., C. Weber, B. Curtis, and M. Chrissis. 1995. The capability maturity model: Guidelines for improving the software process. Reading, Mass.: Addison-Wesley.
Paulk, M., and M. Konrad. 1994. An overview of ISO's SPICE project. American Programmer 7, no. 2: 16-20.
Paulk, M., B. Curtis, M. Chrissis, and C. Weber. 1993a. Capability maturity model, version 1.1. IEEE Software 10, no. 4: 18-27.
Paulk, M., C. Weber, S. Garcia, M. Chrissis, and M. Bush. 1993b. Key practices of the capability maturity model, version 1.1 (CMU/SEI-93-TR-25). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Puffer, J., and A. Litter. 1997. Action planning. IEEE Software Engineering Technical Council Newsletter 15, no. 2: 7-10.
Saxena, G. 1999. A framework for building and evaluating software process maturity models. Ph.D. thesis, Illinois Institute of Technology.
Suwanassart, T. 1996. Towards the development of a testing maturity model. Ph.D. thesis, Illinois Institute of Technology.
Thayer, R., ed. 1998. Software engineering project management, second edition. Los Alamitos, Calif.: IEEE Computer Society Press.
Walton, G., J. Poore, and C. Trammel. 1995. Statistical testing of software based on a usage model. Software: Practice and Experience 25, no. 1: 97-108.
Zubrow, D., W. Hayes, J. Siegel, and D. Goldenson. 1994. Maturity questionnaire (CMU/SEI-94-SR-7). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
* The CMM and SW-CMM are service marks of Carnegie Mellon University.
Ilene Burnstein is an associate professor of computer science at the Illinois Institute of Technology. She teaches both undergraduate and graduate courses in software engineering. Her research interests include: software process engineering, software testing techniques and methods, automated program recognition and debugging, and software engineering education. Burnstein has a doctorate from the Illinois Institute of Technology. She can be reached at Illinois Institute of Technology, Computer Science Department, 10 West 31st St., Chicago, IL 60616 or e-mail at firstname.lastname@example.org.
Ariya Homyen holds a research position at the Ministry of Science, Technology, and Energy in Thailand. She has a doctorate in computer science from the Illinois Institute of Technology. Her research interests include: test process improvement, test management, and process reuse.

Taratip Suwanassart is a faculty member at Chulalongkorn University in Thailand. She has a doctorate in computer science from the Illinois Institute of Technology. Her research interests include: test management, test process improvement, software metrics, and data modeling.
Gary Saxena is a member of the technical staff in the Telematics Communications Group at Motorola. He has a doctorate in computer science from the Illinois Institute of Technology. His research interests include: software architecture, software development and system development processes, and software process maturity modeling.
Robert Grom is manager of data collection for SAFCO Technologies. He has worked as a hardware engineer and now designs software. He has a master's degree from the Illinois Institute of Technology. Grom's research interests include software testing and test process improvement.