A Process Model of Software Quality Assurance/Software Quality Engineering

September 2000
Volume 2 • Number 4



A generic model of the software quality assurance (SQA) process may be used by an SQA organization as a training tool, an aid to estimation, or a means to select or develop a set of standards and templates for documentation. Junior quality engineers can be shown how and when SQA activities are performed throughout the life cycle and become productive in helping develop sections of a project SQA plan. The process model can provide a tool for constructing a work breakdown structure to use in estimating the cost (and schedule) for SQA activities on a proposed project. The process model can also be used as a guide to effective implementation of process improvement.

Key words: input-process-output (IPO) diagrams, planning, process improvement, project management, standards, training, work breakdown structure (WBS)

by Rodger Drabick, dakota imaging, inc.

INTRODUCTION

All managers and personnel who have worked in a software quality assurance (SQA) or software quality engineering (SQE) organization have a process that represents the way an SQA/SQE organization operates. Some mature software organizations have an SQA process that is defined by a set of standards for documentation and operations. Other, smaller firms may have a less well-defined and more ad hoc process, with the primary repository of process information residing in the minds of their SQA personnel. In both cases, a process does exist. Lacking any better definition, the process is “the way SQA is done here.”

An SQA/SQE process can be modeled graphically, using input-process-output (IPO) techniques. This article explains how these techniques were used on one project to produce an SQA/SQE process model that has general application for the software industry. The SQA process model was used as the basis of the SQA plan, which successfully guided and defined the activities of the small SQA group during a yearlong effort to elicit requirements for a large development project for a major corporation in the transportation industry. The activities of the SQA group, guided by this model, contributed greatly to the successful conclusion of that project. Estimates for follow-on work were then based on a work breakdown structure derived from the process model. The SQA process and this process model are based on the author’s 19 years of experience in this field, and on IEEE-Standard-730 and -1061 (IEEE 1989; 1992).

The top level of the SQA process is shown in Figure 1. This particular process model has eight subprocesses:

  1. Review program/project-level plans
  2. Develop quality assurance (QA) plan
  3. Coordinate metrics
  4. Coordinate risk program
  5. Perform audits
  6. Coordinate review meetings
  7. Facilitate process improvement
  8. Monitor test program

These subprocesses encompass a wider role for SQA than has been assumed in classical texts. The author believes that such a model is vital for effectively identifying, estimating, and performing SQA tasks. The key to effective SQA is the development of an SQA plan to serve as a contract between the project team and the SQA organization. The SQA plan must be based on a process model that identifies the specific tasks SQA needs to perform to effectively support the project team. As can be seen from Figure 2, some of the process elements are “classic” SQA (for example, perform audits), but others are wider in scope (for example, facilitate process improvement). The author has specifically included the facilitate process improvement element because, as in many organizations, this particular project had no formally chartered software engineering process group (SEPG). In addition, the author believes that facilitating process improvement is one of the best ways for an SQA group to provide a return on investment.

If a new SQA function is being created in an organization, this process model can be reviewed to identify the activities that organization should perform. The process model can then be tailored to fit that environment. A cautionary note: Do not try to do everything in this model if an SQA organization is just getting started. Begin, perhaps, by reviewing the project-level plans, and then develop an SQA plan to serve as a contract between SQA and the development organization. Next, provide support to peer review meetings, develop and report on significant metrics (Kaner 2000), and monitor the test program. Readers might also want to help management start a risk management program. For additional reading on risk management, see Jones (1994).

THE PROCESS MODEL

A technique to model processes is described in chapter 13 of Watts Humphrey’s book, Managing the Software Process (1989). The technique involves the use of IPO diagrams as pioneered at IBM.

Before embarking on a modeling effort, it is worthwhile to ask, “What would an organization do with a process model for software quality assurance?” Some possible uses include:

  • A tool to train new SQA engineers
  • A model to use in estimating (Drabick 1993)
  • A model to use in process improvement
  • A model to use to select or develop a set of standards and templates for SQA documentation
  • A tool to show the scope of SQA activities to new developers
  • A graphic for showing managers and developers the tasks SQA engineers perform

Figure 1 is a top-level IPO diagram of the SQA process. In classic Yourdon Structured Analysis terms, this can be compared to a “context diagram” or “level 0 data flow diagram.” This diagram provides a top-level overview of the SQA process, with its associated significant inputs and outputs. Note that this diagram is “top level” as far as inputs and outputs go. Detailed inputs and outputs will be found in lower-level diagrams.

Significant points in Figure 1 are:

  • The SQA process is based on relationships, interfaces, and tasks that are defined in other project plans (for example, project management plan, configuration management plan).
  • A significant task to be addressed by SQA is coordination of risk management to assist project management. This involves scheduling and convening regular meetings to address risk management, maintaining lists of active risks, and tracking status of risk mitigation plans for project management.
  • SQA must be actively involved in review meetings (including, but not limited to, peer reviews and formal design reviews with the customer).
  • The SQA organization must be actively involved in monitoring the test program on the project.
  • A number of significant formally controlled documents (for example, SQA plan, risk management plan, metrics plan) result from performance of the SQA process.

The top-level diagram shown in Figure 1 is fairly simple. Figure 2 shows what the author believes are the essential elements of a formal SQA program on a medium (greater than 20 thousand lines of source code [KSLOC] but less than 100 KSLOC) or large (greater than 100 KSLOC) development program or project.

The significant process elements identified in Figure 2 (level 1 IPO diagram) include:

  1. Review program/project-level plans
  2. Develop QA plan
  3. Coordinate metrics
  4. Coordinate risk program
  5. Perform audits
  6. Coordinate review meetings
  7. Facilitate process improvement
  8. Monitor test program

An IPO diagram can be developed for each of these eight processes.
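As an illustration only, the following Python sketch shows one way the IPO elements of the model might be captured in a simple data structure whose lowest-level elements can later feed a work breakdown structure. The ProcessElement class and its fields are assumptions made for this sketch; the published model prescribes no particular representation.

    # Minimal, hypothetical representation of an IPO process element.
    # The class and field names are illustrative assumptions that simply
    # mirror the input-process-output idea; the model prescribes no schema.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class ProcessElement:
        """One box in an IPO diagram: inputs, a named process, outputs."""
        name: str
        inputs: List[str] = field(default_factory=list)
        outputs: List[str] = field(default_factory=list)
        subprocesses: List["ProcessElement"] = field(default_factory=list)

        def leaf_tasks(self) -> List["ProcessElement"]:
            """Return the lowest-level elements, which map to WBS tasks."""
            if not self.subprocesses:
                return [self]
            tasks = []
            for sub in self.subprocesses:
                tasks.extend(sub.leaf_tasks())
            return tasks


    # Example: the top-level SQA process with two of its eight subprocesses.
    sqa = ProcessElement(
        name="SQA process",
        inputs=["Project plans", "Standards/templates"],
        outputs=["SQA plan", "Audit reports"],
        subprocesses=[
            ProcessElement("Review program/project-level plans"),
            ProcessElement("Develop QA plan"),
        ],
    )
    print([task.name for task in sqa.leaf_tasks()])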

THE SIGNIFICANT PROCESS ELEMENTS

Review Project Plans

Figure 3 shows the top-level IPO diagram for review project-level plans. As will be seen in subsequent discussion, review program/project-level plans and develop QA plan are closely linked. The project-level plans must be reviewed prior to publication. Since publication of the SQA plan occurs in the same time frame as publication of these other project-level plans, review of the project-level plans is necessary to determine that they are consistent, correct, and of acceptable quality. This review also identifies the top-level tasks assigned to SQA in these other plans, and identifies the interfaces between SQA and other groups in the development organization.

In Figure 3, significant inputs include:

  • Project plans
    1. Configuration management (CM) plan
    2. Project management plan
    3. Software development plan
  • CM system
  • Standards/templates

The test plan is not reviewed at this point in the project; it is not normally written until after the requirements are generated. Thus, the test plan is developed later than the document review that is taking place in this process.

There are three subprocesses that are components of the review project-level plans process. They include:

  1. Review the project management plan
  2. Review the CM plan
  3. Review the software development plan

Members of the SQA organization should review each of these plans and report on the results of the review. The review should determine that the project plans are complete, consistent, and correct. The correctness review should concentrate on determining that the individual plans define interfaces to SQA functions. The results of the review can be conveyed as redlines to the existing document, as a formal report listing the issues and areas of concern, or as comments presented at an inspection or walkthrough of the document.

Outputs from this process are simple:

  • Documented list of issues found while reviewing the various plans
  • SQA-approved project plans

Develop QA Plan

Figure 4 shows the IPO diagram for the develop QA plan process. As a result of this process, SQA tasks to be performed on the program or project are formally documented. The SQA plan provides a road map to the activities that the SQA organization will perform. Essentially, the SQA plan serves as a formal contract between the SQA organization and the other groups on the project, specifying what tasks will be performed, according to what schedule.

Required inputs include:

  • Software development life cycle, as defined in the project management plan
  • Project plans
    1. Project management plan
    2. CM plan
    3. Software development plan
  • Schedules
  • Standards (specifically, IEEE-Standard-730, Standard for Software Quality Assurance Plans)

There are 13 subprocesses associated with SQA planning. The first 11 subprocesses define the activities to develop 11 sections that make up an SQA plan, as defined in IEEE-Standard-730. While the first four subprocesses can be considered “boilerplate,” the next seven (through define records approach) contain the key elements in the SQA plan and program. The final two subprocesses identify the tasks required to document, assemble, and review the SQA plan.

  1. Develop QA plan introduction. Develop QA plan introduction identifies the task of writing the introduction section of the SQA plan. The plan will contain the purpose and scope of the SQA plan, and will list the software item(s) covered by the plan and their intended use. Further details of the contents of this section can be found in IEEE-Standard-730.
  2. Create management section. Create management section identifies the task of writing the management section of the SQA plan. This section will describe organization, tasks, and management responsibilities. Further details of the contents of this section can be found in IEEE-Standard-730.
  3. Identify documentation requirements. Identify documentation requirements defines the task of writing the documentation requirements section of the SQA plan. This section will identify the minimum documentation requirements for the development, validation, verification, use, and maintenance of the software. Further details of the contents of this section can be found in IEEE-Standard-730.
  4. Identify standards. Identify standards identifies the task of writing the standards section of the SQA plan; this section will identify the standards, practices, and conventions used on the project, and the method of verifying compliance with those standards, practices, and conventions. Further details of the contents of this section can be found in IEEE-Standard-730 (IEEE 1989). Obviously, IEEE-Standard-730 (IEEE 1989) and IEEE-Standard-1061 (IEEE 1992) should be listed in this section.
  5. Specify reviews and audits. Specify reviews and audits is a critical part of an SQA effort. This process shows project personnel what formal review meetings will be held; it shows what the role of SQA personnel will be in those reviews and audits. This process also shows at what points in the development schedule reviews will be conducted (that is, the system requirements review meeting is conducted at the conclusion of the requirements elicitation process but prior to start of formal design). This subprocess will identify requirements for peer reviews, including inspections and walkthroughs. Also, this subprocess identifies the formal and informal audits that will be conducted by SQA. Note that the SQA plan should be consistent with the project management plan and the software development plan, relative to the reviews and audits. That is why the review project-level plans process is so critical.
  6. Review CM interface. Review CM interface is important because configuration management is part of what the author calls the quality support infrastructure. CM must make sure that project documents (such as the project management plan, the SQA plan, and the test plan) are maintained under change control. SQA should perform regular but random audits of the CM database and CM process to help assure management that CM is doing its job.
  7. Review defect reporting. Review defect reporting is necessary because SQA must provide or participate in a defect reporting process that allows for tracking and correcting defects found on the project. This portion of the process must specify if defects identified in peer reviews (inspections, walkthroughs, and so on) will be tracked and reported on in the same manner as defects found during formal testing. This process must also specify if defects found in developer testing (unit and integration testing) will be formally tracked as well.
  8. Develop metrics strategy. Develop metrics strategy identifies the strategy that will drive development of a metrics plan (a separate document) for the project. Once again, this strategy should be published so all members of the project management team know what metrics data will be collected, analyzed, and reported on.
  9. Identify tools and techniques. Identify tools and techniques is intended to acquaint project personnel and managers with the tools and techniques that SQA will use to provide support to the project. Tools can include databases and requirements evaluation tools; techniques can include those for requirements traceability, basis path analysis, and so on.
  10. Define supplier control. Define supplier control identifies the methods that will be used to control the quality of products (hardware and software) received from suppliers. These can include, but are not limited to, supplier surveys and audits, incoming inspections, and witnessing of acceptance tests at the supplier facility.
  11. Define records approach. Define records approach identifies the strategy for maintaining quality records. This can include use of an SQA database, as well as defining for what length of time SQA records will be maintained. This section could contain links to the project CM plan.
  12. Document SQA plan. Document SQA plan defines the activities for writing the SQA plan, once the various sections are created in the previous subprocesses. The document should be internally reviewed within the SQA/SQE group before it is submitted to other groups on the project for review.
  13. Review and approve SQA plan. Review and approve SQA plan identifies the activities for reviewing the SQA plan by project management and other groups on the project. This review can take place as a desk check or can be conducted as a peer review. Obviously, this gives those managers the opportunity to perform the “inverse” of the earlier review of their plans by SQA (process 1.1 in Figure 2). Finally, the SQA plan should be submitted to the CM group for processing and archiving.

Use of the Process Model in Estimating SQA Activities on a Program


Once a set of tasks for SQA activities is identified, the SQA group can develop cost and schedule estimates for each task. If the organization has maintained sufficiently detailed metrics data on its expenditures for each task over the years, these data can be used to provide confidence in the estimates. Otherwise, engineering judgment will have to be used to develop the cost and schedule estimates. Though estimates based on engineering judgment carry lower confidence, both the manager doing the estimating and the customer can be confident that the complete set of tasks has been estimated.
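The sketch below illustrates, under assumed task names and hour figures, how a work breakdown structure derived from the process model might be rolled up into a total effort estimate, with the judgment-based portion flagged so its lower confidence stays visible. It is a minimal sketch, not the estimation procedure used on the project described here.

    # Hypothetical roll-up of SQA task estimates from a WBS derived from
    # the process model. Task names, hours, and bases are illustrative only.
    wbs_estimates = [
        # (task, estimated hours, basis of estimate)
        ("Review program/project-level plans", 40, "historical metrics"),
        ("Develop QA plan", 80, "historical metrics"),
        ("Coordinate metrics", 120, "engineering judgment"),
        ("Coordinate risk program", 100, "engineering judgment"),
        ("Perform audits", 160, "historical metrics"),
        ("Coordinate review meetings", 140, "engineering judgment"),
        ("Facilitate process improvement", 90, "engineering judgment"),
        ("Monitor test program", 200, "historical metrics"),
    ]

    total_hours = sum(hours for _, hours, _ in wbs_estimates)
    judgment_hours = sum(hours for _, hours, basis in wbs_estimates
                         if basis == "engineering judgment")

    print(f"Total SQA effort estimate: {total_hours} hours")
    print(f"Judgment-based (lower-confidence) portion: "
          f"{judgment_hours / total_hours:.0%}")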

The testing process model (Drabick 1995) was initially developed to support an estimate on a large development program for a government customer in 1992.


Coordinate Metrics

This subprocess defines the activities and tasks that SQA performs while coordinating and collecting metrics. Figure 5 shows the IPO diagram for the coordinate metrics process. Inputs include:

  • Project management plan
  • Defect tracking system
  • Metrics database
  • Product metrics data
  • Process metrics data
  • Standards

There are nine subprocesses associated with coordinate metrics. They involve developing a metrics strategy; defining a schema for the metrics database (possibly as a subsection of a fully featured SQA database); developing and documenting a metrics plan; and collecting, analyzing, and reporting on metrics data. Both process and product metrics should be addressed. Metrics reports must be published in a timely manner so that any course corrections indicated by the metrics can be made quickly.

  1. Develop metrics strategy. Develop metrics strategy should be based on the metrics approach identified in the SQA plan. The strategy should add detail to the metrics approach; the strategy should also be linked to information from the project management plan, the CM plan, and the test plan. Proposed process and product metrics should be identified. IEEE-Standard-1061 (IEEE 1992), Standard for a Software Quality Metrics Methodology, can be used as a guide in developing the metrics strategy.
  2. Create metrics database schema. Create metrics database schema is necessary to create a repository for metrics and measurement data. The metrics database may end up as a component of a project or enterprisewide QA database. (A minimal schema sketch follows this list.)
  3. Document metrics plan. Document metrics plan identifies the process that will be followed to publish a documented metrics plan for the project. This metrics plan will be based on the metrics strategy. The process and product measurement data that will be collected are defined in the metrics plan.
  4. Review metrics plan. Review metrics plan defines the process that will be used to review the metrics plan prior to its publication. This review can involve functions of a change control board, or can be performed as an inspection or walkthrough.
  5. Collect measurement data. Collect measurement data identifies the processes that will be followed to collect the raw measurement data from which metrics will be calculated. The data to be collected are defined in the metrics plan document.
  6. Compute metrics. Compute metrics defines the processes that will be used to calculate the process and product metrics from the process and product measurement data collected in the previous subprocess.
  7. Evaluate trends. Evaluate trends identifies the process that will be followed to review the metrics values over time to see if corrective action by management is necessary. This subprocess may involve creating graphical displays of metrics data on a daily, weekly, or monthly cycle to identify trends.
  8. Issue metrics report. Issue metrics report defines the process used to periodically report on metrics data and trends. The timing and contents of the report are identified in this subprocess. These reports should be issued regularly throughout the project life cycle. The distribution list of project and nonproject personnel is also identified in this subprocess.
  9. Update metrics process and plans. Update metrics process and plans identifies the process that will be followed to determine that required modifications are made to the metrics process and plans. One of the primary purposes of an SQA group is to monitor processes (including its own) and improve those processes when required.
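The following sketch suggests what the metrics database and trend evaluation described in subprocesses 2, 6, and 7 might look like in miniature, using SQLite and a simple defect-density metric. The table layout, column names, and figures are assumptions for this sketch; the model does not prescribe a schema or a particular metric.

    # Minimal sketch of a metrics database and one computed metric
    # (defect density per KSLOC). Table and column names are assumptions;
    # the model does not prescribe a schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE measurements (
            period   TEXT,     -- reporting period, e.g. '2000-09'
            defects  INTEGER,  -- defects reported in the period
            ksloc    REAL      -- size of the code base in KSLOC
        );
        INSERT INTO measurements VALUES
            ('2000-07', 42, 18.0),
            ('2000-08', 55, 24.5),
            ('2000-09', 61, 31.0);
    """)

    # Compute defect density for each reporting period and note the trend.
    rows = conn.execute(
        "SELECT period, 1.0 * defects / ksloc FROM measurements ORDER BY period"
    ).fetchall()
    for period, density in rows:
        print(f"{period}: {density:.2f} defects/KSLOC")

    trend = "improving" if rows[-1][1] < rows[0][1] else "needs attention"
    print(f"Defect density trend: {trend}")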

Outputs from the process include:

  • Approved metrics plan
  • Metrics database
  • Regular metrics status reports

Coordinate Risk Program

A significant activity performed by an SQA organization is coordination of a formal risk management program on the project. One must remember that ownership of risk management belongs to upper-level project management. SQA can provide invaluable assistance to management, however, through coordination and day-to-day administration of risk management. This coordination can involve a number of activities, including but not limited to:

  • Scheduling regular meetings to address risk issues
  • Chairing the risk meetings
  • Performing surveys of the development team (including the SQA and test engineering team) to identify areas of risk
  • Maintaining a database of risk issues
  • Collecting and tracking risk reduction plans

The level 2 IPO diagram for coordinate risk program is shown in Figure 6. Inputs include:

  • Project-level plans
    1. Project management plan
    2. CM plan
    3. Software QA plan
    4. Test plan
  • Process risk
  • Product risks
  • Risk database

Risk types are defined as follows:

  • Known unknowns. Risks due to conditions that the team “knows” it does not have complete information about (for example, an accurate schedule to develop software to drive a new type of printer, where the software/hardware interface is undefined)
  • Unknown knowns. Risks associated with conditions that the development team believes it has full information about, but the information is not correct (for example, a vendor professes to have a critical software component fully developed, but that component is actually “vaporware”)
  • Unknown unknowns. Risks due to conditions that the development team does not know it does not know about (for example, a user requirement that the users said nothing about when the requirements set was being developed; without implementation of this requirement, the system will be unsatisfactory)

Note that “unknown unknowns” are the most difficult type of risks to identify and deal with. Not only does the team not know about these risks, but the team does not even know there is a risk. These types of risks can result in unpleasant surprises for project managers and the team.

The coordinate risk program process contains nine subprocesses.

  1. Develop risk plan. Develop risk plan and review risk plan define the process for creating, documenting, and reviewing a plan to deal with project risks. The risk plan should identify the agencies responsible for coordinating risk management and the interfaces between those agencies. SQA should have a prominent role in coordinating risk management; this role should be identified in the risk plan. Methods of identifying risks and developing plans to mitigate those risks should be identified. Regular management team meetings to review and manage risks should be defined.
  2. Review risk plan. Review risk plan defines the process that will be used to review the risk plan before it is published.
  3. Evaluate plans, schedules for risk. Evaluate plans, schedules for risk is necessary to implement the risk plan for identifying risks. Risks can be categorized as process and product risks. These can arise from three sources: 1) known unknowns; 2) unknown knowns; and 3) unknown unknowns. Three factors are necessary for a risk to exist: 1) a future event; 2) the probability of the event; and 3) the probability of loss. (A risk exposure sketch based on these factors follows this list.) These sources and risk factors must be evaluated regularly throughout the project. Risks change as the project progresses. For example, in the early stages of a project, test engineering might identify a risk due to lack of a dedicated test environment. Once such a test environment is obtained and activated, that risk has been eliminated.
  4. Collect process and product risks. Collect process and product risks is required to identify the risks facing the project. The risk plan will identify the means of collecting these risks. This will be an ongoing process throughout the project.
  5. Establish risk database. Establish risk database is required to create a repository for risk identification and mitigation data, including risk mitigation plans. The risk database may end up as a component of a project or enterprisewide SQA database.
  6. Perform risk assessment. Perform risk assessment identifies the activities performed to identify risks, analyze those risks, and prioritize them. The management risk team repeats this process on a regular basis, for the duration of the project.
  7. Coordinate risk control. Coordinate risk control identifies the process performed by SQA to coordinate activities to control the risks that were assessed earlier. Thus, these activities involve risk management planning, risk resolution, and risk monitoring. A risk control plan can be documented for each risk identified. During the regular risk meetings, progress of each plan for controlling the risks will be assessed and evaluated. Priorities for completing these plans can be expected to vary as the project progresses.
  8. Coordinate risk meetings. Coordinate risk meetings defines how SQA supports regular risk meetings on the project. These meetings should involve a cross-functional group of managers (that is, the project manager, the development manager(s), the SQA manager, the test engineering manager, the configuration manager, and so on) who can effectively discuss the risks and the actions being taken to control those risks. As the list of risks is updated, these managers will reprioritize the risks and assign actions for risk control plans.
  9. Issue risk reports. Issue risk reports identifies how risks are reported on the project. These reports can document the results of risk meetings, and can summarize data from the risk database and the results of risk control plans.
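One common way to prioritize assessed risks, consistent with the probability and loss factors noted in subprocess 3, is a risk exposure calculation (probability of occurrence multiplied by expected loss). The sketch below is illustrative only: the risk entries and figures are hypothetical, and the exposure formula is a widely used convention rather than something mandated by this model.

    # Hypothetical risk register with a simple exposure calculation
    # (exposure = probability of occurrence x expected loss).
    # Entries and figures are illustrative only.
    risks = [
        # (risk description, probability 0-1, estimated loss in staff-days)
        ("Printer interface undefined (known unknown)", 0.6, 30),
        ("Vendor component may be vaporware (unknown known)", 0.3, 90),
        ("Dedicated test environment unavailable", 0.4, 20),
    ]

    # Rank by exposure so the risk meeting can address the top items first.
    ranked = sorted(
        ((description, probability * loss)
         for description, probability, loss in risks),
        key=lambda item: item[1],
        reverse=True,
    )
    for description, exposure in ranked:
        print(f"{exposure:5.1f} staff-days  {description}")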

Outputs from the coordinate risk program process include:

  • Risk process
  • Risk status
  • Risk management plan
  • Minutes from risk meetings
  • Risk action-item lists

Perform Audits

The process for perform audits is one of the few activities remaining in modern quality assurance that can be described as a classic “policeman” activity. The level 2 IPO diagram for perform audits is shown in Figure 7. Inputs include:

  • Audit materials (including requirements, criteria, checklists)
  • CM system
  • Defect tracking system
  • Process templates
  • Project plans, including the test plan(s)
  • Risks
  • Standards and templates

SQA as a Policeman


When originally established, most QA/QC organizations were chartered with a primary focus on quality control; QC personnel acted as “policemen” responsible for seeing that manufactured items/computer software met specifications. More recently, the QC function has been shrinking in importance, while the importance of quality assurance has been growing. The two police functions remaining to QA in this model are shown in perform audits and coordinate review meetings.

Remember that early in this article it was mentioned that the process being described is appropriate for medium- and large-scale projects/programs. Any projects that are being performed for a government customer (for example, NASA) will have requirements for a physical configuration audit (PCA) and a functional configuration audit (FCA), prior to delivery of the system.

A fully engaged SQA organization will also perform a number of less formal audits. These should be performed according to a documented and approved audit plan (so that no one is surprised), and will include process and product audits. Process audits can include audits of the CM database. Product audits can include review of operator manuals. The perform audits process contains nine subprocesses.

  1. Review project plans. Review project plans identifies the activities conducted to review the project management plan, the CM plan, the SQA plan, and the test plan to determine what audits have been or should be specified based on these plans. These audits will then be identified and documented in the audit plan.
  2. Develop audit plan. Develop audit plan and review audit plan define the process for creating, documenting, and reviewing a plan to identify and perform audits on the project. The audit plan should identify the formal and informal, scheduled and randomly occurring audits that will be performed. There will be a specific section for the physical and functional configuration audits, if these are required by contract. SQA is generally the agency responsible for conducting audits. The process of conducting audits should be specified in this plan.
  3. Review audit plan. Review audit plan defines the process that will be used to review the audit plan prior to its publication.
  4. Establish audit database. Establish audit database is required to create a repository for audit information. The audit database may end up as a component of a project or enterprisewide QA database.
  5. Perform process audits. Perform process audits defines the activities that will be performed when conducting a formal or informal process audit. An example of such a process audit is an audit to verify that test engineering is performing the steps documented in the test procedure during system testing. Another example is process audits of code inspections. An audit report should be issued as a result of each process audit performed, noting areas where the process is being adequately performed, and identifying areas for process improvement.
  6. Perform product audits. Perform product audits identifies the activities that will be performed when conducting a formal or informal product audit. An example of such a product audit is an audit of a software requirements specification. An audit report should be issued as a result of each product audit performed, noting areas where the product is acceptable and identifying areas where the product is deficient.
  7. Perform PCA. Perform physical configuration audit defines the activities that will be performed when conducting a formal PCA. A PCA is a formal audit, usually performed in cooperation with a (government) customer. The intent of a PCA is to verify that a configuration item, as built, conforms to the technical documentation that defines that configuration item. An audit report should be issued at the conclusion of the PCA.
  8. Perform FCA. Perform functional configuration audit identifies the activities that will be performed when conducting a formal FCA. An FCA is a formal audit, usually performed in cooperation with a (government) customer. The intent of an FCA is to verify that development of a configuration item has been completed satisfactorily, and that the configuration item has achieved the performance and functional characteristics specified. Thus, successful completion of a system acceptance test could be considered an element of an FCA. An audit report should be issued at completion of the FCA.
  9. Update audit process. Update audit process identifies the process that will be followed to determine that required modifications are made to the audit process and associated plans. One of the primary purposes of an SQA group is to monitor processes (including their own) and improve those processes when required.

Outputs from this process include:

  • Audit checklists
  • Audit issues
  • Audit reports
  • Action items
  • Updated audit process

Coordinate Review Meetings

An SQA organization is usually chartered with coordinating review meetings on a project/program where these meetings are formally controlled. Figure 8 shows the IPO diagram for the coordinate review meetings process. During reviews, a specific task for SQA personnel is to evaluate the process used in performing the reviews and report on that evaluation to management.

Inputs include:

  • Checklists
  • Project management plan
  • Test plan
  • Review data packs
  • Review schedules
  • Standards
  • Templates

The coordinate review meetings process contains nine subprocesses.

  1. Verify peer review schedule. Verify peer review schedule identifies the activities performed to review the schedule for peer reviews (for example, inspections) on the project. These reviews may be performed on requirements, design information, code, test plans, test designs, test cases, and other deliverables. The peer review schedule may be published in the project management plan, the software development plan, and/or in a scheduling package (for example, Microsoft Project).
  2. Develop (peer review) agenda template. Develop agenda template defines the process for creating and documenting an agenda or set of agendas for peer reviews. The agendas can vary for peer reviews held at different points in the software development life cycle (for example, requirements, design, code). These agendas should specify the contents of the peer review data package, the required attendees at the peer review meetings, and the events that must occur during the peer review.
  3. Support peer review meetings. Support peer review meetings defines the activities performed by the SQA personnel at the peer review meetings. Note that peer reviews can include a variety of meetings, including requirements walkthroughs/inspections, design walkthroughs/inspections, code walkthroughs/inspections, test documentation reviews, and others. In addition to being a reviewer, SQA personnel can serve in the role of recorder or standards-enforcer. Note that in no case should one person serve two roles at the same time. For example, it is extremely difficult, if not impossible, for a person to be both a recorder and reviewer. A significant role for SQA personnel during this process is to “evaluate the process” being performed during the review. The results of this evaluation should be contained in the minutes that are issued following the meeting(s).
  4. Track (peer review) action items. Track action items identifies the tasks to be performed subsequent to the peer review meeting to track (to completion) action items resulting from the meeting. Regular status reports on action item status should be published until the final action item is completed.
  5. Verify design review schedule. Verify design review schedule identifies the activities performed to review the schedule for design review meetings (for example, critical design review) on the project. These reviews may be performed at the conclusion of the requirements analysis phase, design phase, and/or coding phase of the project life cycle. The design review schedule should be published in the project management plan, and/or in a scheduling package.
  6. Develop design review agenda template. Develop design review agenda template defines the process for creating and documenting an agenda or set of agendas for design review meetings. These agendas should specify the contents of the design review data package, the required attendees at the design review meetings, and the events that must occur during the design review.
  7. Coordinate design review data packs. Coordinate design review data packs identifies the activities performed by SQA in coordinating assembly of the material to be reviewed at the design review meetings (for example, design review data packs). Often on government contracts, the material required for a specific design review meeting (for example, software requirements review meeting) is defined in the contract and/or statement of work. SQA has the responsibility to see that this material is assembled and reproduced, and to verify that the appropriate number of copies is provided to customer personnel. Distribution of the data pack must take place in a contract-specified number of days before the date the meeting is held.
  8. Support design review meetings. Support design review meetings defines the activities performed by the SQA personnel at the design review meetings. As well as being a reviewer, SQA personnel generally serve in the role of recorder. A lead SQA engineer often serves as chairperson of the formal design review meeting. A significant role for SQA personnel during this process is to “evaluate the process” being performed during the review. The results of this evaluation should be contained in the design review report that is issued following the meeting(s).
  9. Track design review action items. Track design review action items identifies the tasks to be performed subsequent to the design review meeting to track (to completion) action items resulting from the meeting. Regular status reports on action item status should be published until the final action item is completed.

Outputs from this process include:

  • Action items
  • Review agendas (for both peer reviews and design reviews)
  • Review status reports (for both peer reviews and design reviews)

Facilitate Process Improvement

A critical function of SQA organizations in the current age is to facilitate process improvement. Figure 9 shows the IPO diagram for facilitate process improvement. This subprocess is only applicable for an SQA group when the program/project organization does not have, or does not have access to, an SEPG. Even if a corporate-level SEPG exists, it can be advantageous for the program/project dedicated SQA group to be responsible for some of the process improvement functions. That way, more general corporate-level initiatives can be tailored and adjusted to the needs of specific projects.

Inputs include:

  • Assessment checklists
  • Current processes
  • ISO 9000
  • SEI (Software Engineering Institute) Capability Maturity Model (CMM)
  • Information Technology Business Group (ITBG) Testing Capability Maturity Model©
  • Software Process Improvement and Capability Determination (SPICE) model

The facilitate process improvement process contains eight subprocesses.

  1. Review project plans. Review project plans identifies the activities conducted to review the project management plan, the software development plan, the CM plan, the SQA plan, and the test plan to determine the various processes in use on the project. These processes will then be reviewed to determine the opportunity for process improvement.
  2. Identify process improvement opportunities. Identify process improvement opportunities defines the activities that will take place to identify opportunities for process improvement on the project. Techniques/templates to be used may include the SEI CMM©, the ISO 9000 series of standards, SPICE, and/or the ITBG Testing Capability Maturity Model©. Meetings should be held with managers and technical contributors from various teams to understand where each team sees opportunities for improvement.
  3. Develop process improvement plan. Develop process improvement plan identifies the process followed to prepare a process improvement plan for the opportunities developed in the previous subprocess, and get management approval. For each process selected, the process improvement plan should state:
    • The process to be evaluated
    • Assessment of the process
    • The events to be followed to implement the process improvements
    • The method of monitoring the events
  4. Prepare for assessment. Prepare for assessment defines the activities that will take place to prepare for the assessment of each process. Guidance in this process can be obtained by consulting publications from the SEI (SEI 2000). This should include, as a minimum:
    • Development of an assessment checklist
    • Development of interview questions
    • Selection of an assessment team
    • One or more meetings of the assessment team, prior to the assessment
    • Definition of the format of the final assessment report
    • Preparation of an in-briefing (for managers and team to be assessed)
  5. Perform assessment. Perform assessment identifies the activities conducted to perform the assessment. In most cases, completion of an assessment will take one week of elapsed time, with additional time required to prepare and submit the final assessment report. Guidance in this process can be obtained by consulting publications from the SEI (SEI 2000). Assessment activities will include:
    • Introductory briefing to project managers
    • Introductory briefing to team members being assessed
    • Completion of the assessment checklist by team members being assessed
    • Quick-look processing of checklist data by assessment team
    • Interviews of selected team members being assessed
    • Quick-look review of interview data
    • Preparation of quick-look report (including action plans)
    • Outbriefing to team members being assessed
    • Quick-look outbriefing to managers

The activities labeled “quick-look” will be carried out within the time window of the assessment. Detailed analysis of the assessment data will take place after the assessment has been completed. This processing is described in the next subprocess.

  6. Process assessment result. Process assessment result defines the activities that will take place to process the assessment data. This is a two-phase process. Some data processing will occur in real time, so that a quick-look outbriefing can be prepared for the team members being assessed and their managers before the closure of the assessment time window. The lengthier part of the processing will occur while the final assessment report is being prepared, and will also include preparation of this report. Normally, the final assessment report will be submitted to management within 30 days of completing the assessment.
  7. Monitor action plan progress. Monitor action plan progress identifies the process to regularly review the progress of the various process improvement teams against the action plans specified in the final assessment report. This status should be reported both to management and to the teams. Note that a number of action plans may be developed in each area where process improvement is being performed. In no case should an action plan contain more than three actions to be completed. This minimizes disruption to the team working on the action plan, since there is a limited number of actions to address at the same time. When one action is completed, another can be initiated, until the entire set of actions identified in the assessment report is implemented.
  8. Update action plans. Update action plans defines the activities that will take place to update action plans. As actions are completed, the next action on the list in the assessment report will be added to the action plan. If metrics indicate that an action is having a negative effect, that action should be backed out, and the plan examined and corrected.

Outputs from this process include:

  • Action items
  • Assessment reports
  • Process improvement recommendations
  • Periodic status reports

Monitor Test Program

The final subprocess within the QA process is monitor test program. This subprocess is designed around an SQA group that does not have responsibility for testing; see Drabick (1995) for more information on a process model for formal testing. It is logical for the SQA organization to take responsibility for assuring that the testing process is performed according to the documented and approved test documentation (test plans, test designs, test cases, and test procedures). Figure 10 shows the subprocesses involved in the monitor test program process.

Inputs include:

  • Standards
  • Test documentation
    1. Test plan
    2. Test design
    3. Test cases
    4. Test procedures (if applicable)
    5. Test summary report
  • Test output data
  • Test process

The monitor test program process contains eight subprocesses.

  1. Establish test metrics database. Establish test metrics database is required to create a repository for metrics pertaining to testing information. The test metrics database may end up as a component of a project or enterprisewide QA database.
  2. Collect test metrics. Collect test metrics identifies the activities to identify and collect measurements and metrics from testing on a regular basis; these data will then be stored in the test metrics database. Measurement data can be as simple as the number of defects or the number and status of test cases executed, or as complex as the time and cost to repair defects. (A minimal collection sketch follows this list.)
  3. Report test metrics. Report test metrics defines the process to regularly report the test metrics information collected in the previous subprocess. These data should be reported to the test team, as well as the testing manager, project manager, and SQA manager.
  4. Review test documentation. Review test documentation identifies the activities to read and critique the test documentation, including test plans, test designs, test cases, test procedures, and test summary reports. This review can occur either as a “desk check” or as part of a formal peer review. These activities will be performed at various times within the testing life cycle.
  5. Monitor test execution. Monitor test execution defines the process to oversee execution of the test cases or test procedures. This oversight is a bit of a policeman role for SQA. It is intended to provide an objective witness that the test engineering group is following the test documentation while performing the test, and is properly documenting defects/failures encountered during the test.
  6. Plan test process improvement. Plan test process improvement identifies the activities to evaluate the testing process, and assist the test engineering group in improving its process so it can test more effectively. Essentially, this subprocess is a subset of the facilitate process improvement process, focused on the test engineering process.
  7. Assess test process. Assess test process defines the activities required to assess the testing process used by test engineering. This process should mirror that defined in perform assessment and process assessment result, focused on the test engineering process.
  8. Develop test assessment report. Develop test assessment report identifies the activities to analyze data and compile the final assessment report on the testing process. This report should include a set of recommended process improvements, which can be turned into a series of action plans.
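As a small illustration of the collection and reporting described in subprocesses 2 and 3, the sketch below tallies test-case execution status and a pass rate. The status values and data are assumptions for the sketch, not prescribed measures.

    # Hypothetical tally of test-case execution status for a periodic
    # test metrics report. Status values and counts are illustrative.
    from collections import Counter

    test_case_results = [
        ("TC-001", "passed"), ("TC-002", "failed"), ("TC-003", "passed"),
        ("TC-004", "blocked"), ("TC-005", "passed"), ("TC-006", "not run"),
    ]

    status_counts = Counter(status for _, status in test_case_results)
    executed = status_counts["passed"] + status_counts["failed"]

    print("Test execution status:", dict(status_counts))
    if executed:
        print(f"Pass rate of executed cases: "
              f"{status_counts['passed'] / executed:.0%}")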

Outputs from this process include:

  • Test assessment report (including process improvement recommendations)
  • Issues

CONCLUSIONS

This article presents a generic, top-level model of the SQA/SQE process. A specific SQA organization should tailor the model to fit its own environment and organization. Future work for readers could include developing level 3 IPO diagrams for each of the level 2 subprocesses.

Use of this SQA process model depends on the maturity of a specific SQA group. For a mature SQA group that has a fully documented functional process, this article could be used as a sanity check. If there is anything in this process that the SQA group is not doing but which seems valuable, the group should add that process to its operations. For a mature group with a functional process that is not documented, these diagrams could be tailored to match the existing process. Then the process could be formally documented.

As mentioned earlier, this model has a number of significant uses. Using the SQA process model as a training aid for junior SQA engineers is one application. The graphics allow experienced SQA managers or lead engineers to illustrate how and when SQA activities are performed throughout the life cycle. Using the SQA process model in conjunction with IEEE-Standard-730 (IEEE 1989), experienced team members can work together with junior engineers in developing sections of the SQA plan for the project.

The SQA process model provides a tool for constructing a work breakdown structure to use in estimating the cost (and schedule) for SQA activities on a proposed project. The model establishes the framework and list of SQA tasks. Then completing the estimate becomes an exercise in using existing metrics data or engineering judgment to determine the elapsed time and staffing levels to complete each task. Finally, each task can be entered into a schedule tracking package so activities can be scheduled and tracked on an ongoing basis.

The SQA process model can be used as a guide to effective implementation of process improvement. The process model as presented in this article can serve as the “ideal”; a process model can also be developed that represents the current process in use by an SQA organization. Comparing the current model against the ideal provides a graphic illustration of potential areas to explore for process improvement.


REFERENCES

Drabick, R. 1993. A process model tool for estimating software costs. American Programmer 6, no. 6.

Drabick, R. 1995. Modeling the formal testing process. Software QA Magazine 2, no. 2.

Humphrey, W. 1989. Managing the software process. Reading, Mass.: Addison-Wesley.

IEEE. 1989. IEEE Standard for Software Quality Assurance Plans, IEEE-Standard-730-1989. New York: IEEE.

IEEE. 1992. IEEE Standard for Software Quality Metrics Methodology, IEEE-Standard-1061-1992. New York: IEEE.

Jones, C. 1994. Assessment and control of software risks. Englewood Cliffs, N. J.: Yourdon Press Computing Series.

Kaner, C. 2000. Rethinking software metrics. Software Testing and Quality Engineering Magazine 2, no. 2.

Software Engineering Institute (SEI). 2000. Conducting SEI-assisted software process assessments. Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

BIOGRAPHY

Rodger Drabick is a senior test engineer at dakota imaging, inc. He is responsible for testing imaging applications and process improvement. Previously, he was a consultant specializing in configuration management, software quality assurance, and test engineering, working on contracts at Amtrak, Defense Investigative Services, the FAA, and other assignments in the Baltimore-Washington area. Drabick is co-author of the ITI Testing Capability Maturity Model. He is retired from Eastman Kodak Company. While at Kodak, he founded the first formal SQA and testing organization within Kodak’s Federal Systems Division, and coordinated the first SEI self-assessment performed at Kodak.

Drabick has authored articles in technical publications, including American Programmer, Software QA Magazine, and Software Testing and Quality Engineering. He holds the CQA and CSTE certifications from the Quality Assurance Institute. Drabick can be reached at dakota imaging, inc., 7130 Minstrel Way, Columbia, MD 21045-5243, or by e-mail at rodgerd@dakotaimaging.com.
