Optimizing Software Inspections with Statistical Quality Techniques

Ellen George and Stephen Janiszewski, Software Six Sigma

Software inspections can be an extremely effective method for removing defects early in the software life cycle while the removal cost is still relatively low, but an open-loop inspection process that is not measured and proactively managed could actually cost more than it saves.

By measuring effort, product size, and defects, it becomes possible to manage the quality of the inspection process and the quality of the product being inspected. It is possible to forecast the number of defects found in downstream test processes and to compare the predictions to actuals using statistical techniques. Defect data from test can be fed back into the inspection checklist resulting in a self-optimizing closed-loop process. Use of a structured defect prevention process in tandem with the inspection process engages the developers in process improvement and greatly improves the quality of defect data and ultimately of process yield.

Key words: appraisals, defect prevention, inspections, Fagan inspections, Gilb inspections, quality management, Six Sigma, software process improvement, statistical process control

INTRODUCTION
Software inspections can be a good method for removing defects early in the software life cycle while the removal cost is still relatively low. Many organizations don’t understand how to measure the effectiveness of their inspection process and therefore don’t manage their inspection process effectively. An open-loop inspection process that is not measured and proactively managed is unlikely to perform as well as it could. In fact, it could actually cost more than it saves.

This article summarizes the authors’ experience implementing, managing, and optimizing an inspection process at a large aerospace organization. It presents performance data from large and small inspection teams as well as from single-person bench checks, including the correlation between inspection rate and inspection yield, number of defects identified per thousand lines of code (KLOC) inspected, and relative cost of inspection and test.

INSPECTION GOALS AND MEASUREMENTS
Prior to beginning any software process improvement effort, members of an organization should have a clear understanding of the problems they are trying to solve and the goals they are trying to achieve. Without explicit linkage to business objectives, it becomes very difficult to maintain active senior management support of the initiative.

The organization’s employees were given an explicit objective to improve productivity. Their goal was to get a 7 percent productivity improvement on all new development activities. They wanted to select a process improvement initiative that would provide a quick, visible return on investment, in order to solidify management sponsorship. The company chose to use inspections to remove defects early at a lower overall cost, with the expectation that they would reduce integration and test time.

Since integration and system test accounted for approximately 30 percent of the company’s life-cycle costs, a 7 percent productivity improvement could be achieved by reducing the number of defects that escaped into integration and system test by approximately 25 percent. Employees had an additional constraint to recover the process improvement cost within one year.

The company decided to measure the improvement in productivity by measuring the overall decrease in cost of quality for the software development process. Cost of quality is the sum of three terms: failure cost of quality, appraisal cost of quality, and prevention cost of quality. Failure cost of quality is the percentage of total project effort expended correcting defects found in test. Appraisal cost of quality is the percentage of total project effort expended performing bench checks and inspections. Prevention cost of quality is the percentage of total project effort expended in training and defect prevention activities. A net decrease in cost of quality corresponds to more effort expended generating product and less effort wasted correcting defects. A comparison of the changes in the appraisal and failure cost of quality allowed the company to measure its progress against its goal.
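
As a rough illustration of how these terms combine, the sketch below (a minimal Python example with hypothetical effort figures, not the company's actual tooling) computes each cost-of-quality component as a percentage of total project effort.

```python
# Minimal cost-of-quality sketch (all effort figures are hypothetical person-hours).
def cost_of_quality(total_effort, failure_effort, appraisal_effort, prevention_effort):
    """Return each cost-of-quality component, and their sum, as a percentage of project effort."""
    failure_coq = 100.0 * failure_effort / total_effort        # correcting defects found in test
    appraisal_coq = 100.0 * appraisal_effort / total_effort    # bench checks and inspections
    prevention_coq = 100.0 * prevention_effort / total_effort  # training and defect prevention
    return {
        "failure_coq_pct": failure_coq,
        "appraisal_coq_pct": appraisal_coq,
        "prevention_coq_pct": prevention_coq,
        "total_coq_pct": failure_coq + appraisal_coq + prevention_coq,
    }

# Example: a 10,000-hour project with 3,000 hours of test rework,
# 800 hours of appraisal, and 200 hours of prevention activity.
print(cost_of_quality(10_000, 3_000, 800, 200))
```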

MEASUREMENTS
There are only three types of measurements required to manage the inspection process: effort, size, and defects. Effort is the individual effort expended by each inspection participant to prepare for, hold, and fix the defects found in the inspection. Size is the size of the work product inspected, often measured in lines of code (LOC). Defect data capture the quantity, type, effort required to fix and verify, point of injection, point of removal, and a description of each defect. Although a defect measurement consists of several pieces of data, it is collected as a single measurement at a discrete point in time.

These three measurements are simple and economical to record in process with an automated tool. A typical effort or size measurement requires about 15 seconds to perform. A defect measurement requires about 30 seconds. All other metrics are derived from these three measurements. Figure 1 provides a list of the derived metrics used to fully characterize the inspection process.
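
To make the bookkeeping concrete, the sketch below (hypothetical field names; the article does not prescribe a data schema) models the three base measurements and derives two of the metrics discussed later, inspection rate and defect density, from them.

```python
from dataclasses import dataclass

# Hypothetical record types for the three base measurements: effort, size, and defects.
@dataclass
class DefectRecord:
    defect_type: str
    fix_and_verify_hours: float
    injected_phase: str
    removed_phase: str
    description: str

@dataclass
class InspectionRecord:
    size_loc: int            # size of the work product inspected
    prep_hours: float        # total reviewer preparation effort (person-hours)
    meeting_hours: float     # total meeting effort (person-hours)
    defects: list[DefectRecord]

    # All other metrics are derived from the three base measurements.
    def inspection_rate(self) -> float:
        """LOC reviewed per person-hour of preparation."""
        return self.size_loc / self.prep_hours

    def defect_density(self) -> float:
        """Defects identified per KLOC inspected."""
        return len(self.defects) / (self.size_loc / 1000.0)
```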

THE INSPECTION PROCESS

Types of Reviews
The organization used two types of reviews to optimize its inspection process: bench checks and team inspections.

Bench checks
Bench checks are used to clear the majority of the simpler defects prior to team inspection.
Bench checks are personal inspections conducted by the author immediately after completing a work product or a portion of a work product. They are checklist based. The checklist is generated from a list of defects found by the author in downstream activities such as code and unit test. Once the author has completed a product development phase such as design or code, he or she inspects the product against his or her personal checklist, systematically verifying that each applicable checklist question does not flag a problem. Bench checks are based on PSP(SM) personal reviews (Humphrey 1995).

Team inspections
The company’s team inspections were based on the standard Fagan process (Fagan 1976). A small team conducts the inspection. The team includes the author, reviewers, a moderator, and a recorder.

Open-Loop Inspection Process
The team introduced the organization to inspections by training the developers in standard Fagan inspections. One cannot emphasize enough the importance of providing the staff with formal training in order to rapidly move the organization to a repeatable, stable process. Honeywell’s initial inspection process is depicted in Figure 2.

The author submits material for inspection. When there are approximately 1000 LOC to be inspected, the moderator calls for an inspection.

The moderator gathers the material and invites the inspection team members to a preliminary meeting. The reviewers are given the source and product material; told how it fits into the overall project; provided the checklist, standards, and procedures; and told when the inspection meeting will be held. Each reviewer is expected to review the material at his or her desk against the checklist and come into the meeting with defects identified. The product size, time spent in the preliminary meeting, and time spent reviewing the material are recorded.

Initially the team used the same standard checklists across all projects. The checklist questions were based on typical “industry data.”

The moderator facilitates the inspection meeting. Beginning with the first question on the checklist, the moderator asks each reviewer, starting with the author, to identify any issues they logged. Issues are identified and discussed objectively. All issues are recorded. The moderator moves the group through the remaining checklist questions, always starting with the author. Time spent in the inspection meeting and issues identified in each product are recorded.

It is left to each author and his or her supervisor to determine how each issue is to be resolved. They could decide that an issue is a defect that needs to be fixed immediately, a defect that will be scheduled for fix in the future, or that the issue is not a defect at all. Discussion time between the author and the supervisor is recorded.

The author fixes the agreed-upon defects and creates problem reports for those that were deferred. For each defect, the author records the defect fix time, defect type, life-cycle phase in which it was introduced, life-cycle phase in which it was removed, and a description of the defect. When the repair work is completed, the author moves on to the next life-cycle phase.

In unit test, and again in each subsequent test step, the author records time spent in phase and defects identified and fixed. For the most part, test phase time is composed of the time spent finding and fixing defects.

The organization was clearly saving money by finding and fixing defects earlier in the life cycle when they were less expensive, but as yet they had not started quantifying their savings as part of normal activities. It was apparent that there was ample opportunity to improve the process.

The Software Engineering Process Group (SEPG) reviewed and analyzed the metric data. The data analysis report was being provided to the development organization on a monthly basis, but the developers never had the time to look at it.

The developers’ objective was to hurry up and get their code tested and delivered. Since they rarely took time to look at the data, they were typically unaware of how much time the inspection process was saving for them in test. They thought inspections were good in a vague way because they provided a learning opportunity and “improved quality.” They could not quantify either the cost or benefits of the inspection process. As such, they continued to do what they were comfortable with, rushing through their inspections so that they could get their product into test.

Closed-Loop Inspection Process
The solution to this problem was to involve the developers in the analysis of the inspection data. They were trained in basic Six Sigma tools such as run charts, Pareto analysis, root-cause analysis diagrams, and in how to calculate return on investment (ROI).

The inspection process was modified to give the developers responsibility for performing the data analysis step and for implementing the continuous process improvement resulting from the data analysis (see Figure 3). Successful completion of data analysis was treated as a “phase gate,” that is, it became an inspection process exit criterion. The development team members learned to calculate their own ROI and were able to convince themselves that the time spent in inspections was worthwhile.

The development teams on each project took over responsibility for creating and managing their own checklists. Checklists were now based on local defect data from test and were designed to prevent repeated escapes of the same defect type. Each development team set quantitative improvement goals to improve the defect containment rate of their inspections. It was the team’s responsibility to suggest and pilot improvements to the process and to take actions that would result in higher product quality and higher inspection process effectiveness.

The SEPG served as a consultant to the inspection teams, mentoring them on data analysis techniques and helping them implement improvement recommendations. The SEPG fostered communication between teams by sponsoring a bimonthly meeting at which the development teams shared the results of their inspection improvement activities.

DATA ANALYSIS

Measuring Process Performance with Run Charts
After some preliminary analysis, the inspection team chose to characterize the inspection process by two measurements: inspection rate (LOC/person-hour) and density of defects identified during the inspection (defects/KLOC). For products of similar quality, density of defects identified at inspection should be related to yield. The advantage of using density of defects identified at inspection is that it is not a lagging variable, that is, one doesn’t have to wait until the project is over to measure it. Defect density identified was selected to characterize the effectiveness of the inspection process; inspection rate was selected to characterize the actual execution of the process.

Process measurements always include some random variation. In order to recognize trends or shifts in process performance, one must put measurements in a temporal context by plotting them as a time series. This also allows one to see the range of normal process variation, so that apparent changes in process performance can be tested for statistical significance before the measurements are used as inputs to a decision-making process. Otherwise one runs the risk of taking management action in response to normal statistical variation in the data. At best this can waste a lot of effort; at worst it is likely to be counterproductive.

For inspections, the team characterized process performance with individual XmR charts, as depicted in Figure 4. The X chart is a run chart plotting the history of the values of a variable over time. An average of the data values is calculated and plotted on the chart. Upper control limits (UCL) and lower control limits (LCL) are calculated and plotted as well. The UCL and the LCL are typically drawn three standard deviations (3σ) above and below the mean. When the LCL is less than zero, it is generally not drawn. The value of sigma is estimated from the average of the moving range.

The X chart is used to identify data points that fall outside the natural limits of process variation. These points can then be analyzed for assignable causes. The difference between upper and lower control limits defines the normal variation of the process. The mean defines average process performance. The moving range (mR) chart is used to monitor process stability. This chart shows a moving range of the data values, that is, the absolute difference between each successive pair of values plotted on the X chart. When the range chart is out of control, it is generally not possible to draw conclusions about the X chart.
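
The limits themselves are simple to compute. The sketch below (a minimal illustration using the conventional XmR constants 2.66 and 3.267, not the team's actual tool) derives the X-chart and mR-chart limits from a series of individual measurements.

```python
def xmr_limits(values):
    """Compute XmR (individuals and moving range) chart control limits.

    Uses the conventional constants: X limits = mean +/- 2.66 * average moving range
    (equivalent to three sigma, with sigma estimated from the moving range),
    and mR upper limit = 3.267 * average moving range.
    """
    mean_x = sum(values) / len(values)
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl_x = mean_x + 2.66 * mr_bar
    lcl_x = max(0.0, mean_x - 2.66 * mr_bar)   # an LCL below zero is not drawn
    ucl_mr = 3.267 * mr_bar
    return {"mean": mean_x, "ucl_x": ucl_x, "lcl_x": lcl_x,
            "mr_bar": mr_bar, "ucl_mr": ucl_mr}

# Example: hypothetical defect densities (defects/KLOC) from successive inspections.
print(xmr_limits([22, 18, 25, 30, 15, 20, 27, 19, 24, 21]))
```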

Once a process is stable, shifts in process performance can be detected using the following tests (a code sketch implementing them follows the list):

  • At least two out of three successive values fall on the same side of, and more than two sigma units away from, the centerline.
  • At least four out of five successive values fall on the same side of, and more than one sigma unit away from, the centerline.
  • At least eight successive values fall on the same side of the centerline.
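
A minimal sketch of these three detection rules is shown below (the function and data are illustrative; sigma is the moving-range estimate produced by the XmR limits above).

```python
def detect_shift(values, mean, sigma):
    """Apply the three run rules to a series of individual values.

    Returns the names of any rules that signal a shift in process performance.
    """
    signals = set()
    above = [v > mean for v in values]

    for i in range(len(values)):
        # Rule 1: at least two of three successive values on the same side of,
        # and more than two sigma away from, the centerline.
        window = values[i:i + 3]
        if len(window) == 3:
            for side in (+1, -1):
                if sum(side * (v - mean) > 2 * sigma for v in window) >= 2:
                    signals.add("2 of 3 beyond 2 sigma")
        # Rule 2: at least four of five successive values on the same side of,
        # and more than one sigma away from, the centerline.
        window = values[i:i + 5]
        if len(window) == 5:
            for side in (+1, -1):
                if sum(side * (v - mean) > 1 * sigma for v in window) >= 4:
                    signals.add("4 of 5 beyond 1 sigma")
        # Rule 3: at least eight successive values on the same side of the centerline.
        run = above[i:i + 8]
        if len(run) == 8 and (all(run) or not any(run)):
            signals.add("8 in a row on one side")
    return sorted(signals)
```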

The team used run charts of the defect density identified and inspection rate to monitor process performance.

The open-loop process was basically stable throughout most of this project, although it had considerable variability. The last 10 or so inspections show a trend toward an increasing inspection rate and a corresponding trend toward decreasing defect density identified. One didn’t need to look far for an assignable cause: the inspection teams were rushing against a looming project deadline. In fact, the previous peak in the inspection rate data that broke through the three-sigma limit represented an inspection done just before a holiday shutdown.

The defect density identified chart also shows a few out-of-control points early on. These represent the first products through inspection when the inspection process was deployed in the organization. Once the developers realized that inspections were finding a lot of defects, they began to clean up the code before submitting it to inspection, and the defect density identified rapidly fell off to a lower range.

Control Variables
As the organization gained experience with Fagan inspections, employees started to look for a control variable that could be used to manage the inspection process.

The outputs of a process “y” are usually a function of one or more control variables “x,” that is, y = f(x) + ε, where ε is a noise term that accounts for intrinsic process variation and where f is some unknown, possibly nonlinear, function. The y’s are not directly controllable, but they can be controlled indirectly through their functional relation with x.

In this case, the team wanted a higher defect density identified (their y). There is a large body of literature indicating a correlation between inspection rate and defect density identified. All things being equal, one would expect a quick inspection to find fewer defects and a slower, more methodical inspection of the same material to find more. This leads to the hypothesis that directly managing inspection rate will indirectly drive inspection yield to an improved value.

The team used their data to investigate whether there was really a correlation between inspection rate and the number of defects per KLOC found during inspection. Figure 5 shows a scatter plot of defect density identified vs. inspection rate and a curve fit by a nonlinear function. The fit has a correlation coefficient of 0.67. While not a particularly high value, it is good enough to be useful. These data suggested a target inspection rate in the range of 100 to 200 LOC/hour. This is far slower than normal reading speed, and it takes some training, practice, and reinforcement for reviewers to be able to consistently operate at this rate.
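
A sketch of this kind of analysis is shown below (the data are hypothetical, and the article does not specify the functional form, so a simple power-law curve stands in for the nonlinear fit of Figure 5).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical inspection data: rate in LOC/hour, density in defects/KLOC.
rate = np.array([80, 120, 150, 200, 250, 300, 400, 500, 600, 800], dtype=float)
density = np.array([35, 30, 26, 22, 18, 15, 11, 9, 7, 5], dtype=float)

# Assumed functional form: density ~ a * rate**b (a power law; the team's actual
# curve is not given in the article).
def power_law(x, a, b):
    return a * np.power(x, b)

params, _ = curve_fit(power_law, rate, density, p0=[100.0, -0.5])
fitted = power_law(rate, *params)

# Correlation between observed and fitted values, analogous to the 0.67
# correlation coefficient reported for the team's fit.
r = np.corrcoef(density, fitted)[0, 1]
print(f"a = {params[0]:.1f}, b = {params[1]:.2f}, correlation = {r:.2f}")
```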

In the closed-loop inspection process the team set a target inspection rate of 100 to 200 LOC/hour and took corrective action when they missed the target. The average inspection rate was used as an entry criterion to the inspection meeting. If the rate was too high, the meeting was called off and the material given back to the reviewers.
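
The entry criterion is easy to automate. The sketch below is a minimal, hypothetical check using the 100 to 200 LOC/hour target derived from the data above.

```python
def inspection_entry_check(size_loc, total_prep_hours, reviewers, max_rate=200):
    """Check the average preparation rate against the target before the meeting.

    Returns (ok, rate). If the rate is too high, the meeting is called off and the
    material is returned to the reviewers for further preparation.
    """
    rate = size_loc / (total_prep_hours / reviewers)   # LOC per person-hour
    return rate <= max_rate, rate

ok, rate = inspection_entry_check(size_loc=300, total_prep_hours=4.5, reviewers=3)
print(f"average rate {rate:.0f} LOC/hour ->", "hold the meeting" if ok else "return the material")
```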

The target inspection rate should be set based on actual data for a given application and project team. The team has seen some applications, typically safety critical code implemented in assembly language, where the target inspection rate was in the range of 50 LOC/hour.

Figure 6 shows the effect that proactively managing inspection rate has on the inspection run charts. When compared with the open-loop charts in Figure 4, it is obvious that the average inspection rate is lower and the variation in inspection rate is much smaller. This is to be expected given the closed-loop control on rate. The improvement in average defect density identified is larger than expected based on Figure 5, and the improvement in defect removal rate shown in Figure 7 (6 defects/hour to 15 defects/hour) is even more dramatic. The shift in the process mean is obvious in the run chart. A t-test yields t = 2.93, allowing one to reject, at a significance level of 0.01, the null hypothesis that there is no difference in the process means. The improvement in defect rate is so dramatic because the team introduced a number of other changes as they shifted to the closed-loop process.
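
The significance test is an ordinary two-sample comparison of means. The sketch below (with hypothetical defect-removal-rate samples, not the team's data) shows the calculation with scipy.

```python
from scipy import stats

# Hypothetical defect removal rates (defects/hour) before and after the change;
# the article reports means of roughly 6 and 15 defects/hour and t = 2.93.
open_loop = [4.2, 7.1, 5.5, 6.8, 5.0, 6.3, 7.5, 5.9]
closed_loop = [13.8, 16.2, 14.5, 15.9, 14.1, 16.8]

t_stat, p_value = stats.ttest_ind(closed_loop, open_loop, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.01 lets one reject the null hypothesis of equal means.
```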

OPTIMIZATION STRATEGIES
Having established and institutionalized a stable inspection process, the team found that there were a number of opportunities to optimize their process.

The developers found that they tended to repeat the same mistakes over and over again. Furthermore, they found that the types of mistakes that one developer repeatedly injected were different from the types of mistakes that another developer tended to inject. Finally, when a product went to team inspection, the team spent valuable time finding mistakes that the developers could have easily found or avoided themselves. The development process was modified to include a personal bench check prior to submitting a product to team inspection. The purpose of the bench check was for the author to remove all the errors that they were likely to make at the lowest possible inspection cost. The cost of performing a personal bench check is 5 hours/KLOC, a fraction of the team inspection cost. The personal bench check uses a checklist derived from the author’s own list of historical compilation and test defects. These checklist questions flag areas of high risk where the author has a history of making mistakes. By removing these defects prior to team inspection, they are removed at a lower cost, and the team inspection can focus on finding defects that the author would have been less likely to find.

Each inspection in the development process had a different objective. The checklists for personal bench checks were focused around the types of design and code defects that the author had a history of making. The team inspection checklists focused on interface- and requirements-related issues that cannot easily be found in the personal bench checks.

The team also found that their inspection process was creating a lag between the completion of the design or code phase and the subsequent phases. They had been holding off scheduling inspections until there was approximately 1000 LOC to be reviewed. It took up to a week after a product was submitted before the inspection was completed and the product returned to the author. The team implemented shorter, more frequent team inspections. Reviewers were assigned a day that they were “on duty.” As developers submitted their product for inspection, it went to the team on duty for the day. Size of the inspections was limited to a few hundred lines at a time, reducing preparation time to one to two hours. The inspection team reviewed the material in the morning, the inspection meeting was held after lunch, and the reviewed product was back in the hands of the author within two to three hours of submitting it. This eliminated the lags in the system, removing the temptation for the author to move forward into test before the inspection had taken place.

Reducing the size of the product inspected also made it easier to control the inspection rate. Inspection teams tend to rush with larger products. In fact, the correlation between product size and defects per KLOC found in inspection is as strong as that with inspection rate, as shown in Figure 8. The plot has a correlation coefficient of 0.68 and clearly shows a nonlinear relationship between product size and the number of defects per KLOC found in inspection. Investigation of these data shows a correlation between product size and inspection rate, with larger products having a tendency to be rushed through inspection.

Inspection teams typically had an author and four reviewers. The size of the inspection teams was reduced to the author plus two reviewers. This reduced the cost of the inspection by 40 percent without significantly diminishing the inspection yield. Whenever possible, the internal “customer” for the product was included on the inspection team, helping to eliminate product handoffs between groups (that is, requirements to development).

Periodic defect prevention meetings provided the development team with an opportunity to review their data and define approaches to detect defects earlier or prevent them entirely. They found that defect prevention could be implemented in a CMM level 2 organization that is performing inspections and collecting defect data. Each development team became a defect prevention team, setting and managing to its own goals. The goals were aligned with the overall business objectives. The developers used their own defect data, captured during inspections and testing, as the basis for their defect prevention activities. The defects were analyzed using Pareto charts to identify the most expensive and most frequent defect types. The team determined what actions to take to prevent a chosen defect type from recurring in the future. Typical actions included modifying checklists or changing coding and design standards. The team members convinced themselves of the value of the inspection and defect prevention activities by calculating their own ROI. The role of the SEPG became one of mentoring the defect prevention teams. Lessons learned were shared periodically with other defect prevention teams.
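
A minimal sketch of the Pareto step is shown below (the defect records and field names are illustrative): it ranks defect types by total fix cost so a team can pick the most expensive type to attack first.

```python
from collections import defaultdict

# Hypothetical defect log entries: (defect_type, fix_hours).
defects = [
    ("interface", 6.0), ("logic", 1.5), ("interface", 4.5), ("typo", 0.2),
    ("logic", 2.0), ("requirements", 8.0), ("interface", 5.0), ("typo", 0.3),
]

cost_by_type = defaultdict(float)
count_by_type = defaultdict(int)
for defect_type, hours in defects:
    cost_by_type[defect_type] += hours
    count_by_type[defect_type] += 1

# Pareto ranking by total fix cost (a team could equally rank by frequency).
for defect_type, cost in sorted(cost_by_type.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{defect_type:14s} {count_by_type[defect_type]:3d} defects  {cost:5.1f} hours")
```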

A key lesson learned through the introduction of defect prevention into the organization was that the people collecting the data must regularly use them; otherwise, they will stop collecting them! During defect prevention training, the team asked each developer to bring a dozen of their recently logged defects to the class. In one of the exercises, the developers were asked to read their defect descriptions and group the defects. The defect descriptions were so poorly written that the developers were unable to read their own defects! They had been collecting the data for someone else to use. They thought their job was to get to code and test. Now that their job included analyzing their own defect data, the quality of the defect reporting improved dramatically.

MEASURING RETURN ON INVESTMENT
The ROI of the inspection process can be directly measured. A cost analysis of the development process is captured prior to introducing the process improvement activity. Each step in the development process either injects defects into the product or removes defects from the product. Through measurement and/or anecdotal evidence, it is possible to baseline the rate at which defects are injected into the product, the rate at which defects are removed from the product, and the cost of finding and fixing a defect at each defect removal step in the life cycle. These data can then be used to fill in a quality plan. By multiplying the number of defects to be removed at each life-cycle phase by the cost of removing them in that life-cycle phase, and then adding up the costs, the total cost of finding and fixing defects can be quantified. To determine the savings relative to implementing inspections, the baseline cost of defect removal is first calculated without inspections (see Figure 9).
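
The arithmetic behind the quality plan is a phase-by-phase accumulation. The sketch below shows the structure of the calculation; the injection rates, yields, and costs are purely illustrative and are not the values from Figures 9 and 10.

```python
def quality_plan(injected_per_kloc, removal_steps):
    """Propagate defects through removal steps and total the find-and-fix cost.

    injected_per_kloc: dict of phase -> defects/KLOC injected in that phase.
    removal_steps: ordered list of (step_name, yield_fraction, cost_per_defect_hours).
    Returns (total_cost_hours_per_kloc, escaped_defects_per_kloc).
    """
    remaining = sum(injected_per_kloc.values())
    total_cost = 0.0
    for name, yield_fraction, cost_per_defect in removal_steps:
        removed = remaining * yield_fraction
        total_cost += removed * cost_per_defect
        remaining -= removed
    return total_cost, remaining

# Illustrative example: 50 defects/KLOC injected, removed by inspection,
# unit test, and integration/system test at increasing cost per defect.
cost, escaped = quality_plan(
    {"design": 20, "code": 30},
    [("inspection", 0.60, 0.5),
     ("unit test", 0.50, 2.0),
     ("integration and system test", 0.50, 18.0)],
)
print(f"{cost:.0f} hours/KLOC to remove defects, {escaped:.1f} defects/KLOC escape")
```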

On average, developers injected 40 defects/KLOC in design and 60 defects/KLOC in code. None of the defects were removed in inspection, and between 35 percent and 50 percent of the defects were removed during each testing activity. When multiplied out, company employees found that they were spending about 267 hours/KLOC to remove defects. They also found that with this scenario, they still expected to have 10 defects/KLOC in the product post system test.

The cost of removing defects with inspections is calculated next, as captured in Figure 10.

Defect injection rates during design and code remained unchanged at 40 defects/KLOC in design and 60 defects/KLOC in code. Defect removal costs and yields were measured and inserted into the plan for bench check and inspection activities. Yields for testing activities remained unchanged. The total cost of defect removal was added up. With inspections, the company expected to spend a total of 37 hours fixing defects per KLOC of developed code. Additionally, they now expected to find less than one defect per KLOC post system test.

The baseline cost to find and fix defects per KLOC without inspections was 267 hours. The cost to fix those same defects when using bench checks and inspections was 37 hours. The difference between the two costs represents the savings realized per KLOC developed as a result of implementing inspections.

Before completing the ROI calculations, one must also consider the cost of performing the inspections. The predominant cost is the inspection preparation time and the meeting time. For a 1000 LOC inspection, each of the four developers on the team spent approximately half an hour in the kickoff meeting, 10 hours reviewing the material at 100 LOC/hour, and one hour in the inspection meeting. In addition, there was time spent by the author to categorize and fix the defects plus time spent by the inspection moderator to verify the defects had been fixed and to close the inspection meeting. In all, the cost of holding an inspection of 1000 LOC was about 50 hours. Since there is a design inspection and a code inspection, one must double this figure to get a total cost of about 100 hours. The design and code bench checks add another 20 hours, for a total appraisal cost of about 120 hours.
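
The appraisal cost estimate can be reproduced directly from the per-activity figures in the paragraph above; the small allowance for author and moderator follow-up is an assumption used to round the single-inspection cost to about 50 hours.

```python
# Appraisal cost for 1000 LOC, using the per-activity figures from the text.
reviewers = 4
kickoff_hours = 0.5            # per reviewer
review_hours = 1000 / 100      # 1000 LOC at 100 LOC/hour, per reviewer
meeting_hours = 1.0            # per reviewer

team_hours = reviewers * (kickoff_hours + review_hours + meeting_hours)    # 46 hours
followup_hours = 4.0           # assumed allowance: author defect categorization/fixing
                               # plus moderator verification and close-out
one_inspection = team_hours + followup_hours                               # ~50 hours

design_and_code_inspections = 2 * one_inspection                           # ~100 hours
bench_checks = 20.0            # design and code bench checks
total_appraisal_cost = design_and_code_inspections + bench_checks          # ~120 hours/KLOC
print(f"{total_appraisal_cost:.0f} hours of appraisal per KLOC")
```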

The total cost of defect removal for 1000 LOC without inspections was 267 hours. With inspections, this cost drops to 37 hours, a savings of 230 hours. The cost of holding all of the inspections is about 120 hours. This indicates that the net savings, or ROI, for inspections was 267 – (37 + 120) = 110 hours/KLOC. In addition, one could expect a 10x improvement in delivered quality to the customer.

So the ongoing savings are about 110 hours/KLOC. The company still needs to consider the startup costs. They used one and a half days of inspection training and two days of defect prevention training per person. Assuming a class size of 10, the total cost of labor for the instructor and the students is about 310 hours. Since the savings per KLOC are about 110 hours, it will require the development of at least 3 KLOC to recover the training cost. This number is almost certainly too optimistic. The first inspections will be relatively ineffective because the team will be inexperienced. As the team gains experience, the quality of the inspections will improve dramatically. So a more realistic assumption is that training costs for 10 people can be recovered after the development of 6 to 10 KLOC.

The cost of defect removal in integration and system test drives the economics of the inspection process. In this case, the cost of defect removal in integration and system test was 18 hours/defect. The company was developing mission-critical embedded software, a situation where this number is fairly high. For other applications, with lower defect removal costs, the savings per KLOC would be smaller and the time to recover training costs would be longer.

Inspections by their very nature are more predictable and controllable than testing. One can budget 120 hours with a high level of confidence for every 1000 LOC to be inspected. The time required to test, however, remains highly unpredictable. While one can use average defect fix times in test for his or her calculations, the range of time required to fix any given defect can be very large. The more defects that have to be fixed in test, the less accurate the estimate for completion will be. Since appraisal cost is more predictable than testing cost, estimating accuracy will improve as a result of implementing inspections.

Finally, the number of defects removed in unit test is an indicator of the number of defects remaining in the software. Unit tests removed approximately 50 percent of the existing defects in the product. This means that the number of defects remaining in the product as it enters integration is about the same as the number of defects found in unit test. By introducing inspections, the number of defects leaked from unit test to integration test dropped from 25 per KLOC to around 2 per KLOC. The reduction in defects reaching integration and system test is the largest component of the cost savings from implementing inspections.

QUALITY IS FREE
Earlier in our careers, when we first introduced inspections, we frequently met managers who said something like this: “I know inspections are a good thing to do and that they will get our quality up, but we don’t have the time (or budget) to do them right now.” Once one understands the dynamics of an effective inspection process, it becomes obvious that this argument is a fallacy. It assumes that inspections add cost without realizing a corresponding savings in test. If the savings in test is larger than the cost of the inspections, then one can cut costs and improve project cycle time by doing more inspections, not fewer.

Actually there is some truth in both points of view. Ineffective inspections are likely to add more costs than they save. Well-managed closed-loop inspections produce a net savings. In order to understand how quickly one could recover the inspection cost, the team did some analysis on personal bench check data.

They looked at productivity from design through unit test before implementing bench checks and after implementing bench checks. They found that there was no significant difference in the productivity, even though they were taking time to perform the inspections. As appraisal costs increased and defects were found earlier, unit test costs decreased. In fact, the total cost of quality (appraisal plus testing) remained unchanged between design and unit test. This showed that there is no net cost to performing appraisals. Furthermore, the cost is recovered before the product exits unit test. Reductions in defects in integration and system level test go directly to the project bottom line.

Figure 11 summarizes the results. There is no correlation between appraisal COQ and productivity, that is, spending a longer time in inspection does not degrade productivity. There is a linear relationship between appraisal COQ and failure COQ, that is, increasing appraisal COQ tends to decrease failure COQ. As long as the slope is larger in absolute magnitude than –1 (in this case it is –1.22), the inspections result in a net cost decrease. But be warned, without active management, it is possible for the slope to go the wrong way!
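
The slope test is an ordinary least-squares fit of failure COQ against appraisal COQ. The sketch below uses hypothetical project-level COQ percentages to show the check.

```python
from scipy import stats

# Hypothetical per-project cost-of-quality percentages.
appraisal_coq = [2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0]        # % of effort in bench checks/inspections
failure_coq = [18.0, 16.5, 15.0, 13.0, 12.5, 10.0, 9.5]    # % of effort fixing defects found in test

fit = stats.linregress(appraisal_coq, failure_coq)
print(f"slope = {fit.slope:.2f}")

# A slope steeper than -1 means each hour added to appraisal removes more than an
# hour of failure cost, so total cost of quality falls as inspection effort increases.
if fit.slope < -1.0:
    print("net cost decrease: inspections pay for themselves")
```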

RESULTS
Over a five-year period the organization gradually implemented the optimization strategies described previously. Over this time, inspection yields increased from 60 percent to 80 percent and defects escaping into integration dropped from 10 per KLOC to 3 per KLOC. At the same time, the cost of performing inspections decreased by 40 percent as the size of the inspection teams was reduced. The results are shown in Figure 12.

SUMMARY
The organization’s experience has shown that inspections can be characterized by a relatively small number of measurements: effort, product size, and defects. When these characteristics are measured consistently, it becomes possible to manage the quality of the inspection process and the quality of the product under inspection. This entails tracking product quality costs across the software life cycle and understanding and quantitatively managing the correlation between inspection rate and inspection yield.

The organization has seen that it is possible to forecast the number of defects found in downstream test processes and to compare the predictions to actuals using statistical techniques. If a product shows an unexpectedly high number of defects in test, it can be returned to inspection.

Defect data from test can be fed back into the inspection checklist, resulting in a self-optimizing closed-loop process. Use of a structured defect prevention process in tandem with the inspection process engages the developers in process improvement and greatly improves the quality of defect data and ultimately of process yield.

Several optimization strategies were identified, including the use of bench checks prior to team inspections, optimizing the size of the inspection team, inspection meeting duration, and inspection frequency. Frequent short inspections covering a relatively small number of products can have a significant performance advantage over less frequent inspections that attempt to cover larger work products.

The key to managing inspections is gathering and using data. Well-managed inspections can produce outstanding results, but when inspections are not managed the results can range from mediocre to poor.

REFERENCES
Fagan, Michael. 1976. Design and code inspections to reduce errors in program development. IBM Systems Journal 15, no. 3.

Gilb, Tom, and Dorothy Graham. 1993. Software inspections. Reading Mass.: Addison-Wesley.

Humphrey, Watts. 1995. A discipline for software engineering. Reading, Mass.: Addison-Wesley.

BIOGRAPHIES
Ellen George has 20 years of experience in software development, software process improvement, and project management. She has held positions ranging from software developer to software manager of large-scale embedded development programs to manager of the Software Engineering Process Group (SEPG). George headed the SEPG at AlliedSignal’s Teterboro, N.J., site, helping the site to become the first CMM® Level 4 site within the corporation. She later became the software project manager of AlliedSignal’s first PSP(SM)/TSP(SM) project. George has since become an SEI authorized PSP(SM) instructor and TSP(SM) launch coach. More recently, she was a member of Honeywell’s corporate Software Six Sigma organization, helping to lead the deployment of software process improvement initiatives in more than 100 sites worldwide. She has been active in training and management consulting on the application of software process. George holds a bachelor’s degree in math from Fordham University, and master’s degrees in computer science and technology management from the Stevens Institute of Technology.

Steve Janiszewski has 30 years of experience in all phases of software development, management, and process improvement. As manager of AlliedSignal’s Software Engineering Department in Teterboro, N.J., he led the organization to become the first CMM® Level 4 site within AlliedSignal Corporation. Beginning 1997, Janiszewski championed the introduction of PSP(SM) and TSP(SM) at AlliedSignal. He chaired the AlliedSignal Software Process Improvement Council and he was also active in consulting on the application of software process to diverse domains including aerospace, medical instrumentation, industrial automation, automotive system, and financial systems. Prior to joining PS&J, Janiszewski was the director of the Honeywell corporate system and Software Six Sigma organization. At Honeywell, Janiszewski’s responsibilities included providing process assessments, training, management consulting, and improvement planning assistance throughout the corporation, providing service to more than 6000 software engineers in more than 100 locations worldwide. Janiszewski is an SEI authorized PSP(SM) instructor and TSP(SM) Launch Coach. He holds a master’s degree and doctorate in theoretical physics from NYU. He can be reached at stevejaniszewski@softwaresixsigma.com .
