
Software Quality Professional Resource Reviews - September 2001


PROJECT MANAGEMENT

How to Run Successful Projects in Web Time.

Fergus O’Connell. 2000. Boston, London: Artech House. 241 pages.
ISBN 1580531652

(CSQE Body of Knowledge areas: Software Project Management, Software Processes)

Reviewed by David Walker

This book reinforces the theory that countless other project management books preach: putting more time into project planning shortens development time and lowers its cost. Throughout the book, the author uses the film-making industry as an analogy. In the preproduction phase, planning and estimating are carefully performed to reduce time and cost in the production phase. The author contends that the problem of estimating has largely been solved in the film-making industry.

Readers should not be fooled into thinking that this book targets Internet projects or the Web development industry specifically. There are no groundbreaking approaches portrayed, but the author makes many good points and suggestions.

The rubber hits the road in chapters 10 and 11, which introduce a tool equivalent to a movie strip board. Applied to a software development project, the strip board can be constructed from a Gantt chart and is used continuously throughout the project to shorten and optimize the critical path.
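
The mechanics behind a strip board reduce to finding the longest chain of dependent tasks and attacking it. As a rough sketch only (the task names and durations below are invented, not O’Connell’s), a critical path can be computed from Gantt-style task data:

    # Minimal critical-path sketch in Python. The task table maps each
    # task to (duration_in_days, list_of_predecessors); the data is
    # hypothetical, used only to illustrate the idea.
    tasks = {
        "design":  (10, []),
        "code":    (15, ["design"]),
        "test":    (8,  ["code"]),
        "docs":    (5,  ["design"]),
        "release": (2,  ["test", "docs"]),
    }

    def critical_path(tasks):
        """Return (length, path) of the longest dependency chain."""
        memo = {}
        def longest(name):
            if name in memo:
                return memo[name]
            duration, preds = tasks[name]
            best = max((longest(p) for p in preds), default=(0, []))
            memo[name] = (best[0] + duration, best[1] + [name])
            return memo[name]
        return max((longest(n) for n in tasks), key=lambda t: t[0])

    length, path = critical_path(tasks)
    print(length, " -> ".join(path))  # 35 design -> code -> test -> release

Shortening tasks on that path is what moves the delivery date; effort spent off the path does not.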

O’Connell uses a fictitious project as an example of implementation, enabling him to provide specific clarification as necessary. He makes some interesting points in the last two chapters, which address issues of working on multiple simultaneous projects and implementing these techniques in one’s own organization.

This book could serve as a software project management reference for someone who does not already have one. The real value of this book, however, comes from the good points made throughout, such as “prioritizing instead of multitasking,” “10 percent project management effort as a rule-of-thumb estimate,” “making the journey in your head” during work breakdown structuring, “identifying the right plan,” and many more.

David Walker (dwwalker@trilogyusa.com) has a master’s degree in computer science from Northwestern University and is an ASQ Certified Software Quality Engineer with 16 years of software engineering experience in the communications and health care industries. He is currently a consultant with Trilogy Consulting Corporation.

Web Project Management.

Ashley Friedlein. 2001. San Francisco: Morgan Kaufmann Publishers. 324 pages.
ISBN 1-55860-678-5

(CSQE Body of Knowledge areas: Project Management, General Knowledge)

Reviewed by Rufus Turpin


Those about to embark on a Web-building endeavor will find great value in this book. The author addresses the issues and challenges of building large-scale Web sites from the project manager’s viewpoint. If one is looking for a step-by-step guide to building a Web site that addresses all the tools, techniques, and technical stuff, this is not the book. If, however, one is looking for a book with a minimum of technical jargon, the issues and challenges neatly and clearly presented, and a method for dealing with them, then this is the book.

One of the strengths of this book is the lack of jargon. The language is not that of the IS/IT world but that of business, with the appropriate Web-isms included and explained in plain terms. Additionally, each chapter includes a summary of key points and questions that can easily form good checklists.

The focus of this book is building large-scale Web sites of value to the client and end users, not just technically excellent sites. While the many roles and responsibilities involved in Web building are covered, this book is about project management—that is, managing Web projects. Throughout the book the author discusses and demonstrates what needs to be done and practical ways of doing it.

The book is divided into four sections. Part 1 presents Web project management and the Web project manager. Part 2 walks readers through a very usable method (a Web development life cycle). Part 3 presents a case study in which the method was applied, along with lessons learned, and part 4 contains appendices of resources and suggested reading.

Part 2 gets down to the meat of building a large Web site. The method presented is a straightforward Web development life cycle consisting of four primary phases with eight work stages.

The phases and work stages are easily mapped to general software development life cycles.


• Preproduction

- Project clarification
- Solution definition
- Project specification

• Production

- Content
- Design and construction
- Testing
- Launch and hand-over

• Maintenance

- Maintenance

• Evaluation

- Review and evaluation


In each section readers are presented with the main activities, tasks, and key deliverables that need to be completed. Key points and issues are discussed, and considerations and warnings based on experience are provided.

Throughout the book much good sense is dispensed. While many points may at first appear theoretical, they are actually based on practical experience. Many of these points are reinforced in the attached case study. The author clearly demonstrates the value and importance of active client/end-user participation throughout the delivery process.

A key point that is often overlooked by many in the Web business is that the job is never really finished. Ongoing improvement and maintenance is vital and must be planned for and managed. Lessons learned in one development iteration must be applied in future iterations to ensure ongoing value. The author refers to this as the “virtuous Web development cycle.”

Another key point that is often underplayed by IS/IT types is that content is an integral component of the Web site, its development, and ongoing improvement and maintenance. Content is critical and must be planned, managed, and controlled.

This book will appeal mainly to non-IS/IT readers. There is good value, however, for the IS/IT Web project manager, as the book provides much good advice, which will help improve developer-client communication and participation.

Rufus Turpin (rufust@attcanada.ca) is an independent management consultant with more than 20 years of experience in the software quality disciplines. Turpin works with clients in both the public and private sectors improving the performance of their quality systems. A past chair of the ASQ Ottawa Valley Section, Turpin is a Senior member of ASQ and is currently serving as the Software Division marketing chair. He is an ASQ Certified Software Quality Engineer and ASQ Certified Quality Auditor.

Best Practices Series: Project Management.

Edited by Paul C. Tinnirello. 2000. New York: Auerbach. 491 pages.
ISBN 0-8493-9998-X

(CSQE Body of Knowledge areas: Software Project Management)

Reviewed by Eric Patel

This book is part of the Best Practices series from Auerbach on project management. Each chapter is written by a different author who specializes in that topic area. Forty-one chapters are organized into five sections:

1. Project management essentials
2. Managing business relationships
3. Effectively managing outsourced projects
4. Managing special projects
5. Measuring and improving project management success

The main benefit of this book is the diverse project management subjects that are covered, as well as the diverse subject-matter experts. Software quality practitioners will recognize such authors as Roger Pressman and James Ward. Balancing these consultants are more than 15 university professors who at times offer theoretical rather than hands-on project management advice. The breadth of issues covered in this book—from risk management to outsourcing to teamwork—is impressive, thus avoiding a “one-size-fits-all” approach to project management.

In chapter 7, “A Model for Estimating Small-Scale Software Development,” Abbas Heiat argues that neither lines of code (LOC) nor function points (FP) are “well adapted to today’s small business environment.” Instead, he proposes the REIO model, based on data from 35 projects with the following characteristics:


• Fewer than 3000 LOC
• 4GL database programming language
• One to three team members


The main benefits of the REIO model are that the data-store relationships and the number of external input-output data flows can be computed early and easily (even before the data dictionary and user requirements specifications are completed). Balancing the author’s convincing argument are his stated limitations, including that the results cannot be generalized to large applications and that the REIO model depends on data-strong applications.
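
The review does not reproduce the REIO formula or its coefficients, so the following is only a sketch of what a count-based early estimator of this general shape might look like; the weights are invented placeholders, not the chapter’s calibrated values:

    # Illustrative early estimator driven by counts available before the
    # data dictionary is finished. The weights a and b and the base c are
    # invented placeholders, not REIO's published values.
    def estimate_effort(data_store_relationships, external_io_flows,
                        a=2.0, b=1.5, c=10.0):
        """Return a rough effort figure (person-days) from early counts."""
        return a * data_store_relationships + b * external_io_flows + c

    print(estimate_effort(12, 30))  # 12 relationships, 30 I/O flows -> 79.0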

In chapter 11, “Partnership: The Key to Successful Systems Development in a TQM Company,” Christine Tayntor describes the total quality management (TQM) challenge for the information services (IS) department: heightened expectations, increased workload, and pressure to improve services while decreasing costs. The solution? A symbiotic partnership with customers and suppliers. The author proposes numerous guidelines to assist IS professionals in becoming effective team members and partners as well as forging supplier partnerships. She also stresses new skills that are required, such as persuasion and negotiation. Those who are familiar with and practice TQM will find this chapter of minimal value.

In chapter 23, “Certification of Externally Developed Software,” Craig A. Schiller addresses the “shrink-wrapped syndrome” and the unwarranted, assumed trust (and quality assumption) that software engineers place in commercial off-the-shelf (COTS) software. Here the author presents a body of collectable knowledge that can be used to select tests to mitigate the threat from externally developed software. First described is the Federal Information Processing Standard (FIPS) 102 publication, followed by ISO 9000 and the Software Engineering Institute’s Capability Maturity Model (CMM). He identifies several “assurances,” including product, personnel, financial, operations, and risk. The security professional is left to determine the type and degree of certification by examining product use and intended product use for the type and severity of potential loss. The last section contains a useful checklist for certifying systems-related products, much of which is accomplished through standard risk management techniques.

In chapter 28, Ralph L. Kliem presents some hands-on advice in “Keeping Client/Server (C/S) Projects on Track with Project Management.” He paints a picture of the problems associated with C/S projects and proposes ways in which project management can help. He describes four project management processes that keep C/S projects on track: 1) leading; 2) planning; 3) organizing; and 4) controlling.

In the Leading section, such topics as a statement of work, schedule, budget, roles, and responsibilities are discussed. In the Planning section, the work breakdown structure is presented as the primary tool. In the Organizing section, such tools as a responsibility matrix, a project wall, and meetings are mentioned. Finally, variances are discussed in the Controlling section. Most of the suggestions mentioned should be familiar to experienced project management professionals. Additionally, the majority of the principles also apply to non-C/S projects. Nevertheless, the chapter is one of the more comprehensive and useful in the book.

Chapter 40, “Assessing and Improving the Software Development Process,” presents process assessment models. Roger S. Pressman advocates using such an assessment as the first part of a software engineering transition cycle that includes:

• Process assessment
• Education
• Selection of procedures, methods, and CASE tools
• Justification
• Installation
• Evaluation


The process assessment helps to make informed decisions regarding software process improvement and makes use of qualitative, Boolean, and quantitative questions, examples of which are presented in the chapter. Pressman then reviews a process assessment model. Ultimately, he proposes an alternative: the process advisor assessment model, which offers the benefits of a self-directed assessment without incurring a substantial initial expense. To assess the eight process attributes he suggests, Pressman recommends assigning a letter grade (A through E) within each of the five numeric ranges (1 to 5) for each attribute.

With more than 40 different authors, the length and quality of the chapters vary. The primary benefits are the multitude of authors, the subject matter, the expertise, and the advice. The main drawback is the inconsistency of the chapters’ contents. In all, this book is a great initial investment in one’s project management library for anyone who believes that it is better to know a little about a lot of things than a lot about a few things.

Eric Patel (eric.patel@nokia.com) is a quality assurance and test manager at Nokia where he leads a team that tests digital communications solutions for the home. He is co-founder of the Nokia Quality Forum (NQF), an ASQ Certified Quality Manager and Certified Software Quality Engineer (CSQE), and teaches the CSQE Test Preparation Course for the ASQ Boston and Merrimack Valley Sections. Patel is also on the editorial review board of The Journal of Software Testing Professionals.

SOFTWARE PROCESSES

Customer-Centered Products: Creating Successful Products Through Smart Requirements Management.

Ivy F. Hooks and Kristin A. Farry. 2000. New York: Amacom. 272 pages.
ISBN 0-8144-0568-1

(CSQE Body of Knowledge areas: Software Processes, Software Project Management)

Reviewed by Carol A. Dekkers


After separate careers and a lifetime of experience teaching, consulting, and implementing effective requirements management processes, Ivy Hooks and Kristin Farry collaborated on Customer-Centered Products. Their goal is to spread the word about the benefits of requirements management by appealing to and reaching the very segment of companies that controls product development budgets and schedules, and that often overlooks the importance of requirements—managers.

This book achieves and surpasses its goal: not only are the authors’ recommendations critical for management to heed, they provide needed wisdom to technical professionals as well. The book is well written, packed with workable solutions and models to combat requirements challenges, and filled with anecdotes and case histories that vividly illustrate the concepts being explained. It also has concrete advice about the manager’s role in each step of the requirements process. While some requirements books preach about additional tasks to be done during the system development process, Hooks and Farry provide a streamlined solution to requirements management that removes the fat from process steps and reduces rework by getting the product correct the first time.

One of the most appealing aspects of this book is that its treatment of product requirements is not specific to software—it applies equally to all types of product development. Requirements mismanagement can be as costly in other industries as it is in software development—and management in any industry will find value in this book. This effectively eliminates the ability of management to say, “This is a book about requirements management of software development—it doesn’t apply to my industry.” It is an important book for managers everywhere who set deadlines, budgets, and schedules without necessarily understanding the importance of requirements management.

Hooks and Farry use common sense coupled with solid statistics to bring reality to their words of wisdom. When discussing the reasons for weak or ill-defined requirements processes, the authors examine how some elements of American culture may be at the root of the problem. It is clear that some of the American strengths such as “insistence on choice” and our “urge to improvise” built this great country, and Hooks and Farry examine how these “strengths” can actually serve to undermine and even sabotage projects. Simply being aware of the downside of strong attributes provides competitive insights that readers can turn around in their own companies.

Chapter 3 introduces a realistic and straightforward model for companies that lack a solid “requirement definition process,” and addresses the question, “Why adopt a process?” The model consists of nine clearly defined steps, each of which is examined in its own chapter. This book is easy to digest because of the subdivision of chapters through the use of pointed management questions including: “How much effort should you invest in …?” and “What is the manager’s role in …?” Each chapter concludes with a sanity check, providing questions to ensure that the implementation of a particular step is going to be an effective and contributory part of the requirements management process.

While reading through Customer-Centered Products, I tabbed the pages with notations I had made for inclusion in this review. After running out of “sticky flags,” I knew this book was a winner. Some of the flagged pages contained “motherhood” statements, which will seem elementary to those long immersed in the requirements business, but for managers typically unaware of the enormous impact that requirements management has on product development, this book is a goldmine of management-level recommendations. A few of the notable excerpts include:


• In the introduction, “Managers and Requirements,” “Imagine producing, delivering, or buying a product for 50 percent of what your competition spends. What could your company do with such a large competitive advantage? …You could achieve that 50 percent cost reduction by changing your approach to defining the requirement for what you are producing or procuring. Smart requirement management offers the potential of eliminating rework, which consumes half of a typical project’s resources!” What a powerful introductory statement—and the statistics and references cited throughout support this statement.

• From chapter 1, “Requirements: Structure for Success,” “Bell Labs and IBM studies have determined that 80 percent of all product defects are inserted in the requirement definition stage of product development, the stage when you should define a product’s needs and uses. In the 1970s and early 1980s experts were reporting that 45 to 56 percent of all software product defects are inserted during requirement definition. Are we getting worse at defining requirements? No. We are getting better at everything else!”

• Chapter 7, “Be Careful What You Ask For—Writing Good Requirements” is valuable because writing requirements is not the first step in the model. The actual writing tasks follow three critical prerequisite steps: 1) scoping the product, 2) developing operational concepts, and 3) identifying interfaces. It should be common sense to scope a product and know what type of product it is expected to be—before writing its detailed requirements. In practice, however, the quest for anything to be delivered quickly often results in behaviors governed by the need for speed.

Customer-Centered Products examines how to set priorities for requirements, how to automate aspects of the requirements process, managing change, and measuring requirement quality. The final words sum up this manual succinctly: “There is no magic in good requirement engineering. No be-all-end-all requirement engineering tools or analysis models exist…You, the manager, must set the pace by taking personal responsibility for the requirements…You must read your requirements and understand them every step of the way, throughout the entire product life cycle.”

If one develops software under imposed, unrealistic schedules or knows that there must be a way to tell management how important the requirements process really is to product development—this book is the answer. With today’s uncertain economic conditions, management needs answers that will create a better bottom line, minimize expensive product rework, and deliver high-quality products the first time. This book provides answers by pointing to the beginning of the product life cycle—the requirements management process. Now is the time and the place to do things right the first time, and Hooks and Farry’s book will show management how.

Carol A. Dekkers (dekkers@qualityplustech.com), an SQP editorial board member, is a past IFPUG president and is president of Quality Plus Technologies, Inc., specializing in function point analysis training and software measurement consulting. Dekkers earned her bachelor’s degree in mechanical engineering from the University of Calgary, and is a certified management consultant, a certified function point specialist, and a professional engineer (Canada). She is the host of a weekly IT radio show available over the Internet, “Quality Plus e-Talk with Carol Dekkers,” and is a regional councilor for ASQ’s Software Division.

The Pragmatic Programmer: From Journeyman to Master.

Andrew Hunt and David Thomas. 2000. Reading, Mass.: Addison-Wesley. 346 pages. ISBN 0-201-61622-X

(CSQE Body of Knowledge areas: Software Processes, General Knowledge)

Reviewed by Scott Duncan


I generally like to review a book using its own words. Ward Cunningham, associated with the eXtreme programming movement, writes in the foreword that the authors “tell us how to program…in a way that we can follow.” The authors, in their preface, state that the book is about doing. Cunningham also says the book offers a pattern language, that is, a “system of solutions.” The preface sets the tone of the book, emphasizing professional characteristics and behaviors and stating that, while building software “should be an engineering discipline…this doesn’t preclude individual craftsmanship.” It mentions the Japanese term kaizen and its relationship to “continuously making many small improvements.” Thus, the book is about how individual software development professionals can, on a day-to-day basis, improve their effectiveness and the effectiveness of their projects.

Even if one is not an active programmer, much of the advice can be applied to other software development areas. Indeed, many of the 70 tips provided can be applied to any domain, since they encourage basic practices in professional behavior, attitudes, and technology application. The tips, and the 46 titled sections into which they are divided, are collected into eight chapters. The first two chapters discuss the pragmatic philosophy and approach at the basis of the book. Others discuss tools, design, coding practice, specification, and overall project issues.

The first chapter, “A Pragmatic Philosophy,” covers what the authors call attitude or style. Fundamentally, this means taking responsibility and not watching “projects fall apart through neglect.” This covers how individuals prepare and improve themselves as well as how they treat projects—their own and those they come into contact with. While the authors advise against assuming responsibility “for an impossible situation, or one in which the risks are too great,” they say one must “provide options,” that is, bring peers or management the best suggestion for dealing with problems, not just bad news about why things will not work.

A second main concept is “entropy” in software, that is, deterioration of software over time unless specific effort is devoted to preventing it. The authors effectively use an analogy based on urban decay: the trigger mechanism in building deterioration is the first broken window. Once the first sign of neglect appears, a climate of neglect is established and more damage occurs. The advice for software is not to leave “bad designs, wrong decisions, or poor code” around. They should be fixed up, or “boarded up” with comments, dummy data, and so on, “as soon as…discovered.”

The authors then address the idea of “good enough software,” which has been expounded, perhaps most thoroughly, by James Bach in a number of articles over the years. Fundamentally, this book advocates asking customers “how good they want their software to be” by making quality a requirements issue. One might find this view problematic for organizations where quality is some assumed “goodness” that a customer would claim should be defect-free. Bach and the authors of this book, however, advocate simply being overt about discussing quality expectations and the tradeoffs in cost and time. The authors here claim “many users would rather use software with some rough edges today than wait a year.” (It is the formal quantification of “some” that makes the critical difference in practice.)

The next section covers building and managing one’s “knowledge portfolio,” or how one keeps current in his or her career area(s). One does this by growing knowledge regularly in “different” related areas, staying ahead of emerging technology by taking some risks as to what will or will not become vital later, and brushing up on older knowledge that still seems relevant. The authors offer several suggestions for doing this.

Though it takes a lot of effort, approaching this day by day rather than in sporadic massive bursts will be less discouraging. The main point is not the specific knowledge one learns but how the “process of learning will expand your thinking, opening you to new possibilities and new ways of doing things.”

The chapter ends by encouraging technical professionals to learn to communicate better by “knowing what you want to say” and “knowing your audience.” There is specific advice on when to approach people, what style to use, being sure presentation materials are of good quality, involving the audience, being a good listener, and making sure to “get back to people.” On the latter point, the authors encourage responding to people as immediately as possible, even if just to say they will “get back to them later.”

Under the topic of “the evils of duplication,” the authors state that because “knowledge isn’t stable” and “our understanding changes day by day,” people must realize that “maintenance is a routine part of the entire development process.” Thus, having a single representation for “every piece of knowledge…within a system” is “one of the most important tools.”

One way to do this is through “orthogonality” or the independence of elements in a system from one another. Orthogonal systems “increase productivity and reduce risk.” The authors even apply this concept to individual interactions on a development team where people can work effectively but independently, because there is “well-defined responsibility and minimal overlap.” Other applications of orthogonality are discussed as elements of design, toolkits and libraries, coding, testing, and documentation. (The authors often do not immediately use otherwise standard terms such as “decoupling,” since they feel some have low-level associations that they want to avoid.)

Another principle is “reversibility,” that is, facing the fact that “critical decisions aren’t easily reversible” unless one uses other advice in the book to avoid making such decisions. In general, the “mistake lies in assuming that any decision is cast in stone—and in not preparing for the contingencies that might arise.” It is interesting to compare this to eXtreme Programming Explained where the opposite view seems to be advocated, that is, not to design anything into the system that is not immediately needed.

The next section discusses rapid feature exploration compared to extensive up-front specification. The term “prototyping” is not used because the authors claim it suggests a throwaway approach. Instead, “tracer code” is used, which the authors state addresses building an intact framework as features are added and shown to a customer. “Tracer code” comes after prototype exploration, if the latter is done, since prototyping “gives up the details” and tracer code is needed when one cannot do that. The authors immediately follow this section with one on prototyping, to explain when to prototype and clarify the fine line drawn between terms. After prototyping comes a discussion of domain languages since “computer languages influence how you think about a problem and how you think about communicating.”

The chapter ends with a discussion of estimating. The discussion begins with assumptions people make about estimate accuracy based on the units of measure used. For example, stating that a project will take about 180 calendar days encourages others to think an estimate is specific, while stating it will take about six months will more likely lead them to assume the estimate is not as specific. This is because the implied margin of error in the first case is days, while, in the latter, it is months.
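
The point is easy to make concrete. In the sketch below, the margin-of-error convention is hypothetical, chosen only to mirror the book’s example:

    # The unit quoted implies the margin of error the listener hears.
    # This mapping is a hypothetical convention, not taken from the book.
    IMPLIED_MARGIN_DAYS = {"days": 1, "weeks": 7, "months": 30}

    def quote_estimate(value, unit):
        """Report an estimate with the precision its unit implies."""
        margin = IMPLIED_MARGIN_DAYS[unit]
        return f"about {value} {unit} (implied margin: +/- {margin} days)"

    print(quote_estimate(180, "days"))   # heard as precise to the day
    print(quote_estimate(6, "months"))   # same duration, looser reading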

The third chapter, “The Basic Tools,” emphasizes the craftsman aspect of software development, stating “tools amplify your talent.” Hence, the chapter’s point is to advise programmers about “investing in [their] own basic toolbox.” The authors start with a statement about raw materials and the “workbench” (where a programmer will do the work). In addressing these, the authors show a decidedly Unix® preference, stating that “the best format for storing knowledge persistently is plain text” and that, therefore, the most appropriate workbench for plain text “is the command shell.”

In their comments about “power editing,” the authors continue their preference for a command-line approach over cut-and-paste, wysiwyg, and GUI tools. They advocate finding one “powerful” (that is, feature-rich) editor and learning it well.

On the subject of debugging, the authors state “it’s a given” since “no one writes perfect software” and one should “embrace the fact that debugging is just problem solving, and attack it as such.” Three pieces of advice given are:


• Use tools (like compiler diagnostics) rather than one’s own effort where possible.
• Be prepared to speak to (even watch) the user who reported the problem to get more detail.
• “Brutally test both boundary conditions and realistic end-user usage patterns.”


In chapter 4, “Pragmatic Paranoia,” defensive driving in design is covered through:


• Design by contract, which addresses the interface agreements that need to exist between modules “to ensure correctness.”
• Avoiding the “it can’t happen” mentality by defensive programming.
• Assertive programming: if “it can’t happen,” check for it anyway to “ensure that it won’t.”
• Use of exception language constructs when they exist (especially to reduce ugly-looking error checking/handling code).
• Balancing resources by having explicit allocation and deallocation plans for memory, transactions, threads, files, timers, and so on (a sketch of two of these ideas follows this list).
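
Two of these ideas translate directly into code. A minimal sketch in Python follows; the function and its contract are invented for illustration, not taken from the book:

    import math

    def sqrt_floor(n: int) -> int:
        """Integer square root with an explicit contract."""
        # design by contract: the caller must honor the precondition
        assert n >= 0, "precondition violated: n must be non-negative"
        r = math.isqrt(n)
        # assertive programming: check the "can't happen" case anyway
        assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
        return r

    # balancing resources: allocation and release paired in one place,
    # so the file is closed even if an exception is raised
    with open("result.txt", "w") as f:
        f.write(str(sqrt_floor(10)))  # writes 3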


The next chapter, “Bend or Break,” pursues the same defensive approach since “life doesn’t stand still” and systems will change. Topics covered include:


• Decoupling techniques
• Metaprogramming
• Managing changes in state between modules
• “Blackboarding” to “decouple…objects from each other…providing a forum where knowledge consumers and producers can exchange data anonymously and asynchronously.”


Chapter 6, “While You Are Coding,” returns to “craftsman” issues for “deliberate programming,” including algorithm performance, code refactoring, easy-to-test coding, and avoidance of “wizard code,” that is, code that gets generated without the programmer understanding how or why.

Chapter 7, “Before the Project,” covers requirements gathering, analysis, and specification practices. The authors start by stating “a requirement is a statement of something that needs to be accomplished” and that “good” requirements avoid embedding business policy in them (which should be treated as metadata). For example, a requirement that says, “Only personnel can review an employee record” may result in explicit departmental test coding throughout the system, whereas a requirement stating that “only authorized users” can do so would more likely lead to a more flexible access control system for data records.
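
The distinction is easy to see in a sketch where the policy lives in data rather than in code; the table and role names below are invented for illustration:

    # Policy as metadata: who may review a record type is data, not code.
    # The ACL table and role names are hypothetical.
    ACCESS_POLICY = {"employee_record": {"personnel", "auditor"}}

    def can_review(user_roles, record_type):
        """True if any of the user's roles is authorized for the record."""
        return bool(set(user_roles) & ACCESS_POLICY.get(record_type, set()))

    print(can_review({"personnel"}, "employee_record"))  # True
    print(can_review({"engineer"}, "employee_record"))   # False

Changing the policy then means editing the table, not hunting down departmental checks scattered through the code.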

The final chapter, “Pragmatic Projects,” “moves away from individual philosophy and coding to talk about larger, project-sized” issues. Sections discuss “establishing some ground rules and delegating,” projectwide testing philosophy and tools, making the “chore” of documenting “less painful and more productive,” dealing with the “perception of success,” and “taking pride in what you do” by “signing your work.” The authors state “the single most important factor in making project-level activities work consistently and reliably is to automate your procedures.” To do this, they recommend “appointing one or more team members as tool builders to construct and deploy the tools that automate the project drudgery.” (How to do this, and get project work done, with very small teams is not specifically addressed in the book, though.)

I recommend this book. It gets across ideas without appearing too preachy or dogmatic. As with eXtreme Programming Explained, it puts clear, specific stakes in the ground. Readers will find things they disagree with, but the effort invested when they disagree is worthwhile. (Readers can also go to the authors’ Web site for the book at www.pragmaticprogrammer.com.)

Scott Duncan (softqual@mindspring.com) brings more than 28 years of experience in all facets of internal and external product software development with commercial and government organizations. For the last eight years he has been an internal/external consultant helping software organizations achieve international standard registration and various national software quality capability assessment goals. He is the current standards chair for ASQ’s Software Division and is a member of the U.S. Technical Advisory Group for ISO/IEC JTC1/SC7 standards in software engineering.


A Practical Guide to Information Systems Process Improvement.

Anita Cassidy and Keith Guggenberger. 2000. Boca Raton, Fla.: St. Lucie Press. 269 pages.
ISBN 1-57444-281-3

(CSQE Body of Knowledge areas: Software Processes, Software Project Management)

Reviewed by Gordon W. Skelton

A commitment to quality requires continuous process improvement. Cassidy and Guggenberger recognize the importance of such a commitment. They provide readers with a guidebook that focuses on the many facets of information systems management and how one can approach process improvement in that context.

The primary focus of the book is the importance of properly managing information services (IS) processes. Emphasis is placed on identifying the current state of affairs and then identifying those areas where process improvement can have the greatest impact. Understanding the IS process arena and then documenting the current state of affairs begins the road to improvement. Understanding IS processes includes both the internal organization and its functionality, as well as the external environment of the organization.

The authors agree with W. Edwards Deming when they point out that “typically, the root cause of failure or defects is due to process and organizational failure, not failure of the people.” For this reason it is crucial that one first understand and document the existing IS process and then identify those areas that can be impacted by process improvement.

To implement a continuous process improvement effort within the IS function of an organization, it is imperative to have proper management commitment and to ensure that the individuals involved in the documentation and actual improvement effort have adequate training and access to the latest tools used in process improvement and measurement.

Because of its practical nature, the book provides readers with a number of helpful summaries, reminders, and places to record thoughts and the current state of the IS process. In addition, the authors provide checklists and questionnaires in the appendices that can easily be used either as they are presented or adapted to the individual IS department’s needs.

Overall, I recommend this book as a beginning tool for evaluating and improving IS processes. Because of the concise writing, readers can quickly gain important insight and recognition of the IS process and how one’s process can be documented and plans for improvement can be implemented.

Gordon Skelton (gwskelton@mvt.com) is vice president for information services for Mississippi Valley Title Insurance Company in Jackson, Miss. In addition, Skelton is on the faculty of the University of Mississippi, Jackson Engineering Graduate Program. He is an ASQ Certified Software Quality Engineer and is a member of the IEEE Computer Society, ACM, and AITP. Skelton’s professional areas of interest are software quality assurance, software engineering, process improvement, and software testing.

 

OPEN Modeling with UML.

Brian Henderson-Sellers and Bhuvan Unhelkar. 2000. London: Addison-Wesley.
245 pages.

ISBN 0-201-67512-9

(CSQE Body of Knowledge area: Software Processes)

Reviewed by David Kingsbery

 

OPEN Modeling with UML presents a methodology to provide the muscle and skin of processes to the skeletal framework provided by modeling languages. The authors explain the OPEN (Object-oriented Process, Environment, and Notation) methodology as "a third-generation, full life cycle, process-focused, methodological approach that is especially suited for component-based, object-oriented (OO) and Internet-based software developments, as well as for business modeling and systems modeling." This book is designed to support a multiday training class, but it is helpful to anyone interested in learning about UML and a process methodology to support it.

According to the introduction, OMG is considering OPEN as the standard development process methodology. Many development shops may be using a modeling language/tool without well-defined processes. These environments could improve their process repeatability and thereby their quality by learning and implementing the OPEN methodology, and this book can help that happen.

The authors wrote this book to provide process and modeling-related information to a wide range of audiences, but their primary target is at an "introductory industrial level or a senior academic level." The book’s title, OPEN Modeling with UML, describes its focus, but as someone who has been only on the periphery of OO development, I can attest that it provides much more.

The book is written around a pair of case studies (chapters 4 and 5), one more complex than the other. Before the authors present the case studies, they provide a solid foundation in the earlier chapters. Chapter 1 gives a brief overview of the OPEN process and UML. In chapter 2 they describe the key elements of the UML, and in chapter 3 they examine how OPEN supports modeling through a number of activities, tasks, and techniques. This structure works well, but if one is not familiar with UML and the OO vocabulary, he or she might find it helpful to read chapter 3 before chapter 2. One of this book’s strengths is its definitions and explanations of OO terms and concepts, and most of these are in chapter 3.

The book is also full of references (many as recent as 1999) and explanatory footnotes. Its bibliography and Internet references make it easy to study either OPEN or UML further, and its index is robust and accurate. One additional appendix that would have been helpful is a listing of the UML notation standards. The book describes UML notation in context, but it would help the novice understand the sample diagrams better if the notation were also listed together with a reference index.

The techniques presented take UML off the tool-shop wall and bring it to the project at hand. As a process methodology, OPEN describes how the tools of UML can be used, but OPEN Modeling with UML also tempers one’s expectations to understand some of UML’s limitations.

Since this book was designed as the basis for a two-day training workshop, it uses techniques to demonstrate mistakes that might be made. It takes the student down an apparently logical path, only to stop and describe the potential problems with it and develop a better solution. This technique and its real-world attitude make this book an educational tool, not just a theoretical proclamation. If one is reading OPEN Modeling with UML as an introduction to the subject, he or she may need to read this book and then review it again to get its full benefit. If one is an OO and UML expert, he or she will easily cull the OPEN methodology kernels.

David Kingsbery (David.Kingsbery@FirstUnion.com) has more than 20 years’ experience in all phases of software development. For the last six years his focus has been on testing and quality assurance for client server, mainframe, and Web development projects at First Union National Bank. Kingsbery is an active member of the Charlotte IT Quality Assurance Association (CITQAA) and the Software Process Improvement Network (SPIN).


Introduction to Software Engineering.

Ronald J. Leach. 2000. Boca Raton, Fla.: CRC Press. 428 pages.
ISBN 0849314453

(CSQE Body of Knowledge area: Software Processes)

Reviewed by Milt Boyd


“This book is intended for juniors and seniors majoring in computer science. The students will have used a modern programming language (C, C++, Java, Ada, or Pascal), on a project larger than a few hundred lines of code. They will have had a course in data structures. The goal is to take students from an educational situation and move them to an understanding of the development of software systems that are more complex by several orders of magnitude. There is a strong emphasis on approaches in current use, with examples of actual industry practices.” [extracted from the preface]

Leach says software engineering is the term for software development performed according to accepted industry practice, with good quality control, adherence to standards, and efficient, timely delivery. The term refers to a systematic procedure used in the context of a generally accepted set of goals for the analysis, design, implementation, testing, and maintenance of software.

The approach is practical throughout, with heavy emphasis on team projects, use of the Internet as a resource, and discussion of tools in common use.

Six chapters form the heart of the book. They treat activities in the software life cycle: requirements, design, coding, testing, delivery and documentation, and maintenance. An introduction and material on project management at the beginning, and research issues in software engineering at the end, complete the book. Each chapter ends with a summary, further reading, and a number of exercises. There is an appendix on command-line arguments, so students are better prepared to understand the interaction of program and operating system. There are almost 20 pages of references, covering the classic literature of the industry up to 1999, and an index.

The book promises much but delivers somewhat less. Leach presents a collection of anecdotes in section 1.7 to convince readers that software engineering is an engineering discipline. Unfortunately, his examples make software engineering seem comparable to furniture design, as both have systematic, organized, structured processes requiring extensive skill and knowledge. As presented, software engineering appears to be a highly disciplined craft.

The preface claims that a software engineer adheres to standards. The only standards discussed at length, however, are coding standards. Of the IEEE software engineering standards, he references only ANSI/IEEE 729-1991 on terminology. The introduction says that, for example, the nuclear power industry needs safe and reliable software, but there is no mention that it has specific standards on software engineering to achieve that goal, as do some other industries. There is no mention that regulatory bodies (such as the FDA, FAA, and so on) can affect the design and development of software products. There is no discussion that, or how, ISO 9000 (or other international standards) might influence the practice of software engineering.

Similarly, the preface claims that a software engineer produces good quality software in a timely manner. However, process initiatives such as the Software Engineering Institute’s Capability Maturity Model (CMM) are mentioned briefly but not described in detail. They are considered to be in the domain of project management. There is no mention of SPICE. It is not that the book gets anything wrong; rather, it presents a very limited picture of software engineering.

The discussions are pitched to undergraduates lacking practical experience. They lack sufficient depth to be useful to a practicing professional. A consequence of the book’s structure is that everything (except project management) is assigned to one or another part of the software life cycle. Many people prefer to consider some topics as parallel and independent tracks, bearing on many activities in the life cycle.

Thus, the topics of configuration management (and change control), and reviews and inspections, are presented in the chapter on coding. But change control and configuration management issues are pervasive on a software project, from requirements through to maintenance. Similarly, the topic of quality assurance is presented briefly in the chapter on testing and integration. Quality assurance (QA) is mentioned in other sections, but always with the idea that it is the job of the QA team.

In the chapter on requirements, I missed a discussion of quality function deployment, a powerful technique for capturing and comparing customer requirements. The book also does not present Kano’s diagram, a useful way for many students to distinguish various kinds of requirements.

The discussion of ethics in section 3.10 focuses on “fatal flaws” in requirements: impossible computations, impossible performance demands, and inconsistent interactions (internal and external). There is no explicit discussion of conflict of interest or of product liability. It appears that ethics is treated as a purely internal matter of not promising more to project management than can be delivered. There is no discussion of outside stakeholders.

There is little reference to professional societies for software engineers. The book does refer to the IEEE in the discussion of professional ethics, and for definitions of terms. Chapter 9 identifies the IEEE and the ACM, and names a number of journals of interest. It may seem parochial, but the book displays no awareness of ASQ’s Software Division.

The index is disappointing. It was not carefully edited and revised in synchronization with the body of the book. Readers must be alert for typographical and editorial errors.

This book sets forth the challenging variety of goals that good software must satisfy; it may be useful to undergraduates as an introduction to some current good practices in industry used to meet those goals. However, it is not a satisfying introduction to engineering, defined as “the application of science or abstract knowledge in the design, planning, construction, and maintenance of useful objects.” It promotes the practice of good (even excellent) craftsmanship, but not engineering.

For the software quality engineer preparing for certification, this book provides some general material but little specific help.

Milt Boyd (miltboyd@aol.com) is a member of ASQ’s Software, Reliability, and Quality Management Divisions. He is certified as an ASQ Quality Manager, and is certificated by IRCA as a lead auditor of quality management systems. He is currently employed as a system engineer by Avidyne Co. of Massachusetts.

TESTING

Software Testing and Continuous Quality Improvement.

William E. Lewis. 2000. Boca Raton, Fla.: Auerbach/CRC Press. 656 pages.
ISBN 0-8493-9833-9

(CSQE Body of Knowledge area: Software Testing)

Reviewed by Pieter Botman

For software engineers, there is no doubt that testing is an important part of the software development process. Aside from the economic/investment aspects, software personnel recognize the importance of testing because of its close association with software product quality—failure to test properly (with all that this entails) will have a direct impact on the defects found post-delivery.

The author states in the introduction that this book is intended to tie various aspects of quality to the testing process, that is, “to provide a continuous quality improvement (CQI) approach to promote effective testing methods and provide tips, techniques, and alternatives from which the [reader] can choose.” This is an honest representation of the book’s content—the primary thrust of the book is testing, more so than CQI.

Veteran software quality practitioners will recognize the interesting convergence of two large topic areas—testing and quality improvement. The former topic might be considered quite technical, narrow, and deep, while quality remains a broad topic, with fundamental principles applying to processes in any domain. The author chooses to discuss testing largely from an organizational and management standpoint, not really delving into technical design or architectural issues related to testing. But the control and organization of testing steps and procedures can be viewed as processes, which can in fact be improved, and so the author is largely attempting to apply the Deming/Shewhart process improvement principles to testing.

In the first section, the author introduces important high-level quality concepts, such as the cost of quality and verification vs. validation. He then expands upon verification within software development and relates that “traditionally, software testing has been considered a validation process.” But verification occurs at every stage of development, and the author seems to merge testing and verification: “When verification is incorporated into testing, the testing occurs throughout the life cycle.” I had to think long and hard about this statement, and I had some difficulty in reconciling my own use of the term “testing” with the author’s. This point crops up repeatedly throughout the book (an example: “testing the requirements with technical reviews”), so I recommend that readers get past this and focus on absorbing the substance of what the author is saying.

The author begins to expand upon the plan-do-check-act (PDCA) cycle as it applies to software testing.

“The PDCA approach…is a control mechanism used to control…a system. [It] defines the objectives of a process, and checks to determine if the anticipated results are achieved. If they are not achieved, the plan is modified to fulfill the objectives.”

Specifically with respect to testing, he relates the “plan” step of the PDCA cycle to test planning, a major part of which is the software test plan. He relates the “do” step of the cycle to the design and execution of the respective tests defined in the test plan. The “check” step of the cycle is related to the measurement of the progress of test execution. Finally, the “act” step of the cycle is related to handling “work not performed according to the plan, or results that were not anticipated in the plan.”

While I believe strongly in the PDCA cycle, its application to testing by the author left me a bit uneasy. In the largest sense, “testing” can be viewed as a process, including many tasks and activities, so planning, measurement, and feedback (PDCA) can certainly be used to improve the process at a high level. On the other hand, a distinction should be made between the more detailed testing tasks (which can be quite project- and design-specific) and the higher level testing process. These lower-level testing tasks are not standardized and performed repeatedly, and are not the target of process improvement per se.

In large measure, PDCA has generally been explained and promoted as a process improvement or quality management technique, not a lower-level technique for controlling the quality of output of one (more specific) task. In software engineering, it is appropriate for any given task to be planned, the results evaluated, and rework undertaken if needed.

The author certainly understands and explains the concepts of verification. In the second section, he describes life-cycle phases (requirements, logical design, physical design, program unit design, and coding), and introduces verification methods and techniques, such as reviews/walkthroughs/inspections, as they apply to each phase. In Appendix F, the author provides a checklist of possible “defect categories” for each of the phases, which are useful as starting points for discussions among project personnel.

In the third section, the author introduces the spiral development methodology. He provides some comparisons with the waterfall development model, and provides relative advantages and disadvantages of each. He mentions the iterative methodology briefly as a variation of the spiral methodology, one in which “the development team is forced to reach a point where the system will be implemented.” The PDCA cycle is mapped to the testing-related activities in one spiral iteration.

The subsequent chapters break down the testing activities further, addressing planning, designing, executing, managing, and measuring tests as they are executed within a given spiral iteration. The author provides many checklists and plan breakdowns, many of which are organizational (as opposed to technical) in nature. He goes to great lengths to introduce many different types of tests, particularly tests he groups under the category of “system tests.” These various system tests were thought provoking, but perhaps not for the reason intended by the author. Some of these were clearly subsystem functional tests, others were operational tests—perhaps there could have been a better way to organize and categorize these tests.

The next section contains general guidance about testing tools and a short summary and comparison of some 21 popular testing tools from six major vendors, such as Mercury Interactive and Rational. While useful for beginning testers unfamiliar with any tools, this summary and comparison will rapidly become dated, and more space would be necessary to adequately compare even this small set of tools.

There is a similar chapter on maintenance tools, which runs the gamut from HTML validation tools to Web regression tools to problem management tools to Java code metric and code coverage tools. The author includes a brief summary of tool features in most cases. While the lists were interesting and potentially valuable to some readers, I was left wondering about the choices made in categorizing these tools, and the overlap with testing tools associated with other parts of the book.

The author includes a section on testing in the maintenance environment, which includes an overview of the management issues related to testing, configuration management, and release control for a software product that is already in production. Here again the author relates the PDCA philosophy to small maintenance cycles, in which the tester plans tests and improvements based on various criteria (severity level of the identified problem, current list of desired improvements, and so on). But the author also stresses the larger aspects of the maintenance cycle, that is, evaluating, modifying, and reexecuting system-level tests as appropriate. Once again the author is strong on organizational tools and checklists, and introduces metrics in relation to defect analysis.

The book contains eight appendices. In some cases, these are near repetitions of content in the main sections of the book, while in other cases the author expands greatly. Most notable and useful, perhaps, are the appendices presenting the author’s test templates, checklists, and a summary of software testing techniques. While the large lists of “testing techniques” are impressive, I found them to be not strictly classified—included in the list are items such as histograms, JADs, Pareto analysis, and structured walkthroughs. Capers Jones refers to these as quality control methods, or defect removal techniques.

Much of the material in this book is worthwhile, although not all of it was evenly and logically presented. Sometimes, worthwhile information is provided but out of context. In other cases, the author overemphasizes the generalities (for example, repeatedly restating the PDCA philosophy).

The author has bravely attempted to tackle two large subjects (testing and continuous improvement) and the relationship between them. I’m not certain that this work has succeeded, because there is a need to address many aspects of testing in the small before addressing the larger, projectwide aspects of continuous improvement. I agree with the author that testing can be viewed as a process, and thus is ripe for the application of continuous improvement strategies. As more projects adopt flexible, spiral, or iterative development methods, however, each cycle involves many aspects of development, not simply testing alone. This should set the stage for discussion of CQI as applied to an integrated development process, not merely the software testing process.

While this book might be useful to software engineers with some experience and insight into testing and the integrated software development process, I would recommend a different approach for those seeking a clear foundation in these two topics. For a more thorough grounding I recommend classics such as Principles of Software Engineering Management by Tom Gilb, and Managing the Software Process and A Discipline for Software Engineering by Watts Humphrey.

Pieter Botman (p.botman@ieee.org) is a professional engineer registered in the province of British Columbia. With more than 20 years of software engineering experience, he is currently an independent consultant, assisting companies in the areas of software process assessment/improvement, project management, quality management, and product management.

 

QUALITY MANAGEMENT


Project Retrospectives: A Handbook for Team Reviews.

Norman L. Kerth. 2001. New York: Dorset House Publishing. 268 pages.
ISBN 0-932633-44-7

(CSQE Body of Knowledge areas: General Knowledge, Software Quality Management, Software Processes)

Reviewed by Linda Westfall


On the cover of Project Retrospectives: A Handbook for Team Reviews there is a quote from Gerald Weinberg: “This book belongs in the library of every project manager, for any kind of project, everywhere.” After reading this excellent “how-to” book, I not only agree with this statement but would expand it to include every software quality engineer as well. Whether one uses the term postmortem, post-project review, retrospective, or something else to refer to these lessons-learned gathering sessions held at the end of a project, they are among the primary tools for software process improvement.

One of the strongest features of this book is a list of detailed, step-by-step exercises that facilitators can select from to tailor their retrospectives to the specific needs of each project. Kerth defines the purpose of each exercise, when to use it, and its typical duration. He then includes easy-to-understand steps for executing the exercise. For many exercises, the author also includes true stories of his personal experiences using the exercise in practice, additional information about the background and theory behind the exercise, and/or references for additional reading.

Kerth includes two lists of exercises. The first is for use in any retrospective. One of my favorites from this group is the “Create Safety” exercise, used at the beginning of the retrospective. It is designed for situations in which retrospectives are a new practice and the facilitator’s preparation interviews have indicated that people may have fears that prevent them from participating fully. The exercise empowers those people by letting them discuss what can be done to increase safety and then incorporating their suggestions into ground rules for the retrospective. Other favorites include:

  • The “Artifact Contest,” which looks like a fun way of getting people to talk about the project and share their stories.
  • “Develop a Time Line,” in which the team builds a chronology of the project and its significant events and then “mines” that time line to create lists of what worked well, what was learned, what should be done differently next time, what is still puzzling, and what needs to be discussed in greater detail.
  • “Offering Appreciation,” which provides an opportunity to give recognition to project members.
  • “Change the Paper,” which focuses the team on making changes as a result of retrospective findings.

The second list includes special supplementary exercises for use during what Kerth calls “postmortems,” retrospectives for projects that have failed. This list follows a chapter that discusses the behaviors typically exhibited by people who think they have failed and how to lead postmortems. In this list, I particularly liked the “CEO/VP Interview” exercise, in which company leaders are interviewed about a significant career failure, what they learned from it, and how it affected their careers.

No matter what type of facilitation one does, Kerth’s excellent chapter “Becoming a Skilled Retrospective Facilitator” is a must-read. This chapter presents six lessons learned that outline the fundamental skills of facilitation, then details a set of more advanced facilitation procedures that provide a wealth of practical knowledge, along with additional references to enhance one’s facilitation skills.

While this book has the detail one needs to understand and apply specific exercises and facilitation skills, it also has the breadth to address a wide range of topics, including how to prepare for a retrospective, how to sell a retrospective, handling retrospectives in light of legal issues, collecting project data, where to hold retrospectives, and creating a community. The book does an excellent job of discussing the people issues involved in retrospectives and provides many useful suggestions on how to deal with them. It even includes a checklist to help keep readers from forgetting the details, from flip charts, markers, and masking tape to a box of tissues.

I found this book insightful, interesting, and easy to read. But most important, it is full of ideas and techniques that I intend to put to use.

Linda Westfall (westfall@idt.net), current chair of the ASQ Software Division, has 20 years of experience in software engineering, quality, and metrics. Prior to starting her own business, The Westfall Team, Westfall was the senior manager of quality metrics and analysis at DSC where she designed and implemented its corporatewide software metrics program. She is an ASQ Certified Software Quality Engineer, ASQ Certified Quality Auditor, and a certified manager from the Institute of Professional Managers.

 

Enterprise Knowledge Management: The Data Quality Approach.

David Loshin. 2001. San Francisco: Morgan Kaufmann/Academic Press.
500 pages.
ISBN 0124558402

(CSQE Body of Knowledge area: Software Quality Management)

Reviewed by John Horch


How does one know if he or she has enterprise knowledge that must be managed? Do those who use information to run or manage their business, be it financial, production, sales, customer, marketing, supplier, or whatever, have knowledge to be developed and managed? In his preface, Loshin states, “Every business process that uses data has some inherent assumptions and expectations about the data. And these assumptions and expectations can be expressed in a formal way, and this formality can expose much more knowledge than simple database schema and Cobol programs.”

That said, the goal of Loshin’s book is to “demonstrate that data quality is not an esoteric notion but something that can be quantified, measured, and improved.” And, no, Loshin is not talking about managing data in the same context as “managed news.”

This is a long book, with 500 pages and 18 chapters, but the many chapters allow the author to explore various data concepts such as ownership and data types, quality, metadata, cleansing, and improvement. Each chapter can be read independently of the others, but there is an intended flow from chapter to chapter that steadily builds the reader’s understanding of data, their transformation into knowledge, the need for clean data, and the ultimate application of managed knowledge to the better management of the enterprise.

It is not surprising that, as I read more and more of the book, I felt as though I were reading about requirements or product quality management. In many places the terms “data” or “knowledge” could have been replaced by “requirements” or “product” with little or no loss of context.

One might argue that Loshin has tried to oversimplify his topic, and perhaps he does take readers to an extra level of detail. In his defense, this extra detail will be of value to newcomers to the field of data rules, modeling, warehousing, and mining. As such a newcomer, I found that my understanding grew significantly. As a longtime software requirements bigot, I found I could skip much of the detail that looked like requirements management.

Overall, this is a good investment for anyone who needs to understand the application of techniques for data quantification, measurement, and improvement. People in these fields should buy this book.

John Horch (j.horch@ieee.org) is director of quality engineering for the COLSA Corporation in Huntsville, Ala. His more than 40 years of experience includes administration, education, and management, involving large-scale computer programming, customer interface, software control, verification and validation, and the management of corporate facilities.

Software Safety and Reliability: Techniques, Approaches, and Standards of Key Industrial Sectors.

Debra S. Herrmann. 1999. Los Alamitos, Calif.: IEEE Computer Society Press.
500 pages.
ISBN 0-7695-0299-7

(CSQE Body of Knowledge areas: General Knowledge, Software Quality Management)

(See “Short Takes” review in SQP vol. 3, issue 3, June 2001.)

Reviewed by Joel Glazer


Recently, I was listening to a radio news report about the cost of keeping “safe” arsenic levels in drinking water. Alarms went off in my head—what is a “safe” level of arsenic? Does each person have a tolerance for a different level of arsenic? Who sets the “safe level”? How does one establish a safe level, and at what cost does one maintain that level? Is $1 billion per saved life a cutoff point, as the reporter suggested? Is software used to test the level of arsenic in the water? If so, how would one validate such software? Presently, safe levels are such that only a few people might die each year from arsenic in public drinking water. To get the levels of arsenic down to zero would cost society at least $1 billion per resident per year. That figure is prohibitive, so the government sets a cutoff point based on a dollar figure.

The author dedicates this book to the “Concept of Pikuach Nefesh.” This Jewish concept, in a nutshell, states that human life is sacred and is to be placed above all other values, including the holiest values and ideals in the Jewish faith, such as the sanctification of the Sabbath. Thus, preserving human lives, or preventing their loss, is paramount. But does that also mean at all costs to the individual or society? As the previously mentioned news report and many other situations show, when the cost of providing a safety level is pitted against the potential loss of lives, society does arrive at a cost model that it is willing to tolerate. Society allows this to occur daily—witness the transportation and automotive systems and the public health system, which carry risk levels that society as a whole is willing to endure, and lives are lost daily. The author does not provide the answer to the cost question. She states that prior to 1993 between 1,000 and 3,000 people were killed by failures in computer systems; no figures or statistics are provided for the years after 1993 (p. 5).

What Herrmann does provide in this book is a rich set of resources to guide interested readers in finding their way through a host of standards, guidelines, and regulations related to software safety and reliability (SW S&R), both in general and in specific industries. Because of the potential for misuse of the information, however, the author begins the book with a disclaimer that points out the risks one takes in using information blindly. Many companies that provide systems containing software do the same. In this litigious society, one must protect oneself from any information that can be misused.

The goal of this book is to raise the consciousness and sensitivity of engineers, managers, and the public regarding the subject, and to provide practical information in one place. The book will not create experts in the field of software safety and reliability, but it provides enough food for thought, and enough resources, for readers to seek out the experts for help and to know when to call them in.

The book is organized into four sections with 12 chapters. The first section provides readers with an overview and a basic understanding of safety and reliability. It will not make readers instant experts on the subject, but it will give them an appreciation for the task safety and reliability engineers face. Section 2, “Approaches Promoted by Key Industrial Sectors to Software Safety and Reliability,” deals with five critical industries: transportation, aerospace, defense, nuclear, and biomedical. Section 3, “Approaches Promoted by Non-Industry Specific Software Safety and Reliability,” deals with international and national standards and guidelines developed to date that address issues of S&R for software directly or by implication. In all, 19 standards are presented in one book for quick reference. Section 4, “Software Safety and Reliability Techniques, Approaches, and Standards: Observation and Conclusions,” provides 10 summary themes derived from the previous sections:

  1. Software safety is a component of system safety.
  2. Software reliability is a component of system reliability.
  3. High integrity, high consequence, and mission critical systems need to be both safe and reliable.
  4. A “good” software engineering process is insufficient by itself to produce safe and reliable software.
  5. To achieve software safety and reliability, certain planning, design, analysis, and verification activities must take place.
  6. The achievement of software safety and reliability should be measured throughout the life cycle by a combination of product, process, and people/resource metrics, both quantitative and qualitative.
  7. Software safety and software reliability are engineering specialties that require specialized knowledge, skills, and experience.
  8. Some safety and reliability concerns are the same across industrial sectors and technologies, while some are unique.
  9. Everyday examples should not be overlooked when classifying systems as safety critical or safety related.
  10. A layered approach to standards is the most effective way to achieve both software and system safety and reliability.

Each of the 12 chapters concludes with a summary, discussion problems, and additional resources.

Two annexes provide readers with a list of 20 organizations involved in software safety and reliability standards, and a list of 30 commercial products available to assist in performing software safety and reliability analyses.

In chapter 2 the author provides a rundown of safety and reliability basics; the differences between hardware and software issues; and errors of omission and commission, as well as operational errors. In section 2.4 she explains the methodology that helps achieve and assess safety and reliability, namely design criteria, development and operational criteria, performance criteria, reuse and COTS selection criteria, and verification techniques. A key point Herrmann makes is that “traditional testing and other dynamic analysis techniques are best for uncovering functional errors,” whereas “static testing techniques are best for highlighting safety and reliability problems” (p. 40). The author quotes a Northern Telecom report from 1995 that says “One defect was found by traditional testing for every seven defects found by static analysis techniques.” Not stated or known, however, is the effort involved in either method—so one cannot conclude that this is a universal ratio, or that one method is superior to the other.

The author suggests that there are no “latent defects” (p. 23)—that all software failures are systematic, not random. The “time element” is not a factor in software failures (p. 24), unlike hardware failures, in which time is a major contributor.

The book is full of references, quotes, and tools to be used wisely and appropriately. Readers will not emerge as instant reliability and safety engineers, but they will gain a good appreciation for the complexity of the task facing these engineers.

Joel Glazer (joelglazer@ieee.org) is the current ASQ Region 5 Software Division councilor. He has more than 30 years’ experience in the aerospace engineering, software engineering, and software quality fields. He has master’s degrees from The Johns Hopkins University in computer science and management science. He is a member of IEEE and a senior member of ASQ, as well as an ASQ Certified Software Quality Engineer, Auditor, and Quality Manager. Glazer is also a Fellow engineer in the Software Quality Engineering Section at Northrop Grumman ESSS in Baltimore.


Note: The video series “Essential Software Engineering,” reviewed in the last issue, will no longer be available after December 31, 2001. A revised series will be available at a later date and will be reviewed in this column.

 
