Software Quality Professional Resource Reviews - June 2003 - ASQ


From the Resource Reviews Editor:

This will be my last collection of resource reviews. I am turning over the reins to Milt Boyd. You’ve seen his reviews for years, and I’m sure he will continue to bring a variety of resource reviews to SQP readers. If you would like to become a resource reviewer, please contact Milt.

This issue brings reviews of books covering four areas of the CSQE Body of Knowledge. One of the books is reviewed by two reviewers. There are good insights from both. I would also like to welcome one new reviewer—and the return of some reviewers we haven’t heard from in a while.

Enjoy—and thank you for your interest over the last three years.

–Sue Carroll


The People Capability Maturity Model: Guidelines for Improving the Workforce
By Bill Curtis, William E. Hefley, and Sally A. Miller


Six Sigma Software Development
By Christine B. Tayntor

Managing High-Intensity Internet Projects
By Edward Yourdon

Writing Better Requirements
By Ian F. Alexander and Richard Stevens


The Adaptive Enterprise: IT Infrastructure Strategies to Manage Change and Enable Growth
By Bruce Robertson and Valentin Sribar

Executable UML: A Foundation for Model Driven Architecture
By Stephen J. Mellor and Marc J. Balcer

The UML Profile for Framework Architectures
By Marcus Fontoura, Wolfgang Pree, and Bernhard Rumpe

Facts and Fallacies of Software Engineering
By Robert L. Glass

Component-Based Development: Principles and Planning for Business Systems
By Katherine Whitehead


Building a Project-Driven Enterprise: How to Slash Waste and Boost Profits Through Lean Project Management
By Ronald Mascitelli

Understanding Open Source Software Development
By Joseph Feller and Brian Fitzgerald

Managing Open Source Projects
By Jan Sandred


The People Capability Maturity Model: Guidelines for Improving the Workforce

Bill Curtis, William E. Hefley, and Sally A. Miller. 2002. Boston: Addison-Wesley.
587 pages. ISBN 0201604450.

(CSQE Body of Knowledge areas: General Knowledge, Conduct, and Ethics)

Reviewed by David Walker

This book provides an excellent introduction to the People Capability Maturity Model (P-CMM) and much insight into its implementation. There are many books on human resources in the information technology domain, but this one leads readers through the road map: five maturity levels that lead to excellence in the workforce. The first printing of this book was in December 2001, five months after the publication of P-CMM version 2. Other than the differences noted in this review, this book mirrors version 2 of the P-CMM, which can be downloaded from the Software Engineering Institute (SEI) Web site.

The book goes into great detail on the P-CMM and an approach to implementing it. Divided into three main parts, it provides the necessary background information, detailed discussion of the structure, and resources for the practitioner.

The first part includes an overview of the content and structure, interpretation issues, implementation issues, assessment methods, and case studies. One thing that stood out in the first chapter was the list of 10 principles that summarize the philosophy implicit in the P-CMM. This part is very thorough in discussing the issues an organization would encounter in implementing the P-CMM.

The next part describes in detail the practices that make up the managed (2), defined (3), predictable (4), and optimizing (5) levels. The goals, commitments, abilities, practices, measurements, and verifications are reproduced here and appear to be an exact copy of the P-CMM.

There are four appendices: References, Acronyms, Glossary of Terms, and Practice-to-Goal Mapping for the People CMM Process Areas.

It is always good to have an introduction, history, and overview of a large subject before diving in. I believe this book accomplishes this goal, and it’s a lot cheaper than a training course. I highly recommend it to those with human resource responsibility.

David Walker has a master’s degree in computer science from Northwestern University and is an ASQ Certified Software Quality Engineer with 18 years of software engineering experience in the communications and health care industry sectors. He is currently a senior information scientist at Pharmacia Corporation.



Six Sigma Software Development

Christine B. Tayntor. 2003. Boca Raton: Auerbach Publications. 322 pages. ISBN 0-8493-1193-4.

(CSQE Body of Knowledge areas: Software Quality Management and Software Engineering Processes)

Reviewed by Jayesh G. Dalal

These days it is rare to find a quality management or a business management publication that does not mention Six Sigma (SS). Christine Tayntor observed that many information technology (IT) departments do not use SS methods. She wrote this book to demonstrate how SS can be used to improve IT functions, including software development.

The book is a combination SS primer and a how-to guide for applying SS methods to many IT department functions. For an IT professional seeking to understand SS and its application to IT functions, this book is a good introduction. Throughout the book Tayntor stresses the SS tenets of defect prevention, variation reduction, customer focus, fact-based decisions, and teamwork. Her presentation of the SS tools and techniques is cursory and, at times, imprecise. Even the suggested reading at the end of the book is lean with respect to SS tools and techniques. This book may generate interest in the SS method among IT professionals but, by itself, it will not enable readers to apply the SS method effectively. For that readers will need a greater understanding of the SS method.

The first third of the book provides an overview of the SS method. For this presentation, Tayntor uses a case study from a fictitious company. She describes the activities of an SS project team and introduces the various SS tools used by the team during the life of the project. The middle third of the book describes potential uses of SS in the system development life cycle (SDLC). The presentation is about equally divided between an explanation of the SDLC and the opportunities for using the SS method. Most of the presentation focuses on the waterfall approach; however, the use of SS with the rapid application development, prototyping, and spiral development approaches is briefly addressed. In addition, the application of SS to legacy system support and system maintenance functions is described. The last third of the book discusses the packaged software implementation and outsourcing functions and presents the use of SS for these functions. Some additional information about a few of the SS tools mentioned in the book is provided in the appendix.

Tayntor states her goal for the book as, “to remove the mystique surrounding Six Sigma and to demonstrate how Six Sigma tools and concepts can be used to enhance the system development process.” She has accomplished her goal. The book is easy to read and will make readers aware of the SS method and its potential use by IT departments. Personally, however, I expected to get more out of a 322-page book.

Dr. Jayesh G. Dalal is an ASQ Fellow and Certified Software Quality Engineer, and past chair of the ASQ Software Division. He is an independent consultant and provides consulting and training services for designing and deploying effective management systems and processes. Dalal was an internal consultant and trainer for more than 30 years at large U.S. corporations with manufacturing, software, and/or service operations. He has served on the Board of Examiners for the National Baldrige Award and the New Jersey Quality Achievement Award.


Managing High-Intensity Internet Projects

Edward Yourdon. 2002. Upper Saddle River, N.J.: Prentice Hall. 218 pages.
ISBN 0-13-062110-2.

(CSQE Body of Knowledge areas: Software Quality Management, Program and Project Management)

Reviewed by David Kingsbery

When I selected this book to read, I had one major question on my mind: “Are these projects really all that different?” Apparently Edward Yourdon had the same question when he wrote this book. In the first sentence he asks, “What’s different?”

As the book relates to quality assurance (QA) and testing, I was both relieved and disappointed that there really was not much new. The practices that Yourdon and others have been promoting for some time still apply in the high-intensity world of Internet projects; only now they are even more important. The one new activity presented, based on anecdotal information, is having pairs of developers work together on small and medium projects, as promoted by eXtreme Programming (XP).

This book chronicles all the increased risks associated with Internet development projects these days: high customer expectations, reduced staffs, very high visibility, short time frames, and increased technical risks. Haven’t I seen these before? Each time we approach new technologies, the technologists oversell and customers expect even more. This is a natural cycle, and whoever expects things to change is being naïve.

Just because this is déjà vu all over again does not mean this book doesn’t say anything. With this new technology comes a new generation of customers and technologists who need to hear the old lessons wrapped up in new examples. These newcomers have such grand expectations of the new that they have the tendency to throw away the old and think it doesn’t apply. This book erases that fallacy and emphasizes how the old methods are even more important in the pressure cooker of the new world.

What does this book emphasize relating to QA and testing? As I mentioned earlier, there is anecdotal encouragement of the XP concept of pair programming, which provides a built-in code review process. There is a complete chapter on managing the testing process, where Yourdon affirms established QA processes: “the process of testing an Internet system is essentially the same as testing any other kind of IT system.” He goes on to describe new testing categories but notes that “the step-by-step process of testing…is likely to be much the same.” The emphasis here is that testing is an ongoing process, not limited to the period after coding has finished. Here also, the book wraps the long-standing ideals of QA professionals in new examples and newly improved tools.

What QA processes and principles are emphasized? Test early and continuously, derive test cases from requirements, automate the testing activity, perform regression testing, use a daily build approach, and track and monitor the defect closure rate. These are all processes the QA professional is familiar with.
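As an illustrative sketch (my example, not Yourdon’s), consider how two of these principles combine in practice: a requirement such as “the system must reject empty passwords” becomes a test case derived directly from requirements, and a daily build would then run the whole suite automatically as a regression check. The `login` function and the requirement wording here are hypothetical.

```python
def login(username: str, password: str) -> bool:
    """Hypothetical system under test: reject empty credentials."""
    return bool(username) and bool(password)

# Each check below is derived directly from a written requirement,
# one of the practices the review lists.
def test_rejects_empty_password():
    assert login("alice", "") is False

def test_rejects_empty_username():
    assert login("", "s3cret") is False

def test_accepts_valid_credentials():
    assert login("alice", "s3cret") is True

# A daily build would run the full suite and track the defect
# closure rate from the failures it reports.
for check in (test_rejects_empty_password,
              test_rejects_empty_username,
              test_accepts_valid_credentials):
    check()
```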

What’s new are variations of the testing categories that one must focus on. In and of themselves they are not new, but they gain a greater importance in the new technology. These categories are: code-level testing, compatibility, navigation testing, usability and accessibility, performance testing, reliability and availability testing, and network testing. These categories have always existed in some fashion, but with the scale and complexity of the new Internet projects, they become even more critical than in the past.

For the experienced QA professional, this book helps one realize he or she is not a dinosaur in this new development world. One’s skills become even more important in these high-intensity Internet projects; they just need to be more compact and more focused. For the QA newbie, this book introduces long-standing best practices and recognizes the value of QA writings of the past that continue to shape the software quality profession. Use this and other older resources as stepping stones.

David Kingsbery has more than 20 years’ experience in all phases of software development. For much of the last seven years his focus has been on testing and quality assurance for client server, mainframe, and Web development projects at Wachovia National Bank. Kingsbery is an active member of the Charlotte IT Quality Assurance Association (CITQAA) and the Software Process Improvement Network (SPIN).


Writing Better Requirements

Ian F. Alexander and Richard Stevens. 2002. London: Addison-Wesley. 154 pages. ISBN 0-321-13163-0.

(CSQE Body of Knowledge areas: Software Quality Management, Software Engineering Processes)

Reviewed by Eva Freund

I have been searching for years for a good book on how to write a good requirement. I am still looking. Each time I see a book purporting to be the definitive approach to requirements gathering and identification I become hopeful. The book I envision would be written for both the novice and the experienced requirements engineer. It would not only define the art of requirements identification but would also demonstrate how requirements impact the subsequent phases of the development life cycle.

As stated in the book’s summary, “This book describes writing requirements as part of a process of dialog and negotiation between requirements engineers and stakeholders…requirements must bridge the gap between the people who have a problem that a system might solve, and the people who can build such a system.” The authors complete the summary by reminding the reader that the book is “intended to be a practical introduction to the field for students and new practitioners.”

In keeping with that intent, each chapter contains simple exercises that illustrate the key points in the chapter, hints for completing the exercises, and suggested answers. The authors have provided an appendix with the desired answers and some modicum of explanation for the answers.

If I had known when I began to read this book that it is for students and beginners I would have approached my task differently. For example, I would have looked for answers to these questions:

  • In what context have the authors placed the need for gathering and identifying requirements?
  • Have the authors referenced any professional standards (for example, IEEE, ISO) for developing system-level requirements?
  • Have the authors described the potential impact of system requirements that are not well specified?
  • Have the authors described how requirements are used during the subsequent phases of the development life cycle?
  • Have the authors described the need to control the system requirements once they have been identified (even as they evolve in the draft stage)?

In the preface, the authors state that this book is designed as a short, convenient overview for those who need to write requirements. They describe how to identify stakeholders and capture requirements, how to write good requirements, how to structure and organize requirements for system developers, and how to review requirements.

In their attempt to keep the language nontechnical and the concepts simple, the authors use examples from multiple arenas and applications. This multiplicity is confusing and unsettling. It would have been better had the authors provided, in each chapter, the primary concepts, followed by illustrations or applications of those concepts, and then a solvable problem that continues a case study. The case study would then provide the common thread from data gathering, to requirements identification, and through the requirements refinement stage.

As an experienced practitioner, my first read of this book led me to believe that it is overly simplistic. In my view, this book focuses on a singular goal: to enable the reader to derive a well-written requirement for eventual insertion in a requirements document or a requirements tool. I would rather the book describe several goals, including deriving requirements that would lead to a system that meets user needs and would enable the user to verify that the requirements had been satisfied by the developed system. In my second read of this book (through the eyes of a student), I was left with the feeling that the book was overly simplistic and devoid of essential relationships to the other aspects of system and software engineering.

In the back of this book there is an advertisement for another book, Software Requirements: Styles and Techniques by Soren Lauesen. Maybe that will be the definitive book I have been seeking.

Eva Freund is an independent verification and validation (IV&V) consultant with 20 years of experience in software testing, standards, and project management. She offers IV&V and software process improvement services to private- and public-sector organizations. She is an ASQ Certified Software Quality Engineer and a Certified Software Development Professional from the IEEE Computer Society.


Reviewed by Milt Boyd

This book is for those involved in the system engineering process in any company, regardless of product. It promises readers will learn how to write simple, clear requirements (so they get what they want), to organize requirements as scenarios (so everyone else understands what they want), and to review requirements (so they ask for the right things).

The authors attempt to provide a practical guide for those who endeavor to satisfy customers. Their approach is based on lessons learned in “the school of practical experience.” The preface quotes G.K. Chesterton (one of my favorite authors): “It isn’t that they can’t see the solution. It is that they can’t see the problem.” Most people feel pressure to start writing code, to produce working software, to show results, generally to be busy, before they fully understand the real customer needs.

This slim book has eight chapters, with a summary, index, answers to exercises, glossary, an example of requirements for a burglar alarm, and further reading. It can be thought of in two parts. The first starts with an introduction, then proceeds to identify stakeholders and discuss gathering requirements from them and others. The second part discusses structuring, organizing, writing, checking, and reviewing requirements.

The material is presented in a reasonable order, with lots of details and practical suggestions. (I’ve used almost all of their methods and can support their value.) Diagrams help to clarify a number of points. The exercises provide a good chance to practice what one has read. The authors’ answers are helpful, although the reader’s answers may be different. The authors also recognize and deal with the fact that requirements have both a technical and a personal side. Neither can be neglected, if one wants to succeed.

The authors show just one commercial tool, rather than examples of several. They do not simplify its use. In their discussion on structuring requirements, the authors omit one tool that I would like to recommend. Affinity diagrams have often been useful (in my experience) as a way for customers and users to identify requirements that “go together” and that ought to be examined as a whole.

The book is focused, as indicated by its title. It doesn’t provide much information on how requirements fit into the overall software life cycle, and readers may want more on how writing requirements fits into the larger scheme of things. A brief discussion of quality function deployment might have provided a connection between customer benefits and needs and engineering features, but that would be an extension of the book, not a completion. For those who want more, the resources in “Further Reading” are very good.

This book provides useful information for the Certified Software Quality Engineer Body of Knowledge, Section II.A.5 (Customer Requirements), and Sections III.C.1 and 2 (Requirement Types and Requirements Elicitation).

For those who are new to the task of gathering and writing requirements, this is a useful book to read and to follow. The examples and exercises provide useful guidance, so readers can acquire good skills. If readers want to increase their skills, then it may help them recognize areas of improvement. It probably will not be of much interest, however, to readers who are very experienced.

Milt Boyd is a member of ASQ’s Software, Reliability, and Quality Management Divisions. He has been certificated by IRCA as a lead auditor of quality management systems. He is currently project manager for Software Process Improvement, at Instrumentation Laboratory of Massachusetts.



The Adaptive Enterprise: IT Infrastructure Strategies to Manage Change and Enable Growth

Bruce Robertson and Valentin Sribar. 2001. Boston: Addison-Wesley. 290 pages. ISBN 0-201-76736-8.

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Milt Boyd

To quote, “The strategies and process described…will give you clear and practical ways to guide your company through Internet-induced change.” The “you” refers to executives and information technology (IT) professionals of large and small companies. More specifically, the target companies are described as distributed e-businesses that use a variety of multivendor configurations to provide customer-centric services.

The objective is captured in the subtitle: to provide IT infrastructure strategies to manage change and enable growth. The approach is divided into five activities: plan an infrastructure; design an adaptive infrastructure; execute a reuse strategy; address people, process, and technology; and balance immediate needs with long-term goals. Unfortunately, this is not exactly the six-bullet solution offered on p. 9, and it does not map nicely onto the seven chapters of the book.

The authors make a convincing case for adaptive infrastructure, then lay a foundation of physical/functional/interface components, discuss identifying and using patterns and creating an infrastructure portfolio, consider how to develop adaptive services, identify a starter kit for services, consider processes and methods, and finish with a discussion of packaging and people.

Each chapter ends with a short summary. The book also has an appendix, glossary, and index. The book is self-contained, with no references, and no suggested reading list, except for other books in the IT Best Practices Series.

To the authors, IT infrastructure refers to hardware (for example, servers, storage, network) and some software (for example, operating systems and databases), but not to application software (for example, Enterprise Resource Planning, Customer Relationship Management, Supply Chain Management).

Management consultants Cap Gemini Ernst & Young have a slightly different take on the topic of adaptive IT. They identify five dimensions of activity: infrastructure, information, applications, processes, and organizational structure. They describe technology infrastructure as a set of standards-based services driven by business needs. They include applications, with utilities, engines, and user interfaces. They also emphasize information assets, which Robertson and Sribar cover in just one page, as part of process management. This is clearly a topic with fuzzy boundaries, and practitioners differ.

The book as a whole would benefit from more editing. The summaries and outlines do not always expand into the actual content, and some cross-references don’t match up. For example, Figure 6.4 uses an acronym found in neither the index nor the glossary and not expanded in the previous three pages.

The appendix lists, in alphabetical order, infrastructure components “discussed in detail in chapters 5 through 8[sic],” and provides vendors and products. Given the pace of technological change, this will likely require a new edition very soon.

The book is pitched more to IT professionals than to executives. For example, which group needs a section entitled “speaking the language of the CFO”? There is no comparable section on speaking the language of the CIO. In part, this is almost inevitable, as IT infrastructure is invisible to most executives.

This book provides some information relevant to section III.A.2 (System Architecture) of the Certified Software Quality Engineer (CSQE) Body of Knowledge. However, the book is not directed to the usual CSQE candidate.

If readers (or their companies) need the information in the book, then they probably cannot accomplish their goals with this book alone. The book can, however, prepare them to work with experts, analysts, and consultants.

Executable UML: A Foundation for Model Driven Architecture

Stephen J. Mellor and Marc J. Balcer. 2002. Boston: Addison-Wesley. 359 pages. ISBN 0-201-74804-5.

(CSQE Body of Knowledge areas: Software Engineering Processes and Software Verification and Validation)

Reviewed by Becky Winant

Executable models have been successfully used in projects for more than a decade. Unfortunately, if readers did not learn directly from those who knew the process and had experience with these models, they had only a few white papers as a resource. This is one of a handful of books that are beginning to reduce that deficit.

A key concept the authors introduce at the start is that executable models are a higher-level language. Their goal is to explain how executable UML differs from UML, the elements that constitute an executable model, and guidelines for building one. They cite at least three incentives for building such a model: early and unambiguous testing of problem subject matter, use in model compiling to produce 100 percent of the code, and reuse of the models with other model compilers or integrated with other requirements.

The authors point out that executable UML uses a subset of the full UML, has rules for execution, and can demonstrate explicitly whether model detail produces a satisfactory result. Much of the book defines and illustrates the UML diagrams used, what exactly an executable component is, and the rules. My italics reflect the authors’ desire for readers to pick up on the importance of precision in executable models, as for any executable language.

The subtitle, “A Foundation for Model-Driven Architecture,” refers to an initiative of the Object Management Group (OMG), a standards group for object modeling. The “foundation” referred to is a platform-independent model (PIM), a model free of any reference that binds it to a particular implementation or technology. One comprehensive example in the book is a book-ordering system, used to illustrate the various pieces of an executable model, how to improve a model using the authors’ guidelines, and compilable possibilities.

Their primary audience is likely to include software developers, engineers, analysts, architects, and technical managers who are:

  • Searching for information about UML modeling
  • Searching for information about advanced UML modeling practices
  • Looking for innovative new modeling processes
  • Jazzed by model simulation
  • Searching for more clues about compiling models

A secondary audience is people interested in or researching software development, requirements testing, or modeling trends.

The book provides introductory information in the first two chapters, and then digs into the elements needed for an executable model and how to create those elements. A healthy six chapters (9 through 14) cover behavioral issues and system dynamics; these have a wealth of good advice. Chapter 15 covers testing. The challenges of large problem sets are covered in chapter 16. Using these models to build a system with a given set of technology is addressed in chapters 17 and 18. The book also has its own Web site, where one will find corrections under “Book Bugs.”

Here are two caveats:

  • Prepare for a new vocabulary: For example, domains, bridges, and archetypes. The glossary in the back should help.
  • The concept of models as a higher-order language may seem like a paradox to some. By shifting their thinking to a higher level of abstraction, readers can make this concept clearer.

I am picky about modeling books and liked this book for several reasons:

  • The content scores high on usability. Steve Mellor and Marc Balcer meet a standard for useful books that I latched onto with Mellor’s first book with Sally Shlaer. That book received what I considered high praise from one of my clients (circa 1989). He said, “I bought the book and started building a model as I read the chapters.” My derived standard became: Does the information in the book allow someone to start doing something?
  • Clean, concise coverage of what an executable model is
  • It does not stop at simple explanations. It offers key advice for avoiding certain execution and modeling pitfalls.

Now, when will they publish the next 300 pages on model compilers so people can better understand how to produce 100 percent of the code from their executable models?

Becky Winant has been building models of one sort or another since 1979—seven short years after entering the software field. She has worked with early Shlaer-Mellor executable models, with code translation (another name for compiling models), and also with UML and executable UML.

The UML Profile for Framework Architectures

Marcus Fontoura, Wolfgang Pree, and Bernhard Rumpe. 2002. Boston: Addison-Wesley. 228 pages. ISBN 0-201-67518-8.

(CSQE Body of Knowledge areas: Specifications and Models; Software Engineering Processes; Life Cycles; Analysis, Design, and Development Methods and Tools)

Reviewed by Gordon W. Skelton

“The aim of the Unified Modeling Language (UML) profile for framework architectures is the definition of a UML subset, enriched with a few UML-compliant extensions, which allows the annotation of such artifacts.” For the authors, the resulting framework is called UML-F, which is used to implement framework technology.

The book is divided into two major sections: 1) a description of framework architectures and framework-related technology with emphasis on the subset of UML used to support framework modeling and annotation, and 2) an examination of UML-F in a practical situation.

A UML profile extends the standard UML with specific elements that provide new notation, and it usually specializes the semantics of some existing elements. A profile may also restrict the use of some UML elements. The idea behind creating a profile is to provide a widely usable “toolsmith” foundation upon which specific project profiles can be built.

Frameworks provide the mechanism by which one can use components to build applications within a given domain. Frameworks consist of both whitebox and blackbox components from which the developer chooses. Whitebox components allow the user to complete the methods of a particular component, whereas blackbox components are implemented as they exist with changes only occurring in the parameters being passed to the component.
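The whitebox/blackbox distinction can be sketched in code. In this hypothetical Python example (mine, not from the book), `ReportGenerator` is a whitebox component that the user completes by subclassing, while `paginate` is a blackbox component used as-is and varied only through the parameters passed to it:

```python
from abc import ABC, abstractmethod

class ReportGenerator(ABC):
    """Whitebox component: the framework calls format_row, a hook
    method the user must complete by subclassing."""

    def run(self, rows):
        # Framework-owned control flow; user code fills in the hook.
        return [self.format_row(row) for row in rows]

    @abstractmethod
    def format_row(self, row):
        ...

class CsvReportGenerator(ReportGenerator):
    """User's completion of the whitebox component."""

    def format_row(self, row):
        return ",".join(str(value) for value in row)

def paginate(items, page_size):
    """Blackbox component: implemented as it exists; behavior changes
    only via the parameters being passed in."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]
```

Calling `CsvReportGenerator().run([(1, 2)])` yields `["1,2"]`, and `paginate([1, 2, 3, 4, 5], 2)` yields `[[1, 2], [3, 4], [5]]`: the first component was opened up and completed, the second only parameterized.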

The authors provide an overview of UML as it applies to framework architectures. The primary UML elements they use are the class, object, and sequence diagrams. Collaboration diagrams are also discussed. If the reader is already comfortable with UML then this chapter on UML can be quickly scanned or skipped entirely.

The authors have developed a profile, UML-F, directed toward object-oriented frameworks. The UML-F profile is intended to describe “the intentions behind the architecture,” not just the architecture and interaction patterns. UML-F uses the UML extension mechanisms of stereotypes and tagged values to build the model for this new profile. New tags are developed to expand the information included in a UML diagram. Basic or elementary tags are presented first, and then UML-F specific tags are examined in relationship to the five framework construction principles: 1) unification; 2) separation; 3) composite; 4) decorator; and 5) chain of responsibility. The last three of these are based upon the work of Gamma et al. presented in their reference, Design Patterns: Elements of Reusable Object-Oriented Software.

To illustrate UML-F, part II of the book presents its usage with the JUnit testing framework. Following this detailed example, the book’s final chapter provides readers with hints and guidelines for the framework development and adaptation process. From my vantage point, this chapter is one of the most interesting. Though more abstract in its content, it provides a different view of the software development life cycle. Emphasis is placed on understanding the elements of the life cycle and how one can create a system design and then evolve that design into a framework.

After reading this book I am certain that it is not for those who have limited experience in object-oriented software development and design patterns. Reasonable knowledge of UML is definitely required. Understanding how the UML standard supports extensibility is crucial to one’s grasp of modeling in UML, and an understanding of frameworks is likewise needed to comprehend how UML can be used to model them.

The book should be helpful to someone interested in developing a set of reusable classes. However, unless the reader has 1) unlimited resources in terms of money, staff, and time, and 2) a desire to develop a reusable library of classes, I can only recommend this book as an interesting excursion into the use of UML.

Gordon Skelton is vice president for information services for Mississippi Valley Title Insurance Company in Jackson, Miss. In addition, Skelton is on the faculty of the University of Mississippi, Jackson Engineering Graduate Program. He is an ASQ Certified Software Quality Engineer and an IEEE Certified Software Development Professional. He is a member of ASQ, IEEE Computer Society, ACM, and AITP. Skelton’s professional areas of interest are software quality assurance, software engineering, process improvement, software testing, and wireless application development.

Facts and Fallacies of Software Engineering

Robert L. Glass. 2003. Boston: Addison-Wesley. 195 pages. ISBN 0-321-11742-5.

(CSQE Body of Knowledge areas: Software Engineering Processes, Software Quality Processes)

Reviewed by Ajit Ghai

The word “fallacies” in Facts and Fallacies of Software Engineering caught my eye. Software literature is replete with facts and best practices, but I wondered about the fallacies. Robert L. Glass has certainly picked a striking title for this collection of 55 facts and 10 fallacies. I may have been disappointed in my search for true fallacies, but I was rewarded by the ease of navigation through the book.

The book is not expensive ($29.99 U.S.), and the facts alone, assembled in one place with a list of sources, are worth the price. That the facts are quite well known amongst most practitioners does not take away from the book’s value. Nor does Glass’s claim that many of the facts are controversial (in my view they are not).

Bob Glass’s Facts and Fallacies of Software Engineering is aimed at anyone who has anything to do with software—developers, their managers, students, and researchers. The introduction tries to impress readers with the claim that “there’s controversy galore in the book” and then notes that Glass has 45 years as a practitioner and is the author of more than 25 books—more experience practicing software engineering than many of his intended readers have lived.

Part 1 of the book consists of four chapters, each organized into sections. In the first chapter, About Management, the sections cover estimation, reuse, and complexity. Chapter 2 includes groups called Requirements, Design, Coding, and Reviews and Inspections. Part 2 of the book contains the fallacies, organized in a fashion similar to the first part. Each fact (and fallacy) is presented with a discussion, a controversy section, and a list of sources.

The first fact in the book is “The most important factor in software work is…the quality of the programmers themselves.” This is indeed an important fact and factor in software engineering, one that, according to Glass, is ignored: process, he claims, is given more emphasis than the quality of the people developing the software. His controversy section is, in my view, misnamed, as are most of his other controversies on the facts and fallacies presented. I could hardly find any controversy in the book worth its name. The word “controversy” in the section title is used mostly to complain that not enough attention is paid to the fact at hand; the section should more appropriately be called “why so little is done about this” or something akin to that.

The second fact in the book is closely related to the first: not only are people important, but some software people are up to 28 times more productive than their counterparts. Again the controversy section is misnamed; it even states, “I have never heard anyone doubt the truth of the matter.” Where, then, I ask, is the controversy?

Each fact comes with an extensive list of sources and references, and many software practitioners and authors are listed here together with their key works. The list includes Boehm, Brooks, McConnell, Weinberg, DeMarco, Lister, and, of course, Glass himself.

Most of the facts presented are well known. The fallacies presented are few, and, like the controversies, many of them seem forced. For example, “Fallacy 1: You can’t manage what you can’t measure.” One would think that after listing this as a fallacy, Glass would offer evidence of why it is false; however, all Glass offers is this mild complaint to close the section: “...measurement is vitally important to software management, and the fallacy lies in the somewhat cutesy saying we use to capture that.” Or “Fallacy 4: Tools and Techniques: one size fits all.” I am not sure who has been propagating this fallacy and why it deserves a place in the book.

The list of facts and fallacies is a useful reminder to all software practitioners, and the many books referenced are a source of excellent further reading. The often-repeated reference to “controversy” and “fallacy” where there is hardly any does not take away from the value of this quick and easy-to-read book.

Ajit Ghai has worked in the technology industry for more than 23 years. He is currently director of Delivery Services at AMITA Corporation, an Ottawa technology company. He is a Project Management Professional (PMP) and an ASQ Certified Quality Manager.

Component-Based Development: Principles and Planning for Business Systems

Katherine Whitehead. 2002. Boston: Addison-Wesley. 200 pages. ISBN 0-201-67528-5.

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Carolyn Rodda Lincoln

Component-Based Development is an introduction to the principles and planning of components for business systems. It surveys the issues surrounding the use of components both in new systems and in those that are in maintenance mode. The author points out the “sticking points” in using components and then provides suggestions on how to address them.

The book is divided into four parts. Part 1 is an introduction to what components are and why one would want to use them. Part 2 covers the process of planning for the use of components, including both technical and organizational issues. Part 3 explains how to build and assemble components into a working system. Part 4 is a short case study of a fictional insurance company that is evolving from mainframe to Internet systems. There is also a glossary of component-related terms.

The author assumes that the reader is knowledgeable about systems and technology, although not familiar with components. Many technical terms and acronyms are used without explanation or definition, so the book would be inappropriate for a beginner. The author occasionally uses examples to illustrate concepts, but it would have been valuable to have more of them. The stated purpose of the book is at a high level, that is, systems architecture, but the author does refer to specific implementations such as Enterprise JavaBeans (EJB) and Microsoft’s Component Object Model (COM). The vendors and standards are changing so quickly that those specifics will soon be out of date; the principles, however, will still be applicable.

Component-Based Development is a valuable resource for a software development manager or system architect who needs to understand components. The author provides a balanced viewpoint of the pros and cons of both components and implementation issues surrounding components. The section on how to start using component-based development was particularly helpful. Once the decision is made to use components, the practitioners would have to look elsewhere for training on EJB or COM, but they would have a good background on the questions to ask and the issues to address. The book is highly recommended for anyone in the computer field who is familiar with object-oriented concepts and wants to move on to component-based development.

Carolyn Rodda Lincoln is an ASQ certified quality manager and a member of the DC Software Process Improvement Network. She is currently employed as a quality assurance analyst for Titan Corporation at the Environmental Protection Agency in Washington, DC. She holds bachelor’s and master’s degrees in math and was previously a programmer and project manager.


Building a Project-Driven Enterprise: How to Slash Waste and Boost Profits Through Lean Project Management

Ronald Mascitelli. 2002. Northridge, Calif.: Technology Perspectives. 368 pages. ISBN 0-9662697-1-3.

(CSQE Body of Knowledge areas: Program and Project Management)

Reviewed by Dave Zubrow

This is a book that is well worth reading for its practical approach to streamlining many project management practices. Readers from the software world will enjoy many of the examples, as the author draws upon his background in the development of software-intensive systems, even though the book is not strictly about software project management. The book is organized into four parts: principles that drive project efficiency, methods of lean project management, the special case of new product development, and building a project-driven enterprise.

The chapters in the first part describe a set of principles for achieving a lean approach to running projects. The ideas here are presented in a straightforward manner, with examples that help readers understand why the principles are so important. On the second page of the book, the author drives home its focus by presenting a time log of a worker’s day. The important point of the time log is the assignment of value to various activities: out of an 8.5-hour workday, only 1.5 hours actually generated value. While this may seem extreme, data from TSP teams suggest that many engineers work on engineering tasks less than 50 percent of the time in any given work week.

A fundamental underpinning of the book articulated in the first part is a constant focus on customer value. The focus has to do with learning what the customer values, developing projects that will deliver value, and executing projects in a manner that maximizes value production and minimizes nonvalue producing activities.

The second part of the book describes 12 tips for making one’s projects more efficient, or “lean” in the author’s parlance. These are not rocket science, nor are they likely to be unfamiliar to readers. What might be new, however, is the potential they hold for eliminating waste, or conversely, the magnitude of waste one might be incurring by not using the methods. I think it will be easy for readers to look around their organizations and projects and see opportunities to apply these methods.

Each method is discussed according to a common template (one of the methods is using standard templates) that has the following items:

  • A brief description of the problem being solved
  • Lean countermeasures that help slash waste
  • Step-by-step implementation guidance
  • Identification of what and whom one will need
  • How to measure success

As an example, method no. 6, “Staged-Freeze Specifications,” addresses the trade-off between speed (cycle time) and risk. Although not directly stated, Mascitelli acknowledges the motivation for a waterfall life cycle as the apparent security of having a firm and complete foundation before proceeding with further development. In contrast, he suggests breaking the large documents and their corresponding phases into smaller portions that can be produced in less time and that allow some concurrent activity to occur. Indeed, the method description suggests a spiral development approach with its focus on risk reduction. If one implements this method, the proposed measures of success are the number of months saved in project execution, the reduction in high-impact requirements changes, and customer satisfaction.

The last two parts of the book illustrate the methods and thinking discussed in the first two. Regarding the special case of new product development, I particularly enjoyed the example illustrating the power of a product-line approach. In this example, an instructive lesson concerns delaying customization until as late as possible in the production process, an approach the author calls “mass customization.”

In summary, this book is written in a straightforward manner. Indeed, the author mentions using his own methods to produce the book; it was, after all, a project. The book is filled with illustrations and examples that should resonate with readers; they make clear both the problem addressed and the proposed solution (that is, the countermeasure). I recommend this book particularly to those who find themselves in organizations where the reasons for certain practices, reports, and activities are no longer known and no one has questioned them recently. This book provides the thinking for asking why, and it offers suggestions about what to do. Those running projects with little discipline, on the other hand, will find ideas for actions they can take today that will have an immediate payoff.

TSP is a service mark of Carnegie Mellon University.

Dave Zubrow is team leader for the Software Engineering Measurement and Analysis (SEMA) group within the Software Engineering Institute (SEI). His areas of expertise include empirical research methods, data analysis, and data management. Zubrow earned his doctorate in social and decision sciences and has a master’s degree in public policy and management from Carnegie Mellon University. He is an authorized SEI instructor. He is a member of SQP’s Editorial Board and chairs the committee on metrics, measurement, and analytical methods for ASQ’s Software Division. Zubrow also is an ASQ certified software quality engineer.

Understanding Open Source Software Development

Joseph Feller and Brian Fitzgerald. 2002. Great Britain: Addison-Wesley. 220 pages. ISBN 0-201-73496-6.

(CSQE Body of Knowledge areas: Software Engineering Processes)

Reviewed by Scott Duncan

The authors state that they wrote this book because open source software (OSS) has proliferated into mainstream commercial organizations and has been adopted by governments “as an alternate path for software development.” OSS is no longer an academic exercise, and the processes used in its development have found their way into “large traditional software houses like Apple, IBM, Netscape, SGI, and Sun Microsystems.” The authors make a point of noting that “software users recognize the quality and stability of the products, the economic advantage of shared cost and shared risk in software ownership, and the technological advantages of building open and modifiable platforms.” So it is not merely that OSS products are available “at little or no cost.” (Even given this latter point, the authors remind readers that “the major portion of the total cost of software development is incurred in the maintenance phase,” so “the sticker price of software is somewhat irrelevant.”)

The book begins with a definition of what makes software qualify as OSS. The Open Source Definition (OSD), “maintained by the Open Source Initiative (OSI),” while “not a license in itself…is a specification against which a product’s ‘terms of distribution’ can be measured.” Software that complies with this specification can claim to be OSS. The various elements of the specification are presented and briefly discussed, with examples of “OSD-compliant licenses” and OSS products. This is followed by a history of OSS, which, according to the authors, can be viewed as starting, in some ways, in the 1950s with agreements between the U.S. military and the aviation industry to share code, despite competition among the industry participants. This, in turn, is followed by a description of the most current companies and products in the OSS field.

Other early chapters present a “framework for analyzing OSS” based on characteristics described as “Qualification (what?),” “Transformation (how?),” “Stakeholders (who?),” “Environment (where and when?),” and “World-view (why?)” of both OSS products and the OSS process. Each of these is then addressed by a separate chapter; together these chapters occupy the majority of the book’s pages:

  • What defines a software system as open source?
  • How is the open source process organized and managed?
  • Who are the developers and organizations involved?
  • Where and when does open source development take place?
  • What are the motivations behind open source development? (that is, the “why”)

The book ends with a lengthy chapter called “Critical questions and future research,” in which the authors address “the many paradoxes and tensions which continue to surround this phenomenon, and recommend directions for future research.” They discuss such issues as:

  • How successful is OSS?
  • Can OSS processes be sustained and transferred to other software development situations?
  • Despite high-quality claims, can this be empirically validated?
  • Can the “bazaar” metaphor really be used to characterize how OSS development works, or is there more formality to it than seems apparent?
  • Is the OSS approach “collectivist or individual focused,” and what are the implications of each? (And regardless of which one, what is the impact of the “egoist,” rather than “egoless,” development style that seems to exist in the OSS community?)
  • Is OSS the “silver bullet” sought in software engineering since it rapidly produces high-quality software at little cost?
  • What are the implications of how OSS work is done for the overall way work can be done, not just in software but also in other “knowledge” areas? (This is my phrasing, not the authors’.)

I found this book to be a good introduction to the subject of OSS. It is not a book that goes into detail about any of the software or the actual implementation of development processes; indeed, it is not a “technical” book at all. What it does do is give readers a good overview of the OSS philosophy and community. For those who are not familiar with OSS development, products, and organizations, this book would be worthwhile reading.

Scott Duncan has 30 years of experience in all facets of internal and external product software development with commercial and government organizations. For the last nine years he has been an internal/external consultant helping software organizations achieve international standard registration and various national software quality capability assessment goals. He is a member of the IEEE Computer Society and ACM. He is the current standards chair for ASQ’s Software Division, and the division’s representative to the U.S. Technical Advisory Group for ISO/IEC JTC1/SC7 and to the Executive Committee of the IEEE SW Engineering Standards Committee.


Managing Open Source Projects

Jan Sandred. 2001. New York: Wiley Computer Publishing. 180 pages. ISBN 0-471-40396-2.

(CSQE Body of Knowledge areas: Project Management)

Reviewed by Eva Freund

Your mission is to develop software with some very specific capabilities and you must do it within a defined period of time. You have no assigned staff but will be able to use any participants whose interest you can provoke. You will be able to institute some rules, but others are free to ignore them, modify them, or substitute their own. Should you choose to accept this assignment you may become famous or you may not. Should you fail, your participation will be disavowed.

That, in essence, is the theme of this book. Unlike the characters of “Mission Impossible,” left to their own devices for completing the assignment, the reader is not left alone. The reader is guided, by way of history and examples, to an understanding of first the construct of “open source” and then the framework that surrounds “open source.” Only after the reader is well acquainted with open source does Sandred introduce the concept of managing this type of project. His focus is that open source is a valid business concept with unique characteristics.

When thinking open source, the reader should think of Netscape’s Navigator and Communicator, Linux, and Linux applications. Linus Torvalds put some basic functionality out on the Web. Developers saw the source code: some found defects and fixed them, others decided that some code could be better written and rewrote those portions, and still others added new code and new functionality. There was no central organization to arbitrate between Linus Torvalds and the contributors. There were no teams with assigned members, and there were no assigned tasks. The contributors made the contributions they wanted to make, when they wanted to make them.

Out of these and other experiences, the author identifies three basic requirements for the success of any open source project.

  • It is not possible to code from the ground up in open source style. It is possible to test, debug, and improve code.
  • There must be a virtual network of collaborators/contributors who can cooperate; the Internet provides only the technical platform.
  • Open source development builds on volunteers, and the leader must earn the respect of the collaborators/contributors.

Obviously, the traditional hierarchical leadership style prevalent in most corporations will not foster an open source project. The open source project requires a collaborative management style and an organization with weak internal and external boundaries. Essentially, the open source project is distributed. Implementations and decisions are primarily the responsibility of each collaborator/contributor. This type of project, according to Sandred, can be managed only by trust and respect.

And if trust and respect are the methodologies, then groupware, videoconferencing, and other collaborative technologies provide the means for these virtual teams to bridge time, space, and organizational barriers. Sandred believes that the conventional way in which people work is coming unglued.

In his book Administrative Behavior, Herbert Simon stated that in the post-industrial society the central problem is how to organize to make decisions. Thinking outside the box and innovation have driven modern IT companies. The children of the Nintendo generation are comfortable with both new media and nonhierarchical ways of working, which are critical to success in an open source environment. They thrive on collaboration, they are driven to innovate, and they have a mindset of immediacy. Most important, their first point of reference is the Internet.

Thus, managing distributed projects and the people attracted to them requires more advanced skills and leadership competencies than most American managers possess. These projects require an incredible focus on people, communication, project leadership, and even social leadership. Through case studies the author explores:

  • Building, motivating, and managing virtual teams
  • Structuring tasks and meeting deadlines
  • Establishing trust within a team
  • Collecting and communicating information
  • Effectively using the Web to integrate and manage the open source model
  • Maintaining project security

As the author states, “If managed the right way, open source can deliver industrial-strength systems that meet users’ needs and provide solutions that are fully extendible and maintainable over long periods of time.”

Because open source software lends itself to collaborative, community-based development and has low start-up costs, open source development offers a unique way for developing countries to build high-value industries, leapfrogging older technologies and modes of production. The new type of capitalism that results creates unique productivity opportunities. For those who want to understand the possibilities and opportunities that open source and the Internet can create, or for those who want to make the right software decisions for their company, this book is a MUST READ.
