Software Quality Professional Resource Reviews - September 2003 - ASQ




Roundtable on Technical Leadership: A SHAPE Forum Dialogue


Component Software: Beyond Object-Oriented Programming

Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers

Developing Software with UML


Communicating Project Management: The Integrated Vocabulary of Project Management and Systems Engineering

Managing Software Acquisition—Open Systems and COTS Products

Dr. Peeling’s Principles of Management


Automated Web Testing Toolkit: Expert Methods for Testing and Managing Web Applications

Testing Extreme Programming

Testing Embedded Software


Roundtable on Technical Leadership: A SHAPE Forum Dialogue

Gerald M. Weinberg, Marie Benesh, and James Bullock, editors. 2002. New York: Dorset House Publishing. 144 pages. ISBN 0-932633-51-X.

(CSQE Body of Knowledge area: General Knowledge, Conduct, and Ethics)

Reviewed by John D. Richards

This is the second in a series of Roundtable books. Each of the book’s discussions or chapters begins with a discussion thread—a question, a set of questions, or a quote that prompts the dialogue. Each chapter ends with a lessons text box that summarizes the high points of the discussion. It is a unique presentation of the material, but one that conveys multiple points of view very effectively. In addition, the book contains short biographies of the discussants, a preface, a preface for the Roundtable series, and a bibliography.

As an example, the thread for the first discussion, “tricks that ignore those who come after,” consists of the following questions:

  1. What’s your favorite clever programmer trick?
  2. Why is it clever?
  3. What can be done to encourage it?
  4. What must be done to keep it from turning into a stupid programmer trick?

The discussion threads that developed include: hard-coding information that will change in the future; failing to clean up temporary code; making comments impossible to ignore; overloading the value of an identifier; overloading the mind of the writer or reader; creating cryptic or cute variable names; using one argument to change the meaning of another; naming with too much or too little English; casting types: Smart or stupid?; building monolithic code; and disregarding the difficulty of maintenance.
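The flavor of the first thread, hard-coding a value that will change behind an uninformative name, can be sketched in a few lines of Python. The function names and the wage figure are this reviewer's illustration, not examples taken from the book:

```python
# A "clever" trick and its maintainable counterpart. Illustrative only;
# the names and the figure are invented, not drawn from the book.

# Clever-turned-stupid: a hard-coded magic number. What is 7.25? When
# the rate changes, every buried copy of it must be hunted down.
def pay_v1(hours):
    return hours * 7.25

# Maintainable: name the value and keep it in one place.
HOURLY_RATE_USD = 7.25  # will change; update here only

def pay_v2(hours):
    return hours * HOURLY_RATE_USD
```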

The other chapter/discussion topics include: tricks that destroy portability; stupid design tricks; stupid design document tricks; tricks arising from social adjustment; experts and gurus as leaders; the leader as learner; the expert as a teacher; the courage to teach in any direction; and the courage to be yourself.

This is a valuable book for software developers, managers, and instructors. It addresses the body of knowledge topics of Leadership Tools and Skills in detail. It provides keen insights often not addressed in conventional texts, offers multiple perspectives, and is well structured for classroom or work group discussions. I strongly recommend this book.

John D. Richards is an account and project manager for SRA International in San Antonio, Texas. He has more than 30 years’ experience as a manager and leader. He is a certified quality engineer and auditor and a Senior member of ASQ. He has a doctorate and an advanced master’s degree in education from the University of Southern California, and master’s and bachelor’s degrees in psychology. He serves as an adjunct professor at the University of the Incarnate Word, teaching courses in statistics, quantitative analysis, management, and psychology.


Component Software: Beyond Object-Oriented Programming

Clemens Szyperski with Dominik Gruntz and Stephen Murer. 2002. Reading, Mass.: Addison-Wesley. 600 pages. ISBN 0-201-74572-0.

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Scott Duncan

It is somewhat telling that in the introduction to this book, the author states it is “not about reuse in general, but about the use of software components,” which are defined as “executable units of independent production, acquisition, and deployment that can be composed into a functioning system” all of which require that a component “adheres to a particular component model and targets a particular component platform.” Thus, it is not specifically about reuse from “arbitrary descriptions capturing the results of a design effort.” The executable nature of the definition “rules out many software abstractions, such as type declarations, C macros, C++ templates, or Smalltalk blocks.” And the reason for pursuing components, the author says, is that “all other engineering disciplines introduced components as they became mature,” noting the introduction some 30 years ago of the idea of “software ICs,” which “never truly came to fruition.”

Another telling comment in Chapter 2 is that “imperfect technology in a working market is sustainable; perfect technology without any market will vanish.” This is key because the concept of components, the author says, depends on cooperation of component vendors and component clients who reach some “critical mass” such that there is sufficient interest to produce components because there is sufficient interest in buying and using them. This situation is no different from any technology, as suggested by the general market comment noted previously.

The subject of (interconnection) standards is discussed in Chapter 3, and it is acknowledged that a “standard” is often “just the approach taken by the innovator, the first successful vendor in a new market segment.” Until a “far superior” approach comes along, this initial one remains the standard. But the author states that such “wiring and plumbing” standards are not enough because, “if everything works except for the actual wiring, then people usually find a way around this problem,” which one can call “an adaptor.” (It’s common in electrical devices where plugs, current, or voltages don’t match.)

The next chapter tries to distinguish between “components” and “objects,” which are often used interchangeably and, indeed, mix with modules and classes, which have also been “embraced by the term ‘object.’” The contrast between components and objects is highlighted by stating that a component:

  • “is a unit of independent deployment”
  • “is a unit of third-party composition”
  • “has no (externally) observable state”

while an object:

  • “is a unit of instantiation, it has a unique identity”
  • “may have state and this can be externally observable”
  • “encapsulates its state and behavior.”

Indeed, the author notes that a component, though it may act through objects, may have nothing to do with an object-oriented (OO) approach to implementation and can be entirely based on a functional programming approach, or using assembly language, and so on. Understanding the distinctions the author draws in Section 4.1 seems to be critical to the rest of the book, though Chapter 11, called “What Others Say,” provides many other perspectives on what “software components are or should be.”
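The contrast can be made concrete with a small sketch, here in Python. The SpellChecker example and all of its names are assumptions of this review, not code from the book:

```python
# A rough sketch of the component/object contrast. Names are
# illustrative assumptions, not taken from Szyperski's book.

class SpellChecker:
    """An *object*: a unit of instantiation with unique identity and
    externally observable state."""
    def __init__(self):
        self.words_checked = 0          # observable state

    def check(self, word, dictionary):
        self.words_checked += 1
        return word in dictionary

# A *component* would be the deployable unit that ships this class:
# it exposes only a factory interface, holds no observable state of
# its own, and can be composed by third parties without source access.
def make_checker():
    return SpellChecker()

a, b = make_checker(), make_checker()
assert a is not b                       # each object has unique identity
```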

Part Three becomes very technology specific since it discusses specific component models and platforms including “wiring standards” (XML), “the OMG way” (CORBA), “the SUN way” (Java, EJB, and so on), and “the Microsoft way” (COM(+), OLE/ActiveX, .NET CLR). Near the end of Chapter 17 there is a summary of the differences between the approaches. Part Four continues technology-specific discussion of component architecture and development process (methodologies and languages), but also gets into component acquisition and distribution issues.

Finally, to close out the book, a three-paragraph epilogue offers another telling focus on how “markets do have a tendency to function even when left unattended” and how they “do not automatically thrive on technical excellence.” (Though, where “everything else is equal,” such excellence will be “the deciding competitive edge.”) Readers who want a book with decided technical content, but one that attempts “a unique merger of technology and market aspects driving component software,” should find this book of interest.

Scott Duncan has 30 years of experience in all facets of internal and external product software development with commercial and government organizations. For the last nine years he has been an internal/external consultant helping software organizations achieve international standard registration and various national software quality capability assessment goals. He is a member of the IEEE Computer Society and the ACM. He is the current standards chair for ASQ’s Software Division, and the division’s representative to the U.S. Technical Advisory Group for ISO/IEC JTC1/SC7 and to the Executive Committee of the IEEE Software Engineering Standards Committee.

Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers

Leslie Lamport. 2003. Reading, Mass.: Addison-Wesley. 364 pages. ISBN 0-321-14306-X.

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Carolyn Rodda Lincoln

Specifying Systems is a description of a language for specifying hardware and software systems precisely. The language is formal mathematics, that is, propositional logic, set theory, and predicate logic. It would only be used and understood by people who design computer hardware or software such as operating systems. The examples in the book are an asynchronous interface, a first-in, first-out (FIFO) buffer, and a caching memory.

“TLA” stands for temporal logic of actions, and TLA+ is a language built on mathematical formal logic. According to the author, “Mathematics is nature’s way of letting you know how sloppy your writing is.” The author designed the language to provide a clear, concise way of communicating design. One advantage of using a specification language is that tools can be used to find errors in the logic (like a compiler does for programming languages). One of the tools described in the book is a model checker.

The book is organized so that most readers can understand enough basics to apply the language by grasping the first 85 pages (Part One). Part Two contains more advanced topics for those who are interested. Parts Three and Four are references. Part Three is a description of the accompanying tools (available on the Web), and Part Four is the language handbook.

Specifying Systems is worthwhile for engineers or researchers who understand formal logic and design hardware or system software. Unless readers have a background in mathematics, however, they will quickly be put off by the concepts and notation. If sentences like “Formula F is true for some x in S if and only if F is not false for all x in S” are totally baffling, this is not the book for you. On the other hand, for those who are interested in finding a more precise way to design engineering systems, this book is a good introduction to the world of specifications that are expressed in terms of formal logic.
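For readers weighing that test sentence, it is the standard duality between the existential and universal quantifiers; in the conventional notation that TLA+ builds on, it reads:

```latex
(\exists x \in S : F) \;\equiv\; \lnot\,(\forall x \in S : \lnot F)
```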

Carolyn Rodda Lincoln is an ASQ certified quality manager and a member of the DC Software Process Improvement Network. She is currently employed as a quality assurance analyst for Titan Corporation at the Environmental Protection Agency in Washington, D.C. She holds bachelor’s and master’s degrees in math and was previously a programmer and project manager.

Developing Software with UML

Bernd Oestereich. 2002. Reading, Mass.: Addison-Wesley. 299 pages. ISBN 0-201-39826-5.

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Frank Ginac

In Developing Software with UML, Bernd Oestereich successfully weaves a tutorial about UML into the fabric of a real-world object-oriented analysis and design problem. While many books restate the UML specification, this book provides a practical how-to that most software developers will certainly appreciate.

Oestereich states that the target audience for his book is quite broad and includes anyone with an interest in modern technologies, technology decision makers, object-oriented neophytes, and seasoned object-oriented practitioners. Although it may have been his goal to reach such a broad audience, readers who lack 5 to 10 years of experience, a solid grounding in object-oriented concepts, and familiarity with interactive Web application design patterns may struggle with this book.

The book is organized into five chapters and includes a chapter devoted to introducing many of the key concepts of object-oriented analysis and design. Although this chapter is well written, it is too rudimentary to prepare the uninitiated for the remaining chapters of the book. It is not until Chapter 3 that one reaches the book’s true substance. That chapter begins the process of weaving UML into an object-oriented analysis and design problem. The author does a nice job of laying out the analysis process, beginning with the identification of key stakeholders, followed by the development of the business use cases, then system use cases, and, finally, business classes and interfaces. Although the book is well written, the transitions from one topic to the next are not always clear, perhaps a side effect of the translation from German to English. It could also benefit from a clarification of who within a typical IT organization is responsible for creating the various process deliverables. For example, business use cases are typically developed by business domain experts, while system use cases are developed by the system architect.

Chapter 4 transitions to design. Again, the author does a nice job of laying out a process for design. He also covers the often-missed topics of developing process-oriented system tests as well as class tests. Finally, Chapter 5 is a UML reference and would be better labeled as an appendix.

I highly recommend this book for the system architect who is looking for more than the specification of UML and would like to learn UML within the context of an object-oriented analysis and design problem. Other prospective readers would benefit from developing a firm grounding in object-oriented analysis and design first and perhaps interactive Web applications design patterns before making the investment.

Frank Ginac is the president and COO of The Ginac Group. He provides project management, and product and technology consulting services to start-up and Fortune 500 companies; provides expert opinion and guidance on product and technology strategies; and advises firms on raising venture capital. Over his 17-year career, Ginac has held several executive management and senior engineering positions with many organizations, including Hewlett-Packard, BMC Software, Digital Equipment Corporation, The Open Software Foundation, Convex Computer Corporation, and Data General. His business experience spans a broad range of activities in the fields of corporate development, partnerships, venture capital, mergers and acquisitions, and emerging companies and technologies. Ginac holds a bachelor’s degree in computer science from Fitchburg State College in Massachusetts and is the author of Creating High Performance Software Development Teams and Customer-Oriented Software Quality Assurance, both published by Prentice Hall.


Communicating Project Management: The Integrated Vocabulary of Project Management and Systems Engineering

Hal Mooz, Kevin Forsberg, and Howard Cotterman. 2003. New York: John Wiley & Sons, Inc. 352 pages. ISBN 0-471-26924-7.

(CSQE Body of Knowledge areas: General Knowledge, Conduct, and Ethics; Program and Project Management)

Reviewed by Poliana Yee-Pagulayan

This book aims to provide an integrated dictionary for project management and systems engineering. While it achieves that basic purpose, readers should not expect innovative presentations on systems engineering, project management, or communication tools. It is best used as a companion to a book by the same authors, Visualizing Project Management.

The first four sections of the book, about 20 percent of the volume, provide general overviews of project management principles and argue the need for a common and well-understood language between project management and systems engineering. There are brief descriptions of project phases and control gates, and of various international project management and systems engineering organizations. The information presented is high level and nonspecific. Information on the international organizations, for example, is no more insightful than the usual wording found on the respective organizations’ Web sites. Incidentally, the authors chose not to include Web-page or contact information for the organizations listed, which this reviewer considers an unfortunate oversight.

The visual representations of project management models contain many references to this book’s companion, so that without the other text at hand one may be lost within the figures, trying to determine the differences between multiple Vee+ models, for example.

Although more than 70 percent of the book is dedicated to its proclaimed purpose, the definitions and glossary, this reviewer found the descriptions surprisingly ordinary. Many of the definitions are collections of definitions taken directly from the international organizations. Without an adequate level of knowledge of systems engineering or project management, the one- or two-line entries commonly found serve little use beyond defining a need to find further information outside the book.

There is refreshing coverage of control gates. Common project control gates, such as test readiness reviews and system design reviews, are generically described in the initial sections, but surprisingly, no references to them are provided within the definitions and glossary sections.

Overall, this book provides an adequate synopsis of systems engineering and project management terms, a resource that is admittedly lacking elsewhere. However, it should only be used as a reference, to be added to one’s library alongside other works with more in-depth discussions of systems engineering concepts or project management principles. Don’t expect this book to teach systems engineering or project management.

Poliana Yee-Pagulayan is a requirements and system engineer at Instrumentation Laboratory in Massachusetts. She was previously an INFOSEC system engineer with General Dynamics Canada in Alberta, Canada.

Managing Software Acquisition—Open Systems and COTS Products

B. Craig Meyers and Patricia Oberndorf. 2001. Reading, Mass.: Addison-Wesley.
360 pages. ISBN 0-201-70454-4.

(CSQE Body of Knowledge area: Program and Project Management)

Reviewed by Joel Glazer

What do the terms software, open systems, and commercial-off-the-shelf (COTS) have in common? The authors explain this in 360 well-written and easily understood pages, with sidebar notes of keywords. The short answer is that for those involved with acquiring software systems in the 21st century to succeed, an “open” architectural approach to computer systems must be in place, supported by a set of common interfaces based on agreed-upon standards. As we all know, software-based systems are everywhere, growing, and unstoppable. The capabilities of such systems have recently been demonstrated in weapons during the successful military engagements in Iraq and Afghanistan, as well as in medical equipment and operations, transportation, finance, entertainment, home appliances, and shopping, to name a few areas.

Are COTS products and open systems the proverbial “silver bullets”? Will they solve all the ills of software that plague the information technology (IT) and embedded computer industry? Of course not! But the authors present a good case that this approach will go a long way toward relieving the crisis.

The authors present both sides of the conflicting claims made on behalf of and against open systems and COTS, the pros or promises, and the cons or pitfalls. “To maximize the benefits and minimize the drawbacks,” the potential manager must understand and be informed about open systems and COTS. “The intended audience of this book is project managers and their staff who are involved in designing, developing, procuring, maintaining, funding, or evaluating computer systems in both private and public sectors.” Proponents of each side have interests and motives tied to economics and marketplace forces behind their positions. Believers in open systems advocate “standards,” whereas opponents fear this will lead to loss of control and too much conformity.

The authors’ approach, befitting the teachers they appear to be, is to build the case for COTS and open systems as potentially the solution to the ills of the IT and software industry. They do this first by establishing and providing readers with a foundation in the terminology they use, and then by describing the new business environment resulting from a paradigm shift in the way future systems will operate and will have to be acquired. The essence of the paradigm shift is that producers change into consumers. As a producer, one creates the item in a “white box” approach—one specifies the “how” and the “what.” As a consumer, one purchases an item in a “black box” approach—one specifies only the “what.” The consumer must adopt the standards and interfaces available in the market.

The paradigm shift manifests itself in the change from a “producer” to a “consumer” approach to system development. Prior to the 1990s, acquisition was based on a producer/developer model of software systems, where producers created the product mostly in house. Now with COTS, producers are able to assemble the products from ready-made consumer parts available “off the shelf” to all consumers, and, therefore, must be able to use these parts in a “plug-and-play” manner. To be successful, the interfaces are critical, conformance to standards is important, and one must place trust in the COTS maker to deliver what was promised.
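The plug-and-play point can be illustrated with a short, hypothetical Python sketch: the consumer codes to an agreed interface (the standard), and any conforming COTS part can be swapped in without touching the consumer's code. All names here are invented for illustration:

```python
# Sketch of "plug-and-play" via an agreed interface. Names are
# illustrative assumptions, not from the book.
from typing import Protocol

class Logger(Protocol):              # the agreed interface (the "standard")
    def log(self, msg: str) -> str: ...

class VendorALogger:                 # one off-the-shelf offering
    def log(self, msg: str) -> str:
        return f"[A] {msg}"

class VendorBLogger:                 # a competing, conforming offering
    def log(self, msg: str) -> str:
        return f"[B] {msg}"

def run_system(logger: Logger) -> str:
    # The consumer specifies only the "what"; vendor internals stay a black box.
    return logger.log("system started")
```

Either vendor's part satisfies the interface, so the consumer can swap suppliers as the market changes.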

One of the longest chapters in the book is about standards (Chapter 6). Having personal experience dealing with the establishment of standards at the corporate, national, and international levels, the authors provide an excellent explanation of how standards evolve, how usable standards are developed, and the economic benefits derived from good standards.

COTS software units are becoming analogous to electronic devices that exist in the physical world — devices that are described and cataloged in sales brochures and thick “yellow book” like listings on the Web, from which the design engineer selects the items he or she needs to design the subsystem or system. The devices are usually pre-tested, bar-coded, and ready to be used off the shelf. With COTS, assuming they are properly described as to their capability, cataloged, tested, bar-coded, and accessible (usually over the Internet), the software designer can develop the application and incorporate these “packages” at the appropriate locations in the application stream to create the desired results. With the Internet, COTS could be accessed instantaneously worldwide regardless of the physical location of the vendors.

Using COTS will create new challenges in human resources, job security, and staff stress, and will change the way systems are planned, the way project resources are allocated, and the way systems are tested, evaluated, and maintained. Chasing the latest technology upgrade for insertion will drive cost up. The paradigm shift also involves new approaches to:

  • Equipment repair, which is now usually a matter of replacement, depending on the cost
  • Licensing issues
  • Government regulations
  • People issues (job security, staff stress levels)
  • Test and evaluation
  • Integration issues
  • Upgrades, which are a matter of course as systems change from within and without (new requirements, new technology, new operating environments)

The book concludes with a prophetic chapter called “Looking Ahead,” envisioning a world that continues to shrink and grow more connected, where relationships are established not only between humans but also between human and machine and between machine and machine in a borderless, seamless manner, thanks to interoperability and to systems that conform to standards and protocols.

This book belongs on the shelf of every practicing or potential acquisition manager and should be used frequently as a reference source.

Joel Glazer, current ASQ Region 5 Software Division Councilor, has more than 30 years of experience in the aerospace engineering, software engineering, and software quality fields. He has dual master’s degrees from The Johns Hopkins University in computer sciences and in management sciences. He is a member of IEEE and a Senior member of ASQ. He is an ASQ Certified Software Quality Engineer, Auditor, Reliability Engineer, and Quality Manager. Glazer is a Fellow Engineer in the Software Quality Engineering Section at Northrop Grumman Electronic Systems in Baltimore, Md.

Dr. Peeling’s Principles of Management

Nic Peeling. 2003. New York: Dorset House. 288 pages. ISBN 0-932633-54-4.

(CSQE Body of Knowledge area: Program and Project Management)

Reviewed by Scott Duncan

Nic Peeling says he wrote this book because, as a new manager searching for material on management, he could not find “books that encapsulated best practice for someone facing management responsibilities for the first time,” and once he did find such a book, he was “well into [his] management career and had learned enough to know that [he] did not agree with much of what it contained.” So he wrote this book “in a colorful and memorable style.”

Regarding the style of the book, Peeling says it might tend to lead people to “take [him] too literally” as well as to believe he has “made what to do seem more straightforward than it really is.” In this regard, he indicates that he has provided “a framework” to “help you move from theory to practice,” not a “recipe that can be followed precisely,” and expects that readers will have to adapt things for their own use.

And, in the context of providing a “framework,” perhaps the best material is at the end of the introduction and the first page or so of Chapter 1 where Peeling provides his “Golden Rule of Management”:

You will be judged by your actions, not by your words, and your actions shall set the example for your team to follow.

He also lists the following, which are “little different from the principles of being a good parent or a good teacher”:

  • Set high expectations of people’s performance and behavior.
  • Set clear boundaries for acceptable behavior.
  • Impose discipline and, where necessary, punishment when behavior is unacceptable.
  • Set clear boundaries of acceptable performance and then work with underperformers to improve performance.
  • Provide clear, immediate feedback, praising good performance and constructively criticizing poor performance, to show what is required.
  • Set an example personally of the expected performance and behavior.
  • Behave in a way that wins respect.

Within this framework there is a lot of material, since Peeling writes to the point without a large amount of context setting. It would be hard to give one a sense of everything in this book since it is written in the style of a handbook. It is a welcome change from many management books and has achieved the purpose Peeling set for it.

One interesting thing is that Peeling willingly admits he doesn’t always have an answer he is fully comfortable with, although he does say what he would do in such circumstances. One such example, late in the book in the chapter “Managing in the Real World,” involves having to pass on information “that you think is probably untrue.” Peeling says he does not know what a manager should do and thinks there may be no right, or even good, answer. His own practice has been to deliver the information as presented to him, suggesting “there is more to the matter than meets the eye” and that “we will only find out the full picture in time.” As a person who has been on the receiving side of such an approach, I can say it will work if there is already some trust between the manager and staff, but it will wear thin if too many messages have to be passed with such caveats attached.

There is the occasional drift into an odd area, such as needing to talk about hygiene with some programmers. (There must be companies where this is necessary.) There is also advice that treats the symptom Peeling mentions when deeper issues likely exist. For example, on “firefighting,” Peeling urges one to ensure that day-to-day problems don’t overwhelm thinking about more strategic issues, doing the latter at home if necessary.

Finally, there is one piece of advice not in the book, which applies to those who are not managers but would like to be. Besides reading Peeling’s book, it would help to work (or have worked) in an organization where management embodies the advice Peeling gives both to see it in action and to discuss it with managers who function as Peeling advocates.



Automated Web Testing Toolkit: Expert Methods for Testing and Managing Web Applications

Diane Stottlemyer. 2001. New York: John Wiley and Sons. 304 pages. ISBN 0-471-41435-2.

(CSQE Body of Knowledge areas: Software Verification and Validation; General Knowledge)

Reviewed by Noreen Dertinger

As a software tester with many years in the high-tech industry and a strong interest in software test automation, I was excited when I saw Diane Stottlemyer’s book, Automated Web Testing Toolkit: Expert Methods for Testing and Managing Web Applications. Here, I thought, was a book that would address the concepts and issues surrounding the automated testing of a Web site or Web software, one that would earn an important place on the software testing bookshelf by supplementing material not already covered in depth in other books about software development or testing. After all, the title leads readers to expect a volume that addresses the basic areas of Web development that could be automated, along with pointers to the appropriate tools and methodologies (as well as tips and pitfalls) to carry out such work. Indeed, in the introduction, Stottlemyer writes:

“The focus of this book is to provide you with the necessary tools to design, test, and implement your site. It is a must read if you need to understand what kinds of tools are available, what the tools can do, and how to get the pertinent information you need to make an educated decision that will be best for your Web site.”

The author’s intended audience includes:

“...managers, Webmasters, Web developers, programmers, and test analysts who are in charge of developing and testing applications and Web sites. It is written so that anyone with a Web application to test can use the resources and information covered.”

The book is divided into three parts: Part One, “Managing the Web Testing Process,” Part Two, “Web Testing Tools and Techniques,” and Part Three, “Templates.”

In Part One, Stottlemyer presents four chapters of concepts that are important to sound software testing practices in general but are not limited to Web testing. The first two chapters contain material that, while useful, is applicable to general software testing and has already been covered in greater detail in other books on the subject. Chapter 3, “Web Site Management,” outlines concepts that are covered in other books on software development and testing and that, in an ideal world, would be applied to any well-run software testing operation. Unfortunately, the section that should be the chapter’s main focus, Web site management itself, is given very little coverage at the end of the chapter. An accompanying table lists some tools that are specific to the Web.

Part One concludes with a chapter on risk management that outlines some of the elements that are key to software risk management. There is a table of Web-site risk testing areas that is somewhat self-explanatory. It flows directly from a discussion of version control without a heading or introduction. Throughout Part One, more detail on the Web-specific issues, and particularly any automation-specific issues, would be helpful to someone working on a Web automation project.

The title of Part Two, “Web Testing Tools and Techniques,” offers more promise. In this part of the book, the author presents chapters on Web-site testing tools, preparing the Web test environment, load and stress testing, running the Web test, and analyzing the test process and documentation. In the chapter “Web Site Testing Tools,” a brief overview is provided of selecting a Web testing tool and the types of tools available. The chapter concludes with a discussion of some of the leading test tool developers such as Compuware, Mercury Interactive, Rational, Segue, and others. Again, more focus on Web-specific issues would have been welcome, along with elaboration of any insights, tips, or pitfalls associated with the various tools represented in the categorized table of Web testing tools.

The remaining chapters in Part Two deal with areas that could be of interest to those involved with the testing of Web sites or Web software, although the discussion of automation remains rather thin. The topics presented include setting up the test environment, setting up the test bed, firewall testing, various languages (such as Java, VBScript, JScript, and JavaScript), a brief discussion of testing scripting languages, databases and testing, Web servers, and load testing. As a general overview of testing issues important to the Web, the material provided here is useful, but more could have been done to tie in the issue of automating this work.

Part Three, “Templates,” contains the blank templates, in Microsoft Word (native Word or TIF images) or PDF format, for documenting the various aspects of testing referred to in the book. These templates can be customized to users’ individual needs. Based on the description on the back of the book, I was also expecting to find sample test plans, cases, scripts, and scenarios that I presumed would provide some examples of how to automate the testing of a Web site or Web application. Checking through the folders on the CD, I was unable to locate any such samples. Under links, the user will find a Word or HTML document with links to Web sites for various Web testing tools. A simple Web search will reveal similar resources.

“A high-level compendium of factors that should be considered when testing software” would, perhaps, be a more fitting description of this book. While generally clearly written and approachable, the book is perhaps a little too simplistic for advanced practitioners (the chapter summaries are a specific example). To be sure, this reviewer’s expectation was not for a thick volume covering all the possibilities that could arise in automating tests for a Web site or Web software. However, I was expecting more detail on automated testing for some of the common issues that currently plague Web development. Also, I would have preferred to see the tables of testing tools in appendices, with the body of the book providing more of an overview of applying these tools in the Web environment. In terms of fulfilling audience needs, this book in its current form should still be useful for anyone interested in learning the basics of a sound testing practice. It would also help that audience learn the names and main manufacturers of some of the automated tools available in support of testing initiatives.

Developers or testers who have reached a more advanced stage of their careers and are looking for a more in-depth discussion of automating their Web testing and the associated methodologies and techniques are likely to be disappointed with this book. For that audience, less focus on the basics and more details about automating the Web testing process would transform this book into the toolkit it claims to be.

Noreen Dertinger earned a master’s degree from the University of Ottawa, a certificate in information technology from the University of Victoria, and completed her CMII certification. She has 15 years of experience in the software industry in development, configuration management, and quality assurance. Dertinger is a software quality control analyst with Cognos in Ottawa, Canada, working on PowerPlay Enterprise Server.

Testing Extreme Programming

Lisa Crispin and Tip House. 2003. Reading, Mass.: Addison-Wesley. 306 pages.
ISBN 0-321-11355-1.

(CSQE Body of Knowledge areas: Software Engineering Processes, Software Verification and Validation)

Reviewed by Ray Schneider

Is a tester a valid role on an extreme programming (XP) team? It seems a strange question for a development method that emphasizes both test first and automated testing. One can easily imagine the members of an XP team saying, “We don’t need a tester. We need another programmer.” The experience of reading Testing Extreme Programming will solidly challenge that view and give programmers more insight and respect for what testing professionals can bring to an XP team.

The authors set out the goals of the book in the preface: “Convince current XP practitioners that a tester has a valid role on the team.” Moreover, “Convince testing and quality professionals that XP offers solutions to some of their worst problems.” These goals and several more are developed in a three-part metaphorical journey beginning with a definition of the role of an XP tester and then providing an illustration of how such a role works in the context of an XP project. Finally, the last part is a “Road Hazard Survival Kit” focused on coping with the less-than-ideal real world of XP embedded in larger non-XP efforts or where “…critical XP practices are modified or omitted.”

Some wags have suggested that for an agile process, XP certainly needs a lot of books to explain it. One could argue for fewer, but each book in the XP series has highlighted a different view or perspective on this controversial development methodology. Moreover, the colloquial, anecdotal style of the books, with a distinct air of shared reality, makes them quick and easy to read with a high coefficient of retention.

The core concept of XP is that an austere set of synergistic, highly focused practices, which implement XP’s four values of communication, simplicity, feedback, and courage, can have huge payoffs in increased productivity. This is accomplished while simultaneously shrinking the development bloat associated with the excess documentation required by more traditional practices, documentation that is never read, much less maintained.

XP has a deep commitment to testing; four of the 12 practices have testing as a major component. Crispin and House walk readers through common myths about testing, focusing particularly on the idea that testers are not necessary up front. Not so! The presence of a tester on the team further empowers the test-first commitment of XP. I was impressed by the many descriptions of how the tester brings value to the project. The particular consciousness of the tester leads to identification of many otherwise overlooked deficiencies. The tester occupies a unique position with one foot in the developer’s shop and one foot in the customer’s shop. He or she shares the customer’s concerns while understanding the developer’s world as well. The XP tester helps the customer during planning by participating in clarifying stories, negotiating quality, and acting as a mediator, advocating the customer’s rights while also guarding the programmer’s rights.
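The test-first commitment the authors build on is easy to sketch in any xUnit-style framework. The book’s own examples use JUnit and its relatives; the following is an analogous minimal illustration using Python’s standard-library unittest, with hypothetical names (story_points_remaining, StoryTrackerTest) invented for this sketch rather than taken from the book:

```python
import unittest

# In test-first style, the tests below are written before the production
# code exists; this function is then written just to make them pass.
def story_points_remaining(total, completed):
    """Hypothetical production code: points left in an iteration."""
    return max(total - completed, 0)

class StoryTrackerTest(unittest.TestCase):
    def test_remaining_points(self):
        # The expected behavior is specified here first.
        self.assertEqual(story_points_remaining(20, 8), 12)

    def test_never_negative(self):
        # An edge case the tester's questions would surface early.
        self.assertEqual(story_points_remaining(5, 9), 0)

if __name__ == "__main__":
    unittest.main()
```

Because such tests are automated, they can run on every integration, which is what makes the practice sustainable at the pace XP demands.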

The tester accomplishes this by asking lots of questions not only of the customers but of the programmers as well. As I read the book I became more and more convinced that the role of the tester was essential to XP scalability. This was not a claim made by the book. It emerged from the description of tester roles. These most often focused on the interfaces between the customers and programmers, clarifying stories, opening up deeper understandings of the customers’ needs and the limitations faced by the programmers. It is just in the cracks of the partitioning that large programs seem to falter. The roles of the tester are exactly those that foster early detection of the cracks and ensure a deeper understanding on the part of the programmers through pairing with the tester.

Part Two of the book is built around an extended metaphor, that of a journey from Chicago to Salt Lake City, as a model for the journey represented by an XP project supported by an XP tester. The journey starts easily enough with the planning phase and illustrates the XP tester’s role in making explicit the hidden assumptions in the stories: assumptions the customer leaves unstated because they are too familiar, take things about the domain for granted, or in some cases are simply untrue. Each chapter in this part highlights some aspect of the XP tester role, offers stories from the trenches and examples, and ends with a summary and an exercise. (Yes, the answers are in the back of the book!)

The chapter titles tell a story, for example, “Defining High-Level Acceptance Tests” and “High-Level Acceptance Test Estimates.” The 21 chapters of Part Two cover it all, from “Planning the First Iteration” through “Test Automation” and “Making Executable Tests Run” to “Bugs on the Windshield: Running Acceptance Tests” and some all-important summing up at the end of the metaphorical road trip. The repeated warnings about the dangers of manual testing caught my eye. The authors warn that manual testing, usually embraced to make up for lost time, ends up as a quagmire of inadequate and incomplete testing, consuming far more time in the end than its adoption saved.

The chapter titles make it easy to find and revisit topics. As I read the book I was struck by the many insights into how the XP tester role can enhance an XP project. One interesting sidelight is the XTrack Web application, introduced as an illustrative example linking many of the chapter exercises together. An instance of XTrack is available online.

One would not go driving a long distance without a Road Hazard Survival Kit, the title of Part Three. It contains seven chapters that aim to help one cope with a reality that is less than ideal. This section drills down and talks about tools such as JUnit and other off-the-shelf tools for those who do not have time to roll their own. There’s help for when one needs to bend or blend XP in the face of competing priorities or imposed alternatives.

Testing XP is a fun read. It is thought provoking and instructive. It highlights the role testers can play, not only on an XP project, but on any project that requires testing to be an integral and insightful component of the project. As a contribution to the XP series, it is one of the first to offer methods that start to push out the close-knit edges of XP to show how one might begin to scale XP to larger and more formal projects.

Ray Schneider holds a bachelor’s degree in physics, a master’s degree in engineering science, and a doctorate in information technology. He is a licensed professional engineer in the state of Virginia. He has more than 35 years of product development and applied research and development experience working for government, defense industry, and small business. He is currently an assistant professor in the Mathematics and Computer Science Department of Bridgewater College in Bridgewater, Va.

Testing Embedded Software

Bart Broekman and Edwin Notenboom. 2003. Harlow, UK: Addison-Wesley. 368 pages. ISBN 0-321-15986-1.

(CSQE Body of Knowledge area: Software Testing)

Reviewed by John W. Horch

Testing of embedded systems is an important topic in today’s world of computerized everything. One would like to believe that there are many new techniques for this type of testing. With that expectation, I was eager to read about some of them in this book. Ah, well, better luck next time.

The good news is that this is a useful book for new testers of pretty much any kind of software, not just embedded software. Much time is spent introducing concepts and general information. Experienced testers, however, won’t use this text to any great degree.

A major complaint (and really the only one of import) is the reintroduction of the mythical waterfall life-cycle model. I say mythical because in 40-plus years of development, testing, and software quality, I have never seen the waterfall life cycle used as a purely sequential model. The authors renew the myth that the product “is first fully designed, then built, tested…” I have used the waterfall for commercial, military, standalone, and embedded systems, and we never waited for the full requirements before designing; we designed from the requirements we had validated. We never waited for the full design before we began coding from the design we had validated, and so on.

Getting past that personal concern, I found the book to be tricky to read because the authors use European terms not generally used in the United States. For instance, they use “measures” to mean actions and “commissioner” as the person who originates the assignment to create a master test plan. Of course, there is nothing wrong with these and other definitions, except that they are curious to someone who reads and speaks American rather than English.

The text contains six major parts, each devoted to a particular topic of interest to testers and test planners. Part One is a quick discussion of fundamentals and an introduction to the TEmb—“a method that helps to assemble a suitable test approach for a particular embedded system.” There is no further mention of TEmb outside the chapter introducing it. Had I picked up this book in a bookstore and glanced through the table of contents, I might have thought this was the new method for which I was hoping.

Part Two discusses life cycles and gives a very good explanation of multiple V-models. This will be something from which new testers will benefit. The part continues with brief coverage of master test planning and developer testing, and a good discussion of independent test teams.

Part Three is entitled “Techniques,” again an eye-catcher in the table of contents that may disappoint readers in the actual text. Virtually all of it, though well written and understandable, is basic to any kind of testing and the stuff of introductory seminars for new testers.

Part Four presents high-level material about test environments, tools (categories, not specifics), and test automation. If an experienced tester is to gain anything new from this text, it is in Chapter 16, “Mixed Signals.” It is here that the impact of sensors and “actors” (modules that react to outside signals) is considered as a testing challenge. It is by far the most complicated chapter in the book, and worth reading and understanding if one is really testing embedded control systems.

Part Five discusses various test organizational topics, and the last part, Part Six, is a set of five appendices. I would have liked the book better if the appendices had been used as outlines for chapters. There is the seed of material for experienced testers in most of them.

I was pleased to see that the authors included a glossary of terms as they are used in this text. A list of some 47 references is also included. Sadly, 26 of them, some of them still good sources, are more than five years old. On the other hand, about 10 of them are dated 2000 or later.

Overall, this was a bit of a nostalgic book for me, reminding me of things we did long ago that are still valid. For new testers, or for those who want to know what kinds of things need to be considered in testing both embedded and standalone systems, this book might be worth a look. Those who have been testing for a few years probably already know most of what this book presents.

John Horch has 40-plus years of experience in all aspects of software development and management. Thirty-five years have involved software quality management tasks. He is currently an independent consultant specializing in software quality education. Horch is the author of Practical Guide to Software Quality Management, Second Edition; he is a Senior Member of the IEEE and ASQ, and he serves on the Editorial Board of ASQ’s journal Software Quality Professional. Horch has received the IEEE Computer Society Golden Core award, the IEEE Millennium Medal, and the QAI Lifetime Achievement Award.
