Software Quality Professional Resource Reviews - March 2002

Contents

QUALITY PRINCIPLES

An Introduction to General Systems Thinking: Silver Anniversary Edition
By Gerald M. Weinberg

Peopleware: Productive Projects and Teams
By Tom DeMarco and Timothy Lister

SOFTWARE QUALITY MANAGEMENT

Problem Frames: Analyzing and Structuring Software Development Problems
By Michael Jackson

SOFTWARE ENGINEERING PROCESSES

Software Craftsmanship: The New Imperative
By Pete McBreen

Extreme Programming Explored
By William C. Wake

Strategic Software Production with Domain-Oriented Reuse
By Paolo Predonzani, Giancarlo Succi, and Tullio Vernazza

Requirements Analysis and Design: Developing Information Systems with UML
By Leszek A. Maciaszek

Computer Networking Essentials
By Debra Littlejohn Shinder

The Unfinished Revolution: Human-Centered Computers and What They Can Do for Us
By Michael Dertouzos

PROGRAM AND PROJECT MANAGEMENT

Technology Acquisition: Buying the Future of Your Business
By Allen Eskelin

SOFTWARE METRICS

Software Assessments, Benchmarks and Best Practices
By Capers Jones

SOFTWARE CONFIGURATION MANAGEMENT

A Guide to Software Configuration Management
By Alexis Leon

QUALITY PRINCIPLES


An Introduction to General Systems Thinking: Silver Anniversary Edition

Gerald M. Weinberg. 2001. New York: Dorset House Publishing. 279 pages.

ISBN 0-932633-49-8

(CSQE Body of Knowledge areas: General Knowledge, Conduct, and Ethics)

Reviewed by John D. Richards


As one can tell from the title, this is not a new book—it is a classic. The author worked on the original from 1961 to 1975. He begins the preface to this silver anniversary edition with a quote from Albert Einstein: “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.”

This book is about thinking. It is about how humans organize, synthesize, and put order to their universe. Weinberg in his original preface described his role:

My role, consequently, is to integrate a mass of material into an introductory form. I have tried to gather insights both from general systems theorists and from disciplinarians, to arrange them in a consistent and helpful order, and to translate them into a simpler and more general language so that they become common property (pg. xi).

The book consists of two prefaces, a section on how to use the book, seven chapters, an appendix (a brief mathematical and statistical glossary), end notes, an author index, and a subject index. All are well organized and integrated. Each chapter contains a section titled “Questions for Further Research” and a list of recommended reading. The questions in the first chapter cover 10 disciplines: economics, social psychology and sociology, mechanics, archaeology, thermodynamics (or “thermostatics”), operations research, poetry, neuroendocrinology, and utopian thought.

The first chapter, “The Problem,” begins the reader’s journey into systems thinking with a view of how to define and scope the problems one is going to tackle in later chapters. During this discourse the author draws on examples from physics, biology, and mechanics, to name a few.

“The Approach” outlines the way people go about solving problems. Weinberg examines many of the “laws” as well as the history of science and systems thinking in a humorous manner. “System and Illusion” focuses on the development of systems and provides some warnings associated with them.

“Interpreting Observations” examines how observations are interpreted when they are made by a superobserver. Superobservers see and remember everything about a situation; unfortunately, they do not exist. This is followed by an examination of observations as they are affected by the role and orientation of observers. “Breaking Down Observations” discusses ways in which the limited mental powers of observers influence the observations they make. In reality, people do not make perfect or complete observations.

“Describing Behavior” discusses the use of simulations and their limitations. The author cautions that simulations are limited and may not contain all the information or functionality of the things or systems they are intended to represent. Weinberg views chapter 7, “Some Systems Questions,” not as his last chapter but rather as the end of Part 1.

It is difficult to summarize the book’s broad chapters in a few sentences and even more difficult to give this book the credit it deserves in such a limited review. Suffice it to say this is one of the classics of systems thinking and the science of computing. I recommend it to all; it will cause both scientists and nonscientists to examine their world and their thinking. This book will appear on my reading table at regular intervals, and one day I hope to update to the golden anniversary edition.

John D. Richards (john_richards@sra.com) is an account and project manager for SRA International in San Antonio, Texas. He has spent more than 30 years as a manager and leader. He is an ASQ certified quality engineer and auditor and a Senior member. He has a doctorate and an advanced master’s degree in education from the University of Southern California, and master’s and bachelor’s degrees in psychology. He serves as an adjunct professor at the University of the Incarnate Word, teaching courses in statistics, quantitative analysis, management, and psychology.

Back to top

Peopleware: Productive Projects and Teams

Tom DeMarco and Timothy Lister. 1999. New York: Dorset House. 226 pages. ISBN 0-932633-43-9

(CSQE Body of Knowledge areas: General Knowledge, Conduct, and Ethics)

Reviewed by Joe Zec

Throughout the course of human history, there are some things that have not changed much, such as the length of a day, gravity, or basic human psychology and emotions. It is this last category that is the concern of the second edition of Peopleware: Productive Projects and Teams. Although the first edition was published 14 years ago, the original material is as relevant today as it ever was. As testament to this, consider that only one of the original 26 chapters needed tweaking. There simply are not many computer industry books that have withstood the test of time so well.

This book is a treasure trove of valuable insights into the psyches of software engineers and their managers. Much of the book will strike readers as common sense, making it even more amazing that such issues are virtually ignored in today’s high-tech work environment. It is not too surprising to learn that long hours, intentionally compromised product quality, lousy workspaces, and environments that are nonconducive to productive creative thought lead to unhappy engineers. And it is not much of a leap from unhappy engineers to team problems and failed projects. So why do people usually ignore human/human and human/environment interactions? They do because the software industry is technology-centric, not psychology-centric, and people look to technology (tools, methodology, and so on) to drive them. Well, read this book and see the light!

The first five parts make up the original first edition and are full of nuggets regarding the emotional and psychological responses people have to the way they are being managed and to the environment, both physical and psychological, they work in. I was amazed at the revelations on how (or how not) to build a great team and execute projects flawlessly. Although the book belabors points at times, any manager would still learn many valuable lessons from this book.

The final part was added for the second edition and expands the book’s scope from development teams and projects to the development organization as a whole. The chapters on process improvement programs and the difficulty of change struck a chord for this SQA and process engineer. The importance of people in the successful execution of process and the psychology of change are too often ignored in the impersonal corporate climb up the process maturity ladder. Also, the closing chapters on the responsibility of managers for their organization’s health are gems of wisdom.

In short, the book is a wonderfully entertaining presentation of vital sociological issues. Managers who fail to read this are doing a disservice to their teams and organizations. Besides, when was the last time that the ending of a technical book brought a lump to the reader’s throat?

Joe Zec (Jzec@Avidyne.com) obtained his bachelor’s degree in economics from Harvard University in Cambridge, Mass. In his 20 years in the high-tech industry, he has worked mainly in software testing, software test management, and software development process engineering. He is the quality (process) manager at Avidyne Corp.

Back to top


SOFTWARE QUALITY MANAGEMENT

Problem Frames: Analyzing and Structuring Software Development Problems

Michael Jackson. 2001. Boston: Addison-Wesley. 179 pages. ISBN 0-201-73804-X

(CSQE Body of Knowledge areas: Software Quality Management, Software Engineering Processes)

Reviewed by J. David Blaine

How many different problems are there? An infinite number? How about five? This book offers something new: a complete taxonomy of all the problems possible in the world, and when the counting is finished, there are only five!

This book follows in the tradition of Jackson’s previous books: it provides a crisp method such that performing the described portion of the systems development task is repeatable and practitioner-independent.

For those who write software requirements; analyze system-level wants, needs, and expectations; or have jobs that involve determining functional and quality requirements, this book can help.

Jackson’s thesis is that to achieve successful software development projects, the problem to be solved must be thoroughly analyzed and structured. To do this, problems must be seen in a different light. Jackson introduces problem frames as a mechanism to structure problems.

Frames help the analyst focus on the problem to be solved, and not prematurely drift into solutions, or prejudice the perception of the problem by thinking of a solution and asking solution-related questions. Problem frames are analogous to design patterns. While design patterns look inward to computer and software design, problem frames emphasize the world outside the computer. Frames are part of the problem domain. They identify and describe recurring situations. Recall the punch line to an old mathematics joke, “…the problem has been reduced to one already solved.” This is what frames do. They help resolve requirements uncertainty and fluidity, by working at a higher level of generality.

This book provides readers with the capability to decompose complex and realistic problems into structures of simple subproblems that fit recognized frames (problems people have solved before).

The book has 12 chapters that take readers from an introduction to basic problem-frame concepts, to locating and defining the boundary of the problem using the context diagram, to showing the requirement explicitly in a problem diagram, and on to the five basic problem frames.

The problems that a computer program (system) must solve can be described in terms of one or more of these basic frames. A problem frame is a type of pattern. It defines a problem in terms of its context, the attributes of the domain in which it occurs, and its interfaces. The five basic problem frames are:

  1. Required behavior—build a machine to control some part of the physical world so certain conditions are satisfied, for example, control a sluice gate in an irrigation system so that the gate is opened 10 minutes out of every three hours and otherwise closed (a minimal sketch of such a controller appears after this list).
  2. Commanded behavior—build a machine that accepts operator commands to control a part of the physical world, for example, control the sluice gate according to operator commands.
  3. Information display—build a machine that will obtain information about some part of the physical world and present it at the required place in the required form, for example, an odometer.
  4. Simple workpiece—build a machine that will allow a user to create and edit a certain class of computer-processable text or graphics objects so they can be subsequently copied, printed, and analyzed, for example, a simple text editor to maintain an address book.
  5. Transformation—build a machine that will translate given computer-readable input files into output files according to a set of rules, for example, a Pascal compiler.
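
The following is a minimal sketch, my own illustration rather than Jackson’s, of the kind of machine the required-behavior frame calls for, using the sluice-gate example from item 1; the set_gate actuator callback and the one-second polling interval are assumptions.

    # Sketch of a required-behavior machine (illustrative only): keep the sluice
    # gate open for the first 10 minutes of every three-hour cycle, otherwise closed.
    import time

    OPEN_SECONDS = 10 * 60        # gate open for the first 10 minutes...
    CYCLE_SECONDS = 3 * 60 * 60   # ...of every three-hour cycle

    def gate_should_be_open(seconds_into_cycle):
        # The required condition on the physical world, stated as a predicate.
        return seconds_into_cycle < OPEN_SECONDS

    def control_loop(set_gate, clock=time.monotonic, poll_seconds=1.0):
        # set_gate is a hypothetical actuator: set_gate(True) opens the gate.
        start = clock()
        while True:
            elapsed = (clock() - start) % CYCLE_SECONDS
            set_gate(gate_should_be_open(elapsed))
            time.sleep(poll_seconds)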

These five basic problem frames are distinct from each other in several important aspects. These differences make the concept of framing viable. Understanding the different aspects of frames enables the analyst to determine what must be specified to fully describe a particular problem to be solved, and it yields a set of questions based entirely on the problem.

The differences between problem frames include:

  • Requirements. The user of a transformation system will compare input and output files for correctness. The user of a required-behavior system needs only to look at the controlled part of the world to see how it behaves under machine control.
  • Domain characteristics. In a transformational problem, the data values and structures of the files are important. In a required behavior system problem, the cause-effect relationships of the physical world to be controlled are important.
  • Involvements. In an information display problem the relevant part of the physical world is involved only to the extent that it is monitored, not changed. The physical world in a required behavior problem is very involved because the machine changes it.


In subsequent chapters, variations and flavors of these frames are explained that accommodate realistic variations. Specific concerns of each basic frame and more refined problem descriptions are given. A chapter guides readers through the construction of composite frames.

A result of using problem frames is that readers will be able to build reusable libraries of problem descriptions applicable to their own specific domains. Problem frames are independent of any particular software development method (structured, O-O, JSD).

I highly recommend Problem Frames for anyone involved in writing requirements and analyzing problems for software development.

J. David Blaine (jblaine@san.rr.com) holds the CSQE and PMI project management professional certifications. He is now an independent consultant after having worked the past six years in telecommunications as a software process and quality improvement specialist, preceded by 19 years in aerospace as a software developer and project manager. Blaine supports clients in establishing software metrics programs. He is a cofounder of the San Diego SPIN, is the ASQ Region 7 councilor for the Software Division, and is a member of ASQ, IEEE, INCOSE, and ACM. Blaine holds a bachelor’s degree and a master’s degree in math/computer science and a master’s degree in electrical and computer engineering.

Back to top

SOFTWARE ENGINEERING PROCESSES

Software Craftsmanship: The New Imperative

Pete McBreen. 2002. Boston: Addison-Wesley. 192 pages. ISBN 0-201-73386-2

(CSQE Body of Knowledge areas: Software Engineering Processes)

Reviewed by Milt Boyd


“Software Craftsmanship is written for programmers who want to become exceptional at their craft and for the project manager who wants to hire them. The author presents a method to nurture mastery in the programmer, develop creative collaboration in small developer teams, and enhance communications with the customer” (taken from the back cover).

The author is an independent consultant, with many years of experience in formal and informal process improvement initiatives. He believes that “software development is meant to be fun. If it isn’t, the process is wrong.” His Web site is http://www.mcbreen.ab.ca/.

McBreen contrasts software craftsmanship and software engineering projects in several ways.

  • Size: a small corps of master craftsmen (with journeymen, and a few apprentices) vs. an army of average engineers
  • People: individuals with established reputations for results vs. interchangeable employees with certificates and licenses
  • Product: really good software (exemplified by Open Source software) vs. good enough software (typified by most commercial bloatware)

He compares the development of a commercial airliner and the Gossamer Condor. “[As] a passenger in a ‘fly by wire’ aircraft, I want to know that a systematic, disciplined, and quantifiable approach was taken to the development and verification of the flight control software.” That approach requires planning and anticipation, and staged, coordinated development, because doing anything over is very expensive. But the Gossamer Condor was developed on a different principle. Many problems could not be anticipated and could only be found through experimental flights. Doing things over was a (nearly) daily event, and the design/development accommodated that fact. The result: the small team of dedicated individuals won the prize they sought. Elsewhere, he contrasts the engineers in a firm producing thousands of widgets with the craftsmen trying to make one good movie. The question is, which is more typical of one’s own situation?

The term software engineering was coined in 1967. A NATO study group was formed to discuss “the problem of software.” Their conference, held in Garmisch, Germany, in 1968, published a report entitled “Software Engineering.” The term was considered provocative at the time, and software craftsmanship may be similarly provocative today.

Software engineering was developed in response to the need for truly large defense systems. Some would argue that, in that area, it has delivered solutions that otherwise would be impossible. But is that a good basis for current commercial projects that involve fewer people, shorter time scales, and different market needs? McBreen thinks not.

This book is organized into five parts. The first questions software engineering; the second defines software craftsmanship; the third examines the implications of software craftsmanship; the fourth repositions software engineering; the last considers how to use the new model. In every part, McBreen writes clearly, and persuasively presents his alternative vision. While acknowledging the importance of software engineering for large projects, he continually challenges its role for more routine work.

Of course, the problem is not in understanding the difference between software engineering and software craftsmanship, but in applying that understanding. Part 5 (“What to do on Monday Morning”) devotes three chapters to this, or almost a third of the book. The highlights are to go with experience (not credentials), to think of applications (not projects) designed for testing and maintenance, and to create a learning environment that fosters reflection.

This book may have relevance to the ASQ certification programs. McBreen cites the experience of the ACM as it tried to develop a software engineering body of knowledge (BOK). The ACM found that software development evolves so rapidly that any static BOK becomes obsolete. Also, the field is diverse, so finding “the core” is difficult. The techniques for safety-critical systems (where licenses and certificates might be demanded) are very specialized and would probably be excluded from any proposed core. Eventually, the ACM withdrew from the effort to license software engineers.

The author succeeds in being provocative. He holds out a vision, but admits he does not know how to get there easily, or even if it is possible to get there. There are lots of suggestions, many of which will seem practical, and some of which will sound outrageous.

Milt Boyd (MiltBoyd@aol.com) is a member of ASQ’s Software, Reliability, and Quality Management Divisions. He is certified as an ASQ Quality Manager, and is certificated by IRCA as a Lead Auditor of Quality Management Systems. He is currently employed as a systems engineer by Avidyne Co. of Massachusetts.

Back to top

Extreme Programming Explored

William C. Wake. 2002. Boston: Addison-Wesley. 159 pages. ISBN 0-201-73397-8

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Ray Schneider

Extreme programming (XP) is a phrase that is immediately polarizing. Extreme has so many denotations and connotations, both positive and negative, that the word alone generates cognitive dissonance. Add to extreme the word programming and one enters a world of wizards, smoke and mirrors, and virtual everything, not least reality.

Extreme Programming Explored is another book in the Addison-Wesley XP-Series edited by Kent Beck. Couple extreme programming with other so-called agile methods (www.agilealliance.org) and one has a movement, if not yet a revolution or a paradigm shift.

Wake’s book begins with a brief foreword by Dave Thomas, coauthor of The Pragmatic Programmer. He promises readers that "Extreme Programming Explored will take you on a journey." The journey he has in mind is the Lewis and Clark expedition seeking a northwest passage to the Pacific; by analogy, we have the imputed significance of XP. It is nothing less than a new way of thinking about the journey of programming. Of Wake’s treatment of XP he goes on to say, "You’ll sit in on a pair-programming session…develop a database application test-first…plan a library system. Then you’ll see what a typical day looks like for a manager and a programmer and their customer."

Thomas goes on to praise the book for taking readers into the extreme programming experience and closes with a prediction: "Pretty soon, you’ll be the old hand telling scary stories to the tenderfoot arriving from the East." Peering into this metaphor, one sees a vision of a new world where XP has taken its proper place in the sun, perhaps in company with other explorers trying to introduce a new order. What began as a rivulet of interest in this extreme programming thing a couple of years ago has become something of a raging torrent.

The more staid practitioners of traditional methods seem to see XP and the agile movement as a threat. The blurb on Hillel Glazer’s article in the November 2001 issue of CrossTalk (http://stsc.hill.af.mil/crosstalk/) framed the issue in the following language: "Many process-oriented software developers (some of whom use the Software Engineering Institute’s Capability Maturity Model (CMM)) think of extreme programming (XP) as a seat-of-the-pants development method. Many high-speed cutting-edge developers (whether they use XP methods or not) see CMM as a cumbersome unnecessary impediment to developing software quickly.…"

In the December 2001 issue of Software Development (http://www.sdmagazine.com/documents/s=1811/sdm0112h/0112h.htm), an opinion piece by Scott Ambler entitled "The Threat of the New" speaks of the "buzz about agile processes." His thesis is summarized as "The negative reaction stems from the fact that agile processes are new and unfamiliar, and require changes in personnel, process, and culture."

Glazer sees the conflict as due to a myth or misconception that CMM cannot be rendered in an agile fashion; conversely, he argues that agile processes such as XP are disciplined and not merely glorified hacking, as some have characterized them. Ambler sees a paradigm shift in the offing when he says, "I equate the agile paradigm shift to the object paradigm shift …."

Certainly XP is in the air. Wolfgang Strigel of the Software Productivity Center, as guest editor for the November/December 2001 issue of IEEE Software, offers up four articles on XP while sitting firmly on the fence. "Many methodologies have come and gone. Only time will tell if one of the more recent methodology innovations, extreme programming, will have a lasting impact on our way to build software systems." Strigel goes on to point out that "…XP is not the ultimate silver bullet…." Later in the same paragraph, he points to what may well be the real crux of the matter. He says, "XP tends to be a grassroots methodology. Developers and development teams typically drive its introduction…CMM, on the other hand, is typically introduced at the corporate level and then deployed to development teams." In the agile process movement, characterized in part by XP, one may be looking at a revolt of the programmers, striving to take back their particular workplace, the conceptual heart of software development, from those they feel cannot do it but can tell them how to do it.

I spent a couple of weeks on the agile programming listserv observing and participating, to some degree, in the discussions. Scott Ambler, Ron Jeffries, and others prominent in the agile movement were heavy posters, and one of the fascinating things I observed was the sheer passion both in defense and on the attack, not flame wars, but serious differences of opinion about what constitutes good software development processes. It was exhausting trying to keep up with the discussion. There is deep commitment driving the new wave of agile methods. In a curious and inquiring frame of mind, I turned to William Wake’s book.

Extreme Programming Explored is divided into three sections focusing on programming, team practices, and processes. Wake characterizes himself as "just a programmer" who attended an XP Immersion class in December 1999. With some minor overlap he maps Beck’s set of 12 XP core practices to the three layers that divide his book:

  • Programming encompasses simple design, testing, refactoring, and coding standards.
  • Team practices include collective ownership, continuous integration, metaphor, coding standards, 40-hour week, pair programming, and small releases.
  • Processes covers on-site customer, testing, small releases, and the planning game.

Over the past 35 years, I have been an observer and participant on the research and development scene in both hardware and software development capacities, as a researcher and an R&D manager, working for government, a large defense contractor, and for small business. Most recently I have joined the academic world. I want to discuss the highlights of Wake’s book.

The Programming section of the book is presented in two chapters. "Program incrementally and test-first" is the theme. I cannot summarize the chapter in just a few words, so instead I will focus on the really radical idea. Test first does not mean write the code and then test it, as one might suspect. Instead it means write "automated, repeatable unit tests before writing the code that makes them run." The test becomes the specification, an operational specification at that, with pass/fail values.

Wake takes readers through this apparently implausible idea, and step-by-step, it makes more and more sense. Test first is a self-imposed discipline that enforces a kind of incremental design as the project unfolds and provides an embedded test capability, which allows the code under development to be fully tested at any time.
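
To make the test-first idea concrete, here is a minimal sketch in Python’s unittest style; it is my own illustration, not an example from Wake’s book, and the Stack class is hypothetical. The tests are written first and serve as the operational, pass/fail specification; the class is then the simplest code that makes them pass.

    import unittest

    class Stack:
        # The simplest code that makes the tests below pass.
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

        def is_empty(self):
            return not self._items

    class StackTest(unittest.TestCase):
        # Written before Stack existed: these tests specify the behavior.
        def test_new_stack_is_empty(self):
            self.assertTrue(Stack().is_empty())

        def test_pop_returns_last_item_pushed(self):
            stack = Stack()
            stack.push("a")
            stack.push("b")
            self.assertEqual(stack.pop(), "b")
            self.assertFalse(stack.is_empty())

    if __name__ == "__main__":
        unittest.main()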

Chapter two focuses on refactoring, the practice of changing perfectly good code to make it better during development. Why? Wake illustrates why: to make the product better. Test first makes it possible, and since the tests are already written, one can tell whether the code still works after one makes it better. When I read about it, I remembered Brooks’ observation in The Mythical Man-Month that one plans to build it twice because the first time will not be any good anyway. Refactoring in the context of incremental programming is building it twice in the small.
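
As a small, hypothetical illustration of refactoring under that safety net (mine, not the book’s): a duplicated rule is pulled into one helper without changing behavior, and the already-written tests confirm that nothing broke.

    # Before: the same discount rule appears in two places.
    def line_item_price(price):
        return price * 0.9 if price > 100 else price

    def invoice_total(prices):
        return sum(p * 0.9 if p > 100 else p for p in prices)

    # After: the rule lives in one helper, and both callers use it.
    def discounted(price):
        return price * 0.9 if price > 100 else price

    def invoice_total_refactored(prices):
        return sum(discounted(p) for p in prices)

    # The existing tests (represented here by a single assert) still pass,
    # which is how one knows the refactoring preserved behavior.
    assert invoice_total([50, 200]) == invoice_total_refactored([50, 200]) == 230.0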

The whole first section is saturated with code examples illustrating Wake’s points, not easy going for managers who don’t do code. Of course, XP seems focused on using people in ways that emphasize their real strengths. This resonates with the Agile Alliance’s four core values as expressed in its manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Team practices are the focus of chapters 3 through 6. Here are many more of the controversial practices that make the extreme in XP seem right on. I am going to mention the practices and focus a little more on those I am particularly interested in. XP practices collective code ownership. "Whoa, you’re not going to touch my code!" Wake shows that there are really no good solutions to this conundrum. Collective ownership leads to teamwork and collective responsibility, which in practice often means that no one is responsible. When all the options are sort of so-so, one has to pick one, and this is the one that XP picks. To keep things on track, XP practices continuous integration. The presence of tests and the primacy of test first make this possible.

The 40-hour week is an XP practice that I heartily endorse. The specter of death march programs is one I have experienced everywhere I have worked. Upper-level managers view overtime, especially unpaid overtime, as the magic bullet they can shoot whenever they are in schedule trouble. The problem is that it is addictive, destructive, and it doesn’t work anyway. The tiger team practices celebrated in books like The Soul of a New Machine by Tracy Kidder metaphorically resemble Hamlet. At the end everyone is dead; all the team members move on to new jobs in an orgy of postpartum depression. That’s no way to run a company. Destroying one’s development team is bad business. XP shows its maturity in endorsing a more human way of doing business.

XP’s open workspace practice is one that I have only experienced in integration laboratories. It works well there because it’s essential to what is going on. It is a truism in project management that colocating the development team is the most efficient way to run a project. "XP specifies an open workspace, usually with small private spaces around a central area," according to Wake. This reminds me of IBM’s open office concept in the early 1970s. It was unveiled with much hoopla and heralded as the office of the future. It turned out, though, that people didn’t like working in a fishbowl all the time. So at least some private space is essential. The danger Wake mentions is the degeneration of the open workspace into a bullpen. This is unlikely, however, if there is a clear project focus. It’s ironic that "open office" is now an open source software project (www.opensource.org).

Chapter 4 takes on an XP biggie – pair programming. Of all the XP practices, this is one of the most controversial because it is so counterintuitive. How can it be possible for two programmers working together at one workstation, with only one at the keyboard, to be as productive as, or more productive than, two programmers working separately? Most people will take some convincing here. I’m personally on the fence because I have never done it. But there is a lot of testimonial evidence out there that it is a good thing.

In chapter 5 Wake addresses architecture, which is a topic near and dear to my heart. I have taught architecture based on the work of the patterns movement and Shaw and Garlan’s book Software Architecture: Perspectives on an Emerging Discipline. One of the methods I have always supported is one that I have summarized as core plus elaboration. XP’s concept of first iteration, that is, a functioning skeleton of the system as a whole, is a variation on that same theme. Spike solution and do the simplest thing that will work are both useful principles. The first is essentially rapid throwaway prototyping to prove a concept. On the hardware side one calls such things concept demonstration prototypes or demonstration test articles. Do the simplest thing that will work is more profound, I think. There is a tendency to throw in everything but the kitchen sink when the design is done upfront in an entirely theoretical way. That’s what the agile crowd has been calling BDUF (Big Design Up Front). It’s clear enough that it’s a dangerous practice. Brooks found it dangerous in The Mythical Man-Month, and nothing has changed since. Better and best are always the enemies of good enough. In software, except in fairly rare situations, you can afford to do the simple thing first and optimize later if it’s really required. The whole Internet is a testimony to this practice.

But where is the architecture? That question, the actual title of chapter 5, is best answered by the system metaphor, which chapter 6 takes up under the title "What Is the System Metaphor?" The answer is that the metaphor is like a pattern of the architecture, providing a common vision and a shared vocabulary with the generative power common to all metaphors, without the straitjacket of systems overdesigned on paper. The whole thing has the right feel to me.

I tell my students that software is entirely metaphor and that they ought to realize early and clearly that there are no objects, no hierarchies, not even any bits in a computer. All of those things are metaphorical language about little pieces of highly patterned and metalized silicon connected by copper wires and the patterns, in the form of electrical voltages, that they contain. Software is the closest analog we have developed to pure thought. The pattern and the wonder of it are in the set of metaphors through which it is created.

System design should be approached using methods that exhibit just-in-time constraints, late binding, and no decision made before it has to be. This kind of methodology keeps maximum flexibility in hand and avoids getting seriously stuck later in the development process, when it is too late to go back and too badly flawed to go forward.

Process, the last section of the book, was my favorite because it is so right and yet so rare. In the theory X world that dominates much of industry, these last few chapters approach the unthinkable idea of a sane theory Y, or maybe theory omega, workplace where things are done by mature adults in a reasonable, logical, systematic, and well-motivated manner. XP manages the software development process through something called the Release Planning Game. It’s not really hard, but too involved to fully describe here. The key, however, is that roles are assigned in a rational way. Customers write the stories, which are a bit like use cases. Programmers estimate the effort to develop the software that would implement the story. It’s OK to say, "I can’t" or "I don’t know."

In many industrial settings, saying that the story violates the law of conservation of energy would not be enough to get management to drop the requirement. In XP, if the story is too big to be estimated, the customer is asked to split the story into smaller progressive stories. The key is that each player in the release planning game performs a role in which he or she is truly the expert. In a past life, I often found myself saying "wishing doesn’t make it so" in an effort to explain to folks who should have understood that wanting something isn’t the same as having the resources to get it.

Wake goes through this process in an insightful and informative way. In chapter 8 he describes planning a software iteration. Here again there are rules to the planning game. Iterations are time-boxed. Iteration planning can be thought of as a board game, says Wake. The customers and programmers play the game to develop the plan. It amounts to matching the work to the available talent so that, instead of planning for failure as most methods end up doing, the XP iteration planning game focuses on getting a win-win definition, iteration by iteration.

Does it work? I don’t know. It certainly sounds appealing. I am a little skeptical that most organizations can get past the traditional views of work and distribution of power to make XP work. It requires a lot of maturity on the part of all and requires power sharing that allows each party to exercise power in their own area of expertise.

Chapters 9 and 10 wrap up the book with a treatment of the roles of customer, programmer, and manager on a typical day, and some brief conclusions. XP expects to be provided with an on-site customer. That alone is a pretty revolutionary idea. The on-site customer performs four key jobs that only the customer can truly perform. The first is answering questions about the tasks, that is, about the implementation of the stories, which often involves decisions and interpretations. The second is writing acceptance tests: not writing the software, but identifying the things a test must check in order for one to know that the story has been implemented properly. The third is running the acceptance tests and keeping a record. The fourth is steering the iteration, which builds on the two planning games, release planning and iteration planning, and includes choosing what the team does if the iteration needs to be adjusted.

Most of the things the customer does in the game-rule-constrained environment sound like the things that managers do traditionally. That’s another XP innovation. The role of manager is one of tracker and coach. Many functions thought to be management functions are relinquished to more appropriate players. Customers set priorities. Programmers assign tasks and estimate stories and tasks. Schedules are negotiated between programmers and customers. XP managers face outside parties, form the team, obtain resources, and manage the team through tracking and progress reporting, as well as by hosting meetings and celebrations.

When readers have finished reading Wake’s book, they may or may not be convinced that XP is something they want to try. What they will have accomplished is an exercise in mind stretching. XP is a different way of thinking about programming projects. Its focus seems to be on appropriate empowerment. Wake writes about the value of high-bandwidth communication provided by XP as being a key to productivity. But there are caveats, too. XP has been applied predominantly on small projects. Can the process be scaled to large teams and large projects? What further optimizations can enhance the process? Only time will tell whether XP is just another partially successful candidate for a "silver bullet" like so many that have come and gone in the past, or whether, as Scott Ambler believes, "We’re only partway along the agile paradigm shift — but agility is here to stay." I recommend Extreme Programming Explored as an insightful read on an important topic.

Ray Schneider (schneirj@adelphia.net) has worked for more than 35 years in both hardware and software research and development for government and defense industry developing sensors and signal processing software, and for small business where his teams have developed many portable instruments with embedded software solutions. He is a member of the IEEE and the ACM. He is a licensed professional engineer in the state of Virginia and holds a bachelor’s degree in physics, a master’s degree in engineering science, and a doctorate in information technology. He is currently a visiting assistant professor in the Computer Science Department at James Madison University in Harrisonburg, Va.

Back to top

Strategic Software Production with Domain-Oriented Reuse

Paolo Predonzani, Giancarlo Succi, and Tullio Vernazza. 2000. Boston: Artech House. 399 pages. ISBN 1-58053-104-0

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Dave Zubrow

First, let me confess that I read this book as someone with an interest in software product lines and some familiarity with the concepts. However, I am not immersed in the field or in the techniques discussed. So this review represents the comments and observations of someone who has an interest in learning about this field rather than those of an expert. This book describes an approach to performing domain analysis and engineering. The method is called Sherlock. The book is organized into eight chapters describing the fundamentals of domain analysis and engineering, followed by four detailed case studies illustrating the application of Sherlock. It uses object-oriented (OO) approaches and terminology as well as unified modeling language (UML) diagrams and notations. An appendix is provided to assist those who are unfamiliar with these concepts and notations.

Sherlock is organized into the following phases:

  • Domain definition
  • Domain characterization
  • Domain scoping
  • Domain modeling
  • Domain framework development


For each phase, the book describes the deliverables or expected outputs as well as how to produce them. Each chapter of this first part concludes with an illustration of the tasks and deliverables in terms of an example that is carried through from one chapter to the next. This helps readers learn the individual elements of the method as well as see the relationships among them.

The first three chapters provide an introduction, an overview of domain analysis and engineering, and an overview of Sherlock. These chapters provide the motivation for the method and set the stage for the rest of the book. In particular, they provide the rationale as to the benefits likely to be realized from a domain-based approach to software product development. By doing so, they also help to define those situations where such an approach makes economic sense as well as those where it would not. As one reads the book, it becomes clear that, as with most undertakings, planning, analysis, and preparation are likely to reap good returns. In this case, they are essential to success.

The following five chapters describe each of the five phases of Sherlock. The first phase is domain definition. It involves identifying and arranging information relevant to analyzing and defining the domain. It also involves making a decision as to whether to continue with domain analysis and engineering. The outputs from domain definition include:

  • A domain vocabulary
  • Classified information on the domain
  • Definitions of the feasible, strategic, and current domain
  • A feasibility analysis

It is important to understand the extent of the various domains. The feasible domain represents all product possibilities associated with a set of features. The current domain is most easily characterized in terms of the current products in the marketplace, as well as those under development. In between these two lies the strategic domain. This is the product space that the organization would like to inhabit.

Domain characterization is “the observation of a domain and the planning of a strategy for the development of products within that domain.” The basis for characterizing a domain and selecting a strategy revolves around how products deliver value to users. Here the authors distinguish between internal and external value generation. Internal value is the result of the features and quality intrinsic to the product. External value results from interoperability and compatibility with other products in the domain. This extends the functionality and user base for the product. Consideration of these two types of value is explicitly addressed during domain characterization. The deliverables from this phase include:

  • Value diagrams
  • Descriptions of installed product bases
  • Tables of compatibility among products
  • Diagrams of estimated user flows or migrations among products based on their features
  • Diagrams relating pricing policies with market segments
  • A plan for developing new products based on the analysis to date

The chapter on domain scoping introduces the concepts of commonalities, variation points, and variants. Domain scoping analyzes the products and features of the domain to identify attractive variation points and to select variants. Variation points represent “not fully specified features in the common part that can be implemented in different ways in specific products.” Variants, then, refer to specific instantiations of variation points. In terms of this approach, products are the result of assembling variants into a common product base (a small sketch after the deliverables list below illustrates this vocabulary). This is where much of Sherlock’s analysis occurs. The deliverables from this phase include:

  • A representation of the variability space, perhaps as a matrix
  • A definition of possible product strategies to pursue
  • A product strategy to be pursued
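
To make the domain-scoping vocabulary concrete, here is a small sketch of my own, not taken from the book: variation points in a hypothetical text-editor domain, each offering several variants, with a product assembled by choosing one variant per variation point on top of the common base.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VariationPoint:
        # A feature of the common part that specific products may realize differently.
        name: str
        variants: tuple

    # Hypothetical variability space for a text-editor domain.
    VARIATION_POINTS = (
        VariationPoint("storage", ("local file", "cloud sync")),
        VariationPoint("spell_check", ("none", "on demand", "as you type")),
    )

    def build_product(choices):
        # A product is the common base plus one selected variant per variation point.
        for vp in VARIATION_POINTS:
            if choices.get(vp.name) not in vp.variants:
                raise ValueError(f"no valid variant chosen for {vp.name}")
        return {"common_base": "editor core", **choices}

    basic_editor = build_product({"storage": "local file", "spell_check": "none"})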

Domain modeling involves the development of models for the domain and the products in the domain. The domain model is at a level of abstraction such that all product models can be derived from it. The domain model and product models are described from the perspective of both the user and the product developer. In the former case, use cases help specify how the user interacts with the product and how the product will behave. In the latter, an analysis model describes the structure of the product in terms of its constituent objects. The deliverables from this phase include:

  • Product use-case and analysis models
  • Domain use-case and analysis models
  • Tracings between use cases and elements of the analysis models

The final phase of Sherlock is domain framework development. The domain framework is the assemblage of the elements needed to build products. This includes the software components as well as an architecture to guide the assembly of components. To realize the domain framework, five deliverables are required:

  • A definition of the nature of the domain framework
  • Specification of the domain framework architecture including module, class, and object dynamics projections
  • Documentation of components
  • A catalog of the components in a format and medium that developers can easily search
  • Guidelines for application development

The second part of the book contains four case studies illustrating the use of Sherlock in a variety of applications, including wireless Internet communications, neural network products, and command and control systems for remotely operated vehicles. Each case study provides detailed examples of the Sherlock deliverables for the five phases. These give readers a good sense of how to implement this methodology.

This book has many positive features, including a consistent approach to describing the phases of Sherlock and its use of examples and case studies. Also, its use of standard notations and diagrams makes it understandable to many readers. To get the maximum benefit from this book, you should be prepared to spend considerable time reading, rereading, and studying the diagrams and examples. The authors have gone to great lengths to provide these in-depth examples and discussion.

Some minor detractions that I found include:

  • The illustrations are often not on the same page as the text that describes them.
  • Occasionally some terms and notation are not defined. This was not, however, a general occurrence.
  • Some points are obscured by the authors’ trying to be overly precise with their jargon.

None of these detractions are serious enough to keep me from recommending this book to those interested in software reuse or software product lines. It provides an approach that guides the reader from business analysis associated with a product domain to the technical details of implementing products for the domain. The book describes many details that will benefit those embarking on a domain-based reuse program, especially with the analysis and planning of how to exploit a particular product domain.

Dave Zubrow (dz@sei.cmu.edu) is team leader for the Software Engineering Measurement and Analysis (SEMA) group within the Software Engineering Institute (SEI). His areas of expertise include empirical research methods, data analysis, and data management. Zubrow earned his doctorate in social and decision sciences and has a master’s degree in public policy and management from Carnegie Mellon University. He is an authorized SEI instructor. He is a member of SQP’s Editorial Board and chairs the committee on metrics, measurement, and analytical methods for ASQ’s Software Division. Zubrow also is an ASQ Certified Software Quality Engineer.

Back to top

Requirements Analysis and Design: Developing Information Systems with UML

Leszek A. Maciaszek. 2001. New York: Addison-Wesley. 378 pages. ISBN 0-201-70944-9

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by David Walker


This book is a complete reference for requirements analysis and design; it is based on object technology and employs the popular standardized unified modeling language (UML). The depth of coverage includes an overview of the software development process, an overview of requirements analysis using object technology, requirements determination, requirements specification, advanced analysis, system design, user interface design, database design, program design, formal reviews, execution-based testing, and change management. There is valuable guidance on modeling of requirements and software architecture, including many popular methodology alternatives. Many diagrams and examples are provided to clarify the concepts, and some good questions close each chapter. A guided tutorial (online shopping) applies the concepts of the book to a specific application domain and provides a complete solution. Four other case studies in other domains (university enrollment, video store, contact management, and telemarketing) are provided. The examples, scattered throughout the book, are pulled together with a set of activity diagrams, another major feature of the book.

The emphasis on the design phase is something readers can appreciate. The author went to great lengths to provide depth where other books have fallen short. Verification strategies, such as design review, code walk-through, and inspection, are also discussed. The book points to a companion Web site (http://www.booksites.net/maciaszek) that contains solutions to the questions in each chapter and solutions to the case studies. The site maintains an instructor’s manual and student resources for potential use in the education sector.

This book should be valuable to anyone interested in UML or looking to explore a more advanced methodology for developing software, specifically focusing on the requirements and design phases.

David Walker (david.w.walker@pharmacia.com) has a master’s degree in computer science from Northwestern University and is an ASQ Certified Software Quality Engineer with 18 years of software engineering experience in the communications and health care industry sectors. He is currently a senior information scientist at Pharmacia Corporation.

Back to top

Computer Networking Essentials

Debra Littlejohn Shinder. 2001. Indianapolis: Cisco Press. 735 pages. ISBN 1-58713-038-6

(CSQE Body of Knowledge areas: Software Engineering Processes, General Knowledge, Environmental Conditions, Maintenance Management)

Reviewed by Gordon W. Skelton

Most people encounter networks daily. Some have only limited knowledge of the fundamentals of networks and how networks actually work. As networks have become more pervasive it is essential to have more than a cursory knowledge of networking.

Shinder recognizes the need for improving the knowledge of network users and individuals starting a career in the networking industry. For this audience she has written this book.

The book is subdivided into four parts:


I. Introduction to Networking Concepts. This part covers such topics as network classification, network concepts, network models and standards, communication methods, and network specifications (for example, Ethernet).

II. Networking Hardware and Software. This section covers the physical components that make up a network, media, protocols and services, LANs, WANs, network operating systems, directory services, and related operating systems and hybrid networks.

III. Network Specialty Areas. This part focuses on additional concerns of networking such as security, virtual private networks (VPNs), remote access, thin clients, and monitoring and managing a network.

IV. The Future of Networking. This part looks to the future and discusses such topics as improvements in IP and futuristic aspects of networking.


For those who have experience in networking but want to refresh their basic knowledge, this is a great book. Parts I and II present a good overview of the key concepts and components that make up the standard network. In Part III Shinder addresses additional areas of importance, topics that not all individuals supporting networks may have encountered.

For individuals concerned with the quality aspect of networks this book has much to offer. Of particular help are the extensive glossary and the broad coverage of many key networking concepts. Simply used as a reference book Parts I and II would serve well. Adding the issues of security and management enhances the value of the book as a tool for system quality persons. Chapter 18 focuses on monitoring and managing networks, as well as troubleshooting. This area is certainly a concern for network quality control.

In general, this book is very readable. The chapters are well organized and the subsections are precise, which adds to its reference text quality. Each chapter ends with a series of review questions and related references, which are helpful.

I recommend this book as a good addition to the library of those who are network users and those who have a responsibility for the oversight of a network (not daily management). For individuals with vast network experience, this book has less value.

In addition, for individuals starting out in the networking world, this book should be kept close at hand. As one encounters terms and abbreviations it is helpful to have a place to go to quickly find the correct answer.

Gordon Skelton (gwskelton@mvt.com) is vice president for information services for Mississippi Valley Title Insurance Company in Jackson, Miss. In addition, Skelton is on the faculty of the University of Mississippi, Jackson Engineering Graduate Program. He is an ASQ Certified Software Quality Engineer and is a member of the IEEE Computer Society, ACM, and AITP. Skelton’s professional areas of interest are software quality assurance, software engineering, process improvement, and software testing.

Back to top

The Unfinished Revolution: Human-Centered Computers and What They Can Do for Us

Michael Dertouzos. 2001. New York: HarperCollins Publishers. 217 pages. ISBN: 0-06-662067-8

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Douglas A. Welton

In The Unfinished Revolution, Michael Dertouzos, director of the Laboratory for Computer Science at MIT, lays out a well-formatted speculative road map for a future built around human-centered computing. He begins by answering the question “why?” followed by a detailed description of “what?” and finally a peek at a possible “how?”

As an opening move, Dertouzos puts forth his speculation on “why” human-centered computing is necessary. The author’s motivation is two basic questions: “How do we make computers easier to use?” and “How can computers help users do more with less?” In speculating about the answers to these questions, Dertouzos concludes that current marketing of computers has done more to increase hype than productivity. The author suggests a radical change: human-centered computing will transform today’s notions “by throwing out last century’s model for computing and adopting—indeed, demanding—a new computing philosophy, a new master plan, that lets people interact naturally, easily, and purposefully with each other and the surrounding physical world.”

Machines should be judged by how well they serve people’s needs, not by processor clock speeds. Dertouzos states that people have not fundamentally changed in thousands of years, but technology is constantly evolving. Thus, the responsibility of adapting falls on the shoulders of new technology. As an example, the author points to the ease of use of a car’s gas pedal and steering wheel, and speculates about what the appropriate computing metaphor would be in a human-centered computing world. In concluding his opening moves, Dertouzos states that three things are necessary for the paradigm shift to human-centered computing to begin. First, the mindset of designers and users must shift to focus on computing interactions from a human sensory vantage point. Second, machines must be easier to use and more effective in making people more productive. Third, this new technology must be available to everyone.

The meat of the book is found in the author’s description of the attributes of human-centered computing. Dertouzos deconstructs the “what” of his human-centered computing philosophy into five elements:

  1. Natural interactions—building systems that interact with humans by speech and vision, forms that people have evolved to be comfortable with.
  2. Automation—building systems that off-load work from people’s brains and eyeballs, facilitating the process of “doing more with less.”
  3. Individualized information access—building systems that understand how people individually like to organize and describe information.
  4. Collaboration—building systems that let people work with each other across space and time.
  5. Customization—building systems that adapt to the wide variety of human and organizational interests, capabilities, styles, and goals.

Each element is addressed in its own chapter. In each chapter, the author provides a reason for selecting the particular element, examples of how current computing ideas may be affected by this selection, and, in some cases, he examines the social consequences of adopting human-centered computing as the paradigm for future machine-human interactions. Only after detailing the benefits of each element of human-centered computing does Dertouzos get around to writing about the element that some might consider the most important factor of all: humans.

In closing his speculation about human-centered computing, the author details a project under way at MIT. Oxygen is a collection of prototype projects that address individual computing needs, from the level of the handheld system to the nomadic network system. In the book’s final chapter, Dertouzos voices his opinion of how human-centered computing might affect various social issues, such as the divide between the information “haves” and “have nots,” globalization, and monoculturalism.

The Unfinished Revolution is an easy-to-read, 217-page book written in a conversational style that will not challenge one’s vocabulary. This book is pure conjecture, and thus is impervious to practical criticism. As I read the book, however, I found myself becoming frustrated by the author’s habit of asserting a speculation as if it were a proven and accepted fact, without providing any proof or supporting evidence. In particular, many of the assertions about the social impact of human-centered computing seem to assume no negative effects whatsoever. Perhaps my caution is simply a byproduct of my belief that the unintended consequences of any act always outnumber the intended consequences.

The familiar wisdom states that one should “never judge a book by its cover.” This wisdom is something readers must keep in mind when they pick up The Unfinished Revolution: Human-Centered Computers and What They Can Do for Us. On the cover one will find quotes about Michael Dertouzos from Bill Gates, Tim Berners-Lee, Vint Cerf, and half a dozen other notables of equal stature from the world of technology, business, and economics. Oddly enough, most of the quotes do not actually mention the book.

Douglas A. Welton (dwelton@bellsouth.net) is a computer scientist and playwright. Over the course of his career, he has contributed innovation, vision, and excellence to leadership roles and product success at Digital Equipment Corp., HBO & Co., Bell + Howell, and Merant. One of his current projects is authoring the forthcoming book Insanely Great Object: A Guide to Software Development Using Objective C and Cocoa on Mac OS X.


PROGRAM AND PROJECT MANAGEMENT

Technology Acquisition: Buying the Future of Your Business

Allen Eskelin. 2001. Boston: Addison-Wesley. 179 pages. ISBN 0-201-73804-X

(CSQE Body of Knowledge area: Program and Project Management)

Reviewed by Jayesh G. Dalal

The increasing trend by information technology (IT) managers to buy rather than build their technology is the author’s motivation for this book. He developed and refined the approach presented here while carrying out his own technology acquisition management assignments. In his words, the goal of the book is “to describe a way of managing a technology acquisition project… so that you select the right vendor, with the right technology, for your business.” He also describes how to implement and operate the acquired technology. The book is aimed at experienced project managers taking on their first technology acquisition assignment.

The author begins by presenting a seven-step project life cycle for managing a technology acquisition project, and then detailing each of the seven steps. For each step, the author defines and discusses the objective and then explains the associated process/activities. This description is often supported by templates and checklists. In some instances the author has provided case studies based on real-life situations.

Eskelin considers people as important as process in managing technology acquisition, and he has identified and described the role of various groups of people involved during the steps. The author has a Web site (www.technologyacquisition.com) and invites readers to share their experiences and advance the practice of technology acquisition management.

The reference map at the beginning of the book is useful for quickly locating a specific template, a case study, or the people responsible for a particular step. Unfortunately, the checklists are omitted from this reference map. Given that IT managers and professionals motivated the author, it is interesting that he has not provided the templates and checklists as electronic files with the book or at his Web site.

The book has seven chapters, and they are sequentially devoted to the seven steps of the technology acquisition project life cycle.

  1. Initiation step—includes defining the business need and chartering the project.
  2. Planning step—covers planning the acquisition, defining and prioritizing requirements, defining the solution, and identifying and contacting vendors. A template for scoring vendors is provided.
  3. Research step—the objective of this step is to identify the best vendor. Nine “research methods” are presented and their respective strengths and weaknesses are discussed. The author refers to them as methods and subprocesses at various points in the chapter. This could be a bit confusing since one may use one or more of these methods in combination to select the best vendor. An extensive request for proposal (RFP), based on a successful technology acquisition, is presented in this chapter.
  4. Evaluation step—aimed at vendor evaluation. To aid vendor evaluation the scenario-planning tool (commonly discussed in the literature on strategic planning) is introduced. The discussion of this tool and its use is rather weak and seems to be an afterthought.
  5. Negotiation step
  6. Implementation step—addresses deployment of the acquired technology.
  7. Operations step—addresses management of fixes and enhancement of the acquired technology. It also addresses project closure. These days when the “cradle to grave” approach to product design is stressed, it is interesting that the author has omitted from this step any discussion of retiring/discontinuing the acquired technology.


In summary, this is a good “how to” book for a new technology acquisition project manager. It is well organized and easy to read. Although the author’s motivation and case studies have their roots in the software arena, one could apply his methods and manage technology acquisitions unrelated to software.

Jayesh G. Dalal (jdalal@worldnet.att.net) is the immediate past chair of the ASQ Software Division, an ASQ Fellow, and a national Baldrige Award examiner. He has more than 30 years of experience as an internal consultant and trainer in the manufacturing, software, and service industries. He has an independent practice offering management systems and process effectiveness enhancement services to businesses.


SOFTWARE METRICS

Software Assessments, Benchmarks and Best Practices

Capers Jones. 2000. Boston: Addison-Wesley Longman, Inc. 659 pages.
ISBN 0-201-48542-7

(CSQE Body of Knowledge areas: Software Metrics, Measurement, and Analytical Methods, Software Engineering Processes)

Reviewed by Pieter Botman


A former colleague used to commence software quality activities on every project by setting expectations early: “In God we trust—all others, bring data.” Most software engineering organizations now do record and analyze data relating to their own work products, projects, and processes. At a higher level, however, effective software engineering management and quality management require judgments based on industrywide measurements and benchmark data. Everyone has heard the roar of experts claiming to offer advice on best practices and methodologies, but where are the truly useful supporting data?

Capers Jones brings a wealth of industry data and analysis to bear, helping senior software people tackle larger-scale process issues and complex projects in their organizations. This book is not intended for neophyte programmers or those concerned only with simple software projects. It will be of most use to senior managers and process improvement experts. However, all software engineers and software quality personnel with a solid understanding of process issues throughout the life cycle will benefit from considering the best practices and various benchmark summaries.

Jones has published many landmark books dealing with software productivity, quality, and the many practices/factors affecting them. Readers familiar with Jones’ work will already know of his research and consulting activities, and his intended audience in presenting summaries of this nature. For those unfamiliar with Jones’ work, however, an introduction might be in order.

Jones founded Software Productivity Research (SPR), which has conducted software assessments worldwide for medium- and large-scale software organizations. (SPR has subsequently been acquired by Artemis International Corporation.) Through this work, the company amassed a large database of software engineering measurements, likely one of the largest such databases in the world (9155 projects, almost 600 sites as of 1999). SPR conducted these assessments in a confidential and detailed manner, attempting to characterize the overall software productivity of a given organization, and then measuring the many practices and factors affecting that productivity. In the book, Jones is open about the commercial nature of SPR and the assessment business, and does include a sample SPR survey in the appendix, but takes care not to actively sell SPR data or services.

As important as the data collection might be, it is the structuring and analysis of the data that provides the most value for the reader. Knee-deep in data, one can easily imagine a researcher getting bogged down on lower-level issues and statistics. But Jones keeps his eyes on the ball. His primary focus is always on the measurable results (software productivity, software quality) of interest to software-producing organizations, and the factors and subfactors affecting them.

Jones begins with a worthwhile discussion of software assessments and benchmarks in general. He explains the nature of baseline studies (within a company) and benchmarks (across an industry group), and the need for appropriate and consistent assessments. He discusses the Software Engineering Institute’s Capability Maturity Model (SEI CMM) and SPR assessment methods, and provides an approximate mapping between the SPR assessment “scores” and SEI CMM maturity levels. He outlines technical issues in collecting consistent data, such as varying industrial practices in tracking costs, resources, and schedules.

Jones outlines a careful and systematic scheme for categorizing the data. He categorizes companies by industry (SIC) code and by geographic location. He classifies projects by “nature” (10 cases, ranging from “new development” to “modifications for Euro or Y2K”), by size (function point ranges), and by scope (10 cases). Data are taken primarily from assessments of U.S.-based companies or sites. Overall, Jones discriminates using 36 factors in six major categories:

  1. Software class
  2. Project specific factors
  3. Technology factors
  4. Sociological factors
  5. Ergonomic factors
  6. International factors

But he chooses to aggregate results, and group best practices, by software class, an attribute assigned from one of the following cases:

  • End-user applications—developed privately for personal use
  • Information systems—developed on-site for corporate use
  • Outsourced or contract projects—developed under contract
  • Commercial software—developed to be marketed to external customers
  • Systems software—developed to control physical devices
  • Military software—developed to military standards

Jones presents some benchmark information concerning overall U.S. industry averages, such as the ranges for defects per function point (from roughly nine to approximately two) and defect removal efficiency (from under 60 percent to more than 95 percent). Though Jones has published this type of data before, it is useful within the context of this book and bears republishing, since the data are constantly changing.
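As an aside not drawn from the book itself: defect removal efficiency is conventionally computed as the share of total defects (found before and after release) that were removed before release. The sketch below restates that definition in symbols, using purely hypothetical defect counts for illustration.

```latex
% Conventional definition of defect removal efficiency (DRE); the defect
% counts below are hypothetical and used only for illustration.
%   DRE = (defects removed before release) / (total defects, pre- and post-release)
% Example: 950 defects removed during development and test,
%          50 more reported by users after release.
\[
  \mathrm{DRE} = \frac{950}{950 + 50} = 0.95 = 95\%
\]
```

On that reading, the range Jones reports means that lagging organizations release roughly four of every ten defects they inject, while the leaders release fewer than one in twenty.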

Jones presents many ranked lists of factors and best practices associated with outstanding or leading organizations, but he carefully qualifies these lists by reminding the reader that the rankings represent “relative positive or negative impacts” of each factor upon productivity or quality. These factors are difficult to isolate from one another, and in any given project, the positive impact of a given factor may be altered by the project context. Jones provides further support for his analysis of best practices and factors by showing the correlation between an organization’s SPR “score” and its adoption of the best practices.

The lists are fascinating, and best practices are described for each software class. It is worth noting the shift in emphasis among the most important best practices when moving from the MIS software class to the commercial class. Jones points out that the commercial class is the one with the sharpest distinction between success (30 percent) and failure (70 percent). In this class, customer satisfaction and development speed (productivity) are critical. Yet, while commercial software organizations are aware of the best practices in quality control (say, design inspections and code inspections), as a group they are “troubled by marginal quality levels” and have “lagged in quality measures and metrics.” Microsoft is briefly discussed as an example of an organization in this class. E-commerce and Internet software and applications are not yet represented in this work.

This book is not simple-minded. There are many potential practices in all life-cycle phases, and many factors to consider. Some readers might hope for a simplistic comparison of “packaged methodologies” (RUP, XP, and so on), perhaps to help reaffirm their own biases and investments. But Jones does not fall into that trap and presents a fine-grained analysis based on many practices and factors. This work does present the reader with a large set of facts, but it accompanies them with a sound, structured analysis. This is an important tool—one that allows the thinking software manager to compare organizations according to certain factors, and evaluate the adoption of best practices as appropriate for his or her organization.

Pieter Botman (p.botman@ieee.org) is a professional engineer registered in the Province of British Columbia. With more than 20 years of software engineering experience, he is currently an independent consultant, assisting companies in the areas of software process assessment/improvement, project management, quality management, and product management.



SOFTWARE CONFIGURATION MANAGEMENT

A Guide to Software Configuration Management

Alexis Leon. 2000. Boston: Artech House. 382 pages. ISBN 1-58053-072-9

(CSQE Body of Knowledge area: Software Configuration Management)

Reviewed by Linda Westfall

This book is a fair introductory text to software configuration management (SCM) that is in desperate need of a good editor. I say that because I became very tired of reading the same book over and over again. There is a saying I learned in speech class: “Tell them what you are going to tell them, tell them and then tell them what you told them.” This book takes this saying literally. The first 90 pages are an introduction to configuration management. The next 90 pages repeat major topics (sometimes it seemed like entire paragraphs) from this material with only a little added information. Even within chapters and sections an idea, concept, or exact phrase is stated multiple times.

Putting the repetitiveness aside, however, the chapter on software configuration management tools is excellent. It includes:

  • A discussion of the benefits and advantages of having automated SCM tools
  • A list of how SCM tools support major SCM functions (for example, version management, change management, problem tracking, promotion management, system building, status accounting)
  • An overview of SCM tool selection and a discussion on selection criteria with examples and additional references
  • A discussion of issues related to SCM tool implementation

If one is in the market for SCM tools, this chapter, coupled with the information in Appendix A, might make this book an excellent investment. Appendix A includes a list of SCM tools currently available on the market by type and a list of major SCM vendors with a description of their tools and contact information.

The appendices may be the most valuable part of the book. Appendix B includes information on various SCM standards. Appendix C includes an extensive and annotated list of Internet sites for additional information on SCM. Appendix D is an SCM bibliography. There is also a glossary of SCM terms and acronyms.

Besides people who are considering the purchase of SCM tools, this book might provide a good, fundamental introduction to SCM for project managers, software developers and testers, or anyone trying to understand the basics of this topic and its language. It is written in an easy-to-read and jargon-free style that makes these concepts understandable. However, I would not recommend this book to anyone who is charged with the SCM role. It does not include the depth or detail needed for the implementation of the concepts discussed. For example, there are only six brief pages defining configuration audits and reviews at a very high level with no “how to” information at all.

Linda Westfall (WESTFALL@idt.net), currently chair of the ASQ Software Division, has 20 years of experience in software engineering, quality, and metrics. Prior to starting her own business, The Westfall Tea
