Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach
Jan Bosch. 2000. New York: ACM Press. 400 pages. (CSQE Body of Knowledge areas: Software Quality Management, Software Processes, and Software Project Management)
Reviewed by Milt Boyd
The central portion of the house is Greek Revival, two story, while the one-story wings to the north and south are Palladian. Twenty-five words or less, and those in the know can make a pretty good sketch! Lots of information about components and their relationships is conveyed through the use of standard architectural labels.
More and more, software professionals recognize the values of well-defined architectures.
These values increase for larger, more complex products, and for families of products. The body of knowledge for the CSQE recognizes this and addresses it in sections 2 (Quality), 3 (Processes), and 4 (Project Management). There is a need for solid information on this topic, and this book meets that need.
The book is unusual in its focus on the quality requirements. It distinguishes them from the functional requirements of products. One goal is to evaluate the potential of the designed architecture to reach the required levels for its quality requirements. Examples of development quality requirements are configurability, flexibility, maintainability, and demonstrability. Examples of operational quality requirements are performance, reliability, safety, and security. These are system properties; they get assessed for each software architecture.
Bosch accepts and uses Bass's definition of software architecture as the structure or structures of the system, comprising software components, the externally visible properties of those components, and the relationships among them. The most common architecture styles are pipes and filters, layers, blackboard, object-oriented, and implicit invocation.
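The pipes-and-filters style named above can be pictured in a few lines of code. This is a minimal sketch for illustration only (the filters and names are invented, not taken from Bosch's examples): each filter transforms a stream of items, and the pipeline composes filters in order.

```python
# Minimal pipes-and-filters sketch. Each filter is a generator function
# that consumes a stream of lines and yields transformed lines; the
# pipeline chains filters so data flows through them in sequence.
# All names here are invented for illustration.

def strip_blanks(lines):
    # Filter: drop lines that are empty or whitespace-only.
    for line in lines:
        if line.strip():
            yield line

def uppercase(lines):
    # Filter: convert every line to uppercase.
    for line in lines:
        yield line.upper()

def pipeline(source, *filters):
    # Connect the filters end to end, like pipes, and drain the result.
    stream = iter(source)
    for f in filters:
        stream = f(stream)
    return list(stream)

result = pipeline(["alpha", "", "beta"], strip_blanks, uppercase)
# result == ["ALPHA", "BETA"]
```

The appeal of the style, as with Unix pipelines, is that each filter can be developed, tested, and replaced independently of the others.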
Bosch takes several real-world products and illustrates the imposition of different styles upon them. He also considers the impact of having one style imposed on a high level, and another style imposed on a lower level.
This book has 15 chapters. The first chapter presents an overview of the topics addressed. In it, Bosch identifies the important issue that there is a considerable difference between the academic perception and the industrial practice of software architecture. The concepts and techniques in the book are based on the perspective of industry, not the academy.
Bosch provides maps to the content, with several reading guides. In particular, he provides paths for those interested in just the basics, only in theory, only in design, or mostly in use. These seem to work well.
There are three major phases of architecture design, according to Bosch. These are: 1) functionality-based design; 2) assessment; and 3) transformation. Bosch insists that architecture design be based solely on functional requirements and then assessed against the quality requirements. He recognizes that others may advocate that architecture design be based on all requirements. He presents persuasive evidence, however, for his preference.
Bosch acknowledges the creative aspects of architecture design but provides a framework, with a detailed discussion of each of its elements. In particular, he clarifies the distinction, and reduces the confusion, between two ideas: hierarchical architecture designs with architecture components on one hand, and detailed software designs with software components on the other.
In the second part of the book, Bosch presents some success stories of companies that used software architecture in their product lines. These show three types of use of software architecture. They correspond to three dimensions along which software product lines can be decomposed. This discussion is fairly schematic and seems more abstract than the usual concrete discussion elsewhere.
Bosch also addresses what can be meant by the term component in the definition of architecture. There is a good degree of flexibility in the application of the principles that are advocated.
There is a short discussion of validation and testing in the chapter on family-based system development. It recognizes the difficulties involved in testing members of a product line. This is typical of the book, which presents both positive and negative aspects of the topic, in contrast to other books' enthusiastic promotion of some favored idea.
Throughout the second part are scattered discussions of the financial, organizational, cultural, and managerial changes that must accompany the technological changes for successful use of software architectures in product lines. The last chapter is the culmination of these discussions. If this book is revised, I hope the last chapter is expanded to gather the comments now presented piecemeal throughout the book. I hope also for more discussion of unsuccessful attempts, to extend our knowledge of pitfalls.
The adoption of software architectures is an investment, that is, the expenditure of current resources with an expectation of future benefits. Bosch presents fairly both the expenditures and the benefits.
For the software professional concerned about the theory or application of design of architecture, this book provides valuable information.
Milt Boyd is a member of ASQ's Software, Reliability, and Quality Management Divisions. He is certified as an ASQ Quality Manager and is certified by IRCA as a lead auditor of quality management systems. He is currently employed as a system engineer by Avidyne Co. of Massachusetts.
EXTREME PROGRAMMING EXPLAINED - EMBRACE CHANGE
Kent Beck. 1999. Reading, Mass.: Addison-Wesley. 224 pages. (CSQE Body of Knowledge: Software Processes)
Reviewed by Scott Duncan
Perhaps the best way to begin a review of this book is to use its own words. Kent Beck says, "Extreme programming (XP) is a lightweight methodology for small-to-medium size teams developing software in the face of vague or rapidly changing requirements." The term "extreme" comes from taking commonsense principles and practices to extreme levels. Readers who work in this type of environment and are looking for ideas on how to deal with the situation should get a copy of this book. There has been some active exchange on newsgroups across the Internet (notably comp.software-eng), but it has not done justice to the book.
Given its small-team approach, XP focuses on the construction aspect of software development, that is, what programmers can and should do. It is, however, not a how-to book in the pure sense of the word; instead, it provides basic principles.
Beck makes the point that XP "is a discipline of software development...because there are certain things you have to do to be doing XP." Thus, XP is not some random hacking together of code without regard for design, architecture, or quality. Indeed, constant feedback on the quality of the code is a major part of the XP philosophy, which is why it emphasizes developing test cases before coding and using daily integration testing to monitor the quality and integrity of the system. XP provides some of the most constant visibility into the development effort of any approach imaginable since, through pair programming, there is always someone watching what another person is doing and providing immediate feedback on what they see. Though I do not recall the term being mentioned, XP seems to me to be the height of "egoless programming," since there is constant code review occurring (where code includes the test cases).
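The test-before-code practice described above can be shown in miniature. The example below is hypothetical (the function price_with_tax is invented for illustration, not taken from Beck's book): the tests are written first, pinning down the required behavior, and only then is the implementation written, just enough to make them pass.

```python
# XP's test-first practice in miniature. The tests come first and
# specify the required behavior of a (hypothetical) price_with_tax
# function before any implementation exists.

def test_adds_tax():
    assert price_with_tax(100.0, 0.07) == 107.0

def test_zero_rate():
    assert price_with_tax(50.0, 0.0) == 50.0

# Only now is the implementation written: the simplest code that
# satisfies the tests above.
def price_with_tax(price, rate):
    return round(price * (1 + rate), 2)

# Run the tests immediately, as XP's constant-feedback loop demands.
test_adds_tax()
test_zero_rate()
```

In a real XP setting these tests would run on every integration, so a change that breaks the specified behavior is caught the same day it is made.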
It is perhaps telling, though, that the first chapter of the book is titled "Risk: The Basic Problem." Beck indicates in the preface that the book is divided into three parts, and identifying the problem extreme programming is trying to solve is the first. XP is based, therefore, on addressing software development risk, including schedule slips, system quality, misunderstood or changing requirements, feature creep, even staff turnover. Beck states that there are four variables in estimating a project: 1) cost; 2) time; 3) quality; and 4) scope. Management and customers believe they can pick the value of all four, but under XP, they can only pick the values of any three, leaving the fourth to the development team. (Karl Wiegers, in his book Creating a Software Engineering Culture, adds staff as a fifth variable, but takes the same position that one cannot expect to control all the variables since they are dependent on one another.) Beck says, however, that scope (what the customer wants, which changes) is the most important variable to be aware of. This leads to XP's incremental implementation of functionality (the most important first) in case something has to be dropped or altered later.
But perhaps most important, and controversial, of all is Beck's statement that the traditional assumption about the exponentially increasing cost of change is no longer accurate. Indeed, Beck states that, in the world where XP is applied, it is possible to experience a curve that is really quite the opposite. That the cost of change does not have to rise dramatically over time is the technical premise of XP, resulting in making big decisions as late in the process as possible, to defer their cost and to have the greatest possible chance that they would be right. This would lead one to implement only what one had to, and to introduce elements to the design only as they simplified existing code or made writing the next bit of code simpler.
I do not think the example in the book, however, is about a situation where the traditional curve has validity. The book addresses a contained change to an existing system, while the curve has mostly been used (as Beck admits) to describe the cost of fixing defects late vs. early in a development effort. While it seems true that if the cost of change rose slowly over time, you would act completely differently from how you do under the assumption that costs rise exponentially, the example does not seem to support the view. Beck argues that a simple design (based in object-oriented technology), automated tests that successfully execute without error, and lots of practice in modifying the design all inspire confidence when changes are made and combine to produce a flattened, not exponential, curve. This may be a case, then, of having to try this out to see what data one's experiences produce.
Beck finishes the opening third of the book discussing the four values of XP (communication, simplicity, feedback, and courage), how they feed into XP's basic principles (rapid feedback, assume simplicity, incremental change, embracing change, and quality work), and how that leads to XP's fundamental focus on the activities of coding, testing, listening, and designing. The three chapters that cover this progression establish a style of software development behavior that, despite XP's lightweight, incremental approach, is quite humane and focused on long-term benefits, not short-term self-interest. Who can argue that poor communication, undue complexity, and lack of feedback (project invisibility) can be project killers? To this, Beck adds the value of courage to engage in more high-risk, high-reward experiments, pointing out that communication, simplicity, and feedback are what give people such courage.
The rest of the book is about implementing XP and describing the practices that XP offers as a solution to software development risk:
Planning
Small releases
Metaphor (shared stories)
Simple design
Testing
Refactoring (reengineering)
Pair programming
Collective ownership
Continuous integration
40-hour week
On-site customer
Coding standards
Did I agree with everything in the book? No, I did not think the examples fit the situations. I had no problem with the ideas or approach being taken even if all the justifications did not seem to quite work for me. People with less discipline and less professional integrity could certainly take these ideas and produce a poor system that nobody after them (and perhaps they themselves) could maintain. But some of Beck's concluding remarks about what he fears (for example, "doing work that does not matter," "doing work I am not proud of") and what he does not fear (for example, "relying on other people," "proceeding without knowing everything about the future") certainly give me confidence that I would know in what context XP can be expected to work. I do not believe I have ever worked anywhere that XP could not have been employed, in at least some instances, to produce a better, more satisfying result.
I recommend this book. Even if readers do not agree with much of it, it is a book worth reading.
Scott Duncan brings more than 27 years of experience in all facets of internal and external product software development with commercial and government organizations. For the last six years he has been an internal/external consultant helping software organizations achieve international standard registration and various national software quality capability assessment goals. He is the current Standards chair for ASQ's Software Division and is a member of the U.S. Technical Advisory Group for ISO/IEC JTC1/SC7 standards in software engineering. Duncan can be reached at firstname.lastname@example.org or email@example.com.
SOFTWARE PRODUCT LINE ENGINEERING: A FAMILY-BASED SOFTWARE DEVELOPMENT PROCESS
David M. Weiss and Chi Tau Robert Lai. 1999. Reading, Mass.: Addison-Wesley. 426 pages. (CSQE Body of Knowledge area: Software Processes)
Reviewed by Pieter Botman
Software Product Line Engineering: the name says it all, but not many software practitioners even realize what it entails, never mind how to implement it.
Readers in the engineering and quality fields might have heard of "design for manufacture," "product family," and "concurrent engineering." This book is meant to immerse the software practitioner in these concepts as applied to software development. It is aimed neither at programmers nor at neophyte software engineers struggling to understand and exert some control over various basic software life-cycle processes.
As Professor David Parnas points out in the foreword, thinking about a software product family, and about production processes suited to such a family, represents an evolutionary step in the development of a software engineer or a software engineering organization. Depending upon one's view of the evolution of the software engineering profession and the economics of software, one could make a case that this topic is important, since this is where software engineers will increasingly be needed.
The book requires of its readers a strong focus on software production. Readers with experience in generating complex software products, with insight into the many nitty-gritty aspects of software production, will gain the most from this work. This is because the authors commence immediately by discussing the extension of the software production process (and all other life-cycle processes) to accommodate a family of products. Imagine generating deliverable code for a product family member directly from the model of that family member, knowing that the behavior of the deliverable code corresponds to the behavior captured in that model! The list of potential payoffs for a well-engineered product family and appropriate processes is lengthy, including gains in flexibility, quality, turnaround time, and cost. The authors claim that their target for improvement in time interval and cost for software production of a family member is between 5:1 and 10:1.
The authors' goal is to provide readers with a systematic approach to analyzing potential families and to developing facilities and processes for generating family members. They do so in a series of successively more detailed and more formalized descriptions of FAST (family-oriented abstraction, specification, and translation) processes. They note that the book represents neither a survey of domain engineering development processes, nor a comparison of FAST with other software development processes. This is important for readers to note.
The authors introduce the FAST processes after reviewing the motivation for product families; the processes include product family work in analysis (predicting expected changes to a system), design (use of abstraction, formal modeling), implementation (reliable, reusable components), and production (concurrent engineering, family member generation). The authors run through a reasonably straightforward example of applying FAST processes to produce a software product family for a set of commands and reports.
Later in the book the authors present a more detailed example, a software product family to control floating weather stations (FWS). Included in this case study are detailed discussions of family commonality analysis, economic analysis, and the development of a case-specific language (dubbed the FWS Language, or FLANG), which is used to model family members (and later to drive family production). The authors carry the example through all the FAST processes, to module design and coding, and the resulting software is included on the CD-ROM that accompanies the book. Detailed artifacts from the FWS family are included in several appendices, including the detailed commonality analysis, a module guide describing the design rationale, and the detailed Perl scripts for family member generation.
The authors do discuss the case where existing software is considered as the basis for a product family, and the principles of domain engineering that must be applied. They take care to map the decisions made regarding commonality of features, at the outset, to various artifacts and steps further on in the life cycle. And importantly, they note that in practice the establishment of such a product family-oriented methodology may require special emphasis or investment in certain parts of the life cycle. They admit that it may not be feasible to implement all aspects of the methodology to the same extent. Having a framework such as FAST allows software engineering organizations to at least contemplate their existing and ideal life-cycle processes with respect to their product families and their organization, before determining where to apply their investment.
The authors introduce a formal, general method for describing and documenting process models, called PASTA (process and artifact state transition abstraction). This formalism is centered on subprocesses (each with its appropriate state, roles, and artifacts) linked together to represent a given overall development process. PASTA is generic, it is thorough (complete with guidelines for forms, state definitions, role definitions, and so on), and it is structured for automation (meaning for support by automated tools).
Finally, the authors employ the PASTA formalism to rigorously describe the FAST process model. This complete FAST PASTA model is included in the book, and on the accompanying CD-ROM along with a browser.
Readers can gain insight from any of the five sections of the book: the introductory concepts of product line (domain) engineering, the basic descriptions and principles of the FAST processes, the case study of FAST processes applied to an FWS family, the PASTA formalism for documenting process models, or the detailed FAST PASTA definition.
All five sections of this book are well thought out, and where appropriate, specific and rigorous. In the first two sections the authors define terms and relate (via references) their concepts to fundamental software engineering concepts expressed in the literature. The detailed FAST PASTA model might not be literally adopted by large software engineering organizations, particularly those with large existing software libraries facing (after the fact) issues of software commonality, refactoring, and evolutionary restructuring. But the FAST processes and the product family engineering principles described are worthwhile as major points of focus for a software engineer faced with evolving products and evolving processes.
Pieter Botman is a professional engineer (software) registered in the Province of British Columbia. With more than 20 years of software engineering experience, he is currently an independent consultant, assisting companies in the areas of software process assessment/improvement, project management, quality management, and product management. He can be reached at firstname.lastname@example.org.
REENGINEERING SOFTWARE: HOW TO REUSE PROGRAMMING TO BUILD NEW, STATE-OF-THE-ART SOFTWARE
Roy Rada. 1999. New York: AMACOM. 260 pages. (CSQE Body of Knowledge area: Software Process)
Reviewed by David Walker
This book starts with a simple explanation of the software engineering environment and quickly progresses into the complex details of software reuse. The term "reuse" is fully defined. Someone not familiar with this domain could read this book and learn what this buzzword means.
The author uses section 1 to orient readers to the software life cycle and management concepts. Current methodologies, practices, and standards are discussed in appropriate detail.
Section 2 explores the components and mechanics of the reuse concept and thoroughly covers industry standards work.
Section 3 covers the fundamentals of organizing and retrieving from the reuse library. Detailed information is given concerning extracting and using components from the reuse library. The author does not portray extraction as a perfect science; instead, its imperfections are exposed.
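One common way to organize a reuse library is a keyword-indexed catalog: components are filed under descriptive keywords, and retrieval ranks candidates by how many query keywords they match. The sketch below illustrates that general idea only; it is not the specific scheme Rada describes, and all the names in it are invented.

```python
# Minimal sketch of a keyword-indexed reuse library (illustrative only).
# Components are catalogued with keywords; retrieval ranks candidates
# by the number of query keywords they match.

from collections import defaultdict

class ReuseLibrary:
    def __init__(self):
        self.index = defaultdict(set)   # keyword -> set of component names
        self.components = {}            # component name -> description

    def add(self, name, description, keywords):
        # File the component under each of its descriptive keywords.
        self.components[name] = description
        for kw in keywords:
            self.index[kw.lower()].add(name)

    def retrieve(self, *query):
        # Score each catalogued component by matching query keywords,
        # then return names ordered from best match to worst.
        scores = defaultdict(int)
        for kw in query:
            for name in self.index.get(kw.lower(), ()):
                scores[name] += 1
        return sorted(scores, key=lambda name: -scores[name])

lib = ReuseLibrary()
lib.add("csv_reader", "parses CSV files", ["parse", "csv", "file"])
lib.add("xml_reader", "parses XML files", ["parse", "xml", "file"])
matches = lib.retrieve("parse", "csv")
# matches[0] == "csv_reader"  (it matches both query keywords)
```

The imperfection the author acknowledges shows up even here: a query returns candidates, not guarantees, and someone must still judge whether a retrieved component actually fits before reusing it.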
Section 4 of this book lists and evaluates tools used for software reuse, delivers case studies in industry, and discusses the reuse of computer-based learning material.
Companies involved in software development that define quality in terms of time to market, freedom from defects, and cost-effectiveness will want to take a serious look at this book and the reuse concepts it portrays. Software reuse requires more effort and planning up front but pays big dividends in quality. Proven software modules can be quickly and safely reused to speed development. The quality assurance effort is greatly reduced as well, since effort is focused on interfaces instead of code segments.
The book does not provide specific information or examples of program code showing how a software engineer implements reuse, but the general guidelines given should be enough for an experienced developer to figure it out. Again, a person not familiar with software engineering could read this book and understand software reuse.
David Walker has a master's degree in computer science from Northwestern University and is an ASQ Certified Software Quality Engineer with 16 years of software engineering experience in the communications and health care industry sectors. He is currently a consultant with Trilogy Consulting Corporation. He can be reached by e-mail at email@example.com.
CREATING HIGH PERFORMANCE SOFTWARE DEVELOPMENT TEAMS
Frank P. Ginac. 2000. Upper Saddle River, N. J.: Prentice Hall PTR. 123 pages. (CSQE Body of Knowledge area: Software Project Management)
Reviewed by John W. Horch
In spite of the title, Frank Ginac has written a book more on managing software development teams than creating them. He tries to show how to understand the project, assemble (he says "hire") the proper project members, and mold them into a passionate, high-performance team. He uses more space, however, discussing requirements, the importance of planning, organizational structures, software life-cycle models, and various other topics than he does on team creation.
There is little material of a new or eye-opening nature, but the author assembles what experienced managers already know into a new format. The format may help new managers learn more easily. Even allowing for some instances of questionable grammar and punctuation, some debatable statements about the content of a requirements specification, and a minuscule index, the text is clear and easy to read. Not everything the author says is absolutely correct, but the errors are, by and large, inconsequential. Unfortunately, so is the book.
If I had had this text 40 years ago when I assumed the reins of a new software development group, would it have helped me? To be sure, I would have had a better grasp of the importance of firm requirements. I might also have paid closer attention to detailed planning. Beyond that, I think there would have been more wrongly formed expectations than benefits.
The author uses a number of pages addressing the defining of personnel requirements and hiring group members to fit that profile. In 40 years, I have never had the luxury of doing that. The best I could hope for was filling a gap or two that current, available company employees left vacant. More often, I inherited a group formed by my predecessor and had to live with it. The author overlooks this situation for the most part.
There is another observation: although most of the author's contribution is a reformatting and rewording of widely held conventional wisdom, surely not all of the ideas and information presented are his original thinking. Yet there are exactly two citations of source material in the entire text, both to his own previous book.
John Horch is corporate quality manager for the COLSA Corporation in Huntsville, Ala. His more than 40 years of experience include administration, education, and management, involving large-scale computer programming, customer interface, software control, verification and validation, and the management of corporate facilities.