Software Quality Professional Resource Reviews - December 2002 - ASQ

Contents

GENERAL KNOWLEDGE, CONDUCT, AND ETHICS

Building Operational Excellence: IT People and Process Best Practices
By Bruce Allen and Dale Kutnick

SOFTWARE QUALITY MANAGEMENT

Implementing the Capability Maturity Model
By James R. Persse

SOFTWARE ENGINEERING PROCESSES

Information Security Risk Analysis
By Thomas R. Peltier

A Practical Guide to Security Engineering and Information Assurance
By Debra S. Herrmann

Solid Software
By Shari Lawrence Pfleeger, Les Hatton, and Charles C. Howell

PROGRAM AND PROJECT MANAGEMENT

Software Engineering: Principles and Practice, 2nd Edition
By Hans Van Vliet

SOFTWARE METRICS, MEASUREMENT, AND ANALYTICAL METHODS

IT Measurement: Practical Advice from the Experts
By International Function Point Users Group

SOFTWARE VERIFICATION AND VALIDATION

Visual Basic for Testers
By Mary Romero Sweeney

Systematic Software Testing
By Rick Craig and Stefan P. Jaskiel

Quality Web Systems: Performance, Security, and Usability
By Elfriede Dustin, Jeff Rashka, and Douglas McDiarmid

From the Resource Reviews Editor:
In this issue there are 10 reviews covering six of the seven areas of the CSQE body of knowledge. There are four new reviewers for this issue. Welcome! Two of our reviewers have a special feature on the Extreme Programming series. You will find all the biographies for reviewers on the SQP Web site at /pub/sqp. If you know of anyone who likes to read books about software quality, please have him or her contact me about doing reviews.

If you have any comments about Resource Reviews, please send them to me. I’d like to know if the reviews are useful to you. You can contact me at Sue_carroll@bellsouth.net.

GENERAL KNOWLEDGE, CONDUCT, AND ETHICS

Building Operational Excellence: IT People and Process Best Practices

Bruce Allen and Dale Kutnick. 2001. Boston: Intel Press/Addison-Wesley (Pearson Education). 219 pages. ISBN 0-201-76737-6

(CSQE Body of Knowledge areas: General Knowledge; Program and Project Management)

Reviewed by Ajit Ghai
ghai@rogers.com

Building Operational Excellence: IT People and Process Best Practices, written by executives Bruce Allen and Dale Kutnick from the META Group, is based on the Group’s work and experience at various information technology (IT) organizations. In the authors’ own words, the book focuses on how an IT organization can assess its operational processes and bring them to the level of best practices or operational excellence.

The book is intended for IT managers seeking to improve the quality, timeliness, or economy of the operational services they provide. Since the book is about IT operations, subject areas such as application development and maintenance, infrastructure design, and project management are not addressed. Security is the subject of a companion book.

The authors prefer that companies establish their best practices through introspective research rather than a comparative review of other companies’ achievements. They claim that best practices applicable to one company may not be applicable to another, and competitors may hesitate to share their best practices with others in similar industries. As an alternative, the authors encourage readers to use this book as a tool to formulate their own best practices.

Best practices are achieved by cataloging one’s processes, comparing them to the best processes implemented by experts (as documented in the book), identifying the gaps, and then filling these. Like processes are grouped into common sets, and these sets are referred to as “centers of excellence” or COEs. Best practices require analysis, thoughtful implementation, establishment of metrics, and a commitment to use the metrics as input to a continuous improvement cycle. Readers are encouraged to think in terms of processes rather than tasks. The pursuit of best practices entails a move to process-based IT operations.

There is an explanation of how to identify tasks and processes. A list of 38 generic IT processes is provided. The processes identified are rated for maturity on a scale of 1 (immature) to 5 (mature). Readers may recognize the semblance here to the Software Engineering Institute’s Capability Maturity Model (CMM) for software development. A catalog of processes is created using the information presented in this chapter.

Gap analysis for processes is explained. This is done by comparing the processes readers have cataloged for their operation in the previous chapter with the process catalog listed in the latter part of the book. Emphasis is placed on identifying tasks missing from the process. After gaps have been identified for all the processes and redundant ones have been eliminated, they are categorized and prioritized. The next step before filling the gaps is identifying and developing gap-filling scenarios. Once these are identified, implementing the chosen scenarios fills gaps. The authors ask readers not to stop once the gaps have been filled; regular gap analysis is encouraged, as is the pursuit of best practices.
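The catalog-and-compare step described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not anything from the book; the process and task names are invented.

```python
# Gap analysis sketch: compare an organization's cataloged processes
# against a reference catalog, reporting missing processes and missing
# tasks within shared processes. All names below are illustrative.

# Reference catalog: process name -> set of expected tasks
reference = {
    "asset management": {"inventory", "tracking", "disposal"},
    "software distribution": {"packaging", "scheduling", "verification"},
    "workload monitoring": {"data collection", "thresholding", "reporting"},
}

# What the organization actually documented for itself
ours = {
    "asset management": {"inventory", "tracking"},
    "software distribution": {"packaging"},
}

def find_gaps(reference, ours):
    """Return (missing processes, per-process missing tasks)."""
    missing_processes = sorted(set(reference) - set(ours))
    task_gaps = {
        name: sorted(reference[name] - ours[name])
        for name in ours
        if name in reference and reference[name] - ours[name]
    }
    return missing_processes, task_gaps

missing, gaps = find_gaps(reference, ours)
print("Missing processes:", missing)
print("Task gaps:", gaps)
```

The output is the raw material for the book's next steps: categorizing and prioritizing the gaps, then choosing gap-filling scenarios.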

Five types of COEs are described. These are command center, service center, data/media center, asset management, and application management. A brief description of each is provided.

There is a chapter that is dedicated to metrics and metric gap analysis. An inventory of IT metrics currently in use is recommended, followed by a comparison to the list provided in the book. The authors point out that these metrics must be mapped to the business metrics employed by the larger organization. The use of surveys to measure quantitative metrics is encouraged. In addition to using metrics to measure and improve the IT organization, the use of metrics to assess the value of IT to the whole organization is encouraged. For example, one value metric listed is the IT budget as percentage of gross company revenue.
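The value metric cited above is simple arithmetic; a tiny sketch (with invented figures) makes the calculation concrete.

```python
# Example value metric from the chapter on metrics: IT budget as a
# percentage of gross company revenue. The dollar figures are invented.

def it_budget_ratio(it_budget, gross_revenue):
    """IT budget as a percentage of gross company revenue."""
    if gross_revenue <= 0:
        raise ValueError("gross revenue must be positive")
    return 100.0 * it_budget / gross_revenue

# e.g., a $4.2M IT budget against $120M gross revenue
ratio = it_budget_ratio(4_200_000, 120_000_000)
print(f"IT budget is {ratio:.1f}% of gross revenue")  # prints 3.5%
```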

In the section on productizing operations the authors urge IT departments to build a product model. IT organizations that are run as pure cost centers and recognized only for reducing costs will face an ongoing struggle to gain additional resources. Department heads are urged to understand their customers and to identify what customers want and what they are willing to pay for at appropriate service levels. Pricing models are discussed, and a charge-back methodology (rather than fixed allocations) is advocated. In addition, aggressive marketing of IT services to customer departments is recommended. A phased approach to business relationship management (BRM) is proposed, consisting of three phases: the first builds relationships, the second sets expectations and reports on service levels, and the final phase helps the business by having IT operations become part of organizational strategy and planning.

There are also lists or catalogs of processes and COEs. Each process is documented using eight sections or dimensions. These are tasks, skills, staffing, automation technology, best practices, metrics, process integration, and futures. In addition, each process is rated on a scale of 1 to 10 on its level of automation and its stability. Among the 38 processes documented are asset management, negotiation management, software distribution, quality assurance, and workload monitoring.

Eight COEs are documented. Each COE consists of like processes grouped together. The authors indicate that these COEs are “typically found at the most advanced IT organizations in large firms.” The centers listed are application, asset, command, customer advocacy, data and media center, engineering support, outsourcing, and security.

In summary, the book should be useful to managers of IT operations seeking to build a comprehensive set of processes and best practices. The examples, lists, and catalogs provided should also be helpful. This book will not automatically provide one’s organization with best practices, but it will certainly help readers get on the path to them. As the authors caution the reader in the preface, “Every step within this process requires you to spend considerable time identifying the form the goal should take so that best practices are implemented in a thoughtful manner, leading to faster response, lower costs, and improved quality of service.” (p. xii).

Ajit Ghai (ghai@rogers.com) has worked in the technology industry for more than 22 years. He is currently an independent consultant and principal of Azalea Systems Inc., an Ottawa technology and project management company. He is a project management professional (PMP) and an ASQ Certified Quality Manager.

Return to Top

SOFTWARE QUALITY MANAGEMENT

Implementing the Capability Maturity Model

James R. Persse. 2001. New York: John Wiley & Sons, Inc. 421 pages. ISBN 0-471-41834-X.

(CSQE Body of Knowledge areas: Software Quality Management; Methodologies)

Reviewed by Eva Freund
efreund@erols.com

“The CMM is a framework for managing software process improvement activities. It is a means to an end, not an end in itself.” This statement prepares readers for learning how to implement the Software Engineering Institute’s Capability Maturity Model (SEI-CMM). This book (based on SEI-CMM, version 1.1) provides a plethora of information with which a dedicated organization can create a framework that works for it, leaving the ranks of the many organizations at level 1 to join the few at level 2 or level 3.

Part One contains an overview of the CMM, including the key process areas (KPAs) of levels 1-5 and how each is built on common elements. Part Two contains detailed recommendations for building a level 2 compliant organization. Part Three contains clear recommendations for migrating a level 2 program into a level 3 compliant process improvement program. Part Four turns to the best ways to implement the CMM and how to ensure a beneficial outcome by focusing on 11 key success factors that influence implementation readiness. Understanding the CMM process is covered in Chapter 15, and Chapter 16 provides an understanding of the assessment process. Appendices contain a sample level 2 preassessment questionnaire and samples of level 2 policies.

Each part of the book addresses the structure, processes, training, and policies to be considered when implementing the CMM. Each part contains examples of:

  • Practical CMM-compliant process outlines
  • Plan templates for each of the level 2 and 3 KPAs
  • Breakdowns of the resources, findings, and tools helpful to each KPA
  • Outlines of the policies recommended by the CMM
  • Lists of sample artifacts
  • A summary table at the end of each KPA chapter

Having recently completed a year providing support to a government agency IT effort to move from quiet chaos to being a level 2 organization, I was especially interested in reviewing this book. My interest almost waned as I scanned a few chapters because it seemed that I was looking at the SEI-CMM itself. When I stopped scanning and started reading, I realized that the author had placed his informational nuggets before and after the SEI-CMM-specific material.

These nuggets remind readers what the CMM is and what it is not. They remind readers that level 2 is a time for learning and what that really means for the project and the organization. Throughout the level 2 section the author reminds readers that the CMM is a process and not an end, that there is no single correct process, and that the only requirement is that the processes (for each project) are documented, followed, and then measured. The project processes are modified based on the learning that occurs.

Then at level 3 the organization builds on the project-focused learnings of level 2 to create an organization-focused structure. At level 3 the KPAs become more sequential than they were at level 2. The first step at level 3 is to institutionalize the software development process and management activities and make them available to the entire organization.

The introduction states, “The result (of this book) is a basic, hands-on approach to setting the model into place, be yours a large, medium, or small IT operation.” I cannot say it any better than that.

Eva Freund (efreund@erols.com) is an independent verification & validation (IV&V) consultant with 20 years of experience in software testing, standards, and project management. She offers IV&V and software process improvement services to private and public-sector organizations. She is an ASQ Certified Software Quality Engineer and a Certified Software Development Professional from the IEEE-Computer Society.

Return to Top

SOFTWARE ENGINEERING PROCESSES

Information Security Risk Analysis

Thomas R. Peltier. 2001. Boca Raton, Fla.: Auerbach Publications. 281 pages. ISBN 0849308801

(CSQE Body of Knowledge areas: Software Engineering Processes)

Reviewed by David Walker
david.w.walker@pharmacia.com

I am responsible for business continuity planning and testing within my organization, so the title of this book intrigued me. Readers who look past the grammatical errors will find impressive content, beginning with an excellent introduction to business risk and finishing with valuable resources for executing the risk analysis process.

The first part of the book explains the risk analysis process followed by a case study. Appendices A through D contain valuable forms and samples. Appendix E classifies and defines potential threats, and Appendix F is a short collection of articles from other risk analysis industry leaders.

The book introduces “cost-effective” qualitative risk analysis techniques that can be used to identify accidental and malicious threats to computer systems. The author explains the facilitated risk analysis process (FRAP). A case study of a truck rental company illustrates application of this process.

Throughout the seven chapters and appendices there are many tables and charts that are not available in an electronic format. To use them, one would have to re-create them manually.

This book should not be overlooked by anyone who is responsible for security, business continuity, or disaster recovery planning.

David Walker (david.w.walker@pharmacia.com) has a master’s degree in computer science from Northwestern University and is an ASQ Certified Software Quality Engineer with 18 years of software engineering experience in the communications and health care industry sectors. He is currently a senior information scientist at Pharmacia Corporation.

Return to Top

A Practical Guide to Security Engineering and Information Assurance

Debra S. Herrmann. 2002. Boca Raton, Fla.: Auerbach Publications. 393 pages. ISBN 0-8493-1163-2.

(CSQE Body of Knowledge areas: Software Engineering Processes; Program and Project Management)

Reviewed by Carolyn Kennard
carolyn.kennard@lexisnexis.com

This book is a good place to begin or continue the process of learning about information security and information assurance (information security/IA). The book is divided into eight chapters. Each chapter contains exhibits and examples designed to assist readers’ understanding of the topics presented.

The author provides a concise, well-rounded discussion of information security/IA based on a broad definition of the term “information assurance” and focuses on automated systems. She specifically discusses commercial business and government/military systems that must work together to make up the complicated networks that control functions vital to the continued operation of telecommunications, banking, power, water, air, transportation, and emergency systems.

To begin, the author shares with readers the purpose of the book, which is to be a comprehensive guide to information security/IA. The text contains an in-depth discussion of intentional and accidental actions that may threaten information security/IA and includes models to establish guidelines for protecting systems against the accidental or malicious intentional actions or lack of action that would impact information security/IA.

Throughout the book the author stresses the importance of information security/IA plans in maintaining information integrity, and that plans should be implemented early, examined often, and reviewed and updated regularly.

The author discusses several historical approaches to information security/IA by exploring some early models used to provide for the physical safety of hardware, software, and data of individual systems and progresses through the encryption and authentication of security models used to meet the needs of modern automated systems.

One of the models provided gives readers a high-level view of the major characteristics encompassed in an effective information security/IA plan. A sample illustrates the process used to establish a plan, defines boundaries for particular security issues, clearly states the security issues addressed in the plan, lists security areas that will not be covered, and provides details on the dependencies and assumptions on which the plan is based.

Vulnerabilities and threats to information security/IA in automated systems are explained by the author’s use of hypothetical scenarios that take readers through a variety of possible transaction paths that could lead to system failure. She offers an analysis plan to help prevent systemwide information disaster as well as strategies, contingency plans, and disaster recovery plans that may be implemented if or when failure occurs.

After the information security/IA plan is drafted, readers may use the checklists and test scenarios found in chapter 7 to verify the effectiveness of the plan. Also included is a discussion of policies and procedures used to reduce risk and to inform planning team members of the remaining elements of risk, as well as methods and techniques to monitor continued risk exposure.

Finally, the author informs readers of ways to analyze the cause, extent, and consequences of information security/IA failure. She includes a discussion on the importance of keeping detailed records of incidents and informs readers that the data generated from incident documentation are used to add further protection to the information security/IA plan and to provide the incident/accident reports required by regulatory boards in the event of system failure or breach.

This book would be well used just as the title states, as a guide to creating and maintaining information security/IA plans. It is a valuable resource for anyone in a position of responsibility for the safe and secure operation of automated systems. The text brings attention to areas where information security/IA has been weak in the past and describes ways to protect systems against these weaknesses. Readers will find step-by-step instructions to assist them in designing an information security/IA plan for implementation in their automated systems in all industries.

Carolyn Kennard (carolyn.kennard@lexisnexis.com) is a software test analyst with LexisNexis GEPD. She has been testing for LexisNexis for two and a half years.

Return to Top

Solid Software

Shari Lawrence Pfleeger, Les Hatton, and Charles C. Howell. 2002. Upper Saddle River, N.J.: Addison-Wesley. 307 pages. ISBN 0-13-091298-0.

(CSQE Body of Knowledge areas: Software Engineering Processes)

Reviewed by Milt Boyd
miltboyd@arczip.com

Solid Software is intended to be a practical guide to help decision makers evaluate and improve the quality of mission-critical software: a “relentlessly practical…guide to making intelligent, responsible trade-offs,” according to the authors.

The goal is for the reader “to understand how to make software less fragile. Solid, survivable...that is reliable and secure, performs predictably, and is easy to upgrade.”

This book draws upon dozens of real-world examples to illustrate how to make the most of hazard analysis, design analysis, and other techniques; how to choose (and use) the best tools; and how to predict software quality and assess finished systems.

Solid Software is organized into 11 chapters, with references at the end of each. They cover the usual topics, with such titles as: hazard analysis, testing, software design, prediction, peer reviews, static analysis, configuration management, and appropriate tools. The last chapter is “trust but verify,” a valuable discussion of learning from one’s mistakes through post-project reviews.

One of the most useful concepts was the discussion, tucked into the last chapter, of S-systems, P-systems, and E-systems, taken from Pfleeger’s earlier text Software Engineering: Theory and Practice. These differ in what is subject to change: just the real world, everything but the problem, or everything including the problem. It is essential to know which kind of system one is dealing with if one hopes to build solid software.

The book reminds me of some panel discussions: Nothing is wrong, but overall it lacks shape, focus, and depth. The problems start with the title. What is “solid” software? When should it be the goal of a development project? It is clearly a term of approval, but what does it mean? It seems that safety and reliability are important components but not the whole story. The idea shifts from chapter to chapter.

Some topics are treated narrowly. Chapter 4, “Testing,” says it does not describe methods to test requirements to check that they are fit for use in design, or to test a design to check that it is fit for implementation. Rather, “testing” is defined narrowly as finding faults in artifacts once code is written. Considering that many studies have shown that the “really big faults” are usually in requirements and design, this seems unfortunate. (In fact, the chapter does include some material on the value of testing requirements and design.)

Some topics are treated in a one-sided manner. For example, sidebar 4.7 asserts that Six Sigma does not apply to software, quoting an argument by Binder. No counterargument is presented (or referenced), even though some software engineers find Six Sigma, and its focus on monitoring defects per opportunity, to be quite useful. One might suspect there was another point of view (else why argue against it?), but readers cannot evaluate the pros and cons.

The references are more likely to be books and articles by noted individuals, rather than “official” publications. For example, chapter 3 “Hazard Analysis” has 19 references. Of these, one is a GAO report on the risk assessment practices of leading organizations (good real-world stuff), and another is MIL-STD-882D “Standard Practice: System Safety.” No mention is made of such organizations as the Federal Aviation Administration, Food and Drug Administration, NASA, Nuclear Regulatory Commission, or Ministry of Defence, which have very specific (and, I find, helpful) advice about analyzing hazards of systems involving software.

The index shows no recognition of the existence of international or industrial standards on safety and reliability of software, and almost no mention of industrial, academic, or governmental groups interested in safe and reliable software. The text mentions the Software Engineering Institute (SEI) and its Capability Maturity Model (CMM), but cautions that a defined process will do the same thing over and over, even if it is inappropriate. The reader will be hard-pressed to find the mention of the book Safer C, but it is in there.

This book provides useful information for most sections of the trial version 1.0 of the software engineering body of knowledge, and similarly for the CSQE body of knowledge.
Those who want well-written opinions and anecdotes about building generally better software will find good value in this book. But those who want detailed engineering guidance on how to achieve or improve availability, reliability, or safety should keep looking.

Milt Boyd (miltboyd@arczip.com) is a member of ASQ’s Software, Reliability, and Quality Management Divisions. He has been certified by IRCA as a lead auditor of quality management systems. He is currently project manager for software process improvement at Instrumentation Laboratory of Massachusetts.

Return to Top

PROGRAM AND PROJECT MANAGEMENT

Software Engineering: Principles and Practice, 2nd Edition

Hans Van Vliet. 2000. Chichester: John Wiley & Sons. 726 pages. ISBN 0-471-97508-7.

(CSQE Body of Knowledge areas: Software Engineering Processes; Software Verification and Validation; Software Metrics and Measurement; Program and Project Management)

Reviewed by Jayesh G. Dalal
jdalal@worldnet.att.net

I received Hans Van Vliet’s book for review soon after I had taken the CSQE examination. I had prepared for the exam using the 860-page Software Engineering: A Practitioner’s Approach by Roger Pressman and my first reaction was, “Do I need another 700-plus page book on software engineering?” I was expecting more or less the same content in a different package. Once I started reading the book, however, my mistake was quickly obvious. I found a significant amount of new material, and it is packaged differently.

The book is presented in a style that is typical for books on engineering disciplines. The concepts are introduced along with the underlying principles. Comparisons with construction projects are used and anecdotes are frequently provided to foster understanding and appreciation of the concepts. I found Van Vliet’s style appealing. I enjoyed reading the book; however, I would have preferred a larger font size. This book should be a valuable addition to one’s collection on software engineering or a good first book on the subject.

The book is written as a textbook for software engineering courses. Presentations on a topic include a balanced mix of historical background, state-of-the-art issues, and areas for further research. Each chapter begins with learning objectives and concludes with a summary, further reading (including annotated bibliography), and exercises sections. At a companion Web site, detailed answers to selected exercises and suggestions for student projects are provided.

The book I reviewed was reprinted with corrections in February 2001; however, some errors remain. For example, on p. 114, a reference is made to “footnote a) to Table 6.1.” There are no footnotes to Table 6.1.

The book is divided into three parts: Software Management, The Software Life Cycle, and Supporting Technology. The software management part includes chapters on configuration management, quality management, cost estimation, project management, and risk management. At the beginning of this part, people management and team organization are addressed and a brief discussion of the various software life cycles is presented. I found the discussions of configuration management and project management rather brief. Since the book was published in 2000, the discussions of the ISO 9000 standards and the Software Engineering Institute’s (SEI) Capability Maturity Model (CMM) are dated: the 1994 rather than the 2000 version of the ISO 9000 standards is described, and the CMMI model is not even mentioned.

The part on the software life cycle includes chapters on requirements engineering, software architecture, software design, software testing, and software maintenance. A chapter on object-oriented analysis and design is also included. Software verification and validation is addressed within each chapter in this part of the book. The chapter on software testing includes a discussion of the differences between the testing objectives for fault detection and for confidence building. It also includes an extensive discussion of test adequacy criteria. The inclusion of managerial and organizational issues in the chapter on software maintenance is very useful. The object-oriented methods are a relatively new development and extensive literature exists on the subject. Van Vliet provides a good overview of the object-oriented methods and identifies underlying issues in about 40 pages.

The last part of the book on supporting technology includes chapters on formal specification, user interface design, software reusability, software reliability, and software tools. Once again, each chapter contains a comprehensive overview for the relatively new topics of user interface design and software reusability. The chapter on reliability discusses both the concepts of fault-tolerance and mathematical models for reliability. Given the importance of reliable software from the user perspective, I found this chapter interesting but brief. The chapter on software tools provides a good understanding of different classes of tools and describes different types of workbenches and environments. Specific commercially available tools are not discussed.

In the appendix, the ISO 9001:1994, IEEE 730, IEEE 830, and IEEE 1012 standards are summarized. An extensive bibliography is also provided.

Dr. Jayesh G. Dalal (jdalal@worldnet.att.net) is an ASQ Fellow and CSQE, and past chair of the ASQ Software Division. He is an independent consultant and provides consulting and training services for designing and deploying effective management systems and processes. Dalal was an internal consultant and trainer for more than 30 years at large U. S. corporations with manufacturing, software, and/or service operations. He has served on the board of examiners for the Malcolm Baldrige National Quality Award and the New Jersey Quality Achievement Award.

Return to Top

SOFTWARE METRICS, MEASUREMENT, AND ANALYTICAL METHODS

IT Measurement: Practical Advice from the Experts

International Function Point Users Group. 2002. Boston: Pearson Education, Inc. 759 pages. ISBN 0-201-74158-X.

(CSQE Body of Knowledge areas: Software Metrics, Measurement, and Analytical Methods)

Reviewed by Carolyn Rodda Lincoln
lincoln_c@bls.gov

IT Measurement is a collection of articles edited by the metrics subcommittee of the International Function Point Users Group (IFPUG). IFPUG is the organization in the United States that provides the standard for counting function points, a measure of software size. Despite the IFPUG sponsorship, the book covers a wide range of IT measurement topics.

The book includes 43 articles by industry experts and practitioners. It is divided into 13 parts, such as Measurement Program Approaches, Using Metrics to Manage Projects, Problems with Measurement Programs and How to Avoid Them, and Using Software Metrics for Effective Estimating. Each part has an introduction that describes the articles that follow. The committee suggests that readers use the introductions to decide which articles to read, since not all will be applicable to one’s interests or level of understanding. A short author biography is provided at the end of each article in case the reader would like more information.
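As background for readers new to function points: an unadjusted function point count weights five component types by complexity. The sketch below uses the standard IFPUG low/average/high weights; the component counts are invented for illustration and are not taken from the book.

```python
# Back-of-the-envelope unadjusted function point count. The weight
# table reflects the standard IFPUG weights for the five component
# types; the application profile below is invented.

WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # external inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # external outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7, "high": 10},   # external interface files
}

def unadjusted_fp(counts):
    """counts: list of (component_type, complexity, how_many) tuples."""
    return sum(WEIGHTS[ctype][cplx] * n for ctype, cplx, n in counts)

# Invented profile of a small application
total = unadjusted_fp([
    ("EI", "average", 5),   # 5 average-complexity inputs
    ("EO", "low", 3),       # 3 low-complexity outputs
    ("ILF", "average", 2),  # 2 average-complexity internal files
])
print("Unadjusted function points:", total)  # prints 52
```

In full IFPUG counting, this unadjusted total is then scaled by a value adjustment factor; the details are what the IFPUG counting practices standardize.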

Although the book is not directly about software process improvement (SPI), measurement is a crucial part of any SPI effort. The book supports various SPI models such as the Software Engineering Institute’s Capability Maturity Model (CMM) and ISO standards. A good measurement program is essential to any type of quality effort and has been sadly neglected in information technology. Even for readers already interested enough in metrics to pick up the book, the one theme that runs through the articles bears repeating: there must be a good business reason to collect and disseminate metrics. After getting past the cultural barriers to measurement, an organization can find many helpful hints in this book to make its program successful.

IT Measurement is good if the reader is interested in a sampling of many aspects of IT measurement. Its strength is the variety of subjects and authors, ranging from “how to get started” articles to advanced topics such as usability measurement models. Its weakness is the lack of depth on any one topic. It should not be read from cover to cover, especially by a metrics beginner. Even for someone with a more advanced understanding of metrics, the book takes a smorgasbord approach: it will provide many ideas, which should then be explored elsewhere.

Carolyn Rodda Lincoln (lincoln_c@bls.gov) is an ASQ certified quality manager and member of the DC Software Process Improvement Network. She is currently employed as a process improvement specialist for Titan Systems Corporation and is implementing software process improvement at the Bureau of Labor Statistics in Washington, D.C. She holds bachelor’s and master’s degrees in math and was previously a programmer and project manager.


SOFTWARE VERIFICATION AND VALIDATION

Visual Basic for Testers

Mary Romero Sweeney. 2001. New York: APRESS. 538 pages. ISBN 1-893115-53-4.

(CSQE Body of Knowledge areas: Software Verification and Validation)

Reviewed by Michael Yudanin
yudanin@hotmail.com

Test automation is one of the hottest topics in software quality assurance today. The test automation body of knowledge that exists at this time can be classified into two groups. The first deals with test automation methodology and ranges from general discussions of the subject to step-by-step instructions for setting up the automation process. These texts are challenging and interesting to read, but they usually lack a strong connection to the technology. The second group of materials is developed with a specific tool in mind; these usually limit readers’ imagination to the fixed set of capabilities provided by the “guiding tool.”

Visual Basic for Testers is an exception, and for that and other reasons it is the best of the breed. Essentially, the book describes how to use Visual Basic for testing purposes. Because of the similarity between its title and the “YYY for Dummies” line of books, it is important to stress what the book is not: it is not a “simple version” of Visual Basic. The book describes some fairly advanced features of the language that are used by experienced programmers. Its main advantage is that it is written for software testers and geared toward their needs. The topics were selected to provide useful and powerful means of automating testing tasks. To mention only a few, these include:

  • Creating reliable, time-saving, and easy-to-use test utilities, for example, for logging and test timing
  • Testing the Windows registry
  • Using the Windows API to gather information about the application under test and the system, send instructions to objects, and inspect objects’ states
  • Testing databases: retrieving information from various databases using Visual Basic capabilities
  • Testing components—in my experience this level of testing is usually the missing link in a project, yet it can contribute a great deal to an application’s reliability and to the return on investment of the testing process. The author provides valuable tools for building test beds for components and accessing their properties.
  • Testing Web applications with the help of Visual Basic—the many tools available for Web testing often do not provide the flexibility that is sometimes desperately needed; Visual Basic can supply it.
The book is packed with examples, code samples, and exercises. The author shares her rich experience in testing in general and in test automation in particular, which is extremely valuable. Various “try this” sections keep readers from diving too deep into theory and ensure some hands-on experience. “Notes,” “tester’s checklists,” and “tester tips” maintain the tight connection with testing and provide useful advice.
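The book’s utilities are, of course, written in Visual Basic. As an illustration of the kind of logging and test-timing helper described above, here is a minimal sketch in Python; the class name and log format are my own invention, not the author’s:

```python
import datetime
import time


class TestLogger:
    """Minimal test log: records each step with a timestamp, status, and duration."""

    def __init__(self, log_path):
        self.log_path = log_path

    def timed_step(self, name, func, *args, **kwargs):
        """Run one test step, log its outcome and elapsed time, and return its result."""
        status = "ERROR"  # overwritten below unless an unexpected exception escapes
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            status = "PASS"
            return result
        except AssertionError:
            status = "FAIL"
            raise
        finally:
            elapsed = time.perf_counter() - start
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            with open(self.log_path, "a") as log:
                log.write(f"{stamp}  {status:5}  {elapsed:8.4f}s  {name}\n")
```

A test script would wrap each step in `timed_step("step name", some_check)` and get a timing log as a side effect, which is the sort of reusable utility the book builds in Visual Basic.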

The book can be read by beginners without any real knowledge of software development or programming languages, as well as by testers with a development background. Beginners will find a clear introduction to the language and the development environment and explanations of basic concepts. More experienced testers will benefit from the advanced topics.

Visual Basic for Testers deals mainly with technology, but managers and consultants focused on methodological issues will also find it beneficial. The book addresses, briefly but efficiently, the following topics:

  • Pros and cons of test automation
  • When and what to automate
  • Team building
  • Project management issues

Some of these I would readily turn into checklists and use in everyday work.

The book is well structured and begins by setting clearly defined goals. Most of the chapters can be used separately. It could also serve as a textbook for a training class—and it is used as such. Disciplined individuals can use it for self-study; it includes a list of references on testing, test automation, and Visual Basic that spans books, articles, and Web sites.

The techniques described in the book can be used in different organizational environments and fit into various models of software development and their corresponding testing processes. Properly implemented, they can fit well into agile/Extreme Programming frameworks, with their emphasis on test-first and iterative development.

The book also includes chapters on VB.NET, which feel out of the book’s context. The only use I can see for them is by testers who need the new .NET development environment to build automated test scripts.

One thing I did not find in the book but would like to see in future editions is coverage of supplementing commercial off-the-shelf (COTS) test automation tools with self-developed add-ins. The techniques described in Visual Basic for Testers can be used for this purpose. A few examples from my own experience:

  • Developing executable programs in Visual Basic, callable from the existing tool, that add new functionality to the vendor automation tool, such as reading and verifying XML files
  • Integrating different vendor tools into one test environment by developing programs that use the tools’ application programming interfaces (APIs) and allow the tools to exchange data
  • Adding advanced reporting capabilities to testing tools, for example, reporting results to a specially formatted Excel sheet
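As an illustration of the first idea (an externally callable helper that reads and verifies XML files), here is a minimal sketch in Python rather than Visual Basic; the function name and the checks performed are hypothetical, not taken from the book:

```python
import xml.etree.ElementTree as ET


def verify_xml(path, required_tags):
    """Parse an XML file and confirm each required element appears at least once.

    Returns a list of problem descriptions; an empty list means the file passed.
    """
    problems = []
    try:
        root = ET.parse(path).getroot()
    except ET.ParseError as err:
        return [f"not well-formed: {err}"]
    present = {elem.tag for elem in root.iter()}  # root.iter() includes the root tag
    for tag in required_tags:
        if tag not in present:
            problems.append(f"missing required element: <{tag}>")
    return problems
```

Packaged as a small executable, such a checker could be launched by the vendor automation tool after each run, with a nonzero exit code signaling a verification failure.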

Another helpful addition would be a chapter on Visual Basic for Applications (VBA)—this subset of the Visual Basic language is an integral part of the MS Office programs, and it can be very helpful in automating a variety of testing tasks.

The last chapter in the book, “From Tester to Tester: Advice to the Visual Basic Automator,” provides practical advice from people who have been implementing Visual Basic and other test automation tools.

I have been using Visual Basic and Visual Basic for Applications as a test aid for years, and I found in this book a wealth of valuable, interesting, and challenging material that can help in everyday automation and consulting work. I have already started to implement certain concepts and techniques from the book—and they work fine!

Michael Yudanin (yudanin@hotmail.com), a Certified Software Quality Engineer, is a senior project manager at TesCom USA Software Systems Testing. He works in software quality assurance and testing both as a manager and as a consultant. His main areas of professional interest are test management, planning, design, and execution, as well as other software quality-related issues: integrating quality assurance activities into the various stages of the software development life cycle, requirements reviews, code inspections and reviews, setting test methodologies, and software measurement. He works with test automation: setting up test automation frameworks, developing scripts in different tools, and using regular programming languages. He also delivers training classes on software quality assurance and testing methodology, software measurement and metrics, automation tool implementation, and so on.


Systematic Software Testing

Rick Craig and Stefan P. Jaskiel. 2002. Norwood, Mass.: Artech House Publishing.
511 pages. ISBN 1-58053-508-9.

(CSQE Body of Knowledge area: Software Verification and Validation)

Reviewed by Cindy Streng
cindy.streng@lexisnexis.com

Systematic Software Testing should be on the shelf of every software test engineer. It is an excellent reference guide for the new or seasoned test engineer. The book covers the full gamut of the systematic test and evaluation process (STEP), using the IEEE standards. It is written in plain language, with key points noted in the outside margins. Examples throughout the book use an ATM scenario—something everyone understands. The authors also provide real-life experiences as supplementary information; their understanding and knowledge are evident throughout the book.

An overview of the testing process sets up the book. Although each chapter may be read and used alone, reading the book from cover to cover is beneficial. The STEP program is defined in detail.

The book emphasizes the importance of risk analysis in today’s test planning, as well as how to use risk analysis in planning where to test heavily and where less rigorous testing may be sufficient. Then each step in the testing process is defined in detail with examples that are carried through, giving the reader the continuity of how each step builds onto the next. This is the meat of the book.

The importance of planning and involvement with the project from the beginning is stressed. Being involved during the early stages of a project helps to define potential problems before coding even begins.

Each chapter goes into great detail, without getting bogged down, about the different phases of the test process. A person new to testing could take these chapters and come away with an understanding of the test process.

There are chapters on roles that give readers an overview of who does what and when; who is responsible for the different steps and support areas is clearly defined. The importance of total team buy-in to the process, including management, comes across in these chapters. The manager’s chapter also covers being a team member and leader, training, and the use of metrics.

The final chapters cover improving the test process, which is a never-ending process. There is also valuable information on ISO Certification, the Capability Maturity Model (CMM), and the Test Process Improvement (TPI) Model.

The appendices are extensive and valuable. The glossary covers all the definitions readers could possibly need, and there are sample templates and plans that can be used and customized for any company or project. They all follow the IEEE standards.

Overall, Systematic Software Testing is a necessary investment as a reference resource.

Cindy Streng (cindy.streng@lexisnexis.com) has a bachelor of applied science in systems analysis degree from Miami University in Oxford, Ohio. She started working as a systems analyst doing programming at NCR. She also worked as a contract programmer and consultant at BASS, Inc. (now known as Retalix). She is currently a software test analyst with LexisNexis, in Dayton, Ohio.


Quality Web Systems: Performance, Security, and Usability

Elfriede Dustin, Jeff Rashka, and Douglas McDiarmid. 2002. New York: Addison-Wesley. 318 pages. ISBN 0-201-71936-3.

(CSQE Body of Knowledge area: Software Verification and Validation)

Reviewed by Eric Patel
epatel@rapidsqa.com

This Web-testing book is the second from author Elfriede Dustin and her colleagues (their first was Automated Software Testing, reviewed in SQP vol. 4, no. 3). In it they address the key success factors of quality Web system engineering that are often overlooked in the rush to deploy quickly:

  • Proper functionality
  • Ease of use
  • Compatibility
  • Security
  • System performance
  • Scalability

Each issue is examined, techniques are outlined, and problem areas as well as testing strategies are discussed. In addition, a single case study (the Technology Bookstore) is referred to throughout the book. Software quality assurance engineers and software testers in addition to Web architects, Web developers, and project managers can benefit from the material covered here.

The authors present a use-case approach called requirement-service-interface (RSI). The chapter covering the RSI approach discusses how it provides a framework for analyzing and understanding potential use-case deliverables and their interrelationships, helping to specify how a system functions. RSI structures use cases into three categories that reflect different levels of granularity (the size of the activity described in the use case) and abstraction (the level of detail):

  1. Requirement use cases
  2. Interface use cases
  3. Service use cases

What about nonfunctional requirements such as security, performance, and usability? The authors simply document them (using the case study) in appendix C. This chapter ends with a generic test procedure template on pp. 53-54.

The first attribute to be discussed—security—is covered in the next chapter. Also covered are Web, application, and database server security, as well as client, communications, and network security. Throughout the chapter the authors show coding examples in C/C++ and Perl and explain how security can be breached. The step-by-step scenarios can be translated into use cases or test cases.

Next, two more critical Web application attributes are discussed: performance (response time and resource utilization) and scalability (the ability to add computing resources to keep up with demand), from both a user’s and an administrator’s perspective. The authors further break down performance-related tests into:

  • Base performance (under optimal conditions)
  • Load (under real-world conditions)
  • Stress (under limit/saturation conditions)
  • Reliability (to verify points of failure)
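The distinction among these test types is largely a matter of how much concurrent demand is applied while response times are recorded. As a rough sketch of that idea (in Python, not taken from the book), a harness might vary the number of simulated users and collect timings:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def measure_response_times(operation, concurrent_users, requests_per_user):
    """Run `operation` from several simulated users and summarize response times."""

    def one_user(_):
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            operation()  # in practice, an HTTP request against the system under test
            times.append(time.perf_counter() - start)
        return times

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        per_user = pool.map(one_user, range(concurrent_users))
    samples = [t for user_times in per_user for t in user_times]
    return {
        "requests": len(samples),
        "mean_s": statistics.mean(samples),
        "max_s": max(samples),
    }
```

Running the same harness with one user gives a base performance figure, with expected real-world concurrency a load figure, and with concurrency pushed past capacity a stress figure, so the same measurement code can serve all three test types.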

Measurements give readers a starting point for determining the metrics that may be required during testing. With these measurements in hand, the reader can jump to section 4.6 and learn some tips for improving the performance and scalability of his or her Web-based application.

In the compatibility chapter readers learn how to test the myriad operating systems, browsers, and other software that reside on end users’ computers in conjunction with a Web-based application. The authors’ suggestion to perform a risk analysis before compiling a compatibility table (to reduce the magnitude and scope of the compatibility test effort) is a good one. This chapter is also rich in example code fragments used to illustrate important points. Companies may sometimes choose to outsource this type of testing if there are too many combinations to test within the available time frame.

Finally, the attributes of usability and accessibility are discussed, including various usability design issues such as:

  • Usability vs. compatibility
  • Usability vs. performance
  • Content size and download times
  • Browser interface issues
  • Screen resolution
  • Complex screens
  • Content depth levels
  • Internationalization

Ten simple principles are listed for the reader’s consideration to make Web sites “usable.” Here the authors suggest that customers perform usability testing as a way to obtain their early and ongoing feedback. Is your site accessible to users with disabilities? This often-overlooked quality attribute is also covered.

A 17-page evaluation checklist is included that covers each of the quality attributes mentioned in the book. There is also a completed 38-page matrix that evaluates five automated test tools from some of the most popular vendors. For those looking for a book that covers these Web-based quality attributes in more detail, the search is over.

Eric Patel (epatel@rapidsqa.com) is chief quality officer for RapidSQA, a software quality service provider (SQSP) of training and consulting solutions. Patel serves as regional councilor for ASQ Software Division Region 1 and treasurer for the ASQ Boston Section. A reviewer for SQP and The Journal of Software Testing Professionals, Patel is also cofounder and instructor for Northeastern University’s new certificate program in software quality assurance.

