Time to Market vs. Time for Quality
Steven Rakitin's article, "Balancing Time to Market and Quality" (June 1999, p. 54), hits upon several valid points in this all-too-often battleground. The project team that succeeds in getting a "good enough" product to market in the required time frame becomes the corporate hero, worthy of great praise and admiration, even though in reality it is all smoke, mirrors, and perception.
I have some comments relative to three points that Rakitin discusses. Although I cannot claim experience in every field and aspect of software projects, I have yet to be associated with a project that fits the schedule-backward paradigm. Most of the projects I have worked on had some kind of estimation done at the project's inception. Almost all of these projects came in late, even though all involved parties had supported the estimates. There were just too many unforeseen variables that caused problems within the project's lifetime. I don't think the problem is lack of commitment by team members, but a lack of process that would provide accurate estimation numbers without huge fudge factors. Software estimation, in spite of the work done by industry leaders, is still an art, and a black art at that. As the adage goes, "The best laid plans of mice and men often go awry."
I wholeheartedly agree with the concept of adding quality goals to the entire project team's objectives. By making the whole team responsible and accountable, there is less chance for individual finger pointing. But to do this measurement, everyone has to agree on the measurement points, and the tools must be in place to support the data gathering.
Getting management buy-in and complete support is absolutely critical, and Rakitin brings out several reasons why. One of the major problems in the software industry is that although companies get their first product out into the commercial world with "good enough" quality, the expectations for the next product, be it an upgrade or something new, are much higher. Yet the success of this first product tends to cause management to resist change, even though, in general (and there are certainly exceptions), that product was built with little process and little commitment to quality. The required change is to instill a quality culture and all that goes with it. That done, quality products will be released in the desired time.
Using the Software CMM with Bad Judgment
I was reading Mark Paulk's article "Using the Software CMM with Good Judgment" (June 1999, p. 19) (and just a few weeks earlier attended Paulk's presentation based on it) when I got to thinking about organizations that misuse the CMMs. Most of us have heard stories about organizations that have to get to Level n by... and spend enormous energy attempting to do so.
The intent of the CMMs, as I understand it, is to provide organizations with a roadmap for process improvement. Assessments are used, ideally, to identify opportunities for improvement. Sadly, some organizations are less interested in actual improvement than in the appearance of improvement.
For the results of an assessment to be meaningful, all key process areas must be institutionalized; that is, they must be a part of the way people do their jobs. For such a condition to exist, a culture of quality must pervade the organization.
Unfortunately, for some organizations, CMM-based assessments, and actions based on them, are viewed as just another example of management's flavor-of-the-month style. This is particularly true in organizations where senior management does not understand the benefits of process improvement in the first place. Add to that the human tendency to resist change, especially when it is a move away from the familiar and comfortable toward the new and uncomfortable.
Although one of the original purposes of the CMM was to enable an acquisition agency to differentiate between competing software development contractors, it has sometimes been specified that an organization must have achieved Level n to be eligible to bid.
In large organizations with multiple concurrent programs, an assessor may not (almost certainly will not) be able to assess all programs equally. In fact, the assessment team may not have time to probe even one program in depth. The evidence and responses to assessment questions are likely to be prepared in advance and presented to assessors. Some organizations go to great lengths to prepare for their assessments, providing notebooks or CD-ROMs containing all of the material necessary for the assessor to evaluate the organization. Doing so makes it possible for assessors to perform the evaluation from wherever they choose.
Perhaps the thought is that if the assessors are presented with at least one example of an appropriate response to each question, they won't have time to look further.
Culture is also important after the assessment to ensure follow-through on findings. In at least some instances, after the assessment the organization heaves a collective sigh of relief and gets on with the "real work" the way it has always been done.
Some organizations have slavishly attempted to mold themselves to the letter of the CMM rather than recognize that an alternative implementation of a process or processes may still satisfy the spirit of the CMM requirements.
Call for Submissions
Software Quality Professional would like to feature articles on the following topics in upcoming issues:
Any other topics within the body of knowledge for the ASQ Certified Software Quality Engineer are always welcome. Please inquire (to SQP_Editor@asqnet.org) before submitting material in areas recently addressed within the journal, such as testing, inspection, metrics, process improvement, or standards.
Submissions are accepted at any time, but the deadlines for specific issues are:
December 15 for June issue
March 15 for September issue
June 15 for December issue
September 15 for March issue