Appendix for Choosing a Tool to Automate Software Testing
- What does this tool do? What are the benefits of using this tool?
- How can this tool solve my problems? Which of my current problems can this tool help with, and which can it not? (You need to know what your own problems are.)
- Will you demonstrate your tool using our tests on our site? What environment is needed to run this tool? (version numbers of operating systems, RAM, network configuration, and so on) How much disk space does it use?
- What does this tool not do? What future enhancements are planned for the tool? (Note that this also indicates current limitations of the tool.) What influence do tool users have on future development of the tool?
- Can tests be rerun in debug mode within the tool? Can the tests be run as a background task so I can do other work while it is running the tests (if this matters)?
- What is the market share of this tool? How do you define market share? (Every vendor will have a different definition that makes their tool look best.)
- Why is this tool better than other similar tools?
- What proportion of tools sold are providing real benefits to the organizations that bought them? For those that have not succeeded, why haven't they? What will I have to do to be sure to succeed with this tool?
- What support is available? Training, consultancy, help desk, service-level agreement for resolution of problems, technical expertise in our area? What effort is needed on-site to support the tool?
- What test planning standards, test processes, test structuring scheme, and so on need to be in place to gain real benefits from using this tool? (Ask if the vendor has read this article.)
- What features are included to ease the learning curve for the tool?
- How do other sites usually work with the tool?
- Is there a user group, and can I attend the next meeting?
- Can you give me the names of reference sites, and can I meet with at least two users who are achieving real benefits using this tool?
- How many versions have been released in the past year? How is release management handled? Do you release a defect list with the product?
- How many known defects are there in the tool currently? (If they say none, be wary!)
- What are your quality assurance and test processes? What testing is done on the tool itself? (What does the vendor know about testing in general?) Is the tool used to test itself? (This doesn't necessarily mean a lot.)
- What kind of tailoring/customization is possible for this tool? What extensions/add-ons/in-house routines have other users built? (These may indicate tool drawbacks or additional work that you will need to do to achieve the best benefit from the tool.)
- How does this tool integrate with other tools, such as other testing tools, project management tools, configuration management tools, and so on? Ask specifically about tools you already have.
- How long will it take to achieve real payback from this tool? Can you give me case histories of financial benefits from other users? (This is surprisingly difficult; many users actually track it, so make sure you plan to!) Can you help me estimate how much effort will be involved in implementing this tool in my organization?
Thanks to Paul Herzlich, Peter Oakley, Peter Herriott, and Matthew Flack for their contributions.
Questions to ask other computer-aided software testing (CAST) tool users
- How long have you been using this tool? Are you happy with it?
- How many copies/licenses do you have? How many users can be supported? What hardware and software platforms are you using?
- How did you evaluate and decide on this tool? Which other tools did you consider when purchasing this tool?
- How does the tool perform? Are there any bottlenecks?
- What is your impression of the vendor (commercial professionalism, on-going level of support, documentation, and training)?
- How many users actually use the tool? If the tool is not widely used, why not? (For example: technical problems with the tool, lack of management support to go through the learning curve, problems maintaining test scripts when the software changes, resistance, lack of training, too much time pressure, and so on.) If the tool is currently shelf-ware, that is, not used at all, skip to question 18.
- How much space does the tool-related test data take up, and how is this controlled? (This can be a hidden cost.)
- Does the tool perform your entire set of tests, or are there pre- and post-test activities that have to be performed manually? If so, why?
- How easy is it to interpret the results of the automated tests?
- What is your assessment of the quality of your own internal testing practices prior to acquiring the tool? How did the use of the tool affect the quality of the testing?
- Were there any nontechnical problems in your organization when introducing the tool, and how were they overcome?
- Is the tool now integrated into your work processes and standard procedures? If so, how much effort and how long did this take?
- Were there any other benefits or problems in using the tool that were not anticipated?
- Do you feel the tool gives you value for money?
- Have you saved any money by using this tool? Can you quantify the savings or improvements?
- How long did it take you to achieve real benefits? What are the critical factors for achieving payback?
- What improvements are needed in the tool? Is the vendor responsive to requested enhancements?
- What were your objectives or success criteria when buying the tool? (For example, running more tests, more consistent regression testing, improvement in meeting release deadlines, improved productivity, capacity planning, or performance assessment.)
- Have your objectives been achieved? Were they the right objectives? If not, what should they have been?
- If you were doing it over again now, would you still purchase this tool? What would you do differently?
Thanks to Peter Oakley for additional ideas.