“Don’t fix bugs later; fix them now.” – Steve Maguire
Why automate Testing?
Significant Reduction of Future Maintenance Effort for Automated Testing of Many Technologies
In today's fast-moving world, it is a challenge for any company to continuously maintain and improve the quality and efficiency of software systems development. In many software projects, testing is neglected because of time or cost constraints. This leads to a lack of product quality, followed by customer dissatisfaction and ultimately increased overall quality costs.
The main reasons for these added costs are:
- poor test strategy
- underestimated effort of test case generation
- delay in testing
- subsequent test maintenance
Test automation can improve the development process of a software product in many cases. The automation of tests is initially associated with increased effort, but the related benefits will quickly pay off.
Robust Test Automation Projects Balanced for Value and Effort
Automated tests can run fast and frequently, which is cost-effective for software products with a long maintenance life. When testing in an agile environment, the ability to quickly react to ever-changing software systems and requirements is necessary. New test cases are generated continuously and can be added to existing automation in parallel to the development of the software itself.
In both manual and automated testing environments, test cases need to be modified over extended periods as the software project progresses. It is important to be aware that complete test coverage through automation is unrealistic. When deciding which tests to automate first, their value versus the effort to create them needs to be considered. Test cases with high value and low effort should be automated first. Next come test cases with frequent use, frequent changes, and a history of errors, as well as test cases requiring only low to moderate effort to set up the test environment and develop the automation project.
Optimization of Speed, Efficiency, and Quality, and Reduction of Costs
Quick Return on Investment (ROI) of Test Automation
The main goal in software development processes is a timely release. Automated tests run fast and frequently, and modules can be reused across different tests. Automated regression tests, which ensure continued system stability and functionality after changes to the software, lead to shorter development cycles and better software quality; the benefits of automated testing thus quickly outgrow the initial costs.
Advance a Tester’s Motivation and Efficiency
More efficient Assignment of QA Tasks
Manual testing can be mundane and error-prone, and can therefore become exasperating. Test automation alleviates testers' frustration and allows test execution without user interaction while guaranteeing repeatability and accuracy. Testers can instead concentrate on more difficult test scenarios.
Increase of Test Coverage
Different Types of Testing to increase Test Coverage
Sufficient test coverage of software projects is often achieved only with great effort. Frequent repetition of the same or similar test cases is laborious and time consuming to perform manually. Some examples are:
- Regression test after debugging or further development of software
- Testing of software on different platforms or with different configurations
- Data-driven testing (creation of tests using the same actions but with many different inputs)
Test automation allows performing different types of testing efficiently and effectively.
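The data-driven testing mentioned above can be reduced to a small sketch: one test routine driven by a table of input/expected pairs, so extending coverage means adding rows rather than writing new tests. The `is_leap_year` function and its cases are illustrative stand-ins, not examples from this text.

```python
# Minimal data-driven test: one routine, many (input, expected) pairs.
# The function under test, is_leap_year, is a stand-in example.

def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test data: extending coverage means adding rows, not new test code.
CASES = [
    (2000, True),   # divisible by 400
    (1900, False),  # divisible by 100 but not by 400
    (2024, True),   # divisible by 4
    (2023, False),  # not divisible by 4
]

def run_data_driven_tests():
    """Run every case; return the list of failures (empty = all passed)."""
    failures = []
    for year, expected in CASES:
        actual = is_leap_year(year)
        if actual != expected:
            failures.append((year, expected, actual))
    return failures

if __name__ == "__main__":
    assert run_data_driven_tests() == []
```

Test frameworks such as pytest offer the same pattern natively (e.g., `@pytest.mark.parametrize`), but the principle is independent of any particular tool.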
Cost Benefits Analysis of Test Automation
Many managers today expect software test automation to be a silver bullet, killing the problems of test scheduling, the costs of testing, defect reporting, and more. Automating testing can have positive impacts in many areas, and there are many success stories to provide hope that test automation will save money and solve some testing problems. Unfortunately, there are many more horror stories, disappointments, and bad feelings, even in cases where automation has been beneficial. I have been brought into more than one situation where previous attempts at automating software testing have failed; where large investments have been made in shelfware, and many years of effort creating automated tests abandoned.

The purpose of this paper is to provide some practical guidance for understanding and computing the costs and benefits of test automation. It describes some financial, organizational, and test effectiveness impacts observed when software test automation is installed. The paper also advises about areas that are difficult or impossible to factor into the financial equations and addresses some common misconceptions management holds about test automation.

There are many factors to consider when planning for software test automation. Automation changes the complexion of testing and the test organization from design through implementation and test execution. It usually has broad impacts on the organization in such things as the tasks performed, test approaches, and even product features. There are tangible and intangible elements and widely held myths about the benefits and capabilities of test automation. It is important to really understand the potential costs and benefits before undertaking the kind of change automation implies, if only to plan well to make the most of them. Organizational impacts include such things as the skills needed to design and implement automated tests, automation tools, and automation environments.
Development and maintenance of automated tests is quite different from that of manual tests. The job skills change, test approaches change, and testing itself changes when automation is installed. Automation has the potential to change the product being tested and the processes used for development and release. These impacts have positive and negative components that must be considered.

Setting realistic expectations in management and understanding where benefits should be derived from test automation are key to success. We can easily provide cost justification for proposed automation if management demands numbers. Tool vendors and experts publishing their test automation strategies provide excellent sources of equations and customer examples justifying almost any approach. In my experience the trick has been to figure out what costs and benefits really relate to the automation at hand, and how to make best use of them.

It is critical to keep in mind that the purpose of test automation is to do testing better in some way. Automation is only a means to help accomplish our task: testing a product. The automation itself generally does not otherwise benefit the organization any more than the testing does. Cost benefit analysis provides us with useful information for deciding how best to manage and invest in testing.

There are also many automation areas that have the potential to provide a benefit or be a drawback depending on how they are handled. For example, automated tests may reduce staff involvement during testing, thus saving cost relative to manually running the same tests. But automated tests may also generate mountains of results that take much more staff involvement to analyze, thus costing more to run than manual tests. Often the information obtained from automated tests is more cryptic and takes longer to analyze and isolate when faults are discovered.
Existing metrics techniques such as code coverage can be used to estimate or compute test effectiveness before and after automation. Automated tests can be incredibly effective, giving more coverage and new visibility into the software under test. Automation also provides opportunities for testing in ways impractical or impossible for manual testing, yet conventional metrics may not show any improvements. Automated tests can generate millions of events and sequences, limited only by the machine power and time available for running the tests. These tests can find defects in code that is already 100% covered. Employing random numbers allows sampling of events and sequences, and also allows tests to do new things every time they are run. Automated probes can look inside the product being tested at such things as intermediate results, memory contents, and internal program states to determine whether the product is behaving as expected.
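The random sampling of events and sequences described above can be sketched with a seeded random number generator, so that any failing sequence is reproducible from its seed. The tiny bounded counter under test and its reference model are hypothetical stand-ins for a real system, not anything from the paper.

```python
import random

# Random event-sequence testing: a seeded RNG samples operation
# sequences, so each run is reproducible from its seed. The bounded
# counter under test is a stand-in for a real system.

class BoundedCounter:
    """System under test: a counter clamped to [0, limit]."""
    def __init__(self, limit: int):
        self.limit = limit
        self.value = 0

    def increment(self):
        if self.value < self.limit:
            self.value += 1

    def decrement(self):
        if self.value > 0:
            self.value -= 1

def run_random_sequence(seed: int, steps: int = 1000, limit: int = 5) -> bool:
    rng = random.Random(seed)   # fixed seed => reproducible event sequence
    counter = BoundedCounter(limit)
    model = 0                   # independent reference model of the counter
    for _ in range(steps):
        if rng.random() < 0.5:
            counter.increment()
            model = min(model + 1, limit)
        else:
            counter.decrement()
            model = max(model - 1, 0)
        # Invariant checks: stay within bounds and agree with the model.
        if not (0 <= counter.value <= limit) or counter.value != model:
            return False        # the seed identifies the failing sequence
    return True

if __name__ == "__main__":
    assert all(run_random_sequence(seed) for seed in range(20))
```

Varying the seed makes the tests "do new things every time they are run" while keeping every failure replayable, which is exactly what makes randomized automation practical to debug.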
Some testing can only be accomplished through software test automation. Computing the cost benefit of a simulated load from a thousand users often isn't even necessary: if you need to measure what the system does under such a load, automation may be the only practical way to generate it. Testing for memory leaks, checking program internal state variables, or monitoring tests for unexpected system calls can likewise only be done with software automation.
This paper breaks down the many factors and provides examples of how the elements may be viewed, analyzed, utilized, and measured. The goal is to dispel some of the myths about test automation and provide a practical approach to deciding what types of investments are worthwhile.
There are several areas in which we should set management expectations: intangible costs and benefits, falsely expected benefits, factors common to manual and automated testing, and organizational impacts. We also need to be careful about how we measure and account for things.

Intangible costs are difficult to realistically assess. Where they can be measured, there is great variation in the financial value we place on them. There are also issues in capturing values to measure how much change is due to automation. Some of the intangibles are generally positive, some negative, but most can be mixed blessings, depending on one's perspective and how they are done. Due to the difficulty of attaching objective values to these factors, they are best left out of the ROI computation in most cases.
- Hands-off testing. Although the cost of people is easily measurable, any additional value of
computer control is difficult to quantify.
- Improved professionalism of test organization. This often increases motivation and
productivity, and comes with the new discipline and tasks automation requires.
- Expansion into advanced test issues. This can occur because of the new abilities to test and
monitor that are a consequence of automation with the improved professionalism.
- An immediate reduction in perceived productivity of the test organization. This perception is
due to a pause in testing while people ramp up, the lag due to installation of automation
tools, and creation of automated tests.
- Not all members of the test team will want to change. Some turnover in personnel often
accompanies test automation even when some testers continue to test manually.
- The changes in the quality of tests. The quality may improve or get worse, but manual and
automated tests almost always are different exercises.
- Numbers of product rolls (test cycles) before release. Automation often allows faster
confirmation of product builds and may encourage more turns. The churning may
improve productivity and quality or possibly may cause laziness, lack of attention, and
degradation of quality.
- Test coverage. It may improve or lessen, depending on the effectiveness of manual testing,
the automated tools, and the automated tests: some testing can only be done through
automation; the value of changes in test coverage is difficult to quantify even though we can
compute many objective measures of coverage; good exploratory testing may cover many
more interesting cases than mundane automation; and manual testing may exercise
situations difficult to manage automatically.
Management expectations have often been set through the media, vendor hype, conferences, and books extolling the virtues of test automation. Some of the information is quite valid and applicable, but much of it underemphasizes the special circumstances and specific considerations that apply to the projects described and overemphasizes the successes. Test automation is not a silver bullet. It does not solve all testing problems and has to be carefully planned to be even marginally successful. Poorly set expectations can result in a beneficial automation effort getting labeled as a failure.
Some falsely expected benefits:
- All tests will be automated. This isn’t practical or desirable.
- There will be immediate payback from automation. An immediate payback may be seen for some automation (e.g., build tests), but usually the payback comes much later, after the investment. It takes a lot of work to create most automated tests, and the savings usually come from running and rerunning them after they are created.
- Automation of existing manual tests. Manual tests seldom translate directly into good automated tests; the machine can compound our errors thousands of times faster than we can, and computers don't care that their results are ridiculous or useless.
- Zero ramp up time. Automating tests takes time. The tools have to be selected, built, installed, and integrated before testing can take place, and planning and implementing automated tests often take many times the effort of equivalent manual tests.
- Automated comprehensive test planning. Automated tools that analyze requirements, specifications, or code don’t do everything. They often make significant assumptions and the assumptions should be validated or compensated for. Showing that ‘the code does what it does’ is not the same as showing that it works per the specification (or per the requirements, or that it meets the users’ needs). Automated analysis often misses significant classes of possible errors.
- Use of capture/playback for regression testing. This only works in the rare case when the product is so stable that there is very little expectation that any existing tests will need change in the future.
- One tool that fits perfectly. Many commercial automation tools are worth the cost and pay back very well, and a commercially available tool may fit some organization and project perfectly, but it has never been one I've worked on or run into.
- Automatic defect reporting (without human intervention). This often is disastrous for the testing organization and development. Problems include duplicate reports, false detection of errors, error cascades (one error triggers many test failures), and unreproducible errors, among others. Even with human review of the reports this feature of some automation tools can require far more effort than it saves.
It is worthwhile noting a few additional things for management regarding the cost of automation:
- Much of the benefit from test automation comes from the discipline applied to analysis of the software under test and planning of testing. These benefits are unrelated to automation and would be beneficial for manual testing as well.
- Introducing test automation often causes an improved professionalism within the test organization. This leads to better testing, closer relationships with development, and expansion into more advanced areas of software testing.
- Often the investment in automation does not pay off within the current project; the payback is usually seen in the next project that uses it, often many months or years after the investment is made.
- There are significant negative schedule and performance impacts in testing when automation is first introduced.
- Writing of automated tests is more difficult than manual tests and requires a superset of knowledge and experience over manual testing. Not all members of an existing test group are able to make such changes.
- Automated tests frequently require substantial maintenance, especially for cheap (quick and dirty) techniques such as capture/playback. In some cases automated tests may have an expected “life” of only one or two test runs, yet cost three to ten times as much to create as equivalent manual tests.
- Great care must be taken to gather unbiased figures and maintain balance if we want to use the measures to understand how we’re doing. “Figures don’t lie” doesn’t apply to what we refer to as software metrics; we can prove any point we want to (and often delude ourselves by doing so).
Where financial values can be compared, there are simple ways to decide about automation. More difficulty is encountered when comparing organizational changes, and comparing test effectiveness is confounded by intangible factors. With all of the confusing measures, questionable benefits, and unknown factors, it is often better to make first-order approximations than to try to measure carefully. The first-order approximations can often be measured easily. It isn't really useful to compute ratios to five decimal places from numbers that are approximations, estimations, guesses, or exact measures of factors totally confounded by technology in any case. In many cases we have difficulty knowing what value to use even when we have complete information. (Is a test worth five times more because it is run on five systems instead of one? Is it worth five times more because we run it on one system five times? On a system that is five times faster?)
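The kind of first-order approximation advocated above can be reduced to a one-line break-even formula: divide the cost of building the automation by the net saving per run. All figures in this sketch are illustrative assumptions, not measurements from the paper.

```python
# First-order break-even estimate for automating one test suite.
# All hour figures are illustrative assumptions, not real measurements.

def break_even_runs(build_cost_hours: float,
                    manual_run_hours: float,
                    automated_run_hours: float,
                    maintenance_hours_per_run: float = 0.0) -> float:
    """Number of runs after which automation becomes cheaper than
    manual execution; infinity if automation never saves per run."""
    saving_per_run = manual_run_hours - (automated_run_hours
                                         + maintenance_hours_per_run)
    if saving_per_run <= 0:
        return float("inf")   # automation never pays back on these numbers
    return build_cost_hours / saving_per_run

if __name__ == "__main__":
    # Example: 40 h to build the suite, 4 h per manual run, 1 h per
    # automated run (including result analysis), 0.5 h maintenance/run.
    runs = break_even_runs(40, 4, 1, 0.5)
    assert runs == 16.0
```

Note that the per-run cost of the automated suite must include result analysis and maintenance; as the paper points out, leaving those out is exactly how biased ROI figures get produced.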