As with any metrics, automated testing metrics should be tied to clearly defined goals for the automation effort. It serves no purpose to measure something for the sake of measuring. To be meaningful, a metric should relate directly to the performance of the effort.
Before defining the automated testing metrics, there are metrics-setting fundamentals you may want to review. Before measuring anything, set goals. What are you trying to accomplish? Goals are important; if you do not have goals, what is it that you are measuring? It is also important to track and measure on an ongoing basis. Based on the metrics outcomes, you can then decide whether deadlines, feature lists, process strategies, and so on need to be adjusted. As a step toward goal setting, there may be questions that need to be asked about the current state of affairs. Decide what questions can be asked to determine whether you are tracking toward the defined goals. For example:
- How much time does it take to run the test plan?
- How is test coverage defined (KLOC, FP, etc.)?
- How much time does it take to do data analysis?
- How long does it take to build a scenario/driver?
- How often do we run the test(s) selected?
- How many permutations of the test(s) selected do we run?
- How many people do we require to run the test(s) selected?
- How much system time/lab time is required to run the test(s) selected?
In essence, a good automated testing metric has the following characteristics:
- is objective
- is measurable
- is meaningful
- has data that is easily gathered
- can help identify areas of test automation improvement
- is simple
A good metric is clear rather than subjective; it can be measured; it has meaning to the project; it does not require enormous effort or resources to gather its data; and it is simple to understand. A few more words are in order about keeping metrics simple. Albert Einstein once said:
“Make everything as simple as possible, but not simpler.”
When you apply this wisdom to software testing, you will see that:
- Simple reduces errors
- Simple is more effective
- Simple is elegant
- Simple brings focus
It is important to generate a metric that calculates the value of automation, especially if this is the first time the project has used an automated testing approach. The test team will need to weigh the time spent developing and executing the test scripts against the results the scripts produce. For example, the test team could compare the number of hours required to develop and execute the test procedures against the number of documented defects that would likely not have been revealed during a manual test effort.
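As a rough illustration of that comparison, the calculation could be expressed as a simple ratio. The function name and figures below are hypothetical placeholders, not values from any real project.

```python
# Illustrative sketch only: defects per hour invested, counting only defects
# that a manual effort would likely not have revealed. All figures are assumed.
def automation_value(dev_hours, exec_hours, defects_automation_only):
    total_hours = dev_hours + exec_hours
    return defects_automation_only / total_hours if total_hours else 0.0

# Example: 120 hours to develop the scripts, 8 hours to execute them,
# 6 documented defects that manual testing would probably have missed.
print(automation_value(120, 8, 6))  # ~0.047 defects per hour invested
```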
Sometimes it is hard to quantify or measure the benefits of automation. Often, for example, automated testing tools discover defects that manual test execution could not have discovered: during stress testing, 1,000 virtual users execute a specific piece of functionality and the system crashes. It would be very difficult to discover this problem manually with 1,000 test engineers. Another way to minimize the test effort involves using an automated test tool for data entry or record setup. The metric that applies in this case measures the time required to set up the needed records manually versus the time required to set them up using an automated tool.
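A back-of-the-envelope version of that record-setup metric might look like the following sketch; the rates and durations are assumptions chosen for illustration, not measured values.

```python
# Hypothetical record-setup comparison: estimated manual time versus the
# duration of a scripted run. All figures below are assumed.
manual_minutes_per_record = 2.0      # assumed manual data-entry rate
records_needed = 10_000
automated_run_minutes = 15           # assumed duration of the automated setup

manual_total = manual_minutes_per_record * records_needed
print(f"Manual: {manual_total:.0f} min, automated: {automated_run_minutes} min, "
      f"saved: {manual_total - automated_run_minutes:.0f} min")
```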
Consider the test effort associated with the system requirement that reads, “The system shall allow the addition of 10,000 new accounts.” Imagine having to manually enter 10,000 accounts into a system in order to test this requirement! An automated test script can easily support this requirement by reading account information from a file through the use of a looping construct, and the data file itself can easily be generated with a data generator. Verifying this system requirement using test automation requires far fewer man-hours than performing such a test using manual test methods.
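A minimal sketch of such a looping script, assuming the generated account data sits in a CSV file, might look like this. The file name, field names, and the add_account helper are hypothetical stand-ins for whatever interface actually drives the system under test.

```python
import csv

def add_account(name, account_type):
    """Placeholder for the call that drives the system under test
    (e.g., through its API or a UI-automation layer)."""
    pass

# Loop over the generated account data and add each record to the system.
with open("generated_accounts.csv", newline="") as f:
    for row in csv.DictReader(f):
        add_account(row["name"], row["account_type"])
```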
Automated software testing metrics can also be used to track additional test data combinations. For example, with manual testing you might have been able to test ‘x’ test data combinations; with automated testing you are now able to test ‘x+y’ combinations. Defects uncovered in the set of ‘y’ combinations are defects that manual testing might never have uncovered.
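As a simple illustration of how automation widens coverage of data combinations, consider the following sketch; the input fields and values are hypothetical.

```python
from itertools import product

# Hypothetical input fields; an automated run can exercise every combination,
# whereas a manual effort would typically sample only a handful of them.
account_types = ["checking", "savings", "credit"]
currencies = ["USD", "EUR", "GBP", "JPY"]
opening_balances = [0, 1, 999_999]

combinations = list(product(account_types, currencies, opening_balances))
print(len(combinations))  # 36 candidate test cases
```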