One of the reasons we turn to automation is to run more tests in a shorter timeframe. Machines are, after all, better at running lots of repetitive tasks quickly. And while 'quick checks' will not cover the full scope of our testing, much of our work falls into that category.
So automation is great for running lots of tests. But even automation hits barriers when the potential number of tests stretches into the many thousands or more. When this happens, we face a number of problems.
1. The time required to execute tests is far greater than is practical. Even a very fast machine cannot execute all the tests in a reasonable timeframe.
2. We need a way to validate the results. If the tests are all identical, this is trivial, but when the results are complex, we need some way of determining what the result should be, and then whether the test passed or failed. This is the so-called oracle problem.
3. Many of the tests will be redundant and offer little information. We have to weigh the cost of running a test against the value of the information it provides. Simply running more and more tests that cover (almost) the same thing is of little value.
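One common way to attack the redundancy problem is combinatorial (pairwise) test selection: instead of running every combination of parameters, keep just enough tests that every pair of parameter values appears at least once. The sketch below is illustrative only, not the approach from the talk; the parameter names are invented, and the greedy strategy is one of several possible.

```python
from itertools import combinations, product

def pairwise_reduce(params):
    """Greedily pick configurations until every pair of parameter
    values appears in at least one chosen test (pairwise coverage)."""
    names = list(params)
    # All (param, value) pairs that must be covered at least once.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((i, va), (j, vb)))
    all_tests = list(product(*(params[n] for n in names)))
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(all_tests, key=lambda t: sum(
            ((i, t[i]), (j, t[j])) in uncovered
            for i, j in combinations(range(len(names)), 2)))
        chosen.append(best)
        for i, j in combinations(range(len(names)), 2):
            uncovered.discard(((i, best[i]), (j, best[j])))
    return chosen

# Hypothetical parameter space: 18 full combinations.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "linux", "macos"],
    "locale": ["en", "de"],
}
tests = pairwise_reduce(params)
print(len(tests), "of", 3 * 3 * 2, "configurations")
```

Even on this tiny example the reduction is visible; on spaces with dozens of parameters, the savings are dramatic while every pairwise interaction is still exercised.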
So the challenge is threefold: first, how do we reduce the set of tests; second, how do we determine the pass/fail conditions for a large number of tests; and finally, how do we run the tests as quickly as possible? If we can meet those challenges, we have a solution that can provide great coverage within the time and cost we are willing to pay.
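For the pass/fail question, one pattern is to replace an exact comparison against a golden result with a structural oracle that tolerates harmless variation (such as floating-point noise). The following is a minimal sketch of that idea, not the oracle used in the talk; the function name and tolerance are assumptions for illustration.

```python
import math

def results_match(expected, actual, rel_tol=1e-6):
    """Structural comparison oracle: recurse through nested results,
    comparing numbers with a relative tolerance and everything else exactly."""
    if isinstance(expected, dict) and isinstance(actual, dict):
        return expected.keys() == actual.keys() and all(
            results_match(expected[k], actual[k], rel_tol) for k in expected)
    if isinstance(expected, (list, tuple)) and isinstance(actual, (list, tuple)):
        return len(expected) == len(actual) and all(
            results_match(e, a, rel_tol) for e, a in zip(expected, actual))
    if isinstance(expected, (int, float)) and isinstance(actual, (int, float)):
        return math.isclose(expected, actual, rel_tol=rel_tol)
    return expected == actual

# A rerun whose totals differ only by floating-point noise still passes.
baseline = {"total": 10.000001, "items": [1, 2, 3]}
rerun = {"total": 10.0000009, "items": [1, 2, 3]}
print(results_match(baseline, rerun))
```

An oracle like this lets thousands of generated tests share one verdict function instead of each carrying a hand-written expected result.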
In this talk, I will examine these challenges in detail, covering the tools and techniques we used, the obstacles we faced, and the results we achieved with our automation solution.