Another way of looking at automation in testing
Carsten Feilberg
The most typical reason anyone asks me for help with adding automation to their testing is: we want to save time or money. I always nod in understanding. Who wouldn't want to spend less time and lower the cost of testing software? I'm all for that!
What we are talking about, however, is in fact creating some software to expose some scenarios to existing software and then somehow measure and evaluate its reaction. So instead of having one set of software code, we will have two. It's almost like trying to save gas on your car by buying another car.
There is potential for saving time, of course. Computers are much faster than humans, and furthermore they always do things in exactly the same way. That's a bonus, mostly. Also, if we have enough computing power at our disposal, we can run scenarios in parallel instead of in sequence. Time saving, no doubt.
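The parallel-run point above can be sketched in a few lines of Python. The scenario names and timings here are made up stand-ins for real automated checks; the point is only the shape of the speed-up:

```python
# Sketch: running independent test scenarios in parallel vs. in sequence.
# The scenarios just sleep to simulate exercising the system under test.
import time
from concurrent.futures import ThreadPoolExecutor

def run_scenario(name, duration):
    """Pretend to exercise the system under test for `duration` seconds."""
    time.sleep(duration)
    return name, True  # in a real check, pass/fail would come from assertions

scenarios = [("login", 0.2), ("checkout", 0.2), ("search", 0.2)]

# Sequential: total time is roughly the sum of all durations.
start = time.perf_counter()
sequential = [run_scenario(n, d) for n, d in scenarios]
sequential_time = time.perf_counter() - start

# Parallel: total time is roughly the slowest single scenario.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(scenarios)) as pool:
    parallel = list(pool.map(lambda s: run_scenario(*s), scenarios))
parallel_time = time.perf_counter() - start

print(f"sequential: {sequential_time:.2f}s, parallel: {parallel_time:.2f}s")
```

Note the caveat hidden in the sketch: the speed-up only holds if the scenarios really are independent of each other, which in practice is a design constraint on both the tests and the system.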
Saving money is also possible. We won't need people to do what computers do, so either they are free to do something else - or they can be let go completely, so we don't have to pay for their time and effort.
But there's a catch. Despite what the tool vendors claim, automating scenarios and interactions with software is hard. Computers are essentially blind. They don't know how to 'see', so we need to teach them where to look and what to look for. They are also clueless about what the things they find actually mean. And they never come up with new ideas to try. We have to teach them exactly how to interact and how to measure success as well as failure. For most software nowadays, that is far from trivial. So, bad news: we probably still need to have people around.
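To make "teach them where to look and how to measure success" concrete, here is a minimal sketch using only the standard library. The page content, the element id and the expected text are all hypothetical; a real check would drive a browser or an API instead, but the two ingredients are the same:

```python
# Sketch: an automated check is (1) where to look and (2) what counts as success.
from html.parser import HTMLParser

class ElementTextFinder(HTMLParser):
    """Collects the text of the element with a given id - the 'where to look'."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.capturing = False
        self.text = None

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.target_id:
            self.capturing = True

    def handle_data(self, data):
        if self.capturing and self.text is None:
            self.text = data.strip()
            self.capturing = False

# A stand-in for the page the software under test would render.
PAGE = '<html><body><div id="status">Order confirmed</div></body></html>'

finder = ElementTextFinder("status")        # where to look
finder.feed(PAGE)
passed = finder.text == "Order confirmed"   # what counts as success
print("PASS" if passed else f"FAIL: got {finder.text!r}")
```

Everything the check "knows" - the id, the expected text - had to be spelled out by a person. Rename the element or reword the message and the check fails, even though a human tester would not even blink.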
If the automation and the software being tested run out of sync, the automation may run to completion and report false failures, false successes, or inconclusive results. That's not really helpful, and keeping the two in sync so they can run unsupervised and on their own is one of the biggest challenges in automation. And because there are now two systems to support, we need to diagnose problems and fix them in the right system - sometimes in both. It is worth noting that we often don't even design our main software to run unsupervised, so it really is quite an ambition to expect that of our automation software.
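One common way to reduce this kind of drift is to poll for a condition with a timeout instead of assuming a fixed delay. The sketch below is a generic polling helper, with a made-up `slow_backend` standing in for the software under test; note that on timeout it reports "inconclusive" rather than blaming the product:

```python
# Sketch: keep automation in sync by waiting on a condition, not a fixed sleep.
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    # A timeout is inconclusive: it may be a product fault, a test fault,
    # or just a slow environment. Don't report it as a product failure.
    raise TimeoutError("condition not met within timeout - inconclusive")

# Simulate a backend that only becomes ready after a short delay.
ready_at = time.monotonic() + 0.3
def slow_backend():
    return time.monotonic() >= ready_at

assert wait_until(slow_backend)  # passes once the backend catches up
print("in sync")
```

Even this simple helper shows the two-systems problem: the timeout and polling interval are code we now own and tune, separate from the product itself.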
I also often hear talk about which kinds of scenarios are suitable for automation. Would you like to automate integration testing? Or smoke testing? Or regression testing? Or what?
The answer comes quite naturally: remove the word 'automation'. If it is difficult to explain how to properly test, say, regression, it certainly won't become any easier by adding 'automated' to the sentence. First, clarify what you want to achieve. Then pick the tool and approach that allow it. It's rarely all or nothing - if we decide to automate all regression testing, we quickly run into scenarios that simply can't be automated. Others will be very easy to automate, but perhaps less valuable. And within the limits of automation, we end up chasing the ability to run the automation, forgetting why we wanted to do regression testing in the first place.
IMHO the word 'automation' clouds our minds more than it helps us think. It's the equivalent of the pot of gold buried at the end of the rainbow. We are mesmerised and obsessed by the thought of all that lovely gold, and pay too little attention to the impossibility of locating the actual end of the rainbow. Until we embark on the search.
So instead we could think of automation in exactly the same way as we think of a web browser: it enables our testing. It gives us access to and interaction with a system. It has some limitations and offers some quick wins. But it's just a testing enabler. Automation gives us speed and reliability, within certain limits. Depending on the nature of our software, it may be helpful or it may not - just as a web browser is rarely much help if the system under test is an Android app or an old-fashioned fat client. It's about picking the right tool, using it when it's an advantage, and stepping away from it when it no longer serves us. Maybe only if we start looking at automation as an operational tool instead of a full-blown solution can we overcome the blindness that the promise of automation holds.
If you would like to have us visit you for discussing how automation in testing can help you in your context please contact us on https://houseoftest.rocks.
Director - Quality Assurance @ Alternative Path
Another way to look at it: just because someone owns a car does not mean it can be used everywhere. Maybe the traffic is bad and you only need to go somewhere nearby, so it's better to just walk. Or maybe you can share a ride. On other days it might be the other way around (when it rains, you are definitely better off taking your car). The point is: automation is an enabler - a means to get to the end faster, not an 'end' in itself. How and when to use it is as much an art as it is a science. Sometimes you are better off automating just a smoke test that reassures you the basic health is fine, and at other times you might want to go the distance and automate an end-to-end integration test. Both have their merits. You should not push a square peg into a round hole. Done right, automation can save time and effort; however, looking at it as just a way to reduce manpower, especially in a complex system, is a disaster waiting to happen.
Director - Quality Assurance @ Alternative Path
Excellent article!
Partner with channel to deliver value to the market
Absolutely.
Too many people agreeing with this article. :-) I, of course, agree as well.