Manual vs. Automated Testing
With thanks to Rae Cobleigh for her editing and to Peter Matthies for his inspiration.
In interviews I often get asked about my opinion on automated versus manual testing. To be clear, I don't see this as a question of pitting one technique against the other, but rather a question of how to balance these different strategies to maximize efficacy.
So, let’s break this down a little. What does manual testing provide? First of all, it provides a human element. As I, or someone else on the team, go through the product under test, we are partly learning the list of choices available at every stage and partly executing a workflow, attending to the experience of using the product. For me, there’s an element of discovery and a sense of “what if” at each step. I’m also not thinking about each step as one of the separate functions from which the application is built; I'm thinking about the process as a whole. At the end of the activity I have at least two things: a list of alternate choices I could have made and a completed workflow.
The completed workflow is a candidate for a documented test case: a series of nominal steps, with a specific set of choices and a specific outcome at each step. The list of alternate choices gives me an exploration path; formally, the term “exploratory testing” describes this activity. Throughout my career, I’ve found exploratory testing to be invaluable. Whether examining a legacy product (whose details of inception are often long forgotten) or working on a completely new product, a holistic examination of the product's inner and outer workings provides a source of ideas and analysis that would otherwise be left unexplored, or left to chance as a customer stumbles across odd gaps in a disjointed experience.
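As a minimal sketch of what that documentation can look like, here is a hypothetical workflow captured as a repeatable test. The `Cart` class and its methods are illustrative stand-ins for the product under test, not anything named in this article; the point is that each nominal step, choice, and expected outcome from the manual walkthrough is recorded explicitly.

```python
# Hypothetical example: a completed manual walkthrough ("add items,
# apply a discount, check out") captured as a documented test case.
# Cart is a stand-in for the real product under test.

class Cart:
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add_item(self, name, price):
        self.items.append((name, price))

    def apply_discount(self, fraction):
        self.discount = fraction

    def checkout(self):
        subtotal = sum(price for _, price in self.items)
        return round(subtotal * (1 - self.discount), 2)


def test_checkout_with_discount():
    # Step 1: start from a known, empty state.
    cart = Cart()
    # Step 2: the specific choices made during the walkthrough.
    cart.add_item("notebook", 4.00)
    cart.add_item("pen", 1.50)
    cart.apply_discount(0.10)
    # Step 3: the specific outcome observed at the end of the workflow.
    assert cart.checkout() == 4.95


test_checkout_with_discount()
```

Written this way, the alternate choices noted during exploration (a different discount, an empty cart, removing an item) become an obvious backlog of further test cases.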
Automated testing can also provide valuable discoveries, just on a finer-grained level. Although automated testing doesn't usually yield as much information about the holistic user experience, it can be a great way to see how the product will respond when it's pushed to its limits, and to discover what those limits even are. A large part of automated testing is the setting up of the controlled initial state which, in and of itself, can yield interesting insights. You might discover that it's near-impossible to consistently recreate a particular state in highly asynchronous environments, which means the product could become unreliable once a lot of people start using it. Or you might find out that the developers didn't design from the ground up with automated testing in mind, and it's actually impossible to create a fully-automatic test harness. But assuming full automation is possible, there is a great deal of value in the test cases that are generated. For example, as product development continues, automated regression tests are often the fastest way to detect unintended changes in behavior, making it more likely that any mistakes will be corrected before a customer can encounter them.
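One common shape such a regression test takes is a baseline comparison: record the output of a known-good build and fail the moment current behavior diverges. The sketch below is illustrative only; `format_invoice` and the stored baseline are hypothetical names, not part of any product discussed here.

```python
# Illustrative regression check: compare current behavior against a
# baseline recorded from a known-good build, so any unintended change
# in behavior surfaces immediately rather than in front of a customer.

def format_invoice(items):
    """Render line items as text (the behavior under regression test)."""
    lines = [f"{name}: ${price:.2f}" for name, price in items]
    lines.append(f"TOTAL: ${sum(p for _, p in items):.2f}")
    return "\n".join(lines)


# Output captured from a known-good build and stored with the test suite.
BASELINE = "notebook: $4.00\npen: $1.50\nTOTAL: $5.50"


def test_invoice_matches_baseline():
    current = format_invoice([("notebook", 4.00), ("pen", 1.50)])
    # Any behavior change, intended or not, fails here and forces a
    # conscious decision: fix the regression or update the baseline.
    assert current == BASELINE


test_invoice_matches_baseline()
```

Because a failure here distinguishes "the behavior changed" from "the behavior is correct," intentional changes still require a human to review and re-record the baseline, which is part of why such tests catch mistakes quickly.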
So when I am asked to opine on testing efficacy and the balance between resource expenditures for automated and manual testing, my answer is “It depends.” As is often the case for dynamic systems, no single strategy or tactic is going to serve perfectly all the time. What ends up being important is that all the different activities performed in service of quality are weapons in the arsenal to combat mediocrity. Your specific choice in equipping your teams with the right balance is going to be based on what’s needed in the moment and for the foreseeable future. Those choices, and the application of your arsenal, are going to change as circumstances and business needs change.
I submit that the testing strategy decision-making process has far too many variables as inputs to allow for a simple algorithm that will give you the best outcome in all cases. Until you've honed your instincts on how to proceed, what you will need is a coach who will guide you through making the best decision for your team in your current situation. As with a coach in any athletic endeavor, the goal for you and your coach is to help you grow and eventually outgrow their guidance. While certain general rules of thumb can be provided, such rules will eventually start to sound like "fortune cookie philosophy" and clinging too dogmatically to them will distract you from developing a practical approach that you can actually implement and use in a wide variety of situations.
How do you make these choices?
Jack of Many Trades, Quick Learner
Do you think it's possible to at least enumerate the input variables for the testing strategy decision-making process? Even if a simple algorithm isn't likely to be possible, do you think a comprehensive, complex one could exist? And if it might not be possible to exhaustively enumerate the input variables—the world is always changing around us!—what are the top few that tend to be the most important to consider? What sort of questions would you ask someone who came to you for coaching advice?