Testing: Search and Replace
This is an appeal to testers, but also to the clients of testing: managers, developers, and the business people who want to release excellent products quickly with as few problems as possible.
People's reluctance to talk about problems (and potential problems) undermines the most important job for testers: finding problems that threaten the value of the product. Problems happen. If testers want to help, and if people want help from testing, we must focus on that.
In order to say credibly that we're testing a product, we must obtain experience with it; perform (thought) experiments on it; explore the product, its requirements, its context, and the technical and social systems around it. But it seems to me that a lot of common talk about testing subverts that by focusing "testing" on demonstration and confirmation that everything is okay. This misses the point of testing, which is to challenge the product and our beliefs about it.
So: here is a set of straightforward, plug-and-play replacements that provide alternatives to common testing talk. These can help us to focus on finding problems that matter, at pretty much any stage of development. Ready? Let's begin.
Try replacing "verify that..." with "challenge the belief that..."
Try replacing "validate" with "investigate".
Try replacing "Confirm..." with "Find problems with..."
Try replacing "Show that it works" with "Discover where it doesn't work".
Try replacing "Pass or fail?" with "Is there a problem here?"
Try replacing "test case" with "experiment".
Try replacing "actual result" and "expected result" with "observation" and "oracle" (that is, the means by which we recognize a problem). Testing is about encountering the unexpected.
Try replacing "test case" with "explicit test conditions and coverage ideas".
Try replacing "pass/fail ratio" with a list of problems that matter. No one cares particularly about "passing" test cases; and even if the test case passes, there can still be terrible problems on which the test case is mute.
Try replacing "counting test cases" with "describing coverage". And make sure you describe important testing not yet done; aspects of the product that your testing hasn't covered. Excellent reporting matters.
Try replacing "automated testing" with "programmed checking". Testing can't be automated, but occasionally automated checks can alert us to problems. Note, though, that creating automated checks requires programming, even if it's very high-level programming.
Try replacing "test automation" with "tool-assisted testing". Automated checks are among the least powerful and useful ways that we can use programming to help us gain insight about a product, and to help us test. Use programming for data generation; varying, perturbing, and randomizing the data; probing; analysis; visualization; configuration, ... We can use tools far more powerfully than some people believe.
Try replacing "use cases" with "use cases AND misuse cases AND abuse cases AND abstruse cases." Also remember that some people are obtuse sometimes, so think about "obtuse cases" too.
Try replacing "measurement" with "assessment". Most of what goes on in complex, cognitive, social domains (like software development and testing) can't be measured easily in valid and reliable ways. But such things can be assessed reasonably.
Try replacing "KPIs and KLoCs and 'measuring quality'" with something far more important: learning from every bug. Please don't reduce engineering to score-keeping.
Try replacing "preventing bugs" with "identifying bugs (and the earlier the better)". We testers cannot prevent bugs; whether the bug is in running code, a design, a story, or an idea that someone gives us to test, the bug is there when we encounter it. But we can identify a bug in anything that anyone give us to test, evaluate, or analyze. Then the people who are producing the product can deal with the problem before it goes any further.
Try replacing "AI might put me out of a job!" with "AI is going to need even more critical evaluation than stuff for which we have the source code." That behooves us to become better, more critical technologists and social scientists. Let's get on that, stat. See Harry Collins' Artifictional Intelligence; Meredith Broussard's Artificial Unintelligence; Marcus and Davis' Rebooting AI, Virginia Eubanks' Automating Inequality, to name but four good books on the topic
Try replacing ponderous, overly formalized, scripted, procedural test cases with concise charters. A charter is a mission statement that guides a burst of testing activity. Encourage testers to vary their behaviour and to keep reliable, professional notes on what they did. Or try replacing procedural test scripts with a description or diagram of a workflow, and charter testers to report on anything that they find difficult, confusing, annoying, frustrating, surprising, or interesting. Break the test case addiction.
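For illustration (this example is mine, not from any particular project): a charter can be as concise as "Explore the invoice-import workflow with malformed and oversized CSV files to discover how the product copes with data it can't parse." The "explore <target> with <resources> to discover <information>" phrasing comes from Elisabeth Hendrickson's Explore It!; any form that states a mission without scripting the steps will do.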
Try replacing "we have to..." with "we choose too..." Nothing in testing is mandatory in any absolute sense. When you fall into the mistaken belief that you have NO choice, you limit your ability to find problems that matter. Testing is fundamentally an exploratory activity; if we're not going somewhere we've never been before in some sense, we may be confirming or demonstrating, but we're not testing. In order to test well, we need to be able to assert agency.
Try replacing "the user" with something more specific. Consider a novice user (who may be confused); an expert user (who may be sharply critical); a distracted user (who may forget to do necessary things); a disabled user (whose needs might have been overlooked). Want to have some fun with the idea? Pick a TV show, like The Office, or Orange is the New Black, or The Simpsons.
In all of this, the general idea is: try replacing all those tired, empty, folkloric, mythodological phrases and ideas that lead to unhelpfully shallow testing. Replace them with words and approaches that focus testing on finding problems that threaten value.
Now a special appeal to testers. We need to help create better clients for testing. We need to be able to articulate the things I've said here to each other for sure, but we've also got to help our clients understand the significance of these ideas.
In order to do that, we must start to speak and think and study our craft like experts. We must challenge the misbegotten ideas that most people have about testing. To me, this starts with cutting out talk of "manual testing" and "automated testing". Testing isn't manual, and it can't be automated.
No skilled profession allows other people to talk about (for instance) "manual medicine" or "automated medicine"; "manual journalism" or "automated journalism"; "manual programming" or "automated programming"; "manual management" or "automated management". It's ridiculous.
The worst of it is, testers keep referring to their work this way. "I'm a manual tester." No, you're not. You're a TESTER. You use tools; that's normal. Maybe you don't write programs, but so what? Most doctors don't build blood assay machines either. But they do use tools, and they learn to use them powerfully and skillfully.
It's typically a good idea to have programming skills if you're working with computers. I'd recommend learning about programming, even if you only learn to read code without writing it. For testers, consider Everyday Scripting in Ruby: For Teams, Testers, and You, by Brian Marick. It's old, but in a way it's timeless; it gets straight to the important point of developing useful little tools for testing. But if you're disinclined to learn to program, don't. The world already has too many barely-competent coders.
If you need code written to help you test, learn to code or ask for code to be written. But please don't call yourself or others a "manual tester". You're doing complex, cognitive, critical, analytical, scientific work, right? You're a researcher; an experimenter; an explorer. No one speaks of "manual researchers", "manual experimenters", or "manual explorers". Need a substitute for "manual tester"? Try replacing "manual tester" with TESTER.
And finally, some words from various sponsors. I teach individuals and organizations to perform fast, inexpensive testing that completely fulfills the primary mission of testing: finding problems that threaten value. I also help managers, developers, testers, and groups to solve testing problems that they didn't realize they could solve. You can contact me here, or you can check out my blog.
See also James Bach's blog.
We're performing a first-time-ever experiment March 25-27, 2020. We're taking Rapid Software Testing Explored online. Register here.
We're also presenting Rapid Software Testing for Managers online March 30-31, 2020. You could join yourself; you could also tell your manager about it.
In the coming weeks and months, we'll be providing lots more stuff online. Stay tuned!