Testers shouldn’t want to verify that a program runs correctly
Nilanjan Bhattacharya
Technical Test Manager/lead for complex software products (cybersecurity, CAD, low code). Created and mentored test teams on par with the best. Public articles show my passion and thinking.
(This is a quote from the book, 'Testing Computer Software')
If you think your task is to find problems, you will look harder for them than if you think your task is to verify that the program has none (Myers, 1979). It is a standard finding in psychological research that people tend to see what they expect to see. For example, proofreading is so hard because you expect to see words spelled correctly. Your mind makes the corrections automatically.
Even in making judgments as basic as whether you saw something, your expectations and motivation influence what you see and what you report seeing. For example, imagine participating in the following experiment, which is typical of signal detectability research (Green & Swets, 1966). Watch a radar screen and look for a certain blip. Report the blip whenever you see it. Practice hard. Make sure you know what to look for. Pay attention. Try to be as accurate as possible. If you expect to see many blips, or if you get a big reward for reporting blips when you see them, you'll see and report more of them, including blips that weren't there ("false alarms"). If you believe there won't be many blips, or if you're punished for false alarms, you'll miss blips that did appear on the screen ("misses").
It took experimental psychologists about 80 years of bitter experience to stop blaming experimental subjects for making mistakes in these types of experiments and realize that the researcher's own attitude and experimental setup had a big effect on the proportions of false alarms and misses.
If you expect to find many bugs, and you're praised or rewarded for finding them, you'll find plenty. A few will be false alarms. If you expect the program to work correctly, or if people complain when you find problems and punish you for false alarms, you'll miss many real problems.
Another distressing finding is that trained, conscientious, intelligent experimenters unconsciously bias their tests, avoid running experiments that might cause trouble for their theories, misanalyze, misinterpret, and ignore test results that show their ideas are wrong (Rosenthal, 1966).
If you want and expect a program to work, you will be more likely to see a working program; you will miss failures. If you expect it to fail, you'll be more likely to see the problems. If you are punished for reporting failures, you will miss failures. You won't only fail to report them; you will not notice them.
You will do your best work if you think of your task as proving that the program is no good. You are well advised to adopt a thoroughly destructive attitude toward the program. You should want it to fail, you should expect it to fail, and you should concentrate on finding test cases that show its failures. This is a harsh attitude. It is essential.
Testing Computer Software, Page 24, Objectives and Limits of Testing, Cem Kaner, Jack Falk, Hung Quoc Nguyen
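The signal-detection tradeoff the quoted passage describes can be sketched in a few lines of Python. This is an illustrative simulation I'm adding, not something from the book: an observer reports a "blip" whenever the evidence exceeds a decision criterion, and shifting that criterion (lax when rewarded for reporting, strict when punished for false alarms) trades misses against false alarms exactly as Green & Swets found.

```python
import random

random.seed(0)

def detection_rates(criterion, n=10_000, signal_mean=1.0, noise_mean=0.0, sd=1.0):
    """Simulate a yes/no detection task.

    On signal trials the evidence is drawn from a 'signal' distribution,
    on noise trials from a 'noise' distribution; the observer says
    "blip" whenever the evidence exceeds their criterion.
    Returns (hit rate, false-alarm rate).
    """
    hits = sum(random.gauss(signal_mean, sd) > criterion for _ in range(n)) / n
    false_alarms = sum(random.gauss(noise_mean, sd) > criterion for _ in range(n)) / n
    return hits, false_alarms

# Lax criterion: the observer expects many blips / is rewarded for reporting.
lax_hits, lax_fa = detection_rates(criterion=0.0)

# Strict criterion: the observer is punished for false alarms.
strict_hits, strict_fa = detection_rates(criterion=1.5)

print(f"lax:    hits={lax_hits:.2f}  false alarms={lax_fa:.2f}")
print(f"strict: hits={strict_hits:.2f}  false alarms={strict_fa:.2f}")
```

The observer's accuracy (the separation of the two distributions) never changes; only the criterion moves. That is the point of the passage: a tester rewarded for finding bugs reports more of them, real and false, while a tester punished for "false alarms" quietly stops seeing real failures.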
The best testers are the GENERAL PUBLIC in volume
The issue is that the people who write these things assume that EVERYONE knows what they are doing, whereas they only know a bit
Director of Quality Assurance at Omatic
I don't think we should be positive or negative. I think the terminology is ill-formed and often provides excuses for certain types of behavior. I talked about that here for those curious: https://testerstories.com/2013/09/dont-be-so-negative-or-positive/

In reality, you do need tests that verify something works as you expect. Confirmation is a huge part of experimentation and testing. You also need tests that probe to see what ways something doesn't work as you expect. Falsification is a huge part of experimentation and testing. You need to consider valid and invalid conditions. You need to consider those conditions under various forms of operation (such as volume, load, stress, intensity, etc.). Implausification is a huge part of experimentation and testing. (Implausification is a huge way to counter the bias that you mention, which I entirely agree with.)

Note how, as in most things, it's not an either-or. It's an and. Too much test thinking, and thus test expression, in my view falls under the idea of the Tyranny of the Or. What helps is if we frame it as emphasis. I agree that our emphasis should first be on showing something does not work or may not work (under certain conditions).