Testers shouldn’t want to verify that a program runs correctly
Image generated by Microsoft Designer in response to 'Testers shouldn't want to verify that a program runs correctly'

Testers shouldn’t want to verify that a program runs correctly

(This is a quote from the book 'Testing Computer Software'.)

If you think your task is to find problems, you will look harder for them than if you think your task is to verify that the program has none (Myers, 1979). It is a standard finding in psychological research that people tend to see what they expect to see. For example, proofreading is so hard because you expect to see words spelled correctly. Your mind makes the corrections automatically.

Even in making judgments as basic as whether you saw something, your expectations and motivation influence what you see and what you report seeing. For example, imagine participating in the following experiment, which is typical of signal detectability research (Green & Swets, 1966). Watch a radar screen and look for a certain blip. Report the blip whenever you see it. Practice hard. Make sure you know what to look for. Pay attention. Try to be as accurate as possible. If you expect to see many blips, or if you get a big reward for reporting blips when you see them, you'll see and report more of them, including blips that weren't there ("false alarms"). If you believe there won't be many blips, or if you're punished for false alarms, you'll miss blips that did appear on the screen ("misses").

It took experimental psychologists about 80 years of bitter experience to stop blaming experimental subjects for making mistakes in these types of experiments and realize that the researcher's own attitude and experimental setup had a big effect on the proportions of false alarms and misses.

If you expect to find many bugs, and you're praised or rewarded for finding them, you'll find plenty. A few will be false alarms. If you expect the program to work correctly, or if people complain when you find problems and punish you for false alarms, you'll miss many real problems.
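The criterion trade-off described above can be illustrated with a small simulation. This is not from the book; it is a minimal sketch of a standard signal-detection setup, with the signal strength, trial count, and criterion values chosen purely for illustration. An observer sees noisy evidence on each trial and reports a blip when the evidence crosses a decision criterion. Shifting the criterion trades false alarms against misses, exactly as the text describes.

```python
import random

# Hypothetical simulation (illustrative, not from the article): half of
# the trials contain a faint signal. Perceived evidence is noise alone
# ~N(0, 1), or N(1, 1) when the signal is present. The observer reports
# a blip when evidence exceeds a decision criterion.

def run_session(criterion, trials=10000, seed=0):
    rng = random.Random(seed)
    hits = misses = false_alarms = 0
    for _ in range(trials):
        signal_present = rng.random() < 0.5
        evidence = rng.gauss(1.0 if signal_present else 0.0, 1.0)
        reported = evidence > criterion
        if signal_present and reported:
            hits += 1
        elif signal_present:
            misses += 1
        elif reported:
            false_alarms += 1
    return hits, misses, false_alarms

# A low criterion models the observer who expects (or is rewarded for)
# blips; a high criterion models the observer punished for false alarms.
eager = run_session(criterion=0.0)  # more hits, but more false alarms
wary = run_session(criterion=1.5)   # fewer false alarms, but more misses
```

Neither observer is "more careful" than the other; only the payoff structure differs, which is the book's point about how rewarding or punishing bug reports shifts what testers notice.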

Another distressing finding is that trained, conscientious, intelligent experimenters unconsciously bias their tests, avoid running experiments that might cause trouble for their theories, misanalyze, misinterpret, and ignore test results that show their ideas are wrong (Rosenthal, 1966).

If you want and expect a program to work, you will be more likely to see a working program; you will miss failures. If you expect it to fail, you'll be more likely to see the problems. If you are punished for reporting failures, you will miss failures. You won't just fail to report them; you won't notice them.

You will do your best work if you think of your task as proving that the program is no good. You are well advised to adopt a thoroughly destructive attitude toward the program. You should want it to fail, you should expect it to fail, and you should concentrate on finding test cases that show its failures. This is a harsh attitude. It is essential.

Testing Computer Software, Page 24, Objectives and Limits of Testing, Cem Kaner, Jack Falk, Hung Quoc Nguyen

The best testers are the GENERAL PUBLIC in volume


The issue is that the people who write these things assume that EVERYONE knows what they are doing, whereas they only know a bit

Jeff Nyman

Director of Quality Assurance at Omatic

1 month

I don't think we should be positive or negative. I think the terminology is ill-formed and often provides excuses for certain types of behavior. I talked about that here for those curious: https://testerstories.com/2013/09/dont-be-so-negative-or-positive/.

In reality, you do need tests that verify something works as you expect. Confirmation is a huge part of experimentation and testing. You also need tests that probe to see what ways something doesn't work as you expect. Falsification is a huge part of experimentation and testing.

You need to consider valid and invalid conditions. You need to consider those conditions under various forms of operation (such as volume, load, stress, intensity, etc). Implausification is a huge part of experimentation and testing. (Implausification is a huge way to counter the bias that you mention, which I entirely agree with.)

Note how, as in most things, it's not an either-or. It's an and. Too much test thinking and thus test expression, in my view, falls under the idea of the Tyranny of the Or. What helps is if we frame it as emphasis. I agree that our emphasis should be first on showing something does not work or may not work (under certain conditions).
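The confirmation/falsification distinction in the comment above can be made concrete with a small example. The function and its test cases here are hypothetical, invented purely for illustration: a confirmation test checks that valid input behaves as expected, while falsification tests actively hunt for inputs the code should reject.

```python
# Illustrative sketch (the function and inputs are hypothetical, not
# from the comment): testing a simple parser both ways.

def parse_age(text):
    """Parse a human age from a string; raise ValueError if invalid."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError(f"implausible age: {value}")
    return value

# Confirmation: verify the function works as expected on valid input.
assert parse_age(" 42 ") == 42

# Falsification: actively look for inputs where it should fail,
# and treat silent acceptance of bad input as a test failure.
for bad in ["-1", "200", "forty", ""]:
    try:
        parse_age(bad)
        raise AssertionError(f"accepted invalid input: {bad!r}")
    except ValueError:
        pass  # correctly rejected
```

Both halves are needed: the confirmation test alone would pass even if the range check were missing, which is the kind of gap the falsifying tests are designed to expose.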
