A Critique of Some Popular Testing Principles

Principles are an important part of how we think about testing. They are good ideas. But taken literally, they can suffer from the following problems:

  • Following them blindly can turn into dogma.
  • They can be too simplistic at times.
  • They sometimes contradict each other.
  • They are often found in "X Principles of Testing" lists, and testers sometimes assume those are the only principles.

This article turned out a little longer than I had anticipated. It took me about 2 hours to write, which is more than I have spent on any article in the last 2 months, but I didn't want to split it into multiple pieces. In each section, I will list important Think Words.

Note: If you know the source of a principle or if you find a problem in attribution, please let me know. I'll be happy to correct it.

1. Testing can show only presence of bugs

Language is an ambiguous thing.

The wording of this principle is ambiguous if taken literally.

Testing can show the presence of a bug. The bug gets fixed. Isn't the follow-up testing (re-testing/confirmation testing - pick your term) aimed at showing the absence of this bug, even though it might still end up confirming its presence?

The above is NOT the purpose of this principle. Let's reword it a little so it means what it is meant to mean (a little better, at least):

Testing can show only the presence of SOME bugs, not the absence of OTHER bugs.

Hence, it is a case against "Bug-Free Objects". The phrase "Bug-Free Software" is sadly still seen in the wild.

The principle is attributed to Edsger Dijkstra. I located the original text, as cited on Wikipedia:

"Program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence." - The Humble Programmer, ACM Turing Lecture 1972

There is a related statement also attributed to him:

"If you want more effective programmers, you will discover that they should not waste their time debugging, they should not introduce the bugs to start with." - The Humble Programmer, 1972

I see a slight contradiction.

The first quote highlights imperfection in testing as a default. The second quote sets perfection as a target for programming. The latter is the same as saying that testing should never miss a bug.

I'd earlier written about my Infinity Model of Imperfect Quality. I am including it here for your reference again:

[Image: The Infinity Model of Imperfect Quality]

I think the following is a better combined version of what we are discussing here:

Programmers will continue to introduce bugs. Testers will continue to miss bugs. Both are imperfect humans dealt an imperfect hand. What matters is what kind of bugs are introduced or missed, how often, what we do about it, and whether there is improvement with experience.
A related concept is the problem of False Negatives - problems which exist but were not found by humans/tools. Sadly, organisations spend so much energy on False Positives that this area does not get the focus it deserves.
Think word(s): Imperfection

2. Exhaustive Testing is Impossible

It's often used to convey that testing all values of a variable, and all its combinations and relations with the values of other variables, is not possible.
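
A back-of-the-envelope calculation makes the point concrete. The numbers below (three 32-bit inputs, a billion tests per second) are illustrative assumptions, not from any specific system:

```python
# Why exhaustive input testing is impossible: even a tiny function with
# three independent 32-bit integer inputs has an astronomical input space.

VALUES_PER_INT32 = 2 ** 32             # distinct values of one 32-bit integer
combinations = VALUES_PER_INT32 ** 3   # three independent inputs

TESTS_PER_SECOND = 1_000_000_000       # an optimistic billion tests per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years_needed = combinations / (TESTS_PER_SECOND * SECONDS_PER_YEAR)
print(f"{combinations:.3e} combinations, roughly {years_needed:.1e} years to run")
```

Even at that optimistic rate, the run time is measured in trillions of years - and this is before considering outcomes and observed variables.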

It's less often related to the infinity of outcomes and observed variables, though I think that needs to be kept in mind in the larger scope of what testing is about. Exhaustive testing cannot be discussed by eliminating outcomes from the equation.

Understanding 0 is simple. Doing nothing is not a solution. We need to do something.

Understanding infinity is simple too - it is unachievable.

The trick is in how you do something and how you make it meaningful.

Although the principle is important, by itself it is too simple to explain the complexities of testing, and in the worst case it can be used as an excuse to test without proper thinking - because, after all, we cannot do it all. This also complicates explaining what you do as a tester and why.

A closer reality is that as you learn, get better at testing and want to do better, there is, within the space between 0 and infinity, a practical threshold which is not "exhaustive" but is still infeasible to achieve because of cost and time constraints. That's where consideration of context and risk is of utmost importance.

An example: when you think of special characters using Equivalence Class Partitioning (testing for functionality), you typically put them all in the same partition. However, when you think from a security perspective, special characters are, well, special :-). They have different meanings, and even their sequence has different considerations in different attacks and corresponding payloads. I'll write about this in the #TestingNuggets series some day in detail.
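
A minimal sketch of that contrast (the partition names and security annotations below are illustrative, not from any real tool or taxonomy):

```python
# Functionally, a coarse Equivalence Class Partitioning view often lumps all
# special characters into one partition; a security view treats each one
# differently because each carries its own meaning to a parser or interpreter.

def functional_partition(ch: str) -> str:
    """Coarse ECP view: letters, digits, everything else."""
    if ch.isalpha():
        return "letters"
    if ch.isdigit():
        return "digits"
    return "special"  # all special characters lumped together

SECURITY_MEANING = {  # the same characters, seen through an attacker's eyes
    "'": "SQL string delimiter (SQL injection)",
    "<": "tag opener (XSS)",
    ";": "statement separator (command injection)",
    "%": "URL-encoding escape (filter evasion)",
}

for ch in "'<;%":
    print(ch, "->", functional_partition(ch), "|", SECURITY_MEANING[ch])
```

All four characters land in the same functional partition, yet each would need its own security-focused test.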

Simplicity is preferable. Complexity needs to be embraced. Too simplistic is worse than too complex if the problem is not solved.
Think word(s): Context, Risk, Coverage, Simplicity, Complexity

3. Test Early

This is often associated with the exponential increase in the cost of fixing a problem as it leaks from its point of origin into later stages.

This one in my opinion is the most misunderstood principle when taken at its face value.

Testing too Early can be wrong and costly too.

A better way of thinking about it is:

Test Early but not too early

Still better:

Do the right testing at the right time.

I see so many misplaced expectations in this area. Throwing in terms like "Shift Left", especially in a world where dogmatic Agile is prevalent (rather than the philosophy of Agile, which is beautiful as I see it), only worsens the problem.

Questions:

  • Do you go for performance testing of a functionally unstable application?
  • Do you go for security testing (especially using a tool) without a basically performant application in place, without realising that such a test needs one?
  • Do you try finding bugs with costly techniques just because you want to test early?

What is to be kept in mind is:

The meaning of Early is contextual.
Think Word(s): Pragmatism, Context

4. Beware of Pesticide Paradox

Pesticide Paradox is a phrase most testers know. It was coined by Boris Beizer and, to paraphrase him in a more general manner: when you do good work finding problems using a method X, the next time method X does not deliver the same output - provided suitable actions are taken based on the problem patterns reported. That's the paradox part, I guess: good work in testing gives worse results with each iteration, rather than the common thinking that good work pays off.

Pesticide Paradox is often used to pass a constructive message: we need to keep exploring new methods of testing. That's critical. However, it is also often used to argue against repeating existing methods.

By itself it's not specifically a commentary on automated testing, as a human tester repeating the exact same method over and over will also suffer from the paradox - which is often put forward as a point against written-down test cases.

A side note: as testers, we take repetition for granted. It is often belittled. I'll write a separate article on how repetition is in fact a complex activity at times, and how you will struggle to achieve it when needed.
Repetition is not the problem. Blind repetition - repeating without focusing on repetition, or without being knowledgeable enough about it to achieve it effectively and efficiently - is the problem, in my opinion.
In my opinion, the Exploration-Exploitation Paradox is a better and more balanced way of looking at this aspect of testing than the Pesticide Paradox alone, as it acknowledges the value of repetition along with the role of exploration.

I've written about this in detail in my post The Ambidexterity Continuum of Testing, where I discuss the Exploration-Exploitation paradox. Here's the continuum diagram from the article:

[Image: The Ambidexterity Continuum of Testing]
In short, repetition, and the strategies to achieve it in a cost- and time-effective way, are important along with exploration - finding new and better ways. Their relative importance changes at different points in time for a context.
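
One classic formalisation of the exploration-exploitation trade-off is the epsilon-greedy strategy from the multi-armed bandit literature. The sketch below is only an analogy for the continuum above - the technique names and yield numbers are hypothetical:

```python
# Epsilon-greedy: mostly repeat (exploit) the technique with the best
# bug-finding yield so far, but occasionally try (explore) a random one.
import random

def choose_technique(yields: dict, epsilon: float = 0.2) -> str:
    """With probability epsilon explore a random technique;
    otherwise exploit the best-yielding one."""
    if random.random() < epsilon:
        return random.choice(list(yields))   # explore
    return max(yields, key=yields.get)       # exploit

# Hypothetical bug-finding yield per technique observed so far
yields = {"boundary-values": 0.30, "exploratory-tour": 0.55, "soak-test": 0.10}

random.seed(7)  # deterministic for the demonstration
picks = [choose_technique(yields) for _ in range(1000)]
print({t: picks.count(t) for t in yields})
```

With epsilon at 0.2, about 80% of sessions repeat the best-known technique while the rest keep probing for something better - the "relative importance" dial the continuum talks about is exactly this epsilon.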
Think-Word(s): Repetition, Exploration, Exploitation

5. Defect Clustering

This principle is not often discussed but is still widely known: defects love to exist in groups. Hence, if you find a defect in a module, you might want to change your execution plan/strategy/risk assessment to focus on that module, as the chances of more defects existing there are higher than in a module where you didn't find anything.
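
Taken literally, that reading amounts to reallocating effort in proportion to defects found so far. A naive sketch of that literal reading (module names, counts and hours are hypothetical):

```python
# A naive, literal application of defect clustering: split remaining test
# hours across modules in proportion to defects found so far.

def allocate_effort(defects_found: dict, total_hours: int) -> dict:
    """Weight each module by its defect count; the +1 smoothing keeps a
    module with zero defects so far from losing all test effort."""
    weights = {m: count + 1 for m, count in defects_found.items()}
    total = sum(weights.values())
    return {m: round(total_hours * w / total) for m, w in weights.items()}

defects_found = {"billing": 9, "reports": 4, "settings": 0}
print(allocate_effort(defects_found, total_hours=40))
```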

Come on now. It can't be that simple.

What if you are not finding problems in a module because of the Pesticide Paradox? What if the defect you found is a random one you just happened to come across? What is the nature of this defect? What is the focus of the rest of the testing at that point in time? What other considerations are there?

Defect Clustering is a possibility. It needs to be evaluated in context just like other things.

I look at defect clustering more from the perspective of Pattern Thinking. A certain kind of person, a style of programming, group thinking, etc. lead to certain kinds of problems being introduced in test objects. If a defect indicates such a pattern, defect clustering can be a good principle to follow; otherwise it will introduce chaos, resulting in missed opportunities.

Think Word(s): Context, Contradiction, Pattern Thinking

6. Absence-of-Errors Fallacy

The principle highlights the difference between Internal and External Quality interpretations. However well you think about your quality management, and however high the quality of your product, it does not mean users are going to like it. It looks very similar to the "Testing can show only presence of bugs" principle, but I interpret it a little differently - in the context of Quality rather than testing.

I'll put my infinity model here again to highlight this:

[Image: The Infinity Model of Imperfect Quality]

The two quality cages in the above model highlight all that can go wrong and how Quality itself is just a concept.

We don't and can't work on everything that constitutes Quality for all users. We work towards what we think they need, what we want, what we can afford, what we can achieve. That is still just a thought, and anything and everything can go wrong when we act on these thoughts.
What is Quality? Quality is happiness. So it is as complex and vague to define, act on and measure as happiness.
Think Words: Imperfection, Translation, Interpretation, Constraints, Perspective

7. Testing is Context-Dependent

Note: There are concerns about the use of the suffix "dependent" vs "based" vs "aware" vs "driven", etc. You can read this in-depth article by Cem Kaner if you are interested in these finer differences. To me, my concept of Pluralism in testing explains this thought process, as I am not good at processing finer differences between words (here's my article on this: Pluralism and the Infinite Schools of Testing).

I remember conversations with the context-driven community during the early years of my career. They said context should drive testing. My concern was why it was being called context-driven testing at all. If we don't take context as a core ingredient of testing, is it still testing, or an utter disservice? Considering context should be non-negotiable.

And No. Testing is NOT context-dependent. Rather:

Testing MUST be Context-Dependent.

The way this principle is phrased, especially in ISTQB texts, makes it sound like you don't have to do anything.

You'll need to make conscious efforts to respect the context.

The only problem? Whose context?

What you think is the context is NOT the context; it is your interpretation of the context. Others have interpretations too, which may contradict or constrain pieces of your own. Whose context will win? At any point in time, in the best case, you are considering an amalgamated interpretation of the context, with the hope that it is a good overall interpretation.

On the same lines, although one can think of risk assessment as part of the context definition, it helps me to keep them separate. So, here's another thought:

Testing MUST be Risk-Dependent.

This is in contrast to declaring Risk-based testing as another term.

I capture this at the heart of My Quality Cage in my Infinity Model, along with Value and Perception, as seen below:

[Image: My Quality Cage from the Infinity Model]


Think Word(s): Context, Risk, Value, Interpretation, Pluralism, Perception

Some Further Thoughts

To me, subjecting the principles to critical thinking, just like everything else, is more rewarding than the principles themselves.

Principles are heuristics just like everything else.

Think-Words help me much more than established definitions and wordings, as they help me in uncaging and translation. Here are the Think-Words from this article:

  • Imperfection
  • Context
  • Risk
  • Coverage
  • Simplicity
  • Complexity
  • Pragmatism
  • Repetition
  • Exploration
  • Exploitation
  • Contradiction
  • Pattern Thinking
  • Translation
  • Interpretation
  • Constraint
  • Perspective
  • Value
  • Pluralism
  • Perception

Think about these words and find out for yourself if some of them trigger thoughts in your mind as well.

That's all for now.

Some related articles:

20 Years of Contradictions

Pluralism and the Infinite Testing Schools

The Infinity Model of Imperfect Quality for Fallible Testers

An Undefinition - What is a Test?

AEIOU - The Vowel Model of Thinking for Pluralistic Testing

Handling Complexity in Testing - What's Your Slice?

The Ambidexterity Continuum of Testing

Uncaging, the Perpetual Translation Engine and the Deltas

Think-Words for Testers

Metacognition - Biases, Problems, Abstractions and Variables

The Other Shade of Feedback and Staying Happy Nevertheless

Uncaging the Types of Testing
