"Fail Probability of test case #7 is 69%!"
During test execution, wouldn't a statement like the one above, for each pending test case, be helpful?
Also, wouldn't it be even more helpful if these probabilities are revised automatically every time a test case is executed and its outcome is recorded?
Well, I tried to summarize some of the benefits below.
- Test Prioritization: A fail probability assigned to each test case would enable testers to order the execution so that defects (especially critical and high-severity ones) are identified at the earliest, giving developers enough time to fix them.
- Stopping Rule: Rules like "stop testing if all remaining test cases have fail probabilities < 10%" could be defined before test execution starts. This would help the testing team avoid over-testing by deciding objectively and quantitatively when to stop, and it would also help determine the extent of regression testing for a release (a small sketch of prioritization and this rule follows the list).
- Effort Estimation: When estimating the testing effort for a project and the number of resources required, a fixed percentage (e.g., 15%) of the total testing effort is usually assumed for defect re-testing. Most of the time we under-estimate it and end up in a tremendous time crunch towards the end of testing. With these fail probabilities, defect re-testing effort could be estimated more accurately.
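Here is a minimal sketch of how the first two ideas might look in practice. The TestCase fields, the severity ranking, and the 10% threshold are hypothetical assumptions for illustration, not part of any actual framework:

```python
# Minimal sketch: prioritize test cases by fail probability and apply a stopping rule.
# The TestCase fields, severity ranking, and threshold below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    fail_probability: float  # revised after every recorded execution outcome
    severity: str            # e.g. "critical", "high", "medium", "low"

def prioritize(test_cases):
    """Order execution so the most severe, most likely-to-fail cases run first."""
    severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(
        test_cases,
        key=lambda tc: (severity_rank.get(tc.severity, 4), -tc.fail_probability),
    )

def should_stop(remaining, threshold=0.10):
    """Stopping rule: stop if every remaining case is below the fail-probability threshold."""
    return all(tc.fail_probability < threshold for tc in remaining)

pending = [
    TestCase("TC-3", 0.05, "low"),
    TestCase("TC-7", 0.69, "high"),
    TestCase("TC-12", 0.31, "critical"),
]
for tc in prioritize(pending):
    print(f"{tc.name}: fail probability {tc.fail_probability:.0%} ({tc.severity})")
print("Stop testing?", should_stop(pending))
```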
Makes sense?
Now, the question is: is it possible to calculate these fail probabilities?
And if so, how?
Very recently I helped a friend analyze a completely different problem (not even related to software testing). An organization shared a list of their employees, and our task was to calculate the probability of attrition for each employee, based on demographic information, salary information, and survey responses, so that the organization could take steps to prevent attrition.
This analysis was done using statistical models, and their predictive accuracy was very high.
And while doing this analysis, I discovered something else that would help in software quality assurance!
I found that determining fail probabilities for test cases at execution time, based on various attributes of the test cases, is exactly the same problem!
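As a rough illustration only (the attributes, the data, and the choice of model here are hypothetical assumptions, not a description of any particular framework), a simple classifier such as logistic regression fit on test-case attributes could produce such probabilities:

```python
# Illustrative sketch: fit a logistic regression on hypothetical test-case attributes
# (past failure count, number of linked requirements, recent code churn in the
# covered module) to estimate each pending test case's fail probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical executions: one row per test-case execution.
# Columns: [past_failure_count, linked_requirements, code_churn_last_sprint]
X_history = np.array([
    [0, 3, 10],
    [2, 5, 120],
    [0, 1, 5],
    [3, 4, 200],
    [1, 2, 60],
    [4, 6, 300],
])
y_history = np.array([0, 1, 0, 1, 0, 1])  # 1 = the test case failed

model = LogisticRegression().fit(X_history, y_history)

# Pending test cases: report fail probabilities before execution.
X_pending = np.array([
    [2, 4, 150],   # hypothetical attributes of "test case #7"
    [0, 1, 8],
])
for name, p in zip(["TC-7", "TC-3"], model.predict_proba(X_pending)[:, 1]):
    print(f"{name}: fail probability {p:.0%}")

# After each execution, append the new outcome to the history and refit,
# so the probabilities are revised as testing progresses.
```

Any model that outputs probabilities would do here; the point of the analogy is that recorded execution outcomes become the training data, just as demographic and survey data did in the attrition problem.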
We, at Testing Algorithms, are working on creating a framework where the fail probability of test cases (generated by our patent-pending automated requirement analysis and test case design solution) can be automatically calculated and revised during test execution.
If you are interested in knowing more, feel free to contact us. We would be happy to talk to you about this.
QA Architect | Test Automation, Scrum Certified
I agree with Shrini here: a test with a fail probability close to 0% can still have a bug waiting for it due to updated code, and this kind of predictive technique would just let that bug slip through. Such techniques would be highly useful in projects where the code is baselined and not being updated much.
Post Grad Student @ CIM | Digital Marketing, Customer Experience
Interesting. We recently completed an automated testing process for one of the world's largest CPG companies; it returned 80% automation, compared to the 27% they saw before the exercise.
Incubating the Future Now
Okay, statistically speaking, I see where you are going with this. A skilled tester may say they use their 'intuition' about which areas are likely to fail, so they target those areas. How would statistics calculated by a machine predict test failures in a way that is accurate, precise, actionable, and able to deal with human coding mistakes that are unknowable before testing?
Award-winning Educator at Saint Louis University
I agree with you. Thanks for sharing your thoughts!
Director of Engineering @ Ada Health | Innovating & Evolving Leadership | Engineering, Quality, Regulatory, Agile, Culture, Strategy, Coaching | Mental Health First Aider | International Speaker
Interesting read. It's worth not forgetting that "test cases" probably only make up around 20% of testing though - they are focussed on checking our explicit expectations of the software, but we also utilise "test charters" for our investigative testing. And we also don't just do investigative testing of the software, but we do investigative testing of the designs and the idea of the software too - "test cases" don't help us with this... But for that bit of testing that we do use test cases for, I can see how predictions can be useful, but with the realisation that they are like estimates - they change frequently based on us uncovering more information.