MEASURING THE QUALITY METRICS
Siddharth Pandey
Engineering Lead | Expert in Test Automation & DevOps | Gen AI Explorer | Driving CI/CD, Cloud Transformation, and Product Quality
The most challenging and ambiguous thing I have ever encountered in testing is gathering quality metrics and sharing them with stakeholders (everyone who has ownership in the project, including Developers, QA, and PMs). Setting up the right parameters always enables high-performance delivery within the team. So, after years of experience, I’ve customised my high-level metrics gathering around a few defined parameters that help me deliver the best results.
I prefer results-driven metrics, and I’ve divided quality metrics gathering into three levels.
Level 1: The Core
· Test run summary: This serves as the “quality gate” for the application; its prime objective is to capture the details and activities of the testing performed for the project
· Defect + Status + Priority + Severity: This helps us understand what should be prioritised (a minimal sketch of both Level 1 metrics follows this list)
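To make these two Level 1 metrics concrete, here is a minimal Python sketch, assuming test results and defects are available as simple records; the field names and the 95% pass-rate gate are illustrative choices, not fixed rules:

```python
from collections import Counter

# Illustrative test results and defects; the field names are assumptions.
test_results = [
    {"name": "login_happy_path", "status": "passed"},
    {"name": "login_wrong_password", "status": "passed"},
    {"name": "checkout_flow", "status": "failed"},
]
defects = [
    {"id": "BUG-101", "status": "Open", "priority": "P1", "severity": "Critical"},
    {"id": "BUG-102", "status": "Open", "priority": "P3", "severity": "Minor"},
    {"id": "BUG-103", "status": "Closed", "priority": "P2", "severity": "Major"},
]

# Test run summary: pass rate used as a simple quality gate.
total = len(test_results)
passed = sum(1 for t in test_results if t["status"] == "passed")
pass_rate = 100.0 * passed / total
print(f"Test run summary: {passed}/{total} passed ({pass_rate:.1f}%)")
print("Quality gate:", "PASS" if pass_rate >= 95.0 else "FAIL")  # threshold is an example

# Defect breakdown by status, priority and severity to guide prioritisation.
open_defects = [d for d in defects if d["status"] == "Open"]
print("Open defects by priority:", Counter(d["priority"] for d in open_defects))
print("Open defects by severity:", Counter(d["severity"] for d in open_defects))
```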
Level 2: Required
· Execution on a weekly / sprint basis: This helps in aligning the team and understanding how to better utilise resources
· Results per requirement: Which tests we have and which requirement each one covers.
For example, if 90% of my testing revolves around Requirement 1 and only 5% around Requirements 2 and 3, that tells me whether we are over-testing or under-testing, and it lets us trace our testing back to the requirements (see the first sketch after this list).
· Defect Density: This helps identify the focus area: which application (or module) has the most defects and where testing should concentrate (see the second sketch after this list).
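A minimal sketch of the results-per-requirement metric, assuming each test is tagged with the requirement it covers; the test names, tags, and the over/under-testing thresholds are illustrative:

```python
from collections import Counter

# Illustrative mapping of tests to the requirement they cover (an assumption:
# in practice this would come from a test management tool or test tags).
test_to_requirement = {
    "test_login": "REQ-1",
    "test_logout": "REQ-1",
    "test_password_reset": "REQ-1",
    "test_checkout": "REQ-2",
    "test_invoice": "REQ-3",
}

coverage = Counter(test_to_requirement.values())
total_tests = sum(coverage.values())
for requirement, count in coverage.most_common():
    share = 100.0 * count / total_tests
    # 50% / 10% are example thresholds for flagging possible over/under-testing.
    flag = "over-tested?" if share > 50 else ("under-tested?" if share < 10 else "")
    print(f"{requirement}: {count} tests ({share:.0f}%) {flag}")
```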
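And a similarly hedged sketch of defect density, computed here as defects per KLOC per module; any consistent size measure (modules, screens, features) would work the same way, and the numbers are made up for illustration:

```python
# Defect density per module: defects divided by module size.
modules = {
    "payments":  {"defects": 12, "kloc": 4.0},
    "login":     {"defects": 3,  "kloc": 6.0},
    "reporting": {"defects": 8,  "kloc": 2.0},
}

densities = {name: m["defects"] / m["kloc"] for name, m in modules.items()}
for name, density in sorted(densities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {density:.1f} defects per KLOC")
# The module at the top of this list is the candidate focus area for testing.
```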
Level 3: Something Extra
· Manual vs Automated: Whether any manual test can be moved to automation; not every test can be automated. We can create an automated smoke test suite, and we should keep asking which tests carry the most value from a business perspective.
· Last Run: Keeping a log of when each test last ran helps decide whether to keep it. During development I’ve seen an entire workflow change or get updated because of a requirement change. Knowing the last run tells us what can be done to make our test suite better (a stale-test sketch follows this list).
· Flapping: This means running the same test on different machines, variables, environments, or browsers, where it fails in one environment but passes in another. We can always go back and check: is the test written correctly, has anything changed, and are our environment and variables affecting the execution? This can be captured in a results-driven graph and helps identify whether the issue lies in the test case, the source code, or perhaps the requirement itself (a detection sketch follows this list).
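A small sketch of the last-run check, assuming we keep a timestamp for each test; the dates and the 90-day staleness window are arbitrary example values:

```python
from datetime import datetime, timedelta

# Illustrative last-run log; in practice this would come from the test runner or CI.
last_run = {
    "test_login": datetime(2020, 5, 1),
    "test_legacy_export": datetime(2019, 8, 15),
    "test_checkout": datetime(2020, 4, 20),
}

stale_after = timedelta(days=90)
now = datetime(2020, 5, 10)  # fixed "today" so the example is reproducible

for test, ran_at in last_run.items():
    if now - ran_at > stale_after:
        print(f"{test}: last ran {ran_at:%Y-%m-%d} - review whether the workflow "
              f"it covers still exists or has changed")
```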
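And a sketch of spotting flapping tests, assuming we log each run with its environment; a test that shows both passes and failures across environments is flagged for review:

```python
from collections import defaultdict

# Illustrative results of the same tests run across different environments/browsers.
runs = [
    ("test_checkout", "chrome-linux", "passed"),
    ("test_checkout", "firefox-windows", "failed"),
    ("test_checkout", "chrome-linux", "passed"),
    ("test_login", "chrome-linux", "passed"),
    ("test_login", "firefox-windows", "passed"),
]

outcomes = defaultdict(set)
for test, environment, status in runs:
    outcomes[test].add((environment, status))

for test, seen in outcomes.items():
    statuses = {status for _, status in seen}
    if len(statuses) > 1:  # both passes and failures seen: a flapping candidate
        print(f"{test} is flapping:")
        for environment, status in sorted(seen):
            print(f"  {environment}: {status}")
```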
#MEASURING THE QUALITY METRICS
https://www.qualitykoder.com/measuring-the-quality-metrics/