MEASURING THE QUALITY METRICS

The most challenging and ambiguous task I have ever encountered in testing is gathering quality metrics and sharing them with stakeholders (everyone who has ownership in the project, including developers, QA, and PMs). Setting the right parameters up front enables high-performance delivery across the team. So, after years of experience, I've customised a high-level metrics-gathering approach with defined parameters that help me deliver my best.

I prefer results-driven metrics, and I've divided quality-metrics gathering into three levels.

Level 1: The Core

·     Test run summary: It serves as the "quality gate" for the application. The prime objective of this document is to capture the details and activities of the testing performed for the project.

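
As a rough sketch of how such a summary might be aggregated (the `results` list and status names here are illustrative assumptions, not part of the original article):

```python
from collections import Counter

def run_summary(results):
    """Aggregate raw test outcomes into a pass/fail/blocked summary."""
    counts = Counter(results)
    total = len(results)
    passed = counts.get("pass", 0)
    return {
        "total": total,
        "passed": passed,
        "failed": counts.get("fail", 0),
        "blocked": counts.get("blocked", 0),
        # Pass rate is the headline number a quality gate checks.
        "pass_rate": round(100 * passed / total, 1) if total else 0.0,
    }

# Example: one sprint's raw results
summary = run_summary(["pass", "pass", "fail", "pass", "blocked"])
print(summary)  # {'total': 5, 'passed': 3, 'failed': 1, 'blocked': 1, 'pass_rate': 60.0}
```

A gate can then simply compare `summary["pass_rate"]` against an agreed threshold before sign-off.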

·     Defect + Status + Priority + Severity: This helps us understand what should be prioritised.

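
One way to turn those four fields into a fix order is to rank open defects by priority, then severity; the defect records and field names below are illustrative assumptions:

```python
# Hypothetical defect records; IDs and field values are made up for illustration.
defects = [
    {"id": "D-1", "status": "open",   "priority": "low",    "severity": "minor"},
    {"id": "D-2", "status": "open",   "priority": "high",   "severity": "critical"},
    {"id": "D-3", "status": "closed", "priority": "high",   "severity": "major"},
    {"id": "D-4", "status": "open",   "priority": "medium", "severity": "major"},
]

PRIORITY = {"high": 0, "medium": 1, "low": 2}
SEVERITY = {"critical": 0, "major": 1, "minor": 2}

# Only open defects matter for the fix queue; sort by priority, then severity.
fix_order = sorted(
    (d for d in defects if d["status"] == "open"),
    key=lambda d: (PRIORITY[d["priority"]], SEVERITY[d["severity"]]),
)
print([d["id"] for d in fix_order])  # ['D-2', 'D-4', 'D-1']
```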

Level 2: Required

·     Execution on a week / sprint basis: This helps in aligning the team and understanding how to utilise resources better.

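
A minimal sketch of bucketing executions per week (the execution log and dates are illustrative assumptions):

```python
from collections import Counter
from datetime import date

# Hypothetical execution log: (test id, run date).
executions = [
    ("T-1", date(2023, 1, 2)),
    ("T-2", date(2023, 1, 3)),
    ("T-3", date(2023, 1, 9)),
    ("T-1", date(2023, 1, 10)),
    ("T-4", date(2023, 1, 10)),
]

# Bucket runs by ISO week number to see how evenly effort is spread.
per_week = Counter(d.isocalendar()[1] for _, d in executions)
print(dict(per_week))  # {1: 2, 2: 3}
```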

·     Results per requirement: the tests we have and the requirement each of them covers.

For example, 90% of my testing revolves around Requirement 1, and only 5% revolves around Requirement 2 and Requirement 3. This helps us see whether we are over-testing or under-testing, and lets us trace each test back to its requirement.

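
Given a mapping of tests to requirements, those percentages fall out of a simple count; the mapping below is an illustrative assumption:

```python
from collections import Counter

# Hypothetical traceability mapping: which requirement each test covers.
test_to_req = {
    "T-1": "REQ-1", "T-2": "REQ-1", "T-3": "REQ-1",
    "T-4": "REQ-1", "T-5": "REQ-2", "T-6": "REQ-3",
}

counts = Counter(test_to_req.values())
total = len(test_to_req)
# Percentage of the suite devoted to each requirement.
share = {req: round(100 * n / total, 1) for req, n in counts.items()}
print(share)  # {'REQ-1': 66.7, 'REQ-2': 16.7, 'REQ-3': 16.7}
```

A skewed distribution like this is the signal for over-testing one requirement and under-testing the others.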

·     Defect Density: This helps in identifying the focus area: which application (or module) has the maximum defects and where testing should be concentrated.

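
Defect density is commonly expressed as defects per KLOC (thousand lines of code); the module names and figures below are illustrative assumptions:

```python
# Hypothetical modules with their open-defect counts and code size.
modules = {
    "checkout": {"defects": 18, "loc": 12_000},
    "search":   {"defects": 4,  "loc": 20_000},
}

# Density = defects / KLOC; the highest-density module is the focus area.
density = {
    name: round(m["defects"] / (m["loc"] / 1000), 2)
    for name, m in modules.items()
}
print(density)  # {'checkout': 1.5, 'search': 0.2}
```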

Level 3: Something Extra

·     Manual vs Automated: Check whether any manual test can be moved to automation; not every test can be automated, but an automated smoke-test suite can be created. Which tests have more value from a business perspective?
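
Tracking this as a simple ratio over the test inventory is one option; the `automated` flag and test IDs below are illustrative assumptions:

```python
# Hypothetical test inventory with an automation flag per test.
tests = [
    {"id": "T-1", "automated": True},
    {"id": "T-2", "automated": True},
    {"id": "T-3", "automated": False},
    {"id": "T-4", "automated": False},
    {"id": "T-5", "automated": False},
]

# Automation coverage: how much of the suite runs without a human.
automated = sum(t["automated"] for t in tests)
ratio = round(100 * automated / len(tests), 1)
print(f"{automated}/{len(tests)} automated ({ratio}%)")  # 2/5 automated (40.0%)
```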

·     Last Run: Keeping a log of each run helps in deciding whether to keep a test or retire it. During development I've seen an entire workflow change or get updated because of a change in requirements. Knowing when each test last ran tells us what can be done to make our test suite better.
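
A last-run log can flag stale tests for review; the log, the reference date, and the 90-day cutoff below are all illustrative assumptions:

```python
from datetime import date, timedelta

# Hypothetical last-run log per test.
last_run = {
    "T-1": date(2023, 6, 1),
    "T-2": date(2023, 1, 15),
    "T-3": date(2023, 5, 20),
}

today = date(2023, 6, 10)
# Tests not executed in the last 90 days are candidates for review or retirement.
stale = [t for t, d in last_run.items() if today - d > timedelta(days=90)]
print(stale)  # ['T-2']
```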

·     Flapping: This means running the same test on different machines, variables, environments, or browsers: it fails in one environment but passes in another. We can always go back and check whether the test is written correctly, whether anything has changed, and whether the environment and variables we have are affecting the execution. This can be captured in a results-driven graph, and it helps in identifying whether the issue lies with the test case, the source code, or perhaps even the requirement itself.
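
Flapping tests can be spotted by grouping per-environment results and flagging any test with mixed outcomes; the run records below are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical per-environment results: (test, environment, outcome).
runs = [
    ("T-1", "chrome",  "pass"),
    ("T-1", "firefox", "fail"),
    ("T-2", "chrome",  "pass"),
    ("T-2", "firefox", "pass"),
]

outcomes = defaultdict(set)
for test, env, result in runs:
    outcomes[test].add(result)

# A test that both passes and fails across environments is flapping.
flapping = sorted(t for t, o in outcomes.items() if len(o) > 1)
print(flapping)  # ['T-1']
```

Each flagged test is then a prompt to inspect the test code, the source code, or the requirement itself, as described above.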


https://www.qualitykoder.com/measuring-the-quality-metrics/

