Want to improve or measure your QA process? This might help.
QA teams usually work under strict deadlines and pressure to deliver releases faster. Trying to stay on top of QA tasks can leave the team overburdened and burnt out, which contributes to undiscovered defects. QA also follows a specific procedure within the development cycle, and if that procedure is not streamlined it can push the team in the wrong direction and lead to poor performance and delivery gaps.
There is a popular saying by Peter Drucker: “If you can’t measure it, you can’t improve it.” To improve the QA process, measuring it first is essential. Measuring involves documenting a streamlined set of metrics that provide broad insight into the impact the QA team is having on business goals. To evaluate a QA team’s performance, here are some of the core measurements to focus on.
1. Test Coverage
Test coverage is a measurement of how effectively and thoroughly a set of test cases exercises the product. It is determined by comparing the tests written against the code base and the requirements they cover.
The goal of test coverage is to exercise the entire code base through several types of testing, such as unit, functional, performance, integration, and acceptance testing. Let’s touch on each testing type briefly.
Unit testing verifies that each method/function behaves as expected. It is usually done by the developer during development of the application and helps avoid critical defects. Functional testing verifies each piece of functionality against the requirements; it checks whether every function mentioned in the requirements has been tested. Performance testing exercises the code under different workloads and measures its responsiveness. Integration (or system) testing checks how the product works as one body. When the actual users or stakeholders test the product, it is called acceptance testing.
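To make the distinction concrete, here is a minimal sketch in Python with pytest. The `apply_discount` and `checkout_total` functions are hypothetical stand-ins for application code; the point is only to show a unit test checking one function in isolation versus a functional-style test checking a stated requirement:

```python
import pytest

# Hypothetical application code under test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def checkout_total(items: list[float], discount_percent: float) -> float:
    """Requirement: the order total is the discounted sum of item prices."""
    return apply_discount(sum(items), discount_percent)

# Unit test: verifies one function in isolation, including an edge case.
def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)

# Functional-style test: verifies the behaviour described in the requirement.
def test_checkout_total_matches_requirement():
    assert checkout_total([10.0, 5.0, 5.0], 10.0) == 18.0
```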
Tracking test coverage also helps to:
- Identify which requirements are missing test cases (one way to surface this is sketched after this list)
- Create additional tests to increase coverage
- Spot test cases that are not being used or don’t increase coverage
- See where more resources are being used
- Gain assurance of test quality
- Prevent defect leakage
- Better manage time, scope, and cost
- Highlight the areas to focus on
- Reduce the gaps between requirements, tests, and defects
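A minimal sketch, assuming you keep a simple mapping of requirement IDs to the test cases that cover them (the IDs, test names, and structure below are made up for illustration), showing how such a mapping surfaces requirements with no tests and tests tied to no requirement:

```python
# Hypothetical traceability data.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

test_to_requirements = {
    "test_login_success": {"REQ-1"},
    "test_login_bad_password": {"REQ-1"},
    "test_export_report": {"REQ-3"},
    "test_legacy_widget": set(),  # covers no documented requirement
}

covered = set().union(*test_to_requirements.values())
uncovered_requirements = requirements - covered
orphan_tests = [name for name, reqs in test_to_requirements.items() if not reqs]

print(f"Requirements with no test cases: {sorted(uncovered_requirements)}")
print(f"Tests not tied to any requirement: {orphan_tests}")
print(f"Requirement coverage: {len(covered) / len(requirements):.0%}")
```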
2. Broken tests
Any test that does not provide useful quality feedback should be removed, since it is just a waste of time and resources. There are different reasons a test may need to be removed, such as:
- Tests fail or pass intermittently because of poor test quality or poor execution (flaky tests; a small detection sketch follows this list)
- Tests cover only a low-level corner case
- Tests are no longer valid
- Tests are never executed
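As a rough illustration (the run history below is invented; in practice you would pull it from your CI results), a test that both passes and fails across recent runs with no code change is flaky and is a candidate for repair or removal:

```python
from collections import defaultdict

# Hypothetical outcomes of recent CI runs: (test name, passed?) pairs.
run_history = [
    ("test_checkout", True), ("test_checkout", True),
    ("test_upload", True), ("test_upload", False),
    ("test_upload", True), ("test_upload", False),
    ("test_legacy_report", False), ("test_legacy_report", False),
]

outcomes = defaultdict(set)
for name, passed in run_history:
    outcomes[name].add(passed)

flaky = [name for name, results in outcomes.items() if results == {True, False}]
always_failing = [name for name, results in outcomes.items() if results == {False}]

print(f"Flaky tests to investigate or remove: {flaky}")
print(f"Consistently failing tests (possibly no longer valid): {always_failing}")
```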
3. Time-to-Test
The time the QA team takes to execute a set of tests and deliver the result report is an important indicator of testing-cycle efficiency. It is essentially the turnaround time from the start of a test run to the point when the report is received. Measuring this time helps identify where the test suite is being dragged down. Test cases that consume more time can be automated, or time-consuming tests can be trimmed to cover only the most important pieces of the application and avoid testing minor, low-level pieces.
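One simple way to see where a cycle is being dragged down, sketched with made-up per-test durations (in practice you would pull these from your test runner's timing report):

```python
# Hypothetical per-test durations, in seconds, from one test cycle.
durations = {
    "test_login": 2.1,
    "test_search": 3.4,
    "test_full_catalog_export": 310.0,
    "test_report_rendering": 145.0,
    "test_profile_update": 4.8,
}

total = sum(durations.values())
slowest = sorted(durations.items(), key=lambda item: item[1], reverse=True)

print(f"Total cycle time: {total / 60:.1f} minutes")
print("Candidates for automation, trimming, or splitting:")
for name, seconds in slowest[:3]:
    print(f"  {name}: {seconds:.0f}s ({seconds / total:.0%} of the cycle)")
```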
4. Time-to-Fix
When a broken piece of code is identified, it usually goes into a tracking system (application) where it is recorded and maintained as a defect, issue, ticket, etc. The time taken from when an issue is logged to when it is fixed is another important indicator; it shows how well QA and development are communicating. Reducing this gap (defect logged to defect fixed) increases the efficiency of the team.
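A minimal sketch, assuming the issue tracker can export the logged and fixed dates for each defect (the IDs and dates below are invented):

```python
from datetime import datetime
from statistics import mean

# Hypothetical export from the issue tracker: defect ID, logged on, fixed on.
defects = [
    ("BUG-101", "2024-03-01", "2024-03-03"),
    ("BUG-102", "2024-03-02", "2024-03-10"),
    ("BUG-103", "2024-03-05", "2024-03-06"),
]

def days_to_fix(logged: str, fixed: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(fixed, fmt) - datetime.strptime(logged, fmt)).days

times = [days_to_fix(logged, fixed) for _, logged, fixed in defects]
print(f"Mean time to fix: {mean(times):.1f} days (max {max(times)} days)")
```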
5. Escaped/Production Defects
The percentage of defects found by actual users in production is also an important indicator. It should be tracked, and it is a good measure of QA success.
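One common way to express this is a defect escape rate: the share of all defects in a release that were found in production rather than by QA. A minimal sketch with invented counts:

```python
# Hypothetical defect counts for one release.
found_before_release = 48   # defects caught by QA in test/staging
found_in_production = 6     # defects reported by real users after release

escape_rate = found_in_production / (found_before_release + found_in_production)
print(f"Defect escape rate: {escape_rate:.1%}")
```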
How to reduce the number of production defects:
- Perform aggressive regression testing on the functionality where the code change occurred plus the entire related module. For example, if you change one element of a form, run regression on the entire form and its related components.
- Automate the critical and high-priority components of the app and execute those tests on a daily basis. Also run them in non-production environments to identify defects before they reach production.
- Adopt the habit of frequent code refactoring, especially after examining the software requirements. The intent is to fix improper method and variable names and to collapse repeated code into a single function or method.
- Try not to just fix the bug; instead, come up with procedures and plans to keep it from happening again. For example, add the defect as a test case (a small sketch of this follows the list), and if the defect relates to common code that keeps being changed, advise the QA team to pull the related test cases into each release.
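As an illustration of the last point, a defect can be pinned down as a permanent regression test. The defect ID and the `apply_discount` function below are hypothetical stand-ins:

```python
# Stand-in for the real application code that contained the defect.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Regression test for hypothetical defect BUG-102: a 100% discount once
# returned a negative total. Keeping the defect as a test case prevents
# it from escaping again in a later release.
def test_bug_102_full_discount_is_zero_not_negative():
    assert apply_discount(49.99, 100.0) == 0.0
```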
Here are some other key points to focus on (a tiny scorecard sketch follows the list):
- How many critical bugs were found in production
- Gaps in regression tests
- Time spent on test cycles
- Bugs detected in staging vs. production
- Test cases automated vs. manual
- Bug quality (valid vs. invalid)
- Bugs sent back for clarification
- Bugs found early in the cycle vs. later (bugs found sooner cost less to fix)
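To close, a minimal scorecard sketch (with invented counts) that rolls a few of these points up into release-level numbers:

```python
# Hypothetical counts collected for one release.
metrics = {
    "bugs_staging": 40,
    "bugs_production": 5,
    "bugs_valid": 38,
    "bugs_invalid": 7,
    "tests_automated": 220,
    "tests_manual": 130,
}

valid_ratio = metrics["bugs_valid"] / (metrics["bugs_valid"] + metrics["bugs_invalid"])
automation_ratio = metrics["tests_automated"] / (metrics["tests_automated"] + metrics["tests_manual"])

print(f"Bugs in staging vs. production: {metrics['bugs_staging']} vs. {metrics['bugs_production']}")
print(f"Valid bug ratio: {valid_ratio:.0%}")
print(f"Automation ratio: {automation_ratio:.0%}")
```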