A few pointers on making independent testing more effective...
Anil Kalose
Over 20 years of general management experience in Process Quality Assurance, having directly led a team of more than 30 associates with budgets (P&L) in excess of $20 million.
This write-up covers independent testing only.
Software testing is a creative and intellectually challenging task. When testing follows the practices given below, the creative element of test design and execution rivals any of the preceding software development steps.
Testing must be done by an independent party, not by the person or team that developed the software, since they tend to defend the correctness of the program. An altogether different project team is best, if one is available and willing to test; failing that, select someone from your team to test a module or feature to which they have not contributed directly or indirectly. There is a wrong notion that this takes more effort because the tester has to spend time understanding the code and the process. Calculate the ROI of this activity and you will be surprised at the benefits: at the very least, a defect will not come back as a partner/customer-reported bug, and it saves effort during the support period.
Assign the best personnel to the task. Testing requires high creativity and responsibility, so only the best personnel should be assigned to design, implement, and analyze test cases, test data, and test results. Comparing testers, their skills, and their importance to developers, directly or indirectly, should be discouraged.
Testing should not be planned under the tacit assumption that no errors will be found; see that your test cases meet this requirement. There is a tendency to prepare test cases after coding. By doing this, you are not only fooling yourself but also inviting risk for the partner/customer and the teams involved, and creating problems for the end-users. Test cases should not be biased by the code; they should be prepared at the appropriate phase, as per the requirements. The “Test-Driven Development” approach is best for developers, unless you have discovered a better substitute. Test scenarios and test cases should be developed as part of backlog grooming in agile; in non-agile projects, start immediately after the requirements are signed off.
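As a minimal illustration of the test-first idea, the pytest sketch below is hypothetical: the function name and the discount rule are invented for illustration. The test encodes the requirement first; the implementation is then the minimal code that makes it pass.

```python
# test_discount.py -- in test-first style, the tests below are written
# before the implementation exists. apply_discount and its rule are
# invented examples, not a real requirement.

def apply_discount(order_total: float) -> float:
    """Step 2 of TDD: the minimal implementation that makes the tests pass."""
    return order_total * 0.9 if order_total > 100 else order_total

def test_ten_percent_discount_on_orders_over_100():
    # Step 1 of TDD: encode the (hypothetical) requirement as a test first.
    assert apply_discount(order_total=200.0) == 180.0

def test_no_discount_on_small_orders():
    assert apply_discount(order_total=50.0) == 50.0
```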
Acceptance criteria should be refined over time; this cannot be a one-time activity. However, they should be signed off or agreed before testing starts.
Have you taken care of partner/customer needs? It is quite likely that the partner/customer comes back with bugs, defects, and errors even though the code has passed the testing done on our side. What has gone wrong? Most of the time we juggle, saying “you didn’t say this earlier...”. Basically, we have forgotten to take care of unstated requirements. You might say it is impossible to predict these, and I agree, but the remedy can be as simple as taking the customer’s test plan and test cases, comparing them with our own, and checking whether we have missed anything.
You might find that the partner/customer is trying to test something that was never stated in the requirements, or that an important feature is missing from their cases. So it becomes essential to review and validate them before use; if you don’t, your software might fail in the field, or a critical feature might go untested.
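As a minimal sketch of such a cross-check, the snippet below compares the customer's test case identifiers against ours; all identifiers are invented examples.

```python
# Sketch: compare the customer's test case identifiers against ours to
# surface gaps on both sides. The IDs and format are assumptions.
our_cases = {"login-valid", "login-invalid-password", "export-csv"}
customer_cases = {"login-valid", "export-csv", "export-pdf", "login-locked-account"}

missed_by_us = customer_cases - our_cases        # unstated requirements we never tested
untold_to_customer = our_cases - customer_cases  # features the customer may be missing

print("Cases we missed:", sorted(missed_by_us))
print("Cases the customer is not covering:", sorted(untold_to_customer))
```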
Test for invalid and unexpected input conditions as well as valid conditions.
The program should generate correct messages when it encounters invalid input and correct results when the input is valid.
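A minimal pytest sketch of this principle, assuming a hypothetical parse_age function, checks both sides: valid input must yield the correct result, and invalid input must raise a clear error.

```python
# Sketch: exercise both valid and invalid/unexpected inputs.
# parse_age and its range rule are invented for illustration.
import pytest

def parse_age(text: str) -> int:
    """Hypothetical function under test: parse a non-negative human age."""
    value = int(text)          # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

@pytest.mark.parametrize("text,expected", [("0", 0), ("42", 42), ("150", 150)])
def test_valid_input_gives_correct_result(text, expected):
    assert parse_age(text) == expected

@pytest.mark.parametrize("text", ["-1", "151", "abc", ""])
def test_invalid_input_raises_clear_error(text):
    with pytest.raises(ValueError):
        parse_age(text)
```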
The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found.
Testing is the process of executing software with the intent of finding errors. Keep the software static during tests (we are referring to independent testing, not unit testing). Programmers have a strong tendency to touch the code and make cosmetic changes while testing is in progress. The program MUST NOT be modified during test implementation or execution; all defect fixes should be planned. However, if a new test case shows up, document it down to the smallest detail and push it to automation if possible.
Test case design and automation: Document test cases. Start working on test case design immediately after sign-off, or as part of backlog grooming. While designing test cases, look for sufficiency. You can save critical test case design effort if the partner/customer is doing testing and already has a test plan and test cases designed: why duplicate them and spend unnecessary time? Just request the test plan and test cases, validate them for suitability and applicability, and get this into the user stories during backlog grooming. Otherwise you might test what is not required and/or miss an important feature. If you are writing the test cases yourself, make sure you have test coverage goals at hand and that your test cases fulfill all of them; for readily available test cases, try mapping them to the coverage goals.
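A lightweight way to check that mapping is a simple traceability pass; in the sketch below, the goal and case IDs are invented, and any coverage goal no test case touches gets flagged.

```python
# Sketch of a traceability check: every coverage goal should be hit by
# at least one test case. IDs and the mapping are invented examples.
coverage_goals = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_case_map = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-4"},
}

covered = set().union(*test_case_map.values())
uncovered = coverage_goals - covered
print("Uncovered goals:", sorted(uncovered))  # -> ['REQ-3']: design a case for it
```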
Provide expected test results: a necessary part of test documentation is the specification of expected results, even when producing them in advance seems impractical. It is imperative to document the expected result of each test, worded in the future tense, rather than leaving it to the tester’s imagination.
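One possible shape for such documentation, offered as an assumption rather than a prescribed format, is a record that makes the expected result a mandatory, future-tense field.

```python
# Sketch: a test case record that forces an explicit, future-tense
# expected result. The structure and wording are invented examples.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    steps: list[str]
    expected_result: str  # mandatory: e.g. "The system will display ..."

tc = TestCase(
    case_id="TC-07",
    steps=["Enter an expired card number", "Submit the payment form"],
    expected_result="The system will reject the payment and will display "
                    "'Card expired' without charging the customer.",
)
print(tc.expected_result)
```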
Traditionally, testing starts with unit testing and proceeds up to UAT. Between these two, there is no limit on the different independent tests that can be performed. A few examples are given below:
- Functional testing types include: Feature Testing, Consolidated Testing, Iteration Testing, Integration Testing, System Testing, Sanity Testing, Smoke Testing, Interface Testing, Regression Testing, and Beta/Acceptance Testing
- Non-functional testing types include: Performance Testing, Load Testing, Stress Testing, Volume Testing, Security Testing, Compatibility Testing, Install Testing, Recovery Testing, Reliability Testing, Usability Testing, Compliance Testing, and Localization Testing
Now imagine a service where multiple variants of the same product are in use, depending on where and how the product is implemented. The complication multiplies with the number of OS versions and OS types supported, and again with the platforms the product runs on.
The point is that there is no end to this, and one can always find a need for, or a justification of, more testing. As pointed out earlier in “Few Techniques for Better Product Agility”, minimize the environments, considering all variants. Constraints (e.g., Android cannot be tested on Windows) should dictate the need for an environment; the technical capability of the team or the organization, or a new partner/customer, should not. Such constraints need to be managed, not answered by introducing one more round of testing.
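A small sketch of letting constraints drive the environment matrix follows; the OS/browser values and the constraints themselves are invented examples for a hypothetical product.

```python
# Sketch: enumerate candidate environments and drop infeasible combinations,
# so constraints (not team capability) dictate the matrix.
from itertools import product

oses = ["Windows 11", "macOS 14", "Android 14"]
browsers = ["Chrome", "Safari", "Edge"]

def feasible(os_name: str, browser: str) -> bool:
    # Example constraints: Safari ships only on Apple platforms; Edge is
    # not a target on Android for this (hypothetical) product.
    if browser == "Safari" and not os_name.startswith("macOS"):
        return False
    if browser == "Edge" and os_name.startswith("Android"):
        return False
    return True

environments = [(o, b) for o, b in product(oses, browsers) if feasible(o, b)]
print(f"{len(environments)} environments instead of {len(oses) * len(browsers)}")
for env in environments:
    print(env)
```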
If possible, have one phase of development testing and regression testing for each build, and make it part of the CI solution if CI is available.
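A minimal sketch of such a per-build gate, assuming the regression suite lives under a hypothetical tests/regression path and runs with pytest, could look like this:

```python
# Sketch: a CI step (assumed to run on every build) that executes the
# regression suite and fails the build on any regression. The path is
# an assumption, not a convention this article prescribes.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "tests/regression", "-q"],
    check=False,
)
if result.returncode != 0:
    sys.exit("Regression suite failed: blocking this build.")
print("Regression suite passed.")
```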
What measures/metrics to prioritize in testing: measure environment optimization, test case coverage, test execution coverage, leakage to the partner/customer, and test case automation. What not to measure? Defect metrics, including the raw count of defects. Defects are for developers to fix, not for testers; testers should track and improve the testing so that defects do not leak to the partner/customer, focusing on environments, coverage, and automation. Try not to create competition between developers and testers through conflicting metric reports. They should work together to make the output great, not compete or try to prove each other right, with developers claiming testers are not doing the right testing or are raising wrong defects, and vice versa. Also, don’t compare against benchmarks unless they were derived specifically for your project.
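As a sketch of the tester-focused measures named above, with invented sample counts:

```python
# Sketch of tester-focused metrics: execution coverage, automation ratio,
# and defect leakage to the partner/customer. All counts are invented.
executed, planned = 180, 200          # test execution coverage
automated, total_cases = 150, 200     # automation ratio
leaked, found_internally = 4, 96      # customer-reported vs internally found

execution_coverage = executed / planned
automation_ratio = automated / total_cases
leakage = leaked / (leaked + found_internally)

print(f"Execution coverage: {execution_coverage:.0%}")
print(f"Automation ratio:   {automation_ratio:.0%}")
print(f"Defect leakage:     {leakage:.0%}")
```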
Measure the changes you make to test cases or test data because of client defects. This indicates the goodness of your test cases and of your testing. Measure it even if you are technically unable to replicate the client’s environment.
How many testers, or how much effort, should be estimated?
A well-known rule of thumb says that in a typical programming project, approximately 50 percent of the elapsed time and more than 50 percent of the total cost are expended in testing the program or system being developed.
Depending on test coverage, set aside the effort. For a first-time estimate, 35%-50% of the development effort can be a starting point; this includes test case design, development, and test execution, but not automation development. Estimate the number of expected defects and plan for defect fixes and retests as part of the development effort. As development progresses, bring this percentage down, since we are referring to independent testing. Benchmarks may give an indication or a pointer, but may not actually help, whether they are developer-to-tester ratios or effort distributions across test types or test runs/cycles: such benchmarks change from organization to organization, and even from project to project, because of their many underlying assumptions, and generalization will create more problems. Base the number of independent testers on test case automation and on the test case design and coverage explained above.
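Worked through with an invented 100 person-day development effort, the starting-point range looks like this:

```python
# Sketch of the starting-point estimate described above: independent
# testing at 35-50% of development effort, excluding automation
# development. The 100 person-day figure is an invented example.
dev_effort_days = 100

low, high = 0.35 * dev_effort_days, 0.50 * dev_effort_days
print(f"Independent testing (design + development + execution): "
      f"{low:.0f}-{high:.0f} person-days")
# Defect fixes and retests are planned inside the development effort itself.
```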
How many test cases are good enough? There is no perfect answer; the test-case-related points are already covered above. However, there are techniques for optimizing test cases: applying theories such as switching theory or De Morgan’s laws to test case design will go a long way during automation. Even with an optimized set, add new test cases, automated or manual, based on defects from the partner/customer. Do continue to add new test cases and don’t treat this as a one-time exercise: a few things even the partner/customer may not be able to articulate in the requirements, so continuously looking for opportunities to add test cases is a good habit. The same applies to “good enough test data”.
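As a small demonstration of De Morgan’s laws in this context: two differently worded conditions with identical truth tables need only one optimized test set. The condition names in the comments are hypothetical.

```python
# Sketch: De Morgan's laws show that two differently-worded conditions are
# logically identical, so one optimized set of test cases covers both.
from itertools import product

for a, b in product([False, True], repeat=2):
    original = not (a and b)        # e.g. "not (logged_in and has_license)"
    rewritten = (not a) or (not b)  # the De Morgan-equivalent wording
    assert original == rewritten    # identical truth tables: no extra cases
print("Equivalent for all inputs: one test set covers both forms.")
```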
Thank You
References:
Cho, C.-K.: An Introduction to Software Quality Control. The MITRE Corporation and The George Washington University; John Wiley & Sons.
https://www.softwaretestinghelp.com/types-of-software-testing/