Manual Tests & Automated Scripts Traceability
Ali Khalid
Heading Data Quality @ Emirates Airline | Data Governance | DataOps | Transforming Analytics | Award winning speaker | Trainer | Coach
The product had been tested for years by a testing team doing exploratory tests and writing test cases for important areas. The application grew day by day, eventually reaching more than a thousand test cases. That is when the testing team decided to delegate the 'checking' part of regression to automated scripts, to free up time for real testing.
Many product teams coming to automation have reached this stage and are looking for a way to shift written manual tests to automated scripts. In many cases the tests include rich scenarios the team wants to leverage, so they look to script an exact copy. Naturally this comes with inherent challenges, and I am about to share how we managed some of them for one particular product.
Before moving on: some tools claim to automate manual tests straight from a Word document and the like. That is not what is being discussed here (plus I have yet to see that work!).
Test case to script mapping
Ideally, all manual tests would be part of the automation suite as-is. However, differences are bound to creep in. To maintain traceability between tests and automated scripts, create a mapping document: essentially, map every manual test to an automated test. For any discrepancy in test scenarios, record the reasons with appropriate tags (for ease of filtering).
As the application evolves, manual tests change and scripts need to be updated. Having this document helps in two ways:
- Updating a script becomes much easier when any prior discrepancy is written down with its reasoning readily available.
- During regression, it is very clear which areas automation is not covering and where the manual tests might want to look.
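As a minimal sketch of what such a mapping document could look like, here is a small CSV-backed version with a filter for tagged discrepancies. All IDs, tags, and field names are illustrative assumptions, not taken from the article:

```python
import csv
import io

# Hypothetical mapping document: one row per manual test, pointing at its
# automated counterpart, with a discrepancy tag and reason where they differ.
MAPPING_CSV = """
manual_id,script_id,discrepancy_tag,reason
TC-001,test_login_valid,NONE,
TC-002,test_checkout_flow,STEP_SKIPPED,Captcha step cannot be automated
TC-003,,NOT_AUTOMATED,Requires physical printer interaction
"""

def load_mapping(text):
    """Parse the mapping document into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text.strip())))

def with_discrepancies(rows):
    """Filter the rows that deviate from the manual test, for quick review."""
    return [r for r in rows if r["discrepancy_tag"] != "NONE"]

rows = load_mapping(MAPPING_CSV)
flagged = with_discrepancies(rows)
for r in flagged:
    print(r["manual_id"], r["discrepancy_tag"], "-", r["reason"])
```

During regression planning, filtering on the tag column (here, anything other than `NONE`) immediately shows which areas automation is not covering.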
Script incapability vs. sentient beings
There are always some steps in manual testing that the testing tool cannot perform: a physical activity outside the product, a portion of the application that is not automatable, or a scenario that would need a very complex set of scripts to improvise across different application states. Instead of leaving the test out altogether, I usually recommend one of two options:
- Alter the scenario to suit the script, salvage whatever you can, and forego what cannot be done.
- Break the test in two. For the second test, start from pre-populated data / a prepared test scenario to avoid the area that cannot be automated.
The mapping document comes in very handy here.
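The "break the test in two" option can be sketched as follows. This is a hypothetical example, assuming the non-automatable middle step is scanning a physical boarding pass; all function names and data are made up for illustration:

```python
# Part 1 covers the automatable steps up to the physical action.
# Part 2 resumes from pre-populated data standing in for the state
# the manual step would have produced.

def create_booking():
    """Part 1: automatable steps, stopping before the physical scan."""
    return {"booking_id": "BK123", "status": "CREATED"}

def preloaded_scanned_state():
    """Pre-populated test data representing the post-scan state."""
    return {"booking_id": "BK123", "status": "SCANNED"}

def test_part1_booking_created():
    booking = create_booking()
    assert booking["status"] == "CREATED"

def test_part2_after_scan():
    # Resume from the state the manual step would have left behind.
    state = preloaded_scanned_state()
    assert state["status"] == "SCANNED"
```

In the mapping document, both halves would map back to the one original manual test, with a tag recording why it was split.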
Manual test steps in the report
Test reports generated from automated scripts should be readable primarily by the manual testing team. Usually I see teams whose test reports show all the automation mumbo-jumbo right off the bat, creating lots of confusion for anyone not involved in automation.
I strongly advise including the test steps verbatim from the manual test case in the automation test report. Under each step should appear the read/write details the tool performs. Non-automation folk can then make sense of the report, and it also makes it much easier for the automation team to fix issues.
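One possible way to structure such a report is to group the low-level tool actions under the manual step they implement. This is a minimal sketch, not the author's actual reporting tool; the step texts and selectors are invented:

```python
class StepReport:
    """Group low-level automation actions under the manual test step
    they implement, so non-automation readers can follow the report."""

    def __init__(self):
        self.steps = []  # list of (manual_step_text, [tool_details])

    def begin_step(self, manual_step):
        self.steps.append((manual_step, []))

    def log(self, detail):
        # Attach a tool-level detail to the current manual step.
        self.steps[-1][1].append(detail)

    def render(self):
        lines = []
        for i, (step, details) in enumerate(self.steps, 1):
            lines.append(f"Step {i}: {step}")
            lines.extend(f"    {d}" for d in details)
        return "\n".join(lines)

report = StepReport()
report.begin_step("Log in with a valid frequent-flyer account")
report.log("navigate to /login")
report.log("type username into #user field")
report.begin_step("Verify the dashboard shows upcoming trips")
report.log("wait for .trip-card elements")
print(report.render())
```

A manual tester reads only the "Step N" lines; the automation team drills into the indented details beneath the step that failed.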
Dual purpose
Apart from mapping differences from the manual tests, we used this document for an overview of the complete automation suite's health. Scripts we knew were faulty and needed updates, scripts that needed in-depth investigation, scripts failing due to a reported issue: all these status updates were appended to the document.
Even if you don't have manual tests to map to, every automation project should still have one spreadsheet with at least the fields listed. It is a huge time saver when managing batch runs / daily runs.
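Such a suite-health sheet also gives a quick roll-up before a batch run. A small sketch of that roll-up, with invented status tags and script names:

```python
from collections import Counter

# Hypothetical suite-health rows; status tags and notes are illustrative.
SUITE = [
    {"script": "test_login",    "status": "PASSING",      "note": ""},
    {"script": "test_checkout", "status": "NEEDS_UPDATE", "note": "UI change"},
    {"script": "test_refund",   "status": "KNOWN_ISSUE",  "note": "BUG-4711"},
    {"script": "test_search",   "status": "INVESTIGATE",  "note": "flaky"},
]

def health_summary(rows):
    """One-line overview of suite health before a batch / daily run."""
    return Counter(r["status"] for r in rows)

print(health_summary(SUITE))
```

Scripts tagged as failing due to a known, reported issue can then be excluded from triage instead of being re-investigated every run.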
Care to share what you did to map manual tests?
Till next time, Happy automating!
Department Chair, ITM & Operations Department at Saint Louis University
8y: These are exactly the pain areas highlighted in your article. Creating test cases, the traceability matrix, and automation scripts should be tightly linked. I suggest you look at www.testingalgorithms.com and see how we solve this problem. Please watch some of the videos posted there.
Podcasting with a Mission
8y: Yep, a very common pain point. Another way to solve this is to do away with the mapping entirely: use a BDD tool like Cucumber, SpecFlow or Behave, and let the specifications drive the automation. It also solves the problem of knowing whether the test case and the automation are in sync.
Test Automation Manager at Accenture
8y: Thanks for the great article. We also have a similar traceability matrix in the form of an Excel file, which keeps track of each manual test case and its automated counterpart, the type of test (functional, operational, stress, etc.), application area, date of latest execution, etc. This information is useful when we plan the next test cycle and decide which tests will be executed. But we don't use this mapping to see which automated tests need updating due to changes in the application. To surface any discrepancy, if time permits we run the unit tests (testing the automated programs); otherwise we execute the programs themselves, and surely they will fail the tests. The key here is to make the validation/verification very robust (meaning harder to pass than to fail). Of course, having a properly layered automation architecture is also important to this approach.