Which Test Cases should be Automated?

It’s time to perform automation testing, one of the most in-demand testing verticals of the modern era and a core type of software testing. But how do we decide which test cases should be automated and which should not?

Thorough testing is crucial to the success of any software product. If our software doesn’t work properly, chances are it will be pushed out of the market in no time. Finding defects, or bugs, is time-consuming, expensive, often repetitive, and subject to human error. Automated testing, in which Quality Assurance teams use software tools to run detailed, repetitive, and data-intensive tests automatically, helps teams improve software quality and make the most of their always-limited testing resources. Test automation tools such as Selenium and Puppeteer help teams test faster, allow them to test substantially more code, improve test accuracy, and free up QA engineers to focus on tests that require manual attention and their unique human skills.

Thorough research is required before automating test cases. We need to plan, shortlist the candidate test cases, and then build and execute the automation suite for them. This article covers how to make that decision.

The following points need to be considered before automating test cases:

1. How often is the test run? Would you benefit from having it run more often?

A common test, like making sure you can log in, is simple and often easy to automate. It is also valuable to find out right away if users cannot log in, so such a test can be run after every single build (as part of a smoke test). A test that only needs to be run once before release isn't likely to be worth automating.
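
A login smoke check like this can be sketched as a tiny automated test. The `login` function below is a hypothetical stand-in for the application under test; a real suite would drive the app through Selenium, Puppeteer, or an HTTP client instead:

```python
# Hypothetical stand-in for the application's login call; in a real suite
# this would drive the UI through Selenium/Puppeteer or hit an auth endpoint.
VALID_USERS = {"alice": "s3cret"}

def login(username: str, password: str) -> bool:
    return VALID_USERS.get(username) == password

def smoke_test_login() -> None:
    """Fast, repeatable check suitable for running after every build."""
    assert login("alice", "s3cret"), "known-good credentials must work"
    assert not login("alice", "wrong"), "bad credentials must be rejected"

smoke_test_login()
```

Because the check is cheap and unambiguous, it costs almost nothing to run on every build, which is exactly where automation pays off.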

2. How much data needs to be entered for the test? (Or, how tedious is the test)?

Adding one thousand entries to your database is a pain to do manually. But it's no sweat for an automated test. In general, the more data that's entered, the better the automation candidate.
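
To illustrate how cheap bulk data entry is for an automated test, the sketch below uses Python's built-in `sqlite3` as a stand-in for the application's database: it inserts a thousand rows and verifies the count, something no one wants to do by hand:

```python
import sqlite3

def bulk_insert_test(row_count: int = 1000) -> int:
    """Insert row_count generated entries and return how many landed."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany(
        "INSERT INTO entries (name) VALUES (?)",
        ((f"user_{i}",) for i in range(row_count)),
    )
    (count,) = conn.execute("SELECT COUNT(*) FROM entries").fetchone()
    conn.close()
    return count

assert bulk_insert_test(1000) == 1000
```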

3. Is the output of the test easily measured?

If you can easily tell if a test succeeded or failed, then it's a good test to automate. If you need to cross-reference several items to make the call, it will be more difficult to automate the test. 

4. Is the output of the test purely objective?

If the tester is required to make sure a result is exact, an automated test can usually do that. If, for example, the tester is required to make sure an image looks "good" or even "readable," an automated test will have a very difficult time verifying the image.
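
An objective check reduces to an exact comparison the machine can make on its own. The `order_total` function below is a made-up example of a calculation with a single exact expected value:

```python
# Hypothetical calculation under test: the expected output is exact,
# so the automated check is trivial to write.
def order_total(prices, tax_rate):
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

# Objective: one exact expected value, unambiguous pass/fail.
assert order_total([10.00, 5.50], 0.10) == 17.05

# By contrast, "does this rendered image look good?" has no exact
# expected value, which is why it stays a manual check.
```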

5. Does the UI around the area being tested change a lot from version to version?

An automated testing tool can handle some changes to controls, but it likely cannot handle a field moving to a completely different dialog; you'd have to modify the automated script to compensate.
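
One common way to limit that kind of script churn is the Page Object pattern: every locator for a screen lives in one class, so a moved or renamed field means editing one place rather than every test. The sketch below uses a fake driver in place of a real Selenium or Puppeteer driver:

```python
# Minimal Page Object sketch. FakeDriver stands in for a real browser
# driver; it just records what was typed into which locator.
class FakeDriver:
    def __init__(self, fields):
        self.fields = fields  # maps locator -> current value

    def type(self, locator, text):
        self.fields[locator] = text

class LoginPage:
    # All locators for this screen live here; if the UI changes,
    # only these constants need updating, not every test.
    USERNAME = "#username"
    PASSWORD = "#password"

    def __init__(self, driver):
        self.driver = driver

    def fill_credentials(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)

driver = FakeDriver(fields={})
LoginPage(driver).fill_credentials("alice", "s3cret")
assert driver.fields == {"#username": "alice", "#password": "s3cret"}
```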

6. Does the test utilize any customized controls?

If a test uses ordinary buttons, edit boxes, combo boxes, or grids, it can be an automated test. If you have custom controls, it will be difficult. (Not that custom controls cannot be automated. But you may get better results automating with the more standard parts first.)

7. Does the test invite exploration of corner cases or improvisation?

An automated test can only do what it is told. It is not creative and won't explore corner cases.

So, What Test Cases Can Be Automated? 

So, test cases can (and should) be automated if:

  • Tests are used repeatedly.
  • Tests involve a lot of data entry.
  • Tests clearly pass or fail. 
  • Tests deliver an exact result.
  • Tests use consistent UI and regular controls.
  • Tests only need to do exactly what they're told, with no exploration or improvisation.
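
As an illustration, the checklist above can be turned into a simple scoring helper. The criteria names here are made up for this sketch, not any standard API:

```python
# Illustrative checklist: one flag per criterion from the list above.
CRITERIA = (
    "repeated",        # run often / reused across builds
    "data_heavy",      # involves a lot of tedious data entry
    "clear_verdict",   # unambiguous pass/fail
    "exact_result",    # purely objective expected output
    "stable_ui",       # consistent UI and standard controls
    "scripted_only",   # no exploration or improvisation needed
)

def automation_score(test_case: dict) -> int:
    """Count how many checklist criteria a test case satisfies."""
    return sum(1 for c in CRITERIA if test_case.get(c, False))

def is_good_candidate(test_case: dict, threshold: int = 5) -> bool:
    return automation_score(test_case) >= threshold

login_regression = {c: True for c in CRITERIA}
usability_review = {"repeated": True}  # subjective look-and-feel check

assert is_good_candidate(login_regression)
assert not is_good_candidate(usability_review)
```

The threshold is a judgment call; the point is simply to make the trade-off explicit before investing in automation.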

What Test Cases Shouldn’t Be Automated?

Not every test case can be automated. Subjective test cases — test cases that are not testing a clear function — will still need to be done manually.

Here are some examples of test cases that shouldn’t be automated:

  • Tests that you will run only once. The only exception to this rule is a test that exercises a very large set of data; even if it runs only once, it can make sense to automate it.
  • User experience tests for usability (tests that require a user to respond as to how easy the app is to use).
  • Tests that need to be run as soon as possible. A newly developed feature usually needs quick feedback, so it is faster to test it manually at first.
  • Tests that require ad hoc/random testing based on domain knowledge/expertise – Exploratory Testing.
  • Intermittent tests. Tests without predictable results cause more noise than value. To get the best value out of automation, tests must produce predictable, reliable results so that pass and fail conditions are meaningful.
  • Tests that require visual confirmation. (However, we can capture page images during automated testing and then check those images manually.)
  • Tests that cannot be fully automated should not be automated at all, unless doing so saves a considerable amount of time.

How to Manage Automated Test Cases

The above are just guidelines. Even if your test meets all the guidelines for a good candidate for automation, it may still be difficult to automate the particular test. 

Automated testing is just one piece of the puzzle

Automating tests will save us some time, but there will always be the need to run some tests manually. And successfully managing both forms of testing is critical to your product's success.

Effective test case management can help us manage manual and automated tests.

