Test Automation in Agile

Automation is crucial in agile projects because of their rapid development and frequent deliveries. There are two primary reasons to automate:

  1. To gain assurance that completed functionality is not broken by new changes
  2. Certain checks are impractical without code: done manually they would take a huge amount of time, would not be worth the effort, and, most importantly, would not be repeatable

Automation Planning and Framework

Many teams spend too much time analyzing which framework to choose. Ideally, automation in agile should start when the developers write the first line of code. Nowadays there are many popular libraries and test frameworks available, so spend little time on selection unless you have a special tooling need. Settle on a popular language your test team is comfortable working with. Then pick a popular open-source library; for instance, you might choose Selenium or Playwright for UI automation. Select a popular unit testing framework for the language you chose and start building.
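As a sketch of that starting point, here is a minimal first test using Python's built-in unittest module. The function under test is a hypothetical stand-in; in a real project it would be imported from the application or driven through Selenium/Playwright.

```python
import unittest

# Hypothetical function under test, used here as a stand-in for
# real application behavior.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Assert the completed functionality behaves as expected.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is not the framework: any popular unit testing framework in your chosen language gives you the same shape, so you can start building on day one.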

Create a repository in whatever version control system your organization uses.

Plan a basic structure for the project and share it with all the testers who will use it. Improve it iteratively over time based on need.
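One possible starting layout is sketched below; the folder names are purely illustrative, and the structure should evolve with the team's needs.

```
tests/
  ui/          # UI tests (e.g. Selenium or Playwright)
  api/         # service-level tests
  data/        # test data files
helpers/       # shared utilities (login, waits, custom assertions)
config/        # settings per test environment
README.md      # how to set up and run the suite
```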

Automation Review

Automation code needs to be reviewed primarily from two perspectives:

  1. Language syntax and best practices for readability, reusability, and maintenance
  2. Correctness of the functional validation performed by the test

The first can be done by a person with deeper experience and knowledge of good coding standards and practices. The second is about making the right assertions for the functionality being tested. It's crucial to test the test code itself: vary the test data and check that the test fails whenever it should and passes whenever it should. Cover all the expected conditions you are aware of, or reach out to the Product Owner or SMEs for more details.
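"Testing the test" by varying data can be sketched as follows. The validation rule and its data sets are hypothetical; the idea is to feed the check data that must pass and data that must fail, so a test that "passes when it needs to fail" is caught before the PR is approved.

```python
# Hypothetical rule under test: a username is 3-12 alphanumeric characters.
def is_valid_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 12

# Data the check must accept ...
passing_cases = ["alice", "bob99", "xyz"]
# ... and data the check must reject. If a rejection case slips through,
# the test would pass when it needs to fail, which yields no value.
failing_cases = ["", "ab", "a" * 13, "bad name!"]

for case in passing_cases:
    assert is_valid_username(case), f"expected pass: {case!r}"
for case in failing_cases:
    assert not is_valid_username(case), f"expected fail: {case!r}"
```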

A test in either of the following conditions is bad and yields no value:

  1. A test that passes when it needs to fail
  2. A test that fails when it needs to pass

Trigger your test pipeline and check whether any existing test cases fail because of the newly added test. These checks need to be done before approving the PR; there should be no compromise on that.

When a test has multiple assertions, make sure it logs all the failures at once rather than logging and exiting at the first failed assertion. Otherwise we keep fixing issues one by one, which takes more time.

Categorize tests by module or by test level (such as sanity) so that groups of tests can be run based on need.
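The idea of categorizing and selectively running tests can be sketched in plain Python. Real frameworks provide this natively (pytest markers, TestNG groups, JUnit tags); the registry, decorator, and test names below are illustrative.

```python
# Registry of (test function, category labels) pairs.
REGISTRY = []

def tag(*labels):
    """Decorator that registers a test with one or more category labels."""
    def wrapper(func):
        REGISTRY.append((func, set(labels)))
        return func
    return wrapper

@tag("sanity", "orders")
def test_create_order():
    assert 1 + 1 == 2  # placeholder check

@tag("regression", "orders")
def test_cancel_order():
    assert True  # placeholder check

def run_group(label):
    """Run only the tests carrying the requested label."""
    selected = [func for func, labels in REGISTRY if label in labels]
    for test in selected:
        test()
    return [func.__name__ for func in selected]
```

A nightly pipeline might run the full regression group, while a quick pre-merge run selects only the sanity group.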

Connect early with the person who will review your code against the points above, and share context with them regularly. This also makes it easier for them to review well.

Automation Logging

Our tests need a logging mechanism, just as any product we build exposes the necessary logs.

A test case should log useful information about what is being tested, whether it passes or fails. In a passing scenario, log a message stating what was asserted; in a failure, log both the expected and the actual values. Failure logs should help a person analyze the problem quickly; that's the point of logging. It's very hard to anticipate every possible failure, but whenever you encounter a new failure scenario, add a useful log message so it helps the analysis when that failure occurs again.

Add log levels: make sure the debug level carries all the necessary data, and at the info level log only what is needed instead of flooding the log with messages.
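A small sketch of both points using Python's standard logging module: debug lines carry full detail, info lines state what was asserted on success, and error lines record expected versus actual on failure. The helper name and the logger name are illustrative.

```python
import logging

# Logger for the test suite; name is illustrative.
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("test_suite")

def assert_equal(expected, actual, what):
    """Assert with log messages that support quick failure analysis."""
    # Debug level: full detail for deep investigation.
    logger.debug("Comparing %s: expected=%r actual=%r", what, expected, actual)
    if expected == actual:
        # Info level: just what was asserted, no flooding.
        logger.info("PASS: %s matched %r", what, expected)
    else:
        # On failure, log expected and actual so the analyst can
        # diagnose quickly from the log alone.
        logger.error("FAIL: %s expected %r but got %r", what, expected, actual)
        raise AssertionError(f"{what}: expected {expected!r}, got {actual!r}")

assert_equal("Welcome", "Welcome", "login banner text")
```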

Automation Backlog

This is crucial because there are many things to automate and rarely enough people or time to do them all. Compile the automation backlog and add each item to your DevOps system as a Story or a Task for tracking. This covers not only features to be automated but refactoring improvements as well. As soon as we sense that something in the test project needs modifying, add it to the DevOps system and move it to the backlog, so it can be considered during planning or grooming and the whole team is aware of it.

Automation code check-in frequency

Ideally, the check-in interval should be as short as possible; for good automation code we should check in within a day. Otherwise the reviewer has to spend too much time analyzing what has been done, and the review becomes tedious and error-prone. It's good if we can complete one test or a couple of tests in a day. Failing that, break the complex test down and check in a helper method for it that can be tested and reviewed individually.

Automation Pipeline

We should have pipelines as soon as we have an automated test ready; until then, run it locally every day, or more often if required. Create pipelines for all your active environments, make sure the tests run every day, and review the report trend before stand-up. This should be an everyday ritual. Act on failing test cases daily: either fix the test code to match the change, or raise a bug for the regression and notify the team at the earliest. Identifying issues in automated checks as early as possible is the core purpose of the automation suite.

We must also integrate the test suite with development PRs so that the dev team gets early feedback. Run only the tests necessary for a particular dev pipeline to keep that feedback fast.
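Both the nightly ritual and the PR integration can be expressed in one workflow. The sketch below uses GitHub Actions syntax under the assumption of a Python/pytest suite with a `sanity` marker; every name, path, and schedule is illustrative, not from the article.

```yaml
# Hypothetical pipeline: nightly full run plus fast feedback on dev PRs.
name: test-suite
on:
  schedule:
    - cron: "0 2 * * *"   # daily run, report reviewed before stand-up
  pull_request:            # early feedback for the dev team
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      # Only the fast sanity group on PRs keeps feedback quick;
      # the scheduled run would drop the marker filter for full coverage.
      - run: python -m pytest tests/ -m sanity
```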
