TEN LESSONS OF TEST AUTOMATION
NARAYANAN PALANI
Platform Engineering Lead | Certified AWS, GCP Architect | Retail, Commercial and Investment Banking | Best Seller
Automation is not about passing 100% of tests; the real objective is to identify problems in the code and prevent a poor experience for customers. Let the test fail, and let us learn why it fails!
As an engineering lead in the fintech space, I am passionate about influencing engineers to write and maintain automation test code in their day-to-day work using Playwright, Appium, Selenium, WebdriverIO, Cucumber, CypressIO and JUnit. That helps increase test automation coverage as features are developed and defects are fixed. Maintaining test automation code for 30+ teams is not an easy task, and improving code quality across different sets of teams comes with its own tactical challenges to overcome, such as test data issues, code conflicts and flaky tests. Over the last five years of Selenium scripting, I learnt ten strong lessons which helped me overcome most of the technical challenges with strategic solutions. In this article, I have explained the ten most useful lessons I learnt on web and API test automation.
Lesson 1: Think outside the box
During my earlier testing roles, I focused on writing automation code for the user stories and test combinations that were useful for feature development. In short, if I was testing a login page, I was responsible for writing tests around positive and negative combinations such as valid login, invalid login and the respective error messages. Over a period of ten years I found that the majority of defects appear from the edge cases of a functionality: API server downtime; complex real-time test data setups that produce an odd error message to the user which would not have been covered in the testing phases; third-party system failures that result in incorrect API responses, where it would have been difficult to bring those systems down and test them during the testing phases; or minor issues such as page load time when the end user holds a mobile with low RAM.
Hence whatever I had been repeatedly testing through Continuous Integration or regular automated test execution was relevant to barely 50% of the defects I am responsible for catching!
The more I worked on exploratory tests, the more opportunity I got to spot defects in the application functionality. But these defects are the difficult ones to get accepted by developers or business analysts as valid defects in the first few days. Over a period of time, once they are convinced, these defects get fixed and the application reaches its end customers with cleaner, improved quality.
"Every exploratory test is a step closer to discovering unbelievable defect -- that's what makes it so great."
Lesson 2: Automate the Defect Retests
When (application) defects are identified from automated test failures, the first instinct is to retest them once they are fixed and move on to the regression testing phase. But forgetting to automate the defect retest itself is a costly mistake. When a defect has been retested manually, it is highly likely that the branch carrying the defect fix gets missed in later code merges, or is not merged into the right code version that reaches end users. Unless the retest is written as an automated test with the right tags so that it keeps running from then on, it is easy for this defect to escape later in the releases and live with the code forever.
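As a minimal sketch of what such a tagged retest could look like (the tag names, defect reference and steps are illustrative, not taken from my framework), using the same Gherkin style as the later examples:
@defect-retest @JIRA-1234
Scenario: Error message is displayed when login is attempted with an expired password
  Given I open homepage
  When I SignIn as user with an expired password
  Then the password expired error message should be displayed
Scenarios tagged this way can then be filtered into every CI run; with cypress-cucumber-preprocessor the exact invocation depends on the version, for example npx cypress-tags run -e TAGS='@defect-retest'.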
"Sometimes in life, you're not always given a second chance but if you do, take advantage of it-Defect Retest is a second chance that gives you an opportunity to break the application once again"
Lesson 3: It is not about the quantity of tests, it is about the quality of those tests
Test reporting and test automation coverage statistics are the most common measurements in agile teams, and counting the number of tests written is both the first formula of measurement and the first mistake a team makes in its automation efforts! When the number of test automation scripts is counted periodically, it gives no insight into the overall scenario coverage, the coverage of edge cases, or the possibility of extending the tests towards untouched code. Hence the measurement of test automation scripts needs to follow a pragmatic approach of test code stability: how many times has the test shown a consistent pass rate when the application code has not changed? Once this stability has been reached, the next step is to explore what further tests should be added.
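One simple way to get that stability number is to run the same spec repeatedly against an unchanged build and record the pass rate. A minimal sketch using the Cypress Module API (the spec path, number of runs and reporting below are illustrative assumptions):
// stability-check.js - hypothetical helper to measure pass-rate consistency of one spec
const cypress = require('cypress')
async function measureStability (spec, runs = 5) {
  let passes = 0
  for (let i = 0; i < runs; i++) {
    const results = await cypress.run({ spec })
    // count the run as a pass only when no test in the spec failed
    if (results.totalFailed === 0) passes++
  }
  console.log(`Pass rate for ${spec}: ${passes}/${runs}`)
}
measureStability('cypress/integration/login.spec.js')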
"Consistency is everything. Nothing happens when a script is written. But it has to be consistently maintained, executed and updated time to time"
Lesson 4: Automate the XHR Responses along with UI Tests
It is always recommended to introduce API response verifications in addition to UI-level assertions. It may be easier to maintain two separate tests for API and UI, but it is highly likely that defects will be missed if both are not asserted in the same test.
//API POST LEVEL VERIFICATION
cy.server();
cy.route({
  url: "/users/**",
  method: "POST",
  response: { status: "Form saved!", code: 201 }
});
cy.get("form").submit();
//UI LEVEL VERIFICATION
cy.contains("Form saved!");
Writing a test that verifies the API response along with the UI text or behaviour helps us find defects whether the issue is in the API or in the UI code. Generally, these types of defects won't be identified easily if the API alone is asserted in a separate test (since the UI verification is missing from that test) or if the test covers the UI only (since it won't compare against the API response).
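The cy.server()/cy.route() stubbing shown above is the API available up to Cypress 5; from Cypress 6 onwards the same combined check can be written with cy.intercept(), asserting the real response and the UI in one test. A minimal sketch (the route, alias and expected text are illustrative assumptions):
//API LEVEL VERIFICATION: capture the real POST call instead of stubbing it
cy.intercept("POST", "/users/**").as("saveUser");
cy.get("form").submit();
cy.wait("@saveUser").its("response.statusCode").should("eq", 201);
//UI LEVEL VERIFICATION in the same test
cy.contains("Form saved!");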
Lesson 5: Automate web functionality and what else?
When test scripts are written by capturing objects or locators using tools such as Selenium, Cypress, UFT or any other test automation tool, the first wrong assumption is that the automated tests will fail exactly when the web pages change. In reality, when objects are slightly misplaced or pushed to the extreme right or bottom of the page, automated scripts still pass because the objects are still present on the page. That is where the strong need for visual testing arises! Tools such as Applitools, the webdriverio-visual-regression service and cypress-visual-regression are wonderful tools that capture the difference when a page's design is altered by design issues, environment issues or code conflicts. Unless specifically written as a visual test, these defects easily slip past regular automated tests.
Example Visual Test from Cypress:
Feature: Sample visual test
  Scenario: Perform visual test on a landing page
    Given I open homepage
    And I capture snapshot and compare "loginpage"
    When I SignIn as user
    Then the user name should be displayed
    And I capture snapshot and compare "homepage"
Step Definition:
import { Then } from 'cypress-cucumber-preprocessor/steps'

// capture a snapshot and compare it against the stored baseline image
Then('I capture snapshot and compare {string}', (name) => {
  cy.matchImageSnapshot(name)
})
Git Repository to download this code: Cypress-test-techniques
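cy.matchImageSnapshot used in the step definition above is not a built-in Cypress command; it has to be registered once in the support and plugins files. A minimal sketch, assuming the cypress-image-snapshot package (the threshold values are illustrative):
// cypress/support/commands.js
import { addMatchImageSnapshotCommand } from 'cypress-image-snapshot/command'
addMatchImageSnapshotCommand({ failureThreshold: 0.03, failureThresholdType: 'percent' })
// cypress/plugins/index.js
const { addMatchImageSnapshotPlugin } = require('cypress-image-snapshot/plugin')
module.exports = (on, config) => {
  addMatchImageSnapshotPlugin(on, config)
}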
We have discussed test automation, exploratory tests and visual testing so far. What does the future of test automation hold, after all?
Lesson 6: Accessibility is a core part of the future of test automation
Due to recent changes in the laws and regulations of major governments across the USA, UK, Canada, Australia and other countries, accessibility testing takes priority compared to other testing types. The Equality Act 2010 and disability are taken seriously in international law through each country's own regulations, hence accessibility becomes a core check when testing applications designed for end users. When bringing accessibility into the testing world, automating every accessibility check is not necessarily the goal, because most accessibility needs have to be tested using screen readers and assistive technologies.
Learn to test accessibility guidelines through keyboard tests:
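As a minimal sketch of such a keyboard-only check (the selectors and tab order are illustrative assumptions, and .tab() is provided by the cypress-plugin-tab package):
// verify that the login form can be reached and operated with the keyboard alone
cy.get('#username').type('admin').tab()
// focus should move to the password field, not get trapped or skipped
cy.focused().should('have.attr', 'id', 'password')
cy.focused().type('secret').tab()
// the submit button should be reachable and operable without a mouse
cy.focused().should('have.attr', 'type', 'submit')
cy.focused().type('{enter}')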
Reusable accessibility tests you can take from my git repository for free: Web Accessibility Test Cases
Lesson 7: Automate Accessibility Scans Regularly
Recently, I implemented axe DevTools in my test code; it runs relentlessly to capture application code violations against the accessibility guidelines and produces a detailed test report that is shared with developers so they can cross-check the violations. But an accessibility code scan by itself is not 100% automation of accessibility tests. After the code scan, I spend a good amount of time with screen readers, reading the web pages with different keyboard shortcuts, testing the behaviour against the WCAG 2.1 guidelines, analysing the confusing items and logging defects to influence developers to fix the gaps and give a better experience to end customers. Re-checking those defect fixes and object property updates (such as aria-label, title, id, aria-labelledby) through automated tests then helps me find failures when those objects are corrupted during later code merges. Hence writing accessibility code scan tests, manual accessibility tests and automated accessibility regression tests are all really useful health checks on my code's coverage of the accessibility guidelines.
Example Accessibility Code Scan Test:
Feature: Login Page Accessibility Verification on a website
  Scenario: check valid login page display with login fields and perform accessibility checks
    Given I open homepage
    And I perform accessibility audit using axe
Step definition related to accessibility audit:
import { Given } from 'cypress-cucumber-preprocessor/steps'
// loginOrangehrmPage is the page object that wraps the accessibility audit call
Given('I perform accessibility audit using axe', () => {
  loginOrangehrmPage.a11yAuditAxe()
})
Function related to accessibility audit:
// note: cy.injectAxe() must be called earlier in the test before cy.checkA11y can run
a11yAuditAxe () {
  cy.checkA11y(null, null, terminalLog)
},
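The terminalLog callback passed to cy.checkA11y above is not shown in this article; a minimal sketch of such a violation logger (the fields follow the usual axe violation shape, so treat the details as an assumption):
// log each axe violation to the Cypress command log
function terminalLog (violations) {
  violations.forEach(({ id, impact, description, nodes }) => {
    cy.log(`${impact} violation ${id}: ${description} (${nodes.length} node(s))`)
  })
}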
Example Automated Accessibility Regression Test:
Feature: Dropdown functionality tests of home page using accessibility navigation
  Scenario: Select sales manager job title in employees search list on PIM Tab using accessibility navigation
    Given I open homepage
    When I SignIn as user
    And I click on PIM tab of home page
    And I navigate to jobtitle dropdown using keyboard functionality
    And I click on dropdown of jobtitle
    And I press enter on search button of pim tab
    Then search results should be displayed successfully
Step definition used for the keyboard functionality:
// accessibility related methods (.tab() comes from the cypress-plugin-tab package)
navJobtitleusingKeyboard () {
  cy.xpath(supervisorPIMtab).click().tab()
  // assert that keyboard focus has landed on the job title dropdown
  cy.xpath(jobTitleDropdown_pim).should('have.focus')
},
Lesson 8: Migration of automation tools
Whenever a new tool becomes popular in the market, my SDETs usually show interest in migrating the test framework to the new test automation tool. This situation came up when wdio became famous while we were still using Selenium with Java. I took a tactical approach and carefully migrated the code from Java to JavaScript, preserving the test coverage through WebdriverIO, since the test scripts were small in number. A similar situation came when CypressIO became popular in 2019, at which point I took a strategic approach: the WebdriverIO scripts were maintained for the remaining regression cycles, and Cypress tests were written only for unit integration tests, where no other tool had originally helped with component testing. Making careful choices and decisions about test automation tools is the key to the longevity of test automation in any agile team. Now that Cypress is excelling with strong capabilities in its version 7.0, I have chosen to write new features and functionalities in Cypress while the existing Jenkins jobs run the WebdriverIO tests and give status updates. The key lesson here is that we don't need to migrate all of the existing code to a new test automation tool, since it may not always be necessary.
Lesson 9: Test code maintenance
'Reusability' is the gatekeeper of a well-maintained test framework. The more test scripts are written as reusable functions, the more opportunity the team has to control changes with minimal touch.
If your scripts are written in WebdriverIO, you may remember the switch-window API such as .switchTab, used to switch the focus to a particular window.
//Webdriverio version 4 code to switch to the target tab to perform tests
browser.switchTab(windowHandle);
//Webdriverio version 6 code to switch to the target tab - switch back via URL match
browser.switchWindow('google.com');
If this has been written at the page level or step definition level of the code, then when upgrading WebdriverIO to version 6, switching to .switchWindow is not simple, since each and every occurrence needs to be amended and tested to verify the right focus on the target pages. If it had initially been written in a reusable function (and called from the step definitions), it is very easy to make the change in just one line within that function, and most of your scripts would use the same reusable function to switch the focus!
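A minimal sketch of such a reusable wrapper (the file names, step text and cucumber import are illustrative, and in an async WebdriverIO setup the calls would also need await):
// support/windowHelper.js - the only place that changes on a WebdriverIO upgrade
export function switchToWindow (urlOrTitle) {
  // WebdriverIO v6+: match the target window by URL or title
  browser.switchWindow(urlOrTitle)
}
// stepDefinitions/window.steps.js - every step definition reuses the same helper
import { Then } from 'cucumber'
import { switchToWindow } from '../support/windowHelper'
Then('I switch focus to the {string} window', (urlOrTitle) => {
  switchToWindow(urlOrTitle)
})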
Whatever test automation tool we choose to write code with, maintaining the test automation framework and reviewing the tech stack is the real key to keeping a healthy test automation culture for the long term. If we don't upgrade the tool versions, the code will become obsolete one day; if reusability is seeded into every function of the code being written, it is easy to maintain as the teams grow in number. I chose to use the cucumber-boilerplate for WebdriverIO in 2018, and that choice allowed me to write code in 30+ different test code repositories across different projects over these three years. Whatever code I chose to write, the core test engine functions such as click or enter were sourced from the same reusable code repositories, maintained centrally and extended with additional functions from time to time. Hence maintenance of the test code is taken care of at two levels: horizontal as well as vertical. Agile teams fix the custom functions and data issues when tests fail at the horizontal level within a particular release's test code; similarly, the reusable functions are fixed and maintained at the vertical level when a new or better feature becomes available in testing tools such as WebdriverIO or CypressIO in their later versions.
Lesson 10: Innovation Value Chain
Back in 2017, I tried to adopt a test automation toolset model through the Innovation Value Chain model (from a Harvard Business Review article). After creating a WebdriverIO-based test automation framework skeleton in one feature team, I kept adding more and more features to make it stable and versatile, with tests such as BDD (Behaviour Driven Development), accessibility (using axe), visual testing and compatibility (using Chrome, Firefox, Internet Explorer and headless browsers).
Once this test automation framework was written in one team, I carefully extended the same framework with reusable components (managed centrally through node modules) for newly formed feature teams. Over a period of time, there were 30+ feature teams using the same framework and maintaining the reusable functions centrally, which means writing test automation code takes the least possible time, rather than each team spending long hours on one particular function, such as a click event, where most teams are trying to achieve the same end goal.
As a result, when any team writes a test for a new functionality, it takes only a few minutes to develop the complete end-to-end BDDs in the page object pattern, since the page objects, locators, Selenium functions and custom functions are all centrally managed in node modules and shared across different teams for better reusability!
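A minimal sketch of how a feature team consumes those centrally managed helpers (the package name, function names and locators below are purely illustrative assumptions):
// pageObjects/loginPage.js inside a feature team's repository
// the click and input wrappers come from the centrally maintained node module
import { clickElement, setInputValue } from '@myorg/test-core-commands'
const usernameField = '#username'
const passwordField = '#password'
const loginButton = '#loginBtn'
export function signInAsUser (user, password) {
  setInputValue(usernameField, user)
  setInputValue(passwordField, password)
  clickElement(loginButton)
}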
These ten lessons helped me scale the test framework from a small team to a most complex group of test chapters around web and API development. After learning 'how to script', there is one habit that has to come along with us to improve our code: 'practice'.
"Practice reflects innovative brain in your clean code "
The more we practise writing the code, the more we learn different methods to verify the application quality. Hence defects have no place to hide.
There were a few learning strategies I followed in this period to help the QA community while developing these automation test approaches. I hope they are useful to you if you are an SDET or a full stack/automation QA exploring the test automation space:
-Narayanan Palani
Like this article? Subscribe to Engineering Leadership and Digital Payments Hub to enjoy reading useful articles.
Interested in International Money Transfer with Transparent Fees across Multi-Currencies? Get Zing: Link
Disclaimer: Contents, posts and media used in this account of the author do not represent any organisation of any sort. Under no circumstances will the author be held responsible or liable in any way for any claims, loss, expenses or liabilities whatsoever.