Curious case of AI code generation and Test Automation Engineers

Well, there are already enough articles and debates about the abilities of AI code generators like Microsoft Copilot and Amazon Q, and about the future relevance of software developers. While those debates go on, the next obvious shift in focus will be to a species of the engineering organization whose importance and relevance is often questioned by the industry (well, not by all of it): SDETs.

*SDET - Software Development Engineer in Test


Will AI code generation tools impact SDETs?

Yes.

So the role of SDETs becomes irrelevant?

No.


To understand the two answers above, we first need to understand the roles SDETs play in an engineering organization. In a typical fast-moving organization, SDETs wear multiple hats from time to time:

  1. QA craftsman
  2. Automation expert
  3. Program manager for collaboration across pods
  4. Early customer to Product and Design teams, providing usability feedback
  5. Engineer who builds tooling and frameworks to increase productivity
  6. Ops team that ensures the stability of tests
  7. Product owner who runs bug bashes for features

Glad to know this, but it contradicts your statements above. How exactly will AI code generation tools affect me?

Disclaimer: As I mentioned earlier, I am not going to revisit the debate about SDEs and human intelligence; this article sticks to SDETs.

Approximately 20% of the code written by SDETs is boilerplate. This is where AI is going to increase productivity, by generating the basic setup and boilerplate tests.
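
To make "boilerplate" concrete, here is a hedged sketch of the kind of setup code a generator can now produce in seconds: a shared pytest fixture for an authenticated API session. The base URL and token handling are placeholders, not a real service.

import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder service

@pytest.fixture(scope="session")
def api_session():
    # One authenticated session shared across the whole test run.
    session = requests.Session()
    session.headers.update({"Authorization": "Bearer <token>"})  # placeholder token
    yield session
    session.close()

def test_health(api_session):
    response = api_session.get(f"{BASE_URL}/health")
    assert response.status_code == 200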

Opinion: In the future, these tools may evolve to a point where they can analyze your product requirements, code base, tests, and so on, and come up with most of the test code except your business/journey validations. But that is going to be a long game, given the security concerns around sharing code bases and requirements. The compelling case for it will be accelerated time to market.

Even with all these advancements, the role of SDETs will grow considerably: providing a centralized data-provisioning platform, mocking infrastructure custom-made for your organization, and the infrastructure for service-level and end-to-end automation tests (needless to say, on top of ever-evolving product changes and endless negotiations with dependent teams!).
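
As an example of that custom infrastructure, here is a minimal sketch of stubbing a downstream dependency with the `responses` library, so service tests can run without the real system. The payments endpoint and payload are hypothetical.

import requests
import responses

@responses.activate
def test_order_flow_with_mocked_payments():
    # Register a canned reply for the (hypothetical) payments dependency.
    responses.add(
        responses.POST,
        "https://payments.internal.example.com/charge",
        json={"status": "approved", "charge_id": "ch_123"},
        status=200,
    )
    resp = requests.post("https://payments.internal.example.com/charge", json={"amount": 42})
    assert resp.json()["status"] == "approved"

An organization-specific mocking layer is essentially many of these, plus contract upkeep, which is exactly the part no generator owns for you.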

Enough talk, let's see some proof.


The examples below were tried with Amazon Q as a VS Code extension:

Build base RESTful API automation methods:

First, let's look at a very generic use case.

I wrote the comment below in my Python file:

# write generic methods to validate restful apis in python

And voilà:

# write generic methods to validate restful apis in python
import requests

def get(url, params=None):
    """
    Make a GET request to the specified URL.
    """
    return requests.get(url, params=params)

def post(url, data):
    """
    Make a POST request to the specified URL with the provided data.
    """
    return requests.post(url, json=data)

def put(url, data):
    """
    Make a PUT request to the specified URL with the provided data.
    """
    return requests.put(url, json=data)

def delete(url):
    """
    Make a DELETE request to the specified URL.
    """
    return requests.delete(url)

def validate_get(url, expected_status_code=200, expected_key=None, expected_value=None):
    """
    Validate a GET request to the specified URL.
    """
    response = get(url)
    assert response.status_code == expected_status_code, f"Unexpected status code: {response.status_code}"
    body = response.json()
    if expected_key and expected_value:
        assert expected_key in body, f"Expected key '{expected_key}' not found in response"
        assert body[expected_key] == expected_value, f"Unexpected value for key '{expected_key}': {body[expected_key]}"
    return body

def validate_post(url, data, expected_status_code=201):
    """
    Validate a POST request to the specified URL with the provided data.
    """
    response = post(url, data)
    assert response.status_code == expected_status_code, f"Unexpected status code: {response.status_code}"
    return response.json()

def validate_put(url, data, expected_status_code=200):
    """
    Validate a PUT request to the specified URL with the provided data.
    """
    response = put(url, data)
    assert response.status_code == expected_status_code, f"Unexpected status code: {response.status_code}"
    return response.json()

def validate_delete(url, expected_status_code=204):
    """
    Validate a DELETE request to the specified URL.
    """
    response = delete(url)
    assert response.status_code == expected_status_code, f"Unexpected status code: {response.status_code}"
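These helpers do work as-is; for instance, a quick hypothetical smoke check against httpbin.org, a public echo service, used here purely for illustration:

# httpbin.org echoes requests back; GET responses carry a "url" key
# and POST responses carry the posted payload under "json".
body = validate_get("https://httpbin.org/get", expected_status_code=200)
print(body["url"])

echoed = validate_post("https://httpbin.org/post", {"name": "User1"}, expected_status_code=200)
print(echoed["json"])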

Well, you may not be very impressed; it is a very generic use case, and a simple search could probably produce the same result. Let's move on to the next experiment.

Write test cases for my FastAPI-based web services:

Context:

I have a few FastAPI-based web services running on my local machine, exposing the endpoints below (a rough sketch of such a service follows the list). If you don't know about FastAPI yet, don't worry, I am going to write a blog about it soon.

  1. add_user
  2. update_user
  3. get_user
  4. get_users
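
For reference, here is a minimal hypothetical sketch of such a service, an in-memory main.py; my actual implementation differs, but this captures the contract the generated tests below assume:

# main.py - hypothetical minimal user service with an in-memory store.
from fastapi import FastAPI, HTTPException

app = FastAPI()
users = {}

@app.post("/users/", status_code=201)
def add_user(user: dict):
    users[user["id"]] = user
    return user

@app.get("/users/")
def get_users():
    return list(users.values())

@app.get("/user/{user_id}")
def get_user(user_id: int):
    if user_id not in users:
        raise HTTPException(status_code=404, detail="User not found")
    return users[user_id]

@app.put("/users/{user_id}")
def update_user(user_id: int, user: dict):
    users[user_id] = user
    return user

@app.delete("/users/{user_id}")
def delete_user(user_id: int):
    users.pop(user_id, None)
    return {"message": "User deleted successfully"}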

I asked Amazon Q to write test cases for my FastAPI web services project.

And voilà:

from fastapi.testclient import TestClient
from main import app  # Assuming your FastAPI app is in main.py
import pytest

client = TestClient(app)

DEFAULT_USER = {"id": 1, "name": "User1", "email": "user1@example.com"}
UPDATED_USER = {"id": 1, "name": "User2", "email": "user2@example.com"}

@pytest.fixture
def create_user():
    response = client.post("/users/", json=DEFAULT_USER)
    assert response.status_code == 201
    yield
    client.delete(f"/users/{DEFAULT_USER['id']}")

def test_get_all_users(create_user):
    response = client.get("/users/")
    assert response.status_code == 200
    assert response.json()[0] == DEFAULT_USER

def test_get_single_user(create_user):
    response = client.get(f"/user/{DEFAULT_USER['id']}")
    assert response.status_code == 200
    assert response.json() == DEFAULT_USER

def test_create_user():
    response = client.post("/users/", json=DEFAULT_USER)
    assert response.status_code == 201
    assert response.json() == DEFAULT_USER

def test_update_user(create_user):
    response = client.put(f"/users/{UPDATED_USER['id']}", json=UPDATED_USER)
    assert response.status_code == 200
    assert response.json() == UPDATED_USER

def test_delete_user(create_user):
    response = client.delete(f"/users/{DEFAULT_USER['id']}")
    assert response.status_code == 200
    assert response.json() == {"message": "User deleted successfully"}

def test_get_non_existent_user():
    response = client.get("/user/999")
    assert response.status_code == 404
    assert response.json() == {"detail": "User not found"}        
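
Assuming the tests sit next to main.py in a file such as test_users.py (the file name here is illustrative), they run with plain pytest:

pytest test_users.py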

What about UI Tests?

I explored Playwright tests for a website and, as we saw in the earlier examples, it covers only the bare minimum. While it can help you optimize your code for best practices, configure test runs as part of GitHub Actions, and so on, in its current state the generated coverage is minimal.

What did Amazon Q suggest when I asked it to set up a Playwright automation framework?

npm init -y
npm install playwright        
const { chromium } = require('playwright');

async function setupPlaywright() {
  const browser = await chromium.launch({
    headless: false // Set to true for headless mode
  });
  const context = await browser.newContext();
  const page = await context.newPage();

  return { browser, context, page };
}

async function closeBrowser(browser) {
  await browser.close();
}

async function navigateToUrl(page, url) {
  await page.goto(url);
}

async function takeScreenshot(page, path) {
  await page.screenshot({ path: path });
}

async function fillForm(page, selector, value) {
  await page.fill(selector, value);
}

async function clickElement(page, selector) {
  await page.click(selector);
}

module.exports = {
  setupPlaywright,
  closeBrowser,
  navigateToUrl,
  takeScreenshot,
  fillForm,
  clickElement
};
Save those methods as framework_setup.js; a sample test, test.js, then uses them:
const { 
  setupPlaywright, 
  closeBrowser, 
  navigateToUrl, 
  takeScreenshot, 
  fillForm, 
  clickElement 
} = require('./framework_setup');

(async () => {
  const { browser, page } = await setupPlaywright();

  try {
    await navigateToUrl(page, 'https://example.com');
    await takeScreenshot(page, 'example.png');

    // Example of filling a form and clicking a button
    // await fillForm(page, '#username', 'testuser');
    // await fillForm(page, '#password', 'testpass');
    // await clickElement(page, '#submit-button');

    console.log('Test completed successfully');
  } catch (error) {
    console.error('An error occurred:', error);
  } finally {
    await closeBrowser(browser);
  }
})();
        

Run your test:

node test.js        

This setup provides a basic framework for web automation using Playwright. It includes functions for common tasks like navigating to a URL, taking screenshots, filling forms, and clicking elements.

This is interesting. So, is it going to bring my effort down from a full sprint to a few hours?

As I mentioned earlier, it covers a lot of the obvious tests, but the role of an SDET is not to test and automate only the obvious. Also, these tests are mostly focused on individual service-level checks and do not factor in end-to-end journeys.

When the complexity of the code increases, it will help you with the base setup, which you no longer have to spend time on, but you will still be responsible for building the business logic, test infrastructure, data needs, circuit-breaker cases, and so on.
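
As a concrete example of a circuit-breaker-style case you still own, here is a hedged sketch that injects transient failures with the `responses` library to verify retry behavior; the inventory endpoint and the retry helper are hypothetical:

import requests
import responses

def get_with_retries(url, attempts=3):
    # Naive retry helper: try up to `attempts` times, return the last response.
    for _ in range(attempts):
        resp = requests.get(url)
        if resp.status_code == 200:
            return resp
    return resp

@responses.activate
def test_retries_survive_transient_failures():
    url = "https://inventory.internal.example.com/stock/42"
    # Two transient 503s, then a healthy 200 (replies are served in order).
    responses.add(responses.GET, url, status=503)
    responses.add(responses.GET, url, status=503)
    responses.add(responses.GET, url, json={"sku": 42, "count": 7}, status=200)

    resp = get_with_retries(url)
    assert resp.status_code == 200
    assert resp.json()["count"] == 7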


End note:

While these offerings are still at a nascent stage, they provide a compelling case for startups, and even for experienced developers, to increase productivity. Make them part of your arsenal, because they are going to pay off in many ways.

#microsoftcopilot #amazonq #codegeneration #sdet #aitesting
