Stronger Assertions, Stronger Software: The QA Secret AI Doesn't Know

By Eduard Dubilyer, CTO of Skipper Soft

Introduction

As the CTO of Skipper Soft, a company specializing in scalable and high-quality software testing solutions, I frequently encounter a critical issue in test automation: teams rush to increase coverage but neglect the quality of their assertions. With years of experience optimizing QA strategies for complex systems, I’ve seen firsthand how poor assertions lead to unreliable software releases and costly maintenance. This article breaks down why high-quality assertions are non-negotiable in test automation.

A critically important aspect of any automated test is high-quality assertions. These determine whether a test genuinely verifies system behavior or merely creates an illusion of control.

Today, powerful AI tools like GitHub Copilot, Cursor, and ChatGPT help developers write code, including tests. However, these tools often lack a deep understanding of test requirements, leading to assertions that may not fully validate the expected behavior. AI-generated tests are often too generic: they miss critical edge cases and fail to capture the intent behind the code. While AI can assist, it cannot replace an experienced engineer’s intuition for edge cases and business logic validation.

This article will discuss why writing high-quality assertions is important, why AI cannot replace an expert, and what best practices should be followed in test automation. We'll use Jest for examples, but it’s essential to understand that these principles are universal and apply to any testing framework.

Why AI Won’t Replace an Expert in Testing

AI tools excel at code auto-completion but struggle with the nuances of software testing. While they can generate basic test structures, they often miss critical aspects such as edge cases, meaningful assertions, and deep validation of expected behavior. This leads to tests that appear valid but fail to detect real issues.

1. Misunderstanding the Context

Copilot and Cursor may suggest assertions based on existing code but don’t always grasp what exactly should be tested.

Example: Copilot may generate a weak assertion

const user = getUser();
expect(user).toBeDefined(); // Weak test! We need more details.

Better:

expect(user.name).toBe("Alice");
expect(user.age).toBeGreaterThan(18);
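
To see why the weak version gives a false sense of security, consider a minimal sketch (the getUser implementation below is hypothetical, purely for illustration):

// Hypothetical implementation with a bug: it returns the wrong user.
function getUser() {
  return { name: "Bob", age: 15 };
}

test("weak assertion misses the bug", () => {
  expect(getUser()).toBeDefined(); // Passes: any non-undefined value satisfies it.
});

test("specific assertions catch the bug", () => {
  const user = getUser();
  expect(user.name).toBe("Alice");      // Fails: "Bob" is not "Alice".
  expect(user.age).toBeGreaterThan(18); // Would also fail: 15 is not greater than 18.
});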

2. Circular Assertions

A common mistake is testing code by comparing it to itself. Such tests will never fail, even if the code is broken.

Example of a mistake:

const inventory = getInventory();
expect(inventory).toEqual(getInventory()); // Will never fail!

Better:

const expected = { cheesecake: 1, macaroon: 2 };
expect(getInventory()).toEqual(expected); // Compare against an independent, hard-coded expectation.
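
A quick sketch shows why the circular version can never fail, even when the implementation is broken (the getInventory body below is hypothetical):

// Hypothetical broken implementation: the macaroons are lost.
function getInventory() {
  return { cheesecake: 1 };
}

test("circular assertion never fails", () => {
  expect(getInventory()).toEqual(getInventory()); // Compares the bug to itself: always green.
});

test("hard-coded expectation catches the bug", () => {
  expect(getInventory()).toEqual({ cheesecake: 1, macaroon: 2 }); // Fails: macaroon is missing.
});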

3. Ignoring Asynchronous Operations

AI might omit await, so assertions run before the asynchronous work completes; the test then checks the wrong value or passes without ever verifying the real result.

Bad test:

test("fetches data", () => { ? const data = fetchData(); ? expect(data.value).toBe("someValue"); });        

Correct test:

test("fetches data", async () => { ? const data = await fetchData(); ? expect(data.value).toBe("someValue"); });        

Best Practices for Writing Assertions

For tests to serve their purpose—catching bugs rather than just "turning green"—it’s essential to follow best practices. Failing to do so can have real-world consequences. For instance, a poorly written assertion in a financial application may allow incorrect calculations to go unnoticed, leading to financial discrepancies. In healthcare software, weak assertions could fail to detect critical errors in patient data processing, potentially causing serious harm. Ensuring that assertions are robust and meaningful helps prevent these kinds of issues.

1. Strict Assertions

Bad:

expect(user).toBeTruthy();        

Why is this bad? toBeTruthy() accepts any truthy value (1, {}, "abc", etc.).

Better:

expect(user.isActive).toBe(true);        
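
A short sketch of how the loose matcher hides a regression (the user object here is hypothetical):

// Suppose a refactor accidentally turns the flag into a string.
const user = { isActive: "false" };

test("loose matcher hides the bug", () => {
  expect(user).toBeTruthy();          // Passes: any object is truthy.
  expect(user.isActive).toBeTruthy(); // Still passes: the string "false" is truthy!
});

test("strict matcher exposes the bug", () => {
  expect(user.isActive).toBe(true);   // Fails: "false" is not the boolean true.
});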

2. Using Specific Matchers

Jest provides precise, expressive matchers that make tests more readable. Similar capabilities exist in other testing frameworks like Mocha, Chai, and Jasmine, reinforcing the universality of these best practices across different environments.

It’s better to use precise checks:

expect(array).toHaveLength(3);
expect(obj).toHaveProperty("username", "admin");
expect(string).toMatch(/hello/i);
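
For contrast, here is a rough sketch of the looser checks these matchers replace; the dedicated matchers above fail with the actual length, property value, or string, which makes diagnosis much faster:

// Looser equivalents that only report "expected true, received false" on failure:
expect(array.length === 3).toBe(true);
expect(obj.username === "admin").toBe(true);
expect(/hello/i.test(string)).toBe(true);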

3. Testing Exceptions

Bad:

expect(() => myFunc()).toThrow();        

Better:

expect(() => myFunc()).toThrow("Invalid input");        

4. Asynchronous Assertions

Bad test (the assertion is not awaited, so the test may finish before the response arrives):

expect(fetchData()).resolves.toEqual({ data: "someValue" });        

Good test:

test("fetches data correctly", async () => { ? await expect(fetchData()).resolves.toEqual({ data: "someValue" }); });        

5. Avoid False Green Tests

Jest allows verifying that at least one assertion is executed in a test:

expect.hasAssertions();        

Example usage:

test("operation fails with invalid data", () => { ? expect.hasAssertions(); ? try { ? ? addToInventory("cheesecake", "not a number"); ? } catch (e) { ? ? expect(inventory.get("cheesecake")).toBe(0); ? } });        

Additional Best Practices

1. Enforcing Assertions with Linters

Configuring a linter rule is a great way to ensure that every test includes at least one assertion. The Jest ESLint plugin provides jest/expect-expect, which enforces the presence of expect() in test cases.

Configuration:

{
  "rules": {
    "jest/expect-expect": "error"
  }
}

This ensures that tests don’t falsely pass due to missing assertions, reducing the risk of misleading results in CI/CD pipelines.
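
As a sketch, this is the kind of test the rule flags: it exercises the code but never asserts anything, so it can only ever pass:

// Flagged by jest/expect-expect: the test contains no expect() call.
test("adds item to inventory", () => {
  addToInventory("cheesecake", 2); // Green even if the function does nothing at all.
});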

2. Clear Distinction Between Test Failures and Automation Failures

A crucial rule in assertions: failing assertions should indicate product issues, while exceptions should indicate automation issues. This principle is especially effective in languages that support checked and unchecked exceptions, such as Java.

  • If an assertion fails, it means the product behavior is incorrect.
  • If an exception occurs, it means the test itself is broken (e.g., API request failed, bad test setup, missing dependencies).

By following this principle, debugging test failures becomes more efficient, as engineers can quickly determine whether they need to fix the product or the test itself.
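
A rough Jest illustration of the same split (the endpoint and response shape are hypothetical, and a runtime with a global fetch is assumed):

let user;

beforeAll(async () => {
  // Automation concern: if the request itself fails, throw.
  // Jest reports this as a setup error, not as a product defect.
  const response = await fetch("https://api.example.com/users/42");
  if (!response.ok) {
    throw new Error(`Test setup failed: HTTP ${response.status}`);
  }
  user = await response.json();
});

test("returns an adult, active user", () => {
  // Product concern: a failing assertion here means the product misbehaves.
  expect(user.isActive).toBe(true);
  expect(user.age).toBeGreaterThanOrEqual(18);
});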

Conclusion

Strong, meaningful assertions are the foundation of reliable automated tests. Without them, tests might pass even when the code contains bugs.

AI won’t replace an expert. While AI can assist in generating code, it lacks critical thinking skills and may misinterpret context.

Following best practices increases test reliability:

  • Use strict assertions (toBe, toEqual, toHaveProperty).
  • Validate exceptions (toThrow).
  • Handle asynchronous operations properly (await expect().resolves).
  • Ensure tests execute assertions (expect.hasAssertions()).
  • Configure a linter to enforce assertions.
  • Differentiate between product issues (assertions) and automation issues (exceptions).

At Skipper Soft, we help engineering teams implement test automation practices that drive measurable business impact. If your team is struggling with unreliable tests, let’s talk.
