Are you testing like it's 1999?

So, here is a general QA process overview that hasn't changed much over the years. (Note: I'll stop at the creation of automated scripts and won't go deep into execution and results.)

1. The tester gets assigned a feature or enhancement to be tested.

2. While the developers build the functionality, the tester reviews the requirement.

3. Based on their knowledge of the domain and the application, they validate that the requirements are clear and address the edge cases. If not, they ask questions and have the requirements clarified.

4. The tester then starts creating test cases that cover the requirements.

5. After the test cases have been written, the tester writes the manual scripts for each test.

6. Once the manual scripts have been written, the tester moves on to generating automated test scripts for them.

While there is a lot of focus on step 6, and frameworks and GenAI are now being used to generate some of these automated scripts, there is still a lot of inefficiency in the first five steps. They can take anywhere from a few hours to weeks to complete, depending on the complexity of the requirements.
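For context, here is the kind of artifact step 6 typically produces: a minimal automated test using Selenium's Python bindings, behind a manual script such as "log in with an invalid password and verify the error banner." The URL, element IDs, and expected message are all hypothetical placeholders, not from any specific project.

```python
# A generic illustration of a step-6 automated script (Selenium, Python bindings).
# All locators and the expected banner text are made up for this example.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_invalid_login_shows_error():
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("alice")
        driver.find_element(By.ID, "password").send_keys("wrong-password")
        driver.find_element(By.ID, "submit").click()
        banner = driver.find_element(By.CSS_SELECTOR, ".error-banner")
        assert "Invalid username or password" in banner.text
    finally:
        driver.quit()
```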

I want to explore how GenAI changes the way we think about requirements analysis and test case/script generation. Here is a new way of executing steps 1-5 using an LLM-enabled agent that is given additional context: the enterprise testing framework libraries and the application documentation.

  1. The tester gets assigned a feature or enhancement to be tested.
  2. The Agent picks up the requirement. With the application documentation available as context, it analyzes the requirement for completeness and updates JIRA (or whatever else you are using) with its observations and recommendations.
  3. The recommendations are reviewed and approved.
  4. The Agent then generates test cases and manual test scripts for each test case.
  5. The Agent, also given the context of the testing frameworks being used, generates the files (e.g. Cucumber, Selenium, etc.) needed for automated execution of the tests (a minimal sketch of this flow follows the list).
  6. The Agent updates the Jira requirement item and associates these tests with it.
  7. The tester reviews the test cases and executes the tests.
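To make the flow concrete, here is a minimal sketch of steps 2-6 in Python. It assumes an OpenAI-compatible chat API; the model name, documentation file paths, issue key, and the post_to_jira() helper are illustrative stand-ins for whatever your enterprise stack provides.

```python
# Minimal sketch of the agent flow (steps 2-6). Assumes an OpenAI-compatible
# chat API; post_to_jira() is a hypothetical helper for your tracker of choice.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Enterprise context the agent is grounded in; file paths are illustrative.
APP_DOCS = open("docs/application_overview.md").read()
FRAMEWORK_GUIDE = open("docs/testing_framework_conventions.md").read()

def ask(system: str, user: str) -> str:
    """One LLM round-trip with the enterprise context in the system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

def analyze_requirement(requirement: str) -> str:
    # Step 2: review the requirement for completeness against the app docs.
    return ask(
        "You are a QA analyst. Application documentation:\n" + APP_DOCS,
        "Review this requirement for gaps, ambiguities, and missing edge "
        "cases. List clarifying questions and recommendations:\n" + requirement,
    )

def generate_tests(requirement: str) -> str:
    # Steps 4-5: test cases, manual scripts, and automation artifacts.
    return ask(
        "You are a test author. Follow these framework conventions:\n" + FRAMEWORK_GUIDE,
        "For this requirement produce (a) test cases, (b) manual test "
        "scripts, and (c) a Cucumber .feature file plus Selenium "
        "step-definition stubs:\n" + requirement,
    )

def post_to_jira(issue_key: str, body: str) -> None:
    """Hypothetical helper: attach the agent's output to the Jira item,
    e.g. via the `jira` package's add_comment()."""
    ...

if __name__ == "__main__":
    req = "Notify user of error"  # the ambiguous example from this post
    post_to_jira("PROJ-123", analyze_requirement(req))  # steps 2-3, then human approval
    post_to_jira("PROJ-123", generate_tests(req))       # steps 4-6
```

The design choice worth noting is that the enterprise context (application docs, framework conventions) rides along in the system prompt, so the same two calls work unchanged for any requirement the agent picks up.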

We have seen this model reduce test effort by over 80%. Testers note that the Agents are sometimes better at recommending edge and negative test cases, and often better at analyzing requirements thoroughly for missed functionality. For example, "Notify user of error" is an ambiguous requirement: How should the user be notified? What should the error message be?
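To illustrate, the same hypothetical agent could be asked to surface those clarifying questions in a structured form, reusing the ask() helper from the sketch above. The JSON-only instruction is an assumption about model behavior, not a guarantee; production code would validate the output.

```python
# Illustrative only: requesting structured clarifying questions so they can be
# posted to Jira as discrete, answerable items. Reuses ask() from the sketch
# above and assumes the model honors the JSON-only instruction.
import json

raw = ask(
    "You are a QA analyst. Respond with only a JSON array of strings.",
    'Requirement: "Notify user of error". List the clarifying questions '
    "a tester should raise before writing tests.",
)
for question in json.loads(raw):
    print("-", question)  # e.g. "How is the user notified: toast, modal, email?"
```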

It's time to rethink QA teams, armed with Generative AI Agents!
