AI in Testing: A Three-Part Series on How AI is Disrupting Testing
Daniel Lessa
I help businesses unlock value by leveraging practical AI solutions | Ex-Accenture
Artificial intelligence is quickly changing industries across the board, and software testing is no exception. Once a process dominated by manual effort, testing evolved with the introduction of test automation. Now, with AI stepping into the spotlight, we are at the beginning of a new era in testing, one that mirrors a broader shift across the software industry as a whole.
This series of three articles is my attempt to put on paper my thoughts on the current state of AI in testing and quality engineering, the near-term advancements we can expect, and what lies beyond. In each phase (what I call the Co-Pilot Era, the Agentic Era, and the Full AI Testing Era), testers will need to adapt to new technologies and methodologies to remain relevant. This series could also serve as the basis for a maturity model in AI testing.
Here's a sneak peek into the future: the Co-Pilot Era, then the Agentic Era, and finally the Full AI Testing Era.

Let's now dive into the first and current "wave":
Part 1
What's Now: The Co-Pilot Era of AI in Testing
We are currently in what I call the Co-Pilot Era of AI in testing, a phase defined by excitement and experimentation as organizations begin to explore the potential of AI. This entry-level stage of AI adoption focuses on decentralized, task-specific automation, where AI assists in isolated parts of the testing process rather than orchestrating the entire workflow.
Defining the Co-Pilot Era
The “co-pilot” era can be seen as a natural starting point in AI’s integration into software testing. In this phase, AI is not driving the entire testing process but instead working alongside human testers, automating individual tasks like test strategy generation, test case creation, unit test automation and so on. It’s a phase marked by cautious adoption, where teams are still testing the waters to see how reliable and consistent these tools are.
This is expected, as AI is new (and can be scary!), and there is both excitement and skepticism about what it can do. While many people and teams are eager to leverage AI, there is still a lot of uncertainty around its accuracy, reliability, and long-term effectiveness. AI tools are performing tasks in isolation, addressing small, specific problems rather than being fully integrated into a cohesive testing strategy.
Why Do Decentralized Tasks Define This Phase?
In the co-pilot era, tasks are decentralized. AI tools are often applied in silos, automating one element of testing, such as generating unit tests from code or analyzing requirements to suggest test cases. This reflects the skepticism that still surrounds the use of AI in testing: many organizations are hesitant to hand over the entire process to AI, preferring to experiment with specific tasks and assess the effectiveness of these tools before moving toward more integrated solutions. It is also a consequence of the small context windows and costs associated with early LLMs.
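To make this concrete, here is a minimal sketch of what task-specific, co-pilot-style assistance can look like: asking an LLM to draft unit tests for a single function. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the `discount` function and the model name are illustrative examples, not recommendations.

```python
# A co-pilot style task in isolation: draft unit tests for one function.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is an assumption.
import inspect

from openai import OpenAI


def discount(price: float, percent: float) -> float:
    """Toy function under test (hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


client = OpenAI()

prompt = (
    "Write pytest unit tests for the function below. "
    "Cover happy paths, boundary values, and invalid inputs.\n\n"
    + inspect.getsource(discount)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

# The tester stays in control: the output is a draft to review and edit,
# not something to commit blindly.
print(response.choices[0].message.content)
```

Note that the AI never sees the wider test suite or the application under test; it handles exactly one well-scoped task, which is the defining trait of this era.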
Human testers are still very much in control, with AI acting as a helpful assistant. The focus is more on augmenting the tester’s capabilities rather than replacing them. For now, AI handles repetitive, well-defined tasks, while testers maintain oversight, making critical decisions and performing the more complex, creative aspects of testing.
The Skepticism and the Learning Curve
Part of the reason for the slow, cautious adoption of AI in testing is skepticism around its accuracy and consistency. While AI has shown impressive results, there are still concerns about whether it can be trusted to consistently perform at the level of a human tester. Many teams are unsure how to fully integrate AI into their workflows, and as a result, they are keeping AI confined to individual tasks.
This phase is also about learning—organizations are experimenting with AI tools, figuring out where they provide the most value, and slowly building trust in the technology. It’s a necessary step, as it allows for the development of best practices, greater understanding of AI’s capabilities, and, over time, increasing confidence in its role in testing.
Some examples of AI-powered testing in the "co-pilot" era:
- Generating unit tests directly from existing code
- Analyzing requirements documents to suggest test cases (sketched below)
- Drafting test strategies for human review
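As a concrete illustration of the second item, here is a minimal sketch of requirements-to-test-case generation, again assuming the OpenAI Python SDK; the requirement text and the model name are made-up examples.

```python
# Sketch: turning a plain-text requirement into suggested test cases.
# Assumes the OpenAI Python SDK; the requirement and model are examples.
from openai import OpenAI

client = OpenAI()

requirement = (
    "The checkout page must reject expired credit cards and show the "
    "message 'Card expired' without charging the customer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a QA analyst. Given a requirement, list test "
                "cases as: ID, title, steps, expected result."
            ),
        },
        {"role": "user", "content": requirement},
    ],
)

# The output is a starting point for the tester to curate, not a final plan.
print(response.choices[0].message.content)
```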
Critical skills for testers to remain relevant in this stage:
In the Co-Pilot Era, testers are working alongside AI, leveraging its capabilities to automate specific tasks like test strategy creation, test case generation, and unit test automation. AI systems are still task-specific and largely dependent on human oversight, so testing professionals need to focus on understanding how to work effectively with these tools.
Key Focus Areas:
- Prompt engineering: learning to phrase and structure requests so AI tools return accurate, relevant output.
- Python: scripting skills to drive AI tools and wire their output into existing test workflows.
Additional Skills:
Why It’s Important:
In this era, AI can only do so much on its own, and human testers need to direct and optimize its capabilities. Mastering prompt engineering and Python allows testers to get the most out of AI tools while ensuring the results are accurate, relevant, and reliable.
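To illustrate what prompt engineering can look like in practice, here is a small, self-contained Python sketch of one common convention: a reusable template that fixes the role, context, task, constraints, and output format. The structure and the `build_test_prompt` helper are my own illustration, not a standard.

```python
# Sketch: prompt engineering as a repeatable discipline rather than ad hoc
# chat. The role/context/task/constraints/format structure is one common
# convention, not an official standard; the helper below is hypothetical.

def build_test_prompt(feature: str, acceptance_criteria: list[str]) -> str:
    criteria = "\n".join(f"- {c}" for c in acceptance_criteria)
    return (
        "Role: You are a senior QA engineer.\n"
        f"Context: We are testing the '{feature}' feature.\n"
        "Task: Propose test cases covering the acceptance criteria below.\n"
        f"Acceptance criteria:\n{criteria}\n"
        "Constraints: include at least one negative test per criterion.\n"
        "Output format: numbered list with title, steps, expected result."
    )


# Usage: build the prompt once, then send it to whichever LLM your team uses.
prompt = build_test_prompt(
    "password reset",
    [
        "Reset link expires after 30 minutes",
        "Old password stops working after a successful reset",
    ],
)
print(prompt)
```

A template like this makes results reviewable and repeatable across the team, which is exactly the kind of oversight the co-pilot era demands.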
Over the next couple of weeks, I'll share my thoughts on what I consider the two next evolutionary waves of testing impacted by AI.
(For those of you who read this far, let me know your thoughts: whether you agree or disagree, and what other trends you are spotting in the testing industry! It is a fascinating time we are living in!)