AI in Testing: A Three-Part Series on How AI is Disrupting Testing
Evolution of testing: from AI assistants helping with individual tasks, to AI agents managing complex workflows to AI models fully managing testing

Artificial intelligence is quickly changing industries across the board, and software testing is no exception. Once a process dominated by manual efforts, testing has evolved with the introduction of test automation. Now, with AI stepping into the spotlight, we are at the beginning of a new era in testing—one that reflects a huge change in the software industry altogether.

This series of three articles is my attempt to put on “paper” my thoughts on the current state of AI in testing and quality engineering, the near-term advancements we can expect, and what lies beyond. In each phase—what I call the Co-Pilot Era, the Agentic Era, and the Full AI Testing Era—testers will need to adapt to new technologies and methodologies to remain relevant. This series could also serve as a basis for a maturity model in AI Testing.

Here’s a sneak peek into the future:

  • What’s Now: AI assists with specific tasks like test case generation and automation, but human oversight is still key.
  • What’s Next: AI will soon orchestrate entire testing workflows autonomously, making processes faster and smarter.
  • What’s Beyond: AI will fully control testing processes, making real-time decisions without human intervention.

Let's now dive into the first and current “wave”:


Part 1

What's Now: The Co-Pilot Era of AI in Testing

We are currently in what I call the Co-Pilot Era of AI in testing, a phase defined by excitement and experimentation as organizations begin to explore the potential of AI. This entry-level stage of AI adoption focuses on decentralized, task-specific automation, where AI assists in isolated parts of the testing process rather than orchestrating the entire workflow.

Defining the Co-Pilot Era

Human testers being helped by multiple AI assistants on isolated tasks

The “co-pilot” era can be seen as a natural starting point in AI’s integration into software testing. In this phase, AI is not driving the entire testing process but instead working alongside human testers, automating individual tasks like test strategy generation, test case creation, unit test automation and so on. It’s a phase marked by cautious adoption, where teams are still testing the waters to see how reliable and consistent these tools are.

This is expected, as AI is new (and can be scary!), and there's both excitement and skepticism about what it can do. While many people and teams are eager to leverage AI, there is still a lot of uncertainty around its accuracy, reliability, and long-term effectiveness. AI tools are performing tasks in isolation—addressing small, specific problems rather than being fully integrated into a cohesive testing strategy.

Why Decentralized Tasks Define This Phase

In the co-pilot era, tasks are decentralized. AI tools are often applied in silos, automating one element of testing, such as generating unit tests from code or analyzing requirements to suggest test cases. This is a reflection of the skepticism that still surrounds the use of AI in testing—many organizations are hesitant to hand over the entire process to AI, preferring to experiment with specific tasks to assess the effectiveness of these tools before moving toward more integrated solutions. It is also a consequence of the small context windows and the costs associated with using early LLMs.

Human testers are still very much in control, with AI acting as a helpful assistant. The focus is more on augmenting the tester’s capabilities rather than replacing them. For now, AI handles repetitive, well-defined tasks, while testers maintain oversight, making critical decisions and performing the more complex, creative aspects of testing.

The Skepticism and the Learning Curve

Part of the reason for the slow, cautious adoption of AI in testing is the skepticism around its accuracy and consistency. While AI has shown impressive results, there are still concerns about whether it can be trusted to perform at the same level as a human tester consistently. Many teams are unsure about how to fully integrate AI into their workflows, and as a result, they’re keeping AI confined to individual tasks.

This phase is also about learning—organizations are experimenting with AI tools, figuring out where they provide the most value, and slowly building trust in the technology. It’s a necessary step, as it allows for the development of best practices, greater understanding of AI’s capabilities, and, over time, increasing confidence in its role in testing.

Some examples of AI-powered testing in the “co-pilot” era:

  • Testing vendors such as UiPath, Tricentis, and Applitools introducing AI features into their products.
  • Software vendors such as Salesforce adding AI features directly into their own products (I anticipate this will become far more widespread).
  • Open-source Retrieval-Augmented Generation (RAG) implementations to assist with writing test cases, categorising defects, or finding correlations between production incidents and earlier stages of the SDLC (a simplified sketch of the retrieval step follows this list).
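
To make that last item a bit more concrete, here is a minimal, deliberately simplified sketch of the retrieval half of such a flow: relevant requirements and past defects are pulled into the prompt before the model is called. This is an illustrative assumption rather than any specific tool's implementation—the keyword-overlap scoring stands in for a real embedding-based vector search, and the resulting prompt would then be passed to whatever LLM client your team uses.

```python
# Minimal sketch of retrieval-augmented test case generation.
# Real implementations would use embeddings and a vector store instead of keyword overlap.

KNOWLEDGE_BASE = [
    "REQ-101: Users must be able to reset their password via an emailed link.",
    "REQ-102: Login is locked after five consecutive failed attempts.",
    "DEF-877: Password reset emails were not sent when the address contained a plus sign.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by how many query words they share."""
    query_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(task: str) -> str:
    """Prepend the retrieved context to the testing task before sending it to an LLM."""
    context = "\n".join(retrieve(task))
    return f"Context from requirements and past defects:\n{context}\n\nTask: {task}"

# The prompt below would be sent to whichever LLM client your team uses.
print(build_prompt("Write test cases for the password reset flow"))
```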

Critical skills for testers to remain relevant in this stage:

In the Co-Pilot Era, testers are working alongside AI, leveraging its capabilities to automate specific tasks like test strategy creation, test case generation, and unit test automation. AI systems are still task-specific and largely dependent on human oversight, so testing professionals need to focus on understanding how to work effectively with these tools.

Key Focus Areas:

  • Prompt Engineering: In the co-pilot era, the ability to effectively communicate with AI through well-designed prompts is crucial. Testers should focus on writing clear and efficient prompts that can guide AI models to generate accurate results for test case generation, strategy design, and unit tests (see the sketch after this list).
  • Python Skills: Since many AI-powered testing tools are built on or integrated with Python, it’s essential for testers to have a solid understanding of Python scripting to modify, optimize, and maintain the automation tasks that AI generates.
  • Retrieval-Augmented Generation (RAG): Testers should familiarize themselves with RAG techniques, as these are commonly used to pull relevant data and information from external sources during the testing process. Understanding how to integrate AI models with knowledge databases can enhance test case generation and strategy formulation.
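
Returning to the first focus area above, here is a minimal sketch of what a structured test-generation prompt might look like in Python. It uses the openai package purely as one possible client; the model name, prompt wording, and user story are illustrative assumptions, not recommendations.

```python
# Minimal sketch: generating test case ideas from a user story with a structured prompt.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

user_story = (
    "As a registered user, I want to reset my password via an emailed link "
    "so that I can regain access to my account."
)

prompt = f"""You are a senior software tester.
Generate test cases for the user story below.
Return each case as: ID, title, preconditions, steps, expected result.
Cover the happy path, negative cases, and boundary conditions.

User story:
{user_story}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whatever your team has access to
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # lower temperature for more consistent, reviewable output
)

# The tester still reviews and curates the generated cases before using them.
print(response.choices[0].message.content)
```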

Additional Skills:

  • Basic AI tool usage (e.g., ChatGPT for test generation).
  • Understanding automated test tools and frameworks (e.g., Selenium); see the example after this list.
  • Strong foundational knowledge of manual testing to verify AI-generated outputs.
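
For context on the last two points, the kind of artifact an AI co-pilot typically drafts is a short automated check like the Selenium script below. The URL, locators, and assertion are made-up placeholders; the point is that a human tester still reviews whether the script reflects the real acceptance criteria.

```python
# Illustrative example of the kind of script an AI assistant might draft and a tester would review.
# Requires `pip install selenium` and a local Chrome installation; URL and locators are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # A human tester decides whether this assertion matches the real acceptance criterion.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```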


Why It’s Important:

In this era, AI can only do so much on its own, and human testers need to direct and optimize its capabilities. Mastering prompt engineering and Python allows testers to get the most out of AI tools while ensuring the results are accurate, relevant, and reliable.


Over the next couple of weeks, I'll share my thoughts on what I consider the next two evolutionary waves of testing impacted by AI.

(For those of you who read this far, let me know your thoughts: do you agree or disagree, and what other trends are you spotting in the testing industry? It is a fascinating time we are living in!)

