Thinking outside the box: our challenge isn't about testing; it's about the requirements

Let’s begin with a quick and simple experiment. I asked ChatGPT the following question: Can you outline three distinct definitions that describe the concept of software testing? Here are the responses I received:

That works for me. I get that if you follow different testing philosophies, you might not agree with these definitions. But since there’s no one-size-fits-all definition, I’m good with it.

In a nutshell, I'd say testing is all about making sure the application meets the requirements and finding bugs. I know that is a ridiculous oversimplification, but I’ll stick with it.

How do I know what to test when the requirements are unclear, incomplete, or missing? And how can I tell if a bug is expected behavior if it’s not documented anywhere?

There's no doubt that methods like exploratory testing help tackle these problems. They allow testers to get the work done without requirements, relying on heuristics, experience, and creativity to guide their testing.

But you can’t rely on exploratory testing for everything. A lot of companies need a more traditional approach where test planning, design, and specification are based on the requirements. Plus, there's a strong push towards automating everything.

Even with all the advancements in the testing field - professional recognition, new methodologies, better tools, automation, and lately Artificial Intelligence - bad requirements are still a big problem for us.

I believe this will only get worse in the future. Let's use our crystal ball and fast-forward a decade (maybe a bit more). As AI makes incredible advancements and can autonomously create, maintain, and execute tests, the question arises: who cares about autonomous testing if it's testing the wrong things because the requirements are ambiguous and incomplete?

Let's consider today's reality: even the smartest, most experienced, and creative testers struggle with crappy requirements, incomplete Jira issues, ambiguous Confluence pages - you name it.

Do you believe AI can handle poorly defined requirements better than we can? Well, I don’t!

A while ago, I read an article on the Stack Overflow blog titled The Hardest Part of Building Software is Not Coding; It's Requirements, which I strongly recommend you check out.

In the article, the author discusses that AI has excelled in areas with clear, finite rules, such as chess, where it has surpassed human performance. In contrast, self-driving cars face far more complexity due to the unpredictable nature of real-world driving, which involves countless variables and edge cases. The author also highlights that in software development, technical specifications aim to guide designs with clear rules, much like the precision in chess. Ideally, these specs detail user behaviors and system flows, but often we receive vague wishlists, rough wireframes, and unclear requirements. Frequently, requirements also change or are overlooked, complicating the development process.

Everything starts with understanding the problem

Automation and AI can take care of the boring, repetitive, and time-consuming testing tasks, but if the problem isn't clearly defined, what's the point?

In my opinion, the one major contribution of the testing team in the AI era will be ensuring the quality of the requirements and having a deep understanding of the problem space.

In the era of GenAI, where everything begins with a prompt, the quality of the input will significantly impact the outcome.

In this new workflow, testers need to elevate their critical-thinking skills and empathy for end-users. They can no longer afford to work with a superficial understanding of the task at hand (or system under test).

As AI and GenAI reshape our work and thinking, we'll take on the role of guiding AI tools by providing context and details about system behavior, while the AI handles the testing (enabling us to focus more on creative and complex problem-solving).
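To make this concrete, here's a minimal sketch of what guiding an AI tool with well-specified context could look like. It uses the OpenAI Python SDK; the model name, the prompt wording, and the requirement itself are illustrative assumptions, not a prescribed workflow:

```python
# Hypothetical sketch: the tester supplies a precise requirement as context,
# and an LLM drafts candidate test cases from it. Model name, prompt wording,
# and the requirement text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The tester's real contribution: a precise, unambiguous requirement.
requirement = """
Password reset:
- A reset link is emailed when a registered address is submitted.
- The link expires after 30 minutes and can be used only once.
- Unregistered addresses receive no email, and the UI response is
  identical to the registered case (to prevent account enumeration).
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a software tester. Derive test cases strictly from "
                "the requirement provided; flag any ambiguity instead of "
                "guessing."
            ),
        },
        {"role": "user", "content": f"Draft test cases for:\n{requirement}"},
    ],
)

print(response.choices[0].message.content)
```

Notice where the leverage is: the code is trivial, but the quality of the drafted tests rises or falls with the precision of the requirement.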

In this scenario, the requirement specification is king. The way it is written and the level of detail matter. Understanding how it integrates into the broader context also matters.

I anticipate a revival of Behavior-Driven Development (BDD) and a broad adoption of Gherkin-style (Given, When, Then) specs. Or at the very least, a future refinement or evolution of these methods.

Think about it: these approaches are easy to learn and implement, and they express behaviors in natural language with a straightforward syntax (which I assume is easier for the machine to understand). Plus, they encourage team collaboration to ensure a shared understanding of the problem space.
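For anyone who hasn't worked with Gherkin, here's a minimal, invented example of what such a spec looks like (the feature and steps are hypothetical, chosen only to show the structure):

```gherkin
Feature: Password reset

  Scenario: Reset link expires after 30 minutes
    Given a registered user has requested a password reset
    And 30 minutes have passed since the reset email was sent
    When the user opens the reset link
    Then the link is rejected as expired
    And the user is prompted to request a new one
```

Every step reads as plain language a non-technical stakeholder can review, yet the structure is regular enough for tooling (and, plausibly, for an LLM) to parse.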

For example, in this interview, Linus Torvalds, the creator of Linux, offers many fascinating insights into the future of programming with AI. He discusses the possibility of a future where natural language itself could become the code. He says: “with AI, it’s conceivable that coding might eventually involve using natural human language. Instead of precise syntax and structured code, developers might describe functionality in colloquial terms, and AI would translate these instructions into executable code. This high-level abstraction could be the ultimate convergence of human intention and machine logic”.

As AI progresses and we increasingly rely on AI agents and LLMs to handle routine tasks, testers must adapt to this emerging reality.

Contributing to the crafting of high-quality requirements and guiding AI to produce the right results will become crucial skills in the near future.

What’s your take on this? Share your thoughts and comments here.

Ranjini K.

Helping Creators Automate 80% of Tasks in 48 Hours.

1 month

Requirements elucidation supersedes AI testing prowess - quality genesis triumphs. Cristiano Caetano

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

1 month

The true challenge lies not in automating tests but in crafting unambiguous requirements that capture the essence of user needs. AI can excel at pattern recognition and generating code based on defined parameters, but it lacks the human capacity for nuanced understanding and interpretation. This means testers must evolve into requirement engineers, meticulously defining problem spaces and ensuring clarity before AI tools even enter the equation. You touched on this in your post. Given that AI models are trained on vast datasets, how would you mitigate the risk of bias amplification when generating test cases from potentially skewed input data? Imagine a scenario where an AI is tasked with creating tests for a facial recognition system trained on a dataset predominantly featuring individuals of European descent. What techniques would you use to ensure that the generated test cases adequately address potential biases and promote fairness in the system's performance across diverse demographics?

Veerle Verhagen

Tester | Keynote speaker | Workshop facilitator | Core skills advocate

1 month

"In this new workflow, testers need to elevate their critical-thinking skills and empathy for end-users." I love this because in my opinion, this should already be among the most important characteristics of a good tester. If anything good ends up coming out of the AI craze, it might be a new-found industry-wide appreciation of core skills, which are now often considered of only secondary importance.
