Agile Testing Practices That Are Often Forgotten
In the not-too-distant past, testing was regarded as an activity that came into play once development work was completed. However, today, nearly all organizations claim that testing is an integral part of their development life cycle. While we welcome this change, business leaders remain concerned about poor or ineffective testing, increased costs, extended cycle times, and the inability to demonstrate improved software quality, especially when critical defects are detected by customers.
Ironically, many software service organizations boast about having a team of "Super Genius Testers" and achieving "Miracle Outcomes," yet they fail to meet basic business expectations. As a result, organizations find themselves setting up war rooms and firefighting teams after releases reach production, incurring millions of dollars in expenses just to avoid losing a competitive edge. This begs the question: "What's not working?"
In this two-part series, we will analyze some of the many possible reasons for these shortcomings.
Forgetting that Agile is a methodology: One common issue arises when team members fail to calibrate the minimum processes required, the essential documentation (such as the test strategy, test design techniques, and test cases), and how to maximize collaboration among stakeholders. When asked about the value delivered, the response often revolves around terms like Kanban boards, stand-up calls, burn-down charts, and sprint planning meetings. However, these terms fail to correlate with the value delivered and the ability to demonstrate continuous improvement. What tends to be forgotten is that Agile itself is a software development methodology: a set of practices that emphasize flexibility, collaboration, and iterative development. As such, Agile still requires a minimum set of documents, including a test strategy, a test plan, and test cases, along with the use of test design techniques.
Reliance on buzzwords: There is an over-reliance on buzzwords like "Shift-Left Testing," "Extreme Automation," "Self-Healing," "Quality Engineering," "SDETs," and "BDD with Cucumber." Compounded by the fact that team members use different terms for the same thing or the same terms for different things, these buzzwords often lack clear definitions, making it difficult to guarantee quantifiable outcomes. Unfortunately, each individual and team within the organization may have its own interpretation of these terms. This results in individuals being celebrated as heroes while the organization suffers. What is forgotten is that these terms hold little value if they are not supported by demonstrable continuous improvement and an enhanced customer experience.
Neglecting to shift focus from defect detection to defect prevention: While defect detection is crucial, it should never take priority over defect prevention. Agile emphasizes the importance of cross-functional teams delivering value at each stage. However, when teams fail to practice defect prevention and prioritize defect detection, it becomes an anti-pattern for Agile. Teams focused on defect prevention actively collaborate with the business and development teams to design and share tests that act as quality gates in the development environment. Conversely, being part of an Agile team but not sharing tests with the development team, and running tests solely in the QA environment, leads to the late detection of defects. What test teams often forget is that they are an integral part of the development team, and that collaboration goes beyond participating in daily stand-up calls. They must remember that true Agile involves continuous improvement in quality, whereas continuous late-cycle defect detection goes against Agile principles.
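To make "sharing tests as quality gates" concrete, here is a minimal sketch. The `apply_discount` function and its clamping rule are hypothetical; the point is that the test team writes the check against the business rule up front and contributes it to the development repository, so it runs in the developers' CI pipeline before the code ever reaches a QA environment.

```python
# Minimal sketch of a test shared with developers as a quality gate.
# `apply_discount` stands in for any feature under development; the
# accompanying test encodes the business rule (discounts are clamped
# to 0-100%) so a defect is prevented rather than detected later.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping percent to the 0-100 range."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_discount_is_clamped():
    assert apply_discount(100.0, 150.0) == 0.0    # clamped to 100%
    assert apply_discount(100.0, -10.0) == 100.0  # clamped to 0%
    assert apply_discount(80.0, 25.0) == 60.0     # ordinary case

if __name__ == "__main__":
    test_discount_is_clamped()
    print("quality gate passed")
```

Because the test lives alongside the production code and runs on every commit, an out-of-range discount fails the build in the development environment instead of surfacing weeks later as a QA defect or a production incident.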
Forgetting that automation encompasses more than just UI, and SDETs are not necessarily automation gurus: Despite test automation being mainstream for over two decades, businesses frequently find that automation coverage is inadequate, tests are unreliable, they are out of sync with development schedules, and the cost of maintaining automated scripts is significant. It is also not uncommon to see automation initiatives starting from scratch across projects because existing scripts were not designed to scale and provide enterprise-level reusability.
What the automation team forgets is that the majority of automation efforts focus on UI automation, neglecting the automation of the services layer. Except for a small number of highly evolved organizations, unit-level automation is rarely practiced. Similarly, they overlook the fact that SDETs (a role, not just a skill set) are better utilized to build test infrastructure that enables the development team to validate the quality of feature functionality early in the development life cycle. This can include tasks like software architectural analysis to build tests specific to architectural patterns and tactics, automated unit tests, improving code coverage, automating services testing, and integrating tests into the continuous development pipeline.
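A services-layer check of the kind described above can be sketched as follows. The `/orders` response shape is entirely hypothetical; in a real suite the JSON would come from an HTTP call against the service under test, but the contract check itself is what runs fast and early in the pipeline, long before any UI automation.

```python
# Minimal sketch of a services-layer (API contract) check, as opposed
# to UI automation. The payload fields are hypothetical examples.
import json

def check_order_contract(payload: str) -> list:
    """Return a list of contract violations for an /orders response."""
    data = json.loads(payload)
    errors = []
    for field, expected_type in (
        ("id", int),
        ("status", str),
        ("total", (int, float)),
    ):
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# One response that honors the contract, one that breaks it:
good = json.dumps({"id": 42, "status": "shipped", "total": 19.99})
bad = json.dumps({"id": "42", "status": "shipped"})

assert check_order_contract(good) == []
assert check_order_contract(bad) == ["wrong type for id", "missing field: total"]
```

Checks like this exercise the service boundary directly, so they are cheaper to maintain and faster to run than UI scripts, and they can be wired into the development pipeline as a gate rather than executed only in a QA environment.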
Given the dynamic nature of software and the absence of a universal one-size-fits-all software development process, successfully applying Agile to different project sizes and models poses a challenge. In Part Two of this article, we will explore the techniques businesses employ to overcome these challenges and enable their test organizations to deliver quantifiable value.