Dysfunction: The QA Conundrum
David Strickland
Software Engineering Leader Specializing in Legacy Codebase Transformation & Team Revitalization
Early in my career, I worked as a systems administrator at a parts fabrication company. They used large laser cutters to cut parts out of flat sheets of metal. Since there wasn't enough systems admin work to keep me busy full-time, I also assisted the QA technician when I had downtime. He would send me to the shipping and receiving department to randomly select a part from the items being prepared for shipment. I didn't pick one from every shipment, just whichever one I chose at the moment. I would bring the part back to the tech and we would then measure it down to the micron level. He wasn't checking whether the part met the order's tolerances; he wanted to know exactly how far off it was. His job wasn't to ensure each part was correct. As Quality Assurance, his job was to assure management that the production process was achieving the level of quality the company aimed for.
One of the dysfunctions I frequently encounter on Scrum teams is the "QA Conundrum." According to many companies' definitions of "done," every story must be tested by QA, yet developers aren't "done" until the end of the sprint. This raises a question: if nothing is finished until the sprint ends, how can QA test during the sprint, and how can the team complete anything on time?
The core problem is that QA has shifted from "Quality Assurance" to "Quality Assured." At the fabricator, QA knew they were involved too late in the process to affect the product being tested. Their focus was on ensuring that the production process itself was capable of producing quality parts, not on verifying that individual products were acceptable. In software development, we've similarly recognized that involving QA only at the end is too late to meaningfully impact the quality of the product. However, rather than shifting QA's focus to assessing the process, we made them part of the development process and tasked them with verifying the final product.
Because Scrum teams do not include a separate QA role, there's no effective way to incorporate QA without it being involved from the very start—essentially, QA must be part of the development itself. Scrum addresses this by making the developers responsible for quality. When the developers finish a piece of business value, it is considered "done" and ready to ship: thoroughly tested, vetted, verified, and whatever else the team needs to do to make it production-ready. If issues are found after development is complete, that indicates a flaw in the development process that needs adjustment.
I usually see this QA Conundrum evolve from three main sources:
Not a Bug
This reminds me of an experience back in elementary school. I got paired with the teacher's pet for a project. The pet wasn't available to help and didn't contribute anything, so I did the entire project myself. The day before it was due, the pet came over and asked if I had done any of the work. Proudly, I showed them what I'd completed, only to have them criticize everything and insist it was all wrong and needed to be changed before submission. The next day, when we turned in the project, the pet told the teacher that I had done it all alone and ignored their suggestions.
Too often, product owners are either too busy to provide guidance or unsure of what they want. Rather than collaborating with developers to create quick prototypes or talking to users, they simply wait to see what the developers produce and then criticize it during the review. QA and product owners then say it was "common sense" or "obvious," and suddenly a story that was "done" now has numerous "bugs."
If no one provided input during the decision-making process and the developer worked autonomously, then whatever the outcome looks like, it isn't a bug.
Many of the so-called "bugs" I see are just opinions—those of the product owner, QA, or UI/UX team. Opinions are valuable; they help create great products. However, opinions raised in hindsight during the sprint review are not bugs.
Works on My Box (WOMB)
WOMB bugs are real bugs, but they are boundary bugs, and they are some of the worst. You can't predict or test for them effectively. Cross-functional teams are supposed to deliver complete business value—from user story to business value realized. Problems arise when that isn't happening. Implicit contracts exist between teams like DevOps, API, or database teams, and boundary bugs often occur when these implicit contracts are violated.
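One way to defuse boundary bugs is to turn an implicit contract into an explicit, executable one. The sketch below is a minimal consumer-side contract check in Python; the payload fields and types are hypothetical, standing in for whatever two teams have actually (if silently) agreed on.

```python
# A minimal sketch of a consumer-side contract check. The endpoint payload
# and its field names/types are hypothetical examples, not a real API.

def validate_order_payload(payload: dict) -> list:
    """Return a list of contract violations found in an order payload."""
    errors = []
    # The consuming team depends on these fields existing with these types.
    expected = {"order_id": str, "quantity": int, "status": str}
    for field, expected_type in expected.items():
        if field not in payload:
            errors.append("missing field: " + field)
        elif not isinstance(payload[field], expected_type):
            errors.append(
                "wrong type for %s: %s" % (field, type(payload[field]).__name__)
            )
    return errors

# A payload that quietly changed quantity from int to string is caught here,
# in the producing team's build, instead of surfacing as a WOMB bug later.
good = {"order_id": "A1", "quantity": 2, "status": "shipped"}
bad = {"order_id": "A1", "quantity": "2", "status": "shipped"}
assert validate_order_payload(good) == []
assert validate_order_payload(bad) == ["wrong type for quantity: str"]
```

Running such a check in both teams' pipelines means a violated contract fails a build rather than a demo.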
Developer Error
Yes, developers make errors, despite our reluctance to admit it. You read the requirement, estimated it, discussed it with four people, and then maybe had a personal issue that distracted you. When you finally implemented it, you missed something. Developers aren't machines; along with the incredible creativity and brilliance of humans comes the occasional mistake.
The QA Conundrum
When we get to the demo or acceptance testing, we often leave with a laundry list of "bugs." Some are just opinions mislabeled as bugs, others are boundary issues with other teams, and some are genuine development errors. The typical reaction is to push QA earlier into the development process—but this usually only happens halfway. We might stop sprint work a few days early to let QA catch up or split the sprint so that everything coded in one sprint goes to QA in the next.
If QA is to be moved earlier in the process, then it should be involved from the start. Ideally, we should train developers to be their own QA and trust them to handle all aspects of quality. Before regulations like SOX, developers were expected to be their own QA. We would put code straight into production—sometimes even coding in production—and if something went wrong, we got a call from the executives. This forced us to be extremely careful. I'm not advocating a return to that chaotic approach, but the mentality is still useful. It's better than starting a story without acceptance tests and hoping QA will catch issues later. Instead, a different mindset has become common: developers focus on getting as many stories to QA as possible rather than on realizing business value.
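Writing acceptance tests before a story starts can be as lightweight as capturing the agreed criteria as executable checks. Here is a minimal sketch in pytest style; the discount feature and its rules are hypothetical, invented only to illustrate the shape of the practice.

```python
# A minimal sketch: a story's acceptance criteria written as tests before
# development begins. The discount feature and rules here are hypothetical.

def apply_discount(subtotal: float, code: str) -> float:
    """Apply a discount code to an order subtotal."""
    if code == "SAVE10":
        return round(subtotal * 0.90, 2)
    return subtotal

# Acceptance criteria agreed with the product owner up front:
def test_valid_code_reduces_subtotal():
    assert apply_discount(100.00, "SAVE10") == 90.00

def test_unknown_code_leaves_subtotal_unchanged():
    assert apply_discount(100.00, "NOPE") == 100.00
```

When these tests exist before coding starts, "done" means they pass, and a review-time opinion about the discount rules becomes a new story rather than a "bug."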
Avoiding the QA Conundrum:
Not a Bug: If it isn't a bug, it's not a bug. Write a new user story that clearly articulates the business value of, for example, changing square buttons to round buttons, and then prioritize it in the backlog.
Works on My Box: Eliminate silos and let developers deliver value all the way to production. Trust them with the entire process and hold them accountable when they don't achieve it.
Developer Error: This is the smallest category, yet it's the only one we seem to address with fixes. More processes won't solve this problem, but three things can: tools, training, and trust.
The solution to quality issues in software development is rarely more process. More quality checks, more meetings, and more management seldom solve the problem. Software development is as much an art as it is a process. Support the art: trust your developers, give them the tools and freedom to deliver value, assume they will do their best, and watch the results.