Testing in Agile: A Quick Tutorial on Defects, Bugs and Everything in Between

Defects and bugs are probably the most confusing topic in Agile testing. Should they be documented, sized and prioritized? Do they have to be fixed in the current sprint? What's the difference between a bug and a defect? Can the number of bugs and defects be used as a metric of QA performance?

Here's a simple guide.

Most of the confusion around defects and bugs is caused by a lack of common terminology. You don't have to agree with my definitions! Feel free to create your own, or use the word "bug" where I use "defect" and vice versa, but at the end of the day you should be able to differentiate between issues found during new development (I call them "defects") and pre-existing issues found in your product (I call them "bugs"). There's a third category: issues that were caused by recent development and really should have been found in feature testing. I call these nasty little things "bugs that are in fact escaping defects".

I hope it's pretty clear that defects must be addressed during the same sprint they were found in. Otherwise, your story will not match the acceptance criteria and definition of "done", and you won't be able to close it. This is the classic "JUST FIX IT" situation.

Never size defects! The original story already has a size that includes all the work needed to get it done - design, coding, manual and automated testing, defect fixing, documentation and whatever else you might have in your definition of "done".

It's up to you whether to document defects as separate items or not. If the team has great informal communication and things are done based on conversations and collaboration, then you can probably skip the paperwork around defects. If your team requires more formal interactions (remote team members, different time zones, hectic environment), consider creating a sub-task or a defect linked to the original story for tracking purposes. Again, no sizing.

Never count defects as a metric of code quality or QA effectiveness. This will only create perverse incentives and artificial divides between developers and testers. Let the team own its quality. Team members should be able to confront each other and hold each other accountable for the quality of their work.

The only thing that really matters is this: when the team closes the story at the end of the sprint, it must be defect-free. Think about it, it makes perfect sense! Why would you create a new piece of functionality and release it into the world when you know it's already flawed?

To reduce the number of defects, make sure the team works as one from the very beginning of the sprint: developers and testers plan their work together, collaborate and focus on quality.

At the end of the sprint your goal should be ZERO DEFECTS!

Bugs are pre-existing issues found in production. The older your product is, the more bugs your system will have. I once worked on a legacy product that had 3,000 reported bugs in the tracking system. The nature of these "bugs" can be controversial. Sometimes your customers try to use the system in a way it wasn't designed for. Sometimes the load, volume or scale is more than the product can handle. Sometimes old issues that have been in the system since its inception become exposed through a different workflow. Sometimes you shoot yourself in the foot by making the application so flexible that your users do things you would never expect them to do. And sometimes a bug is just a bug - a system failure that happens under certain (maybe rare) circumstances.

To give an even better definition: a bug is a condition in which the system's actual behavior doesn't match the user's expectations. But guess what, sometimes the user's expectations don't match our (the developers' and the PO's) expectations.

Should we aim for ZERO BUGS?

No! Never try to fix all bugs, unless your system is very small and very new. Bugs have to be prioritized by the Product Owner and sized by the team. Always consider the ROI of fixing a bug vs. developing a new feature: how many customers reported the issue? What's its severity? How often does it happen? What will happen if you never fix it? (In many cases, nothing - just leave it alone and focus on more important things.) Also, look at the risk. Old bugs and design flaws can be dangerous - you can fix one thing and break ten others.
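To make that trade-off concrete, here's a minimal sketch of how a team might turn those questions into a rough triage score. It's purely illustrative: the fields, scales and weights are assumptions I've made up for this example, not a formula from this article, and the Product Owner still makes the final call.

```python
# Illustrative only: a toy scoring helper for bug triage.
# The fields, scales and weights are assumptions for this sketch,
# not an established formula - the Product Owner still decides.
from dataclasses import dataclass


@dataclass
class BugReport:
    key: str
    customers_affected: int  # how many customers reported or hit the issue
    severity: int            # 1 (cosmetic) .. 5 (data loss or outage)
    frequency: int           # 1 (rare) .. 5 (happens on every use)
    fix_risk: int            # 1 (isolated change) .. 5 (touches fragile legacy code)


def triage_score(bug: BugReport) -> float:
    """Higher score = stronger candidate to pull into a sprint."""
    impact = bug.customers_affected * bug.severity * bug.frequency
    return impact / bug.fix_risk  # risky fixes in old code drop in priority


backlog = [
    BugReport("BUG-101", customers_affected=40, severity=3, frequency=4, fix_risk=2),
    BugReport("BUG-207", customers_affected=2, severity=5, frequency=1, fix_risk=5),
]

for bug in sorted(backlog, key=triage_score, reverse=True):
    print(bug.key, round(triage_score(bug), 1))
```

However you score them, the point is the same: some bugs are worth a sprint slot and many are not.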

Treat bugs as product backlog items and pull them into your sprints according to priority.

Remember - as long as there is software, there will be bugs. They are inevitable, especially as your system becomes older and grows in scale.

Measuring the number of reported bugs might be helpful, but not as a metric of development quality. To me it's more a reflection of how much the users' expectations diverge from our understanding of the system. It's also a clue to how much the company should invest in support to keep customers happy.

The third type of issue is a bug that was recently introduced into the system. Let's say in Sprint A the team worked on a user story. Then in Sprint C a user reports an issue, and after analysis the team realizes the bug was introduced in Sprint A. This happens all the time: we fix something or develop a new feature, and inadvertently break something else. Ideally, the defect should have been caught during Sprint A, but for some reason it wasn't. Instead it escaped and came back as a production bug.

We track these nasty beasts as "escaping defects". In Jira we mark them as "bugs", but then add a label that says "Escaping Defect". The team analyzes the root cause of each one and tries to come up with a plan for improving its process going forward.
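If you follow a labeling convention like that, pulling the trend out of Jira is easy to automate. Below is a minimal sketch that counts labelled bugs per sprint through Jira's REST search endpoint and the "sprint" JQL field (available with Jira Software boards). The site URL, credentials and sprint names are placeholders, and the "Escaping Defect" label is just the convention described above, not anything built into Jira.

```python
# Minimal sketch: count "Escaping Defect" labelled bugs per sprint using
# Jira's REST search endpoint. URL, credentials and sprint names are
# placeholders - adjust for your own instance. Needs the "requests" package.
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder site URL
AUTH = ("you@example.com", "api-token")           # placeholder email + API token

SPRINTS = ["Sprint 41", "Sprint 42", "Sprint 43"]  # placeholder sprint names

for sprint in SPRINTS:
    jql = f'issuetype = Bug AND labels = "Escaping Defect" AND sprint = "{sprint}"'
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # maxResults=0: we only need the total
        auth=AUTH,
    )
    resp.raise_for_status()
    print(f"{sprint}: {resp.json()['total']} escaping defect(s)")
```

Run something like this at the end of every sprint (or from CI) and the printed counts become the per-sprint trend discussed below.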

The number of Escaping Defects is a good measure of the team's quality. Aiming for zero escaping defects might not be realistic, especially in legacy systems with tons of dependencies and vast wildernesses of code written many years ago by people who are no longer with the company, but tracking the numbers across multiple sprints should demonstrate improvement based on learning, automation and refactoring.

How do you track, categorize and treat bugs and defects?

Please like and share with your network! Comment, share your thoughts and send me your questions, follow me on LinkedIn and Twitter to read my articles on practical Agile.

My other posts in the "Testing in Agile" series:

Colin Wilcox MBA

Director / Director of Engineering / Head of Software Engineering / Engineering Manager / Agile Leader

4 years ago

Very interesting. A lot of what you are saying was treated as common sense when I started coding in the late 70s but seemed to become less so over time. Glad to see these ideas have surfaced again, not before time. Great read.

Enrique Weber

Global IT Project Manager at Duracell

4 years ago

Great stuff. Thanks for sharing. I like your definitions of "defects" & "bugs"; I normally use them like that, but it's still confusing for some people. Defects/bugs are the single thing I see as most challenging in Scrum, even worse if for compliance reasons the product requires a mandatory UAT - whether or not that should be part of the scope of the sprint... most of the time it all comes down to your DoD.

Yanic Boisvert

Functional Analyst

7 years ago

Great article. On my end, I always use the word "bug", but I do exactly what you are saying: fix them in the sprint or put them in the backlog. What I like in your article is this: "bug is a condition in which the system's actual behavior doesn't match the user's expectations." A bug is a bug! For me there are 5 definitions of a bug: 1. The software doesn't do something that the product specification says it should do. 2. The software does something that the product specification says it shouldn't do. 3. The software does something that the product specification doesn't mention. 4. The software doesn't do something that the product specification doesn't mention but should. 5. The software is difficult to understand, hard to use, slow, or - in the software tester's eyes - will be viewed by the end user as just plain not right. 4 and 5 are very subjective and you can always debate them with the customer / tester, but you should always have this mindset. Always put the customer hat on your head!

Karen Dowling Yeaton

Principal Specialist - AEC at Autodesk

7 years ago

Interesting article. Firstly, I agree with Bruce Katz: there is no need for different terms; a defect is a defect. The need to call them something different depending on when during the development process you happen to find them is an attempt to divide up your technical debt in a manner that allows you to manage the debt. The bottom line, however, is management of technical debt. The idea that every defect found during the sprint should be fixed is a great one in a perfect world where we have all the time and money we need. Since we don't have either, we must come up with processes that allow us to fix the issues that are most important to our customers. That may or may not include every defect found during the sprint. It may instead include defects found outside the sprint they were actually created in. To me the bottom line is having a clear understanding of the impact to the customer that the defect represents. Not all defects are equal, so they can't be prioritized simply by when they are found. That's only one aspect of the process and not necessarily the most important one.

Bruce Katz

Software Quality Assurance Professional and Inventor, providing leadership and vision for Software Quality Assurance and Testing.

7 years ago

Katy, I get what you're saying, but I professionally disagree and have a problem with adding new terminology, especially where I believe none is required. A bug or defect (or whatever name folks choose to use) is still, at its most atomic level, an identification of an observed behavior that is unacceptable to an observer (a person, system, or instrument, etc.). How or what you do with that information and how you address the defect is (or should be) defined by your process. I personally don't see the value in calling a defect discovered during early architectural sprints one thing, while a defect discovered in a given sprint, or post-deployment, is called something else. To me, and my way of thinking, they are both a defect / deviation from acceptable behavior that was observed by someone or something, and that needs to be addressed. And your process will identify how to do that. Please tell me I'm missing something, as I am trying to understand the value this offers. Thanks
