Should You Fix That Bug?
Brody Brodock
Quality Monk focused on Clinical Data and System of Systems (SoS) interoperability
Do You Really Want to Fix That Bug?
You find a bug, so you must fix it, right? Well, maybe not. Maybe you should stop and think about the consequences of fixing that bug before you slot it into your work effort.
A written defect includes two measurements, severity and priority; at least it should, and if it doesn't you have a different problem. They are the subjective and objective measurements of the defect. Severity answers the question, "How bad is it?" while priority answers the question, "When do we fix it?" These measurements are the classic push and pull of any development group: given resources, time, and scope, how do I get to the release criteria line?
Subjective and objective measurements
I said these were objective and subjective measurements, but what did I mean by that? More importantly, who is responsible for classifying them? It is really quite simple: the writer of the defect sets the severity, and the product owner or designee sets the priority. Usually that means the Quality Analyst sets the severity and the Product Manager sets the priority. Your organizational structure may vary, but this is the normal way of things.
Why did I say the QA is setting the objective portion of this measure? Because they are using actual indicators. An application crash, a CRUD failure, data corruption, and the like are all Severity 1 (assuming that is your highest). Misspellings, misaligned grids and fields, incorrect brand coloring, and so on are generally Severity 4. There are of course mitigators to these ratings, but those mitigators are documented, and certainly would be documented in the defect. For example, a misspelling of a measurement could very easily be a Severity 1 if it causes clinical confusion: abbreviating microgram as MG or mg reads as milligram, a thousand-fold dosing error. So your QA/QE has her set of criteria for the different levels of defect and is responsible for objectively assigning that value.
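If you wanted to codify that sort of rubric, it might look something like the sketch below. This is a minimal illustration, not anyone's production standard: the indicator names and the four-level scale are assumptions, and your QA team's documented criteria are the real source of truth.

```python
# Hypothetical severity rubric: objective indicators mapped to a
# four-level scale where 1 is worst. Names and levels are illustrative.
SEVERITY_RUBRIC = {
    "application_crash": 1,
    "crud_failure": 1,
    "data_corruption": 1,
    "clinical_confusion": 1,   # e.g. microgram rendered as "mg"
    "misspelling": 4,
    "grid_misalignment": 4,
    "brand_color_mismatch": 4,
}

def assign_severity(indicators):
    """A defect's severity is its worst (lowest-numbered) matching indicator."""
    return min(SEVERITY_RUBRIC.get(i, 4) for i in indicators)

# A plain misspelling is cosmetic...
assert assign_severity(["misspelling"]) == 4
# ...unless it also causes clinical confusion, which escalates it to Severity 1.
assert assign_severity(["misspelling", "clinical_confusion"]) == 1
```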
And the product owner setting the subjective measure? Usually the QA/QE sets the priority to match the severity, for the Product Manager to adjust as they see fit. The Product Manager has the eye of the customer and the project's mission in mind. A defect in functionality that is only used by Support might rate a lower Priority than a low-severity defect in a screen that is used tens of thousands of times a day and is highly visible.
But what if they are polar opposites? Fine; it happens all the time. Let's look at a simple case: the developer has misspelled the name of the company on the splash page. As a QE I am going to write that up as a Severity 4, and if I put on my PM hat I am going to call it Priority 1, a show stopper. Both roles are correct in their adjudication: the defect is a 'dumb' mistake, but from an objective view it is insignificant.
How about the reverse case, an application crash that is a Severity 1 but a Priority 4? Sure: every 500 transactions the application crashes, and I have to restart it and pick up where I left off. As long as I can figure out what the last good save was, maybe that is okay. Maybe the feature is only used by L3 support, so again, not a critical thing, even if it is damned sloppy coding.
It might not be a good idea to fix a bug.
Okay, so you have collaborated on your defect and each role has taken its stand. Your S1/P1 is a show-stopping situation, but what about your S4/P4? It is a documented defect and is probably considered low-hanging fruit. So why not fix it? Before you go traipsing off to resolve that S4/P4, maybe you should think about your risk.
What do I mean by that? There are three measurements I would like you to consider before you start fixing those low-level defects. Each has rough industry benchmarks that separate high-performing teams from struggling ones.
First is the Fix Failure rate: the proportion of fixes that come back because the defect wasn't actually fixed. There are a number of reasons a fix might fail, but it really doesn't matter; the defect was "worked on", returned for "fix verification", and subsequently bounced back. I have seen rates as high as 50% and as low as single digits. I would argue that 4-5% is world class and 50% is a three-alarm fire.
Second is the Defect Introduction rate: the rate at which fixing one defect introduces new ones. If working on a defect adds a new defect, you are creating more work. Defect introduction rates vary with your unit test discipline, but a good rate will run a bit higher than your fix failure rate. Again, world class is under 10%, and a fire drill is near the 50% range.
Lastly, there is the recidivism rate, the zombie bug rate: the thing has been fixed so many times you can't believe it is back again. In fixing your bug, you have reverted a previously fixed bug to active. There is no universally good rate for this, but if it is above a few percentage points you should be looking at your regression program. I have seen these rates run from close to zero all the way to 20%. A sketch of computing all three measurements follows below.
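To make those three measurements concrete, here is a minimal sketch of computing them from defect-tracker counts. The field names are hypothetical; pull the real numbers however your tracker exposes them.

```python
from dataclasses import dataclass

@dataclass
class FixStats:
    fixes_attempted: int     # defects marked "fixed" and sent for verification
    fixes_returned: int      # bounced back from fix verification (fix failures)
    defects_introduced: int  # new defects traced back to those fixes
    bugs_reopened: int       # previously fixed bugs that came back (zombies)

def rates(s: FixStats) -> dict:
    """Express each failure mode as a fraction of fixes attempted."""
    n = s.fixes_attempted
    return {
        "fix_failure": s.fixes_returned / n,
        "defect_introduction": s.defects_introduced / n,
        "recidivism": s.bugs_reopened / n,
    }

# The death-march project described below: 45% / 50% / 20%.
print(rates(FixStats(fixes_attempted=100, fixes_returned=45,
                     defects_introduced=50, bugs_reopened=20)))
```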
What do these things tell you? I worked on a project where the recidivism, fix failure, and defect introduction rates were 20%, 45%, and 50% respectively, and in some cases a single defect would produce all three results. The chance of any given defect being resolved cleanly was not good, and that kind of churn is the mark of a project death march.
Now, in the above example, what is the probability that an S4/P4 will be resolved without issue? And what is the probability that any defects introduced by the fix will be of lower Severity and Priority than the original? Zero: S4/P4 is already the bottom of the scale, so anything new is at least as bad. If the fix doesn't take the first time, you have an excellent chance of ending up with a defect of higher Severity and Priority than the one you started with. Wasn't that a waste of time?
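As a back-of-the-envelope check, assume the three failure modes are independent (a simplification, since a single defect can trigger all three at once):

```python
# Death-march rates from the project above.
fix_failure, introduction, recidivism = 0.45, 0.50, 0.20

# Probability a fix lands cleanly: it must pass verification, introduce
# nothing new, and resurrect nothing old.
p_clean = (1 - fix_failure) * (1 - introduction) * (1 - recidivism)
print(f"{p_clean:.0%}")  # 22%: roughly one fix in five lands without fallout
```

At those rates, roughly four out of five fix attempts generate more work than they remove.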
So every time you fix a bug there is a risk that it won't be fixed, will introduce a new defect, or will resurrect an old bug. Make sure the bugs you are fixing are worth the risk.
Something that quality wonks will not like to hear.
If your team is fixing these types of bugs, then maybe you shouldn't be writing them. Put another way: if you can't find a better bug to write, maybe you aren't looking hard enough or testing in the areas you should be. And if you are writing these types of defects to the detriment of finding more severe ones, you may simply be cluttering up the tracking system.
There is a risk here: if your team cannot tell what a significant defect is, they may mis-categorize a high-impact defect as a low-impact one, and leaving it undocumented may mean the team never does due diligence on it before releasing it to your customers. That is the argument for documenting everything. I would still argue that the team should be finding better bugs.
In summary
Finding defects is difficult work, and finding good defects is even tougher, but focusing on the important ones will lead to better software. Focusing on trivial defects will clutter the project and irritate the team. Do think about the potential Priority of a defect so you know when to write that Severity 4 (misspelling your corporate name), but if you know your PM is going to mark your S4 as a P4, why bother? And if you don't know your fix failure, recidivism, and defect introduction rates, you won't know where you need to improve, or the risk of fixing that bug.