What is the cost of an escape — quality failure explained
Brody Brodock
Quality Monk focused on Clinical Data and System of Systems (SoS) interoperability
I have been talking a lot about team metrics recently, and I have had some posts on the various aspects of testing, mostly describing it as Verification and Validation (the two terms are sometimes used interchangeably). But why bother testing at all? In agile methodologies, the story goes, you don't need to test because the features are tested by code, through unit testing. A new item is checked in, Continuous Integration runs, all the unit tests pass, and Shazam, your code works.
Except it isn't that easy. It isn't that simple, and it isn't that straightforward. The analogy I am going to use is that bugs hide, and they hide under unturned rocks. If you don't turn a rock over, you won't see the bug underneath it.
Another way to put it is that there are methods of testing, purposes behind each method, and mechanisms of testing that are not identical, and each will only uncover specific types of defects. If you are doing GUI testing you will uncover GUI bugs; if you are doing API testing you will uncover API bugs; if you are doing performance testing... you get the idea. The point is that different kinds of testing produce different results and can improve your escape rate, which you want as low as possible for high and severe defects.
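A quick aside on the term: by escape rate I simply mean the share of all known defects that were found after release rather than before it. Here is a minimal sketch of that calculation in Python; the record format and field names are my own illustration, not from any particular tool.

# Escape rate: fraction of all known defects that were found in production.
# The records below are illustrative; substitute your own defect tracker data.
defects = [
    {"id": 1, "severity": "high", "found_in": "testing"},
    {"id": 2, "severity": "low", "found_in": "production"},
    {"id": 3, "severity": "critical", "found_in": "production"},
]

escaped = sum(1 for d in defects if d["found_in"] == "production")
escape_rate = escaped / len(defects) if defects else 0.0
print(f"Escape rate: {escape_rate:.0%}")  # 67% in this toy example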
There are 8 core attributes in the ISO Software Quality Model (ISO/IEC 25010) and 31 sub-attributes under them. It is a decent model and has its value, but I am not asking you to go overboard with all of your requirements, analysis, and testing. Just think about the attributes when you are working on a function or feature. Some will apply, some won't, but even thinking about them will extend your coverage.
They are:
- Functional Suitability
- Performance Efficiency
- Compatibility
- Usability
- Reliability
- Security
- Maintainability
- Portability
If I were the king of the ISO universe I would add another attribute: Safety, especially if you are working in the health IT arena. This may not be the model for you, but it is certainly a model that you can adapt for your organization.
A couple of points about this list of attributes and sub-attributes: you will notice that a lot of the words end in "ility." These are your soft requirements, your "ilities," and they are probably even more important to consider than your functional requirements.
Each of these attributes has methods of testing that will uncover the bugs hiding under those rocks. Each one has multiple methods that will uncover different classes of bugs. For example, nowhere in this list is CRUDE (Create, Read, Update, Delete, Exchange), yet CRUDE testing is implied in multiple attributes: Functional Suitability, Performance Efficiency, Compatibility, and Security. This is just a framework for you to hang your requirements and testing methods on.
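To make "hanging your testing methods on the framework" a bit more concrete, here is a minimal sketch in Python of a coverage checklist that pairs CRUDE operations with the attributes where they are implied. The mapping and the function are my own illustration, not part of the ISO model.

# Illustrative checklist: which quality attributes imply CRUDE-style testing.
# The mapping is an example for planning coverage, not an official taxonomy.
CRUDE = ["Create", "Read", "Update", "Delete", "Exchange"]

ATTRIBUTES_IMPLYING_CRUDE = [
    "Functional Suitability",
    "Performance Efficiency",
    "Compatibility",
    "Security",
]

def coverage_gaps(planned_tests):
    """Return (attribute, operation) pairs that have no planned test yet."""
    return [
        (attribute, operation)
        for attribute in ATTRIBUTES_IMPLYING_CRUDE
        for operation in CRUDE
        if (attribute, operation) not in planned_tests
    ]

planned = {("Security", "Create"), ("Security", "Delete")}
print(len(coverage_gaps(planned)))  # 18 rocks left to turn over in this toy plan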
If you are measuring your escapes, then you can conduct RCAs (root cause analyses) on the significant ones and identify where in this model you could have caught the defect before it escaped.
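As a minimal sketch of what that measurement could look like, again in Python (the RCA record format is an assumption on my part, not a standard):

from collections import Counter

# Each RCA record tags an escaped defect with the quality attribute that,
# had it been tested, would most likely have caught it. Data is illustrative.
rca_records = [
    {"defect": "slow patient search", "missed_attribute": "Performance Efficiency"},
    {"defect": "broken HL7 import", "missed_attribute": "Compatibility"},
    {"defect": "slow report export", "missed_attribute": "Performance Efficiency"},
]

by_attribute = Counter(r["missed_attribute"] for r in rca_records)
print(by_attribute.most_common())  # where to point your testing talent first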
Let’s break down an escape into SDLC segments:
Business Need
Requirements
Architecture
Design
Coding
Testing
Deployment
Support
Let's treat these efforts as sequential and call a defect that slips from one segment to the next an escape, with damage attached. Then let's attempt to quantify the damage that each escape causes. This is an intellectual exercise and a rough estimate, but in my experience it is reasonable. The one thing I would say is that if we were doing this rigorously we would qualify the Severity and Priority of each escape and use those values as a factor in the estimate. But let's keep it simple for today (a small sketch of the ladder follows the list below).
Business Need to Requirements: $0
Requirements to Architecture: $1
Architecture to Design: $10
Design to Coding: $100
Coding to Testing: $1,000
Testing to Deployment: $10,000
Deployment to Support: $100,000
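Here is the ladder as a minimal sketch in Python. The reading that a defect accrues the damage of every boundary it slips past before it is caught is my own interpretation of the table above, and the figures are the same rough estimates.

# The cost ladder above, encoded as the damage for crossing each boundary.
# My interpretation: a defect accrues the damage of every boundary it slips
# past before it is finally caught. Figures are rough estimates, not data.
PHASES = ["Business Need", "Requirements", "Architecture", "Design",
          "Coding", "Testing", "Deployment", "Support"]
BOUNDARY_COST = [0, 1, 10, 100, 1_000, 10_000, 100_000]  # PHASES[i] -> PHASES[i+1]

def escape_cost(introduced_in, caught_in):
    """Estimated damage for a defect introduced in one phase and caught later."""
    start, end = PHASES.index(introduced_in), PHASES.index(caught_in)
    return sum(BOUNDARY_COST[i] for i in range(start, end))

print(escape_cost("Coding", "Support"))  # 111000: a coding bug found in the field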
Now you are going to say that there is no way a defect that escapes to production will cost you $100,000. Okay, what would you estimate the cost to be? Before you answer, consider what goes into it (a rough tally follows this list):
The cost of identifying the error (it wasn’t obvious or you wouldn’t have released it)
The cost of finding and implementing a solution
The cost of testing that solution
The cost of recovering and fixing the damaged data
The cost of deploying this solution in a patch or hot fix
The lost opportunity in development (we assume that you have new features to release), in the business, in customer satisfaction, and in lost revenue.
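A purely hypothetical tally for a single severe production escape, just to show how quickly those line items add up; none of these figures come from real data.

# Hypothetical line items for one severe production escape (illustrative only).
triage_and_diagnosis    = 15_000
fix_and_code_review     = 20_000
regression_testing      = 10_000
data_repair             = 25_000
emergency_patch_release = 10_000
lost_opportunity        = 20_000

total = (triage_and_diagnosis + fix_and_code_review + regression_testing
         + data_repair + emergency_patch_release + lost_opportunity)
print(total)  # 100000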
Here is where you can apply the severity/priority factor: that $100,000 would be for a high or critical defect escape, whereas a low severity/priority factor would significantly reduce that number, but it isn't zero. If you have done research on the cost of your own defects, feel free to challenge my assumptions and numbers; I welcome the conversation.
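If you want to fold severity into the estimate, here is one minimal way to sketch it in Python; the weights are purely illustrative assumptions, and you should calibrate them against your own defect data.

# Illustrative severity weights applied to a ladder estimate; pick your own.
SEVERITY_FACTOR = {"critical": 1.0, "high": 1.0, "medium": 0.25, "low": 0.05}

def weighted_cost(ladder_cost, severity):
    """Scale a ladder estimate (e.g. the $100,000 production figure) by severity."""
    return ladder_cost * SEVERITY_FACTOR[severity]

print(weighted_cost(100_000, "low"))  # 5000.0 -- far smaller, but not zero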
Points you should take from this post: Testing as a profession is difficult and requires skills that are not the same as a developer's. Testing methods are varied, and different methods will identify specific classes of defects while missing many others. You should measure the areas that are generating escapes and put talent to the task of catching those classes before they deploy, maybe even before they are coded (Acceptance Criteria). And lastly, if you don't measure it, you won't know to change it, whatever it is.
Wrote this on Memorial Day, so going to raise a glass to Becky Jo Bristol and Mark Yamane. Becky was killed on Rhein-Main Air Base in August of ’85. Mark was a Ranger who was killed October ’83, in the Grenada operation Urgent Fury. I knew Becky through John (her husband) and was in Search and Rescue with Mark — he was a friend. She was sweet, kind, and cultured. He was tough as nails and focused but had a huge heart.