When to Stop Testing
Stop testing when the testing provides no value.
If no one is going to review the results or use the information to make a decision, that is a good sign the testing provides no value. Of course, this may be difficult to recognize.
Some time ago, while working with a product development team, I was assigned the task of creating an ongoing reliability test plan. This was just prior to the final milestone before starting production. During development we had learned quite a bit about the product design, supply chain, and manufacturing process, each of which included a few salient risks to reliable performance.
Investigating previous ongoing test plans
Being new to the group and knowing the project was a small evolution of an existing product, I suspected a previous plan was already in place. The motivation was not to replicate the previous plan. My intent was:
- To understand what data the previous testing produced
- To learn how that data was being collected and presented
- To see how the information helped the team make decisions
The new plan should be consistent with the previous one where it worked well, and perhaps an improvement where it did not.
I also asked a couple of people who had previously created ongoing reliability tests how they approached the task. Each one said they basically replicated the design qualification test plan and conducted the full suite of testing each month.
That is a lot of testing, and it is expensive; just think of all the data collected over the past few years. I really wish I had known about that treasure trove of data during the development process, as we could have avoided a few tests, improved others, and focused on the largest risks based on that data.
Finding the Data
The first person I asked confirmed that previous test plans existed and that the testing was conducted monthly. She said the data was probably on the team's shared drive. Since everything was on the shared drive, we spent a few minutes looking, with no luck.
Then I went to the other engineers and managers on the team who would be most likely to know the whereabouts of said data. Everyone I talked to over a month of searching was aware of the testing and knew the data existed somewhere. It was turning into a quest, and I was not sure whether it was becoming a search for the Holy Grail.
To make a long story very short, with the help of a country manager (the manufacturing and testing were being done in China) and the finance person, we found the data after two months of searching. The person collecting and organizing the data had done a wonderful job; the data was complete and well presented, including the raw data.
I asked when anyone had last requested the test data, and he said I was the first in the five years he had been maintaining the database.
Using Test Data
The requirement to create and run testing that evaluated the product's performance and durability was written into the product lifecycle and development guidelines. At some point in the past, the testing was considered worth the expense of creating test plans and paying for samples, testing, and data collection.
And somewhere along the way, that data's value diminished to little or nothing. No one on the development team or the manufacturing team took the time to review or even monitor the data.
Of course I grabbed the data and did a few simple plots. The historical record contained indicators of most of the excursions of higher-than-expected field failures; most of those would have been prevented or minimized had someone looked at the test data and made a decision to do something about it. I then revised the test plan I had created, eliminating many of the tests because they showed no failures or adverse variation over multiple generations of the design, and increasing the samples and frequency of a few based on larger-than-expected variation in the data.
More importantly, we stopped about two-thirds of the existing testing sequences: first, because no one really needed to look at the results; second, because the process and supply chain had demonstrated years of stability and capability. Those tests showed no indication of failure risk.
That left a manageable set of meaningful tests tailored to each product's unique risks. Those became useful for monitoring and for taking action to improve existing and future products.
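For illustration, here is a minimal sketch of the kind of simple review that surfaced those excursions. The file name and column names (month, test_name, units_tested, failures) are assumptions invented for this example, not the actual database.

```python
# A minimal sketch: plot each ongoing test's monthly failure rate and
# flag excursions. Assumes a hypothetical CSV of monthly results with
# columns: month, test_name, units_tested, failures (all names invented).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ongoing_reliability_results.csv", parse_dates=["month"])
df["failure_rate"] = df["failures"] / df["units_tested"]

for test, grp in df.groupby("test_name"):
    grp = grp.sort_values("month")
    mean = grp["failure_rate"].mean()
    std = grp["failure_rate"].std()

    plt.figure()
    plt.plot(grp["month"], grp["failure_rate"], marker="o")
    plt.axhline(mean, linestyle="--", label="mean")
    plt.axhline(mean + 3 * std, linestyle=":", label="mean + 3 sigma")
    plt.title(f"{test}: monthly failure rate")
    plt.xlabel("month")
    plt.ylabel("failure rate")
    plt.legend()

    # Excursions above the control limit suggest keeping (or strengthening)
    # the test; a flat record of zero failures over many months suggests
    # the test is a candidate to stop.
    excursions = grp[grp["failure_rate"] > mean + 3 * std]
    if not excursions.empty:
        print(test, "excursions in:", list(excursions["month"].dt.date))

plt.show()
```

Even a review this simple separates the tests worth keeping (repeated excursions, large variation) from the candidates to stop (years of flat, failure-free results).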
When to Stop Testing
Ideally, before setting the testing in motion. Not just ongoing reliability tests; any test. Any time we take a product out of the development or manufacturing process to conduct an evaluation, it takes time and resources to do so. Therefore, if the testing and results are collected to support a decision by someone who knows they are going to use the data to make that decision, then drive on.
Sure, there are all kinds of tests, and some are inexpensive, exploratory, or quick; for those, the impact of producing no meaningful information is minor. When the testing is expensive and time consuming, the value of the results had best warrant the investment.
Do not design and conduct a test unless there is some purpose. If it is done because "we always do it," that is a clear signal to stop and ask a few questions.
Second, if the testing is established as a routine, ongoing test and it no longer serves a meaningful (valuable) purpose, stop the test.
Testing appears to be like government agencies: once established, they exist like any living being with a will to survive, and they will find ways to continue despite having outlived any useful purpose. Tests are not entities in and of themselves (not sure about government committees, so no comment there). They are tools we use as engineers and managers to understand the characteristics of our products or processes.
Look at your existing testing and sort out, for each test:
- What is the purpose?
- What do the results mean?
- What question or decision does the data support?
- What is the value of this test?
If the answers to these questions indicate the test either will not or does not provide sufficient value given the investment required to create the data, then stop the test.
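As a sketch of how those four questions might feed a recurring review, here is one illustrative way to record the answers and apply the stop rule. The fields, costs, and the simple cost-versus-value threshold are assumptions for this example, not a prescribed method.

```python
# A minimal sketch of turning the four review questions into a stop/keep
# decision. All fields and figures below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestReview:
    name: str
    purpose: str                 # What is the purpose?
    result_meaning: str          # What do the results mean?
    decision_supported: str      # What question or decision does the data support?
    annual_cost: float           # Samples, test time, data collection
    annual_value: float          # Estimated value of the decisions it informs

def should_stop(review: TestReview) -> bool:
    """Stop the test if it supports no decision or costs more than it is worth."""
    no_decision = not review.decision_supported.strip()
    poor_return = review.annual_value < review.annual_cost
    return no_decision or poor_return

# Example: a legacy test nobody reviews gets flagged for retirement.
legacy = TestReview(
    name="full qualification suite (monthly)",
    purpose="replicates design qualification",
    result_meaning="pass/fail against qualification limits",
    decision_supported="",          # no one uses the results
    annual_cost=120_000.0,
    annual_value=0.0,
)
print(legacy.name, "-> stop" if should_stop(legacy) else "-> keep")
```

Any test whose results support no decision, or whose cost exceeds the value of the decisions it informs, is a candidate to retire at the next review.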
Fred Schenkelberg is an experienced reliability engineering and management consultant with his firm FMS Reliability. His passion is working with teams to create cost-effective reliability programs that solve problems, create durable and reliable products, increase customer satisfaction, and reduce warranty costs. If you enjoyed this article, consider subscribing to the ongoing series at Accendo Reliability.
Comments

Test Engineering, DFT and Management Consultant / Instructor and President at A.T.E. Solutions, Inc. and BestTest Group (9 years ago): Good post, Fred. As a test engineering consultant, I support your view, but I think we need to differentiate between tests. Not all tests are the same, and as you point out, the purpose of the test must be defined. While some tests have limited lives, such as design verification tests intended to prove the design is correct, other tests that look for specific defects, especially built-in tests (BITs), may be appropriately applied even 20 years after deployment. This is why test design is crucial, and we must overcome the use of "test" as a four-letter word whose exact meaning everyone is expected to know. If I may toot my own horn, I recently posted on this issue at https://www.dhirubhai.net/pulse/test-diagnoses-strategy-metrics-new-perspective-part-1-louis-y-ungar.
Technology Leader -- Healthcare Software, HR Software, Blockchain (9 years ago): Very crucial, especially because products go on for years and some core features continue while other features keep changing.
Improving product quality & process speed while reducing cost & waste to achieve operational excellence (9 years ago): I think this resonates well for product development orgs looking to lean out the development process and reduce total lead time from concept to launch.