Drive to Knowledge Newsletter 23-1a
Peter Holst
CEO at FPrin LLC, an Engineering Services Consultancy supporting medical device development with analysis and design
The Drive-to-Knowledge Newsletter is designed to investigate the benefits and methods of a Knowledge-Driven Product Development process for medical devices.
This is the first issue of our monthly LinkedIn newsletter, sharing our thoughts on topics we find interesting and relevant in the domain of medical device development.
We hope that you enjoy it. We welcome feedback and comments.
---Peter
The analytical model says it should work.
The empirical data doesn’t agree. What now?
Let’s suppose that we have followed a process that includes building an analytical model and have reached a point where we think we have a pretty good understanding of what to expect.
We proceed to build an engineering prototype, proof of concept, or test apparatus. We design an experiment and take data. But the data doesn’t align with our model-based expectations. What now? “Bad” data? “Bad” model? “Bad” prototype? Measuring the “wrong” thing?
Stop! If you are tempted to ditch the model and push ahead with a purely experimental investigation, resist the urge. We have seen that model abandonment can begin a long and painful iterative trial-and-error process. It's time to improve the model, the experiment, or both.
Revisiting the analytical model and its underlying assumptions – the explicit assumptions and (especially) the implicit ones – often makes sense. Sometimes we make assumptions without even realizing it. Is there something we have assumed that does not actually hold in the physical system?
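To see how much one hidden assumption can matter, here is a minimal Python sketch (not from the article; the orifice-flow model and all parameter values are illustrative assumptions) that sweeps an idealized discharge coefficient:

```python
import numpy as np

# Hypothetical example: an orifice-flow model in which the discharge
# coefficient Cd is easy to assume implicitly (Cd = 1, i.e., an ideal,
# lossless orifice). Sweeping Cd shows how far the prediction moves if
# that implicit assumption does not hold in the physical system.
RHO = 1000.0     # fluid density, kg/m^3 (water; illustrative)
AREA = 1.0e-6    # orifice area, m^2 (illustrative)
DELTA_P = 50e3   # driving pressure, Pa (illustrative)

def predicted_flow(cd: float) -> float:
    """Volumetric flow (m^3/s) from the standard orifice equation."""
    return cd * AREA * np.sqrt(2.0 * DELTA_P / RHO)

for cd in (1.0, 0.8, 0.6):  # ideal value vs. typical real-world values
    q = predicted_flow(cd)
    print(f"Cd = {cd:.1f}: predicted flow = {q * 1e6:.2f} mL/s")

# A ~40% spread from a single hidden assumption can easily account for
# a model-vs-data discrepancy.
```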
Revisit the physical model and the experimental design, setup, and execution. Sometimes we inadvertently measure the wrong thing – perhaps there is an unaccounted-for room-temperature effect. When in doubt, debug a physical setup subsystem by subsystem.
Careful review of the experimental setup, procedures, and data collection methods is often useful. Look for any potential confounding variables or flaws in the experimental design that may have affected the results.
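One simple way to hunt for such a confounder is to correlate the model-vs-data residuals against every variable that was logged during the test. A minimal sketch, using invented data and room temperature as the candidate confounder:

```python
import numpy as np

# Minimal sketch: check whether model residuals track a candidate
# confounder (here, logged room temperature). All data are invented
# for illustration.
predicted = np.array([10.1, 10.0, 10.2, 10.1, 10.0, 10.2])
measured  = np.array([10.4, 10.1, 10.9, 10.5, 10.0, 11.0])
room_temp = np.array([21.5, 20.1, 23.8, 22.0, 19.9, 24.1])  # deg C

residuals = measured - predicted
r = np.corrcoef(residuals, room_temp)[0, 1]
print(f"Correlation of residuals with room temperature: r = {r:.2f}")

# A strong correlation (|r| near 1) suggests the model is missing a
# temperature effect, or the setup is more temperature-sensitive than
# intended.
```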
Of course, the analytical model will give reproducible results (same inputs, same outputs), but it is harder to make that claim for the physical model. In a test system we may think the inputs and/or test conditions are the same, but they are almost certainly not identical. Even repeats on the same test item can yield different results (see “precision”, among other possible causes).
Perhaps the outputs are sensitive to some other stimulus – operator, temperature, humidity, or degradation or change in the samples under test or the fixture. Replication is crucial in science to ensure the validity and reliability of results, and it is very difficult (or impossible) to make claims of accuracy when results are not reproducible – i.e., when the system is not precise.
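Before arguing about accuracy, it is worth quantifying precision. A minimal sketch of a repeatability check on invented data:

```python
import numpy as np

# Minimal sketch: quantify precision from repeated measurements of the
# same test item under nominally identical conditions. Values invented
# for illustration.
repeats = np.array([4.92, 5.11, 4.88, 5.30, 4.75, 5.22])

mean = repeats.mean()
std = repeats.std(ddof=1)  # sample standard deviation
cv = std / mean            # coefficient of variation

print(f"mean = {mean:.3f}, std = {std:.3f}, CV = {cv:.1%}")

# If the scatter (CV) is comparable to the model-vs-data gap, the
# "disagreement" may be nothing more than measurement imprecision.
```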
Consider a "bounding case" analysis. Set aside all unknown losses, efficiencies, etc., and define a conservative envelope for the possible results – energy in versus energy out, for example. If results fall outside this envelope, the analysis or the experiment may be falling victim to a unit-conversion error, or your understanding of the system may be fundamentally flawed.
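As a sketch of such a bounding check, consider a made-up electrical energy balance (all numbers are illustrative, not from a real test):

```python
# Minimal sketch of a bounding-case check: ignore all losses and ask
# whether the measured output even fits inside a conservative energy
# envelope. Numbers are illustrative.
VOLTAGE = 12.0    # V, supply
CURRENT = 0.5     # A, measured draw
DURATION = 10.0   # s, test duration

energy_in = VOLTAGE * CURRENT * DURATION  # J; no-loss upper bound on output
measured_work_out = 75.0                  # J, from the experiment (illustrative)

print(f"Energy in (no-loss bound): {energy_in:.0f} J")
print(f"Measured work out:         {measured_work_out:.0f} J")
if measured_work_out > energy_in:
    # Output exceeding a lossless bound points to a unit-conversion error
    # or a fundamentally flawed understanding, not a tuning problem.
    print("Measured output exceeds the bound -- check units and assumptions.")
```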
Seek input and opinions from colleagues. Another set of eyes is great when things aren’t quite making sense. Colleagues can often provide insights into potential reasons for the discrepancy and suggest alternative hypotheses or experiments to investigate further. The act of distilling down your observations into a concise set of facts to relay to a colleague is itself an act of troubleshooting!
Evaluate alternative models or approaches. When working with a dynamic system, it is often insightful to excite or drive the system with a well-defined input – a step or a ramp. If the system responds poorly to relatively simple inputs, it is unreasonable to expect it to respond well to more complicated ones.
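For a first-order system, for example, the step response can be written in closed form and compared to lab data point by point. A minimal sketch with an assumed gain and time constant and invented measurements:

```python
import numpy as np

# Minimal sketch: drive a hypothetical first-order system model with a
# step input and compare against measured data. Gain, time constant,
# and "measured" values are invented for illustration.
K, TAU = 2.0, 0.5             # assumed steady-state gain and time constant (s)
t = np.linspace(0.0, 3.0, 7)  # sample times, s

model = K * (1.0 - np.exp(-t / TAU))  # analytic first-order step response
measured = np.array([0.0, 1.3, 1.8, 1.95, 2.0, 2.0, 2.0])

for ti, m, y in zip(t, model, measured):
    print(f"t = {ti:.1f} s   model = {m:.2f}   measured = {y:.2f}")

# If even this simple step response disagrees, more complicated inputs
# will not fare better; reconcile the simple case first.
```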
If the contradiction remains unresolved, it might be necessary to revisit and refine the model or modify the experimental setup. We might have to go back to the drawing board. This isn't necessarily a bad thing, because we have almost certainly learned something.
It’s been said that all models are imperfect, but all models teach us something. This statement can be applied to both analytical and physical models. The challenge is in building and using models in a way that teaches us something useful!
By critically examining both the model and the experiment, seeking expert opinions, and following the scientific method, it's possible to gain a deeper understanding of the problem and make progress toward a resolution.
Comments

Product Development, Strategy, Innovation, Startups (Materials Science & Chemistry):
Good luck with the newsletter. The idea behind any modeling is to minimize the number of experiments one needs to do, predict the potential outcome of an experimental measurement, or provide an explanation for an experimental observation. The right models definitely accelerate product development!

Streamlined Compliance for Medical Device Development:
Good points for everyone involved in R&D! Whenever we find a difference between our models and what we measure in the lab, there's something to learn to increase our understanding of the system. Teams that only do the lab experiments will likely miss out on this learning.