6 Questions You Need to Ask about Those Simulation Results
In the spirit of my previous post on questions you should ask about experimental designs, here are 6 questions you need to ask when evaluating the credibility of simulation results. Decision makers at every level will increasingly be called on to make decisions based, at least in part, on the outputs of computational simulations. When you are in this position, it is important to establish the credibility of those results and weigh them accordingly. Consider these questions conversation starters rather than legalistic requirements to be checked off.
- How many model parameters are there? What are their values? How did you choose those values? What uncertainty characterizes that choice? Based on your uncertainty in the inputs, what is your uncertainty in the outputs? (See the Monte Carlo propagation sketch after this list.)
- What is the basis for believing this model is being applied within its domain of applicability? What would give you an indication that its assumptions are breaking down? How could you test for that breakdown?
- What is the next higher-fidelity simulation you could apply to this problem? Have you run a lower-fidelity code, or do you have results from an engineering correlation? Where do they match the present results, and where do they not? Why do they differ? (See the correlation cross-check sketch below.)
- How has the correctness of the code implementation been verified? How has the correctness of the specific calculation been verified? (See the order-of-accuracy sketch below.)
- How do the outputs of interest change with variation in the model parameters (i.e., a sensitivity study) or in the model form? Which parameters are the results most sensitive to? (See the sensitivity sketch below.)
- What software quality assurance and configuration control methods are used for the simulation and analysis software? What regression testing is performed as code changes are made? What is the coverage of the unit test suite? (See the regression-test sketch below.)
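To make the first set of questions concrete, here is a minimal sketch of propagating input uncertainty to an output via Monte Carlo sampling. The drag-force model, the input distributions, and all numerical values are illustrative assumptions, not results from any real analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

def drag_force(cd, area, v, rho=1.225):
    """Hypothetical output of interest: aerodynamic drag force in newtons."""
    return 0.5 * rho * cd * area * v**2

# Assumed input uncertainties -- illustrative values, not from a real study.
n = 100_000
cd   = rng.normal(0.30, 0.02, n)    # drag coefficient, ~7% relative uncertainty
area = rng.normal(2.2, 0.05, n)     # frontal area, m^2
v    = rng.uniform(27.0, 29.0, n)   # velocity, m/s, bounded by test conditions

f = drag_force(cd, area, v)

lo, hi = np.percentile(f, [2.5, 97.5])
print(f"output mean: {f.mean():.1f} N, std dev: {f.std():.1f} N")
print(f"95% interval: [{lo:.1f}, {hi:.1f}] N")
```

Direct sampling like this is the bluntest propagation method; stratified designs such as Latin hypercube sampling help when each model evaluation is expensive.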
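For the fidelity-hierarchy questions, a cross-check against an engineering correlation can be as simple as tabulating relative differences. This sketch uses the Dittus-Boelter correlation for turbulent pipe heat transfer; the "simulation" values are made-up placeholders standing in for your code's outputs:

```python
import numpy as np

def dittus_boelter(re, pr):
    """Dittus-Boelter correlation for turbulent Nusselt number (heating)."""
    return 0.023 * re**0.8 * pr**0.4

re = np.array([1.0e4, 5.0e4, 1.0e5])      # Reynolds numbers checked
pr = 0.7                                  # Prandtl number (air-like)
nu_sim = np.array([32.0, 110.0, 205.0])   # placeholder "simulation" outputs

nu_corr = dittus_boelter(re, pr)
for r, s, c in zip(re, nu_sim, nu_corr):
    print(f"Re={r:8.0f}  sim={s:7.1f}  correlation={c:7.1f}  "
          f"diff={(s - c) / c:+6.1%}")
```

The interesting conversation starts where the differences are largest: is the correlation outside its validity range there, or is the simulation?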
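For the verification questions, one standard check on a specific calculation is computing the observed order of accuracy from solutions on systematically refined grids and comparing it to the scheme's formal order. A minimal sketch, with made-up values chosen to behave like a second-order scheme:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three systematically refined grids."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Made-up solutions on coarse, medium, and fine grids (refinement ratio 2).
f3, f2, f1 = 0.9714, 0.9921, 0.9975
r = 2.0

p = observed_order(f3, f2, f1, r)
f_extrap = f1 + (f1 - f2) / (r**p - 1)   # Richardson extrapolation
print(f"observed order of accuracy: {p:.2f}")   # ~2 for a 2nd-order scheme
print(f"Richardson-extrapolated value: {f_extrap:.4f}")
```

An observed order well below the formal order is exactly the kind of breakdown indicator the earlier questions are probing for.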
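For the sensitivity questions, the simplest screening is a one-at-a-time parameter sweep. This sketch reuses the hypothetical drag model from the first sketch and reports each parameter's elasticity; all nominal values are assumed for illustration:

```python
def model(params):
    """Hypothetical output of interest (drag force) from named parameters."""
    return 0.5 * 1.225 * params["cd"] * params["area"] * params["v"] ** 2

baseline = {"cd": 0.30, "area": 2.2, "v": 28.0}   # assumed nominal values
y0 = model(baseline)

# One-at-a-time sweep: perturb each parameter by +/-1% and report the
# elasticity, i.e. percent change in output per percent change in input.
for name in baseline:
    up, down = dict(baseline), dict(baseline)
    up[name] = baseline[name] * 1.01
    down[name] = baseline[name] * 0.99
    elasticity = (model(up) - model(down)) / (2 * 0.01 * y0)
    print(f"{name:>5s}: elasticity = {elasticity:+.2f}")
```

Here the velocity elasticity comes out near 2 and the others near 1, exactly what the quadratic dependence predicts; when the model is a black box, a sweep like this is how you find that out.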
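Finally, for the software-quality questions, a regression test can be as small as comparing an output of interest against a baseline recorded from a previously accepted run. A pytest-style sketch with a placeholder solver:

```python
# test_regression.py -- run with pytest; everything here is a placeholder.
import math

def simulate(mesh_size):
    """Stand-in for the real solver; returns a scalar output of interest."""
    return 1.0 - math.exp(-mesh_size / 50.0)

def test_against_stored_baseline():
    # In practice the baseline lives in version control with the tests and
    # changes only through a reviewed commit.
    expected = 0.8646647   # recorded from a previously accepted run
    assert math.isclose(simulate(100), expected, rel_tol=1e-6)
```

The tolerance is a real decision: too tight and every compiler or library update breaks the suite; too loose and genuine regressions slip through.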
The two references I'll point you to for further reading have a couple of nice remarks emphasizing this preferred focus on conversation and discussion:
"This Handbook primarily provides guidance to a more complete discussion of the details surrounding M&S-based analyses." (NASA-HDBK-7009)

"Conducting a PCMM [predictive capability maturity model] assessment and sharing it with interested parties and stakeholders engenders discussions that would not have occurred without the assessment. Such communication is a highly significant consequence of an M&S maturity assessment." (SAND2007-5948)
I'll leave you with this encouragement (my apologies to Niels Bohr):
Deciding is very difficult, especially about simulations.