Are ‘wrong’ simulations useful?

When brainstorming ideas for a new brief or project, it’s often said that ‘there are no wrong answers’. This encourages people to think outside the box so that as many varied ideas as possible are thrown into the hat. Can the same be said when you’re facilitating a modeling project? After all, just like a brainstorm, modeling is usually the first step in addressing business challenges.

Some of the ideas from a brainstorm are going to be discarded, of course. Some of them will inspire more thought, and some of them might be the golden nuggets – just what you were looking for. Without the process, however, many of them would never have seen the light of day. So, if you don’t approach modeling in the same way, are you potentially missing some of these ‘golden nuggets’?

One of our senior simulation consultants, Dr. Naoum Tsioptsias, recently had a research paper published in the Journal of Simulation, co-authored with Dr. Antuela Tako and Professor Stewart Robinson. It explored the question, ‘Are “wrong” models useful?’

Many of us can identify with the problem of ending up with a model that doesn’t quite meet our expectations. Should these models be considered “wrong” or discarded? The paper asks what “wrong” actually means, and whether a “wrong” model can still provide us with helpful insights.

We sat down with Naoum to find out more about the research, and whether its findings offer any good advice for avoiding common pitfalls when building models.


Can you tell us a little bit about why you wanted to investigate this topic and what you were hoping to learn?


There is a famous quote by George Box that states that “all models are wrong, but some are useful.” The research questions whether there is something we can learn from those models that are considered “wrong”. We decided to focus on discrete event simulation, but the takeaways can apply to other simulation types, like system dynamics and agent-based modeling.

We started out by asking how we define a model as “wrong”. It is very much open to interpretation: a model that doesn’t behave as required will raise concerns, but how did we end up with that model in the first place? We wanted to find out what is considered “wrong” and, given the time and effort spent on ideas, discussions and content development, whether we could still derive some usefulness from such cases.


So what is a “wrong” model?


“Wrong” may be a matter of perspective. This is because different people are going to be approaching model development with different expectations. Modelers need to constantly balance whether a model is adequate, valid enough, and representative. There are a lot of factors that may result in a “wrong” model.

As part of this research, we interviewed twenty-two simulation consultants with extensive experience in creating models. Throughout their careers they had identified cases where the model they produced may not have been considered ideal. In total, our interviewees collectively recalled fifty-four instances where a simulation model was considered “wrong” by the modeler, the client, or both.


So, if the definition of “wrong” is open to interpretation, what can people do if they find themselves with their version of a “wrong” model?


The research suggests that the best mindset for these types of project is to get the most out of the journey of creating a model, not just to focus on the destination. Beyond being a means of answering a specific question, a model can also facilitate discovery and the evolution of ideas. A simulation that isn’t right for one scenario or user may be suited to another, so some creative application is needed to make this work. The key is to take value from the whole process.

Ideally, we want to end up with a model that everyone can use, but if, for any reason, the results are not working towards the initial goal, it’s time to see what else can be learnt. At this point, it is important that everyone who was involved in the development process – modelers and stakeholders – revisits it with an investigative mindset to look for other possible benefits.

Questions that could be asked include:

  • Have I gained a better understanding of the process than I had before?
  • Have I managed to still find a benefit by fixing a smaller issue (e.g. a bottleneck) instead of the one I set out for?
  • Can I use this model to visually explain the problem to stakeholders or even use it for teaching purposes?
  • Do we see further potential for this model that has not yet been fully utilized?
  • How can we improve the process in the future?

Of the fifty-four instances of “wrong” models identified in the research, many could still be put to use once modeling was understood as an approach to problem solving rather than just a means to an end.


How has this research impacted how you approach your work at Simul8?


It has been very helpful. It has enabled me to keep in mind a checklist of common pitfalls throughout a simulation project so that I can avoid them.

For instance, a project may be held up by data requirements before model building even starts. In that case, it is important to remember that you may not need all of that information, as the goal is not to build an exact replica of a process. Rather, focus first on obtaining the information needed to drive your system. By starting small and building the model up as you go, you may find that you can make a variety of decisions quite quickly, while also gaining a better understanding of what to focus on.
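To make the "start small" idea concrete, here is a minimal sketch of ours (an illustration, not code from the paper): a single-server station modeled as a bare-bones discrete event simulation. With only two inputs – an arrival rate and a service rate – you can already estimate waiting times and ask whether the station is a likely bottleneck, long before gathering the data a full process replica would need. The function name and rates below are invented for the example.

```python
import random

def simulate_station(arrival_rate, service_rate, n_customers, seed=1):
    """Bare-bones discrete-event sketch of a single-server station.

    Only two inputs drive the system (arrival and service rates),
    yet the result already hints at whether the station is a
    bottleneck. Returns the average wait before service starts.
    """
    rng = random.Random(seed)
    clock = 0.0           # time of the current arrival
    server_free_at = 0.0  # time the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)   # next arrival time
        start = max(clock, server_free_at)       # queue if server busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# A lightly loaded station versus a heavily loaded one: the second
# has utilization close to 1, so average waits grow sharply.
print(simulate_station(0.5, 1.0, 10_000))
print(simulate_station(0.95, 1.0, 10_000))
```

Even a toy like this can answer a useful question ("does this station back up as demand grows?") and guide which data is worth collecting next, which is exactly the incremental approach described above.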

Another case is when you start to see that the model cannot answer the question you initially set out to address. That doesn’t mean it cannot provide other insights. In many instances where it is not feasible to change the model, it’s important to take the time to reflect on how we can still put it to work.




周峥

Wuxi Xunhe Information Technology Founder

Anyone who tries to focus on everything is, in my view, being foolish. Haha.

周峥

Wuxi Xunhe Information Technology Founder

All models are wrong, so the sensible question to ask is: how wrong can a simplified model be before it has little or no value for decision support?


Artem Kholodov

Developer of simulation models, AnyLogic

To implement any complex simulation model, you need to bring two strong Russian mathematicians with simulation modeling experience into the core of the team. Their main goal is to understand the problem, apply a set-theoretic approach, and build a system of virtual states with all the corresponding transitions. They will decide everything themselves, including what forms the basis of the model: random processes, fuzzy sets, neural networks. Numerous artisans will do the rest. And the text of this article is complete psychological nonsense, which I took as a personal insult when I saw it arrive in my mail.
