Let’s talk about the uses as well as the misuses of Transport Models
For any professional engaged in analytical work, public debate about the ethics and misuse of your toolkit is bound to provoke mixed feelings. There is an instinct to keep your head down and focus on your own work, but at the same time a professional must not appear to be disengaged from these issues.
For me, Yaron Hollander’s report (‘Who will save us from the misuses of Transport Modelling?’ https://www.dhirubhai.net/pulse/who-save-us-from-misuse-transport-models-yaron-hollander) simply gives an opportunity to talk about issues which are close to my heart as a transport modeller, and to engage in a debate I’m always happy to have.
Yaron’s approach of looking at specific types of misuse is interesting, but in my opinion the report jumps into the debate halfway through. It describes 10 behaviours which it labels as ‘unethical’. If they were undertaken with malice and a clear intent to mislead, then that label would surely be difficult to rebut (leaving aside the debate about the extent to which they actually take place, which is promised in a later publication). However, each of the ten is in truth a failure of the modeller to clearly explain the limitations of the modelling, and/or a failure of the client to understand and keep those limitations in mind when using the results. For example, ‘3: Blurring the caveats in a summary report’, or ’7: Avoiding clear statements on … social and economic trends’.
So our most obvious response to these problems is to redouble our efforts to explain these limitations and caveats. However, the biggest danger, I would suggest, is that we lose sight of the good that a model can do to support decision-making, by allowing discussion to focus on areas of doubt rather than seeking areas of agreement. We should keep in mind the fundamental question: ‘Why do we use models in the first place?’
This may seem old hat, but I sincerely think that it is dangerous to talk of ‘the ethics of modelling’ without carefully laying out what role the model is there to fulfil. How can we (or our clients and the public) judge if certain uses of model data are appropriate, if we do not routinely remind ourselves of what the model was intended for?
Of course, opinions about what the model is for will shift according to who you are and how you got involved in the first place. Our clients are diverse, and many are ex-modellers themselves, but ultimately there are sponsors and politicians who paid for the model. They will naturally see the model as a tool to get a specific decision made, and often there is already political capital invested in the outcome. What a modeller sees as a sensible caveat may seem to the politician an inconvenient truth.
A second set of stakeholders with strong views are the public. Sadly, we often encounter them as objectors to a scheme, who will see the model as a tool being used to justify a bad decision: they know in their hearts the scheme is wrong, so the model must be wrong. Every caveat revealed is potential ammunition, and many of the behaviours described in the report could arise from this. In the terms of the report this may seem unethical behaviour, but it is hardly surprising and not something that can be avoided. Challenges are inevitable, but they do need to be filtered and given sensible weight in any debate.
As for the modellers, I’d like to think that we see the model as an ‘honest broker’, providing a consistent and objective means to understand a planning problem. We know that the whole process of modelling can help to improve a scheme by offering insights previously unavailable, sometimes providing certainty about the benefits and sometimes suggesting that a re-think is needed. We also know that honesty and clarity about the meaning of the results AND the caveats are essential, but we fear that the big messages will get lost amongst too much detail.
My own conclusion from all this is that modellers do need to emphasise communication and explanation at all times, but this means proudly talking about what the model CAN do and what it CANNOT do. We need to approach each new modelling project with a clear and open statement of the limitations, and ensure this is heard and considered. And we need to ensure both the strengths and weaknesses remain in view when we present results.
As the report notes, this is a time of unprecedented demand for modelling, so we must surely be delivering a service that’s needed. Let’s talk about the insights and improvements to transport planning that we can achieve through modelling, and use that as a firm basis to discuss the limitations.
Experienced Rail Professional
Thanks, a good article. Agree the onus is on the modeller to communicate the risks and limitations of their work. One of the ways the industry could improve is by teaching more junior staff why we model. This might seem obvious, but we tend to focus on the technical side and forget about answering the client's questions. For example, we're very poor at teaching graduates how to interpret and present results. The industry lags others here, e.g. many reports are still full of very bland tables which don't easily provide the answer for the layperson.
Economist-Econometric Modeller within Transportation, regional and Urban Economics, founder of EconOration.
The majority of transportation models are causal in nature, based on accepted economic theories and estimated using approved frameworks and assumptions. Naturally, their effectiveness in explaining reality is limited by their implicit specifications. Since models are generally used for predictive purposes, their precision is further challenged by inherent uncertainty about the future. Like Richard, I believe that the use of models should be regulated so that they follow a certain framework within which we can handle data and assumption issues and produce reasonable predictions.
Independent Research Analyst: Analytics Cambridge
The good use of models goes beyond the simple "it's produced by the model, so it must be right". There is recognition that there are limitations in the data and in assumptions about the future. But the model provides the ability to have those discussions and to see what difference changing the assumptions makes: how robust are the results? So it provides a framework for tests.
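To make that point concrete, here is a minimal Python sketch of such a sensitivity test: a toy demand forecast re-run across a range of assumptions to see how much the answer moves. Every number below (base demand, growth rates, fare elasticities) is an illustrative placeholder, not taken from any real scheme or published model.

import itertools

BASE_DEMAND = 10_000      # assumed current daily trips on the corridor (illustrative)
FORECAST_YEARS = 15       # assumed appraisal horizon in years (illustrative)

def forecast_demand(annual_growth, fare_change, fare_elasticity):
    """Toy forecast: compound background growth, then a constant-elasticity fare response."""
    grown = BASE_DEMAND * (1 + annual_growth) ** FORECAST_YEARS
    return grown * (1 + fare_change) ** fare_elasticity

# Vary the key assumptions across plausible ranges and look at the spread of outcomes.
growth_rates = [0.005, 0.010, 0.020]   # 0.5%, 1% and 2% p.a. background growth
fare_changes = [0.00, 0.10]            # no change vs. a 10% real fare rise
elasticities = [-0.3, -0.6, -0.9]      # assumed fare elasticities of demand

results = [forecast_demand(g, f, e)
           for g, f, e in itertools.product(growth_rates, fare_changes, elasticities)]

print(f"forecast range: {min(results):,.0f} to {max(results):,.0f} trips per day")
# If the decision would be the same anywhere in that range, the result is robust;
# if not, the disagreement about assumptions is what the debate should focus on.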
Independent Consultant with 30 years' experience in Transport Modelling, Operational Research, analysis and consulting.
Yaron - just to clarify that when I say we should be 'proud', I mean it in the sense of 'confident', and that the confidence should mean we're not afraid to see apparent weaknesses and caveats aired (as opposed to the sense of pride which comes before a fall...). 'Lowering expectations' is quite a challenging phrase, but again I think we may be thinking along the same lines. I could certainly go with 'challenging why expectations are so high, asking if the expectations match the REAL task, and if not adjusting so that the task requirements, the expectations and the capability of the modelling are all in line'. Not as catchy, though. (I should also point out that I'm generally referring to data analysis and the use of observed data as part of modelling, as I absolutely agree that wherever we can learn more by looking at empirical evidence, it should be done as a matter of course.)
Expert-conseil en mobilité / Strategic Advisor in mobility
Models, just like plans, can serve many different purposes. Yes, they are crucial inputs to the planning process, but there is a wide variety of "plans" which can be made: agendas, policies, visions, designs, strategies, etc. Properly understanding the intended use of a particular model (or type of model) can help immensely in ascertaining the limitations of its applicability. I think that much of the problem is structural: given how much effort goes into certain types of models (large "integrated" ones), there is a tendency to think that they should be applicable to a wide variety of analysis situations at more refined scales. And then there is always the belief that 'big data' is going to lead to better models. There is no guarantee that more data and more complex (or "complete") models will necessarily lead to "better" forecasting. In fact, we need to recognize that it is not our plans but rather our actions which create the future that we are trying to forecast. We should probably be more circumspect about our ability to forecast future human behaviour. I would like to see the behavioural underpinnings of models more directly discussed. What exactly do we mean by 'value of time'? What are the implications of these assumptions?
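As one illustration of what that last question touches on: in many mode-choice models, 'value of time' is simply the ratio of the time and cost coefficients in an assumed utility function. The minimal Python sketch below uses purely illustrative coefficients, not values from any published model or official guidance.

BETA_TIME = -0.040   # utility per minute of in-vehicle time (illustrative assumption)
BETA_COST = -0.008   # utility per pence of cost (illustrative assumption)

# Implied value of time: how much cost a traveller would trade to save a minute.
vot_pence_per_min = BETA_TIME / BETA_COST            # 5 pence per minute here
vot_pounds_per_hour = vot_pence_per_min * 60 / 100   # = 3.00 pounds per hour

print(f"implied value of time: GBP {vot_pounds_per_hour:.2f} per hour")
# Halve BETA_TIME and every time-saving benefit in the appraisal halves with it,
# which is exactly the kind of behavioural assumption worth discussing openly.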