Are metrics a threat to scientific progress?
Tim Cahill
Managing Director | Strategy Consulting to Research Organisations | Driving Outcomes in Australian Research |Top 1% higher education sales professionals on LinkedIn
This post is based on a talk I gave late last year at the APO Annual Forum (video available here).
“Scientific revolutions are inaugurated by a growing sense... that an existing paradigm has ceased to function adequately in the exploration of an aspect of nature to which that paradigm itself had previously led the way.”
So said Thomas Kuhn, physicist, historian and philosopher, in 1962.
In the intervening years, there have been massive changes to the world of academic work. Not least among them, academic literature is estimated to grow at around 8–10 per cent annually. That means that in the period between Kuhn’s observation and now, the scientific literature has doubled no fewer than six times. To put this in some perspective, global population growth was estimated at around 1.62 per cent per year over the same period.
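The doubling claim follows directly from the growth rate. A minimal sketch of the arithmetic (assuming a publication date of roughly 2018, 56 years after Kuhn's 1962 book):

```python
import math

def doublings(annual_rate, years):
    """How many times a quantity doubles at a given annual growth rate."""
    doubling_time = math.log(2) / math.log(1 + annual_rate)
    return years / doubling_time

years = 2018 - 1962  # assumed interval since Kuhn's observation

print(doublings(0.08, years))  # at 8% growth: ~6.2 doublings
print(doublings(0.10, years))  # at 10% growth: ~7.7 doublings
```

So even at the low end of the estimated range, the literature has doubled more than six times over the period.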
The growth of publishing has been accompanied by an explosion in the number of journals and supported by ever-increasing public spending on research. It is no surprise, then, that we have by necessity come to rely on metrics to account for all this activity, to assist with resource allocation and to provide tools for accountability.
The evidence shows that the current suite of metrics is now probably shaping the progress of science itself, challenging Kuhn’s notion of scientific progress.
We now have an emerging empirical body of evidence that demonstrates how the focus of research assessment on peer review and citation metrics actually limits the type of work that academic researchers are willing to undertake.
It’s referred to as the ‘evaluation gap’: the gap between the narrow focus of what is measured, on the one hand, and the broader and multiple missions of universities, on the other.
Delivering broader social benefits, applying knowledge to solve pressing issues, and contributing to policy development and community engagement are ignored as researchers focus their attention on performing against evaluation criteria.
A couple of examples:
A recent study, titled “Accounting for Impact? How the Impact Factor is shaping research and what this means for knowledge production” examined how the Journal Impact Factor influenced the knowledge production practices of three types of research groups in two University Medical Centres in the Netherlands.
It found that the Journal Impact Factor was an important consideration in the knowledge production practices of these groups, particularly the translational and applied research groups, as an indication to peers of the quality and innovation of their work. More importantly, it found that these scientists, while they did not always agree with the use of the Journal Impact Factor, felt they had to accommodate demands to demonstrate the impact of their published work to peers in order to progress their careers. This often meant prioritising citation impact over the translation of research into clinical settings.
A second study, titled “Racing for what? Anticipation and acceleration in the work and career practices of academic life science postdocs” included interviews with postdoctoral life scientists in Austria to understand the changing demands of their work and career practices over time.
It found they were acutely aware of the highly competitive nature of their work, with its focus on individual achievement and a continually accelerating working pace. It also found that these perceptions shaped research practices and output, with quantity and speed being valued over quality in this fast-paced, individualised environment.
In both these examples, academics are not just responding to incentives, but the incentives are clearly shaping what work gets done and what gets ignored. In other words, metrics are actually starting to influence not just the behaviour of academics, but also the progress of science itself.
But this is not to say that we shouldn’t be using metrics. Robust and carefully designed metrics can successfully drive our research sector in the directions we want it to go. For example, the focus on research quantum in Australia in the 1990s successfully lifted academic output, and ERA increased the quality of the venues where academics published. These were both good outcomes.
So, what is to be done now? How do we design metrics that foster positive outcomes and promote the best of science and research?
One paradigm shift (to borrow Kuhn’s term) would be to move away from a culture of competition, both between academics and between institutions. The whole system, including the metrics we use to measure it, is based on mechanisms of individual reward: journal articles, being lead author, single-person competitive grants and fellowships, the idea of the Chief Investigator, promotions processes, patents and our entire IP system all take the individual as the unit of measurement. This promotes a culture of competition and likely leads to a large amount of duplication, missed opportunities and wasted effort. We need look no further than the current ARC and NHMRC success rates of 15–18 per cent for evidence of this. When an entire system is built on an inefficient mechanism, the waste adds up.
Presumably, if metrics can shape scientific progress for the worse, they can also shape it for the better. Is competition the most efficient and effective mechanism to deliver our scientific endeavour? Does it imbue the kind of research culture that we want to have? I suspect the answer is no, and that metrics that maximise collaboration and consolidate our investment would deliver better science and research.