Almost four decades ago, Canadian scientist Daniel Osmond compared the challenges of applying for research funding to Alice’s adventures in Wonderland. In 1982, funding systems were illogical, brutal and exhausting, not unlike Alice’s encounter with the temperamental Queen of Hearts.
I say “were”, because surely things have improved since then? Sadly not. Funding systems today are still, as Osmond put it, “a bloody mess”, complete with a persistent gender bias that leaves Alice – or Anne or Angela – out in the cold. (Dave, on the other hand, is doing fine, if the lists of Australian Research Council (ARC) and National Health and Medical Research Council (NHMRC) grant winners are any indication).
So, why is it so hard to improve funding systems? Surely a scientific approach could be used to create a fair, efficient and sensible system? The answer is simple, but maddeningly ironic: the systems for funding science are human processes, and the science of how to design them is ignored.
Universities and funders resist evidence much like, well, many other areas of government and industry. Translating evidence into policy is hard, and it’s often complicated by those in charge who (whether they admit it or not) would prefer to keep the status quo.
Systems for funding research are often designed by eminent professors, the same professors who have thrived under the current system and so build new systems that reward people like themselves.
This is survivorship bias, a well-known problem in epidemiology but one that is overlooked in funding. We are failing to support and foster talented young scientists because of a lack of diversity in funding, both in the people who are funded and in the ideas that are pursued.
Unscientific thinking
Funding systems are obsessed with the scientifically baseless idea of pursuing “excellence”.
When it comes to scientific research, nobody can define what excellence is; I challenge you to find a coherent definition of excellence on any funder’s website. And yet, most funding systems claim that they reward it.
For example, would a well-executed randomised trial for a new COVID-19 treatment that showed no effect be regarded as “excellent”? To me, that’s research at its best, but too often “excellence” means breakthroughs, which means careful science that finds a “negative” result goes unrewarded.
Another problem is that many scientists are unscientific when it comes to funding. If they don’t win funding, they decry the system as bad.
A 2020 study of researchers’ attitudes towards a New Zealand scheme that used a lottery to allocate research funding, for example, found that those who won funding were more likely than unsuccessful applicants to think the lottery was a good system, even though they knew the allocation was random.
When it comes to funding, scientists do not think like scientists.
Let’s be honest, many would be happy for funding to be awarded based on a karaoke contest if they won. And given the lack of association between peer reviewers’ scores and the outcomes of funded projects, a karaoke contest might be just as good (and would involve less paperwork).
If scientists cannot put their own biases aside when it comes to funding systems, there will be little motivation to advocate for change.
Agents of change
The reality is that we are unlikely to get an evidence-based funding system any time soon, because the evidence just isn’t there. I am aware of only two studies in the past decade that used ARC or NHMRC application data; they found that fellowship funding was more reliable than project funding, and that interdisciplinary research has a lower chance of success.
When my team and I once spoke to the head of one of these agencies about designing a more efficient system, they told us that their peer review system was like a “Rolls-Royce”. But we weren’t allowed to look under the bonnet to assess its costs and reliability. It was the emperor’s new car.
Funding agencies can make bold claims if they never have to test their systems against alternatives. But if there ever were a trial of two alternative systems, would scientists be willing to participate? I suspect many would not want to take part in a funding experiment in which they risk losing out.
The onus to change is not just on funding agencies. I know that many people inside the system are working hard to make improvements. I think the key stumbling block is scientists themselves.
Until scientists think more like scientists when it comes to funding, we’ll never get a smart, fair and evidence-based system. We’re stuck with a human system tainted with human blind spots and biases.
Article by Adrian Barnett