Imagine that you spend three months writing a grant application. Writing is pain. Walks and coffee break the monotony. After weeks of deep thought — and careful review of your words — you hit submit. The nebulous gears and bureaucratic levers of the National Institutes of Health (NIH) suck up your draft — bzzt! — and assign it to a study section. Many months later, your grant is rejected. You take to Twitter, fingers buzzing on the keyboard, and gripe.
Your rejection may have nothing to do with good or bad ideas. Perhaps the grant was rejected because a figure legend “creeped into the 0.5in margin.” Or because you had “not received previous NIH grants.” (Hilarious.) Or perhaps, on the same day that your grant is rejected, you are awarded a Nobel Prize. (Nice.)
So chin up. Most extramural NIH grant programs have acceptance rates slightly higher than 20 percent. And scientists, on average, spend nearly 10 hours per week on fundraising, according to self-report survey results published in 2020. Take solace in the fact that many others are suffering, too.
What are the solutions to your misery? If the goal of grant competitions is to hand out gold medals, then perhaps it’s best to eliminate the fluff, theatrics, and competition — and just dole out money equally, or randomly, to scientists. Surely, this would leave more time for research. Or, at the very least, we could pare back application requirements — requiring shorter, less detailed proposals — to make it easier to submit grants in the first place.
Both options have been proposed by serious scientists. And both could have unintended consequences.
That’s according to a new arXiv preprint from Kyle R. Myers, a professor at Harvard Business School. Myers’ preprint explains how rejected grants could generate positive value, because the act of writing a proposal forces scientists to refine their ideas, think through experiments, or explore new directions.
Science moves forward, even for the losers.
The preprint extends prior work by Kevin Gross and Carl Bergstrom, who modeled science funding contests using auction and tournament theory. Their conclusion: As contests become more competitive — or paylines get lower — scientists collectively spend more time writing, even though the total amount of money stays the same. The Gross-Bergstrom model assumes that only proposals that win money generate any benefit; unfunded proposals are a waste of time.
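To see the intuition, here is a minimal sketch in the spirit of that contest logic. It is my own simplification, not the Gross-Bergstrom model itself: a symmetric Tullock contest in which each of n scientists chooses how much writing effort to put in, and the chance of winning a fixed pot of money is proportional to effort. The textbook equilibrium implies that as more scientists compete for the same pot (that is, as the success rate 1/n falls), the total effort spent writing climbs toward the full value of the prize.

```python
# A minimal illustration, not the Gross-Bergstrom model itself: a symmetric
# Tullock contest where each of n scientists picks a writing effort e and
# wins a fixed prize V with probability e_i / sum(e). The textbook symmetric
# equilibrium effort is e* = V * (n - 1) / n**2.

def tullock_equilibrium(n: int, prize: float) -> tuple[float, float]:
    """Return (effort per scientist, total effort) at the symmetric equilibrium."""
    per_scientist = prize * (n - 1) / n**2
    return per_scientist, n * per_scientist

if __name__ == "__main__":
    V = 100.0  # the total money on offer stays fixed
    for n in (2, 5, 10, 50):  # more applicants means a lower success rate (1/n)
        each, total = tullock_equilibrium(n, V)
        print(f"n={n:>2}  success rate={1/n:>4.0%}  effort each={each:5.1f}  total effort={total:5.1f}")
```

With the prize held at 100, total writing effort rises from 50 to 98 as the success rate falls from 50 percent to 2 percent. In the Gross-Bergstrom framing, the effort poured into losing proposals is wasted; Myers’ extension asks what changes when some of it is not.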
Empirical evidence supports Myers’ model.
In a 2018 survey, Myers notes, scientists said that “at least one third of the effort spent on applications is scientifically useful.” And “for each additional hour scientists report spending on fundraising, they report spending 6 minutes less engaged directly on their research,” he adds.
In other words, more time spent fundraising barely translates into less time spent on research.
In contrast, “when scientists report spending an additional hour on teaching or administrative duties, they report spending 24 minutes less on their research.” (When it’s time to point fingers, look no further than your department’s dean.)
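To make that contrast concrete, here is a quick back-of-the-envelope calculation using the two reported associations. The 10-hours-per-week figure is a hypothetical I have picked for illustration, and these are survey correlations, not causal effects:

```python
# Back-of-the-envelope illustration of the two reported associations
# (self-reported correlations, not causal estimates): an extra hour of
# fundraising goes with ~6 fewer minutes of research, while an extra hour
# of teaching or admin goes with ~24 fewer minutes.

CROWD_OUT = {"fundraising": 6 / 60, "teaching/admin": 24 / 60}  # research hours lost per extra hour

extra_hours = 10  # hypothetical: 10 extra hours per week of each activity
for task, rate in CROWD_OUT.items():
    lost = extra_hours * rate
    print(f"+{extra_hours} h/week of {task:<14} -> about {lost:.0f} h/week less research")
```

On that reading, ten extra hours of grant writing coincide with about one lost hour of research, while the same ten hours of teaching or administration coincide with about four.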
The data Myers uses to support his model are, unfortunately, not causal. They are mere associations between the amounts of time scientists spend on various tasks, as reported by the scientists themselves. But this is part of a larger problem in science: There is precious little empirical data about how science is actually done.
Federal agencies should fund more randomized controlled trials of how scientists spend their time and manage their work (the topic of a recent blog post by Tyler Cowen). Incremental answers in this area could have a major impact, but we honestly have almost no data to support one grant competition structure over another.
The Myers model has obvious limits, too. Not all of the time that scientists spend writing grants is useful. Reference letters, budgets, and spruced-up CVs surely do not advance research. There is also an upper limit to how much writing can improve an idea: After a certain point, more time spent on a grant yields diminishing returns, and the ideas stop getting better. Many grant competitions also fail to provide in-depth or nuanced feedback, and many failed applications are read by only a small number of people, so the ideas within them are unlikely to spread more broadly.
“The main takeaway of my model is that it’s very difficult to assume anything about how good or bad a grant competition is. When you see raw statistics about success rates and time spent on grant writing, the takeaway is that you shouldn't infer whether that means things are going well or not,” Myers says.
For now, the Myers model is influencing how we at New Science structure our fellowship applications. We want our funding contests to be useful for scientists by design. We believe grant-making contests should eliminate bureaucracy while encouraging deep thinking from applicants. Scientists probably don’t spend enough time thinking about projects from a 10,000-foot view, and competitive grant writing forces them to do that.
Unlike the NIH, we don’t care about font sizes or formats. Our grant writing process is also collaborative; applicants don’t just scream their ideas into a nebulous void without ever getting in-depth feedback. If we see promise in a proposal — regardless of its clarity or specifics — we will spend weeks working one-on-one with applicants to sharpen ideas.
And our process seems to be working. About 10 percent of applicants who were rejected from our summer fellowship later sent emails to thank us, and to say that the proposal and interviews were helpful for their science.
New Science is embracing competition and iterative feedback as fundamental tools to accelerate science. We want to identify and fund promising ideas. In-depth research proposals seem like a good way to do that, and damn the complaints.
Thanks to Alexey Guzey, Sasha Targ, Kyle Myers and Andy Halleran for reading drafts of this.
Cite this essay:
McCarty, N. "Rejected Grants Are Good for You." newscience.org. 2022 August. https://doi.org/10.56416/137juw
One of the downsides I've noticed while writing grants is that, although I'm spending time thinking about science, it's often within the narrow frame of what the particular grant is asking for and/or the goals of the funding agency, as opposed to the more general frame of "what would advance science" or "what would advance health technology" that I might adopt when, for example, writing a review paper or talking with colleagues. Perhaps the underlying issue is that compromises on what you really want to work on are terrible: https://www.lesswrong.com/posts/DdDt5NXkfuxAnAvGJ/changing-the-world-through-slack-and-hobbies#How_can_hobbies_compete_at_all_with_jobs__My_theory__compromises_are_terrible
So I think one way to optimize the usefulness of a competitive grant writing procedure is to make the application review process very open-ended as opposed to narrowly defined.
Sorry, I'm having a little trouble following the citations -- where is this quote mentioned?
> In a 2018 survey, Myers notes, scientists said that “at least one third of the effort spent on applications is scientifically useful.” And “for each additional hour scientists report spending on fundraising, they report spending 6 minutes less engaged directly on their research,” he adds.
The 2018 survey is from Schneider, not Myers, and doesn't have the mentioned quote (unless it's not verbatim?). Also, I couldn't find the second quote in Myers' preprint. Am I missing something?