March 28, 2023

Why Science Funders Should Try to Learn from Past Experience

Science funding agencies feel political pressure to fund only research that is easy to explain and defend to lay members of the public and to Members of Congress, the vast majority of whom are not trained as scientists. And the kind of research that is easiest to defend politically tends to be incremental: projects that follow a well-worn path and whose payoffs are certain and immediate.

Perhaps the most famous example of this political pressure came from Senator William Proxmire, who from 1975 to 1988 handed out 168 “Golden Fleece Awards” to government-funded research projects that he found ridiculous. Some of Proxmire’s examples were genuinely amusing, to be sure, such as an NSF grant that purportedly awarded “$103,000 to compare aggressiveness in sunfish that drink tequila as opposed to gin,” or a National Institute of Mental Health study (to quote the New York Times) on “what went on in a Peruvian brothel. The researchers said they made repeated visits in the interests of accuracy.” [The Golden Fleece Awards led to the excellent counter-example of the Golden Goose Awards.]

Unconventional, provocative, or even seemingly irrelevant ideas often spark the biggest scientific breakthroughs, for precisely the same reasons that cautious grantmakers overlook them: they break new ground, often in serendipitous and unpredictable ways, or in ways that the scientific establishment of the day actively opposes.

Congress and the public should embrace this fact about the source of scientific breakthroughs, and should give science funders the political leeway to take more risks and fund outside-the-box ideas, so as to truly capitalize on American ingenuity.


***

That said, we have to deal with a central problem: even allowing for the distortions of hindsight, there are probably too many cases where science funders passed over scientific work that should have been funded.

Consider the following examples:  

  • Douglas Prasher’s work on cloning the gene for green fluorescent protein (GFP) was an essential contribution to Nobel Prize-winning work. But as he told NPR in 2008, he couldn’t get NIH funding, and after a series of other jobs, he ended up driving a courtesy car for a car dealership in Alabama.
  • Katalin Karikó’s early work on mRNA was a key contribution to multiple Covid vaccines. But she was demoted at the University of Pennsylvania because she couldn’t get NIH funding.
  • Robert Langer was rejected on his first nine grant proposals, and even after he secured a position at MIT, the NIH rejected him many more times for grants related to biodegradable polymers.
  • A recent story in Science: “In 2017, three leading vaccine researchers submitted a grant application with an ambitious goal,” but NIAID reviewers turned it down, because “the significance for developing a pan-coronavirus vaccine may not be high.”
  • The same month that Ardem Patapoutian won the Nobel Prize in Physiology or Medicine, he tweeted, “I received another disappointing un-fundable score for my @NIH grant today.”
  • The team that discovered how to manufacture human insulin applied for an NIH grant for an early stage of their work. The rejection notice said that the project looked “extremely complex and time-consuming,” and “appears as an academic exercise.”
  • Carolyn Bertozzi (Nobel 2022) said that “the freedom to pursue an unlikely observation made the glycoRNA discovery possible. ‘That’s what HHMI provided,’ she said. ‘If I were a junior scientist who stumbled into this and put out an NIH grant, we’d get laughed out of the study section.’”
  • Carol Greider has a story about getting rejected by an NIH panel on the same day that she won the Nobel Prize for the work under consideration.
  • Mina Bissell has said: “If you have an original idea or you’re really making a huge jump, you should expect not to get funded. If you do, it means people already largely understand it. . . . NIH is becoming more adventurous now, but I couldn’t get NIH money for 15 years; thankfully I get funded now. NSF gave me my first grant, and without the DOE Office of Biological and Environmental Research, who decided early on I had something important to say, I would have had to give up some of my radical ideas.”
  • Both NIH and NSF refused to fund the work of Leon Cooper, a Nobel laureate in physics, on neural networks. His research, which resulted in a large number of publications, was eventually funded by the Office of Naval Research (ONR), which uses a different method of evaluating funding proposals.
  • When Craig Venter developed whole genome shotgun sequencing, “he applied for an NIH grant to use the method on Hemophilus influenzae, but started the project before the funding decision was returned. When the genome was nearly complete, NIH rejected his proposal saying the method would not work.”
  • Leroy Hood invented the “automated DNA sequencer that made the human-genome project possible.” But it was with private funding: “In the early 1980s when we conceptualized the instrument but were just getting ready to develop it, we put in a number of grants to the National Institutes of Health in Washington. They got some of the worst scores the NIH had ever given. People said what we wanted to do was impossible, or they said, ‘Why do this? Grad students can do it more easily.’”
  • Mario Capecchi, who won the Nobel for launching the field of gene targeting, has written that while his 1980 application to NIH got funded, it was only for other study aims, and that the NIH reviewers were “unequivocally negative” about the gene targeting work. He writes that “despite this clear message, I chose to put almost all of my effort into the third project. It was a big gamble. Had I failed to obtain strong supporting data within the designated time frame (4 years), our NIH funding would have come to an abrupt end.” Fortunately, he was successful, and his 1984 grant application received the comment: “We are glad that you didn’t follow our advice.”
  • When Patrick Brown proposed to create genetic microarrays in 1992, he “felt it was one of the best grant proposals I have written,” but “it got the worst priority score of any grant, not only of any grant I’ve ever written, but any grant I’ve ever SEEN,” because it was “too ambitious.”
  • When Ed Boyden at MIT came up with the idea for expansion microscopy, as he put it, “people thought it was nonsense,” and “nine out of my first ten grants . . . were rejected.” Through a “set of links that were, as far as I can tell, largely luck driven,” a private foundation (Open Philanthropy) made a nearly-$3 million grant to Boyden in 2016 to support his idea.
  • Dennis Slamon (a prominent breast cancer researcher) discovered a genetic link for a type of breast cancer, leading to a successful therapy (the drug Herceptin). But as the New York Times reported: “when Dr. Slamon wanted to start this research, his grant was turned down. He succeeded only after the grateful wife of a patient helped him get money from Revlon, the cosmetics company.”

Indeed, the problem might be getting worse. For example, Roger Kornberg told the Washington Post in 2007 that his 1970s research on DNA “would never have gotten the necessary funding” if he had come along in the 2000s: “In the present climate especially, the funding decisions are ultraconservative. If the work that you propose to do isn’t virtually certain of success, then it won’t be funded.” James Rothman (Nobel 2013) told an interviewer that he was glad to have started his work in the 1970s: “I had five years of failure, really, before I had the first initial sign of success. And I’d like to think that that kind of support existed today, but I think there’s less of it.”

Right now, a key problem is that science funders and program officers are penalized only for Type I errors—recommending a grant award that strikes someone else (whether the media or an agency leader) as irrelevant.

But nobody penalizes the opposite failure, the Type II error: declining to make a grant award that would have been highly impactful.

That asymmetry puts a thumb on the scale: don’t recommend or fund a grant if it is too outside-the-box.

The result is that the best scientists are often the ones most frustrated with the system.

Midjourney prompt: “a scientist who looks depressed and sad because his grant just got rejected; solar-punk”

Am I just repeating a handful of anecdotes that don’t represent the broader picture?

Could be. But I doubt it.

After all, there is no standard collection of these anecdotes. I’ve been collecting them one at a time, whenever I stumble across an example, usually in a written or oral interview. I’d bet there are many more examples that could be found if someone systematically interviewed 1,000 top scientists.

Which is why we need better evidence. NIH should fund a team of independent scholars (such as the Meta-Research Innovation Center at Stanford, or the National Academies) to do a systematic review of what Michael Nielsen calls an “anti-portfolio,” that is, a comprehensive list of notable scientific projects/people whose grant proposals were rejected.

Then, as much as feasible, construct a comparison group of grants that were considered around the same time. If possible, acquire the original proposal materials and the peer-review comments and ratings, both for the missed opportunities and for the comparison group of similar grants (whether funded or not).
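To make that step concrete, here is a minimal sketch in Python of how a matched comparison group might be assembled. The file names and columns (`all_proposals.csv`, `review_year`, `institute`, `mechanism`) are purely hypothetical placeholders for whatever records the agency could actually provide:

```python
import pandas as pd

proposals = pd.read_csv("all_proposals.csv")      # hypothetical: every proposal the agency reviewed
missed = pd.read_csv("missed_opportunities.csv")  # hypothetical: the "anti-portfolio" cases

def match_comparisons(row, pool, n=5):
    """Draw up to n proposals reviewed within a year of the missed
    opportunity, at the same institute, under the same funding mechanism."""
    candidates = pool[
        (pool["review_year"].sub(row["review_year"]).abs() <= 1)
        & (pool["institute"] == row["institute"])
        & (pool["mechanism"] == row["mechanism"])
        & (pool["grant_id"] != row["grant_id"])
    ]
    return candidates.sample(n=min(n, len(candidates)), random_state=0)

comparison = pd.concat(
    [match_comparisons(row, proposals) for _, row in missed.iterrows()],
    ignore_index=True,
)
```

Matching on review year, institute, and funding mechanism keeps the comparison apples-to-apples; a richer design could also match on the investigator’s career stage or field.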

Next, the research team should assess whether there are any patterns in the missed opportunities vs. the comparison group—i.e., can we explain, at a level beyond random noise, why opportunities were missed? Were there any key predictors available at the time that could have been recognized by funders? Or was it just the luck of the draw? Either finding would be useful.
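As a sketch of what that pattern analysis could look like, the following fits a simple logistic regression asking whether anything visible at review time separates the missed opportunities from ordinary rejections. Again, the dataset and column names (`priority_score`, `score_variance`, `novelty_flag`) are hypothetical stand-ins:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per rejected proposal, where
# missed_opportunity = 1 if the work later proved highly impactful,
# and 0 for the matched comparison rejections.
df = pd.read_csv("anti_portfolio.csv")

predictors = ["priority_score", "score_variance", "novelty_flag"]
X = sm.add_constant(df[predictors])  # numeric scores plus flags coded from written reviews
y = df["missed_opportunity"]

# Were there signals, available at review time, that distinguish the
# missed opportunities from ordinary rejections?
model = sm.Logit(y, X).fit()
print(model.summary())
```

Coefficients distinguishable from noise would point to recognizable warning signs reviewers could be trained on; a null result would point toward portfolio-level fixes instead.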

Finally, the research team should draw policy conclusions from where and when science funders missed out on funding the early stages of great scientific work.

For example, what experiments with peer review, solicitations, new types of grants or programs, etc., should a funding agency try out so as to increase the likelihood of funding groundbreaking work as part of a broad portfolio? How can a funding agency identify the programs, program managers, and peer reviewers that are performing well, and scale up what works?

NSF should be encouraged to participate in the study as well. Some argue that NSF is better able to fund “high risk” research—with its merit review that doesn’t look at the investigator’s pedigree, the flexibility given to program officers, the use of rotators from the field, and the organizational culture. It would be enormously valuable to have more empirical evidence on those questions.

***

In short, we need to tolerate greater risk-taking, so that our science agencies will more often fund Nobel-level lines of work rather than rejecting them as too ambitious. But to get to that point, we need to learn from experience. Otherwise, we are missing out on a huge opportunity to improve science funding.