High-Risk, High-Reward Research

Nearly everyone agrees that it can be too hard to get funding for high-risk, high-reward science. Particularly at NIH, the usual expectation is that a researcher comes in with preliminary data showing that their research is likely to succeed.

But scientific breakthroughs virtually never arrive on schedule, after a long line of predictable data.

As Nobel Laureate Roger Kornberg said, “In the present climate especially, the funding decisions are ultraconservative. If the work that you propose to do isn’t virtually certain of success, then it won’t be funded. And of course, the kind of work that we would most like to see take place, which is groundbreaking and innovative, lies at the other extreme.”[1]

There are a handful of programs that purportedly address this issue, but they are too small to make a difference, and as yet, there is no evidence that they fund different projects than would have been funded otherwise.

Science funding agencies like NIH and NSF need to do more to fund high-risk, high-reward science. For one thing, they could adopt peer review mechanisms that might be less oriented around the status quo:

  • Limited Lotteries.  Numerous scholars suggest using a limited lottery as a tie-breaker for highly qualified proposals that are basically impossible to tell apart.[2] The Swiss National Science Foundation and the Novo Nordisk Foundation (as of 2020, the largest private foundation in the world) are trying this out.[3] An agency could do the same, and better yet, could randomize which sets of proposals are subject to the lottery in the first place, so that outcomes under the two approaches (lottery vs. conventional review) could be compared.
  • Golden Tickets. An agency could give reviewers a “golden ticket” such that they can guarantee an application gets funded even if other reviewers disagree. There are at least two private foundations in Europe that are trying out this approach.[4]
  • Bimodal Scores. Highly novel ideas might have a few champions but some naysayers as well. When peer review scores are highly bimodal, this might be a key indicator of a high-risk but high-reward project. An agency could experiment with blinding study section members to everyone else’s comments, and then funding some projects that have both high and low ratings.
  • Program Officer Discretion. A federal agency could experiment with giving program officers more discretion to bypass peer review ratings, and fund projects that they think are highly valuable. This would be a test both of peer review and of the existing program officers’ judgment.
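To make the first mechanism concrete, the limited-lottery tiebreak can be sketched in a few lines of code. This is a minimal illustration, not any agency's actual procedure: the function name, the 1–9 scoring scale, and the definition of a "tie" (identical mean scores at the funding cutoff) are all assumptions made for the example.

```python
import random
import statistics

def lottery_tiebreak(proposals, budget, seed=None):
    """Fund clearly top-ranked proposals outright, then randomize among
    proposals tied at the funding cutoff.

    proposals: dict mapping proposal id -> list of reviewer scores
               (hypothetical 1-9 scale)
    budget:    number of proposals that can be funded
    """
    means = {p: statistics.mean(scores) for p, scores in proposals.items()}
    ranked = sorted(means, key=means.get, reverse=True)
    # Mean score of the last proposal that would fit in the budget.
    cutoff = means[ranked[budget - 1]]
    # Proposals strictly above the cutoff are funded on merit alone.
    clear_winners = [p for p in ranked if means[p] > cutoff]
    # Proposals exactly at the cutoff enter a lottery for the remaining slots.
    tied = [p for p in ranked if means[p] == cutoff]
    rng = random.Random(seed)
    return clear_winners + rng.sample(tied, budget - len(clear_winners))
```

With a budget of two and scores like `{"A": [9, 9], "B": [7, 7], "C": [7, 7], "D": [3, 3]}`, proposal A is funded outright and one of B or C is drawn at random. Randomizing which review panels use this procedure, as suggested above, would let an agency compare funded portfolios with and without the lottery.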

Most importantly, agencies need to incentivize failure. As venture capitalists know, the outsized returns they expect from a few investments cannot come from a portfolio in which every investment succeeds. In other words, many investments will fail, and demanding 100% success would mean investing solely in Treasury bonds.

The same is true of science. When politicians and agency officials expect near 100% success, they are ensuring that only modest and incremental science will get funded. A true high-risk, high-reward program should mandate that most of the investments turn out to be scientific failures.

[1] Quoted in Ferric C. Fang and Arturo Casadevall, “NIH Peer Review Reform—Change We Need, or Lipstick on a Pig?,” Infection and Immunity (Mar. 2009), available at

[2] For example, see Ferric C. Fang and Arturo Casadevall, “Research Funding: the Case for a Modified Lottery,” mBio 7, no. 2 (2016), available at; Elise S. Brezis, “Focal randomization: an optimal mechanism for the evaluation of R&D projects,” Science and Public Policy 34, no. 10 (2007), available at; Kevin Gross and Carl T. Bergstrom, “Contest models highlight inherent inefficiencies of scientific funding competitions,” PLoS Biology (2019), available at

[3] See;

[4] Thomas Sinkjaer, “Fund ideas, not pedigree, to find fresh insight,” Nature (Mar. 6, 2018), available at

Relevant Articles

  1. An Unnecessary Obstacle to Experimenting With Peer Review
  2. Reforming Peer Review at NIH
  3. Why Peer Review Should Be More Like Meteorology