December 12, 2022

INTERVIEW: Jason Shepherd

Jason Shepherd is an Associate Professor in the Department of Neurobiology and holds the Jon M. Huntsman Presidential Chair at the University of Utah. He obtained his BSc (Hons) at the University of Otago and his Ph.D. at the Johns Hopkins School of Medicine, and completed postdoctoral training at the Massachusetts Institute of Technology. He is the recipient of the Peter and Patricia Gruber International Research Award in Neuroscience, the International Society for Neurochemistry Young Investigator Award, the Chan Zuckerberg Initiative Ben Barres Early Career Acceleration Award, the Research to Prevent Blindness Stein Innovation Award, and the NIH Director’s Transformative Research Award, and he is a National Academy of Sciences Kavli Fellow.

What do you think of how peer review works at NIH?

For a regular R01, the system is selecting for conservative science. It’s not selecting for innovation. Part of that is because of constrained funding and tight paylines. So the system has evolved into a requirement that you have enough preliminary data for all your aims to convince your reviewers that it’s going to work. You can’t take risks, because reviewers will ding you wherever you don’t have enough feasibility data.

That is the main issue with the regular R01 system.

The other aspect is stochasticity. You have 3 reviewers who delve deep into the grant, while the rest of the study section doesn’t really read the full proposal. The scores are based on those 3 reviews, and then it’s the luck of the draw as to which reviewers you get.

Ultimately, we’re all human, and there is limited bandwidth for going through all these grants. The rest of the study section ends up being swayed by those initial reviews.
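To make that luck of the draw concrete, here is a minimal simulation sketch (every number below is a hypothetical assumption, not an NIH figure): a grant whose “true” merit sits exactly at the payline gets funded or rejected essentially at random once its fate rests on the average of 3 noisy reviews.

```python
# Hypothetical illustration of 3-reviewer stochasticity; all constants
# are assumptions for the sketch, not NIH data.
import random

random.seed(0)

PAYLINE = 30.0        # assumed funding cutoff (lower scores are better)
TRUE_MERIT = 30.0     # a grant sitting exactly at the cutoff
REVIEWER_NOISE = 8.0  # assumed spread in individual reviewer judgments
TRIALS = 10_000       # independent draws of a 3-reviewer panel

funded = 0
for _ in range(TRIALS):
    # Three assigned reviewers each score with independent noise.
    scores = [random.gauss(TRUE_MERIT, REVIEWER_NOISE) for _ in range(3)]
    if sum(scores) / 3 <= PAYLINE:  # panel score = mean of the 3 reviews
        funded += 1

print(f"funded in {funded / TRIALS:.0%} of panel draws")  # roughly a coin flip
```

With only 3 samples, the standard error of the panel mean stays large, so which reviewers you happen to draw can decide the outcome for any grant near the payline.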

The other thing is it’s so slow. You submit a grant and it gets reviewed 4 or 5 months later.

How would you solve the problem of having only 3 main reviewers?

There are two aspects to solving this problem: one is to mandate that everyone reads the full proposal; the other is to limit the number of grants reviewed at each study section. To do that, you’d have to have more study sections, each with a narrower focus.

What do you think about funding “higher risk” research?  

The NIH has played around with this a little bit. There are the so-called “high risk, high reward” grants, where reviewers are explicitly told that there is no preliminary data needed. It’s more about the idea and the track record.

In practice, however, I think it’s hard for reviewers to get out of that mentality of needing to see preliminary data.

My experience is that when I started my lab about 10 years ago, we made an initial observation that was kind of a crazy discovery. On one hand I was trying to juggle the traditional R01 with pursuing this unusual observation. Especially for early investigators, that first R01 is really critical, and it can be judged harshly because you don’t have enough of a track record to play the grantsmanship game. I ended up resubmitting the same R01 four times, and by the time it got funded, it was so boring that I didn’t even want to do the experiments.

But I kept pursuing the unusual idea, and I wasn’t able to get funding until I published a paper on the work.

I do think that this lack of creativity is an issue, especially for people starting a lab. You may have startup funding that isn’t restricted to a particular topic, but you still need to be conservative in your research so that you’re prepared for future R01 submissions.

Regular R01s are reviewed on the strength of the project. NIGMS has the MIRA award, which is more about the person than the project, so reviewers can’t be as nitpicky about whether the preliminary data is good enough. That seems like a useful approach.

How are we doing with funding basic science?

Basic science is hard to sell. But translational neuroscience has not led to breakthrough therapeutics. It’s often some surprising finding or just trial and error that leads to therapeutics. So my bias is that we often don’t know when discovery is going to happen. I’m not even sure NIH should fund major clinical trials itself. It’s arguably not NIH’s expertise to bring something to the clinic.

Is it possible that some of the “best and brightest” are driven out of science because they are drawn to important problems where it isn’t simple to churn out publication after publication?

That’s sort of what I was getting at. Early on in my career, I was willing to fail at the traditional R01 to pursue my more creative/unusual idea. Ultimately I’m still sitting here because it ended up working out, but I’m sure that for some folks, it doesn’t.

For a lot of people, the R01 and tenure become the goal, rather than the science.

Are progress reports useful?

In my experience, no. I’ve never ever had a program officer write to me and say, “I don’t like this new direction.” In fact, I’ve never had any feedback, positive or negative.

On Twitter, I saw a poll that asked about whether you would put figures in your progress report. Half the people said no, but I thought everyone did! Clearly there’s not a standard format. I remember panicking about this on my first R01, but it didn’t matter.

Once you have the money, you can be creative with it, if you want to take the chance. That is, once you get a grant, no one is really going to care whether you answered Aims 1 through 5. They’ll be impressed if you publish something, and no one checks whether it matches line item 10 from the original grant.

I could see progress reports being useful for different mechanisms, such as a U01 where you have actual milestones.

Is there anything else NIH should do differently?

Once people get on a study section, they’re doing their best to be diligent and do a good job. Something that is frustrating for folks is that only half of the grants actually get discussed, and hopefully when they are discussed, the reviewers give them a fair chance. When a grant isn’t discussed, it’s really only the 3 assigned reviewers who give feedback, which leaves a lot of opportunity for someone to be wrong. If the preliminary scores are discrepant, that’s exactly when a grant SHOULD be discussed, because that’s when you want to make sure mistakes aren’t being made.

The other thing: you see the criterion scores, based on categories like Significance, but you don’t see the overall score each reviewer gives. Oftentimes the overall score isn’t the average of those criterion scores; in practice, only two categories drive the overall score, Approach and Significance. NIH is trying to look at this to figure out how to improve it.
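As a rough sketch of that point (the scores and weights below are purely illustrative assumptions; actual review is holistic, not a formula), compare a plain average of the five criterion scores with an overall score dominated by Approach and Significance:

```python
# Hypothetical example: a grant that looks strong on average but has a
# weak Approach score. NIH criteria are scored 1 (best) to 9 (worst).
criterion_scores = {
    "Significance": 2,
    "Investigator": 1,
    "Innovation": 1,
    "Approach": 5,      # the weak spot
    "Environment": 1,
}

simple_average = sum(criterion_scores.values()) / len(criterion_scores)

# Assumed weights for illustration only: Approach and Significance dominate.
weights = {"Significance": 0.40, "Approach": 0.50, "Investigator": 0.05,
           "Innovation": 0.03, "Environment": 0.02}
approach_driven = sum(weights[k] * v for k, v in criterion_scores.items())

print(f"average of criterion scores: {simple_average:.1f}")   # 2.0
print(f"approach-driven overall:     {approach_driven:.1f}")  # 3.4, much worse
```

The gap between the two numbers is the point: an applicant reading only the criterion scores would expect a much better overall score than the one that actually drives the funding decision.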

Mario Capecchi always talks about the fact that he could never get his work on transgenic mice funded by NIH, because they didn’t think it was possible. That’s another example where someone didn’t give up, but used other funds to keep chugging away on that problem, and then solved it and launched a whole field.

We should find more ways to reward that kind of creativity.  

In the past few years, are there any papers you wish you could have published?

This is probably my favorite paper in the past five years: Moore et al., “The role of the Cer1 transposon in horizontal transfer of transgenerational memory,” Cell (Aug. 6, 2021).