Russ Poldrack is a professor of Psychology at Stanford University, Associate Director of Stanford Data Science, and director of the SDS Center for Open and Reproducible Science. I first came across Poldrack several years ago after someone described him to me as a “superstar in neuroscience”; not long thereafter, the Laura and John Arnold Foundation awarded him funding to launch the Stanford Center for Reproducible Neuroscience.
How would you describe your research to a layperson?
Our research aims to understand how people exert control over their own behavior. For example, imagine that you are driving a car and waiting for a stop light to turn green. Just after the light turns green, a pedestrian runs out in front of your car. How does your brain quickly change your behavior so that you stop the car before hitting the pedestrian? We use brain imaging to understand the brain circuits that enable this kind of self-control, and how they relate to other forms of control and decision making.
We also develop tools to help improve the reproducibility and transparency of neuroscience research. This includes infrastructure that helps researchers share their data in an effective way (http://openneuro.org, http://neurovault.org) and software that helps researchers analyze their data as robustly as possible (http://fmriprep.org).
Describe your experience with funders (private, NIH, others). What do funders do right? What could they improve? If you could restructure how federal funders (NIH/NSF/etc.) are organized or how they operate, what would you change?
I have been funded by federal agencies (including NIH and NSF) for many years, as well as having funding from a number of private foundations over my career. I think that in general the program officials at federal agencies care deeply about the science, and really want to fund the best research that will move our knowledge forward and help solve societal and health problems.
However, they are challenged from opposing directions. From the top, they must deal with stifling bureaucracy that often seems to have no justification other than to make the lives of both program staff and grantees miserable. On the other hand, their funding decisions must rely heavily on peer reviews from researchers in the field, and these can sometimes be unreliable or biased against new ideas.
The most painful part of the granting process in my experience is the period between when the grant has been reviewed and when final funding decisions are made. There is an almost complete lack of transparency about what happens during this period, and the final decisions can sometimes feel inscrutable. I think that increasing the transparency about how this final step in the funding process works would remove a significant amount of anxiety from researchers’ lives.
Surveys show that scientists say they spend upwards of 44% of their time on proposals, reports, IRBs, and budgets; that is, on administrative and regulatory requirements. Is that consistent with your experience? Is there anything that could be streamlined?
Yes, definitely. I appreciate the need for oversight of grant spending and protection of human and non-human research subjects, but the administrative requirements are often ridiculous. Perhaps the most glaring case is the almost yearly changes to the format of the biographical sketches that must be submitted for NIH and NSF grants. Each of these small changes whittles away at our time and diverts our mental energy from more interesting pursuits, with no clear rationale other than the need for bureaucrats to show that they have done something. There are also cases where the present regulations are completely out of step with reality, such as the prohibition against including URLs in NIH grant proposals.
Unfortunately I think that these annoyances may simply be a fact of life for public funding of science. Many of the regulatory and administrative systems seem to be driven by past abuses of the system, and while such abuses are rare, it’s hard to imagine that they will go away. When combined with political actors who try to make examples of these abuses, bureaucratic risk aversion seems likely to be with us for a long time, but anything that could be done to remove bureaucratic hurdles would greatly increase the efficiency of scientific research.
If you had no constraints in terms of funding or the need to publish, is there anything that would be different about your research?
Probably not that much. While I would love to avoid the bureaucracy around grants, I think that the writing of grant proposals is actually a really useful mechanism to help clarify our ideas and formalize our thinking for others to appreciate. The main way in which not having to worry about funding would change things in my lab is that it would help provide more stability for the research staff. The constant scramble for funding on a 3-5 year cycle means that long-term staff scientists can never be comfortable that their position will be stable in the long term.
With regard to publication, I should first say that the pressure that I feel is mostly on behalf of the trainees in my lab. Given that I have a tenured faculty position, the pressure that I feel on myself to publish is largely self-generated. If hiring and tenure committees were to care more about the quality than the quantity and profile of publications, then my trainees could worry less about getting papers published in high-profile outlets and focus more on simply doing science. That said, I think that clearly communicating our ideas with the field at large is essential to advance knowledge, so we would still continue to publish even if the incentives changed.
There is a lot of animus today against commercial publishers, and I agree with much of it. However, I generally feel that peer review of our papers pushes our work in helpful ways (despite the occasional annoyance by Reviewer #2), so I would be loath to give up altogether on journal publishing until we have a new system that can provide similarly useful feedback.
If you could change the organization or management of universities, what reforms would you recommend?
I think that the organization of universities according to colleges and departments, while understandable and useful in many ways, tends to interfere with the hiring of people who straddle academic disciplines, because no one department feels that such a candidate is “one of them”. Given that so much of the interesting scholarship today is happening at exactly those intersections, it’s essential to find a way to promote research that breaks down traditional academic boundaries.
I also think that the academy needs to focus more heavily on transparency and reproducibility in its assessment of scholarly merit and impact. Right now, there is a significant bonus for researchers who do “fast science” – that is, who publish lots of papers in high-profile outlets. This incentive has likely played an important role in the reproducibility crisis that has enveloped many areas of science over the last decade. I think that we need greater incentives for researchers to slow down and get the answers right. There is currently a group led by the National Academies called the Higher Education Leadership Initiative on Open Scholarship (HELIOS) that aims to help better align incentives for open and reproducible science across universities, and I’m looking forward to being a part of that effort.
Given the goal of improving the practice and funding of science, is there anything else I should have asked you?
One increasingly important issue in improving the practice of science is related to software development. Nearly every trainee in the biological sciences today has to write code in the process of their research, be it for experimental control or data analysis. However, very few researchers have any training in software engineering. We know that even expert developers make a non-trivial number of errors in their work, so the number of errors in amateur scientific code must be substantially higher. A particular problem is that we are much less likely to catch bugs when they impact our results in a way that confirms our hypotheses compared to when the bugs disconfirm our ideas – a problem we have called “bug-hacking”. There is a growing set of resources to help researchers write better code, but we need a greater appreciation of the potential impact of these errors in order to motivate their adoption.
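[Editor’s note: to make the “bug-hacking” asymmetry concrete, here is a minimal, purely hypothetical Python sketch. The function names and data are invented for illustration and are not taken from any actual lab code; the point is only that a one-character indexing slip can silently misalign an analysis, and whether anyone notices depends on which way the error pushes the result.]

```python
# Hypothetical illustration of a silent analysis bug.
# All names and data here are invented for this example.

def accuracy_buggy(labels, responses):
    # Bug: pairs each response with the *previous* trial's label
    # (labels[i - 1]). On runs of repeated labels the mismatch is
    # invisible, so the code can appear to "work" during testing.
    return sum(r == labels[i - 1] for i, r in enumerate(responses)) / len(responses)

def accuracy_correct(labels, responses):
    # Pairs each response with its own trial's label.
    return sum(r == l for l, r in zip(labels, responses)) / len(responses)

labels = ["go", "go", "stop", "go", "stop", "go"]
responses = list(labels)  # a participant who responds perfectly

print(accuracy_correct(labels, responses))  # 1.0
print(accuracy_buggy(labels, responses))    # deflated score from misalignment
```

In this toy case the bug lowers the score, so a researcher would probably notice and hunt it down; had the same slip happened to inflate a hypothesized effect, it could easily survive into a published result. That asymmetry in debugging effort is exactly what “bug-hacking” describes.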
Just for fun, what’s an article in the past few years that you wish you could have written?
As for actual articles: Nearly any paper written by Tal Yarkoni. For example, he published a wonderful paper in Behavioral and Brain Sciences in 2020 titled “The generalizability crisis” which outlined the ways in which our commonly used statistical models don’t match our goals, leading to statistical results that overstate our ability to generalize to new situations. It’s a tour de force of logical analysis and creative application of those ideas.
As for hypothetical articles: I wish that we could have demonstrated by now that formal ontologies of mental function (like our Cognitive Atlas) could be used in combination with brain imaging data to help us understand the large-scale organization of the mind. We have ongoing work that might help us make the case in the near future, but it’s been a really difficult challenge that I wish we could have cracked by now.