Research funding and incentives

As a regular applicant for and evaluator of research grants, I often find myself thinking about the incentives that funding rules create. For instance, the ARC places weight on project criteria such as ‘significance and innovation,’ ‘appropriateness of methodology,’ and ‘national benefit,’ as well as on the standing and qualifications of the applicant or research team. The weights placed on these criteria really shape how researchers think about their careers and the projects they choose, because they are always looking to where the next grant will come from. Place weight on the project, and researchers gravitate towards projects that are easy to specify and can be completed within the requisite time period. Place weight on the team, and teams work to ensure they have a strong publication record and reputation. But do these concerns translate into real incentive problems?

As reported in Slate today, one careful study has provided some important evidence on this. 

To assess the importance of incentives in stimulating innovation, MIT economists Pierre Azoulay and Gustavo Manso, together with UC-San Diego professor Joshua Graff Zivin, analyzed the research output of life scientists chosen as “Medical Investigators” during 1993-95 by the Howard Hughes Medical Institute, which provides long-term and flexible funding to award recipients. They measured HHMI investigators’ output against that of researchers who received Pew, Searle, Beckman, Packard, and Rita Allen Scholarships, also prestigious early-career awards whose winners are probably of a caliber comparable to HHMI recipients. However, because these programs provide less funding than HHMI, award winners rely for the most part on National Institutes of Health support to pay for their research.

The HHMI and NIH funding incentives are a study in contrasts. HHMI gives five years’ worth of research funding, renewable at least once as long as reviewers see effort, not necessarily results. (After a decade, however, researchers do need to produce something for further renewal.) NIH funding typically expires after a few years. HHMI picks “people not projects” while the NIH does the opposite, tethering funding to particular experiments or analyses. Finally, the NIH often demands preliminary results before funding a project, more or less ensuring success but also encouraging researchers to take baby steps in their work rather than leaps into the unknown.

If HHMI winners produce more breakthroughs, it could be because they face better incentives for doing so. Then again, it’s also possible that the HHMI selection committee is simply better at picking innovators than the committees at other award programs. To account for this, the authors use a statistical matching technique to pair HHMI investigators with other award winners who had virtually identical research publication track records prior to receiving their awards. Because the two groups looked so similar before their awards, it is more likely that which award a researcher received was a matter of chance rather than a reflection of differences in the quality of the award-selection processes.
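
To make the matching idea concrete, here is a minimal sketch in Python. It is not the authors’ code: the covariates (pre-award paper and citation counts) and the greedy nearest-neighbour pairing are illustrative assumptions on my part, but they capture the logic of pairing each HHMI investigator with the most similar control-award winner before comparing post-award output.

```python
# Illustrative sketch of the matching idea (not the study's actual code):
# pair each HHMI investigator with the control-award winner whose
# pre-award publication record is closest, so that post-award differences
# read as an incentive effect rather than a selection effect.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-award records: [papers published, total citations]
hhmi = rng.normal(loc=[25, 900], scale=[5, 150], size=(40, 2))
controls = rng.normal(loc=[25, 900], scale=[5, 150], size=(200, 2))

# Standardise each covariate so paper counts and citations are comparable.
mu, sigma = controls.mean(axis=0), controls.std(axis=0)
z_hhmi = (hhmi - mu) / sigma
z_ctrl = (controls - mu) / sigma

# Greedy nearest-neighbour matching without replacement.
available = set(range(len(controls)))
pairs = []
for i, row in enumerate(z_hhmi):
    dists = {j: np.linalg.norm(row - z_ctrl[j]) for j in available}
    j_best = min(dists, key=dists.get)
    pairs.append((i, j_best))
    available.remove(j_best)

print(f"Matched {len(pairs)} HHMI investigators to controls")
```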

Despite comparable pre-award performance, the two groups diverge in the years that follow. HHMI winners are almost twice as likely to produce studies that are highly cited by other researchers. They are also more likely to produce research that introduces new words and phrases into their fields, as measured by the “keywords” they attach to their studies to describe their work. The downside is that they also produce more stinkers: studies that never get cited by anyone. But that is part of the exploration process. There is also some tentative evidence that HHMI scholars experiment more than their NIH-funded counterparts: their research is cited by scholars across a wider range of fields, and their keywords change more often from study to study, both signs of broader experimentation.

The study is here (for those who can access it). It may surprise you to learn that even ARC fellowship schemes (such as the Laureate and Future Fellowships) place as much weight on the project as on the team. This study suggests that, by doing so, we are short-changing the rate of return on public research funding.

One thought on “Research funding and incentives”

  1. As I understand it, the HHMI grants are much more loosely structured: essentially, they allow recipients to do more or less as they please. So if, for example, they notice something in passing, they can pursue it rather than ignoring the phenomenon.
    The problem, essentially, is that the amount of paperwork involved in a traditional grants process is a strong disincentive to anyone wishing to pursue a hunch, whereas under the HHMI model pursuing hunches comes naturally. Hence the variability.
