The Importance of Peer Review in Journal, Department, and Individual Research Rankings

Preamble

I recall that sometime in the mid-2000s, when the Research Quality Framework (the predecessor of the current ERA) was being discussed, Arden Bement, the director of the National Science Foundation, was asked what he thought. He responded as one would expect of a serious researcher: the only method he knew for judging academic outcomes was peer review.

In fact, peer review is the gold standard in science. We simply don’t trust any finding, method, conclusion, analysis, or study that is not reported in a peer-reviewed outlet. Yet there has been rapid growth in the use of indices that have not themselves been tested through peer review, and which are being used to measure journal rankings, individual academic performance, and even the standing of departments.

Here, I argue that we should return to published methods that have been tested through peer review. The unchecked use of non-peer-reviewed methods runs the risk of misallocating resources, for example if university promotion and appointment committees and bodies like the ARC rely on them. Even more troubling is that non-peer-reviewed methods are susceptible to manipulation; combined with the problem of inappropriate rewards, this has the potential to undermine what the profession, through peer review, regards as the most valuable academic contributions.

Economics journal rankings

In the last two decades or so, there has been an explosion in the use of online indices to measure research performance in economics in particular (and academia generally). Thomson Reuters’ Social Science Citation Index (SSCI), Research Papers in Economics (RePEc), and Google Scholar (GS) are the most commonly used by economists.

These tools display the set of publications in which a scholar’s given article is cited. While SSCI and GS take their set from the web as a whole, RePEc, hosted by the St Louis Fed, is different in that it refers only to its own RePEc digital database, which is formed by user subscription. Further, RePEc calculates rankings of scholars, but only of those who subscribe. Referring to its ranking methods, the RePEc web page states:

This page provides links to various rankings of research in Economics and related fields. This analysis is based on data gathered with the RePEc project, in which publishers self-index their publications and authors create online profiles from the works indexed in RePEc.

While it has been embraced by some academic economists in Australia as a tool for research performance measurement, it is important to note that the RePEc ranking methodology is not peer-reviewed. This departure from academics’ usual strong commitment to the process of peer review is puzzling, given that there is a long history of peer review in economics in the study of, you guessed it, journal ranking.

A (very) quick-and-dirty modern history

Coates (1971) used cites in important survey volumes to provide a ranking; Billings and Viksnins (1972) used cites from an arbitrarily chosen ‘top three’ journals; Skeels and Taylor (1972) counted the number of articles in graduate reading lists; and Hawkins, Ritter and Walter (1973) surveyed academic economists. (Source: Liebowitz and Palmer, JEL 1984, p. 78.)

The modern literature is based on a paper by Liebowitz and Palmer in the Journal of Economic Literature, 1984. In their own words, their contribution had three key features:

…(1) we standardize journals to compensate for size and age differentials; (2) we include a much larger number of journals; (3) we use an iterative process to “impact adjust” the number of citations received by individual journals

Roughly speaking, the method in (3) is to: (a) write down a list of journals in which economics is published; (b) count the total number of citations to articles in each journal; (c) rank the journals by this count; (d) re-weight each citation by the count (and, in later rounds, the score) of the journal it comes from; and (e) iterate until the scores settle down. The end result gives you a journal ranking based upon impact-adjusted citations.
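
To make the iteration concrete, here is a minimal sketch in Python of an impact-adjustment loop of this general kind. The journal names, citation matrix, and convergence tolerance are illustrative assumptions of mine, not data or code from Liebowitz and Palmer.

    import numpy as np

    # Illustrative data only: cites[i, j] is the number of citations that
    # journal j receives from articles published in journal i.
    journals = ["Journal A", "Journal B", "Journal C"]
    cites = np.array([
        [10.0, 40.0,  5.0],
        [30.0, 12.0,  8.0],
        [ 2.0,  6.0,  4.0],
    ])

    # Start with equal weights; a raw citation count treats every citing
    # journal the same.
    weights = np.ones(len(journals)) / len(journals)

    # Iterate: a journal's score is the weighted sum of the citations it
    # receives, where each citation is weighted by the citing journal's
    # current score. The scores then become next round's weights.
    for _ in range(100):
        scores = cites.T @ weights
        scores /= scores.sum()
        if np.allclose(scores, weights, atol=1e-12):
            break
        weights = scores

    for name, score in sorted(zip(journals, scores), key=lambda x: -x[1]):
        print(f"{name}: {score:.3f}")

The fixed point of such a loop is an eigenvector-style score in which citations from highly ranked journals count for more, which is the sense in which the counts are “impact adjusted”.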

The current best method is Kalaitzidakis et al. (Journal of the European Economic Association, 2003), hereafter KMS. This study was commissioned by the European Economic Association to gauge the impact of the academic research output of European economics departments.

KMS is based on data from the 1990s and, as far as I am aware, has not been updated. No ranking can replace the wisdom of an educated committee examining a CV. However, KMS at least comes from a peer-review process. Unlike simple count methods, it presents impact-, age-, page- and self-citation-adjusted rankings, among others.
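
To illustrate what adjustments of that sort involve, here is a hedged sketch; the field names, sample figures, and the particular normalisations (stripping journal self-citations, then dividing by age and pages) are my own simplifications of the general idea, not the published KMS formulas.

    from dataclasses import dataclass

    # Illustrative records; every figure below is invented.
    @dataclass
    class Journal:
        name: str
        citations: float       # total citations received over the sample window
        self_citations: float  # citations a journal gives to itself
        age_years: float       # years since the journal began publishing
        pages: float           # pages published over the sample window

    def adjusted_score(j: Journal) -> float:
        # In the spirit of age-, page- and self-citation adjustment:
        # drop self-citations, then normalise by age and by output volume.
        external = j.citations - j.self_citations
        return external / (j.age_years * j.pages)

    journals = [
        Journal("Journal A", citations=5000, self_citations=900, age_years=60, pages=30000),
        Journal("Journal B", citations=1500, self_citations=100, age_years=12, pages=5000),
    ]

    for j in sorted(journals, key=adjusted_score, reverse=True):
        print(f"{j.name}: {adjusted_score(j):.6f}")

An impact adjustment of the iterative kind sketched earlier would then be layered on top of such normalised counts.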

But even KMS-type methods can be misused: one should be ready to use the “laugh test” to evaluate any given ranking. KMS deliberately uses a set of economics journals, roughly defined as the journals economists publish in and read. It passes the laugh test because, roughly speaking, the usual “top five” that economists have in their heads (AER, Econometrica, JPE, QJE and ReStud) do indeed appear near the top of the ranking, and other prestigious journals are not far behind.

The Economics Department at Tilburg University has included statistics journals in its “Tilburg University Economics Ranking”. The result? “Advances in Applied Probability” beats out the American Economic Review as the top journal. Their list can be found at https://econtop.uvt.nl/journals.php, but you need look no further than their top five to see that it does not pass the laugh test:

  1. Advances in Applied Probability
  2. American Economic Review
  3. Annals of Probability
  4. Annals of Statistics
  5. Bernoulli

Would I be remiss in suggesting that a statistics-oriented econometrician might have had input into this ranking? Yes, I would. Oops!

Finally, let us turn to the new RePEc impact-adjusted ranking. A laugh-test failure here, among others, is the inclusion of regional Fed journals: the Quarterly Review of the Federal Reserve Bank of Minneapolis is ranked 14, just above the AER; the Proceedings of the Federal Reserve Bank of San Francisco is ranked 16, ahead of the Journal of Econometrics at 19; and the Proceedings of the Federal Reserve Bank of Cleveland is 24, ahead of the European Economic Review at 29.

The RePEc top 5 is:

  1. Quarterly Journal of Economics
  2. Journal of Economic Literature
  3. Journal of Economic Growth
  4. Econometrica
  5. Economic Policy

It would be interesting to investigate whether macroeconomists and policy scholars had influence here.

My conclusions

If we are going to use ranking methods, we should be very careful. Use methods that have emerged from decades of rigorous peer review, like the European Economic Association’s 2003 study by KMS. And stick to their method rigorously, lest we all have to retrain in statistics.