The Importance of Peer-Review in Journal, Department, and Individual Research Rankings

Preamble

I recall that sometime in the mid-2000s, when the Research Quality Framework (which preceded the current ERA) was being discussed, Arden Bement, then director of the National Science Foundation, was asked what he thought. He responded as one would expect of a serious researcher: the only method he knew for judging academic outcomes was peer review.

In fact, peer review is the gold standard in science. We simply do not trust any finding, method, conclusion, analysis, or study that is not reported in a peer-reviewed outlet. Yet there has been rapid growth in the use of indices whose methods have not been tested through peer review, and which are being used to rank journals, measure individual academic performance, and even gauge the standing of departments.

Here, I argue that we should return to published methods that have been tested through peer review. The unchecked use of non-peer-reviewed methods runs the risk of misallocating resources, for example if university promotion and appointment committees and bodies like the ARC rely on them. Even more troubling, non-peer-reviewed methods are susceptible to manipulation; combined with the problem of inappropriate rewards, this has the potential to undermine what the profession, through peer review, regards as the most valuable academic contributions.

Economics journal rankings

In the last two decades or so, there has been an explosion in the use of online indices to measure research performance in economics in particular (and academia generally). The Thomson Reuters Social Science Citation Index (SSCI), Research Papers in Economics (RePEc) and Google Scholar (GS) are the ones most commonly used by economists.

These tools display the set of publications in which a scholar’s given article is cited. While SSCI and GS draw their set from the web as a whole, RePEc (hosted by the St Louis Fed) is different in that it refers only to its own digital database, which is built up by user subscription. Further, RePEc calculates rankings of scholars, but only of those who subscribe. Referring to its ranking methods, the RePEc web page states:

This page provides links to various rankings of research in Economics and related fields. This analysis is based on data gathered with the RePEc project, in which publishers self-index their publications and authors create online profiles from the works indexed in RePEc.

While it has been embraced by some academic economists in Australia as a tool for research performance measurement, it is important to note that the RePEc ranking methodology is not peer-reviewed. This departure from academics’ usual strong commitment to the process of peer review is puzzling, given that there is a long history of peer review in economics in the study of, you guessed it, journal ranking.

A (very) quick-and-dirty modern history

Coates (1971) used citations in important survey volumes to provide a ranking; Billings and Viksnins (1972) used citations from an arbitrarily chosen ‘top three’ journals; Skeels and Taylor (1972) counted the number of articles in graduate reading lists; and Hawkins, Ritter and Walter (1973) surveyed academic economists. (Source: Liebowitz and Palmer, JEL 1984, p. 78.)

The modern literature is based on a paper by Liebowitz and Palmer in the Journal of Economic Literature, 1984 (hereafter LP). In their own words, their contribution had three key features:

…(1) we standardize journals to compensate for size and age differentials; (2) we include a much larger number of journals; (3) we use an iterative process to “impact adjust” the number of citations received by individual journals

Roughly speaking, the method in (3) is to: (a) write down a list of journals in which economics is published; (b) count the total number of citations to articles in each journal; (c) rank the journals by this count; (d) reweight each citation by the score of the journal it comes from; and, finally, (e) iterate until the weights settle down. The end result gives you a journal ranking based upon impact-adjusted citations.
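
To make the iteration concrete, here is a minimal sketch of the idea in Python. It is illustrative only and is not LP’s or KMS’s actual specification (which also adjusts for journal size, age and pages); the journal names and citation counts below are invented.

```python
import numpy as np

# Invented citation matrix, for illustration only:
# C[i, j] = citations from articles in journal i to articles in journal j.
journals = ["Journal A", "Journal B", "Journal C"]
C = np.array([
    [ 0., 30., 10.],
    [20.,  0.,  5.],
    [40., 15.,  0.],
])

# Normalise each row so that every citing journal hands out one unit of "influence".
P = C / C.sum(axis=1, keepdims=True)

# Steps (b)-(c): start from the raw citation counts received by each journal.
w = C.sum(axis=0)
w = w / w.sum()

# Steps (d)-(e): reweight citations by the current score of the citing journal
# and iterate until the scores stop changing (a fixed point).
for _ in range(1000):
    w_next = P.T @ w
    w_next = w_next / w_next.sum()
    if np.allclose(w_next, w, atol=1e-12):
        break
    w = w_next

# Impact-adjusted ranking: highest weight first.
for name, score in sorted(zip(journals, w), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The fixed point of this iteration is an eigenvector of the normalised citation matrix, which is why, as one of the comments below notes, this family of methods resembles the algorithms used in web search.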

The current best method is Kalaitzidakis et al., Journal of the European Economic Association, 2003, hereafter KMS. The study was commissioned by the European Economic Association to gauge the impact of the academic research output of European economics departments.

KMS is based on data from the 1990s and, as far as I am aware, has not been updated. No ranking can replace the wisdom of an educated committee examining a CV. However, KMS at least comes from a peer-review process. Unlike simple count methods, it presents impact-, age-, page- and self-citation-adjusted rankings, among others.

But even KMS-type methods can be misused: one should be ready to apply the “laugh test” to any given ranking. KMS deliberately uses a set of economics journals, roughly defined as the journals economists publish in and read. It passes the laugh test because, roughly speaking, the usual “top five” that economists have in their heads (AER, Econometrica, JPE, QJE and ReStud) do indeed appear near the top of the ranking, and other prestigious journals are not far behind.

The Economics Department at Tilburg University has included statistics journals in its “Tilburg University Economics Ranking”. The result? “Advances in Applied Probability” beats out the American Economic Review as the top journal. Their list can be found at https://econtop.uvt.nl/journals.php, but you need look no further than their top five to see that it does not pass the laugh test:

  1. Advances in Applied Probability
  2. American Economic Review
  3. Annals of Probability
  4. Annals of Statistics
  5. Bernoulli

Would I be remiss in suggesting that a statistics-oriented econometrician might have had an input into this ranking? Yes, I would. Oops!

Finally, let us turn to the new RePEc impact-adjusted ranking. A laugh-test failure here, among others, is the inclusion of regional Fed journals: the Quarterly Review of the Federal Reserve Bank of Minneapolis is ranked 14th, just above the AER; the Proceedings of the Federal Reserve Bank of San Francisco is ranked 16th, ahead of the Journal of Econometrics at 19th; and the Proceedings of the Federal Reserve Bank of Cleveland is ranked 24th, ahead of the European Economic Review at 29th.

The RePEc top 5 is:

  1. Quarterly Journal of Economics
  2. Journal of Economic Literature
  3. Journal of Economic Growth
  4. Econometrica
  5. Economic Policy

It would be interesting to investigate whether macroeconomists and policy scholars had influence here.

My conclusions

If we are going to use ranking methods, we should be very careful. Use methods that have emerged from decades of rigorous peer review, like the KMS study commissioned by the European Economic Association in 2003. And stick to their method rigorously, lest we all have to retrain in statistics.

15 thoughts on “The Importance of Peer-Review in Journal, Department, and Individual Research Rankings”

  1. I am not a big fan of the iterative procedure underlying the LP and KMS method because it essentially only counts citations from journals on the same list, so you leave out citations from other disciplines. Hence, applying the laugh test, all the top interdisciplinary journals on health, law, urban studies, history, etc., get completely marginalised in the KMS ratings. Any new journals are also left out. It’s an insider method, proposed and peer-review-accepted by… insiders.
    Yet I agree with your basic peer-review argument. Things like the ESA rankings or, in the Netherlands, the Tinbergen Institute rankings get you much better lists than KMS: you ask all the senior economists to supply a preference order and you aggregate it for the whole country. It’s not just peer-reviewed, but even democratic. It picks up new high-level journals not yet in the citation systems and gives a discipline valuation of interdisciplinary work.
    It’s the secret committee adjustments that transform the ESA list into an ERA list that you really have to watch out for…

  2. @Paul
    It is true that the list is of economics journals, which does leave out journals in other disciplines. The problem with adding, say, all health journals as you propose is that we end up with a ranking like the Tilburg one, which is monumentally biased towards statistics.
    I would love it if all law journals were included, because one of my fields is Law and Economics. But this suffers from the same problem.
    Note that journals rise and sink over time. The Journal of Law and Economics was ranked 18th in the 1984 JEL study; now it has dropped to beyond 40th. The reason is that the field is cited less in the top econ journals, as the profession’s focus has shifted away from it. A similar thing happened with IO journals. I bet that the Journal of Health Economics will rise substantially in any new application of KMS. But when interest there wanes, it too will fall, as it should.
    Finally, you can always publish health economics in the leading general journals… but it is a lot harder to do because of the intense competition at the top.

  3. Rohan,
    the KMS method can easily be applied with a far larger set of science journals, reporting only the relative rankings of the economics ones. It’s the judgment call not to do this that effectively means being cited by top journals in other fields counts for nothing on this list, which is basically self-serving and wrong.
    Peer-review aggregation of preferences, as with the ESA, is far better because it leaves these judgment calls to open peer review rather than hiding them in the bowels of an appendix. If Law and Economics is deemed less important, fewer economists will rank those journals highly. In fact, it potentially responds much more quickly to changes in opinion within the profession. And the shared opinion within the profession about the top journals is pretty consistent, so it doesn’t matter for the top 5 or so, only lower down. Besides, the KMS list throws up some unlikely top journals.

  4. The ESA process was not a peer-review process.
    It was an opaque aggregation of a survey of economists in Australia. As far as I know, no one submitted the ESA method for ranking journals to a journal for peer review.

    In economics there is a long and serious literature on methods for measuring the importance of journals (note that KMS gives an index of significance rather than simply a ranking). This literature is about the method employed in the ranking rather than about any specific ranking. It predates us all.

    A paper in this area is accepted or rejected essentially based on the proposed method and its theoretical properties.

    Peer review is all about the method and bringing what we know from social choice theory, mechanism design, and index theory into the conversation.

    Presently this literature has turned to axiomatic foundations of journal rankings. There was a paper in Econometrica looking at such axiomatic foundations, and there are a number of papers floating around covering this.

    At the centre of all of this literature is the method used in LP and KMS; it forms the backdrop for the whole literature. It seems to have important axiomatic foundations, though it breaks some axioms that seem reasonable. Nevertheless, it is the present state of the art in the economics literature.

    I know for a fact that some of the top social choice theorists were involved in refereeing the more important papers in this literature. These referees, I am sure, were interested in the axiomatic foundations presented in the papers and whether they make sense for us as economists.

    Typically, the implementation of the method and bringing it to data is incidental in the minds of editors and referees.

  5. @Paul What you are suggesting is not actually peer review. Remember that what we want in this exercise is the opinion of our peers (fellow economists who are experts) as to the value of our work. The journal ranking is a proxy for how important our fellow economists think each journal is: this is why the journal list is limited to economics journals.

    To take an extreme example, suppose that people in interpretive dance love how you do economics and cite you a lot. Should we hire you, give you grants, promote you on this basis? Perhaps we would be concerned that they aren’t sufficiently specialized to judge.

    The problem with adding a plethora of non-economics journals is that it ruins the proxy for peer opinion of the journals that we want to elicit in the ranking exercise.

    Remember that leading up to KMS there were forty years of peer-reviewed ranking studies. Scholars in this literature have thought very carefully about which journals to include in the exercise.

  6. Ah, we have stopped talking about what is actually in these lists and gone into semantics again! A survey of peers reviewing a whole list of journals and coming up with their considered opinion is not peer review, but a commissioned paper, appointed via a small committee of the European Association and subsequently put into that association’s journal virtually by invitation (i.e. the KMS paper), is gold-standard peer review? And wouldn’t it be even better if we went for some unimplementable axioms? La-la land.

  7. It seems that we all agree that peer review of journal articles is the way to go, including articles aimed at ranking journals. Based on this consensus, it is logical that, should someone object to the current literature on rankings, they write a paper proposing a better method, submit it to a journal and subject it to peer review.

  8. Who are our peers?
    Are peers within fields, within subjects, or interdisciplinary?
    As a micro theorist, is only Rohan my peer? Or is Paul also my peer, since he is an economist? Or is a public health researcher who knows the health econ literature, but no other econ, also my peer?

  9. Paul,
    I think you’re being unfair.

    The literature on journal ranking and impact in economics is interesting. It draws from microeconomics and index theory.

    There are two approaches.

    The iterative approach, which is a fixed-point approach, has been widely adopted in search algorithms outside of economics. Its computational properties are well studied. Its economic properties are not so well understood, but some people have shown various interesting things about it and others, in recent papers, have established some negative results. All of this is part of a rich literature that draws on ideas and techniques from mainstream economics: the cathedral at the centre of the marketplace for ideas, as it were.
    There is another approach. It starts from the properties that we want a journal index to have and develops an index based on those properties. This is an axiomatic approach, similar to what is done in social choice theory. Some have successfully done this, and of course the authors develop an index that can easily be put on a computer. The main criticism of this literature is that it does not change the ranking of journals much compared to the KMS ranking.
    Recently I saw a discussion of a new approach based on the Geary-Khamis index for PPP. The idea is how to compute the value of a paper in journal A in terms of a paper in journal B. I don’t know what’s happening with that approach.

    Paul, the point is that we, unlike any other area, have a rich literature on journal ranking. For me that means anyone who is interested in this should attempt to read that literature, and if, like me, they have a problem understanding the ideas behind some of the proposed methods, then perhaps they should visit a library or two and brush up on their economics. In fact, I found myself learning something about social choice theory while thinking about the economics literature on journal ranking.

  10. SC,
    It depends on the intended audience. The notion of peer really depends on who you are trying to talk to.

    If you submit a paper to a general journal or to a specialized journal your peers are different.

    But as in all social classifications and relations, what we see is a result of endogenous interactions. Even the terms that arise in language arise endogenously from such interaction. So “peer” is an equilibrium/stability concept.

    In terms of a ranking of economics journals, or an index of economics journals? Well, peers are kinda obvious… no?*

    * Anyone who can teach two of the following: micro, macro, econometrics, at third-year-in-Australia level </semi snark>

  11. @Paul

    PS: the survey method you suggest we adopt (like ESA) was proposed by Hawkins et al. 1973. Forty years ago. The published peer-reviewed lit has progressed…

  12. Rohan, it is not laughable that the Annals of Statistics would be ranked similarly to the AER. What is laughable is to rank it as an economics journal.

    It seems very sensible to include citations from a wide range of fields in assessing the relative impact of economics journals. In fact, I would argue that impact outside any inbred set of journals is a good indicator of quality.

    You just need to drop the non-economics journals from the list at the last step.
