The ERA: Getting less than what you pay for

OK so here is the problem: research quality is a nebulous concept, and it takes many years to work out whether someone's output is actually of high quality. This is especially hard for non-insiders. The Government is a non-insider that funds academic research in Australia, and so it would like to know what it is getting for that money.

So here is a typical solution: use the quality of past research publications as a proxy for likely future quality, and measure performance on that basis. Academics will recognise that solution, as it is precisely what they employ internally. Sometimes it is transparent, sometimes it is subjective. But it is always done.

So what does the Australian Government do in response to all of this? It tries to mimic the academic solution to the performance measurement problem but scale it up. Alas, the only way to scale it up and remain transparent is to invest in and use an objective measure. That is where the Excellence in Research for Australia (or ERA) journal/conference ranking came in. It was not original: the British government had already embarked on that path. It was also not new: Australia had been down such paths before.

What is the problem with such objective measures of quality? First, there is a major cost, a huge cost, in coming up with the measure. The reason is that off-the-shelf measures rarely exist. Actually, that isn't quite true: every discipline had an off-the-shelf measure that captured 90% of what any ranking would eventually contain. But there are always problems. The big one is ranking Australian-based research. Another is inter-disciplinary research. Then there are newer journals and fields. All of these problems bite internally in academia too. And so what happens? A big fight ensues. That is what happened with the ERA: a huge fight, lots of rent-seeking, and mostly a waste of time, because the choices affected a few people in a big way and everyone else not at all.

Second, once you have an objective measure, people will try to improve their own ranking. Remember, that is the point. The problem with research is that the best way to improve your own ranking is not to produce better research (that takes time) but to buy it in (getting good researchers into your institution; see here for an example) or to redirect publications (to journals that rank highly but are actually not of great quality). The first sort of gaming is great for high-output academics, whose salaries went up. The second doesn't really have any consequence. Actually, both reactions are distributional but involve 'costs of gaming.' With these schemes you get what you pay for (in this case, performance directed at improving ERA ranking scores) but at a cost (no real increase in research quality).
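
To make that logic concrete, here is a toy sketch (my own illustration; the budget and weights are made-up numbers, not anything from the ERA): a department that can spend a fixed budget on slowly producing research or on buying/redirecting it will, if rewarded on the metric, put everything into the latter.

```python
# Toy model: a department splits a fixed budget between producing
# research and gaming the ranking (poaching stars, redirecting papers).
BUDGET = 100.0

def ranking_score(produce: float, game: float) -> float:
    # The objective measure rewards gaming faster than slow organic
    # improvement (hypothetical weights, for illustration only).
    return 1.0 * produce + 3.0 * game

def sector_quality(produce: float, game: float) -> float:
    # Real sector-wide quality: poached stars just move between
    # institutions, so only genuine production counts.
    return 1.0 * produce

# Compare spending everything on research vs everything on gaming.
for produce in (100.0, 0.0):
    game = BUDGET - produce
    print(f"produce={produce:5.1f}  game={game:5.1f}  "
          f"score={ranking_score(produce, game):6.1f}  "
          f"quality={sector_quality(produce, game):6.1f}")
# A score-maximising department spends everything on gaming:
# the measured score triples while real quality falls to zero.
```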

Yesterday, Minister Carr announced that the ERA was effectively dead. The journal rankings will be replaced by a new ranking based on 'frequency of publication.' This is so useless a measure that I read it as giving up. Of course, I could be wrong and the Government may reward Universities based on it, in which case it is one of the most insane measures ever put forward.

Anyhow, here is the cited reason for the change:

There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings.

One common example was the setting of targets for publication in A and A* journals by institutional research managers.

To which I say: what did you expect? First, every academic told the Minister this would happen; it is just basic economics. Second, it is exactly what happened in Britain, so we had evidence to go with the theory.

Now the Government is giving the impression that it is still going to do something. It has opted for a lack of transparency, but it seems to me it is packing up and going home. To be clear, these short-term issues aside, that means no measurement of research performance and hence no reward for it. This would be fine if we had real competition for funds from students in our Universities, but we don't. It is unclear how pretending issues don't exist will improve matters.

But we should be angrier about this. Many academics' reaction on hearing of the demise of the ERA has been 'good riddance.' Why? Because they bore the costs of fighting over the measure and then of the gaming. But those costs have already been borne. I personally bore a ton of them, and so did many others. A complete waste of time.

And for what? Nothing. Just to prove to the Government what we all could have predicted four years ago! It is an outrage. The Minister had the information. Carr is the same person who pushed the decision in the first place. Has he admitted he was ‘wrong’? No. He pretends that it is news to him that academics were strategic. Well, let me tell you, someone who doesn’t know that academics are strategic is not equipped to manage the University sector. This is an embarrassment to the ‘Education Government.’

9 thoughts on “The ERA: Getting less than what you pay for”

  1. My understanding is that they will simply leave it up to peer reviewers to assess the quality of publications submitted in fields like economics. In the fields that they are assessing using citations, they can use citations just as before. So, no, I don’t think this means that ERA is dead.

  2. “The journal rankings will be replaced by a new ranking based on ‘frequency of publication.’ This is so useless a measure that I read it as giving up.”

    Quite right. Here in software-land, IBM tried measuring productivity by lines of code. Naturally, programmers started writing terrible but long programs (the sketch at the end of this comment shows how easy that is). IBM, at least, had the sense to dump it.

    Bill Gates is often credited with saying, "Measuring programming progress by lines of code is like measuring aircraft building progress by weight."
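
    A minimal sketch of the gaming (my own illustration, not IBM's actual metric; the functions and numbers are made up):

    ```python
    # A naive "productivity" metric of the kind described above:
    # count non-blank lines of source code.
    def loc_score(source: str) -> int:
        return sum(1 for line in source.splitlines() if line.strip())

    concise = """
    def total(xs):
        return sum(xs)
    """

    # The same behaviour, padded purely to inflate the metric.
    padded = """
    def total(xs):
        result = 0
        index = 0
        while index < len(xs):
            value = xs[index]
            result = result + value
            index = index + 1
        return result
    """

    print(loc_score(concise))  # 2
    print(loc_score(padded))   # 8 -- four times as "productive"
    ```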

  3. It seems to me that it has come full circle: after building evaluation tools based on the number of publications in the 90s, and being rewarded with lots of them (but in "low" journals, as Butler pointed out), the ARC switched to a quality-based tool with the ERA, with its gaming first of the ranking process and then of publications themselves.
    And now we are back to the mid-90s. Academics are so predictable that they will follow whatever rule they are given.
    To put that in historical perspective, and from an international point of view: The controversial policies of journal ratings: evaluating social sciences and humanities http://j.mp/e1xkOP

  4. Well I can see that Joshua is not currently in the running for any lucrative consultancies from the relevant Department. Why don’t you tell us what you really think?

    Of course Goodhart's Law reigns supreme – it bedevils all administration. But rather than just pointing this out, can you suggest a system for allocating research funds in a way that actually raises standards? Because the fact is that a lot of current Australian research in the social sciences – and possibly the physical sciences for all I know – is crap.

  5. Dropping the discrete A*/A/B/C tiers is a good idea. I have always argued (see HERE) that a numeric score or full ranking was more appropriate.

    But their journal ranking is going to be based on "frequency of publication" within subject area. I am not sure the people in Canberra even proofread their pronouncements. What the hell does "frequency of publication" mean? Is a journal with 12 issues per year better than one with 4? I am told by one source that they meant frequency of citation. There are many different citation measures (a toy example at the end of this comment makes the point). More to the point, they have reduced the ranking to a single dimension.

    Joshua is right to be angry.
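
    To illustrate the point about citation measures: here is a toy example (the journals and numbers are hypothetical, my own) where two defensible readings of "frequency of citation" disagree about the ordering.

    ```python
    # Two defensible "frequency of citation" measures applied to the
    # same (hypothetical) journals: they disagree about the ranking.
    journals = {
        # name: (citations received, articles published) over some window
        "Journal A": (900, 300),  # high volume, modest per-article impact
        "Journal B": (400, 50),   # low volume, high per-article impact
    }

    by_total = sorted(journals, key=lambda j: journals[j][0], reverse=True)
    by_rate = sorted(journals, key=lambda j: journals[j][0] / journals[j][1],
                     reverse=True)

    print("By total citations:      ", by_total)  # ['Journal A', 'Journal B']
    print("By citations per article:", by_rate)   # ['Journal B', 'Journal A']
    ```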

  6. In a sense, what the Minister may have expected was for academics to play to all six sections of the ERA quality framework, rather than obsessively fixate on two bands of a limited spectrum. Given that ranked journals were only one of five possible publication outlets (books, chapters, journals, conferences, other), and that there were five other categories beyond ranked outlets, it takes industry-insider experience to appreciate fully that the academic sector would ignore opportunity after opportunity (prestige, citations, patent income, commercialisation, research income, and four of the five outlets).

    Having spoken to several people from ERA over time, I can say they were genuinely frustrated that their efforts to expand the portfolio of recognised research were narrowed to a limited band of outlets, expressly contrary to their intent. As recently as April, in a meeting where ERA staff explicitly said, "A* and A are meaningless. There are no weightings attached. Here is the data demonstrating that they do not matter," the response was, "And by that you mean that A* is the most important outlet to publish in?"

    If anything, removing the journal ranks may actually force people to consider that there are measures of quality beyond ranked outlets, and to start addressing some of those areas. Unlikely, but much more possible now that the A/A* categories are historical footnotes.
