Thoughts on “Thinking, fast and slow”

I couldn’t resist buying a copy of Daniel Kahneman’s best-seller when returning from holidays. Several friends and colleagues told me it was a great book; it got great reviews; and Kahneman’s journal articles are invariably a good read, so I was curious.

Its general message is simple and intuitively appealing: Kahneman argues that people use two distinct systems to make decisions, a fast one and a slow one. System 1, the fast one, is intuitive and essentially consists of heuristics, such as when we finish, without much thought, the nursery rhyme ‘Mary had a little…’. The answer ‘lamb’ is what occurs to us from our associative memory. The heuristic of following that impulse gives the right answer in most cases but can be led astray by phrases like ‘Ork, ork, ork, soup is eaten with a …’. Less innocuous examples of these heuristics, and of how they can lead to sub-optimal outcomes, are distrusting the unfamiliar, remembering mainly the most intense and the last aspects of an experience (the ‘peak-end rule’), valuing something more after possessing it than before possessing it (the ‘endowment effect’) and judging the probability of an event by how easily examples come to mind.

System 2, the slow way to make decisions, is more deliberative: it involves an individual understanding a situation by drawing on many different experiences and outside data. System 2 is what many economists would call ‘rational’ whilst System 1 is ‘not so rational’, though Kahneman wants to have his cake and eat it too: he says that System 1 challenges the universality of the rational economic agent model whilst nevertheless not wanting to say that the rational model is wrong. ‘Sort of wrong sometimes’ seems to be his final verdict.

Let me explore below two issues that I have not seen in the reviews of this book. The first is whether or not his main dichotomy is going to be taken up by economics or social science in the longer run. The second, related, point is where I think this kind of ‘rationality or not’ debate is leading. Both issues involve a more careful look at whether the distinction between Systems 1 and 2 really is all that valid, and thus at the question of what Kahneman has ultimately achieved, which in turn centers on the usefulness of the rational economic man paradigm.


The Importance of Peer-Review in Journal, Department, and Individual Research Rankings


I recall that sometime in the mid-2000s, when the Research Quality Framework (which preceded the current ERA) was being discussed, Arden Bement, the director of the National Science Foundation, was asked what he thought. He responded as one would expect of a serious researcher, by saying that the only method he knows for judging academic outcomes is peer review.

In fact, peer review is the gold standard in science. We simply don’t trust any finding, method, conclusion, analysis or study that is not reported in a peer-reviewed outlet. Yet there has been rapid growth in the use of indices that have not been tested through peer review and which are being used to measure journal rankings, individual academic performance, and even the standing of departments.

Here, I argue that we should return to published methods that have been tested through peer review. The unchecked use of non-peer-reviewed methods runs the risk of misallocating resources, e.g. if university promotion and appointment committees and bodies like the ARC use them. Even more troubling is that non-peer-reviewed methods are susceptible to manipulation; combined with the problem of inappropriate rewards, this has the potential to undermine what the profession, through peer review, regards as the most valuable academic contributions.

Economics journal rankings

In the last two decades or so, there has been an explosion in the use of online indices to measure research performance in economics in particular (and academia generally). Thomson-Reuters Social Science Citation Index (SSCI), Research Papers in Economics (RePEc) and Google Scholar (GS) are the most commonly used by economists.

These tools display the set of publications in which a scholar’s given article is cited. While SSCI and GS take their set from the web as a whole, RePEc, hosted by the St Louis Fed, is different in that it refers only to its own RePEc digital database, which is formed by user subscription. Further, RePEc calculates rankings of scholars, but only of those who subscribe. Referring to its ranking methods, the RePEc web page states:

This page provides links to various rankings of research in Economics and related fields. This analysis is based on data gathered with the RePEc project, in which publishers self-index their publications and authors create online profiles from the works indexed in RePEc.

While it has been embraced by some academic economists in Australia as a tool for research performance measurement, it is important to note that the RePEc ranking methodology is not peer-reviewed. This departure from academics’ usual strong commitment to the process of peer review is puzzling, given that there is a long history of peer review in economics in the study of, you guessed it, journal ranking.

A (very) quick-and-dirty modern history

Coats (1971) used cites in important survey volumes to provide a ranking; Billings and Viksnins (1972) used cites from an arbitrarily chosen ‘top three’ journals; Skeels and Taylor (1972) counted the number of articles in graduate reading lists; and Hawkins, Ritter and Walter (1973) surveyed academic economists. (Source: Liebowitz and Palmer, JEL 1984, p. 78.)

The modern literature is based on a paper by Liebowitz and Palmer in the Journal of Economic Literature, 1984. In their own words, their contribution had three key features:

…(1) we standardize journals to compensate for size and age differentials; (2) we include a much larger number of journals; (3) we use an iterative process to “impact adjust” the number of citations received by individual journals

Roughly speaking, the method in (3) is to: (a) write down a list of journals in which economics is published; (b) count up the total number of citations to articles in each journal; (c) rank the journals by this count; (d) weight the citations by this count; and, finally, (e) iterate. The end result gives you a journal ranking based upon impact-adjusted citations.
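To make the iteration in steps (a)–(e) concrete, here is a minimal sketch. The citation matrix and its numbers are entirely invented for illustration, and this is only the rough idea of an impact-adjustment iteration, not Liebowitz and Palmer's exact procedure (which also standardises for journal size and age):

```python
import numpy as np

# Hypothetical citation counts for a toy "economy" of three journals:
# cites[i, j] = citations appearing in journal i's articles that point
# to articles published in journal j. Numbers are invented.
cites = np.array([
    [0., 10., 2.],
    [8.,  0., 1.],
    [4.,  3., 0.],
])

def impact_adjusted_weights(cites, iters=100):
    """Iteratively re-weight citations so that a cite coming from a
    high-impact journal counts for more than one from a low-impact
    journal, in the spirit of steps (d) and (e)."""
    n = cites.shape[0]
    w = np.ones(n) / n            # step (b)/(c): start from equal weights
    for _ in range(iters):        # step (e): iterate to a fixed point
        score = cites.T @ w       # weighted citations each journal receives
        w = score / score.sum()   # renormalise so weights sum to one
    return w

weights = impact_adjusted_weights(cites)
ranking = np.argsort(-weights)    # journal indices, highest impact first
```

The fixed point of this iteration is the dominant eigenvector of the (transposed) citation matrix, which is why a cite from a highly cited journal ends up counting for more than a cite from an obscure one.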

The current best method is Kalaitzidakis et al., Journal of the European Economic Association, 2003, hereafter KMS. This study was commissioned by the European Economic Association to gauge the impact of academic research output by European economics departments.

KMS is based on data from the 1990s and, as far as I am aware, has not been updated. No ranking can replace the wisdom of an educated committee examining a CV. However, KMS at least comes from a peer-review process. Unlike simple count methods, it presents impact-, age-, page- and self-citation-adjusted rankings, among others.

But even KMS-type methods can be misused: one should be ready to apply the “laugh test” to evaluate any given ranking. KMS deliberately uses a set of economics journals, roughly defined as the journals economists publish in and read. It passes the laugh test because, roughly speaking, the usual “top five” that economists have in their heads (AER, Econometrica, JPE, QJE and ReStud) do indeed appear near the top of the ranking, and other prestigious journals are not far behind.

The Economics Department at Tilburg University has included statistics journals in its “Tilburg University Economics Ranking”. The result? “Advances in Applied Probability” beats out the American Economic Review as the top journal. You need look no further than their top five to see that this does not pass the laugh test:

  1. Advances in Applied Probability
  2. American Economic Review
  3. Annals of Probability
  4. Annals of Statistics
  5. Bernoulli

Would I be remiss in suggesting that a statistics-oriented econometrician might have had input into this ranking? Yes, I would. Oops!

Finally, let us turn to the new RePEc impact-adjusted ranking. A laugh-test failure here, among others, is the inclusion of regional Fed journals: the Quarterly Review of the Federal Reserve Bank of Minneapolis is ranked 14, just above the AER; the Proceedings of the Federal Reserve Bank of San Francisco is ranked 16, ahead of the Journal of Econometrics at 19; and the Proceedings of the Federal Reserve Bank of Cleveland is 24, ahead of the European Economic Review at 29.

The RePEc top 5 is:

  1. Quarterly Journal of Economics
  2. Journal of Economic Literature
  3. Journal of Economic Growth
  4. Econometrica
  5. Economic Policy

It would be interesting to investigate whether macroeconomists and policy scholars had influence here.

My conclusions

If we are going to use ranking methods, we should be very careful. Use methods that have emerged over decades of rigorous peer review, like the European Economic Association’s 2003 study by KMS. And stick to their method rigorously, lest we all have to retrain in statistics.

Update on Gene Patents 2010

Here’s an excellent update on Gene Patents covering the year 2010: It is written by my student Rachel Goh, a 5th year medical student at the University of Melbourne. She discusses the controversy surrounding the Myriad and Monsanto cases in the US and Europe, as well as legal decisions in Australia surrounding breast cancer tests and the Australian Senate review on gene patents. Of particular interest is her observation that we are moving increasingly towards “multi-genomic” tests, so the patenting of individual genetic sequences will cause greater problems for follow-on and systemic innovation. I see here a parallel to software patents and patent thickets, which have been said to have had similar effects. Rachel also included a thoughtful commentary along with her summary.

Survival science – just run like hell

You know the story of the two guys who are being chased by a lion. One says to the other “We are going to die. We can never outrun this lion.” His friend replies: “I don’t have to outrun the lion. I only have to outrun you.”


A recent paper* turns the modern spotlight of statistics onto that pressing issue of how best to survive a big cat attack. The authors analysed data from 185 puma attacks on humans in North America over more than 100 years. The response was severity of injury, ranging from no injury to death. The predictors were age, group composition and behaviour. I am not sure about age but I am guessing that you shouldn’t go walking by yourself in puma country for a start. The modern data crunch used to reveal the elusive truth was multinomial regression.
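For the curious, here is what a multinomial regression of this kind looks like in practice. This is only a minimal sketch of the technique the paper names: the data below are entirely invented (not the paper's 185 attacks), and the encoding of the predictors is my own assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely hypothetical data, for illustration only.
# Predictors: [age in years, group size, behaviour (0 = stood still, 1 = ran)]
# Outcome: injury severity category (0 = no injury, 1 = injury, 2 = severe)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(5, 70, n),    # victim's age
    rng.integers(1, 6, n),     # number of people in the group
    rng.integers(0, 2, n),     # behaviour code
])
y = rng.integers(0, 3, n)      # severity label (random here, real data isn't)

# Multinomial logistic regression: models the probability of each
# severity category as a function of the predictors.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted severity probabilities for a lone 30-year-old standing still.
probs = model.predict_proba([[30, 1, 0]])
```

With real data, the fitted coefficients would tell you how each predictor shifts the odds of a worse outcome, which is how the paper arrives at conclusions like "running lowers the chance of injury".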

It turns out that age had no effect on injury severity. Once the puma gets his claws into you, you are pretty well fu…d (well, in serious trouble) however old you are.

There is confirmation that if you are in a group you have less chance of injury – just as the guys in the humourous story reasoned from first principles. But the severity of injury is not reduced by larger group numbers. Your mates are obviously too busy climbing up the nearest tree to distract the puma from snacking on your wobbly bits.

It also appears that your behaviour influences the chance of serious injury. Specifically, it is found that if you stand still and wait for the puma to attack you, then you have a higher chance of injury (74%) than if you run like hell (50%). And it doesn’t really matter how fast you run – presumably because the puma can run faster than you or Usain Bolt.

So, if you see a puma attacking then you should run. Who would have thought! ** This is science and modern number crunching at its best – pushing the frontiers of human knowledge and saving human lives.

*Anthrozoos: A Multidisciplinary Journal of the Interactions of People and Animals, 22, 77-87.

** OK, I am being a bit hard on the authors. Actually, conventional wisdom and some wildlife agencies advise against running. The California Department of Fish and Game says on its Web site, in part: “Do not run from a lion. Running may stimulate a mountain lion’s instinct to chase. Instead, stand and face the animal.”


Obama on Science

The report of the National Innovation Review was released in September last year, while Barack Obama was still running for President. We are yet to hear what, if anything, the Government is going to do about those recommendations. Today, however, the US President made a speech outlining what he is going to do in the area of science and technology. It is far-reaching, bold (lifting spending on R&D to 3% of GDP) and adopts many of the things our National Innovation Review recommended.


The speech is here and it is perhaps the best Obama has made since taking office.

The pursuit of discovery half a century ago fueled our prosperity and our success as a nation in the half century that followed. The commitment I am making today will fuel our success for another 50 years. That’s how we will ensure that our children and their children will look back on this generation’s work as that which defined the progress and delivered the prosperity of the 21st century.

Underpinning the speech are some core ideas for the economics of innovation including the recognition of uncertainty as well as the idea that general purpose technologies spur innovative applications.

As Vannevar Bush, who served as scientific advisor to President Franklin Roosevelt, famously said: “Basic scientific research is scientific capital.”

The fact is an investigation into a particular physical, chemical, or biological process might not pay off for a year, or a decade, or at all. And when it does, the rewards are often broadly shared, enjoyed by those who bore its costs but also by those who did not.

And that’s why the private sector generally under-invests in basic science, and why the public sector must invest in this kind of research — because while the risks may be large, so are the rewards for our economy and our society.

No one can predict what new applications will be born of basic research: new treatments in our hospitals, or new sources of efficient energy; new building materials; new kinds of crops more resistant to heat and to drought.

It was basic research in the photoelectric field — in the photoelectric effect that would one day lead to solar panels. It was basic research in physics that would eventually produce the CAT scan. The calculations of today’s GPS satellites are based on the equations that Einstein put to paper more than a century ago.

And it is not all about basic, government-funded science:

But the renewed commitment of our nation will not be driven by government investment alone. It’s a commitment that extends from the laboratory to the marketplace. And that’s why my budget makes the research and experimentation tax credit permanent. This is a tax credit that returns two dollars to the economy for every dollar we spend, by helping companies afford the often high costs of developing new ideas, new technologies, and new products. Yet at times we’ve allowed it to lapse or only renewed it year to year. I’ve heard this time and again from entrepreneurs across this country: By making this credit permanent we make it possible for businesses to plan the kinds of projects that create jobs and economic growth.

But it is about energy and the environment.

But energy is our great project, this generation’s great project. And that’s why I’ve set a goal for our nation that we will reduce our carbon pollution by more than 80 percent by 2050. And that is why and that is why I’m pursuing, in concert with Congress, the policies that will help meet us — help us meet this goal.

My recovery plan provides the incentives to double our nation’s capacity to generate renewable energy over the next few years — extending the production tax credit, providing loan guarantees and offering grants to spur investment. Just take one example: Federally funded research and development has dropped the cost of solar panels by tenfold over the last three decades. Our renewed efforts will ensure that solar and other clean energy technologies will be competitive.

And it will all be done through a new agency, ARPA-E.

But like our innovation review, it isn’t just about science. It is also about data.

The Recovery Act will support the long overdue step of computerizing America’s medical records, to reduce the duplication, waste and errors that cost billions of dollars and thousands of lives.

But it’s important to note, these records also hold the potential of offering patients the chance to be more active participants in the prevention and treatment of their diseases. We must maintain patient control over these records and respect their privacy. At the same time, we have the opportunity to offer billions and billions of anonymous data points to medical researchers who may find in this information evidence that can help us better understand disease. …

In biomedicine, just to give you an example of what PCAST can do, we can harness the historic convergence between life sciences and physical sciences that’s underway today; undertaking public projects — in the spirit of the Human Genome Project — to create data and capabilities that fuel discoveries in tens of thousands of laboratories; and identifying and overcoming scientific and bureaucratic barriers to rapidly translating scientific breakthroughs into diagnostics and therapeutics that serve patients.

In environmental science, it will require strengthening our weather forecasting, our Earth observation from space, the management of our nation’s land, water and forests, and the stewardship of our coastal zones and ocean fisheries.

And then, there is the usual commitment to science and mathematics in education that I am sure others can go into.

It is rare these days to see governments focused on the very long term, but that makes it all the sweeter when they do.


Video Podcast

I’ve created a website for people to discuss gene patenting. On that site, you will find video podcasts from last week’s public discussion on gene patenting. If you have comments or thoughts to share on this very important issue, please add them to the “comments” section there.

Commercial science

I have already commented on the interesting posts by Stephen Quake on the interface between academic science and commerce. In his latest column, Quake comments on how universities manage potential conflicts of interest:


Faculty members with financial interests in their research must disclose such interests through a “conflict of interest” (COI) process. The federal government has up to this point taken a fairly sensible position about COI in the grants they fund: they require conflicts to be disclosed by the faculty and “managed” by the university – but don’t prescribe a particular method.

Unfortunately, this has encouraged universities to create a bureaucracy to “manage” COI – often by meddling into faculty research in ways that create more heat than light. These COI bureaucracies often overlook the solution that has been arrived at by the scientific community: disclosure and peer review of all publications. Peer review is the bedrock value of the scientific community and although it certainly is not perfect, it is, to paraphrase Winston Churchill, “the worst system, except for all the others that have been tried.”

When this bureaucracy asked me for a plan to manage conflicts in my own research, I wrote one that described all of the steps involved in peer review – and the COI committee sent it back as “too much.” In their view the process that scientific publications go through was more rigorous than necessary.

He goes on:

Interestingly, it is not unusual for basic scientists with no commercial relationships to be dependent on grants for their salaries and therefore have a significant personal financial interest in preserving their grants. Although COI experts have assured me that that this is not a conflict that needs to be managed, I must confess that I have some difficulty with the distinction they are trying to draw. Who is under greater temptation to bias the results of their research: the financially comfortable academic entrepreneur, or the ivory tower scientist who may not be able to pay his mortgage if his grant is not renewed? Perhaps all financial conflicts should be treated even-handedly.

Then there is a discussion about actual university commercialisation:

Licensing is often a protracted process, and licensing officers so paralyzed with fear about making a mistake and not maximizing licensing revenues that they discourage all but the most persistent licensees. Because universities are non-profit institutions, the true measurement of technology transfer success should not be the total amount of licensing revenue, but rather the successes in helping faculty members patent inventions, in forming new ventures that create jobs, and in facilitating the commercialization of technologies that in many cases will help improve our society.

The best way for universities to achieve this would be to make the same decision the federal government did, and relinquish their control over licensing. Since in most cases faculty know the context of their invention and how it can be best commercialized, they should drive the licensing process, and the OTL should play a supporting role. The university deserves to receive some compensation, but this should be fixed by a simple formula and limited — bearing in mind that the vast majority of research funding that leads to inventions has been obtained by the faculty through grants, and that the university has already taxed a fair bit of that to support its facilities.

I thought of all this stuff about university meddling as I listened to a talk the other week from a noted Australian scientist, who described how the University of Melbourne used to make it nearly impossible for academic entrepreneurs to commercialise potential innovations, and then for a brief period assigned ownership to them and kept its hands off. That apparently led to a great deal of commercialisation before the University reverted to the high transaction cost model.

[Update: Actually, there is some ambiguity as to whether the policy of assigning ownership to academics really worked at the University of Melbourne; which is why they abandoned it. Other reports suggest that the current policies are working better. Sounds like an interesting area for further study.]