We economists are a dismal bunch, aren’t we?

In a forthcoming article (downloadable here) in Research Policy (an A* journal according to the 2013 ranking of the Australian Business Deans Council), Sarah Necker of the University of Freiburg, Germany, reports the results of a study on fraudulent and questionable research practices in economics. The study is based on survey data, so it automatically comes with the usual caveats (sample selection biases, untruthful responses) that any such study faces.

Necker is aware of the various pitfalls of this kind of research (e.g., the systematic differences between own reports and reports of misbehaving colleagues) and, given the circumstances, mostly makes the best of it: the questionnaire was sent out to all registered members of the European Economic Association. Of 2,520 potential responders (including me), more than 600 started it (not me), and 426 continued until the last page, for a participation rate of 17%, roughly in the ballpark for surveys of this kind. Looking at the few observable characteristics she has information on, such as gender and the national distribution of institutional affiliations, Necker argues that her sample is broadly representative. Also, the answers of respondents in the first and last quintile are similar, which allegedly adds to the trustworthiness of her results, as do several other robustness checks (see p. 2, first column).

Necker finds:

“The correction, fabrication, or partial exclusion of data, incorrect co-authorship, or copying of others’ work is admitted by 1–3.5%. The use of ‘tricks to increase t-values, R2, or other statistics’ is reported by 7%. Having accepted gifts in exchange for (co-)authorship, access to data, or promotion is admitted by 3%. Acceptance of sex or money is reported by 1–2%. One percent admits to the simultaneous submission of manuscripts to journals.

About one fifth admits to having refrained from citing others’ work that contradicted the own analysis or to having maximized the number of publications by slicing their work into the smallest publishable unit. Having at least once copied from their own previous work without citing is reported by 24% (CI: 20–28%). Even more admit to questionable practices of data analysis (32–38%), e.g., the ‘selective presentation of findings so that they confirm one’s argument.’ Having complied with suggestions from referees despite having thought that they were wrong is reported by 39% (CI: 34–44%). Even 59% (CI: 55–64%) report that they have at least once cited strategically to increase the prospect of publishing their work.” (p. 3)
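As a quick back-of-the-envelope check of my own (the paper does not spell this out, and I am assuming the full sample of 426 answered these items), the reported confidence intervals are consistent with a plain normal approximation for a sample proportion; for the 24% self-plagiarism figure this gives roughly the reported 20–28%:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 24% report having copied from their own previous work without citing;
# assuming all 426 respondents answered this item, the interval is ~20-28%.
low, high = wald_ci(0.24, 426)
print(f"{low:.1%} to {high:.1%}")  # ~19.9% to 28.1%
```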

Needless to say, respondents who admit to a questionable behavior tend to think of it as somewhat more justifiable, a result that is not surprising in light of a well-established literature on self-serving biases. Needless to say, too, the reported percentages of misbehaving colleagues are all higher, sometimes by almost an order of magnitude. The median response is “up to 10%” for fabrication of data, for example. The median response for plagiarism (here defined as “the incorrect handling of others’ ideas”) is “up to 20%” of published research, and it is “up to 30%” for various forms of massaging or incorrectly reporting data.

These numbers – which Necker argues are similar for psychologists – seem at first sight quite distressing. It hence needs stressing that they are quite misleading: the author asked her respondents whether they had “ever” engaged in, or observed, the various questionable behaviors but did not control for actual incidence (see pp. 10–11). Necker’s piece hence strikes me as exhibit A for questionable, and sensationalist, reporting practices. (Interestingly, the folks at retractionwatch did not pick that up.)

In the end, it is not clear how widespread scientific misbehavior in economics is, and I wonder whether Fanelli, who is quoted in the retractionwatch piece, actually read Necker’s paper. If he did, he seems to have missed a major problem with her study.

Whatever the incidence numbers really are, it remains an open question how they will evolve. While it is clear that increasing performance management adds to the temptation to cut corners, the increasing likelihood that fraudulent and questionable behavior will be discovered (as illustrated by sites such as retractionwatch) acts as a countervailing force, making it difficult to predict where we are headed.

As mentioned, Necker’s respondents are members of the European Economic Association. It is not clear to what extent we can make inferences about Aussie economists, although the performance pressures that induce fraudulent and questionable behavior seem similar worldwide.

3 thoughts on “We economists are a dismal bunch, aren’t we?”

  1. Nice. Perhaps they should add the question of whether researchers have ever written anything for the sole purpose of the career advantages of getting published, without the slightest hope that their work will truly advance our joint knowledge or help society. Surely we’d get close to 100% on that one, you would think!


  2. On a quasi-positive note: at least economists are (somewhat) honest when it comes to filling out anonymous surveys about their vices?

