How much can economics labs teach us?

As anyone who has read SuperFreakonomics will have seen, Steven Levitt (along with John List) appears to be on a quest to reach into the chest of laboratory experiments and rip out its beating heart. Their latest two papers are below (gated links, sorry).

What Happens in the Field Stays in the Field: Exploring Whether Professionals Play Minimax in Laboratory Experiments 
Steven Levitt, John List and David Reiley
The minimax argument represents game theory in its most elegant form: simple but with stark predictions. Although some of these predictions have been met with reasonable success in the field, experimental data have generally not provided results close to the theoretical predictions. In a striking study, Palacios-Huerta and Volij (2007) present evidence that potentially resolves this puzzle: both amateur and professional soccer players play nearly exact minimax strategies in laboratory experiments. In this paper, we establish important bounds on these results by examining the behavior of four distinct subject pools: college students, bridge professionals, world-class poker players (who have vast experience with high-stakes randomization in card games), and American professional soccer players. In contrast to Palacios-Huerta and Volij's results, we find little evidence that real-world experience transfers to the lab in these games; indeed, similar to previous experimental results, all four subject pools provide choices that are generally not close to minimax predictions. We use two additional pieces of evidence to explore why professionals do not perform well in the lab: (1) complementary experimental treatments that pit professionals against preprogrammed computers, and (2) post-experiment questionnaires. The most likely explanation is that these professionals are unable to transfer their skills at randomization from the familiar context of the field to the unfamiliar context of the lab.
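The penalty-kick setting behind this literature is essentially a 2×2 zero-sum game, and the minimax prediction follows from a one-line indifference condition. A minimal sketch, using hypothetical scoring probabilities for illustration (not the data from either paper):

```python
def minimax_mix_2x2(a, b, c, d):
    """Row player's equilibrium probability of playing the first row in a
    2x2 zero-sum game whose row-player payoff matrix is [[a, b], [c, d]].

    Assumes an interior mixed equilibrium (no pure-strategy saddle point).
    Derived by making the column player indifferent between columns:
        p*a + (1-p)*c = p*b + (1-p)*d  =>  p = (d - c) / (a - b + d - c)
    """
    return (d - c) / (a - b + d - c)

# Hypothetical scoring probabilities for a kicker (rows: kick Left/Right)
# against a goalkeeper (columns: dive Left/Right) -- illustrative only.
p_left = minimax_mix_2x2(a=0.58, b=0.95, c=0.93, d=0.70)
print(f"kicker should kick left with probability {p_left:.3f}")
```

Minimax then predicts that observed kick frequencies match this mixture and that successive choices are serially independent, which is exactly what the lab data generally fail to reproduce.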

Checkmate: Exploring Backward Induction Among Chess Players
Steven Levitt, John List and Sally Sadoff
Although backward induction is a cornerstone of game theory, most laboratory experiments have found that agents are not able to successfully backward induct. Much of this evidence, however, is generated using the centipede game, which is ill-suited for testing the theory. In this study, we analyze the play of world-class chess players both in the centipede game and in another class of games – Race to 100 games – that are pure tests of backward induction. We find that world-class chess players behave like student subjects in the centipede game, virtually never playing the backward induction equilibrium. In the Race to 100 games, in contrast, we find that many chess players properly backward induct. Consistent with our claim that the centipede game is not a useful test of backward induction, we find no systematic within-subject relationship between choices in the centipede game and performance in pure backward induction games.
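In the standard Race to 100, players alternately add between 1 and 10 to a running total, and whoever reaches 100 wins; backward induction shows the totals 1, 12, 23, ..., 89 are losing for the player to move. A minimal sketch of that computation (assuming the standard 1–10 move set, which may differ from the paper's exact variants):

```python
from functools import lru_cache

TARGET, MAX_ADD = 100, 10  # standard Race to 100: add 1-10 per turn

@lru_cache(maxsize=None)
def wins(total):
    """True if the player about to move can force a win from this total."""
    if total >= TARGET:
        return False  # the previous mover already reached 100 and won
    # a position is winning if some move leaves the opponent in a losing one
    return any(not wins(total + k) for k in range(1, MAX_ADD + 1))

losing = [t for t in range(TARGET) if not wins(t)]
print(losing)  # → [1, 12, 23, 34, 45, 56, 67, 78, 89]
```

The losing totals are exactly those congruent to 1 mod 11, so the first mover wins by opening with 1 and thereafter restoring that pattern. Unlike the centipede game, playing the equilibrium here requires nothing but this chain of reasoning, which is what makes it a pure test of backward induction.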

This follows on from List’s famous 2007 JPE paper (On the Interpretation of Giving in Dictator Games), which found that a simple modification to the dictator game (allowing the player to take money as well as share it) drastically altered the results. As List stated in the conclusion to that paper:

A recent surge of research in economics uses the laboratory as a tool to measure preferences. One stylized fact from this literature is that a majority of agents in standard dictator games pass a portion of their funds to an anonymous agent, and the amount is nontrivial—roughly 20 percent of the endowment. Utility theories that invoke social preferences have been forwarded to explain such data patterns. One puzzling feature of everyday life, however, is that even though scores of students around the world have outwardly exhibited their preferences for equality in laboratory experiments by sending anonymous cash gifts to anonymous souls (in some cases not even knowing that such a soul actually exists), why is it rare to find such data patterns in the extra-lab world?

My own take on this literature is that for experimental economists, there is probably more to be learned from economics games that don’t use computers. If you want to learn about teams, put people around a table. If you want to learn about trust, let them speak to their partner. If you want to learn about giving, put the cash in their hands first. The less that the game resembles real life, the less we are likely to be able to learn from it.

2 thoughts on “How much can economics labs teach us?”

  1. I suspect that experimental economics suffers from similar problems to experimental psychology.
    The researchers will be rigorous about randomisation, fastidious about their statistical analysis and beyond reproach in giving enough information so others can repeat the experiment.
    But what gets the experiment into the textbooks is a heroic interpretive leap. We start with a modest claim about how American college students (or whoever the subjects are) will behave in a given circumstance but end with a sweeping conclusion about the nature of human beings and their behaviour just about everywhere.
    Stanley Milgram’s obedience experiments are one example.



Comments are closed.