Let’s suppose that the guy on the left is choosing the column and the guy on the right is choosing the row. (The first payoff in each cell of the following game is the right guy’s payoff.)

Here is the original game.

It has three (pure strategy) Nash equilibria: (Steal, Steal), (Split, Steal) and (Steal, Split). So it is not a traditional prisoner’s dilemma, which has one unique equilibrium, (Steal, Steal).
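The three-equilibria claim can be checked by brute force. Here is a minimal sketch in Python, assuming the episode’s pot of £13,600 (so an even split pays £6,800 each, a lone Steal takes the lot, and mutual Steal pays nothing):

```python
from itertools import product

# Payoffs for the original game, as (row player's payoff, column player's payoff).
# Figures assume a £13,600 pot, per the £6,800 half-shares mentioned in the comments.
ACTIONS = ["Split", "Steal"]
payoffs = {
    ("Split", "Split"): (6800, 6800),
    ("Split", "Steal"): (0, 13600),
    ("Steal", "Split"): (13600, 0),
    ("Steal", "Steal"): (0, 0),
}

def pure_nash_equilibria(payoffs):
    """A profile is a pure strategy Nash equilibrium if neither player can
    strictly improve by unilaterally deviating."""
    equilibria = []
    for r, c in product(ACTIONS, ACTIONS):
        row_pay, col_pay = payoffs[(r, c)]
        row_best = all(payoffs[(r2, c)][0] <= row_pay for r2 in ACTIONS)
        col_best = all(payoffs[(r, c2)][1] <= col_pay for c2 in ACTIONS)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))
# [('Split', 'Steal'), ('Steal', 'Split'), ('Steal', 'Steal')]
```

Note that the two asymmetric cells are only weak equilibria: the player facing a Steal gets zero either way, so they have no strict incentive to deviate.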

Now the left guy’s strategy is to change the payoffs. Specifically, he does so by offering a contract in the cell where he plays Steal and the other plays Split. That generates a new game.

Notice that there is now only one pure strategy Nash equilibrium, and it sits in a cell with a split-like outcome. This change occurs, I suspect, because the contract is enforceable: there was a clear offer and acceptance. However, the contract will likely have some details associated with it, and so I have reduced the left guy’s payoff by c to account for those transaction costs. If c = 0, then the (Split, Steal) option is a Nash equilibrium again.

The point is that (Split, Split), the outcome that was actually played, is not a Nash equilibrium. A real strategic innovation would be one that made it an equilibrium. What is more, unless you believe c = 0, then (Split, Steal) isn’t a Nash equilibrium either.
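These claims about the modified game can be enumerated the same way. A sketch, assuming (as the post does) that the transaction cost c falls on the left (column) guy who offered the deal, and again using the £13,600 pot; the `modified_payoffs` helper is my own naming:

```python
from itertools import product

ACTIONS = ["Split", "Steal"]

def modified_payoffs(c):
    """Payoffs after the column player's contract: he plays Steal but shares
    the pot afterwards. In the (Split, Steal) cell the £13,600 is split, with
    the transaction cost c borne by the column player who offered the deal."""
    return {
        ("Split", "Split"): (6800, 6800),
        ("Split", "Steal"): (6800, 6800 - c),  # the enforceable side deal
        ("Steal", "Split"): (13600, 0),
        ("Steal", "Steal"): (0, 0),
    }

def pure_nash_equilibria(payoffs):
    """Profiles from which neither player can strictly improve by deviating."""
    equilibria = []
    for r, c in product(ACTIONS, ACTIONS):
        row_pay, col_pay = payoffs[(r, c)]
        if all(payoffs[(r2, c)][0] <= row_pay for r2 in ACTIONS) and \
           all(payoffs[(r, c2)][1] <= col_pay for c2 in ACTIONS):
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(modified_payoffs(c=100)))
# [('Steal', 'Split')]
print(pure_nash_equilibria(modified_payoffs(c=0)))
# [('Split', 'Steal'), ('Steal', 'Split')]
```

With any c > 0, the unique pure equilibrium has the row player stealing, and (Split, Steal) only rejoins the equilibrium set at c = 0, consistent with the discussion in the comments below.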

Now the left guy had recognised that (a) there were three Nash equilibria and (b) by committing to Steal so openly he may have pushed the other person towards the Split choice, as it would make them look better. The point is that, if this were a real innovation, this game show would be dead. But it isn’t sustainable, so the show will live on nicely.

I don’t see how [Steal, Steal] is a Nash equilibrium in the second game. The Row player could switch from Steal to Split and their payoff would change from 0 to 6800 (assuming the contract is enforceable). If your c > 0, then there would be only one Nash equilibrium, [Steal(r), Split(c)], which is oddly the opposite of the threat/promise.

However, I don’t see why the cost ‘c’ is imposed on the person who offered the deal, rather than on the one who would have to try to enforce it. In that case, [Split, Steal] would be a Nash equilibrium (as would [Steal, Split], but the former is arguably the more likely outcome).


You are right. Wrote this too early in the morning.
