The X-mas quiz: are you a utilitarian?

Economists are wedded to utilitarianism as their collective moral compass. This is why we speak of social planners, welfare, utility maximization, and quality of life. The essence of utilitarianism is that moral judgments are reserved for final outcomes, not the means by which those outcomes are achieved (unless people have preferences over those means). As Bentham said, it is about the greatest happiness of the greatest number of people. In modern jargon, classical utilitarianism is about maximizing the total number of happy life years.

The quiz has 4 questions. ‘Classical utilitarian’ answers and discussions on Friday:

  1. To which identifiable group should society allocate its scarce supply of life-saving donor organs? I am thinking here of gender, age, race, area: anything that could serve as the basis for an administrative allocation.
  2. There is a potential terrorist who, with some probability, will cause a million deaths, and who can only be stopped by being killed. How high should the probability of the threat materializing be for you to agree that your society should have institutions (such as drone programs) that kill him off pre-emptively? And how high should the probability be for you yourself to be willing to kill him pre-emptively, presuming that act has no other consequences for yourself?
  3. Suppose you alone can choose to make statistically visible what socially unwanted things people do to their pets in their own homes, but no one knows you have that ability. In this hypothetical, making the data available would in no way change outcomes. Would you make that information visible?
  4. Suppose you are in a position to decide whether to have an institution that saves the lives of an identified group of patients, say those with a particular genetic or childhood disease. With the same money you could set up an institution that prevents 10% more deaths in the general population, for instance through inoculation or investments in road quality that reduce accident rates. Hence the second institution saves more lives, but the lives saved are not visible, either beforehand or afterwards: even afterwards, you do not know who was saved, so the lives saved are ‘statistical’. Would you invest in the first or the second institution? More generally, what is the ratio of ‘statistical lives saved’ to ‘identified lives saved’ that you implicitly choose via your policies? (A back-of-envelope sketch of this arithmetic follows the list.)
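
For concreteness, here is a back-of-envelope sketch of how the classical-utilitarian arithmetic behind questions 2 and 4 might be set up. The symbols (p, L_v, L_t, N, w_i, w_s) and the equal-weighting assumption are illustrative choices for this sketch, not the answers promised for Friday.

```latex
% Question 2 (illustrative): p = probability the threat materialises,
% L_v = average remaining happy life years per victim,
% L_t = remaining happy life years of the suspect.
% Counting only these terms, a classical utilitarian kills pre-emptively iff
\[
  p \cdot 10^{6} \cdot L_v \;>\; L_t
  \quad\Longrightarrow\quad p \gtrsim 10^{-6} \ \text{when } L_v \approx L_t .
\]
% Question 4 (illustrative): the first institution saves N identified lives
% (weight w_i each), the second saves 1.1N statistical lives (weight w_s each).
% The second institution is chosen iff
\[
  1.1\, N\, w_s \;>\; N\, w_i
  \quad\Longleftrightarrow\quad \frac{w_i}{w_s} \;<\; 1.1 ,
\]
% so classical utilitarianism, which sets w_i = w_s, always takes the 10%
% of extra statistical lives.
```

Note that this naive version counts only the lives directly at stake; a fuller utilitarian accounting would also include any disutility from the act itself, or from living under institutions that carry it out.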

Author: paulfrijters

Professor of Wellbeing and Economics at the London School of Economics, Centre for Economic Performance

8 thoughts on “The X-mas quiz: are you a utilitarian?”

  1. Death-with-high-probability and the unborn are hard things to think about in regard to welfare and the Coase theorem. Do you have other examples that don’t involve these? BTW, does the Coase theorem require interpersonal comparison of utility? I don’t think so.

    I also don’t think that economists care about utility; we care about revealed preferences. I would say that economists are wedded to the revealed preference theory of Samuelson (in fact, it’s revealed preference theory that distinguishes us from other social sciences). It’s choice that matters.

  2. 1. Open market: let donors sell, and the price comes down.
    2. Thinking is not a crime. No drones unless someone has been convicted of a crime. Otherwise it is just state-sanctioned murder that in the end will make everyone worse off.
    3.

  3. 3. Put a trailer up and sell the video on eBay. Whoever will pay the most maximises my utility.
    4. Obviously whichever group pays the most. Why would I care otherwise?

  4. Can you be an economist without being utilitarian in moral calculations? I think so. I don’t agree with the premise.

  5. 1) Society has no supply of organs; individuals do. Therefore society has no say in how they are allocated. Organs should be allocated to whoever the donor wishes to give them to. Personally, I would probably prioritise my own loved ones, followed by those whose situation was most urgent, then by those who could not otherwise afford them, then by those least likely to reject them.

    2) It’s not a question of probability, but of intent. If he has the means and has stated an intent to cause a million deaths, he should be stopped by the minimum necessary force (which may include death).

    3) No. I would be invading the privacy of all pet-owners and should be arrested. If I have the permission of the pet-owners, and am gathering the data with their knowledge, then yes.

    4) I personally would invest in the latter. Other people may choose differently. Investment in charitable organisations is a matter for individuals.

  6. Just to add, I think it’s more accurate to say economists use utilitarianism as a methodological tool to model human behaviour because it is a good approximation.

  7. @Bolter: Note that the fundamental welfare theorems imply only Pareto efficiency, not anything to do with utilitarianism.

    It seems to me that it makes sense to have a systematic bias towards the young. They are less likely to have accumulated the wealth to purchase organs, yet stand to gain the most from a transplant.

  8. This is the usual critique of utilitarianism: that a simplistic head count of beneficiaries often yields utilitarian solutions which are obviously morally repugnant. But as usual this critique ignores the effect of the decision itself on the utility function. It is perfectly consistent with utilitarian theory to suggest that it is NEVER OK to kill a suspected terrorist, because such an act would create a general sense of injustice and fear in society that would outweigh the thousands of lives ‘saved’.
