The Xmas quiz answers and discussion

Last Monday I posted 4 questions to see who thought like a classic utilitarian and who adhered to a wider notion of ethics, suspecting that in the end we all subscribe to ‘more’ than classical utilitarianism. There are hence no ‘right’ answers, merely classic utilitarian ones and other ones.

The first question was to whom we should allocate a scarce supply of donor organs. Let us first briefly discuss the policy reality and then the classic utilitarian approach.

The policy reality is murky. Australia has guidelines on this that advocate taking various factors into account, including the expected benefit to the organ recipient (relevant to the utilitarian) but also the time spent on the waiting list (not so relevant). Because organs deteriorate quickly once removed, many incidental factors also matter, such as which potential recipient answers the phone (relevant to a utilitarian). In terms of priorities though, the guidelines supposedly take no account of “race, religion, gender, social status, disability or age – unless age is relevant to the organ matching criteria.” To the utilitarian this form of equity is in fact inequity: the utilitarian does not care who receives an extra year of happy life, but, caring about the total number of additional happy years, would use any information that predicts those additional happy years, including race and gender.

In other countries, practices vary. In some, allocation is more or less on the basis of expected benefit; in others it is all about ‘medical criteria’, which in reality allow donor organs to go to people with a high probability of a successful transplant but a very low number of expected additional years. Some countries leave the decision entirely up to individual doctors and hospitals, a degree of discretion that raises the fear that allocation is not purely on the grounds of societal gain.

What would the classic utilitarian do? Allocate organs where the expected number of additional happy life-years is highest. This involves a judgement on who is going to live long and who is going to live happily. Such things are not knowable with certainty, so a utilitarian would turn to statistical predictors of both, using whatever indicator could be administered.
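To make the decision rule concrete, here is a minimal sketch of it in Python. The candidates, life-year figures and happiness scores are invented for illustration only; the point is simply that the utilitarian ranks candidates by expected additional happy life-years and allocates to the top of that ranking.

```python
# A minimal sketch of the classic utilitarian allocation rule.
# All names and numbers below are hypothetical illustrations,
# not actual clinical or actuarial data.

candidates = [
    # expected additional life-years if transplanted, and expected
    # happiness per year (on a 0-1 scale)
    {"name": "A", "extra_years": 40, "happiness": 0.8},
    {"name": "B", "extra_years": 15, "happiness": 0.9},
    {"name": "C", "extra_years": 30, "happiness": 0.5},
]

def expected_happy_years(candidate):
    """Expected number of additional happy life-years from the transplant."""
    return candidate["extra_years"] * candidate["happiness"]

# The classic utilitarian allocates the organ to whoever maximises
# expected additional happy life-years.
recipient = max(candidates, key=expected_happy_years)
print(recipient["name"], expected_happy_years(recipient))  # -> A 32.0
```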

As to length of life, we generally know that rich young women have the highest life expectancy. And amongst rich young women in the West, white/Asian rich young women live even longer. According to some studies in the US, the difference with other ethnic groups (Black) can be up to 10 years (see the research links in this wikipedia page on the issue). As to who is happy, again the general finding is that rich women are amongst the happiest groups. Hence the classic utilitarian would want to allocate the organs to rich white/Asian young women.

I should note that the classic utilitarian would thus have no qualms about ending up with a policy that violates the anti-discrimination laws of many societies. Our societies shy away from using broad observable characteristics as information on which to base allocations, which implicitly means that the years of life of some groups are weighed more heavily than those of others. The example thus points to a real tension between classic utilitarianism, with its acceptance of statistical discrimination on the basis of gender and perceived ethnicity, and the dominant moral positions within our society. Again, I have no wish to say which one is ‘right’ but merely note the discrepancy. As for myself, I have no problem with the idea that priority in donor organs should be given to young women, though I also see a utilitarian argument for a bit of positive discrimination in the form of a blind eye to ethnicity (ie, there is utilitarian value in maintaining the idea that allocations should not be made on the basis of perceived ethnicity, even though in this case that comes at a clear loss of expected life-years).

The second question concerned the willingness to pre-emptively kill off threats to the lives of others.

The policy reality here is, again, murky. In order to get a conviction for ‘attempted’ acts of terrorism or murder, the police would have to have pretty strong evidence of a high probability that the acts were truly going to happen. A 1-in-a-million chance of perpetrating an act that would cost a million lives would certainly not be enough. Likely, not even a 10% chance would be enough, even though the expected cost of a 10% chance would be 100,000 lives, far outweighing the life of the one person (and I know that the example is somewhat artificial!).

When it concerns things like the drone program of the West though, under which the US, with help from its allies (including Australia), kills off potential terrorist threats and accepts the possibility of collateral damage, the implicitly accepted burden of proof seems much lower. I am not saying this as a form of endorsement, but simply stating what seems to go on. Given the lack of public scrutiny it is really hard to know just how much lower the burden of proof is, or where the information to identify targets comes from, but being a member of a declared terrorist organisation seems to be cause enough, even if the person involved hasn’t yet harmed anybody.

Now, it is easy to be holier-than-thou and dismissive about this kind of program, but the reality is that it is supported by our populations: the major political parties go along with it, both in the US and here (we are not abandoning our strategic alliance with the Americans over it, are we, nor denying them airspace?), implying that the drone program happens, de facto, with our society’s blessing, even if some of us as individuals have mixed feelings about it. So the drone program is a form of pre-emptively killing off potential enemies because of a perceived probability of harm. The cut-off point on the probability is not known, but it is clearly lower than that used in criminal cases inside our own countries.

To the classic utilitarian, if all one knew were the odds of damage and the extent of damage, then one would want to kill off anyone who represented a net expected loss. Hence the classic utilitarian would indeed accept any odds just above 1 in a million when the threat is to a million lives: the life of the potential terrorist (one life) is then worth exactly the expected cost of his possible actions. If one includes the notion that our societies derive benefit from the social norm that strong proof of intended harm is needed before killing anyone, then even the classic utilitarian would raise the threshold odds to reflect the disutility of being seen to damage that norm, though he would quickly lower the threshold again if there were many threats and the usefulness of the norm thus became less and less relevant. To some extent, this is exactly how our society functions: in a state of emergency or war, the burden of proof required to shoot a potential enemy drops drastically as the regular rule of law and ‘innocent till proven guilty’ give way to a more radical ‘shoot now, agonise later’ mentality. If you like, we have recognised mechanisms for ridding ourselves of the social norm of a high burden of proof when the occasion calls for it.

As to personally pulling the trigger, the question for a utilitarian becomes entirely one of selfishness versus the public good, and thus depends on the personal pain of the person who has to pull the trigger. For a completely selfless utilitarian whose pain from pulling the trigger is worth a whole life, the threshold probability becomes 2 in a million (ie, his own life plus that of the potential terrorist), but for a more selfish person the threshold could rise so high that even with certainty he is not willing to kill someone else to save a million others. That might be noble under some moral codes, but to a utilitarian it would represent extreme selfishness.
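For what it is worth, the arithmetic behind both thresholds fits in a few lines of Python. This is a toy expected-value model that treats lives as the only currency, exactly as the classic utilitarian does in this stylised example:

```python
# Toy expected-value model of the pre-emptive killing thresholds
# discussed above, with lives as the only currency.

THREAT_LIVES = 1_000_000  # lives at stake if the attack happens

def threshold_probability(lives_lost_by_acting):
    """Smallest attack probability at which pre-emptive killing has
    positive expected value: p * THREAT_LIVES > lives_lost_by_acting."""
    return lives_lost_by_acting / THREAT_LIVES

# Society kills only the potential terrorist: the cost of acting is one life.
print(threshold_probability(1))  # 1e-06, i.e. 1 in a million

# A selfless trigger-puller whose personal pain from the act is worth
# a whole life: the cost of acting is two lives.
print(threshold_probability(2))  # 2e-06, i.e. 2 in a million
```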

So the example once again shows the gulf between how our societies normally function when it comes to small probabilities of large damages, and what the classic utilitarian would do. A utilitarian is happy to act on small probabilities, though of course eager to purchase more information where possible. Our societies are less trigger-happy: only when there is actual, experienced turmoil and damage do they gradually shift to a cost-benefit frame of mind and suspend other social norms. A classic utilitarian is thus much more pro-active and willing to act on imperfect information than is normal in our societies.

The third question was about divulging information that would cause hurt but that would not change outcomes. In the case of the hypothetical, the information was about the treatment of pets. To the classic utilitarian, this one is easy: information itself is not a final outcome and, since the hypothetical was set up that way, the choice was between a lower state of utility with more information and a higher state of utility with less information. The classic utilitarian would choose the higher utility and not make the information available.

The policy reality in this case is debatable. One might argue that the premise of the hypothetical, ie that more information would not lead to changes but merely to hurt, is so unrealistic that it does not resemble any real policy. Some commentators made that argument, saying they essentially had no idea what I was asking, and I am sympathetic to it.

The closest one comes to the hypothetical is the phenomenon of general flattery, such as when populations tell themselves they are God’s chosen people with a divine mission, or when whole populations buy into the idea that no-one is to blame for their individual bad choices (like their smoking choices). One might see the widespread habit of keeping quiet while others enjoy flattery as a form of suppressing information that would merely hurt and change nothing. In that sense, ‘good manners’ and ‘tact’ are in essence about keeping hidden information that hurts others. Personally, though I hate condoning the suppression of truth for any cause, I have to concede the utilitarian case for it.

The fourth and final question is perhaps the most glaring example of a difference between policy reality and classic utilitarianism, as it is about the distinction between an identified saved life and a statistically saved life. As one commenter (Ken) already noted, politicians find it expedient to go for the identified life rather than the unidentified statistical one, and this relates to the lack of reflection amongst the population.

To the classic utilitarian, it should not matter whose life is saved: all saved lives are to the classic utilitarian ‘statistical’. Indeed, it is a key part of utilitarianism that there is no innate superiority of this person over that one. Hence, the classic utilitarian would value an identified life equally to a statistical one and would thus be willing to pour the same resources into preventing the loss of a life (via inoculations, safe road construction, etc.) as into saving a particular known individual.

The policy practice is miles apart from classic utilitarianism, not just in Australia but throughout the Western world. For statistical lives, the Australian government more or less uses the rule of thumb that it is willing to spend some 50,000 dollars per additional happy year. This is roughly the cut-off point for admitting new medicines onto the Pharmaceutical Benefits Scheme. It is also pretty much the cut-off point for medicines in other Western countries (as a rule of thumb, governments are willing to pay about a median income for another year of happy life for one of their citizens).

For identified lives, the willingness to pay is easily ten times this amount. Australia thus has a ‘Life Saving Drugs’ program for rare life-threatening conditions, covering diseases like Gaucher disease, Fabry disease, and Pompe disease. Openly available estimates of the implied cost of a life vary and it is hard to track down exact prices, but each year of treatment for a Pompe patient was said, at a Canadian conference for instance, to cost about 500,000 dollars. In New Zealand, the same figure of 500,000 is used in the media. Here in Australia, the treatment involved became available in 2008 and I understand it indeed costs about 500,000 per patient per year; around 500 patients born with Pompe will be on this program in Australia (inferred from prevalence statistics). Note that this treatment does not in fact mean the difference between life and death: rather, it means the difference between a shorter life and a longer one. Hence the cost per year of life saved is actually quite a bit higher than 500,000 for this disease.

What does this mean? It means, quite simply, that instead of saving one person with Pompe disease, one could save at least 10 others. In order for the person born with Pompe to live, 10 others in his society die. It is a brutal trade-off that is difficult to talk about, but that does not make it any less real. Why is the price so high? Because the pharmaceutical companies can successfully bargain with governments for an extremely high price on these visible lives saved. They hold politicians to ransom over it, successfully in the case of Australia.
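The arithmetic behind the ‘one identified life for ten statistical ones’ claim is worth writing out explicitly, using the rough figures quoted above:

```python
# Back-of-the-envelope comparison of identified versus statistical lives,
# using the rough figures quoted in the text.

statistical_cost_per_happy_year = 50_000  # PBS-style rule-of-thumb threshold
identified_cost_per_year = 500_000        # quoted annual cost of Pompe treatment

# Each year of treatment for one identified patient could instead have
# bought this many statistical happy life-years at the threshold price:
forgone_happy_years = identified_cost_per_year / statistical_cost_per_happy_year
print(forgone_happy_years)  # 10.0
```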

Saving one identified life rather than ten unidentified ones is not merely non-utilitarian; it also vastly distorts incentives. It steers researchers and pharmaceutical companies away from finding solutions to the illnesses suffered by the anonymous many and towards finding improvements in the lives of the identifiable few. It creates incentives to find distinctions between patients so that new ‘small niches’ of identified patients can be carved out, from which a lot of money can be made. Why bother trying to find cures for malaria and cancer when it is so much more lucrative to find a drug that saves a small but identifiable fraction of the population of a rich country?

So kudos to those willing to say they would go for the institution that saved the most lives. I agree with you, but your society, as witnessed by its actions, does not yet agree, which opens the question of what can be done to decide such matters more rationally.

Thanks to everyone who participated in the quiz and merry X-mas!

Author: paulfrijters

Professor of Wellbeing and Economics at the London School of Economics, Centre for Economic Performance

One thought on “The Xmas quiz answers and discussion”

  1. I think the big problem with the statistical discrimination approach to utilitarianism is that the statistics are themselves products of institutions that may not already be utilitarian, and hence they are not useful decision tools.

    For example you say this about the utilitarian approach to organ donation.

    “Allocate organs where there is the highest expected number of additional happy lives. This thus involves a judgement on who is going to live long and who is going to live happy. Such things are not knowable with certainty, so a utilitarian would turn to statistical predictors of both, using whatever indicator could be administrated.”

    I don’t think they would, because the statistics don’t tell us what you think they do: they are themselves a product of the historical institutions of society. There is a Lucas Critique hidden in all of these hypotheticals.

    To be clear, let’s say you lived in a society where all people were equal in terms of life expectancy and happiness. Then you developed an organ donor system that gave priority to young white women for some political reason.

    Now, society has this group of young white women who are statistically more likely to live longer and happier. Hence, when a utilitarian planner who is happy to use a statistical discriminatory approach makes a separate decision about a new system for, say, emergency services, they will prioritise the young white women, further entrenching their advantages.

    The second question simply highlights the conditional nature of all probability. Let’s say you have 3 potential terrorists who each seem to have a 1:1mill chance of some kind of attack, yet this probability is conditional on the others not attacking. If another attacks, the chances plummet rapidly.

    How do you approach this? Do you kill all three? If so, you’ve killed 3 people to save 1mill. But what if there are more? What if, by reducing the potential attack from one individual, you always increase the chance of attack from another? And how do you know that this cascading effect is not going to occur?

    Hidden in these probabilities are many assumptions which may or may not hold. I don’t think utilitarianism provides a solution to this problem, and hence whether any particular policy choice is appropriately applying utilitarianism is generally unknowable.

    Regarding the fourth question, the utilitarian answer is obviously the statistically saved life, CONDITIONAL upon the quality of the statistics and the verification of the embedded assumptions. However, again, in reality we can’t know a lot here, and when there is a lot of grey I think one should err on the side of individuals rather than statistics.

    As a final general comment though, what is interesting about these questions is that they imply a lot of knowledge of risk and probability, but don’t really function that well in a world of genuine uncertainty. What if the conditions required for the probability are so broad that no one would even attempt to assign a distribution to them?

