Like most people, highly skilled professionals are lazy. They lean on excuses to avoid thinking a problem through completely. This tendency is arguably benign in a well-functioning private market, where somebody will find it in their interest to consider a problem from all angles if it is economically worthwhile to do so, and will hence squeeze the effects of unwarranted laziness out of the system. That pressure is far less potent in government-subsidized industries such as health and education. It is all too easy, and hence all too common, for very smart people working in these industries to stop midway through the critical analysis of a complex problem and settle for a half-baked solution. Often their rationalization for stopping relates in some way to not having perfect information, and often that very absence of perfect information has nurtured the creation of institutional supports for their laziness.
Take the simple example of the typical treatment of people at risk of heart disease or stroke. For years, the achievement of threshold values for LDL cholesterol served as a simple rule of thumb that doctors could use in deciding whether to continue prescribing statins (cholesterol-lowering drugs) to their patients. As in much of medical science, the right approach to managing cardiovascular disease risk is difficult to know for certain. Patients differ in their baseline health, their habits, their family histories, and any number of other dimensions, meaning that a one-size-fits-all approach to assessing and treating cardiovascular risk will almost certainly be less effective than individually tailored treatment plans. Yet, for a doctor who is pressed for time and has incomplete information about a given patient, the existence of a guideline endorsed by medical associations is just too tempting to pass up. More than that, the very existence of the guideline creates the spectre that not following it might open the door to malpractice suits should something go awry. Easier not to think, and instead simply to invoke benchmarks. This process is perpetuated, rather than stamped out, by competitive forces because of imperfections in the market for patient care. Arguably, those same imperfections played a role in the rise of the professional associations that curtail the worst excesses of powerful professionals, in part through the crafting and promulgation of benchmarks.
Other benchmarks promulgated by our institutions support laziness in a similar fashion and encourage a conservative rather than entrepreneurial approach to professional practice. Take the institution of large lectures for the teaching of university course material. It is highly unlikely that a large lecture experience will lead to optimal learning for every one of the diverse students in a given class. But tailoring the learning experience to individual students is just too costly, and too fraught with uncertainty, given that no lecturer has perfect information about the learning-relevant characteristics of each student. (Or so we tell ourselves!) The benchmark tradition of the lecture in this way reins in teachers who might otherwise choose delivery methods even less effective than the lecture, but it also supports a lazy approach to teaching and leads to innovations (e.g., flipped classrooms) being viewed with skepticism. This dynamic, again, persists because of market imperfections.
This is not to say that guidelines and benchmarks are unwarranted on economic grounds. In the examples above, the existence of a benchmark has likely benefitted untold legions of patients and students whose true response profile was sufficiently close to average and who, in the absence of the benchmark, might well have ended up either with no service (due to cost barriers) or at the mercy of some unhinged doctor or teacher. The main problem arises for people at the extremes of the relevant distribution. Determining the socially optimal behavior for the supplier therefore involves knowing how thick the tails of that distribution are, as well as the cost of the additional effort required to achieve a better result via customized, rather than benchmark, service to people in those tails. As technology progresses and the population grows, so too will our ability to collect enough data on the people in the tails to develop effort-saving guidelines that cater for most of them as well, meaning that this is, in the grander scheme of things, a short-run problem that is already receding and should continue to recede over the next hundred years.
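The trade-off just described can be sketched as a toy calculation. Everything in it is an assumption for illustration only: the quadratic "harm" of one-size-fits-all service, the fixed customization cost C, and the two hypothetical populations (one thin-tailed, one with occasional large deviations) are not drawn from any real data.

```python
import random
import statistics

# Illustrative assumptions (not from any real study):
# each client's "response" x is drawn from some distribution; benchmark
# service treats everyone as if x equalled the population mean, so its
# harm grows with (x - mean)^2.  Customized service removes that harm
# but costs an assumed extra effort C per client.

random.seed(0)
C = 1.0        # assumed extra cost of customizing for one client
N = 100_000    # Monte Carlo sample size

def expected_social_cost(draw, cutoff):
    """Benchmark everyone within `cutoff` of the mean; customize the tails."""
    xs = [draw() for _ in range(N)]
    mu = statistics.fmean(xs)
    total = 0.0
    for x in xs:
        if abs(x - mu) <= cutoff:
            total += (x - mu) ** 2   # harm from one-size-fits-all service
        else:
            total += C               # cost of tailoring to this client
    return total / N

thin = lambda: random.gauss(0, 1)    # thin-tailed population
# crude thick-tailed population: 5% of clients deviate far from average
thick = lambda: random.gauss(0, 1) if random.random() < 0.95 else random.gauss(0, 5)

for name, draw in [("thin tails", thin), ("thick tails", thick)]:
    benchmark_only = expected_social_cost(draw, cutoff=float("inf"))
    customize_tails = expected_social_cost(draw, cutoff=1.0)
    print(f"{name}: benchmark-only {benchmark_only:.2f} "
          f"vs customize-tails {customize_tails:.2f}")
```

Under these assumptions, the gain from customizing the tails is modest for the thin-tailed population but large for the thick-tailed one, which is the point of the argument: whether the extra effort is socially worthwhile turns on how much probability mass sits far from the average relative to the cost C.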
In the meantime, though, what benchmark recommendations might be offered to those working in, or being serviced by, such industries? First, if you really care about others’ outcomes and have some flexibility to vary the intensity of your effort, resist the temptation to merely lean on guidelines when you supply your services if you suspect that the tails of the population you are serving are thick. Second, hope that you are average—at least in some dimensions.