A fable of Eunuchs, Praetorians, and University funding cuts.

Imagine yourself in the mythical Land of Beyond, where you need minions to do a dirty job that men with honour would refuse to do. A classic trick in this situation is to pick people despised by the rest of society, who are thus dependent on protection and will simply do what is asked of them.

The Chinese emperors hit upon this truth when they started to surround themselves with eunuchs, despised by the rest of Chinese society and thus fiercely loyal to their protector, the Emperor. The Roman emperors, similarly, made a habit of surrounding themselves with freed slaves who were despised by other Romans, as well as with a dedicated palace guard (the Praetorians) who were the only militia allowed in the vicinity of Rome.

The European colonialists too used this basic ‘dirty dozen’ technique when it came to keeping a large population in check with a minimal presence of their own, particularly in Africa, by elevating some small despised group (ethnic or religious minorities) as the preferred group from which the senior administrators were drawn. This small favoured group would get personal benefits (riches and influence) but in return they would do whatever the colonizers wanted.

To see the relevance of this for university cuts in the Land of Beyond, you first need to step back a level and imagine yourself to be the Vice Chancellor of a second-rate university that brings in, say, a billion ‘Beyond’ dollars a year, out of which some 300 million is money you don’t really need to generate that 1 billion. It is ‘potential profit’ if you like.

Now, your first thought will of course be to give as much of this money to yourself as you can. That is not so easy though: in Beyond, universities are non-profit organisations nominally run by senates and full of academics who like to monitor and criticise you. You would never get away with giving yourself multi-million dollar salaries and huge offices if academics are really watching your every step.

So in order to get more of the profit, you need to subdue two groups, the academics and the senate. You subdue the academics by keeping them busy with ‘compliance’ and having a lot of systems in place to punish them if they become pesky. You thus include in your rules that anything that harms the reputation of the university is a sacking offence. You put yourself at the top of the committees that decide on professorial promotions and academic bonuses so that you are their direct boss. You appoint hundreds of administrators to monitor the media, teaching, and student-related activities of the academics with the purpose of keeping them quiet and punishing them when they get out of line.
You subdue the senate by overloading them with information (for which you need again more administrators) and by keeping them happy with luxuries and gifts. Over time, you attempt to get control of the mechanism via which new members get to be in these senates.

Now, the essential problem you face in this as a VC is how to ensure that the people helping you with your take-over plans are somewhat loyal to you rather than to something as silly as the goals of the university or academia or even to the needs of Beyond. It is loyalty to yourself that you need in order to eventually be able to get away with giving yourself huge amounts of money.

You remember your history lessons and realise that what you need is a set of eunuchs: people despised by the academics in your organisation who will thus have the same incentive as you have to subdue the academics and grab as much of the university resources as possible.

What is the equivalent of eunuchs in universities? Why, non-academics of course! Better still, non-academics to whom you give academic titles, for then they will be even more despised! Hence you pick the most efficient bullies you can find, call them all professor and put them in charge of the divisions that subdue the academics and that send mountains of information to the university senate to ensure they will just go along with whatever you happen to ask of them at the end of some sumptuous occasion.

Due to your brilliance and foresight, the trick works like a charm and you find yourself earning well over a million, with several huge offices, and in a position to bargain for even more kick-backs from outsiders who want to use parts of the university for their own ends (property developers and the like).

Now imagine yourself in the layer yet higher: you are now an ambitious paymaster in the Capital of Beyond, someone who nurtures a reputation for being able to get things done even if they might not really be in Beyond’s best interests. You too have a control problem for you want all kinds of things from universities. You would like the universities to keep the population happy by churning out cheap degrees to domestics. You also want universities to sell visas to smart overseas students by means of high fees for almost no education (cross-subsidising those domestics). Basically, you want universities to abide by whatever fancy drifts into the head of your current minister.

The control problem you have as a ‘wheeling and dealing’ senior civil servant in Beyond is again those pesky academics: they are self-righteous, not all that interested in your opinion or even your money, and wouldn’t easily go along with these plans. They might well flatly refuse to sell visas to foreigners because they would baulk at short-changing the education given to those foreigners. Indeed, they would probably laugh in your face if you suggested that universities should fall in line with, say, your wish to have a campus in the middle of nowhere just because it is a marginal constituency.

Just imagine what confident academics would do if you told them to cut their budget by 900 million! Why, they might do something as bold and brash as to honestly tell their students that there are no funds to properly educate them. Imagine the political fallout of such honesty by a bunch of self-righteous academics who won’t simply do your bidding! No no, it is quite clear to you that the last people you want leading universities are academics. You want leaders who know what you really mean when you talk about ‘university accountability’, ‘stakeholder management’, ‘strategic visions’ and ‘preparing for the future’.

So the senior Beyond bureaucrat too finds herself in the situation of needing eunuchs in charge of universities. You don’t mind if they get some private benefits out of the arrangement as long as they do your bidding and do not rock the boat politically.

Now think a step higher again and consider why Beyond might have fixers at the top of the ministries …..

 

Paul Milgrom’s 65th Birthday

My PhD advisor turns 65 today and here, at Stanford, we are having a conference in his honor. I made some remarks that I thought I’d post here.

I am here to talk about Paul’s contributions to applied theory. While Susan and Yeon-Koo have talked about theoretical contributions that so many in this room associate with Paul, to the wider profession, his main contribution is somewhat different.

Take a look here at his most highly cited work. With just a couple of exceptions, it is all applied theory. And moreover, when you look at where those citations are coming from, it is not economics. It is management, strategy and finance. In other words, Paul is the most significant theorist in business and management today, and possibly ever.

How did this happen? To give some context, there is really a schism in economics and social science in general. It surrounds the issue of complexity. There are many social scientists who think the world is too complex to make simplified theory useful. When you use specific assumptions (like rationality, expected utility or equilibrium) or, more commonly in applied work, particular functional forms, they argue that you miss so much that what remains is useless.

Paul’s view of the world, it seems to me, is that complexity must be respected but that our tools of economic theory can guide us as to their own appropriateness. In that respect, simplicity is a virtue and is manageable so long as the tools and methodology applied are understood.

Take the famous result in agency theory of Bengt and Paul’s that simple wage functions can be optimal. Everyone knew those functions were employed in practice but the ‘informal’ reaction was that it was a response to the difficulty of doing more, or a saving of cognitive costs, or a lack of skill. Paul said no, it can’t be that. There will always be a smart agent who would improve it and then we would see heterogeneity. Instead, the complexity of the world itself would give rise to a simple response.
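To be concrete about what ‘simple wage functions’ means here (this is my textbook gloss, not Paul’s own wording): in the standard shorthand, an agent with constant absolute risk aversion r chooses effort a at cost (c/2)a², output is x = a + ε with normally distributed noise of variance σ², and attention is restricted to linear schemes

\[
  w(x) \;=\; \alpha + \beta x, \qquad \beta \;=\; \frac{1}{1 + r\,c\,\sigma^{2}} .
\]

The Holmström–Milgrom contribution was to show that, in a suitably rich dynamic environment, such linear schemes are actually optimal rather than merely convenient, which turns the ubiquity of simple contracts from a puzzle into a prediction.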

That is one way to read all of Paul’s work. Simplicity must be a response to the complex environment. And simple theoretical treatments can be immediately generalised if those treatments capture key trade-offs. Paul taught us where to look.

What I learned from this is that in applied theory there is a symbiotic relationship between the real world phenomenon, the formal model and its intuition. And there are feedbacks between all three in the exploration that is economic theory. I had the pleasure of observing Paul, and John, during the heyday of their foray into organisational economics. Time and time again, they would take an individual transaction (Paul contracting with a builder, say; Paul thinking about spectrum packages; observations of a Toyota factory in Japan) and realise why existing theory just couldn’t apply. In that process, they would identify and relax the key assumption and draw new implications (the job design should change; you have to use computer technology to deal with substitutes and complements in packages; that change will be hard) and discover it in the real world. They would leave behind a framework for empirical researchers to follow and that is where all those citations come from. They seep through MBA curricula. It is a tremendous legacy.

Not only that, Paul appears to yearn for ‘beauty’ in his theories. If it is a mess, you must be missing something. You haven’t identified the key trade-offs. These papers are beautiful. I have taken this to my own applied work. Avoid contrivance. Understand intuition. And above all, become a useful theorist. That will make theory useful.

Now for us mortals this is a challenge. Paul lights the path because it comes easily to him. I remember him lamenting to me that it took him a whole day — a whole day — to get the model right for a paper. But that doesn’t mean that we should not aspire to the same.

Applied theory is an area that continues to have issues finding its place in economic research. But Paul has, in many respects, allowed good applied theory to flourish and rise to a new standard.

In conjunction with the conference, everyone involved contributed to his Wikipedia page as a birthday gift. Suffice it to say, this was greeted with enthusiasm and it produced one of the most comprehensive entries of any economist (and perhaps the longest, in terms of bytes, of them all). Several Nobel prize winners contributed so I think it is safe to say that quality is high. Here is Al Roth’s account.

The Role of Research in Business Schools

In the Financial Times, there was a feature piece interviewing Larry Zicklin who wants to eliminate research funding and promotions for academics in business schools. Naturally, I disagree. I wasn’t the only one. UTS’s Timothy Devinney published a comment on that post that he gave me permission to reproduce here.

Comment by Professor Timothy Devinney:

It is interesting how over my 20 years as an academic I have heard this sort of logic again and again and again. Invariably it is from adjunct faculty with a more ‘professional’ background complaining that they do not understand what it is that academics do, why they do not ‘teach’ more, or that their promotions should be based more on teaching. Unfortunately such arguments, while valid to the individuals who make them, are based mainly on faulty logic and a basic misunderstanding of what is going on. For example, whenever I go and work with a company I am amazed at how much time managers waste actually doing nothing but monitoring and interacting with other managers. Why are they not working with customers more? Why are they not out in the field rounding up more business? Isn’t it inefficient to have them in meetings so often, invariably doing little more than playing power games against other managers? Of course, this is a naive viewpoint and it is based on my failure to understand what these managers do. Ditto Mr. Zicklin’s view of academics in business schools. Here are some points that matter.

First, his view of teaching is dominantly one of information dissemination. Having been at the top and bottom of the academic food chain (being both at U. Chicago and now in Australia at what is dominantly a teaching factory) I have seen the differences. The students at Chicago get knowledge at the coal face from people who understand what is both leading edge and sophisticated. Students here get commoditized information delivered by individuals who only know what they read because they are not leading edge scholars. Indeed, where the MOOC tsunami will hit is on this commoditized end of the business.

Second, his viewpoint is based on the ‘leech on society’ view of academics. I argue that good scholars are some of the most entrepreneurial people in the world. Imagine Mr. Zicklin working in a business in which the failure rate is > 90% (which is the rejection rate of most leading journals). Also, it does not matter where you reside or which university you are at, since the rejection is based on blind review. Imagine your typical corporate manager working in an environment in which their work was evaluated blindly and in 9 cases out of 10 rejected as being inadequate. Imagine also those individuals attempting to run projects on little more than scraps of funding (for an average academic on what is known as a 40:40:20 contract, the actual cost of the research amounts to only about $50,000 per year). Most companies spend more on business class airfare for managers than this. Most universities spend 20 times this on the basketball coach.

Third, most good academics could easily make more money outside academia than inside it. When I received my PhD I had an offer from one of the major consultancies. It was three times my academic salary. But I remained an academic because I believed in what I wanted to do. I argue that the difference between managers and academics is that managers give up what they love for money while academics give up money for what they love. If you take away the scholarship aspect of this then the equation skews toward money. So if I am going to sing for my supper then I want to be paid for singing. Unfortunately, as soon as that occurs I end up choosing not to be an academic. In reality, we have serious problems getting good brains to commit to getting PhDs and hence the pool of potential future faculty is actually drying up. If anything the premium needs to be bigger, not smaller.

Fourth, Mr. Zicklin’s argument that promotion is all about research and not teaching is just wrong. You cannot get promoted anywhere as a basket case in the classroom. Indeed, nearly every academic I know is quite good to very exceptional in the classroom. It is also the case that, in the instances I know of where we looked at exactly this, we found that our best scholars were our best teachers. So this idea that there are ‘teachers’ and there are ‘researchers’ is just nonsense. The best scholars are on average exceptional at communicating. Mr. Zicklin’s problem is that he is basing his viewpoint on myth and exceptions and not evidence. However, in the end, if your best scholars are your best teachers the institution must make a decision as to the allocation of their time. Unfortunately, good scholars are rare and institutions cannot replace them as easily as they could one-trick teaching ponies.

Finally, the fact that academic journals are not read by managers is absolutely meaningless. These journals are not meant for managers. That is why you have HBR, Sloan Mgt Review, McKinsey Quarterly and other outlets. Any good journalist or writer will tell you that you write to the audience. If you want to communicate with managers you do it differently than when you speak to other scientists. As soon as you attempt to write to everyone you actually communicate with no one. I personally am the sort of academic that communicates to broad audiences (like my colleague Pankaj) but I do not expect managers to read my academic articles. Also, in a response to Freek Vermeulen on this same topic (also in the FT), I argued that we as academics influence practice one student at a time by how we do what we do and what we pick to have in our classes and how we communicate in public forums. Many of the examples above are good examples of others. And there are many many more.

So while Mr. Zicklin’s arguments appear to be logical and reasonable, I would argue that you need to be careful about what you wish for. There is more than one tsunami approaching and my view is that the more dangerous one is that fewer and fewer potential scholars are choosing to be academics because the personal benefits of such a career are being eroded while the financial compensation is not sufficient to offset this. If I had to make the decision today that I made 20+ years ago I would not go into academia. I would chase the money, cash out and then become an adjunct faculty member writing opinion pieces for the FT while living the life of the casual academic.

Are there unhelpful mathematical models of economic phenomena?

Take your bog-standard first-year economics story of why money (sea shells, coins, notes, bank statements) exists. Money, you will be told, is a means of exchange, a store of value, and a unit of account, thoughts going back to David Hume (18th century) and earlier.

When explaining the idea of exchange to students you say things like ‘you can’t exchange a hundredth of a sheep for a loaf of bread so you want something to represent the value of a hundredth of a sheep, and in any case it’s a long slog to the market carrying a sheep around’.

When explaining the idea of a store of value you say things like ‘You would like to be able to consume things when you are old without working when you are old. That means you need to save up wealth in the form of something that doesn’t perish. Sheep perish, gold does not’; and when explaining the unit of account idea you say things like ‘we all think of the value of things in terms of a numeraire, such as that milk costs 1 dollar per liter and flour 2 dollars a kilo. None of us think in terms of 1 liter of milk being worth half a kilo of flour. Given many different products, it is more convenient to think of the value of each of them in terms of something you can compare across these goods. Money performs that role and you will find that even when the unit of money changes (such as the move from the Deutschmark to the Euro) people will continue to calculate everything back in terms of the old money for many years’.
Simple stories, no? And most students will ‘get the point’ of each of these three stories. They will see the difficulties of exchange with lumpy goods that cannot easily be stored and exchanged, and they will see the point of being able to save up for a later date and that requires some form of storable money.

Simple though these arguments are, you will be hard-pressed to find mathematical models of them that anyone would recognise as remotely capturing these verbal arguments. It tells you something about the limits of mathematical models to think through why recognisable models of money do not exist. So bear with me as I take you through the actual difficulties of modelling money and how those difficulties end up as unhelpful advice from theoretical economists to policy makers.

Think of the actual difficulties involved in modelling the story of money as a medium of exchange. Before even thinking about money, you have to start from a model with exchange. This means you need to model the production of more than one good and you must build in a reason, like comparative advantage, why individuals do not simply produce all the goods they need by themselves. For realism you would want the goods to be lumpy, perishable, and to require long-term investments. After all, sheep herding and crop-growing do not happen overnight and neither sheep nor apples can meaningfully be stored for very long or exchanged in halves.
You immediately hit your first mathematical snag right there: if production is lumpy (you can’t produce half-apples), then you won’t get the simple outcome that someone will spend all his time on what he is best at. An individual could optimally spend his time by producing one sheep and two apples even though he has a comparative advantage in sheep, simply because he can’t make exactly two sheep. If you want lumpiness in your model, you thus would have to solve the problem of how a person would optimally allocate a fixed amount of time over lumpy investment projects. This is known in the Operations Research literature as the knapsack problem (in which you need to decide which lumpy goods to put into a knapsack of limited size) and it is known to be ‘NP-hard’. Simply put, an optimal solution exists, but there is no known method guaranteed to find it quickly once the number of goods becomes large. Solving just that knapsack problem for a single individual is already something that may take a computer years if you choose the bundle of potential goods to be large enough, and there will be cases in which you will find that even with comparative advantage the sheep herders may grow enough apples to not need exchange.
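To make the snag concrete, here is a minimal sketch, in Python, of that allocation problem written as a standard 0/1 knapsack: a producer with a fixed number of hours picks which lumpy projects to undertake. The sketch is mine and the project names and numbers are purely illustrative; it is not taken from any model discussed here.

    # Illustrative sketch only: the lumpy time-allocation problem as a 0/1 knapsack.
    # Each project takes a whole number of hours and yields some value to the producer;
    # the question is which projects fit best into a fixed time budget.

    def best_allocation(projects, hours_available):
        """projects: list of (hours_needed, value) pairs; returns the best total value."""
        # Classic dynamic-programming solution. Exact methods like this get expensive
        # as the number of projects and the size of the time budget grow, which is
        # the tractability problem described above.
        best = [0] * (hours_available + 1)
        for hours_needed, value in projects:
            for hours in range(hours_available, hours_needed - 1, -1):
                best[hours] = max(best[hours], best[hours - hours_needed] + value)
        return best[hours_available]

    # Example: a sheep takes 6 hours and is worth 10, an apple crop takes 3 hours
    # and is worth 4, a vegetable plot takes 5 hours and is worth 7.
    print(best_allocation([(6, 10), (3, 4), (3, 4), (5, 7)], hours_available=10))  # -> 14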
How do you solve that snag, which incidentally arises in all models of production? The reality is that you don’t because solving just that one leaves you with a model in which you can solve little else and in which you are not assured of any real impetus for exchange. Hence you ‘simplify reality’. You thus presume that there is no such thing as a lumpy good and that people spend their time producing a ‘continuous’ amount of goods, say, 3.271 sheep or 14.231 apples. Without lumpiness, people will specialise in making one thing and have a reason to trade. Note that you thus have already given up on describing the most intuitive reasons for having money around: you can no longer meaningfully talk about the difficulties of exchanging a hundredth of a sheep for half an apple since you now have presumed a world in which you produce sheep in hundredths and apples in halves.
Moving on, the next modelling problem you hit is that it must be the case that different individuals happen to want what the other produces, a ‘coincidence of wants’. Indeed, you want some kind of place (a market) where people come to exchange what they have produced. In model-land you must answer every counter-factual. You must thus have a reason why traders would use money instead of giving each other credit or just exchanging bundles of goods (since goods are now not lumpy, you can just go to the market with your 2/3 sheep and exchange it in one big free-for-all for all the goods you need). Such thoughts may sound absurd to you, but working them through has occupied really good mathematicians for years. It is in fact nigh on impossible to solve models in which people do not know exactly beforehand what will happen in a market.

You see, as soon as you say that a person does not know beforehand what other people have produced and at what prices they might trade, you are in the world of limited information and in the world where it is possible that people make mistakes (go to the market empty handed, produce the wrong things, etc.). You are then in the business of having to specify how people form expectations about what others would do and what prices they would trade at.

You are then also in the business of working out whether there are perhaps multiple equilibria (i.e. different configurations of the whole economy) and the issue of how people who don’t know each other could actually coordinate on a particular configuration. You then for instance have to contend with the possibility that nobody shows up at the market because they expect nobody else to show up. You have to contend with the possibility that you get the wrong prices, under which there is no specialisation at all.

You have to contend with the problem that the only people to show up wouldn’t want to trade with each other because they have produced the same thing and you have to figure out how a group of people would actually arrive at a price (or prices). Each of these sub-problems is considered exceptionally hard by theorists: only under very specific mathematical assumptions can you be absolutely guaranteed that the problems above do not occur.
Hence, what do you do? Well, again, the reality is that you assume away all these problems. You simply make those assumptions that guarantee you that everyone who produces something is ‘magically’ matched up with someone else who has something they want to trade with. Also, you now presume the existence of some kind of all-powerful benevolent entity, say god. You need such an entity to do away with elements in your model you cannot model but need anyway, such as how prices arise before any exchange takes place (if prices change during exchange one gets into exceptionally complicated dynamics where you need to start talking about the expectations that people have of possible price paths). So you invent a god that takes care of such issues. God, in his first incarnation as a Walrasian auctioneer, announces the prices at which everyone is willing to trade, whereby everyone believes god and acts accordingly. God, now in his second role as a benevolent and completely trusted government, then also provides a means of exchange that is not perishable, i.e. money.

Usually, a third sleight of hand is needed to get a workable model and that is to have a situation in which there is no such thing as a mistake because there is no such thing as expectations that are incorrect. This of course basically presumes away the original problem you were starting out to model, but that is an almost inevitable casualty of the wish to have a tractable economic model.

What kind of models of money do we end up with? To my taste, the best that mathematicians have come up with is the story that some sheep producers have a craving for eating apples in the night, but they are themselves just innately incapable of producing apples and their sheep always die at the end of the day (i.e. they must be eaten before the end of the day; new ones are only born at the start of the next day). This means that the sheep herder must sell his sheep during the day to the apple maker whilst buying the apples during the night (apples also perish at the end of each half day so he can’t trade them during the day). In a modelling sense, that ensures you the ‘coincidence of wants’ you need to have a role for exchange and ensures that sheep herders and apple farmers cannot just trade their produce. By assuming that they do not trust each other, but that they do trust the provider of money, you ensure that they do not just trade promises but use money for their trades. Within this kind of basic set-up you can even introduce monetary policy in the form of allowing god to hand out more money to specific groups or to reduce the value of the money in circulation. Whole ‘policy edifices’ have been built upon the basic structure of sheep herders having cravings for apples in the night. For those who are interested, I am talking about the model by Lagos and Wright (2005) and the many extensions of their basic idea.
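For readers who want the skeleton behind the fable, here is a compressed sketch of the timing in a Lagos and Wright (2005)-style environment, written in roughly the standard notation; it is my shorthand rather than a reproduction of their paper. Each period has an anonymous, decentralised ‘day’ market, where a matched buyer enjoys u(q) from quantity q, the seller bears cost c(q), and, because traders are anonymous and cannot be punished for reneging, the buyer can hand over at most the money he carries. At ‘night’ everyone trades in a centralised market where money has value φ and preferences are quasi-linear:

\[
  W(m) \;=\; \max_{x,\,h,\,m'} \Big\{\, U(x) - h + \beta\, V(m') \,\Big\}
  \qquad \text{s.t.} \qquad x \;=\; h + \phi\,(m - m'),
\]

with x consumption, h hours worked, m' the money carried into the next day market, and V the day-market value function. The linear disutility of hours makes W linear in money holdings, which is the trick that keeps the distribution of money degenerate and the model tractable.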
Now, anyone in his right mind would laugh out loud at the story above as it comes nowhere close to the historical stories told about why we have money and what its role is in the economy: big historical problems in the emergence of money concerned the fact that there was no trusted government, and the value of money had a lot to do with the actual costs of information and transportation, costs that the story told above had to assume away. Yet the story of apples and sheep above, believe it or not, is one of the dominant stories told in ‘micro-founded’ monetary economics. It is in that kind of model-economy that they talk about money, credit, banks, regulation, etc. If it weren’t for the fact that it is deemed cutting-edge research, you would have to cry.
I hope you will take my word for it that models in which money exists because of savings or as a numeraire good are equally hard to set up, and hence, as far as I know, such models don’t exist at all.
The value of the actual models of money is mainly as a proof of concept, i.e. that you can think of a micro-model in which money emerges and where you can base the emergence of money on at least one of the underlying micro-motivations you think are important for the existence of money (the advantage of having a more varied consumption bundle). It is not the model you would have wanted but at least you can have it in the back of your mind as an example of the micro-mechanisms that are relevant.
The problem with the monetary model talked about above is that it fits so poorly. It hardly fits the many historical examples we know of the emergence of money, nor does it capture the problems we face today when thinking about money markets (trust in the institutions, the incentive problems inside organisations, the investment problem). Hence it is singularly unsuitable to use as a mental laboratory for the policy problems of today, or even as a descriptive model of the actual roles of money in our economy.

The problem of poor fit carries over to unhelpful advice: despite the fact that it is such a poor fit to reality, it is the only ‘game in town’ when it comes to micro-models of money. A most unfortunate and destructive phenomenon then appears, which is that the only game in town becomes the truth to a whole set of people making their careers on the back of it.

All the potential advantages of models become a disadvantage when a poorly-fitting model is taken too seriously. One potential advantage of models is that they can be the codification of previous knowledge and as such a good model is a quick way of conveying a lot of knowledge to the next generation who don’t have to learn what reasons went into the construction of the model in the first place.

This now becomes a disadvantage: the new generation that looks to write papers ‘on money’ need know nothing about the history of money or its uses today, only the dominant model, and so will come up with twists and extensions of something that is innately unsuitable for answering any interesting question. Yet that new generation will be blissfully ignorant of the uselessness of what they are doing because, unlike the originators of the first models of money, they will lack the historical database in their heads of what actually goes on. They are simply proving their worth by being more acquainted with the mathematical ins and outs of these models than anyone else, and that is what supplies them their daily dinner, not whether the model is useful to anyone else.

Another potential advantage of a good model is that you can make consistent statements instead of waffling on incoherently. One real advantage of model-land is that it is fairly easy to spot someone who is not capable of understanding models. This advantage also becomes a disadvantage in a model that fits poorly because you will see a great proliferation of consistent statements that are based on poor abstractions of real phenomena. You might term this the proliferation of ‘precisely wrong’ statements.

And it is a cop-out to say that these precisely wrong statements are not intended to be taken literally: despite being mere models, the adherents deliberately use words that convey their supposed usefulness, such as monetary policy, government, banks, etc. The pretense of usefulness pervades each paper and each grant proposal using these models. Worse still, that modelling community is a group with a big incentive to pretend that the assumptions made for convenience are ‘actually true’, i.e. it is a constituency of individuals with an incentive to presume there is no such thing as transaction costs or a trust problem when it comes to money. When such people become important they will pooh-pooh those who make different assumptions and force them to first invest in their models. In short, a poor model that is taken seriously becomes a part of the problem.

Would you also have the same problems if monetary economics were mainly based on a set of historical case studies and an awareness of the problems faced today by economic actors? Unlikely, because you then at least have set up an ultimate goal of the discipline, which is to understand how the world came to be as it is and to help economic actors shape their world to their advantage, i.e. you are grounding your discipline in historical reality and real world problems. Having said this, one should not be blind to the disadvantage of a more verbal discipline. The disadvantage is that when knowledge consists of a collection of examples and lessons, there is more room for the wafflers of this world to ply their trade, and there are millions of eager wafflers around.

Are there any good economic models, you might ask? I believe there are and my prime example would be Industrial Organisation models of competition and market interaction. These are the Cournot models, Stackelberg models, models of complementary investments in vertical markets, oligopoly models, models of the internet as a platform, etc. The nice thing about these models is that the motivations they presume of their actors (pure greed) are pretty well spot-on and that it is not that hard in reality to see what kind of market interaction is happening, i.e. which of the I/O models to use.

Though it is hard to measure for a statistician, it is not so hard to spot as a human whether, say, the oil companies are engaging in collusion or not. It is not hard to spot a cartel, or the basic information structure of a market, nor is it hard to spot the structure of investment complementarities. In short, I/O models can do a remarkably good job of describing the particular aspects of reality one can optimally intervene in, which is of course why they are so central to the work of regulatory authorities and why, for instance, auction design on the internet is done by mathematically schooled geeks. They need to know nothing of the history of auctions to nevertheless be damned good designers of auctions as long as they understand the models and have learned to spot the market patterns around them.

There are thus good models out there and the groups of disconnected geeks working on extending them are, often to their own surprise, doing something useful with their lives. We wouldn’t want to go back to waffling in those areas. The problem is thus not the existence of mathematical models per se, but rather that there are aspects of economic reality where the best we can do is a bad model.

Is money the only area where we can do no better than bad models that are worse than useless when they are taken seriously? Alas, no. What goes for money goes for many economic phenomena. To have an economic model where growth is driven by specialisation (which is what most historical economists believed was the engine of growth) has so far been beyond us, which is why we have ended up with these ridiculous representative agent models. What the pragmatists believe is true about specialisation can’t be modelled by the best minds in math econ land (this is not to say there are no models of specialisation, simply none that get close to illuminating the path-dependence, trust, and institutions that sustain it). Satisfactory ‘disequilibrium’ models of recessions also simply don’t exist. Models of human behaviour drawing upon more than two of the known ‘irrationalities in our make-up’ are also too hard to solve. The list goes on and on: if one insists on consistent mathematical theorising from ‘micro-foundations’, nearly all of the big drivers of economic growth and economic institutions are beyond our ability to model even remotely realistically.

Mathematical models are hence in many areas a problem because they fit poorly but nevertheless live a life of their own, taking up valuable mental time of smart people, leading individuals to think about the wrong problems, leading people to think in terms of the wrong assumptions, motivating statisticians to measure the wrong things, and divorcing their discipline from reality.

Suppose you believe all this, but nevertheless want to make progress in disciplines by doing proper science, differentiating yourself from the wafflers. What is ‘proper science’ in an area where we cannot make much mathematical headway and hence where we can be reasonably certain that every grand story we tell (in maths or in words) has inconsistent parts to it? That’s the subject of a future blog….

What an academic article of the future should look like

There is much discussion these days about the future of scholarly publishing. Much of this surrounds the value of traditional publishers. When challenged, those publishers point to the value and potential value they create. Here is Elsevier responding to a recent boycott led by mathematician Tim Gowers:

And we invest a lot in infrastructure, the tags and metadata attached to each article that makes it discoverable by other researchers through search engines, and that links papers together through citations and subject matter. All of that has changed the way research is done today and makes it more efficient. That’s the added value that we bring.

One of those elements of added value is the format for the published article itself. Publishers are so confident that this adds value that they permit working paper versions — prior to getting the publisher magic touch — to reside freely on the web. To be sure, articles are typeset and tables and figures are cleaned up to look good on paper. But does all that make it better for those who are looking for knowledge?

Publishers know there is more potential there. Elsevier has launched its “Article of the Future” experiment to show what digitisation might make possible. Here, for instance, is their representation of a future mathematics article. The main text looks like a working paper. It isn’t a pdf but a webpage and the equations are rendered in LaTeX. And it looks awful. Indeed, for reasons that are perplexing, it is hard to read. Now I’m assuming stuff like that can be fixed but having it out there doesn’t inspire confidence.

But let’s focus on other elements that they have put in. First, they have an interactive element to allow you to play with a graph of a particular formula. That looks like a good feature to have available to readers. Second, they have included a video abstract. This could be a good thing but it shows one of the authors in front of a blackboard. This might be useful; it is hard to say. I have to admit that he did look the part. But I can imagine that seminars could be embedded here and that such things may be of use to readers. (Some academics have taken it upon themselves to provide such materials on their own websites; here is Glen Weyl.) In another prototype, there are videos all through the article. Third, there are hyperlinks everywhere. The most useful of these link in to Elsevier’s database for references.

The things Elsevier are trying to do here are sensible from an adding value perspective. But they augment print and are still fundamentally based in it. The problem is that as the technology for sharing information changes we can refocus on what we should really care about. Print was a repository of knowledge. It allowed access and catered to the person who would spend time with an article. The additions Elsevier proposes are all about spending more time with the content. But I would argue that that is a narrow view of scholarly communication.

If you are like me, when you look at a scholarly article, most of the time, you want to spend as little time with it as possible. You want to look at it, see if it is relevant and get out. Better still, you might want to find what you are looking for quickly. The more context you are required to have, the worse the experience is. Now, to be sure, there are occasions where you want as much as an article can give you. Invariably, print versions come up short there, with appendices moved elsewhere to save on print and little in the way of extra content such as PowerPoint presentations or video thoughts from the authors.

But how can you cater to those who need to access knowledge efficiently versus those who need to access it deeply?

Here I am going to present my approach to doing that. It is focussed on reading and so I am imagining reading articles on a tablet. But I want it to be efficient. To this end, I took an old paper of mine published in Economics Letters in 1996. The idea was to find something short but also mathematical. If you have access, here is the article as currently represented on Elsevier and here is what the printed version looks like.

What I did was take the text and reformat it using iBooks Author. If you have an iPad, click on this link and open the attachment in iBooks. If you don’t have an iPad you can see a little of what I have done with this pdf version. Notice that that version is already more readable than the published version. If you don’t have an iPad you can also watch the following, which gives a rundown of the interesting features.

To be sure, those features are threefold. First, you can easily adjust the font size for easy reading and you can scroll through the article very quickly rather than being confined to pages. Second, the idea is that proofs are things that are for in-depth reading while other stuff is not. So in portrait mode, the article is presented in a light form but as you turn it to landscape you get the full thing at the point you are at. You can hide proofs, literature reviews and all manner of other stuff that is secondary to the knowledge but often embedded in the article, requiring the reader to sort through it. Also, on the proof front, I presented the proof as a PowerPoint as well, which allows you to work through it. These are often better than textual proofs as they allow you to present steps and build through. There is much more that can be done there but the point is iBooks makes it easy to embed these things and call them up.

Finally, and this is the main point: I could do this myself. The author is the best person to think about how to present the material in a paper. We take so much away from authors in the whole editing for print process and this harms scholarly communication. These tools allow authors to put in the enhancements as they see fit and, indeed, compel them to think more about the reader. I know it did with this almost 20 year old article of mine and I have to admit I didn’t go nearly far enough there. Imagine thinking about that when presenting current research.

You might wonder: how long did all this take? Well, I did it initially with iBooks Author 1.0 and was exploring as much as writing. So it took about 6 hours. Now that I know what I am doing it would take me about 4 hours for a regular length article. That is not much for greatly improving the experience for the readers of your work. When you spend years on a paper, 4 hours making it easier to read doesn’t seem much of an ask. If iBooks were optimised for this, it would take even less time.

This is just a start but I think it shows that there is real potential in using new tools to enhance scholarly communication. But the key is to put the reader front and centre and to ensure that authors can more directly communicate and represent knowledge to readers. We also need to ensure that readers who need to access knowledge efficiently are catered for. On that score, this extends beyond scholarly publishing. I recommend this talk by Craig Mod presented at the Tools for Change in Publishing conference that argues for a ‘sub-compact’ approach to magazine publishing that will focus on readers.

Finally, traditional publishers allow scholars to post non-published versions of articles for free and open to all. Wouldn’t it be something if those non-published articles were, in fact, the ones people preferred to access over the published versions? Nothing would shake up the traditional market power of those publishers quicker.

What is ‘face’?

I have been part of a research group looking into Chinese migration for about 5 years now (see rumici.anu.edu.au/), and the main cultural difference one has to get used to as a Westerner in interactions with the East is the notion of ‘face’. This Asian cultural trait has been written about for centuries, but I haven’t found a definition that makes sense to an economist used to the language of game theory and utility functions. So let’s look at ‘face’ from an economic perspective, allowing me to make statements on where it comes from and what will happen to it.

To set the scene, consider some examples of the way in which ‘face’ pervades everyday life in China, Japan, and much of South-East Asia. For one, the boss never gets contradicted directly and no-one tells a boss that he is wrong, even if behind his back things are done completely differently and everyone believes him to be wrong. It would thus be quite common for people to congratulate a boss about a decision he did not in fact take. Connected to this, decisions and opinions are obscured in secret codes. By this I mean that it is never said that ‘we don’t care about this so we are not going to do it’ but rather the whole topic is avoided or some technical difficulty is fabricated to avoid a negative decision on something. You will thus be hard pressed to hear ‘no, we will not allow you to do X’ but instead will be told ‘we are still working on how to measure X’.

And loss of face is serious business, for as soon as you are publicly contradicted and told you are useless, it means that no-one will protect you, help you, or trade with you. Losing face is thus being shut out from a community, which of course explains why keeping up face is a life-and-death thing for many people in Asian societies, even today.

Face thus means people are not directly contradicted; opinions and preferences are hardly ever asked for directly, but instead are inferred; and there is a whole language known to insiders via which to convey actual opinions and coordinate responses.

If you think about this from a game-theoretical perspective you might first naively think that ‘face’ is about people’s beliefs as to how good (or useful or important, etc.) you are. To have face would then mean people believe you to be virtuous, valuable, important, etc.

This clearly does not fit most examples of face though: it is perfectly possible that someone has face and yet nothing he or she says gets done. What people actually think about you hence does not prevent you from having ‘face’. As long as efforts are made to hide the truth from you, you still have ‘face’. Hence face is not just about beliefs.

Face is more about the willingness of others to go along with pretending you are good, important, useful, etc. It is only when that pretense becomes unsustainable that one has lost face.

Yet this as a definition is not useful enough because it raises the question of why it would matter what others are willing to pretend about you. With well-defined property rights, it matters not what other people think about you since that in no way influences the trades and decisions you can make.

I would therefore venture that the rub behind the whole concept of face is imperfect property rights. With imperfect property rights, it becomes a matter of fluid group opinion as to what you actually own and what you don’t. ‘Face’ is then connected to those implicit property rights. The willingness of others to go along with your ‘face’ then signals the degree to which they still respect your property rights, and the moment you lose face is the moment all others can rob you of whatever you possess with social impunity.

Translated to a game-theoretical context, this means one should think of ‘face’ as the degree to which others see you as partaking of the social norm upholding a particular allocation of property rights. Their willingness to go along with your face is then nothing less than a social vote as to whether you are still in the club or not. This in turn relies upon a social game in which the accepted rule is that if any two (or more) people deny each other their face then social voting continues until either face is restored or face is lost completely, leading to a re-allocation of the property rights of the loser. Note that what is actually believed about anyone does not matter.

This kind of conception of face has many important implications. For one, it is clear that something like this is more likely to arise in economic systems where most property rights are ill-defined, such as in large bureaucracies where nominally all is owned by the collective (or the emperor who leads the collective) but where limitations of span of control imply that cliques can actually appropriate things for themselves, though only to the degree they cover each other’s backs. This of course explains the importance of face for a country like China that has so long had a bureaucracy. It also fits the ‘all who remain in the clique have to stick up for each other’ aspect of face and why someone who has lost face must be killed or in some other way neutralised, since there is an outside world that can be alerted to the degree to which these implicit property rights violate the official ones.

Yet, also in more primitive cultures that lack well-established property rights (understood here as allocations that can only be undone by voluntary trades), the same general idea would hold to some extent, though one then would more normally call it ‘honour’, and indeed there is an anthropological literature saying that pastoralists (who don’t have official lists of who owns what) are big on honour.

The second and perhaps more important implication is that ‘face’ should lose its meaning and value when an economy becomes more monetised and based on formal property rights. Hence the industrial revolution taking place in China right now should be strongly eroding the whole notion of face, at least within the business community. And indeed, if you meet an outspoken Chinese person who says what he wants and tells you what he thinks, it is most likely someone from the business community.

The third, and most worrying, implication is that something like face should inevitably start to arise in any major organisation that survives for a long time, since it is in large organisations that property rights become less perfect. Hence the Western world, which has seen greatly expanding government bureaucracies in the last few centuries and where there are relatively large long-lived private enterprises with major span-of-control problems over what all the managers do, should see an increase in the importance of face. Whilst ‘face’ thus becomes less important in the East, it is probably on the rise here…

Thoughts on “Thinking, fast and slow”

I couldn’t resist buying a copy of Daniel Kahneman’s best-seller when returning from holidays. Several friends and colleagues told me it was a great book; it got great reviews; and Kahneman’s journal articles are invariably a good read, so I was curious.

Its general message is simple and intuitively appealing: Kahneman argues that people use two distinct systems to make decisions, a fast one and a slow one. System 1, the fast one, is intuitive and essentially consists of heuristics, such as when we without much thought finish the nursery rhyme ‘Mary had a little…’. The answer ‘lamb’ is what occurs to us from our associative memory. The heuristic to follow that impulse gives the right answer in most cases but can be led astray by phrases like ‘Ork, ork, ork, soup is eaten with a …’. Less innocuous examples of these heuristics and how they can lead to sub-optimal outcomes are to distrust the unfamiliar, to remember mainly the most intense and the last aspect of an experience (the ‘peak-end rule’), to value something more after possessing it than before possessing it (the ‘endowment effect’) and to judge the probability of an event by how easily examples can come to mind.

System 2, the slow way to make decisions, is more deliberative and involves an individual understanding a situation, drawing on many different experiences and outside data. System 2 is what many economists would call ‘rational’ whilst System 1 is ‘not so rational’, though Kahneman wants to have his cake and eat it by saying that System 1 challenges the universality of the rational economic agent model whilst nevertheless not wanting to say that the rational model is wrong. ‘Sort of wrong sometimes’ seems to be his final verdict.

Let me explore below two issues that I have not seen in the reviews of this book. The first is whether or not his main dichotomy is going to be taken up by economics or social science in the longer run. The second, related, point is where I think this kind of ‘rationality or not’ debate is leading. Both issues involve a more careful look at whether the distinction between System 1 and System 2 really is all that valid, and thus the question of what Kahneman ultimately has achieved, which in turn will center on the usefulness of the rational economic man paradigm.

Continue reading “Thoughts on “Thinking, fast and slow””

Happiness over lifetimes

In a recent study with Tony Beatton (QUT), I looked at how happiness changes with age. For the freely downloadable working paper version, see here.

What got me into this issue is the dominance of the “U-shape” story in the economic literature on happiness. The dominant story is that we get miserable in mid-life as the stresses of life are piled onto us and then get happier again the closer we get to death. It didn’t seem right, either from the point of view of the raw data or intuitively, so we set to work. The main findings of our study can be summarised by the graph below, where you can see the way happiness changes over the lifetime in Australia, the UK and Germany for a representative individual starting at a 7 on a 0-10 scale.

This graph summarises data from over 50,000 individuals in Australia, the UK, and Germany, followed for more than 10 years. It turns out that individuals are happiest early in their retirement years (65-75) and at their most miserable close to death (80-90), with relatively little change in the years before retirement. Our interpretation is that individuals older than 65 no longer have unrealistic expectations of what their life will be like and simply enjoy reasonable health and wealth, leading to a marked surge in happiness. As their health starts to deteriorate after 75, their happiness plunges.

In particular: Continue reading “Happiness over lifetimes”

ARC chief is wrong, wrong, wrong on open access

On The Conversation today there is an interview with the outgoing Australian Research Council chief, Margaret Sheil. In response to a question about her sister organisation, the NHMRC, requiring funded research to be available for free within 12 months, she responded:

We’re quite comfortable with our current position and we don’t have any plans to change that at the moment, because we serve as a much broader, much more complex research community than the NHMRC. We would not want to move to a position of mandating [open access] until we understood the full range of those complexities: whether [academics] are in a position to comply, whether they can afford to comply.

There are a whole range of cost issues in relation to open access, so we feel that the position that we’ve taken, which is to strongly encourage and [make academics] explain why not, and also the provisions that we’ve put in place to allow for up to 2% of each grant awarded to be used towards publication costs, is a reasonable and considered position.

Complexities? There are no complexities. At the moment, academics have an incentive to publish in established journals that cost libraries more and more to fund. Today, Harvard University called on its academics to avoid them and basically threatened that it would be cutting those journals if the price didn’t come down. That didn’t seem too complex to them. Harvard was in a prominent position to do something and it looks like doing something. Big funding agencies are in the same position.

But for the ARC there is a bigger point. What is the point of research if it is costly to access? Why are they funding it and not someone else? She goes on …

The other issue is that it’s not always appropriate to make research public. Making something publicly available doesn’t necessarily make it accessible. And so there are many, many examples of where protecting intellectual property actually makes it more readily available, because then someone is prepared to commercialise it and make it accessible.

Once again, if that is the point, why is the government funding it? If there is a buck to be made, then those making the buck should fund these things. Otherwise, the rationale for public funding is that there is no other way to get the research done, in which case you put it out there and let others build on it cheaply. It is Economics 101.

Now the issue Sheil points to is apparently cost. Open access can be costly. The Public Library of Science charges $1500 per paper. Elsevier has an open access option that is more expensive. I suspect both are too high but let’s leave that aside for the moment. Even given those costs, what is the problem with mandating open access? People will build those costs into their grant proposals.

So let’s see how that is likely to pan out. I’ve had many grants from the ARC. Truthfully, my back-of-the-envelope calculation is that they are paying about $30,000 per published paper. If it cost $1,500 to provide an open access version, that would be 5% of the research cost. That sounds rather high, but it reflects insufficient pressure to get those costs down. Mandate open access and researchers will economise and look for lower-cost ways of providing access. For instance, each and every one of my papers is already available somewhere for free. It cost the government nothing.
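For completeness, the arithmetic behind that 5% figure is just the open access fee divided by the implied funding per paper. Here is a trivial sketch in Python using the post’s own back-of-the-envelope numbers (both figures are rough, not official data):

    # Back-of-envelope: open access fee as a share of implied funding per paper.
    # Both figures are the rough numbers quoted above, not official ARC data.
    funding_per_paper = 30_000   # approximate ARC funding per published paper (AUD)
    open_access_fee = 1_500      # e.g. a PLoS-style article processing charge

    share = open_access_fee / funding_per_paper
    print(f"Open access fee as a share of funding per paper: {share:.0%}")  # -> 5%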

My point is that when the ARC raises its hands and says it is all too hard, it misses the opportunity to create markets that will drive long-term socially beneficial outcomes.

The Importance of Peer-Review in Journal, Department, and Individual Research Rankings

Preamble

I recall that some time in the mid-2000s, when the Research Quality Framework (which preceded the current ERA) was being discussed, Arden Bement, the director of the National Science Foundation, was asked what he thought. He responded as one would expect of a serious researcher, saying that the only method he knew for judging academic outcomes was peer review.

In fact, peer review is the gold standard in science. We simply don’t trust any finding, method, conclusion, analysis, or study that is not reported in a peer-reviewed outlet. Yet there has been rapid growth in the use of indices whose methods have not been tested through peer review, and which are being used to rank journals, measure individual academic performance, and even gauge the standing of departments.

Here, I argue that we should return to published methods that have been tested through peer review. The unchecked use of non-peer-reviewed methods runs the risk of misallocating resources, for example if university promotion and appointment committees and bodies like the ARC rely on them. Even more troubling is that non-peer-reviewed methods are susceptible to manipulation; combined with the problem of inappropriate rewards, this has the potential to undermine what the profession, through peer review, regards as the most valuable academic contributions.

Economics journal rankings

In the last two decades or so, there has been an explosion in the use of online indices to measure research performance in economics in particular (and academia generally). The Thomson Reuters Social Science Citation Index (SSCI), Research Papers in Economics (RePEc) and Google Scholar (GS) are those most commonly used by economists.

These tools display the set of publications in which a given article by a scholar is cited. While SSCI and GS draw their sets from the web as a whole, RePEc (hosted by the St Louis Fed) is different, in that it refers only to its own digital database, which is built up by user subscription. Further, RePEc calculates rankings of scholars, but only of those who subscribe. Referring to its ranking methods, the RePEc web page states:

This page provides links to various rankings of research in Economics and related fields. This analysis is based on data gathered with the RePEc project, in which publishers self-index their publications and authors create online profiles from the works indexed in RePEc.

While RePEc has been embraced by some academic economists in Australia as a tool for research performance measurement, it is important to note that its ranking methodology is not peer-reviewed. This departure from academics’ usual strong commitment to peer review is puzzling, given that there is a long history of peer-reviewed work in economics on, you guessed it, journal ranking.

A (very) quick-and-dirty modern history

Coates (1971) used citations in important survey volumes to provide a ranking; Billings and Viksnins (1972) used citations from an arbitrarily chosen ‘top three’ journals; Skeels and Taylor (1972) counted the number of articles in graduate reading lists; and Hawkins, Ritter and Walter (1973) surveyed academic economists. (Source: Liebowitz and Palmer, JEL, 1984, p. 78.)

The modern literature is based on a paper by Liebowitz and Palmer in the Journal of Economic Literature, 1984. In their own words, their contribution had three key features:

…(1) we standardize journals to compensate for size and age differentials; (2) we include a much larger number of journals; (3) we use an iterative process to “impact adjust” the number of citations received by individual journals

Roughly speaking, the method in (3) is to: (a) write down a list of journals in which economics is published; (b) count the total number of citations to articles in each journal; (c) rank the journals by this count; (d) re-weight each citation by the count of the journal it comes from; and, finally, (e) iterate until the scores settle down. The end result is a journal ranking based on impact-adjusted citations.
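To make the iteration concrete, here is a minimal sketch in Python. It is not Liebowitz and Palmer’s actual code or data: the journal names and citation counts are invented purely for illustration, and the sketch omits the size, age and self-citation adjustments that the published methods apply.

    import numpy as np

    # Hypothetical journals and citation counts, for illustration only.
    journals = ["Journal A", "Journal B", "Journal C"]

    # cites[i, j] = citations that articles in journal i receive from
    # articles published in journal j.
    cites = np.array([
        [10.0, 40.0, 25.0],
        [30.0,  5.0, 15.0],
        [ 5.0, 10.0,  2.0],
    ])

    # Start with equal scores, then repeatedly re-weight each citation by the
    # current score of the citing journal and renormalise.
    scores = np.ones(len(journals)) / len(journals)
    for _ in range(100):
        new_scores = cites @ scores        # weighted citations received
        new_scores /= new_scores.sum()     # normalise so scores sum to one
        if np.allclose(new_scores, scores, atol=1e-10):
            break
        scores = new_scores

    for name, s in sorted(zip(journals, scores), key=lambda x: -x[1]):
        print(f"{name}: {s:.3f}")

In the limit, the scores converge to the dominant eigenvector of the citation matrix, which is why impact-adjusted rankings are close cousins of eigenvector-centrality measures such as PageRank.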

The current best method, is Kalaitzidakis et al Journal of the European Economics Association, 2003, hereafter KMS. This study was commissioned by the European Economics Association to gauge the impact of academic research output by European economics departments.

KMS is based on data from the 1990s and, as far as I am aware, has not been updated. No ranking can replace the wisdom of an educated committee examining a CV. However, KMS at least comes from a peer-review process. Unlike simple count methods, it presents impact, age, page and self-citation adjusted rankings, among others.
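As an aside, here is a hedged illustration of what the self-citation adjustment amounts to at the journal level (the general idea, not KMS’s exact procedure): citations a journal receives from its own pages are discarded, which in the earlier sketch would mean zeroing the diagonal of the citation matrix before iterating.

    import numpy as np

    # Illustration only: drop journal self-citations by zeroing the diagonal
    # of the (made-up) citation matrix from the earlier sketch.
    cites = np.array([
        [10.0, 40.0, 25.0],
        [30.0,  5.0, 15.0],
        [ 5.0, 10.0,  2.0],
    ])
    cites_no_self = cites.copy()
    np.fill_diagonal(cites_no_self, 0.0)   # a journal's citations to itself no longer count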

But even KMS-type methods can be misused: one should be ready to apply the “laugh test” to any given ranking. KMS deliberately uses a set of economics journals, roughly defined as the journals economists publish in and read. It passes the laugh test because, roughly speaking, the usual “top five” that economists have in their heads (AER, Econometrica, JPE, QJE and ReStud) do indeed appear near the top of the ranking, and other prestigious journals are not far behind.

The Economics Department at Tilburg University has included statistics journals in its “Tilburg University Economics Ranking”. The result? “Advances in Applied Probability” beats out the American Economic Review as the top journal. Their list can be found at https://econtop.uvt.nl/journals.php, but you need look no further than the top five to see that it does not pass the laugh test:

  1. Advances in Applied Probability
  2. American Economic Review
  3. Annals of Probability
  4. Annals of Statistics
  5. Bernoulli

Would I be remiss in suggesting that a statistics-oriented econometrician might have had input into this ranking? Yes, I would. Oops!

Finally, let us turn to the new RePEc impact-adjusted ranking. One laugh-test failure here, among others, is the inclusion of regional Fed journals: the Quarterly Review of the Federal Reserve Bank of Minneapolis is ranked 14, just above the AER; the Proceedings of the Federal Reserve Bank of San Francisco is ranked 16, ahead of the Journal of Econometrics at 19; and the Proceedings of the Federal Reserve Bank of Cleveland is ranked 24, ahead of the European Economic Review at 29.

The RePEc top 5 is:

  1. Quarterly Journal of Economics
  2. Journal of Economic Literature
  3. Journal of Economic Growth
  4. Econometrica
  5. Economic Policy

It would be interesting to investigate whether macroeconomists and policy scholars had influence here.

My conclusions

If we are going to use ranking methods, we should be very careful. Use methods that have emerged from decades of rigorous peer review, like the European Economic Association’s 2003 study by KMS. And stick to their method rigorously, lest we all have to retrain in statistics.