During a panel session at CSECON09, Joan Feigenbaum and Yoav Shoham commented on the underrepresentation of Economists. One reason offered was that the tribe of the ECON is large, with varied interests: fertility, wage discrimination, returns to education, etc. However, I don’t think Joan and Yoav meant all Economists, but Economic Theorists.

Like the tribe at large, other topics compete for the attention of theorists. Some have gone over to the dark side, behavioral economics, to rearrange the foundations of Economics in service of the Obama administration (if Business Week is to be believed). Others are engaged in work of national importance, extending the folk theorem to repeated games with imperfect monitoring. More have gone beyond the event horizon of the decision theory black hole, lost forever. Put differently, it is not sufficient for the topics of CSECON09 to be interesting; they must be more interesting than what currently occupies their attention. Unfortunately, some of the topics associated with the overlap between CS and ECON induce yawns amongst the economic theorists.

Consider, first, the stream of papers on the Price of Anarchy (PoA). A catchy name prompted by a natural question: how should one quantify the degree of inefficiency associated with the Nash equilibrium (NE) of a game? The Roughgarden and Tardos papers offer a proof of concept with an appealing game (traffic flow) and an elegant analysis. What next? Repeat the same for any game that comes to mind between breakfast and a bowel movement? Non (denying something always sounds better in French).

First, if the PoA is R, what does it mean for R to be economically significant? This depends on the magnitude of the total utilities involved. So, knowing R alone is not particularly informative about the magnitude of the inefficiency. Further, what exactly is to be gained from quantifying the inefficiency in a stylized model?
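To make the first objection concrete, consider Pigou’s standard two-link routing example (a textbook illustration, not one drawn from this post; the numbers are mine). The PoA works out to 4/3 whether the network carries ten cars or ten million, so the ratio by itself says nothing about the absolute welfare loss:

```python
# Pigou's two-link network: one unit of traffic from s to t.
# Link A has constant latency 1; link B has latency equal to its own flow x.

def social_cost(x):
    """Total travel time when a fraction x of the traffic uses link B."""
    return (1 - x) * 1 + x * x  # (1-x) units at latency 1, x units at latency x

# Nash equilibrium: each driver prefers link B whenever its latency x <= 1,
# so in equilibrium all traffic piles onto B.
nash_cost = social_cost(1.0)   # = 1.0

# Social optimum: minimize (1-x) + x^2; the derivative -1 + 2x = 0 gives x = 1/2.
opt_cost = social_cost(0.5)    # = 0.75

poa = nash_cost / opt_cost     # = 4/3, independent of the scale of traffic
```

Note that scaling all latencies (or the traffic volume) rescales both costs by the same factor, leaving the ratio fixed, which is exactly why R alone cannot tell you whether the inefficiency is worth anyone’s attention.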

Second, the point of quantifying the degree of inefficiency is to suggest how efficiency might be improved. Thus, from an understanding of R, can we tell whether it is better, for example, to alter the network in the traffic game or to modify the latencies? Which intervention will have the larger impact?

I’ve not heard a good counter to the first. I don’t recall an example of the second. As always, such statements reflect the author’s deafness and absent-mindedness. I confess to being both. Others may suggest that muteness would correct for this.

Next on the docket is mechanism design (MD). Catnip to computer scientists, since it is constrained optimization. Whilst many economic theorists make use of mechanism design, the deeper recesses of its theory are irrelevant to them. For them, MD is a convenient way to model an institution without being encumbered by its details. The corresponding MD problems are simple to deal with. An analogy to this situation is the classic monopoly pricing problem. It is an instance of concave programming, but a deep knowledge of concave programming is unnecessary for its analysis (I confess to unease with such a view because of the possibility of selection bias in models). Thus, for them, the body of work by Computer Scientists on algorithmic mechanism design (AMD) is irrelevant.
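The monopoly pricing analogy can be made concrete in a few lines (illustrative numbers of my own choosing, not from the post). With linear demand and zero cost, profit is concave in price, and the first-order condition dispatches the problem without any machinery from the general theory of concave programming:

```python
# Monopoly pricing with linear demand D(p) = a - b*p and zero marginal cost.
# Profit pi(p) = p * (a - b*p) is concave in p; the first-order condition
# a - 2*b*p = 0 gives p* = a / (2*b). Parameters are purely illustrative.

a, b = 10.0, 2.0
p_star = a / (2 * b)                      # closed-form optimum: 2.5
max_profit = p_star * (a - b * p_star)    # 12.5

# Sanity check against a brute-force grid search over prices in [0, 5]:
best_profit, best_p = max(
    (p * (a - b * p), p) for p in (i / 1000 for i in range(5001))
)
```

The point of the analogy: the optimum falls out of one derivative, just as the MD problems economic theorists typically pose are simple enough not to require the heavier computational apparatus.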

At the other extreme are theorists who actually want to implement mechanisms. This group acknowledges the constraints that computation imposes but is largely untouched by work in AMD. Why?

  1. Some of the underlying problems seem contrived.
  2. Some of the mechanisms with good approximation bounds appear unnatural (which suggests unmodeled constraints that should be articulated). This criticism is applied even-handedly: consider the debate about Cremer-McLean full surplus extraction.
  3. Worst-case approximation bounds are just that, worst case, and in some cases ad hoc. When partial information is available, why isn’t it incorporated? Little justification is given for the benchmarks against which the selected mechanism is compared.
  4. Perhaps the solution to a hard MD problem is to sidestep the problem altogether. For example, the allocation of spectrum may be a hard problem only because of the way the Federal government has chosen to design spectrum property rights. There are other ways in which spectrum could be managed that would lead to very different resource allocation problems.

I close by noting some other blogs on the issues raised at CSECON09. See Lance Fortnow, Noam Nisan and Muthu.