You are currently browsing the monthly archive for September 2009.

I confess. Blogging is harder than publishing. Lance Fortnow tells me that I would find blogging easier if the orifice in my antipodes were not constricted. Perhaps this is true, but it does make one wonder about the source of a blogger’s wisdom.

Very well then. I will follow his advice. Where is the milk of magnesia? Jacta alea est.

Drive-by theorizing. Practiced by applied theorists. This is the species that theorizes about phenomena, rather than theory.

Works like this. Read a book from another discipline about some unusual phenomenon over the summer. In the fall, write a paper with an economic model of the phenomenon. Usually with a title like `An Economic Theory of BLANK.’ A popular choice for BLANK is one of the 7 deadly sins (but these have been exhausted) or something magisterial like empire, justice, language, family or sex.

The contents of said paper consist of a verbal intuition formalized with a patina of mathematics. Indeed, the best examples admit no intuition other than the one advanced by the author. Of course, this makes the mathematical exercise utterly pointless. We build mathematical models to sort out competing intuitions, not to codify them.

When completed, move on. It is always better to have the first word, no matter how anemic, than the last.

Of course, one should name names. I won’t. I’ve sipped a laxative, not drowned in it.


Richard Dawkins has a new book, in which he sets out to prove, once and for all, that evolution is a fact `in the same sense as it is a fact that Paris is in the northern hemisphere’, or, as he calls it, a Theorum.


During a panel session at CSECON09, Joan Feigenbaum and Yoav Shoham commented on the underrepresentation of Economists. One reason offered was that the tribe of the ECON is large, with varied interests: fertility, wage discrimination, returns to education etc. However, I don’t think Joan and Yoav meant all Economists but Economic Theorists.

Like the tribe at large, other topics compete for the attention of theorists. Some have gone over to the dark side, behavioral economics, to rearrange the foundations of Economics in service of the Obama administration (if Business Week is to be believed). Others are engaged in work of national importance extending the folk theorem to repeated games with imperfect monitoring. More have gone beyond the event horizon of the decision theory black hole, to be lost forever. Put differently, it is not sufficient for the topics of CSECON09 to be interesting; they must be more interesting than what currently occupies their attention. Unfortunately, some of the topics associated with the overlap between CS and ECON induce yawns amongst the economic theorists.

Consider, first, the stream of papers on the Price of Anarchy (PoA). A catchy name prompted by a natural question: how should one quantify the degree of inefficiency associated with the Nash equilibrium (NE) of a game? The Roughgarden and Tardos papers offer a proof of concept with an appealing game (traffic flow) and an elegant analysis. What next? Repeat the same for any game that comes to mind between breakfast and a bowel movement? Non (denying something always sounds better in French).
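As a concrete illustration (my addition, not from the post), here is a minimal sketch in Python of the Pigou network that underlies the Roughgarden-Tardos analysis: one unit of traffic, two parallel links with latencies 1 and x. The equilibrium routes everything on the variable-latency link, the optimum splits the traffic, and the ratio of the two costs is the PoA of 4/3.

```python
# Price of anarchy in the Pigou network: one unit of traffic,
# two parallel links with latencies c1(x) = 1 and c2(x) = x.

def total_cost(x2):
    """Total latency when a fraction x2 of the traffic uses link 2."""
    x1 = 1.0 - x2
    return x1 * 1.0 + x2 * x2  # x1 * c1(x1) + x2 * c2(x2)

# Nash equilibrium: link 2's latency never exceeds 1, the latency of
# link 1, so every driver takes link 2: x2 = 1, cost = 1.
nash_cost = total_cost(1.0)

# Social optimum: minimize total cost over splits; calculus gives x2 = 1/2.
opt_cost = min(total_cost(i / 1000.0) for i in range(1001))

poa = nash_cost / opt_cost  # 1 / (3/4) = 4/3
print(round(poa, 3))
```

Note that the ratio 4/3 says nothing about the absolute size of the welfare loss, which is the point of the first objection below.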

First, if the PoA is R, what does it mean for R to be economically significant? That depends on the magnitude of the total utilities involved, so knowing R alone is not particularly informative about the magnitude of the inefficiency. Further, what exactly is to be gained from quantifying the inefficiency in a stylized model?

Second, one’s interest in quantifying the degree of inefficiency is to offer suggestions on how to improve efficiency. Thus, from an understanding of R, can we tell whether it is better, for example, to alter the network in the traffic game or modify the latencies? Which intervention will have a larger impact?

I’ve not heard a good counter to the first. I don’t recall an example of the second. As always, such statements reflect the author’s deafness and absent-mindedness. I confess to being both. Others may suggest that muteness would correct for this.

Next on the docket is mechanism design (MD). Catnip to computer scientists, since it is constrained optimization. Whilst many economic theorists make use of mechanism design, the deeper recesses of its theory are irrelevant to them. For them, MD is a convenient way to model an institution without being encumbered by its details. The corresponding MD problems are simple to deal with. An analogy to this situation is the classic monopoly pricing problem. It is an instance of concave programming, but a deep knowledge of concave programming is unnecessary for its analysis (I confess to unease with such a view because of the possibility of selection bias in models). Thus, for them, the body of work by Computer Scientists on algorithmic mechanism design (AMD) is irrelevant.
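To make the monopoly-pricing analogy concrete, here is a minimal sketch (the linear demand and the particular numbers are my hypothetical choices, not from the post): with demand D(p) = a - b*p and constant marginal cost c, profit is concave in p and the first-order condition gives a closed form, no machinery of concave programming required.

```python
# Monopoly pricing as a one-dimensional concave program.
# Hypothetical linear demand D(p) = a - b*p and constant marginal cost c.
a, b, c = 10.0, 1.0, 2.0

def profit(p):
    """Monopolist's profit at price p."""
    return (p - c) * (a - b * p)

# First-order condition: a - 2*b*p + b*c = 0, so p* = (a + b*c) / (2*b).
p_star = (a + b * c) / (2 * b)

# Sanity check: a coarse grid search over prices recovers the same answer.
p_grid = max((profit(a * i / 10000.0), a * i / 10000.0)
             for i in range(10001))[1]
print(p_star, p_grid)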

At the other extreme are theorists who actually want to implement mechanisms. This group acknowledges the restraints that computation imposes, but is largely untouched by work in AMD. Why?

  1. Some of the underlying problems seem contrived.
  2. Some of the mechanisms with good approximation bounds appear unnatural (which suggests unmodeled constraints that should be articulated). The criticism is applied even-handedly. Consider the debate about Cremer-McLean full surplus extraction.
  3. Worst-case approximation bounds are worst-case and in some cases ad hoc. When partial information is available, why isn’t it incorporated? Little justification is given for the benchmarks against which the selected mechanism is compared.
  4. Perhaps the solution to a hard MD problem is to sidestep the problem altogether. For example, the allocation of spectrum may be a hard problem only because of the way the Federal government has chosen to design spectrum property rights. There are other ways in which spectrum could be managed that would lead to very different resource allocation problems.

I close by noting some other blogs on the issues raised at CSECON09. See Lance Fortnow, Noam Nisan and Muthu.

I write from Ithaca. Not Homer’s, but New York’s. The Economics and Computer Science department at Cornell hosted a workshop on the Research Issues at the Interface of Computer Science and Economics (CSECON09). 

There have been a number of these `intimate’ dances between computer scientists and economists. The first, about 10 years ago at Northwestern. Thence, Maastricht, IMA, Dagstuhl, Bertinoro and DIMACS. They tend to attract more Computer Scientists than Economists. In another time and place I’d expatiate on the causes of this. At this time and place I’d rather make the argument for why Economists should pay attention to (Theoretical) Computer Science. I leave, for another post, why Economists, rightly, in some cases, turn a deaf ear.

I’ll begin with complexity. Knowing the complexity of computing an equilibrium is important. When it is polynomial (in a natural specification of the input), there will be a good, non-trivial characterization of the equilibrium. Further, if a problem is polynomially solvable, there is usually a simple, decentralized algorithm (best response, no-regret, fictitious play) that will compute the equilibrium. In many cases, this decentralized algorithm can be interpreted as a plausible dynamic process that terminates in the equilibrium of interest.
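A minimal sketch of the decentralized-algorithm point (the tiny two-player congestion game is my own illustrative choice): in a congestion game, best-response dynamics decrease a potential function at every step, so the process must terminate at a Nash equilibrium.

```python
# Best-response dynamics in a tiny congestion game: each of two players
# picks route 0 or 1, and the cost of a route equals the number of
# players using it. This is a potential game, so best response converges.

def cost(route, profile, player):
    """Cost to `player` of choosing `route`, given everyone else's choice."""
    others = sum(1 for q, r in enumerate(profile) if q != player and r == route)
    return others + 1  # congestion: the player adds one more user

def best_response_dynamics(profile):
    """Let players switch to a cheaper route until no one wants to move."""
    changed = True
    while changed:
        changed = False
        for player in range(len(profile)):
            current = cost(profile[player], profile, player)
            best = min(range(2), key=lambda r: cost(r, profile, player))
            if cost(best, profile, player) < current:
                profile[player] = best
                changed = True
    return profile

# From any starting profile, the dynamics settle into the equilibrium
# where the two players split across the two routes.
eq = best_response_dynamics([0, 0])
print(sorted(eq))  # [0, 1]
```

In this toy example convergence is immediate; the point in the text is that for polynomially solvable problems some such plausible dynamic usually exists, while for PPAD-hard problems no such learning justification is on offer.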

If the problem is hard (in a formal sense, say NP-complete or PPAD-complete), then there will be no good characterization and no simple decentralized algorithm that will offer a learning justification for the underlying equilibrium.

One might argue that the test is whether agents behave as if they are in equilibrium, not whether they actually play an equilibrium. In this case, pretty characterizations and learning justifications for equilibrium are merely ornamental. Very well. How is it that the observed behavior might be indistinguishable from equilibrium play? Two explanations come to mind.

First, coincidence. This provides no good reason to use an equilibrium concept to make predictions; why should the coincidence be repeated? Second, the notions of hardness employed are worst-case. Perhaps the world organizes itself so that equilibria are easy to compute. If so, what are the structures that lend themselves to efficient computation? How does one identify them?

I picked the problem of computing equilibria as an example to argue the importance of thinking about complexity. I could instead have picked mechanism design as the vehicle for making the same point. It will be useful to recall what the purpose of mechanism design is. It is to identify what a given institution can achieve when the information necessary to make decisions is dispersed and privately held. If one takes this seriously, then one must accept that in many situations, computation is as much a constraint on economic activity as private information.

