I write from Ithaca. Not Homer’s, but New York’s. The Economics and Computer Science department at Cornell hosted a workshop on the Research Issues at the Interface of Computer Science and Economics (CSECON09). 

There have been a number of these ‘intimate’ dances between computer scientists and economists. The first took place about 10 years ago at Northwestern. Thence, Maastricht, IMA, Dagstuhl, Bertinoro and DIMACS. They tend to attract more Computer Scientists than Economists. In another time and place I’d expatiate on the causes of this. At this time and place I’d rather make the argument for why Economists should pay attention to (Theoretical) Computer Science. I leave, for another post, why Economists, rightly in some cases, turn a deaf ear.

I’ll begin with complexity. Knowing the complexity of computing an equilibrium is important. When it is polynomial (in a natural specification of the input), there will be a good, non-trivial characterization of the equilibrium. Further, if a problem is polynomially solvable, there is usually a simple, decentralized algorithm (best response, no-regret, fictitious play) that will compute the equilibrium. In many cases, this decentralized algorithm can be interpreted as a plausible dynamic process that terminates in the equilibrium of interest.
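To make the idea of such a decentralized dynamic concrete, here is a toy sketch (not from the post; the game and payoffs are invented for illustration) of best-response dynamics in a two-player coordination game. Each round, player 1 best-responds to player 2’s current action, then player 2 best-responds in turn; in a game like this one, the process reaches a fixed point, which is by construction a pure Nash equilibrium.

```python
# Hypothetical 2x2 coordination game: rows are player 1's actions,
# columns are player 2's. Both players prefer to match, and both
# prefer matching on action 0 (payoff 2) to matching on action 1 (payoff 1).
A = [[2, 0], [0, 1]]  # player 1's payoffs
B = [[2, 0], [0, 1]]  # player 2's payoffs

def best_response_dynamics(a1=1, a2=0, max_rounds=100):
    """Alternate best responses until neither player wants to deviate."""
    for _ in range(max_rounds):
        # Player 1 best-responds to player 2's current action...
        new_a1 = max(range(2), key=lambda i: A[i][a2])
        # ...then player 2 best-responds to player 1's new action.
        new_a2 = max(range(2), key=lambda j: B[new_a1][j])
        if (new_a1, new_a2) == (a1, a2):
            return a1, a2  # fixed point: a pure Nash equilibrium
        a1, a2 = new_a1, new_a2
    return a1, a2  # may not have converged in general games

print(best_response_dynamics())       # from (1, 0), settles at (0, 0)
print(best_response_dynamics(0, 1))   # from (0, 1), settles at (1, 1)
```

Both rest points are pure Nash equilibria of the game; which one the dynamic finds depends on where it starts. This is exactly the kind of plausible adjustment process alluded to above, and it is the sort of thing that breaks down when the equilibrium problem is computationally hard.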

If the problem is hard (in a formal sense, say NP-complete or PPAD-complete), then there will be no good characterization and no simple decentralized algorithm that offers a learning justification for the underlying equilibrium.

One might argue that the test is whether agents behave as if they are in equilibrium, not whether they actually play an equilibrium. In this case, pretty characterizations and learning justifications for equilibrium are merely ornamental. Very well. How is it that the observed behavior might be indistinguishable from equilibrium play? Two explanations come to mind.

First, coincidence. This provides no good reason to use an equilibrium concept to make predictions; why should the coincidence be repeated? Second, the notions of hardness employed are worst-case. Perhaps the world organizes itself so that equilibria are easy to compute. If so, what are the structures that lend themselves to efficient computation? How does one identify them?

I picked the problem of computing equilibria as an example to argue the importance of thinking about complexity. I could instead have picked mechanism design as the vehicle for making the same point. It will be useful to recall what the purpose of mechanism design is: to identify what a given institution can achieve when the information necessary to make decisions is dispersed and privately held. If one takes this seriously, then one must accept that, in many situations, computation is as much a constraint on economic activity as private information.
