
Here is a nice initiative of a group of econ students: The Transatlantic, a Journal of Economics and Philosophy. The first issue is about economics and science. Both the editorial and the two guest articles lament what they view as a mechanistic perspective on economics and the disproportionate math training in econ schools.

According to Tony Lawson, mathematics entered economics as a side effect of a failed attempt to achieve the predictive power of natural sciences.

If there is a laudable justification for emphasis of mainstream economists on using mathematical formalism, it is the desire to be scientific in the sense of natural scientists. The latter, after all, do regularly achieve success in advancing human understanding. A less laudable justification is to pose as scientists merely in the hope of acquiring a certain status in society.

Actually, there are other justifications. Even if you don’t want to predict anything, you may still want to formulate your ideas using mathematical toy models, to increase clarity and reduce ambiguity. Mathematics is just a language, one that is useful for conveying ideas in an unambiguous way. Personally, I have an even less profound justification: I like doing math. It’s fun.

Anyway, good luck with the journal, guys! And since you don’t like math, I hope they won’t force it on you!

(h/t Jeff Helzner)

Saul and Clara would like to meet in Brussels, and as both arrive by train, they decided to meet at the café in the train station. Alas, there are three train stations in Brussels: Brussels South, Brussels Central and Brussels North, and it turns out that they arrived at different stations. Once they realize that there are three train stations, and that they do not have cell phones, they have to start looking for each other. Neither has any clue regarding the location of the other, so each assigns probability 1/2 to each of the two stations where the other might be. What is the search method that minimizes their expected searching time?

Suppose that Saul arrived at Brussels South, and Clara at Brussels Central. The best method is simple: Clara waits and lets Saul do the search; Saul looks for Clara in Brussels Central and Brussels North in a random order. Assuming the travel time from one train station to another is 1 hour, Saul finds Clara after 1 hour with probability 1/2 and after 2 hours with probability 1/2, that is, after 3/2 hours on average.

But this method assumes asymmetry between the two parties: Clara waits while Saul searches. What if Clara is supposed to meet Sandra in Brussels, so that they cannot use gender to create asymmetry?

So suppose that the two players are indistinguishable, and they must use the same strategy. Also, suppose that the three locations are symmetric, and there is no focal point. In that case, to ensure that they meet, they have to use a mixed strategy. But which one is optimal?

In 1990, Anderson and Weber conjectured that the following strategy is optimal: play is divided into blocks of two stages. At the beginning of each block, each player decides whether to stay at his current location (probability 1/3) or to look for the other player during the current block (probability 2/3). In the former case, the player remains at his current location during the two stages of the block. In the latter case, the player visits the other two locations in a random order. Simple arithmetic shows that under this strategy the expected time until the two parties meet is 5/2 hours. In a recent paper, Weber proved that this strategy is indeed optimal. A must-read.
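
For those who like to check such figures numerically, here is a small Monte Carlo sketch (mine, not from the paper) of the Anderson-Weber strategy on three stations. It assumes unit travel times, that the players start at two distinct stations, and that they meet as soon as they occupy the same station at the end of a one-hour stage; the helper names are of course my own.

```python
import random

LOCATIONS = {0, 1, 2}

def anderson_weber_block(current):
    """One block (two one-hour stages): with probability 1/3 stay put for both
    stages, with probability 2/3 visit the other two stations in random order."""
    if random.random() < 1 / 3:
        return [current, current]
    others = list(LOCATIONS - {current})
    random.shuffle(others)
    return others

def meeting_time():
    """Hours until the two players first occupy the same station,
    starting from two distinct stations chosen at random."""
    a, b = random.sample(sorted(LOCATIONS), 2)
    hours = 0
    while True:
        plan_a, plan_b = anderson_weber_block(a), anderson_weber_block(b)
        for pa, pb in zip(plan_a, plan_b):
            hours += 1
            if pa == pb:
                return hours
        a, b = plan_a[-1], plan_b[-1]

trials = 200_000
print(sum(meeting_time() for _ in range(trials)) / trials)  # should be close to 5/2
```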

The field of location games offers many other open problems that are easy to state. An expert in the field told me that the following problem is still open: the locations are the integers (so for every integer z, positive or negative, there is a train station called Brussels-z). The two players are located at two neighboring stations (Brussels-z and Brussels-(z+1)). It takes one hour to move between neighboring train stations, and the players must use the same strategy. What is the strategy that minimizes the expected time until they meet?

There is an argument going the rounds that Management should be a profession. The most recent incarnation of this proposition is credited to Khurana and Nohria.

Nohria, by the way, is one name bandied about as a future dean of the West Point of Capitalism. The poor man has received unsolicited support (or a stab in the back) by way of a web site. It is worth a read, if only as parody. I cannot resist quoting one line:

Professor Nohria is a level 5 leader: … sometimes saying “Perhaps I’m wrong – what do you think?” …

Who knew that what passes for modesty elsewhere is elevated to a condition called level 5 leadership.

Khurana and Nohria have gone a step further and suggested an oath that MBAs should consider. I’ll not repeat the arguments against the professionalization idea or the criticisms of the oath. Suffice it to say, neither Doctors nor Lawyers are examples to be emulated. Hobbes got it right:

“…the bonds of words are too weak to bridle men’s ambition…”

More interesting, I think, is what motivates these well-meaning but misguided suggestions. The first is the supposed failures of those with an MBA. The argument is familiar: consider all the CEOs who had MBAs. They also had undergraduate degrees from elite institutions. They were mostly male. So, perhaps the root cause of our current malaise is an elite university education combined with the male reproductive organ? Wait! Someone has already made that argument: the militantly middle-of-the-road columnist David Brooks, who argues that the crash was caused by whiz-kid barrow boys replacing dim-witted blue-bloods.

By taking the blame for the supposed failures of capitalism, one can lay claim to more influence than is warranted. It is no more than self-aggrandizement. Proponents of professionalization labor under the misconception that they are a species of Jesuit who

have discovered the precise point to which intellectual culture can be carried without risk of intellectual emancipation.

Even the Jesuits required the child `until he was seven’ in order to return the man. This is emphatically not the case with the B-schools I am familiar with.

Those of us who labor in silvas academi, far from the non-profit wing of the McKinsey corporation on the banks of the Charles, do not have the same ambitions. We are driven by a desire to understand the world of commerce. We convey what few principles have been teased out to our students and point out the impediments to further understanding. Not all our students share the same curiosity. For some, we are an annoying obstacle course on the way to a life of quiet desperation. By osmosis, they acquire some useful knowledge. However, there are enough who share that curiosity and perhaps they will go on to make a difference.

Bruce Bueno de Mesquita may have achieved what some game theorists (I am not one of them) view as the ultimate goal of game theory: predicting the outcomes of `real life’ strategic situations using an algorithm that is based on game-theoretic modeling. Using his model, Bueno de Mesquita can tell you, or the CIA, whether Iran will build a nuclear bomb. The input to the algorithm is a couple of numerical values that represent the leverage and preferences of the `players’ (in the case of Iran’s bomb, the people who influence Iran’s nuclear policy). According to this New Scientist article, Bueno de Mesquita has made thousands of predictions using his model and has been more than 90% accurate.

Don’t get too excited, though. Even if you buy the 90% accuracy assertion, it does not necessarily prove the validity of the model per se:

According to political scientist Nolan McCarty of Princeton University, … [the input that Mesquita feeds into his model] is the real strength of the approach. “I suspect the model’s success is largely due to the fact that Bueno de Mesquita is very good on the input side; he’s a very knowledgeable person and a widely respected political scientist. I’m sceptical that the modelling apparatus adds as much predictive power as he says it does.”

Paul Goldberg is correct to note that I picked on one thread in the literature on computational social choice. By coincidence, Paul himself had just posted a piece on computational social choice on his blog. The other thread is the Bartholdi, Tovey and Trick piece on the hardness of manipulation. The result here is that some voting rules have the property that finding a beneficial deviation from sincere reporting of preferences is NP-hard. At first glance this seems to be a promising weakening of strategy-proofness. Since I’m in a Judge Jeffries mood, I’ll argue the contrary. First, this is a worst-case result. It is not that every instance will be `hard’; only that there will be some instances that will tax one’s computational resources. Second, even in these cases, it is only when the instances are large that NP-hardness bites.
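
To see why small instances are harmless, here is a toy brute-force manipulation search (my own illustration, not from the paper, and for the Borda rule rather than any rule studied there): it simply tries every possible report, which is exponential in the number of candidates but instantaneous when that number is small.

```python
from itertools import permutations

def borda_winner(profile, candidates):
    """Borda count: a candidate in position i (0 = top) of a ranking over m
    candidates gets m - 1 - i points; ties are broken alphabetically."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for i, c in enumerate(ranking):
            scores[c] += m - 1 - i
    return max(sorted(candidates), key=lambda c: scores[c])

def find_manipulation(profile, voter, candidates):
    """Brute force: look for an insincere report that gives `voter` an outcome
    s/he strictly prefers (by the sincere ranking) to the honest outcome.
    Tries all m! reports; exponential in candidates, instant for small m."""
    sincere = profile[voter]
    honest_winner = borda_winner(profile, candidates)
    for report in permutations(candidates):
        trial = list(profile)
        trial[voter] = report
        winner = borda_winner(trial, candidates)
        if sincere.index(winner) < sincere.index(honest_winner):
            return report
    return None

# Two voters rank b first; the third (the would-be manipulator) ranks a first.
profile = [("b", "a", "c", "d"), ("b", "a", "c", "d"), ("a", "b", "c", "d")]
candidates = ("a", "b", "c", "d")
print(borda_winner(profile, candidates))          # b wins under sincere voting
print(find_manipulation(profile, 2, candidates))  # a report that demotes b and gets a elected
```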

Eran argued that, should a voting rule become law, “then the law should be explicit enough to say what happens in every case.” Agreed, but NP-hardness does not mean the rule is not well defined, only that it might take a long time to compute the outcome. In any case, we do have plenty of examples of laws and contractual arrangements that are `incomplete’ in the sense that they do not specify the outcome in every contingency.

Michael Trick was kind enough to absolve Hugo S. of any responsibility for the placement of the relevant papers. I did not intend to suggest otherwise, only that reading some of the papers in computational social choice reminded me of the event.

Legend has it that when Hugo Sonnenschein was editor of Econometrica, he declared an embargo on papers in social choice. To borrow from Ecclesiastes: `of the making of impossibility theorems there is no end, and much reading of them is a weariness of the soul’. Why the community soured on impossibility theorems as a matter of principle escapes me. The world is the way it is and not the way we want it to be. But, I digress. Returning to Sonnenschein’s action, the sweep of it fair takes one’s breath away. Were a Sonnenschein at the helm of Econometrica now, what might be on the chopping block? Decision Theory? But I digress, again.

What prompted these ruminations was computational social choice. The subject begins, I suppose, with a paper by Bartholdi, Tovey, and Trick (Voting Schemes for which It Can Be Difficult to Tell Who Won the Election), in which it is shown that determining the winner under the Dodgson rule is NP-hard. An interesting mathematical observation, but does it matter? First, only if the Dodgson rule is a compelling one, and it has a number of properties that make it unappealing. Second, when would the complexity constraint bite? When there is a large number of voters? In those cases, the number of candidates is usually very small.
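
For readers who have not met the rule: in its standard formulation, the Dodgson score of a candidate is the minimum number of swaps of adjacent candidates in the voters’ rankings needed to make that candidate a Condorcet winner, and the rule elects a candidate with the lowest score. Here is a brute-force sketch of the score (my own illustration, not an algorithm from the paper); it is exponential in the size of the profile, which is precisely why it is harmless for the tiny instances one meets in practice.

```python
from collections import deque

def is_condorcet_winner(profile, c):
    """Does c beat every other candidate in pairwise majority comparisons?"""
    candidates = set(profile[0])
    n = len(profile)
    for d in candidates - {c}:
        wins = sum(1 for ranking in profile if ranking.index(c) < ranking.index(d))
        if 2 * wins <= n:  # c must beat d by a strict majority
            return False
    return True

def dodgson_score(profile, c):
    """Minimum number of swaps of adjacent candidates, anywhere in the profile,
    needed to make c a Condorcet winner.  Brute-force breadth-first search."""
    start = tuple(tuple(ranking) for ranking in profile)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, cost = queue.popleft()
        if is_condorcet_winner(state, c):
            return cost
        for i, ranking in enumerate(state):
            for j in range(len(ranking) - 1):
                swapped = list(ranking)
                swapped[j], swapped[j + 1] = swapped[j + 1], swapped[j]
                new_state = state[:i] + (tuple(swapped),) + state[i + 1:]
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, cost + 1))

# A Condorcet cycle among three voters: every candidate has Dodgson score 1.
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
print({c: dodgson_score(profile, c) for c in ("a", "b", "c")})
```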

The Bartholdi, Tovey & Trick paper has inspired attempts to approximate the Dodgson rule by interpreting the rule as an optimization problem. Again, approximation can only be a concern if the complexity constraint bites. Further, approximation ratios are a cardinal notion while the inputs to a voting problem are ordinal in nature. More importantly, why is `closeness’ to a particular rule a compelling criterion for selecting a rule? After all, there is no guarantee that `closeness’ in the optimization sense means that the approximate rule has similar properties to the target rule.

Anyway, ruminations. Let a thousand flowers bloom.

What does it mean to describe a probability distribution over, say, {\{0,1\}^\mathbb{N}}? I am interested in this question because of the expert testing literature (pdf), in which an expert is supposed to provide a client with the distribution of some stochastic process, but the question is also relevant for Bayesians, at least those of us who think that the Bayesian paradigm captures (or should capture) the reasoning process of rational agents, in which case it makes little sense to reason about something you cannot describe.

So, what does it mean to describe your belief/theory about a stochastic process {X_1,X_2,\dots} with values in {\{0,1\}}?  Here are some examples of descriptions:

{X_n} are I.I.D with {P(X_n=0)=3/4}.

{X_0=X_1=1} and {\mathop{\mathbb P}(X_n=0|X_0,\dots,X_{n-1})=(1/e)X_{n-1}+0.4X_{n-2}} for every {n\geq 2}.

{X_n=Y_{2n}\cdot Y_{2n-1}} where {Y_0,Y_1,\dots} are I.I.D {(1/2,1/2)}.

Everyone agrees that I have just described processes, and that the first and last are different descriptions of the same process. Also everyone probably agrees that there are only countably many processes I can describe, since every description is a sentence in English/Math and there are only countably many such sentences.
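
To see why the first and the last descriptions define the same process, note that {Y_{2n}\cdot Y_{2n-1}} equals 1 only when both fair coins come up 1, an event of probability 1/4, and the pairs of coins used for different n are disjoint. Here is a toy simulation (my own sketch, with my own indexing conventions) that compares the empirical distributions under the two descriptions:

```python
import random
from collections import Counter

def sample_first(n):
    """Description 1: X_1, ..., X_n i.i.d. with P(X_k = 0) = 3/4."""
    return tuple(0 if random.random() < 0.75 else 1 for _ in range(n))

def sample_last(n):
    """Description 3: X_k = Y_{2k} * Y_{2k-1} with the Y_j i.i.d. fair coins."""
    y = [random.randint(0, 1) for _ in range(2 * n + 1)]
    return tuple(y[2 * k] * y[2 * k - 1] for k in range(1, n + 1))

n, trials = 4, 200_000
freq_first = Counter(sample_first(n) for _ in range(trials))
freq_last = Counter(sample_last(n) for _ in range(trials))
print(freq_first.most_common(3))
print(freq_last.most_common(3))  # the two empirical distributions should agree up to noise
```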


I heard this from Marco, who heard it from Tzachi. I am not sure what to make of it, but that will not deter me from ruminating publicly.

There is a sack of chocolate and you have two options: either take one piece from the sack for yourself, or take three pieces which will be given to Dylan. Dylan also has two options: one piece for himself or three for you. After you have both made your choices independently, each goes home with the amount of chocolate he collected.
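
(Counting pieces of chocolate: if both of you take one piece, each goes home with one; if both of you give, each goes home with three; if one takes and the other gives, the taker goes home with four pieces and the giver with none. These are the familiar Prisoner’s Dilemma payoffs, with `give three’ playing the role of Cooperate and `take one’ the role of Defect.)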

Write down the payoff matrix for this game and you’ll get the Prisoner’s Dilemma. But where is the dilemma here? On the few occasions that I presented the Nash solution to the Prisoner’s Dilemma to a friend who didn’t know about game theory, the response was always `something is wrong here’. I am pretty sure that if I try to do the same with the Chocolate Dilemma, the response will be `well, duh’. Obviously every sensible person will prefer one piece of chocolate for himself over three pieces for somebody else.

One can say that this hypothetical response vindicates the Nash solution in the Prisoner’s Dilemma as well. After all, these are the same game. If people find the solution intuitive in the Chocolate Dilemma, they should also accept it in the Prisoner’s Dilemma. But this argument is not very convincing, since these are the same game only because we game theorists identify a game with its payoff matrix. The rest of the world might disagree.

Everyone has some papers that influenced his or her research. I do too. One of my favorite papers was written by the Dutch group of Janos Flesch, Frank Thuijsman and Koos Vrieze. They considered the following game.

Anne, Bill and Chris are neighbors. Anne likes Bill and dislikes Chris, Bill likes Chris and dislikes Anne, and Chris likes Anne and dislikes Bill. Every morning each one has to decide whether to invite someone to the five-o’clock tea. Plainly, if Anne decides to invite someone, it would be Bill. Similarly, if Bill invites someone, it would be Chris, and if Chris invites someone, it would be Anne.
The three neighbors make their decisions simultaneously, so when Anne decides whether to send an invitation to Bill, she does not know whether she is going to receive an invitation from Chris.
The game ends once at least one invitation is sent. Thus, the game goes on as long as no one sends an invitation. This game is a combination of a repeated game and a one-shot game: it is repeated as long as no one sends an invitation, and it terminates once someone does.
What about payoffs? The payoffs are described by the following table:

[Table: the payoffs in the game]

Surprisingly, the numbers can be related to the story about tea time. Each player gets a utility of 3 if s/he is invited and did not send an invitation; a utility of 0 if s/he sent an invitation and was invited as well; a utility of 1 if s/he sends an invitation and the invited neighbor also sends an invitation; and a utility of 1 if s/he receives an invitation and the inviting neighbor received an invitation as well. Payoffs are not discounted.

What are the equilibria of this game? Though this game is symmetric, there is no symmetric equilibrium. Also, there is no stationary equilibrium. The simplest equilibrium is periodic with period 3:
At period 1: with probability 1/2 Anne sends an invitation to Bill, and with probability 1/2 she does not send an invitation. Meanwhile, Bill and Chris do not send invitations.
At period 2: with probability 1/2 Bill sends an invitation to Chris, and with probability 1/2 he does not send an invitation. Meanwhile, Chris and Anne do not send invitations.
At period 3: with probability 1/2 Chris sends an invitation to Anne, and with probability 1/2 he does not send an invitation. Meanwhile, Anne and Bill do not send invitations.
At period 4: like period 1.
And so on.

Plainly there are two additional equilibria, symmetric to the one provided above:
1) Bill is the first to randomize between sending an invitation and not sending one, followed by Chris, who is followed by Anne.
2) Chris is the first to randomize between sending an invitation and not sending one, followed by Anne, who is followed by Bill.
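
A small simulation sketch of the first cyclic profile may help to visualize the induced play (this is my own illustration and tracks only who ends up inviting whom and when; it does not reproduce the payoff table, so it says nothing about incentives). The game ends after two periods in expectation, and the first randomizer ends up being the inviter most often:

```python
import random

PLAYERS = ("Anne", "Bill", "Chris")  # Anne randomizes first, then Bill, then Chris

def play_cyclic_profile():
    """Simulate the cyclic profile: in each period only the designated player
    randomizes (sends an invitation with probability 1/2) while the others
    stay silent.  Returns the period at which the game ends and the inviter."""
    t = 0
    while True:
        inviter = PLAYERS[t % 3]  # Anne in periods 1, 4, ...; Bill in 2, 5, ...; Chris in 3, 6, ...
        t += 1
        if random.random() < 0.5:
            return t, inviter

trials = 200_000
results = [play_cyclic_profile() for _ in range(trials)]
print(sum(t for t, _ in results) / trials)  # expected duration: 2 periods
for p in PLAYERS:
    print(p, sum(1 for _, who in results if who == p) / trials)
# Anne ends up inviting with probability 4/7, Bill with 2/7, Chris with 1/7
```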

There are other periodic equilibria, but they all share the same structure as the three equilibria described above. Why do I like this paper? The game that I described above is a simple stochastic game. Whereas discounted stochastic games have stationary equilibria, this is not the case for undiscounted stochastic games. It is known that undiscounted two-player stochastic games have an epsilon-equilibrium (Vieille, 2000), but the equilibrium strategies may be complex, and they involve threats of punishment. It is not known whether undiscounted stochastic games with more than two players have an epsilon-equilibrium: we have neither a proof that an epsilon-equilibrium always exists nor an example of a stochastic game without one. The paper by the Dutch group suggests that in multi-player stochastic games we should expect to find periodic equilibria. This was a new observation and, in my view, quite an important one. In fact I used it in subsequent papers, and I think that more can be said about periodic equilibria than what we know at present. I never forget where it all started.
