
Credit for the game that bears Colonel Blotto's name is due to Borel. It appears in a 1921 paper in French. An English translation (by Leonard Savage) may be found in a 1953 Econometrica.


 

The first appearance in print of a version of the game with Colonel Blotto’s name attached is, I believe, in The Weekend Puzzle Book by Caliban (June 1924). Caliban was the pen name of Hubert Phillips, one-time head of Economics at the University of Bristol and a puzzle contributor to The New Statesman.

Blotto itself is a slang word for inebriation. It does not, apparently, derive from the word `blot’, meaning to absorb liquid. One account credits a French manufacturer of delivery tricycles, Blotto Freres, whose machines were infamous for their instability. This inspired Laurel and Hardy to title one of their movies Blotto. In it they get blotto on cold tea, thinking it whiskey.

Over time, the Colonel has been promoted: to General in 2006 and to Field Marshal in 2011.

Update (May 2017): McLennan and Tourky seem to be the first to make the argument in this post (link to their paper).

 

They say that when Alfred Tarski came up with his theorem that the axiom of choice is equivalent to the statement that, for every infinite set {A}, {A} and {A\times A} have the same cardinality, he first tried to publish it in the French PNAS. Both venerable referees rejected the paper: Frechet argued there is no novelty in an equivalence between two well-known theorems; Lebesgue argued that there is no interest in an equivalence between two false statements. I don’t know if this ever happened but it’s a cool story. I like to think about it every time a paper of mine is rejected and the referees contradict each other.

Back to game theory. One often hears that the existence of Nash Equilibrium is equivalent to Brouwer’s fixed point theorem. Of course we all know that Brouwer implies Nash, but the other direction is trickier and less well known. I heard a satisfying argument for the first time a couple of months ago from Rida. I don’t know whether this is a folk theorem or somebody’s theorem, but it is pretty awesome and should appear in every game theory textbook.

So, assume Nash’s Theorem and let {X} be a compact convex set in {\mathbf{R}^n} and {f:X\rightarrow X} be a continuous function. We claim that {f} has a fixed point. Indeed, consider the two-player normal-form game in which the set of pure strategies of every player is {X}, and the payoffs under strategy profile {(x,y)\in X^2} is {-\|x-y\|^2} for player I and {-\|f(x)-y\|^2} for player II. Since strategy sets are compact and the payoff function is continuous, the game has an equilibrium in mixed strategies. In fact, the equilibrium strategies must be pure. (Indeed, for every mixed strategy {\mu} of player II, player 1 has a unique best response, the one concentrated on the barycenter of {\mu}). But if {(x,y)} is a pure equilibrium then it is immediate that {x=y=f(x)}.

Update: I should add that I believe that the transition from existence of mixed Nash Equilibrium in games with finite strategy sets to existence of mixed Nash Equilibrium in games with compact strategy sets and continuous payoffs is not hard. In the case of the game that I defined above, if {\{x_1,x_2,\dots\}} is a dense subset of {X} and {(\mu_n,\nu_n)\in \Delta(X)\times\Delta(X)} is a mixed equilibrium profile in the finite game with the same payoff functions in which both players are restricted to the pure strategy set {\{x_1,\dots,x_n\}}, then an accumulation point of the sequence {\{(\mu_n,\nu_n)\}_{n\geq 1}} in the weak{^\ast} topology (which exists, since {\Delta(X)} is weak{^\ast} compact) is a mixed strategy equilibrium in the original game.
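To spell out why an accumulation point is an equilibrium (a routine verification, which I sketch here under the stated assumptions that {X} is compact and the payoffs are continuous): write {U_1} and {U_2} for the players' expected payoff functions and let {(\mu,\nu)} be the accumulation point, along a convergent subsequence.

```latex
% For player I: whenever n >= k, the pure strategy x_k is available in the n-th finite game, so
\[
  U_1(\mu_n,\nu_n)\ \ge\ U_1(x_k,\nu_n) \qquad \text{for all } n\ge k .
\]
% Both sides are integrals of the bounded continuous payoff u_1 against product measures,
% so they converge along the weak* convergent subsequence, giving
\[
  U_1(\mu,\nu)\ \ge\ U_1(x_k,\nu) \qquad \text{for every } k .
\]
% Since \{x_1, x_2, \dots\} is dense in X and u_1 is continuous,
\[
  U_1(\mu,\nu)\ \ge\ U_1(x,\nu) \qquad \text{for every } x \in X ,
\]
% which is the best-response property for player I; the argument for player II is identical.
```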

 

There are two equivalent ways to understand the best response property of a Nash Equilibrium strategy. First, we can say that the player plays a mixed strategy whose expected payoff is maximal among all possible mixed strategies. Second, we can say that the player randomly chooses a pure strategy from the set of pure strategies whose expected payoff is maximal among all possible pure strategies.

So far so good, and every student of game theory is aware of this equivalence. What I think is less known is that the two perspectives are not identical for {\epsilon}-best response and {\epsilon}-equilibrium: A mixed strategy whose expected payoff is almost optimal might put some positive (though small) probability on a pure strategy which gives a horrible payoff. In this post I am going to explain why I used to think the difference between the two perspectives is inconsequential, and why, following a conversation with Ayala Mashiah-Yaakovi about her work on subgame perfect equilibrium in Borel games, I changed my mind.
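A minimal one-player example (mine, just to make the gap concrete): two actions, {a} with payoff {1} and {b} with payoff {-M} for some huge {M>0}.

```latex
% The mixed strategy sigma_delta that plays the horrible action b with probability delta
% has expected payoff
\[
  u(\sigma_\delta) \;=\; (1-\delta)\cdot 1 + \delta\cdot(-M) \;=\; 1 - \delta(1+M),
\]
% so sigma_delta is an epsilon-best response in the first sense whenever delta <= epsilon/(1+M).
% Yet it puts positive probability on b, whose payoff -M is as horrible as we like, so it is
% not a randomization over epsilon-optimal pure strategies: the second sense fails.
```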


This is the most frustrating part of an academic career: you come up with a cool idea, google around a bit for references, and discover that the Simpsons did it twenty years ago. It happened to Ronen and me recently when we were talking about computability of Nash equilibrium. The only thing left is to blog about it, so here we are.

A good starting point is the omitted paragraph from John Nash’s thesis (scanned pdf), in which Nash motivates his new idea. The paragraph is not included in the published version of the thesis; it is not clear whether this was due to editorial intervention or Nash’s own initiative.

We proceed by investigating the question: What would be a rational prediction of the behavior to be expected of rational playing the game in question? By using the principles that a rational prediction should be unique, that the players should be able to deduce and make use of it, and that such knowledge on the part of each player of what to expect the others to do should not lead him to act out of conformity with the prediction, one is led to the concept of a solution defined before.

The `concept of a solution defined before’ is what every reader of this blog knows as Nash Equilibrium in mixed strategies. This paragraph is intriguing for several reasons, not the least of which is the fact, acknowledged by Nash, that the Nash equilibrium of a game is not necessarily unique. This opens the door to the equilibrium refinements enterprise, which aims to identify the unique `rational prediction’: the equilibrium which the players jointly deduce from the description of the game. The refinements literature seems to have gone out of fashion sometime in the eighties (`embarrassed itself out’, as one prominent game theorist told me) without producing a satisfactory solution, though it is still very popular `in applications’.

Anyway, the subject matter of this post is another aspect of Nash’s argument: that the players should be able to deduce the prediction and make use of it. It is remarkable that Nash (and also von Neumann and Morgenstern before him, but I’ll leave that to another post) founded game theory not on observable behavior, as economics orthodoxy would have had it, but on an unobservable reasoning process. How can we formally model a reasoning process? At the very least, the players should somehow contain the mixed strategies in their minds, which means that the strategies can be explicitly described, i.e. that the real numbers that represent the probabilities of the actions are computable: their binary expansions, say, should be the output of a Turing machine. If this is the case then the players can also `make use’ of these mixed strategies: they have an effective way (that is, a computer program) to randomize a pure strategy according to these probabilities.
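Here is a minimal sketch, in Python, of what `containing and making use of’ a computable mixed strategy could look like (my own illustration; representing a computable real by a program that produces arbitrarily good rational approximations is one standard choice among several). The sampler below halts with probability one: the only way it can run forever is if the uniform random number it generates happens to hit one of the cumulative thresholds exactly.

```python
import random
from fractions import Fraction

# A computable real in [0, 1] is represented here by a function that, on input n,
# returns a rational within 2**-n of it (one standard representation among several).

def half():
    """The computable real 1/2."""
    return lambda n: Fraction(1, 2)

def sample(probs, rng=random.random):
    """Draw a pure action from computable probabilities probs (assumed to sum to 1).

    Generate a uniform U in [0, 1) one random bit at a time, and refine both U and
    the cumulative thresholds until U is known to lie strictly inside one cell.
    Halts with probability 1 (it can only fail to halt if U equals a threshold)."""
    k = len(probs)
    n = 2                               # current precision of the threshold approximations
    bits = 0                            # number of random bits of U generated so far
    u_lo = Fraction(0)                  # U is known to lie in [u_lo, u_lo + 2**-bits)
    while True:
        bits += 1
        bit = 1 if rng() < 0.5 else 0   # next binary digit of U
        u_lo += Fraction(bit, 2 ** bits)
        u_hi = u_lo + Fraction(1, 2 ** bits)
        # approximate the cumulative thresholds p_1, p_1 + p_2, ... to within 2**-n each
        cum, thresholds = Fraction(0), []
        for p in probs[:-1]:
            cum += p(n)
            thresholds.append(cum)
        err = (k - 1) * Fraction(1, 2 ** n)
        for i in range(k):
            left = thresholds[i - 1] + err if i > 0 else Fraction(0)
            right = thresholds[i] - err if i < k - 1 else Fraction(1)
            if left <= u_lo and u_hi <= right:
                return i                # U is certainly in the i-th cell
        n += 1                          # not separated yet: refine and draw another bit

# Example: effectively randomizing a 1/2-1/2 mixed strategy.
print(sample([half(), half()]))
```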

I hasten to say that while I refer to the agent’s mind, we must not be so narrow-mindedly earth-bound as to assume our players are human beings. Our players might arrive from another planet, where evolution did a better job than what we see around us, or from another universe where the laws of physics are different, or they may be the gods themselves. Species come and go, but concepts like rationality and reasoning — the subject matter of game theory — are eternal.

Well, can the players contain the mixed strategies in their minds? Are the predictions of game theory such that cognitive agents can reason about them and make use of them? Fortunately, they are:

Theorem 1 Every normal form game with computable payoffs admits a mixed Nash Equilibrium with computable mixed strategies.

My favorite way to see this is to use Tarski’s theorem that all real closed fields are elementarily equivalent (I also wrote about it here). The field of computable real numbers is real closed, so Nash’s Theorem, being a first-order statement (one sentence for each size of the payoff matrices), is also true in the field of computable real numbers.
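To spell out the first-order statement in question (my own spelling-out of this step; written for two players, but any finite number of players works the same way):

```latex
% "The m x n game with payoff matrices A = (a_{ij}) and B = (b_{ij}) has a mixed equilibrium",
% written in the first-order language of ordered fields with the payoffs as free variables:
\[
\exists p_1,\dots,p_m\,\exists q_1,\dots,q_n\;\Bigl(
  \bigwedge_{i} p_i \ge 0 \;\wedge\; \sum_{i} p_i = 1 \;\wedge\;
  \bigwedge_{j} q_j \ge 0 \;\wedge\; \sum_{j} q_j = 1
  \;\wedge\;
  \bigwedge_{k} \sum_{i,j} p_i a_{ij} q_j \ \ge\ \sum_{j} a_{kj} q_j
  \;\wedge\;
  \bigwedge_{\ell} \sum_{i,j} p_i b_{ij} q_j \ \ge\ \sum_{i} p_i b_{i\ell}
\Bigr).
\]
% Prefixing this with universal quantifiers over the a_{ij} and b_{ij} gives, for each fixed
% m and n, a single first-order sentence.  Nash's Theorem says it holds in the reals; by
% Tarski it therefore holds in every real closed field, in particular in the computable reals,
% and instantiating it at a game with computable payoffs yields an equilibrium whose
% probabilities are computable real numbers.
```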

The story does not end here, though. Take another look at Nash’s omitted paragraph: our players are not only supposed to be able to somehow hold the prediction in their minds and make use of it. They should also deduce it: starting from the payoff matrix, one step after another, a long sequence of arguments, each following the previous one, should culminate in the `rational prediction’ of the game. You see where this leads: the prediction should be computable from the payoff matrix. Alas,

Theorem 2 There exists no computable function that gets as input a payoff matrix with computable payoffs and outputs a mixed Nash Equilibrium of the game.
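One way to get intuition for the obstruction (my sketch of a standard continuity-style argument, not necessarily the proof in the paper linked below): the equilibrium correspondence jumps as the payoffs vary, and computable selections cannot jump.

```latex
% Consider the trivial one-player game with two actions whose payoffs are t and 0, where t
% is a computable real.  Writing p for the equilibrium probability of the first action,
% the set of equilibria is
\[
  E(t)\;=\;\begin{cases} \{1\}, & t>0,\\ {[0,1]}, & t=0,\\ \{0\}, & t<0. \end{cases}
\]
% A computable map from (a program for) the payoff t to (a program for) some p in E(t)
% would let us decide, from a program computing t, whether t >= 0 or t <= 0: approximate
% p to within 1/4; a value >= 1/2 forces p >= 1/4 > 0, hence t >= 0, while a value < 1/2
% forces p < 3/4 < 1, hence t <= 0.  A standard diagonalization shows that no such decision
% procedure exists, so no such computable equilibrium selection exists either.
```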

Bottom line: Rational players can reason about and make use of Nash’s rational prediction, but they cannot deduce it. The prediction should somehow magically pop up in their minds. Here is a link to Kislaya Prasad’s paper where these theorems were already published.
