There are two equivalent ways to understand the best response property of a Nash Equilibrium strategy. First, we can say that the player plays a mixed strategy whose expected payoff is maximal among all possible mixed strategies. Second, we can say that the player randomly chooses a pure strategy from the set of pure strategies whose expected payoff is maximal among all possible pure strategies.
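To make the equivalence concrete, here is a toy numerical check (the payoffs are made up): a mixed strategy is a best response exactly when it only mixes over pure strategies of maximal expected payoff.

```python
# Row player's payoff matrix (rows = her pure strategies); toy numbers.
A = [[3.0, 0.0],
     [1.0, 1.0],
     [0.0, 3.0]]
q = [0.5, 0.5]  # opponent's mixed strategy

# Second view: expected payoff of each pure strategy, and the maximizers.
pure_payoffs = [sum(a_ij * q_j for a_ij, q_j in zip(row, q)) for row in A]
best = max(pure_payoffs)
support = [i for i, u in enumerate(pure_payoffs) if u == best]

def expected_payoff(p):
    return sum(p_i * u_i for p_i, u_i in zip(p, pure_payoffs))

# First view: a mixed strategy is optimal iff it only mixes over maximizers.
p_good = [0.5, 0.0, 0.5]   # supported on the argmax set {0, 2}
p_bad = [0.4, 0.2, 0.4]    # puts weight on the suboptimal strategy 1

print(best, support)            # 1.5 [0, 2]
print(expected_payoff(p_good))  # 1.5, optimal
print(expected_payoff(p_bad))   # strictly below 1.5
```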
So far so good, and every student of game theory is aware of this equivalence. What I think is less known is that the two perspectives are not identical for $\epsilon$-best response and $\epsilon$-equilibrium: a mixed strategy whose expected payoff is within $\epsilon$ of the optimum might put some positive (though small) probability on a pure strategy which gives a horrible payoff. In this post I am going to explain why I used to think the difference between the two perspectives is inconsequential, and why, following a conversation with Ayala Mashiah-Yaakovi about her work on subgame perfect equilibrium in Borel games, I changed my mind.
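For a concrete instance of this divergence (the numbers are my own toy example): an almost-optimal mixed strategy can still put positive weight on a disastrous pure strategy, while an almost-optimal pure strategy never would.

```python
# Expected payoffs of the three pure strategies against the opponent's play.
pure_payoffs = [1.0, 0.9, -100.0]   # strategy 2 is horrible
eps = 0.2

p = [0.999, 0.0, 0.001]  # mixes in the horrible strategy with prob 0.001
value = sum(pi * ui for pi, ui in zip(p, pure_payoffs))
optimum = max(pure_payoffs)

print(value)                   # 0.999 - 0.1 = 0.899
print(optimum - value <= eps)  # True: p is an eps-best response...
print(p[2] > 0)                # True: ...yet it plays the horrible strategy
```

By contrast, a pure strategy within $\epsilon$ of the optimum here could only be strategy 0 or 1, never the horrible one.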
This is the most frustrating part of an academic career: you come up with a cool idea, google around a bit for references, and discover that the Simpsons did it twenty years ago. It happened to Ronen and me recently when we were talking about computability of Nash equilibrium. The only thing left to do is blog about it, so here we are.
A good starting point is the omitted paragraph from John Nash's thesis (scanned pdf), in which Nash motivates his new idea. The paragraph is not included in the published version of the thesis; it is not clear whether this was due to editorial intervention or Nash's own initiative.
We proceed by investigating the question: What would be a rational prediction of the behavior to be expected of rational playing the game in question? By using the principles that a rational prediction should be unique, that the players should be able to deduce and make use of it, and that such knowledge on the part of each player of what to expect the others to do should not lead him to act out of conformity with the prediction, one is led to the concept of a solution defined before.
The `concept of a solution defined before' is what every reader of this blog knows as Nash Equilibrium in mixed strategies. This paragraph is intriguing for several reasons, not the least of them the fact, acknowledged by Nash, that the Nash equilibrium of a game is not necessarily unique. This opens the door to the equilibrium refinements enterprise, which aims to identify the unique `rational prediction': the equilibrium which the players jointly deduce from the description of the game. The refinements literature seems to have gone out of fashion sometime in the eighties (`embarrassed itself out', as one prominent game theorist told me) without producing a satisfactory solution, though it is still very popular `in applications'.
Anyway, the subject matter of this post is another aspect of Nash's argument: that the players should be able to deduce the prediction and make use of it. It is remarkable that Nash (and also von Neumann and Morgenstern before him, but I'll leave that to another post) founded game theory not on observable behavior, as economics orthodoxy would have had it, but on an unobservable reasoning process. How can we formally model a reasoning process? At the very least, the players should somehow contain the mixed strategies in their minds, which means that the strategies can be explicitly described, i.e. that the real numbers that represent the probabilities of the actions are computable: their, say, binary expansions should be the output of a Turing Machine. If this is the case then the players can also `make use' of these mixed strategies: they have an effective way (that is, a computer program) to randomize a pure strategy according to these probabilities.
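To make the `effective use' point concrete, here is a sketch (in Python; the interface and names are mine) of how a program can randomize with a probability it only knows through a routine producing its binary digits: draw a uniform number bit by bit and lazily compare it to the digits, stopping at the first disagreement.

```python
import random

def bernoulli_from_digits(digit, rng):
    """Return True with probability 0.b1 b2 b3... (binary), where digit(i)
    is the i-th bit. Terminates with probability one after finitely many bits."""
    i = 1
    while True:
        u_bit = rng.getrandbits(1)  # next bit of a uniform U in [0, 1)
        p_bit = digit(i)
        if u_bit < p_bit:
            return True    # U < p is decided
        if u_bit > p_bit:
            return False   # U > p is decided
        i += 1             # bits agree so far; examine the next one

rng = random.Random(0)
# p = 0.101010... in binary = 2/3
samples = [bernoulli_from_digits(lambda i: i % 2, rng) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to 2/3
```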
I hasten to say that while I refer to the agent's mind, we must not be so narrow-mindedly earth-bound as to assume our players are human beings. Our players might arrive from another planet, where evolution did a better job than what we see around us, or from another universe where the laws of physics are different, or they may be the gods themselves. Species come and go, but concepts like rationality and reasoning, the subject matter of game theory, are eternal.
Well, can the players contain the mixed strategies in their minds? Are the predictions of game theory such that cognitive agents can reason about them and make use of them? Fortunately, they are:
Theorem 1 Every normal form game with computable payoffs admits a mixed Nash Equilibrium with computable mixed strategies.
My favorite way to see this is to use Tarski's Theorem that all real closed fields are elementarily equivalent (I also wrote about it here). Thus Nash's Theorem, being a first order statement, is also true in the real closed field of computable real numbers.
The story does not end here, though. Take another look at Nash's omitted paragraph: our players are not only supposed to be able to somehow hold the prediction in their minds and make use of it. They should also deduce it: starting from the payoff matrix, one step after another, a long sequence of arguments, each following from the previous one, should culminate in the `rational prediction' of the game. You see where this leads: the prediction should be computable from the payoff matrix. Alas,
Theorem 2 There exists no computable function that gets as input a payoff matrix with computable payoffs and outputs a mixed Nash Equilibrium of the game.
Bottom line: Rational players can reason about and make use of Nash’s rational prediction, but they cannot deduce it. The prediction should somehow magically pop up in their minds. Here is a link to Kislaya Prasad’s paper where these theorems were already published.
This post is a sequel to my previous ad of a joint paper with Yaron, in which we prove existence of pure $\epsilon$-equilibria in certain games. I am now going to make a fuss over the fact that our result is essentially a collection of logical relations between linear inequalities, yet its proof seems to require Brouwer's fixed point theorem.
I start by emphasizing game theory's reliance on Brouwer's Theorem to prove Nash's Theorem, an outrage with which I have more or less already learned to live. I call this an outrage because Nash's Theorem is essentially a theorem about logical relationships between algebraic inequalities. More formally, fix integers $n,k$ and let $\Phi_{n,k}$ be the assertion that every $n$-player game with $k$ strategies for every player admits a mixed Nash Equilibrium. Then $\Phi_{n,k}$ is a first order sentence in the language of ordered fields. The signature of this language is $(0,1,<,+,\cdot)$, where $0,1$ are constant symbols, $<$ is a relation symbol and $+,\cdot$ are binary function symbols. (Skip the next paragraph if this is obvious to you.)
Indeed, let $x=(x_{i,a})$ be a set of $n\cdot k^n$ variables, representing the payoff matrix of an $n$-player game with $k$ strategies for every player, and let $p=(p_{i,b})$ be a set of $n\cdot k$ variables representing a mixed strategy profile. Then

$\Phi_{n,k} \;=\; \forall x\, \exists p\; \varphi(x,p),$

where $\varphi(x,p)$ is a formula that says that $p$ is a mixed Nash equilibrium in a game with payoff matrix $x$. This is a conjunction of formulas that assert that $p$ is indeed a mixed strategy profile (nonnegative elements which sum to $1$), and that if player $i$ plays an action with positive probability under this profile then player $i$ doesn't gain from deviating to any pure strategy. The last assertion involves a somewhat unappealing term (the payoff for player $i$ under profile $p$ when the payoff matrix is $x$), but this term is just products and additions of variables.
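For concreteness, here is one way to spell out $\varphi(x,p)$ (the notation is my own, chosen to match the description above):

```latex
% Writing U_i(x, q) = \sum_{s} x_{i,s} \prod_j q_{j,s_j} for player i's
% expected payoff under profile q with payoff matrix x (s ranges over
% pure strategy profiles), the formula is the conjunction
\varphi(x,p) \;=\;
    \bigwedge_{i,b} \bigl( p_{i,b} \ge 0 \bigr)
    \;\wedge\; \bigwedge_{i} \Bigl( \sum_{b} p_{i,b} = 1 \Bigr)
    \;\wedge\; \bigwedge_{i} \bigwedge_{a,b}
    \Bigl( p_{i,a} > 0 \;\rightarrow\;
      U_i\bigl(x,(a,p_{-i})\bigr) \ge U_i\bigl(x,(b,p_{-i})\bigr) \Bigr).
```

The first two conjuncts say $p$ is a mixed strategy profile; the last says every action played with positive probability is at least as good as any pure deviation. Every term is built from $+$, $\cdot$, $0$, $1$ and $<$, so this is indeed first order in the language of ordered fields.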
Now, since by Tarski's Theorem all real closed fields satisfy the same first order sentences, it follows that Nash's Theorem is true in every real closed field! Here is an interesting corollary of this conclusion: every game in which the payoffs are algebraic numbers has an equilibrium in which the probabilities are algebraic numbers. Here is a more interesting corollary: in discounted stochastic games, an equilibrium strategy can be expressed as a fractional Laurent series in the discount factor. This appears in a seminal paper (jstor) of Bewley and Kohlberg, who are to my knowledge the first to make this observation. I presented this paper in a student seminar back in grad school, probably the first paper in game theory I ever read, and it is still one of my favorites.
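As a sanity check of the first corollary, here is a toy 2x2 zero-sum game of my own with an algebraic irrational payoff, whose equilibrium probabilities come out algebraic, as promised (verified numerically):

```python
import math

r2 = math.sqrt(2)
A = [[r2, 0.0],   # row player's payoffs; sqrt(2) is algebraic but irrational
     [0.0, 1.0]]

# Indifference conditions give p = q = 1/(1 + sqrt(2)) = sqrt(2) - 1,
# an algebraic number.
p = 1.0 / (1.0 + r2)   # row player's probability of the first row
q = 1.0 / (1.0 + r2)   # column player's probability of the first column

# Against q, both of the row player's pure strategies pay the same.
row_top = q * A[0][0] + (1 - q) * A[0][1]
row_bot = q * A[1][0] + (1 - q) * A[1][1]
print(abs(row_top - row_bot) < 1e-12)   # True: row player is indifferent

# Against p, both of the column player's pure strategies cost the same.
col_left = p * A[0][0] + (1 - p) * A[1][0]
col_right = p * A[0][1] + (1 - p) * A[1][1]
print(abs(col_left - col_right) < 1e-12)  # True: column player is indifferent
```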
Anyway, back to pure $\epsilon$-equilibrium.
Among game theoretic concepts, mixed strategy is arguably the most difficult to digest. We don't see people tossing coins all the time, and it's difficult to justify a rational decision as based on Lady Fortuna's unpredictable caprices. The case of Nash Equilibrium is especially disturbing: if you are indifferent between a couple of strategies, then why bother randomizing between them according to the mixture prescribed by the equilibrium? Just pick one of these strategies arbitrarily and get it over with.
I know of two types of answers that game theory gives for this conundrum. One, which may be called `interpretation of mixed strategies', argues that the mixed strategies in Nash equilibrium do not reflect an actual randomization performed by the players: epistemic game theory interprets mixed strategies as opponents' beliefs about a player's (non-randomized) strategy; Harsanyi's purification approach embeds a normal form game in a larger game with incomplete information and a pure equilibrium. The other type of answer identifies classes of games which admit a pure equilibrium, such as games with strategic complementarity and potential games.
In my new paper with Yaron (pdf) we suggest another class of games which admit a pure $\epsilon$-equilibrium, meaning a pure strategy profile from which no player gains more than $\epsilon$ by deviating. These are games in which a player's payoff does not change much if one of her opponents changes his strategy:
Math and open problems below the fold…
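As a small illustration of the definition (the helper name and the toy game are mine, not from the paper), checking whether a pure profile of a two-player game is an $\epsilon$-equilibrium is a matter of bounding each player's best unilateral deviation gain:

```python
def is_pure_eps_equilibrium(payoffs1, payoffs2, profile, eps):
    """Check that no player gains more than eps by deviating from profile."""
    i, j = profile
    # Player 1's best gain from switching her row, holding column j fixed.
    gain1 = max(payoffs1[a][j] for a in range(len(payoffs1))) - payoffs1[i][j]
    # Player 2's best gain from switching his column, holding row i fixed.
    gain2 = max(payoffs2[i][b] for b in range(len(payoffs2[0]))) - payoffs2[i][j]
    return gain1 <= eps and gain2 <= eps

# Toy game: at (0, 0) each player could gain 0.05 by deviating, so (0, 0)
# is not an exact equilibrium but is a pure 0.1-equilibrium.
P1 = [[1.0, 0.0],
      [1.05, 0.0]]
P2 = [[1.0, 1.05],
      [0.0, 0.0]]
print(is_pure_eps_equilibrium(P1, P2, (0, 0), 0.1))    # True
print(is_pure_eps_equilibrium(P1, P2, (0, 0), 0.01))   # False
```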