In this post I describe an alternative proof of a nifty result that appears in a forthcoming paper by Goeree, Kushnir, Moldovanu and Shi. They show (under private values) that, given any Bayesian incentive compatible mechanism $M$, there is a dominant strategy mechanism that gives each agent the same expected surplus as $M$ provides.
For economy of exposition only, suppose 2 agents and a finite set of possible outcomes, $\{1, \ldots, K\}$. Suppose, also, the same type space, $T = \{1, \ldots, m\}$, for both. Let $f$ be the density function over types. To avoid clutter, assume the uniform distribution, i.e., $f(t) = 1/m$. Nothing in the subsequent analysis relies on this.
When agent 1 reports type $s$ and agent 2 reports type $t$, denote by $q_k(s,t)$ the probability that outcome $k$ is selected. The $q_k(s,t)$'s must be non-negative and satisfy $\sum_{k=1}^K q_k(s,t) = 1$ for every profile $(s,t)$.
Associated with each agent $i$ is a vector $a^i = (a^i_1, \ldots, a^i_K)$ that determines, along with her type, the utility she enjoys from a given allocation. In particular, given the allocation rule $q$, the utility that agent 1 of type $s$ enjoys when the other agent reports type $t$ is $s \sum_{k=1}^K a^1_k q_k(s,t)$. A similar expression holds for agent 2.
Let $Q_1(s,t) = \sum_k a^1_k q_k(s,t)$ and $Q_2(s,t) = \sum_k a^2_k q_k(s,t)$. Interpret the $Q_i$'s as the `quantity' of goods that each agent receives. Dominant strategy incentive compatibility implies that $Q_1(s,t)$ should be monotone increasing in $s$ for each fixed $t$, and that $Q_2(s,t)$ should be monotone increasing in $t$ for each fixed $s$. The interim `quantities' will be
$\bar{Q}_1(s) = \frac{1}{m}\sum_t Q_1(s,t)$ and $\bar{Q}_2(t) = \frac{1}{m}\sum_s Q_2(s,t)$.
Bayesian incentive compatibility (BIC) implies that the $\bar{Q}_i$'s should be monotone. To determine if given BIC interim `quantities' $\bar{Q}_1, \bar{Q}_2$ can be implemented via dominant strategies, we need to know if the following system is feasible:

$\frac{1}{m}\sum_t Q_1(s,t) = \bar{Q}_1(s)$ for all $s$, (1)

$\frac{1}{m}\sum_s Q_2(s,t) = \bar{Q}_2(t)$ for all $t$, (2)

$Q_1(s,t) \leq Q_1(s+1,t)$ for all $t$ and all $s < m$, (3)

$Q_2(s,t) \leq Q_2(s,t+1)$ for all $s$ and all $t < m$, (4)

$\sum_{k=1}^K q_k(s,t) = 1$ for all $(s,t)$, (5)

$q_k(s,t) \geq 0$ for all $k$ and all $(s,t)$. (6)
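To see what the feasibility question involves, here is a toy numerical check (my own, not from the paper): a tiny instance with two types and two outcomes, where $a^1 = (1,0)$ and $a^2 = (0,1)$, so that $Q_1 = q_1$ and $Q_2 = 1 - q_1$. A brute-force search looks for an ex-post rule that is monotone in each agent's own report and matches the interim quantities of a given BIC (but not dominant strategy) rule.

```python
from itertools import product

# Toy instance (invented for illustration): two agents, types {1, 2},
# two outcomes, a^1 = (1, 0) and a^2 = (0, 1), so Q1 = q_1 and Q2 = 1 - q_1.
# Probabilities are integer percentages to keep the arithmetic exact.

# A BIC rule q_1(s, t) (rows = agent 1's report s, columns = agent 2's report t):
# its interim quantities are monotone, but the ex-post rule is not.
bic_rule = [[50, 10], [30, 70]]

# Interim targets under uniform types: averages over the other agent's report.
qbar1 = [sum(row) // 2 for row in bic_rule]                                   # [30, 50]
qbar2 = [sum(100 - bic_rule[s][t] for s in range(2)) // 2 for t in range(2)]  # [60, 60]

solutions = []
grid = range(0, 101, 5)
for p11, p12, p21, p22 in product(grid, repeat=4):
    # Interim quantities must match the BIC targets for both agents.
    if p11 + p12 != 2 * qbar1[0] or p21 + p22 != 2 * qbar1[1]:
        continue
    if (100 - p11) + (100 - p21) != 2 * qbar2[0] or (100 - p12) + (100 - p22) != 2 * qbar2[1]:
        continue
    # Dominant strategies: Q1 monotone in s for fixed t, Q2 monotone in t for fixed s.
    if p11 > p21 or p12 > p22 or p11 < p12 or p21 < p22:
        continue
    solutions.append((p11, p12, p21, p22))

print(solutions)  # [(30, 30, 50, 50)]: a dominant strategy rule, same interim quantities
```

In this instance the search returns exactly one monotone ex-post rule with the required interim quantities, which is what the result promises must always exist.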
System (1-6) is feasible iff the optimal objective function value of the following program is zero:

$\max \; -\sum_s (\alpha_s + \beta_s)$

subject to

$Q_1(s,t) \leq Q_1(s+1,t)$ for all $t$ and all $s < m$, (7)

$\frac{1}{m}\sum_s Q_2(s,t) = \bar{Q}_2(t)$ for all $t$, (8)

$\frac{1}{m}\sum_s \sum_t Q_1(s,t) = \sum_s \bar{Q}_1(s)$, (9)

$Q_2(s,t) \leq Q_2(s,t+1)$ for all $s$ and all $t < m$, (10)

$\frac{1}{m}\sum_t Q_1(s,t) + \alpha_s - \beta_s = \bar{Q}_1(s)$ for all $s$, (11)

together with (5), (6), and $\alpha_s, \beta_s \geq 0$ for all $s$. Here (11) relaxes (1): $\alpha_s$ is the shortfall, and $\beta_s$ the surplus, of agent 1's interim quantity at $s$ relative to its target. The aggregate constraint (9) is implied by (1) and is retained in the relaxation. Let $(q^*, \alpha^*, \beta^*)$ be an optimal solution to this program.
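The slack-minimization idea can be checked numerically. The sketch below (my own toy instance, not from the paper: two types, two outcomes, $a^1 = (1,0)$, $a^2 = (0,1)$, interim targets $\bar{Q}_1 = (0.3, 0.5)$ and $\bar{Q}_2 = (0.6, 0.6)$) keeps the monotonicity and agent-2 interim constraints, relaxes agent 1's interim constraints, and minimizes the total slack over a grid of ex-post rules; the minimum is zero precisely when the system is feasible.

```python
from itertools import product

# Phase-I sketch on a toy instance (invented for illustration): two types,
# two outcomes, a^1 = (1, 0), a^2 = (0, 1); integer percentages throughout.
qbar1 = [30, 50]   # agent 1's interim targets
qbar2 = [60, 60]   # agent 2's interim targets

best = None
grid = range(0, 101, 5)
for p11, p12, p21, p22 in product(grid, repeat=4):
    # Keep monotonicity for both agents (the dominant strategy constraints).
    if p11 > p21 or p12 > p22 or p11 < p12 or p21 < p22:
        continue
    # Keep agent 2's interim constraints exact.
    if (100 - p11) + (100 - p21) != 2 * qbar2[0] or (100 - p12) + (100 - p22) != 2 * qbar2[1]:
        continue
    # Relax agent 1's interim constraints: total violation plays the role of alpha + beta.
    slack = abs(p11 + p12 - 2 * qbar1[0]) + abs(p21 + p22 - 2 * qbar1[1])
    best = slack if best is None else min(best, slack)

print(best)  # 0: the optimal total slack vanishes, so the system is feasible
```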
Suppose, for a contradiction, that the optimal value is negative: there is an $s$ with $\alpha^*_s > 0$, say (the case $\beta^*_s > 0$ is symmetric). I will argue that there must exist an $s'$ such that $\beta^*_{s'} > 0$. Suppose not; then for each $s'$, either $\alpha^*_{s'} > 0$ or $\alpha^*_{s'} = \beta^*_{s'} = 0$ (at optimality, it cannot be that $\alpha^*_{s'}$ and $\beta^*_{s'}$ are both non-zero, since lowering both by the same small amount would preserve (11) and raise the objective). In this case, summing the relaxed interim constraints (11) over $s'$ and subtracting the aggregate constraint (9) gives $\sum_{s'} \alpha^*_{s'} - \sum_{s'} \beta^*_{s'} = 0$. With every $\beta^*_{s'} = 0$, the left-hand side equals $\sum_{s'} \alpha^*_{s'}$. This last term is strictly positive, a contradiction. Therefore, there is an $s'$ such that $\beta^*_{s'} > 0$ but $\alpha^*_{s'} = 0$.
Let $z_{st}$ denote the point $(Q^*_1(s,t), Q^*_2(s,t))$ associated with the optimal solution. Observe that $z_{st}$ is in the convex hull, $C$, of $\{(a^1_k, a^2_k) : k = 1, \ldots, K\}$ for all $(s,t)$. Thus choosing the $q_k(s,t)$'s amounts to choosing a point $z_{st} \in C$ for each $(s,t)$. Equivalently, choosing a point $z_{st} \in C$ gives rise to a set of $q_k(s,t)$'s. For convenience assume that $z_{st}$ is in the strict interior of $C$ for all $(s,t)$ and that $C$ is full dimensional. This avoids having to deal with secondary arguments that obscure the main idea.
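The correspondence between points of $C$ and lotteries over outcomes can be made concrete. Below is a small sketch (the outcome vectors and the point are invented for illustration): given a point $z$ in the convex hull of three outcome vectors $(a^1_k, a^2_k)$, a $2 \times 2$ barycentric solve recovers non-negative weights $q_k$ summing to one.

```python
# Recover lottery weights q_k from a point z in the convex hull of
# three outcome vectors v_k = (a^1_k, a^2_k): a 2-D barycentric solve.
v = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]   # invented outcome vectors
z = (0.2, 0.5)                              # a point z_st inside their hull

# Solve q0*v0 + q1*v1 + q2*v2 = z with q0 + q1 + q2 = 1, i.e., the 2x2
# system in (q0, q1) with columns v0 - v2 and v1 - v2, right-hand side z - v2.
a, c = v[0][0] - v[2][0], v[1][0] - v[2][0]
b, d = v[0][1] - v[2][1], v[1][1] - v[2][1]
rx, ry = z[0] - v[2][0], z[1] - v[2][1]
det = a * d - b * c
q0 = (rx * d - c * ry) / det
q1 = (a * ry - rx * b) / det
q2 = 1.0 - q0 - q1

print(round(q0, 6), round(q1, 6), round(q2, 6))  # 0.2 0.5 0.3
```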
Recall, $\alpha^*_s > 0$ implies that agent 1's interim quantity at $s$ falls short of its target, which implies that $\frac{1}{m}\sum_t Q^*_1(s,t) < \bar{Q}_1(s)$; similarly, $\beta^*_{s'} > 0$ implies that $\frac{1}{m}\sum_t Q^*_1(s',t) > \bar{Q}_1(s')$. Take all points $\{z_{st} : t \in T\}$ and shift them horizontally to the right by $\epsilon > 0$. Call these new points $\hat{z}_{st}$. Observe that $\hat{z}_{st} \in C$ for all $t$ provided $\epsilon$ is small enough, because each $z_{st}$ lies in the strict interior of $C$. Next, take all points $\{z_{s't} : t \in T\}$ and shift them horizontally to the left by $\epsilon$ to form new points $\hat{z}_{s't}$. These points are also in $C$. Leave all other points $z_{s''t}$ unchanged.
Because the vertical coordinates of all points were left unchanged, (8) and (10) are satisfied by this choice of points. Because $\{z_{st} : t \in T\}$ and $\{z_{s't} : t \in T\}$ were shifted in opposite directions along the horizontal and by the same amount, (9) is still true. Finally, because all points in $\{z_{st} : t \in T\}$ and in $\{z_{s't} : t \in T\}$ were shifted by the same amount, (7) continues to hold for $\epsilon$ sufficiently small.
The shift leftwards of $\{z_{s't} : t \in T\}$ reduces $\beta^*_{s'}$ by $\epsilon$, while the rightward shift of $\{z_{st} : t \in T\}$ reduces $\alpha^*_s$ by $\epsilon$; the relaxed interim constraints (11) continue to hold with these adjusted slacks. Thus, we get a new solution with a higher objective function value, a contradiction.
If $z_{st}$ and $z_{s't}$ are not in the interior of $C$ but on its boundary, then horizontal shifts alone may place them outside of $C$. In the case of $z_{st}$, which must move rightwards, this can only happen if $z_{st}$ lies on the right-hand boundary of $C$. In this case, shift $z_{st}$ across and to the right by $\epsilon$ as well and then downwards by the same amount, so that it stays within $C$. This would have to be matched by a corresponding upward shift by some point $z_{s''t}$ (same $t$, different $s''$), so as to preserve (8) and (10). Similarly with $z_{s't}$.
Thanks to Alexey Kushnir and Ahmad Peivandi for comments.
Every finite-horizon game in extensive form in which the number of actions in every node is finite has a subgame-perfect equilibrium. Indeed, consider a smallest subgame, that is, a subgame that contains no proper subgames. By Nash’s theorem this subgame has a Nash equilibrium in mixed strategies. Since this subgame contains no proper subgame, this Nash equilibrium is a subgame-perfect equilibrium of the subgame. We can then replace this whole subgame by a single node with terminal payoff that coincides with the equilibrium payoff of the subgame, and continue inductively to construct a subgame-perfect equilibrium (in mixed strategies) in the whole game.
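For perfect-information trees, where every node begins a subgame, the inductive argument is plain backward induction. Here is a minimal sketch of that special case (the tree encoding, payoffs, and function name are mine, invented for illustration):

```python
def backward_induction(node):
    """Solve a finite perfect-information game tree by backward induction.
    node is either ('terminal', (payoff_player0, payoff_player1))
    or (player_index, [child_nodes]). Returns (payoffs, action_plan)."""
    kind, data = node[0], node[1]
    if kind == 'terminal':
        return data, []
    player = kind
    # Solve each subgame, then let the mover pick the child best for her.
    results = [backward_induction(child) for child in data]
    best = max(range(len(results)), key=lambda i: results[i][0][player])
    payoffs, plan = results[best]
    return payoffs, [best] + plan

# A two-stage game: player 0 moves first, then player 1.
game = (0, [
    (1, [('terminal', (3, 1)), ('terminal', (0, 2))]),
    (1, [('terminal', (2, 2)), ('terminal', (1, 0))]),
])
print(backward_induction(game))  # ((2, 2), [1, 0])
```

Note how player 0 avoids the branch with the payoff (3, 1): in that subgame player 1 would deviate to (0, 2), exactly the reasoning the inductive replacement of subgames by their equilibrium payoffs captures.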
When uncountably many actions are available at some nodes, a subgame-perfect equilibrium need not exist (Harris, Reny, and Robson, 1995).
What happens in infinite-horizon games? When the horizon is infinite, a Nash equilibrium need not exist. Indeed, consider a one-player infinite-horizon game in which the player, Alice, has two possible actions, A and B, at every stage. Alice’s payoff is 1-(1/n), where n is the first stage in which she chooses the action B, if n is finite. If Alice never chooses the action B, her payoff is 0. In this game, Alice would like to choose the action B for the first time as late as possible, and, whenever she plans to choose B, she is better off waiting one more stage. Therefore Alice has no optimal strategy, and there is no equilibrium.
Plainly this toy game does have an ε-equilibrium: choose B for the first time at some stage n ≥ 1/ε, which secures a payoff within ε of the supremum 1.
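For concreteness, here is the computation behind that claim (a small sketch; the function name is mine):

```python
import math

def payoff(n):
    """Alice's payoff when she chooses B for the first time at stage n."""
    return 1 - 1 / n

eps = 0.01
n = math.ceil(1 / eps)   # the first stage at which stopping is eps-optimal
print(n)                 # 100; payoff(n) = 0.99, within eps of the supremum 1
```

Waiting one more stage is always a strict improvement, so the supremum 1 is never attained, but stopping at stage ⌈1/ε⌉ comes within ε of it.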
Does there exist a subgame-perfect ε-equilibrium in every infinite-horizon perfect-information extensive-form game? It turns out that the answer is negative. The following example, to appear in a new paper of János Flesch, Jeroen Kuipers, Ayala Mashiah-Yaakovi, Gijs Schoenmakers, Eran Shmaya, your humble servant, and Koos Vrieze, proves this point.
Alice and Bob play an infinite-stage game. In each stage one of them is the active player who chooses the action (and the other does nothing). The active player has two actions, A and B, that correspond to the identity of the next active player. That is, if the current active player chooses action A, then Alice is the active player in the next stage, while if the current active player chooses action B, then Bob is the active player in the next stage.
What are the payoffs?
- If there is a stage t such that, from stage t and on, Alice is the active player, then the payoff to Alice is -1 and Bob’s payoff is 2. In such a case we say that the game absorbs with Alice.
- If there is a stage t such that, from stage t and on, Bob is the active player, then the payoff to Alice is -2 and Bob’s payoff is 1. In such a case we say that the game absorbs with Bob.
- Otherwise the payoff to both is 0.
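In code, the payoff rule can be stated directly. The sketch below (the function and its argument encoding are mine) classifies a play by its tail: who, if anyone, is eventually the only active player.

```python
def payoffs(eventually_active):
    """Payoffs (Alice, Bob) as a function of the tail of the play:
    'alice' if from some stage on Alice is always active (absorbs with Alice),
    'bob' if from some stage on Bob is always active (absorbs with Bob),
    None if the active player changes infinitely often (no absorption)."""
    if eventually_active == 'alice':
        return (-1, 2)   # Alice: -1, Bob: 2
    if eventually_active == 'bob':
        return (-2, 1)   # Alice: -2, Bob: 1
    return (0, 0)        # the active player changes infinitely often

print(payoffs('alice'), payoffs('bob'), payoffs(None))  # (-1, 2) (-2, 1) (0, 0)
```

Note that each player prefers absorption with the other player to absorption with herself, and both prefer non-absorption to absorbing with Bob; this tension drives the argument below.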
Let us argue that in this game there is no subgame-perfect ε-equilibrium, provided ε is sufficiently small. So fix ε>0 sufficiently small and suppose by contradiction that there is a subgame-perfect ε-equilibrium.
In any subgame in which Bob is the first active player he can guarantee 1 by always choosing B. In particular, Bob’s payoff in this subgame must be at least 1-ε.
Fix now a subgame in which Alice is the first active player. Can it be that the play absorbs with Alice with probability less than ε? If this happens, then, since Bob’s payoff is at least 1-ε, the probability that the play absorbs with Bob is at least 1-3ε. But then Alice’s payoff is way below -1, which she can guarantee by always playing A. Thus, in any subgame in which Alice is the first active player, play absorbs with Alice with probability at least ε.
By Levy’s 0-1 law this conclusion implies that with probability 1 the play absorbs with Alice. This in turn implies that play will not absorb with Bob: he will never start an infinite sequence of B’s. But then Alice has no incentive to play A: she will always play B and get 0.
This example shows that, in contrast with the well-beloved concept of ε-equilibrium, we do not have a good concept of subgame-perfect ε-equilibrium in infinite-horizon games. In another post I will comment on the concept of trembling-hand equilibrium in infinite-horizon perfect-information games.
In Israel, when a graduate student completes writing his thesis, the thesis is sent to several referees, and based on their reports the university decides whether to grant the Ph.D. degree to the student.
France has a different system. Two referees provide reports that determine whether the student is allowed to defend his thesis. During the defense, a jury made up of three judges, the two referees, and the Ph.D. adviser listens to the student’s talk and decides whether or not to grant the Ph.D. degree. Given that the student’s family makes up the majority of the audience, and given that they have already prepared a banquet in the adjacent room, the outcome is pretty obvious. I guess that 100 years ago, when commuting was more difficult and the student’s family was not in the audience, the outcome varied more than it does today.
The “polymath project” is a blog where serious unsolved problems in math are attacked in a massively collaborative way. A problem is posted and hundreds of users suggest lines of attack. Of course, a lot of the progress is made by a few superstars, but it is still a cool idea. The cynical economist in me must point out that while the supposed m.o. of the project is to harness the beauty of collaboration and cooperation in the age of the web, surely the fuel behind the engine is the competitive instincts of the participants. Nothing wrong with that, though; if not for the competitive drive we might still all live in the jungle.
Also, once a year they run a “mini-polymath” project where they attack a problem from the International Math Olympiad. Of course this is very different from an unsolved problem; everyone knows there is a solution that can reasonably be found by a single person in an hour or two. Still a fun idea. This year’s problem, posted yesterday, is phrased as a game. I’ve played with it for a few minutes and I suspect that knowledge of specific theorems or concepts in game theory will not be useful, though of course the habit of thinking strategically will. (Rubinstein would say that this is true of most real-life applications of game theory as well.) Try your hand, or look at the comments, which surely have spoilers by now as it has been up for about a day.
BusinessInsider reports that Paul Krugman sees two possible outcomes to the Euro crisis, both of which are considered unacceptable by the Germans at present:
“I say it’s 50/50 … Either the Germans have to accept something they consider unacceptable, or they have to accept something, the breakup of the euro, that they consider something unacceptable.”
Read the original article here. I still have to fully understand and digest the game-theoretic implications of Krugman’s point of view.
This column by David Leonhardt cites the recent Supreme Court ruling as an example showing that prediction markets aren’t so great. After all, Intrade was putting the chances of the individual mandate being found unconstitutional at 75%.
But concluding anything from this about the prediction market being “wrong” is, well, wrong. If *everything* that Intrade said would happen with probability 75% indeed happened, *that* would be a clear sign of a market inefficiency. If only half of such events happened, that would constitute an inefficiency also. But a single event with market probability .75 failing to happen is no more a sign that the market is inefficient or “wrong” than the Yankees losing a game to the Royals, however much more important the event may be.
What should we expect from Intrade? Barring insider trading, we can expect a reasonably efficient aggregation of publicly available information. If there is little public information, we will see probabilities that tend more towards .5 than to the extremes, and they will often be “wrong” if one defines this as being on the wrong side of .5, but non-extreme predictions are supposed to be “wrong” a substantial minority of the time. Of course, if you were able to seduce a Supreme Court clerk, you might have been able to make a better prediction. This doesn’t expose a market flaw any more than the potential profits from insider trading expose a flaw in the conventional stock market.
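This point can be quantified (a sketch with illustrative numbers, not Intrade data): a single miss of a 0.75 event is a one-in-four outcome, while seeing at most half of twenty independent 0.75 events occur really would be evidence of miscalibration.

```python
from math import comb

p = 0.75  # the market's probability for the event

# A single miss of a p = 0.75 event is a one-in-four outcome, nothing remarkable.
single_miss = 1 - p
print(single_miss)  # 0.25

# But if "everything at 75%" happened only half the time, calibration would fail:
# probability that at most 10 of 20 independent p = 0.75 events occur.
tail = sum(comb(20, k) * p**k * (1 - p)**(20 - k) for k in range(11))
print(round(tail, 4))  # roughly 0.014: strong evidence of miscalibration
```

One miss tells us almost nothing; it is only the long-run frequency across many such forecasts that can convict a market of inefficiency.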
Roberts’ vote was no more predictable to an outside observer than a baseball pitcher’s unexpectedly good or bad game, however many reasons we may find for it ex post. Unpredictable things happen. This is why the probability was .75 and not 1. Many pundits acted as if striking down the law was near-certain, so I would say the market showed some wisdom in placing the probability at only .75.