
In this post I describe an alternative proof of a nifty result that appears in a forthcoming paper by Goeree, Kushnir, Moldovanu and Shi. They show that, under private values, for any Bayesian incentive compatible mechanism, M, there is a dominant strategy mechanism that gives each agent the same expected surplus as M does.

For economy of exposition only, suppose 2 agents and a finite set of possible outcomes, {A}. Suppose also the same type space, {\{1, \ldots, m\}}, for both agents. Let {f(\cdot)} be the density function over types. To avoid clutter, assume the uniform distribution, i.e., {f(\cdot) = \frac{1}{m}}. Nothing in the subsequent analysis relies on this.

When agent 1 reports type {s} and agent 2 reports type {t}, denote by {z_r(s,t) } the probability that outcome {r \in A} is selected. The {z}‘s must be non-negative and satisfy {\sum_{r \in A}z_r(s,t) = 1.}

Associated with each agent {i} is a vector {\{a_{ir}\}_{r \in A}} that determines, along with her type, the utility she enjoys from a given allocation. In particular, given the allocation rule {z}, the utility that agent {1} of type {t} enjoys when the other agent reports type {s} is {\sum_{r \in A}ta_{1r}z_r(t,s).} A similar expression holds for agent 2.

Let {q_i(s,t) = \sum_{r \in A}a_{ir}z_r(s,t).} Interpret the {q}‘s as the `quantity’ of goods that each agent receives. Dominant strategy incentive compatibility implies that {q_1(s,t)} should be monotone increasing in {s} for each fixed {t} and {q_2(s,t)} should be monotone increasing in {t} for each fixed {s}. The interim `quantities’ will be {mQ_1(s) = \sum_t\sum_{r \in A}a_{1r}z_r(s,t)} and {mQ_2(t) = \sum_s\sum_{r \in A}a_{2r}z_r(s,t)}. Bayesian incentive compatibility (BIC) implies that the {Q_i}‘s should be monotone. To determine whether given BIC interim `quantities’ {Q_i} can be implemented via dominant strategies, we need to know whether the following system is feasible:

\displaystyle \sum_{r \in A}a_{1r}[z_r(i,j) - z_r(i-1,j)] \geq 0\,\, \forall\, j,\,\, \forall \,\, i = 2, \ldots, m \ \ \ \ \ (1)

\displaystyle \sum_{r \in A}a_{2r}[z_r(i,j) - z_r(i,j-1)] \geq 0\,\, \forall\, i,\,\, \forall \,\, j = 2, \ldots, m \ \ \ \ \ (2)

\displaystyle \sum_t\sum_{r \in A}a_{1r}z_r(s,t) = mQ_1(s)\,\, \forall\, s \ \ \ \ \ (3)

\displaystyle \sum_s\sum_{r \in A}a_{2r}z_r(s,t) = mQ_2(t)\,\, \forall\, t \ \ \ \ \ (4)

\displaystyle \sum_{r \in A}z_r(s,t) = 1\,\, \forall s,t \ \ \ \ \ (5)

\displaystyle z_r(i,j) \geq 0\,\, \forall i,j,r \ \ \ \ \ (6)
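As an aside, the feasibility question is easy to hand to an LP solver. Below is a minimal numerical sketch (not part of the proof, and assuming numpy and scipy are available); the instance, two types and two outcomes with a made-up mechanism z0, is invented for illustration. Here z0 is BIC (its interim quantities are monotone) but not dominant-strategy monotone, and the solver is asked whether system (1-6) has a solution.

```python
# Feasibility check of system (1)-(6) with an off-the-shelf LP solver
# (a sketch, assuming numpy and scipy; the instance is invented).
import numpy as np
from scipy.optimize import linprog

m, R = 2, 2                      # two types per agent, two outcomes
a1 = np.array([1.0, 0.0])        # agent 1 only values outcome 0
a2 = np.array([0.0, 1.0])        # agent 2 only values outcome 1

# z0[i, j, r]: a made-up BIC mechanism. Its interim quantities are
# monotone, but z1 is not monotone in i for each fixed j, so z0 is
# Bayesian but not dominant-strategy incentive compatible.
z1 = np.array([[0.3, 0.6],
               [0.8, 0.4]])      # probability of outcome 0 at reports (i, j)
z0 = np.stack([z1, 1.0 - z1], axis=-1)

mQ1 = np.einsum('ijr,r->i', z0, a1)   # m*Q_1(i), as in (3)
mQ2 = np.einsum('ijr,r->j', z0, a2)   # m*Q_2(j), as in (4)

n = m * m * R                    # one variable z_r(i, j) per (i, j, r)
idx = lambda i, j, r: (i * m + j) * R + r

A_ub, b_ub, A_eq, b_eq = [], [], [], []
for j in range(m):               # (1): q_1(i, j) monotone in i for fixed j
    for i in range(1, m):
        row = np.zeros(n)
        for r in range(R):
            row[idx(i, j, r)] -= a1[r]
            row[idx(i - 1, j, r)] += a1[r]
        A_ub.append(row); b_ub.append(0.0)
for i in range(m):               # (2): q_2(i, j) monotone in j for fixed i
    for j in range(1, m):
        row = np.zeros(n)
        for r in range(R):
            row[idx(i, j, r)] -= a2[r]
            row[idx(i, j - 1, r)] += a2[r]
        A_ub.append(row); b_ub.append(0.0)
for i in range(m):               # (3): match agent 1's interim quantities
    row = np.zeros(n)
    for j in range(m):
        row[[idx(i, j, r) for r in range(R)]] = a1
    A_eq.append(row); b_eq.append(mQ1[i])
for j in range(m):               # (4): match agent 2's interim quantities
    row = np.zeros(n)
    for i in range(m):
        row[[idx(i, j, r) for r in range(R)]] = a2
    A_eq.append(row); b_eq.append(mQ2[j])
for i in range(m):               # (5): the z_r(i, j) sum to one
    for j in range(m):
        row = np.zeros(n)
        row[[idx(i, j, r) for r in range(R)]] = 1.0
        A_eq.append(row); b_eq.append(1.0)

res = linprog(np.zeros(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("dominant strategy implementation exists:", res.status == 0)
```

On this instance the answer is yes: one feasible choice replaces z1 by [[0.5, 0.4], [0.6, 0.6]], which is monotone in each agent's own report and matches the interim quantities. The proof, of course, needs more than a solver's verdict, which is where the following reformulation comes in.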

System (1-6) is feasible iff the optimal objective function value of the following program is zero:

\displaystyle \min \sum_{i,j}w_{ij} + \sum_{i,j}h_{ij}

subject to
\displaystyle \sum_{r \in A}a_{1r}[z_r(i,j) - z_r(i-1,j)] + w_{ij} - u_{ij} = 0\,\, \forall\, j,\,\, \forall \,\, i = 2, \ldots, m\,\, (\mu_{ij}) \ \ \ \ \ (7)

\displaystyle \sum_{r \in A}a_{2r}[z_r(i,j) - z_r(i,j-1)] + h_{ij} - v_{ij} = 0\,\, \forall\, i,\,\, \forall \,\, j = 2, \ldots, m\,\, (\nu_{ij}) \ \ \ \ \ (8)

\displaystyle \sum_j\sum_{r \in A}a_{1r}z_r(i,j) = mQ_1(i)\,\, \forall\, i\,\, (\alpha_i) \ \ \ \ \ (9)

\displaystyle \sum_i\sum_{r \in A}a_{2r}z_r(i,j) = mQ_2(j)\,\, \forall\, j\,\, (\beta_j) \ \ \ \ \ (10)

\displaystyle \sum_{r \in A}z_r(i,j) = 1\,\, \forall i,j\,\, (\lambda_{ij}) \ \ \ \ \ (11)

\displaystyle w_{ij},\, h_{ij},\, u_{ij},\, v_{ij},\, z_r(i,j) \geq 0\,\, \forall i,j,r \ \ \ \ \ (12)

Let {(z^*, w^*, h^*, u^*, v^*)} be an optimal solution to this program.
Suppose, for a contradiction, that there is a pair {(p,q)} such that {w^*_{pq} > 0}. I will argue that there must then exist an {s} such that {u^*_{ps} > 0}. Suppose not; then {u^*_{pj} = 0} for every {j} (note that at optimality it cannot be that {w^*_{pj}} and {u^*_{pj}} are both non-zero). In this case

\displaystyle m[Q_1(p)-Q_1(p-1)] = \sum_j\sum_{r \in A}a_{1r}z^*_r(p,j) - \sum_j\sum_{r \in A}a_{1r}z^*_r(p-1,j)

\displaystyle = \sum_j\sum_{r \in A}a_{1r}[z^*_r(p,j) - z^*_r(p-1,j)] = \sum_j[u^*_{pj} - w^*_{pj}] = -\sum_j w^*_{pj} < 0,

where the third equality follows from (7) and the supposition that {u^*_{pj} = 0} for all {j}. This contradicts the monotonicity of {Q_1} implied by BIC. Therefore, there is an {s} such that {w^*_{ps} = 0} but {u^*_{ps} > 0}.

Let {Z_1(i,j) = \sum_{r \in A}a_{1r}z^*_r(i,j)}, {Z_2(i,j) = \sum_{r \in A}a_{2r}z^*_r(i,j)} and denote by {Z(i,j)} the point {(Z_1(i,j), Z_2(i,j))}. Observe that {Z(i,j)} is in the convex hull, {K}, of {\{(a_{1r}, a_{2r})\}_{r \in A}} for all {i,j}. Thus choosing {\{z_r(i,j)\}_{r \in A}}‘s amounts to choosing a point {Z(i,j) \in K}. Equivalently, choosing a point {Z(i,j) \in K} gives rise to a set of {\{z_r(i,j)\}_{r \in A}}‘s. For convenience, assume that {Z(i,j)} is in the strict interior of {K} for all {(i,j)} and that {K} is full-dimensional. This avoids having to deal with secondary arguments that obscure the main idea.

Recall that {w^*_{pq} > 0} implies, by (7), that {Z_1(p,q) < Z_1(p-1,q)}, while {u^*_{ps} > 0} implies that {Z_1(p,s) > Z_1(p-1,s)}. Choose {\delta > 0} no larger than {\min\{w^*_{pq}, u^*_{ps}\}} and small enough that the shifts below stay within {K}. Take all points {\{Z(i,q)\}_{i \geq p}} and shift them horizontally to the right by {\delta}. Call these new points {\{Z'(i,q)\}_{i \geq p}}. Observe that {Z'(i,q) \in K} for all {i \geq p}. Next, take all points {\{Z(i,s)\}_{i \geq p}} and shift them horizontally to the left by {\delta} to form new points {\{Z'(i,s)\}_{i \geq p}}. These points are also in {K}. Leave all other points unchanged.

Because the vertical coordinates of all points were left unchanged, (8) and (10) are satisfied by this choice of points. Because {\{Z(i,q)\}_{i \geq p}} and {\{Z(i,s)\}_{i \geq p}} were shifted in opposite directions along the horizontal, (9) is still true. Finally, because all points in {\{Z(i,q)\}_{i \geq p}} and {\{Z(i,s)\}_{i \geq p}} were shifted by the same amount, the differences in (7) are unchanged for {i > p}; at {i = p} the constraints hold after adjusting the slack variables, as the computation below verifies.
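To spell out the bookkeeping (primes denote the shifted solution):

\displaystyle \sum_j Z'_1(i,j) = \sum_j Z_1(i,j) + \delta - \delta = mQ_1(i)\,\, \forall\, i \geq p,

\displaystyle Z'_1(p,q) - Z'_1(p-1,q) = Z_1(p,q) - Z_1(p-1,q) + \delta = -(w^*_{pq} - \delta),

so (9) is untouched, and (7) at {(p,q)} holds with the smaller slack {w^*_{pq} - \delta \geq 0}; the symmetric computation at {(p,s)} replaces {u^*_{ps}} by {u^*_{ps} - \delta \geq 0}. Rows {i < p} are untouched.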

The leftward shift of {Z(p,s)} reduces {u_{ps}} while the rightward shift of {Z(p,q)} reduces {w_{pq}}. Thus we get a new feasible solution with a lower objective function value, contradicting the optimality of {(z^*, w^*, h^*, u^*, v^*)}.

If {Z(p,q)} and {Z(p,s)} are not in the interior of {K} but on its boundary, then horizontal shifts alone may place them outside of {K}. In the case of {Z(p,q)} this can only happen if {Z_2(p,q) > Z_2(p-1,q)}. In that case, shift {Z(p,q)} to the right by {\delta} as before, and then downwards by the same amount. This downward shift must be matched by a corresponding upward shift of some other point {Z(h,q)} in the same column, so that (10) continues to hold. Similarly with {Z(p,s)}.

Thanks to Alexey Kushnir and Ahmad Peivandi for comments.

Every finite-horizon game in extensive form in which the number of actions at every node is finite has a subgame-perfect equilibrium. Indeed, consider a smallest subgame, that is, a subgame that contains no proper subgames. By Nash’s theorem this subgame has a Nash equilibrium in mixed strategies. Since the subgame contains no proper subgame, this Nash equilibrium is a subgame-perfect equilibrium of the subgame. We can then replace the whole subgame by a single node with terminal payoff equal to the equilibrium payoff of the subgame, and continue inductively to construct a subgame-perfect equilibrium (in mixed strategies) of the whole game.
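Here is a quick illustration of the induction, as a sketch only: it specializes to perfect-information games, where every smallest subgame has a single decision maker, so the mixed-equilibrium step reduces to picking a best action. The tree encoding and the example game are invented.

```python
# Subgame-perfect equilibrium by backward induction, specialized to
# perfect-information games (a sketch; encoding and example invented).
# Terminal node: ('T', payoffs). Decision node: ('D', player, {action: child}).

def backward_induction(node, history=()):
    """Return (payoff vector, {history: chosen action}) for the subtree at node."""
    if node[0] == 'T':
        return node[1], {}
    _, player, children = node
    best_action, best_value, strategy = None, None, {}
    for action, child in children.items():
        value, sub = backward_induction(child, history + (action,))
        strategy.update(sub)
        # The mover at this node picks the action maximizing her own payoff.
        if best_value is None or value[player] > best_value[player]:
            best_action, best_value = action, value
    strategy[history] = best_action
    return best_value, strategy

# Player 0 moves first, then player 1; payoffs are (player 0, player 1).
game = ('D', 0, {
    'L': ('D', 1, {'l': ('T', (2, 1)), 'r': ('T', (0, 0))}),
    'R': ('D', 1, {'l': ('T', (1, 2)), 'r': ('T', (3, 0))}),
})
value, strategy = backward_induction(game)
print(value)     # equilibrium payoffs: (2, 1)
print(strategy)  # action chosen after each history
```

Each recursive call solves a smallest remaining subgame and, in effect, replaces it by a terminal node carrying its equilibrium payoff, exactly as in the inductive argument above.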

When uncountably many actions are available at some nodes, a subgame-perfect equilibrium need not exist (Harris, Reny, and Robson, 1995).

What happens in infinite-horizon games? When the horizon is infinite, a Nash equilibrium need not exist. Indeed, consider a one-player infinite-horizon game in which the player, Alice, has two possible actions, A and B, at every stage. Alice’s payoff is 1-(1/n), where n is the first stage in which she chooses the action B, provided such a stage exists. If Alice never chooses the action B, her payoff is 0. In this game Alice would like to choose the action B for the first time as late as possible, yet whenever she chooses B, she would have been better off waiting one more stage. Therefore Alice has no optimal strategy, and there is no equilibrium.

Plainly this toy game does have an ε-equilibrium: choose B for the first time at stage ⌈1/ε⌉.
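For completeness, the one-line check that this is an ε-equilibrium: stopping at stage ⌈1/ε⌉ yields payoff 1 - 1/⌈1/ε⌉ ≥ 1 - ε, within ε of the supremum payoff 1, which no strategy attains.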

Does there exist a subgame-perfect ε-equilibrium in every infinite-horizon perfect-information extensive-form game? It turns out that the answer is negative. The following example, to appear in a new paper by János Flesch, Jeroen Kuipers, Ayala Mashiah-Yaakovi, Gijs Schoenmakers, Eran Shmaya, your humble servant, and Koos Vrieze, proves this point.

Alice and Bob play an infinite-stage game. In each stage one of them is the active player who chooses the action (and the other does nothing). The active player has two actions, A and B, that correspond to the identity of the next active player. That is, if the current active player chooses action A, then Alice is the active player in the next stage, while if the current active player chooses action B, then Bob is the active player in the next stage.

What are the payoffs?

  • If there is a stage t such that, from stage t and on, Alice is the active player, then the payoff to Alice is -1 and Bob’s payoff is 2. In such a case we say that the game absorbs with Alice.
  • If there is a stage t such that, from stage t and on, Bob is the active player, then the payoff to Alice is -2 and Bob’s payoff is 1. In such a case we say that the game absorbs with Bob.
  • Otherwise the payoff to both is 0.

Let us argue that in this game there is no subgame-perfect ε-equilibrium, provided ε is sufficiently small. So fix ε>0 sufficiently small and suppose by contradiction that there is a subgame-perfect ε-equilibrium.

In any subgame in which Bob is the first active player he can guarantee 1 by always choosing B. In particular, Bob’s payoff in this subgame must be at least 1-ε.

Fix now a subgame in which Alice is the first active player. Can it be that the play absorbs with Alice with probability less than ε? If this happens, then, since Bob’s payoff is at least 1-ε, the probability that the play absorbs with Bob is at least 1-3ε. But then Alice’s payoff is way below -1, which she can guarantee by always playing A. Thus, in any subgame in which Alice is the first active player, play absorbs with Alice with probability at least ε.
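To spell out the arithmetic behind these two steps: Bob’s payoff equals 2·P(absorb with Alice) + 1·P(absorb with Bob), so

1-ε ≤ 2·P(absorb with Alice) + P(absorb with Bob) ≤ 2ε + P(absorb with Bob),

which gives P(absorb with Bob) ≥ 1-3ε. Alice’s payoff is then at most (-2)(1-3ε) = -2+6ε, which is below -1 whenever ε < 1/6, while always playing A guarantees her exactly -1.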

By Lévy’s 0-1 law, this conclusion implies that with probability 1 the play absorbs with Alice. This in turn implies that play will not absorb with Bob: he will never start an infinite sequence of B’s. But then Alice has no incentive to play A: she will always play B and get 0.

This example shows that, in contrast with the well-beloved concept of ε-equilibrium, we do not yet have a good concept of subgame-perfect ε-equilibrium for infinite-horizon games. In another post I will comment on the concept of trembling-hand equilibrium in infinite-horizon perfect-information games.

In Israel, when a graduate student completes writing his thesis, the thesis is sent to several referees, and based on their reports the university decides whether to grant the Ph.D. degree to the student.

France has a different system. Two referees provide reports that determine whether the student is allowed to defend his thesis. During the defense, a jury made up of three judges, the two referees, and the Ph.D. adviser listens to the student’s talk and decides whether or not to grant the Ph.D. degree. Given that the student’s family makes up the majority of the audience, and given that they have already prepared a banquet in the adjacent room, the outcome is pretty obvious. I guess that 100 years ago, when commuting was more difficult and the student’s family was not in the audience, the outcome varied more than it does today.


The “polymath project” is a blog where serious unsolved problems in math are attacked in a massively collaborative way. A problem is posted and hundreds of users suggest lines of attack. Of course, a lot of the progress is made by a few superstars, but it is still a cool idea. The cynical economist in me must point out that while the supposed m.o. of the project is to harness the beauty of collaboration and cooperation in the age of the web, surely the fuel behind the engine is the competitive instincts of the participants. Nothing wrong with that, though; if not for the competitive drive we might still all live in the jungle.

Also, once a year they run a “mini-polymath” project where they attack a problem from the International Math Olympiad. Of course this is very different from an unsolved problem; everyone knows there is a solution that can reasonably be found by a single person in an hour or two. Still a fun idea. This year’s problem, posted yesterday, is phrased as a game. I’ve played with it for a few minutes and I suspect that knowledge of specific theorems or concepts in game theory will not be useful, though of course the habit of thinking strategically will. (Rubinstein would say that this is true of most real-life applications of game theory as well.) Try your hand, or look at the comments, which surely have spoilers by now as it has been up for about a day.

BusinessInsider reports that Paul Krugman sees two possible outcomes to the Euro crisis, both of which the Germans currently consider unacceptable:

“I say it’s 50/50 … Either the Germans have to accept something they consider unacceptable, or they have to accept something, the breakup of the euro, that they consider something unacceptable.”

Read the original article here. I still have to fully understand and digest the game-theoretic implications of Krugman’s point of view.

This column by David Leonhardt cites the recent Supreme Court ruling as an example showing that prediction markets aren’t so great. After all, Intrade was putting the chances of the individual mandate being found unconstitutional at 75%.

But concluding anything from this about the prediction market being “wrong” is, well, wrong. If *everything* that Intrade said would happen with probability 75% indeed happened, *that* would be a clear sign of a market inefficiency. If only half of such events happened, that would constitute an inefficiency also. But a single event with market probability .75 failing to happen is no more a sign that the market is inefficient or “wrong” than the Yankees losing a game to the Royals, however much more important the event may be.
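To put a number on that, here is a back-of-the-envelope calibration check (a sketch, assuming scipy is available; the event counts are invented). For a basket of events the market priced at 75%, a binomial test quantifies how surprising a given number of hits would be:

```python
# How surprising would k hits among n events priced at 0.75 be?
# A calibration sketch; the (n, hits) pairs below are invented.
from scipy.stats import binomtest

for n, hits in [(1, 0), (20, 15), (20, 20), (20, 10)]:
    # Two-sided test of H0: each event occurs independently w.p. 0.75.
    pvalue = binomtest(hits, n, p=0.75).pvalue
    print(f"{hits:2d}/{n:2d} events occurred -> p-value {pvalue:.3f}")
```

The single miss (0/1) gives a p-value of 0.25, i.e., no evidence of anything, while 20 hits out of 20, the "everything happened" scenario, would be hard to square with honest 75% odds.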

What should we expect from Intrade? Barring insider trading, we can expect a reasonably efficient aggregation of publicly available information. If there is little public information, we will see probabilities that tend more towards .5 than to the extremes, and they will often be “wrong” if one defines this as being on the wrong side of .5, but non-extreme predictions are supposed to be “wrong” a substantial minority of the time. Of course, if you were able to seduce a Supreme Court clerk, you might have been able to make a better prediction. This doesn’t expose a market flaw any more than the potential profits from insider trading expose a flaw in the conventional stock market.

Roberts’ vote was no more predictable to an outside observer than a baseball pitcher’s unexpectedly good or bad game, however many reasons we may find for it ex post. Unpredictable things happen. This is why the probability was .75 and not 1. Many pundits acted as if striking down the law was near-certain, so I would say the market showed some wisdom in placing the probability at only .75.
