
Update (May 2017): McLennan and Tourky seem to be the first to make the argument in this post (link to their paper).

 

They say that when Alfred Tarski came up with his theorem that the axiom of choice is equivalent to the statement that, for every infinite set {A}, {A} and {A\times A} have the same cardinality, he first tried to publish it in the French PNAS. Both venerable referees rejected the paper: Fréchet argued there is no novelty in an equivalence between two well-known theorems; Lebesgue argued that there is no interest in an equivalence between two false statements. I don't know if this ever happened but it's a cool story. I like to think about it every time a paper of mine is rejected and the referees contradict each other.

Back to game theory, one often hears that the existence of Nash Equilibrium is equivalent to Brouwer's fixed point theorem. Of course we all know that Brouwer implies Nash, but the other direction is trickier and less well known. I heard a satisfying argument for the first time a couple of months ago from Rida. I don't know whether this is a folk theorem or somebody's theorem, but it is pretty awesome and should appear in every game theory textbook.

So, assume Nash's Theorem and let {X} be a compact convex set in {\mathbf{R}^n} and {f:X\rightarrow X} be a continuous function. We claim that {f} has a fixed point. Indeed, consider the two-player normal-form game in which the set of pure strategies of every player is {X}, and the payoffs under strategy profile {(x,y)\in X^2} are {-\|x-y\|^2} for player I and {-\|f(x)-y\|^2} for player II. Since strategy sets are compact and the payoff functions are continuous, the game has an equilibrium in mixed strategies. In fact, the equilibrium strategies must be pure. (Indeed, for every mixed strategy {\mu} of player II, player I has a unique best response, the pure strategy concentrated on the barycenter of {\mu}; and once player I plays a pure strategy {x}, player II's unique best response is the pure strategy concentrated on {f(x)}.) But if {(x,y)} is a pure equilibrium then it is immediate that {x=y=f(x)}.
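For completeness, here is the standard computation behind the barycenter claim (my own filling-in, not part of the original argument). Write {b(\mu)=\mathop{\mathbb E}_{y\sim\mu}[y]} for the barycenter of {\mu}. Then for every {x\in X},

\displaystyle \mathop{\mathbb E}_{y\sim\mu}\|x-y\|^2=\|x-b(\mu)\|^2+\mathop{\mathbb E}_{y\sim\mu}\|y-b(\mu)\|^2,

since the cross term vanishes, so player I's expected payoff {-\mathop{\mathbb E}_{y\sim\mu}\|x-y\|^2} is uniquely maximized at {x=b(\mu)}.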

Update I should add that I believe that the transition from existence of mixed Nash Equilibrium in games with finite strategy sets to existence of mixed Nash Equilibrium in games with compact strategy sets and continuous payoffs is not hard. In the case of the game that I defined above, if {\{x_1,x_2,\dots\}} is a dense subset of {X} and {(\mu_n,\nu_n)\in \Delta(X)\times\Delta(X)} is a mixed equilibrium profile in the finite game with the same payoff functions and in which both players are restricted to the pure strategy set {\{x_1,\dots,x_n\}}, then an accumulation point of the sequence {\{(\mu_n,\nu_n)\}_{n\geq 1}} in the weak{^\ast} topology is a mixed strategy equilibrium in the original game.
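To spell out the last step (a sketch of my own; the post leaves it implicit): let {(\mu,\nu)} be a weak{^\ast} limit of a subsequence {(\mu_{n_k},\nu_{n_k})}, and write {u_2(\mu,\nu)=-\int\!\!\int\|f(x)-y\|^2\,\mu(dx)\,\nu(dy)} for player II's expected payoff. Since the integrand is bounded and continuous, {u_2} is weak{^\ast} continuous, so for every grid point {x_j} the finite-game equilibrium inequality

\displaystyle u_2(\mu_{n_k},\nu_{n_k})\geq u_2(\mu_{n_k},\delta_{x_j}),\qquad j\leq n_k,

survives in the limit: {u_2(\mu,\nu)\geq u_2(\mu,\delta_{x_j})}. Letting {x_j\rightarrow y} along the dense set gives {u_2(\mu,\nu)\geq u_2(\mu,\delta_{y})} for every {y\in X}, so player II has no profitable pure deviation; the same argument applies to player I.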

 

Department of self-promotion: sequential tests, Blackwell games and the axiom of determinacy.

Links to part 1 and part 2

Recap: At every day {n} an outcome {s_n\in \{0,1\}} is realized (`{1}‘ means a rainy day, `{0}‘ a non-rainy day). An expert claims to know the distribution of the stochastic process that generates the sequence {s_0,s_1,\dots} of outcomes. A sequential test is given by a Borel set {R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{\mathbb N}} of infinite sequences of predictions and outcomes. An expert who delivers forecast {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} fails on a realization {x=(s_0,s_1,\dots)\in X=\{0,1\}^{\mathbb N}} if {\left(p_0,s_0,p_1,s_1,\dots\right)\in R} where {p_k=f(s_0,\dots,s_{k-1})} is the prediction made by {f} about {s_k} given the previous outcomes {s_0,\dots,s_{k-1}}.
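To make the bookkeeping concrete, here is a tiny Python sketch (my own illustration; the names are invented) of the sequence {\left(p_0,s_0,p_1,s_1,\dots\right)} that a sequential test inspects along a realized path:

```python
# Illustrative sketch (names are mine, not from the post): given a forecast f,
# which maps a finite history of outcomes to a probability of rain tomorrow,
# compute the interleaved sequence (p_0, s_0, ..., p_{n-1}, s_{n-1}) that a
# sequential test inspects along the realized path.

def predictions_along_path(f, outcomes):
    """Return [p_0, s_0, p_1, s_1, ...] where p_k = f(s_0, ..., s_{k-1})."""
    record = []
    for k, s in enumerate(outcomes):
        p = f(tuple(outcomes[:k]))   # prediction about day k, made before s_k is seen
        record += [p, s]
    return record

# Example forecast: probability 0.7 of rain after a rainy day, 0.2 otherwise.
f = lambda history: 0.7 if history and history[-1] == 1 else 0.2
print(predictions_along_path(f, [1, 1, 0, 1]))
# -> [0.2, 1, 0.7, 1, 0.7, 0, 0.2, 1]
```

A sequential test then passes or fails the forecast depending only on whether this interleaved sequence (continued to infinity) lands in {R}.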

Let me remind you what I mean by `a test that does not reject the truth with probability {1-\epsilon}‘. I am going to write an equivalent definition to the one I used up to now, this time using the language of stochastic processes.

Definition 1 The test does not reject the truth with probability {1-\epsilon} if for every {\{0,1\}}-valued stochastic process {S_0,S_1,\dots} one has

\displaystyle \mathop{\mathbb P}\bigl(\left(P_0,S_0,P_1,S_1,\dots\right)\in R\bigr)<\epsilon\ \ \ \ \ (1)

where

\displaystyle P_n=\mathop{\mathbb P}\left(S_n=1\left|S_0,\dots,S_{n-1}\right.\right)\ \ \ \ \ (2)
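For instance (an illustration of my own, not from the post): if the process is i.i.d. with {\mathop{\mathbb P}\left(S_n=1\right)=p} for all {n}, then {P_n\equiv p}, and (1) says that

\displaystyle \mathop{\mathbb P}\bigl(\left(p,S_0,p,S_1,\dots\right)\in R\bigr)<\epsilon,

i.e. the constant forecast {p} fails with probability less than {\epsilon} when the coin really has bias {p}.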

We are going to prove the following theorem:

Theorem 2 Consider a sequential test that does not reject the true expert with probability {1-\epsilon}. Then for every {\delta>0} there exists a {\zeta\in\Delta({\mathcal F})} such that

\displaystyle \zeta\bigl(\left\{f\,|\,f\text{ fails on }x\right\}\bigr)<\epsilon+\delta\ \ \ \ \ (3)

for every {x\in X}.

So a charlatan, who doesn't know anything about the true distribution of the process, can randomize a forecast according to {\zeta} and pass the test with probability at least {1-\epsilon-\delta}, regardless of the realized sequence of outcomes.

— Discussion —

Before we delve into the proof of the theorem, a couple of words about where we are. Recall that a forecast {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} specifies the Expert's prediction about rain tomorrow after every possible history {\sigma=(s_0,\dots,s_{n-1})}. We denote by {{\mathcal F}} the set of all such forecasts. The most general tests are given by a function {T:{\mathcal F}\rightarrow 2^X}, which specifies for every forecast {f} the set of realizations {x\in X=\{0,1\}^{\mathbb N}} over which the forecast {f} fails. Since {X} is infinite we know that there exist tests that pass a true expert and are not manipulable by a strategic charlatan.

Sequential tests have the additional property that the test's verdict depends only on predictions made by {f} along the realized path: When deciding whether a forecast {f} passes or fails when the realization is {x=(s_0,s_1,\dots)}, the test only considers the predictions {p_k=f(s_0,\dots,s_{k-1})} made by {f} along {x}. We also say that the test does not depend on counter-factual predictions, i.e. predictions about the probability of a rainy day after histories that never happen. It seems that counter-factual predictions would be irrelevant to testing anyway, but, as the theorem shows, if the test does not use counter-factual predictions then it is manipulable.

One situation in which sequential tests are the only available tests is when, instead of providing his entire forecast {f} before any outcome is realized, at every day {n} the expert only provides his prediction {p_n} about the outcome {s_n} of day {n} just before it is realized. At infinity, all the information available to the tester is the sequence {(p_0,s_0,p_1,s_1,\dots)} of predictions and realized outcomes.

— Sketch of Proof —

We can transform the expert's story into a two-player normal form zero-sum game as we did before: Nature chooses a realization {x\in X} and Expert chooses a forecast {f\in{\mathcal F}}. Then Expert pays Nature {1} if {f} fails on {x} and {0} otherwise. The fact that the test does not reject the true expert translates to the fact that the maximin of the game is small. If we knew that the minimax were also small, then an optimal mixed strategy {\zeta} for the Expert would satisfy (3). We only need to prove the existence of the value, or, as game theorists say, that the game is determined.
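To make `small' precise (this is just the last two sentences restated in symbols, not new content): the maximin condition we get from Definition 1 is

\displaystyle \sup_{\mu\in\Delta(X)}\,\inf_{f\in{\mathcal F}}\,\mu\bigl(\left\{x\,|\,f\text{ fails on }x\right\}\bigr)\leq\epsilon,

because against every {\mu} the Expert can respond with the forecast induced by {\mu}; what (3) requires is the minimax condition

\displaystyle \inf_{\zeta\in\Delta({\mathcal F})}\,\sup_{x\in X}\,\zeta\bigl(\left\{f\,|\,f\text{ fails on }x\right\}\bigr)\leq\epsilon,

up to the {\delta} slack, and the two coincide when the game has a value.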

Unfortunately, this time we cannot use Fan's Theorem, since we made no topological assumption about the set {R}, so there is no hope of getting semi-continuity of the payoff function. Indeed, as we shall see in a moment, the normal form representation misses an important part of the expert's story. Instead of using a normal form game, we are going to write the game in extensive form. I will call this game {\Gamma}.

  1. The game is played in stages {n=0,1,\dots}.
  2. At stage {n} Nature chooses an outcome {s_n} and Expert chooses a prediction {p_n\in [0,1]} simultaneously and independently.
  3. Nature does not monitor past actions of Expert.
  4. Expert monitors past actions {s_0,\dots,s_{n-1}} of Nature.
  5. At infinity, Expert pays Nature {1} if {\left(p_0,s_0,p_1,s_1,\dots\right)\in R} and {0} otherwise.

Now I am going to assume that you are familiar with the concept of a strategy in an extensive form game, and are aware of Kuhn's Theorem about the equivalence between behavioral strategies and mixtures of pure strategies (I will make implicit use of both directions of Kuhn's Theorem in what follows). We can then look at the normal form representation of this game, in which the players choose pure strategies. A moment's thought will convince you that this is exactly the game from the previous paragraph: Nature's set of pure strategies is {X}, Expert's set of pure strategies is {{\mathcal F}}, and the payoff for a strategy profile {(x,f)} is {\mathbf{1}_{T(f)}(x)}. So far no real gain. Extensive form games in which one of the players doesn't monitor the opponent's actions need not be determined. In order to get a game with a value we are going to twist the game {\Gamma} and allow Nature to observe past actions of the Expert player. This makes life more difficult for the Expert. Up to a minor inaccuracy which I will comment on later, the resulting game is what's called a Blackwell game, and it admits a value by a seminal theorem of Donald Martin.

Here is the game after the twist. I call this game {\Gamma^\ast}.

  1. The game is played in stages {n=0,1,\dots}.
  2. At stage {n} Nature chooses an outcome {s_n} and Expert chooses a prediction {p_n\in [0,1]} simultaneously and independently.
  3. Each player monitors past actions of the opponent.
  4. At infinity, Expert pays Nature {1} if {\left(p_0,s_0,p_1,s_1,\dots\right)\in R} and {0} otherwise.

Now if you internalized the method of proving manipulability that I was advocating in the previous two episodes, you know what's left to prove: that the maximin of {\Gamma^\ast} is small, i.e. that, for every strategy of Nature, Expert has a response that makes the payoff at most {\epsilon}. We know this is true for the game {\Gamma} but in {\Gamma^\ast} Nature is more powerful.

Here is the most important insight of the proof: The fact that an expert who knows the distribution of the process can somehow pass the test {T} implies that the maximin in {\Gamma} is small, but this fact alone doesn’t say anything about the maximin of {\Gamma^\ast}. To show that the maximin of {\Gamma^\ast} is also small we will use the fact that the way such an expert passes the test {T} is by providing the correct forecast. Until now the distinction was not really important to us. Now it comes into play.

Let {g:\left([0,1]\times \{0,1\}\right)^{<{\mathbb N}}\rightarrow [0,1]} be a behavioral strategy of Nature in {\Gamma^\ast}, i.e. a contingent plan that specifies the probability that Nature plays {1} after every finite history of predictions and outcomes. Let {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} be the pure strategy of Expert in {\Gamma^\ast} that is given by

\displaystyle \begin{array}{rcl} &&f(s_0,s_1,\dots,s_{n-1})=\\&&g\bigl(f(),\, s_0,\, f\left(s_0\right),\, s_1,\, \dots,\,f\left(s_0,s_1,\dots,s_{n-2}\right),\,s_{n-1}\bigr)\end{array}

So the pure action taken by the Expert player at day {n} is the mixed action that Nature is going to take at day {n} according to her strategy {g}. Now assume that Nature follows {g} and Expert follows {f}. Let {P_n} and {S_n} be the random variables representing the actions taken by Expert and Nature at day {n}. Then the stochastic process {P_0,S_0,P_1,\dots} satisfies (2). Therefore from (1) we get that the expected payoff when Nature plays {g} and Expert plays {f} is indeed smaller than {\epsilon}.
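Here is a small Python sketch of the mirroring construction (purely illustrative and mine, not from the post; the function names are invented): Nature's behavioral strategy {g} is a function of the interleaved history, and the Expert's pure strategy simply reconstructs and repeats Nature's mixed action each day.

```python
import random

# Illustrative sketch: g is Nature's behavioral strategy in Gamma*, mapping a
# finite history (p_0, s_0, ..., p_{n-1}, s_{n-1}) of predictions and outcomes
# to the probability of rain on day n.  The Expert's mirroring pure strategy f
# announces on day n exactly that probability, which it can reconstruct from
# the past outcomes alone because it has been mirroring g all along.

def mirror_prediction(g, outcomes):
    """The prediction f(s_0, ..., s_{n-1}) defined by the recursion in the post."""
    history = []                      # interleaved (p_0, s_0, p_1, s_1, ...)
    for s in outcomes:
        p = g(tuple(history))         # the prediction f made on that earlier day
        history += [p, s]
    return g(tuple(history))          # today's prediction p_n

def play(g, horizon, rng=random.Random(0)):
    """Let Nature follow g and Expert follow its mirror; return (p_0, s_0, p_1, s_1, ...)."""
    outcomes, record = [], []
    for n in range(horizon):
        p = mirror_prediction(g, outcomes)               # Expert's action P_n
        s = 1 if rng.random() < g(tuple(record)) else 0  # Nature's action S_n
        record += [p, s]
        outcomes.append(s)
    return record

# Example: Nature rains with probability 0.8 after a rainy day and 0.3 otherwise.
g = lambda h: 0.8 if h and h[-1] == 1 else 0.3
print(play(g, horizon=5))
```

By construction the prediction made on day {n} coincides with the probability with which Nature plays {1} on day {n}, which is exactly condition (2).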

Now for the minor inaccuracy that I mentioned: For Martin’s Theorem we need the set of actions at every stage to be finite. We can handle this obstacle by restricting Expert’s action at every stage to a grid and applying the coupling argument.

— Non-Borel Tests —

What about pathological sequential tests that are given by a non-Borel set {R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{\mathbb N}} ? Well, if, like me, you find it preposterously impossible to choose one sock from each of infinitely many pairs of socks, then perhaps you live in the AD paradise. Here every set is measurable, Blackwell Games are determined even when the payoff function is not Borel, and Theorem 2 is true without the assumption that {R} is Borel. See, the AD universe is a paradise for charlatans, since they can do as well as the true experts.

If, on the other hand, you subscribe to the axiom of choice, then you have a non-manipulable test:

Theorem 3 There exists a sequential test with a non-Borel set {R} that does not reject the truth with probability {1} and such that for every {\zeta\in\Delta({\mathcal F})} there exists some realization {x=(s_0,s_1,\dots)} such that

\displaystyle \zeta\bigl(\left\{f\,|\,f\text{ fails on }x\right\}\bigr)=1.

— Summary —

If you plan to remember one conclusion from my last three posts, I suggest you pick this: There exist non-manipulable tests, but they must rely on counter-factual predictions, or be extremely pathological.

Muchas gracias to everyone who read to this point. Did I mention that I have a paper about this stuff?

The economics profession has been for a long time divided into two camps — ADs and ACs.
In the AD camp, there is me, claiming that a bit of exciting bizarreness brought to game theory by the axiom of choice is not worth the headache of checking measurability of strategies and the embarrassment caused by undetermined Gale-Stewart games; the Axiom of Determinacy, on the other hand, offers a paradise in which …
