
Department of self-promotion: sequential tests, Blackwell games and the axiom of determinacy.

Links to part 1 and part 2

Recap: At every day {n} an outcome {s_n\in \{0,1\}} is realized (`{1}‘ means a rainy day, `{0}‘ a non-rainy day). An expert claims to know the distribution of the stochastic process that generates the sequence {s_0,s_1,\dots} of outcomes. A sequential test is given by a Borel set {R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{\mathbb N}} of infinite sequences of predictions and outcomes. An expert who delivers forecast {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} fails on a realization {x=(s_0,s_1,\dots)\in X=\{0,1\}^{\mathbb N}} if {\left(p_0,s_0,p_1,s_1,\dots\right)\in R} where {p_k=f(s_0,\dots,s_{k-1})} is the prediction made by {f} about {s_k} given the previous outcomes {s_0,\dots,s_{k-1}}.
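If you prefer to see the definition in code, here is a minimal Python sketch (the names are mine and purely illustrative) of the object a sequential test looks at: the interleaved record of predictions and outcomes along the realized path. A program can of course only inspect a finite prefix of the infinite sequence that the set {R} is about.

    from typing import Callable, List, Sequence, Tuple

    Forecast = Callable[[Sequence[int]], float]   # f : {0,1}^{<N} -> [0,1]

    def path_record(f: Forecast, x: Sequence[int]) -> List[Tuple[float, int]]:
        """Return [(p_0, s_0), (p_1, s_1), ...] with p_k = f(s_0, ..., s_{k-1})."""
        record = []
        for k, s_k in enumerate(x):
            p_k = f(x[:k])                        # prediction about day k given the past
            record.append((p_k, s_k))
        return record

    # Example: the forecast that always predicts rain with probability 1/2,
    # evaluated on the first ten days of some realization.
    print(path_record(lambda history: 0.5, [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]))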

Let me remind you what I mean by `a test that does not reject the truth with probability {1-\epsilon}‘. This time I am going to write a definition equivalent to the one I have used so far, in the language of stochastic processes.

Definition 1 The test does not reject the truth with probability {1-\epsilon} if for every {\{0,1\}}-valued stochastic process {S_0,S_1,\dots} one has

\displaystyle \mathop{\mathbb P}\bigl(\left(P_0,S_0,P_1,S_1,\dots\right)\in R\bigr)<\epsilon\ \ \ \ \ (1)

where

\displaystyle P_n=\mathop{\mathbb P}\left(S_n=1\left|S_0,\dots,S_{n-1}\right.\right)\ \ \ \ \ (2)

We are going to prove the following theorem:

Theorem 2 Consider a sequential test that does not reject the true expert with probability {1-\epsilon}. Then for every {\delta>0} there exists a {\zeta\in\Delta({\mathcal F})} such that

\displaystyle \zeta\bigl(\left\{f|f\text{ fails on }x\right\}\bigr)<\epsilon+\delta\ \ \ \ \ (3)

for every {x\in X}.

So a charlatan, who doesn’t know anything about the true distribution of the process, can randomize a forecast according to {\zeta} and pass the test with high probability regardless of the actual distribution.

— Discussion —

Before we delve into the proof of the theorem, a couple of words about where we are. Recall that a forecast {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} specifies the Expert’s prediction about rain tomorrow after every possible history {\sigma=(s_0,\dots,s_{n-1})}. We denote by {{\mathcal F}} the set of all such forecasts. The most general tests are given by a function {T:{\mathcal F}\rightarrow 2^X}, which specifies for every such {f} the set of realizations {x\in X=\{0,1\}^{\mathbb N}} on which the forecast {f} fails. Since {X} is infinite, we know that there exist tests that pass a true expert and are not manipulable by a strategic charlatan.

Sequential tests have the additional property that the test’s verdict depends only on the predictions made by {f} along the realized path: when deciding whether a forecast {f} passes or fails on the realization {x=(s_0,s_1,\dots)}, the test only considers the predictions {p_k=f(s_0,\dots,s_{k-1})} made by {f} along {x}. We also say that the test does not depend on counter-factual predictions, i.e. predictions about the probability of a rainy day after histories that never happened. It seems that counter-factual predictions would be irrelevant to testing anyway but, as the theorem shows, if the test does not use counter-factual predictions then it is manipulable.

One situation in which sequential tests are the only available tests is when, instead of providing his entire forecast {f} before any outcome is realized, at every day {n} the expert only provides his prediction {p_n} about the outcome {s_n} of day {n} just before it is realized. At infinity, all the information available to the tester is the sequence {(p_0,s_0,p_1,s_1,\dots)} of predictions and realized outcomes.

— Sketch of Proof —

We can transform the expert’s story into a two-player zero-sum game in normal form as we did before: Nature chooses a realization {x\in X} and Expert chooses a forecast {f\in{\mathcal F}}. Then Expert pays Nature {1} if {f} fails on {x} and {0} otherwise. The fact that the test does not reject the true expert translates into the fact that the maximin of the game is small. If we knew that the minimax is also small, then an optimal mixed strategy {\zeta} for the Expert would satisfy (3). We only need to prove the existence of the value, or, as game theorists say, that the game is determined.

Unfortunately, this time we cannot use Fan’s Theorem: since we made no topological assumption about the set {R}, there is no hope of getting semi-continuity of the payoff function. Indeed, as we shall see in a moment, the normal form representation misses an important part of the expert’s story. Instead of using a normal form game, we are going to write the game in extensive form. I will call this game {\Gamma}.

  1. The game is played in stages {n=0,1,\dots}.
  2. At stage {n} Nature chooses an outcome {s_n} and Expert chooses a prediction {p_n\in [0,1]} simultaneously and independently.
  3. Nature does not monitor past actions of Expert.
  4. Expert monitors past actions {s_0,\dots,s_{n-1}} of Nature.
  5. At infinity, Expert pays Nature {1} if {\left(p_0,s_0,p_1,s_1,\dots\right)\in R} and {0} otherwise.

Now I am going to assume that you are familiar with the concept of a strategy in an extensive form game, and are aware of Kuhn’s Theorem about the equivalence between behavioral strategies and mixtures of pure strategies (I will make implicit use of both directions of Kuhn’s Theorem in what follows). We can then look at the normal form representation of this game, in which the players choose pure strategies. A moment’s thought will convince you that this is exactly the game from the previous paragraph: Nature’s set of pure strategies is {X}, Expert’s set of pure strategies is {{\mathcal F}} and the payoff for a strategy profile {(x,f)} is {\mathbf{1}_{T(f)}(x)}. So far no real gain. Extensive form games in which one of the players does not monitor his opponent’s actions need not be determined. In order to get a game with a value we are going to twist the game {\Gamma} and allow Nature to observe past actions of the Expert player. This makes life more difficult for the Expert. Up to a minor inaccuracy which I will comment on later, the resulting game is what’s called a Blackwell game, and it admits a value by a seminal theorem of Donald Martin.

Here is the game after the twist. I call this game {\Gamma^\ast}.

  1. The game is played in stages {n=0,1,\dots}.
  2. At stage {n} Nature chooses an outcome {s_n} and Expert chooses a prediction {p_n\in [0,1]} simultaneously and independently.
  3. Each player monitors past actions of the opponent.
  4. At infinity, Expert pays Nature {1} if {\left(p_0,s_0,p_1,s_1,\dots\right)\in R} and {0} otherwise.

Now if you internalized the method of proving manipulability that I was advocating in the previous two episodes, you know what’s left to prove: that the maximin of {\Gamma^\ast} is small, i.e. that, for every strategy of Nature, Expert has a response that makes the payoff at most {\epsilon}. We know this is true for the game {\Gamma}, but in {\Gamma^\ast} Nature is more powerful.

Here is the most important insight of the proof: The fact that an expert who knows the distribution of the process can somehow pass the test {T} implies that the maximin in {\Gamma} is small, but this fact alone doesn’t say anything about the maximin of {\Gamma^\ast}. To show that the maximin of {\Gamma^\ast} is also small we will use the fact that the way such an expert passes the test {T} is by providing the correct forecast. Until now the distinction was not really important to us. Now it comes into play.

Let {g:\left([0,1]\times \{0,1\}\right)^{<{\mathbb N}}\rightarrow [0,1]} be a behavioral strategy of Nature in {\Gamma^\ast}, i.e. a contingent plan that specifies the probability that Nature plays {1} after every finite history of predictions and outcomes. Let {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} be the pure strategy of Expert in {\Gamma^\ast} that is given by

\displaystyle \begin{array}{rcl} &&f(s_0,s_1,\dots,s_{n-1})=\\&&g\bigl(f(), s_0, f\left(s_0\right), s_1, \dots,f\left(s_0,s_1,\dots,s_{n-2}\right),s_{n-1}\bigr)\end{array}

So the pure action taken by the Expert player at day {n} is the mixed action that Nature is going to take at day {n} according to her strategy {g}. Now assume that Nature follows {g} and Expert follows {f}. Let {P_n} and {S_n} be the random variables representing the actions taken by Expert and Nature at day {n}. Then the stochastic process {P_0,S_0,P_1,\dots} satisfies (2). Therefore from (1) we get that the expected payoff when Nature plays {g} and Expert plays {f} is indeed smaller than {\epsilon}.
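If it helps, here is the same construction as a short Python sketch (the names are mine; `g` takes a finite history of (prediction, outcome) pairs and returns the probability with which Nature plays {1}):

    from typing import Callable, List, Sequence, Tuple

    NatureStrategy = Callable[[Sequence[Tuple[float, int]]], float]

    def copying_forecast(g: NatureStrategy, outcomes: Sequence[int]) -> float:
        """The Expert's pure strategy f: at day n = len(outcomes), announce exactly
        the mixed action g prescribes after the interleaved history of the Expert's
        own past predictions and Nature's past outcomes."""
        history: List[Tuple[float, int]] = []
        for s in outcomes:
            p = g(history)             # the prediction f made on that earlier day
            history.append((p, s))
        return g(history)              # today's prediction = Nature's mixed action today

When Nature indeed follows {g}, the value returned for day {n} is the conditional probability of rain given the past, which is exactly condition (2).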

Now for the minor inaccuracy that I mentioned: for Martin’s Theorem we need the set of actions at every stage to be finite. We can handle this obstacle by restricting Expert’s actions at every stage to a finite grid and applying the coupling argument from the previous episode.

— Non-Borel Tests —

What about pathological sequential tests that are given by a non-Borel set {R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{\mathbb N}} ? Well, if, like me, you find it preposterously impossible to choose one sock from each of infinitely many pairs of socks, then perhaps you live in the AD paradise. Here every set is measurable, Blackwell Games are determined even when the payoff function is not Borel, and Theorem 2 is true without the assumption that {R} is Borel. See, the AD universe is a paradise for charlatans, since they can do as well as the true experts.

If, on the other hand, you subscribe to the axiom of choice, then you have a non-manipulable test:

Theorem 3 There exists a sequential test with a non-Borel set {R} that does not reject the truth with probability {1} and such that for every {\zeta\in\Delta({\mathcal F})} there exists some realization {x=(s_0,s_1,\dots)} such that

\displaystyle \zeta\bigl(\left\{f|f\text{ fails on }x\right\}\bigr)=1.

— Summary —

If you plan to remember one conclusion from my last three posts, I suggest you pick this: There exist non-manipulable tests, but they must rely on counter-factual predictions, or be extremely pathological.

Muchas gracias to everyone who read to this point. Did I mention that I have a paper about this stuff ?

This puzzle, which appeared on the blog associated with the cute webcomic xkcd, would make a good exercise in a game-theory course:

Alice secretly picks two different real numbers by an unknown process and puts them in two (abstract) envelopes.  Bob chooses one of the two envelopes randomly (with a fair coin toss), and shows you the number in that envelope.  You must now guess whether the number in the other, closed envelope is larger or smaller than the one you’ve seen.

Is there a strategy which gives you a better than 50% chance of guessing correctly, no matter what procedure Alice used to pick her numbers?

A good start is to notice that the possibility that Alice may randomize is a red herring. You need a strategy with a greater than 50% chance of winning against any pair she may choose. If you can do this, it takes care of her randomized strategies for free. (A familiar argument to those who enjoy computing values of zero-sum games.)
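(Spoiler alert.) If you want to check a candidate answer numerically, here is a Python sketch of the threshold strategy that is usually offered as a solution: draw a random threshold from any distribution with full support on the reals and guess `larger’ exactly when the number you are shown falls below it. The particular pair and the Gaussian threshold below are arbitrary choices of mine.

    import random

    def guess_hidden_is_larger(shown: float, threshold: float) -> bool:
        """Guess that the hidden number is larger iff the shown one is below the threshold."""
        return shown < threshold

    def winning_frequency(a: float, b: float, trials: int = 200_000) -> float:
        """Empirical probability of guessing correctly against the fixed pair {a, b}."""
        wins = 0
        for _ in range(trials):
            shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
            threshold = random.gauss(0.0, 10.0)     # any full-support distribution works
            if guess_hidden_is_larger(shown, threshold) == (hidden > shown):
                wins += 1
        return wins / trials

    print(winning_frequency(3.0, 7.0))   # roughly 0.57 here; strictly above 1/2 for every pair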

In which I talked about Olszewski and Sandroni’s paper `Manipulability of future independent tests’ and coupling of stochastic processes.

Link to previous episode

Recap Every day {n=0,1,\dots} Nature randomizes the weather {s_n\in\{0,1\}} for that day ({1} for rain, {0} for no rain). An Expert delivers a forecast {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]} which he claims governs Nature’s selection: according to the expert, {f(s_0,\dots,s_{n-1})} is the probability of rain at day {n} when the weather in the previous days was {s_0,\dots,s_{n-1}}. We denote by {{\mathcal F}} the set of all forecasts. We compare the expert’s forecast to the actual infinite realization {x=(s_0,s_1,\dots)\in X=\{0,1\}^{\mathbb N}} of weather. A test function is given by {T:{\mathcal F}\rightarrow 2^X}. An expert who provided a forecast {f} is rejected if {x\in T(f)}. We say that the test does not reject the true expert with probability {1-\epsilon} if {\mu_f(T(f))<\epsilon} for every forecast {f}, where {\mu_f\in\Delta(X)} is the distribution over realizations induced by {f}.

We now look at a special class of tests, which are sequential and also reject the expert in finite time. Such tests are given by a set {R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{<{\mathbb N}}} of finite sequences of predictions and outcomes: let {s_k\in\{0,1\}} be the outcome of day {k} and {p_k=f\left(s_0,\dots,s_{k-1}\right)} be the prediction about day {k} generated by the forecast {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}. Then the expert fails if after some day {n\geq 0} it turns out that {\left(p_0,s_0,\dots,p_n,s_n\right)\in R}. Thus,

\displaystyle T(f)=\bigl\{x=(s_0,s_1,\dots)\in X\,|\,(p_0,s_0,\dots,p_n,s_n)\in R\text{ for some }n\geq 0\bigr\}\ \ \ \ \ (1)

Note that it is possible (for some choices of {R} — see comment below) that the expert is never off the hook — there is always the potential that he will fail the test. (On the other hand, in Sandroni’s Theorem, where the set of realizations is {X=\{0,1\}^n}, there is a final verdict at day {n}: the expert either passed or failed.)

Theorem 1 (Olszewski and Sandroni (2009)) Let {T} be a test of the form (1) that does not reject the true expert with probability {1-\epsilon}. Then for every {\delta>0} there exists {\zeta\in\Delta({\mathcal F})} such that

\displaystyle \zeta\bigl(\left\{\mu|x\in T(\mu)\right\}\bigr)<\epsilon+\delta\ \ \ \ \ (2)

for every {x\in X}.

Again, if the expert randomizes his forecasts {f} according to a distribution {\zeta}, he is unlikely to fail the test regardless of the actual realization {x}. In fact, Olszewski and Sandroni state and prove their theorem under a weaker condition on the test, which they call `future independence’. The proof is the same as the proof given below, so I can skip the definition of future independence without depriving you of any math.

The proof of the theorem follows the idea of the proof of Sandroni’s Theorem for the case of a finite set of realizations. We define a game between two players, Nature and Expert: Nature chooses a realization {x} and Expert chooses a forecast {f}, and the payoff that Expert pays Nature is given by {h(x,f)=\mathbf{1}_{T(f)}(x)}, where {\mathbf{1}_{T(f)}} is the indicator function of {T(f)}. In order to use Fan’s Theorem we will need a small twist on the game: we are going to restrict the Expert’s set of pure strategies so that the prediction {f(s_0,\dots,s_{n-1})} of day {n} lies in some finite grid (it is a good idea at this point to figure out for yourself what goes wrong if we don’t add this restriction).

Fix {\delta>0}. We denote by {{\mathcal F}_\delta} the set of forecasts defined below, in which the expert’s predictions are restricted at every day to a finite grid in {[0,1]}:

\displaystyle {\mathcal F}_\delta = \bigl\{f\in{\mathcal F} \bigl| f(s_0,\dots,s_{n-1}) = \delta\cdot (k/2^{n+1})\text{ for some }k\in {\mathbb N}\bigr\}.
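Any forecast can be pushed into {{\mathcal F}_\delta} by rounding its day-{n} prediction to the nearest grid point; the rounding error is then strictly smaller than {\delta/2^{n+1}}, which is exactly condition (3) of Lemma 2 below. Here is a small Python sketch of the rounding (the function names are mine):

    import math

    def round_to_grid(p: float, n: int, delta: float) -> float:
        """Nearest point of the day-n grid {delta * k / 2**(n+1) : k in N} inside [0, 1].
        The rounding error is strictly less than delta / 2**(n+1)."""
        step = delta / 2 ** (n + 1)
        k = min(round(p / step), math.floor(1.0 / step))   # stay inside [0, 1]
        return k * step

    def restricted_forecast(f, history, delta):
        """The forecast f' in F_delta obtained from f by rounding its prediction for
        day n = len(history) to the day-n grid."""
        return round_to_grid(f(history), len(history), delta)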

The following Lemma, which I will prove later, will imply that the Expert player in the game does not lose much if we restrict his strategy set to {{\mathcal F}_\delta}.

Lemma 2 Let {f,f'\in{\mathcal F}} be such that

\displaystyle |f(s_0,\dots,s_{n-1})-f'(s_0,\dots,s_{n-1})|<\delta/2^{n+1}\ \ \ \ \ (3)

for every {n\in {\mathbb N}} and every {(s_0,\dots,s_{n-1})}. Then {|\mu_f(B)-\mu_{f'}(B)|<\delta} for every Borel set {B\subseteq X}.

— Proof of Theorem —

Consider a two-player zero-sum game between Nature and Expert: Nature chooses a realization {x\in X} and Expert, simultaneously and independently, chooses a forecast {f\in{\mathcal F}_\delta}. Then Expert pays Nature an amount {h(x,f)=\mathbf{1}_{T(f)}(x)}.

We first claim that the maximin of the game is at most {\epsilon+\delta}. Indeed, let {\mu\in\Delta(X)} be a mixed strategy of Nature. Then {\mu=\mu_f} for some {f\in{\mathcal F}}. Let {f'\in{\mathcal F}_\delta} satisfy (3). If Nature plays {\mu} and Expert responds with {f'} then the expected payoff that Expert pays Nature is

\displaystyle \mu(T(f'))=\mu_f(T(f'))\leq\mu_{f'}(T(f'))+\delta\leq \epsilon+\delta,

where the equality follows from the definition of {f}, the first inequality from Lemma 2 and the second inequality from the fact that {T} does not reject the truth with probability {1-\epsilon}.

If we can show that the minimax is smaller than {\epsilon+\delta} then an optimal mixed strategy {\zeta} of the Expert in the game will satisfy (2). All that is left to do is to check that the conditions of Fan’s Theorem are satisfied, so that the minimax equals the maximin. Indeed, the set {{\mathcal F}_\delta} of pure strategies of the Expert is compact as a countable product of finite sets, each equipped with the discrete topology. Moreover, in this topology {f\mapsto h(f,x)} is l.s.c. for every {x\in X} (skip the next paragraph if this is obvious).

Lower semi-continuity means that {\liminf_{m\rightarrow \infty}h(f^m,x)\geq h(f,x)} whenever {f^m\rightarrow f}. For the case of indicator functions the inequality means that if {h(f,x)=1} then {h(f^m,x)=1} for sufficiently large {m}. So let {x=(s_0,s_1,\dots)\in\{0,1\}^{\mathbb N}} and assume that {h(f,x)=1} . Then {(p_0,s_0,\dots,p_{n-1},s_{n-1})\in R} for some {n} where {p_k=f(s_0,\dots,s_{k-1})}. Convergence of {f^m} to {f} in the product topology is pointwise convergence, so {f^m(\sigma)\rightarrow f(\sigma)} for every {\sigma\in\{0,1\}^{<{\mathbb N}}}. Since {f^m\in {\mathcal F}_\delta} for every {m} the last limit means that {f^m(\sigma) = f(\sigma)} for sufficiently large {m}. Take {m} large enough so that the equality is satisfied for every {\sigma} of length {n} or smaller. Then for such {m} the predictions made by {f^m} along {x} equal those made by {f} until day {n}. It follows that {h(f^m,x)=1}, as desired.

— Weather coupling —

The best way I know to prove Lemma 2 is by using coupling. This is a powerful technique to prove relations between two random variables by embedding them in the same probability space.

Imagine that Zeus randomizes the weather in Athens according to {\mu_f} and Baal randomizes the weather in Askelon according to {\mu_{f'}}.

One way Zeus can perform the randomization task is by using a sequence {U_0,U_1,U_2,\dots} of i.i.d. random variables distributed uniformly on {[0,1]}: after he has decided on the weather {s_0,s_1,\dots,s_{n-1}\in\{0,1\}}, Zeus makes the {n}-th day rainy (i.e. decides that {s_n=1}) iff {U_n\leq f(s_0,\dots,s_{n-1})}. Assume now that Baal uses the same technique, only with {f'} instead of {f}, to randomize the weather {t_0,t_1,\dots} in Askelon, and that Baal uses the same sequence {U_0,U_1,\dots} of i.i.d. uniform random variables that Zeus uses.

Then the distribution of the weather in Askelon is {\mu_{f'}}. Moreover because of (3), the weather in Askelon is likely to be the same as the weather in Athens. In fact the probability that the weather will differ at some day is at most {\sum_{n\geq 0}\delta/2^{n+1}=\delta} ! Now {\mu_{f}(B)} is the probability that the realized weather sequence {s_0,s_1,\dots} in Athens is in {B} and {\mu_{f'}(B)} is the probability that the realized weather sequence {t_0,t_1,\dots} in Askelon is in {B}. Since the probability that these sequences differ is smaller than {\delta} it follows that {|\mu_f(B)-\mu_{f'}(B)|<\delta}, as desired.
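If you want to see the coupling in action, here is a small Python simulation (the two forecasts are toy choices of mine satisfying (3) with {\delta=0.1}): Zeus and Baal share the same uniform draws, and the fraction of runs in which the two weather sequences ever differ comes out below {\delta}.

    import random

    def coupled_weather(f, f_prime, horizon):
        """Zeus uses forecast f, Baal uses f_prime, and both use the SAME uniforms U_n."""
        athens, askelon = [], []
        for n in range(horizon):
            u = random.random()                      # the shared U_n
            athens.append(1 if u <= f(athens) else 0)
            askelon.append(1 if u <= f_prime(askelon) else 0)
        return athens, askelon

    delta = 0.1
    f = lambda history: 0.5
    f_prime = lambda history: 0.5 + delta / 2 ** (len(history) + 2)   # within delta/2**(n+1) of f

    trials, horizon, differ = 20_000, 50, 0
    for _ in range(trials):
        a, b = coupled_weather(f, f_prime, horizon)
        differ += (a != b)
    print(differ / trials)   # around delta/2 = 0.05 here, and below delta in any case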

I am visiting the rationality center in the Hebrew University, and I am presenting some papers from the expert testing literature. Here are the lecture notes for the first talk. If you read this and find typos please let me know. The next paragraph contains the background story, and can be safely skipped.

A self-proclaimed expert opens a shop with a sign at the door that says `Here you can buy probabilities’. So the expert is a kind of a fortune-teller, he provides a service, or a product, and the product that the expert provides is a real number: the probability of some event or more generally the distribution of some random variable. You can ask for the probability of rain tomorrow, give the expert some green papers with a picture of George Washington and receive in return a paper with a real number between 0 and 1. The testing literature asks whether you can, after the fact, check the quality of the product you got from the expert, i.e. whether the expert gave you the correct probability or whether he just emptied your pocket for a worthless number.

So, let {X} be a set of realizations. Nature randomizes an element of {X} according to some distribution and an expert claims to know Nature’s distribution. A test is given by a function {T:\Delta(X)\rightarrow 2^X}: the expert delivers a forecast {\mu\in\Delta(X)} and fails if the realization {x} turns out to be in {T(\mu)}. A good test will be such that only `true’ experts, i.e. those who deliver the correct {\mu}, will not fail.

— Manipulability —

I start with Sandroni’s paper (pdf). The following definition formalises the idea that the true expert is unlikely to fail the test.

Definition 1 The test {T} does not reject the truth with probability {1-\epsilon} if {\mu(T(\mu))<\epsilon} for every {\mu\in \Delta(X)}.

If a test does not reject the truth with probability {1-\epsilon} then a true expert, who knows how Nature randomizes the realization, is unlikely to fail the test. Alas, a charlatan who acts strategically is also unlikely to fail:

Theorem 2 (Sandroni (2003)) Let {X} be finite. If {T:\Delta(X)\rightarrow 2^X} is a test that does not reject the true expert with probability {1-\epsilon} then there exists {\zeta\in\Delta\bigl(\Delta(X)\bigr)} such that {\zeta\bigl(\{\mu|x\in T(\mu)\}\bigr)<\epsilon} for every {x\in X}.

So, a charlatan who knows nothing about how Nature chooses {x} can randomize a forecast {\mu} according to {\zeta}, and is unlikely to fail, regardless of the realization. We say that the test is manipulable.

For Sandroni’s Theorem we do not need to assume any structure on {X}. However, the situation we have in mind is that {X=S^n} for some finite set {S} of outcomes. So at every day {0\leq k<n} Nature randomizes an outcome {s_k\in S} and the Expert claims to know the stochastic process that governs Nature’s choices.

Proof: Let {Y\subseteq \Delta(X)} be a finite set such that {T(Y)=T(\Delta(X))} (such a set exists since {X}, and therefore {2^X}, is finite).

Consider the following two-player zero-sum game with the players called Nature and Expert. Nature is the maximizer with pure strategies set {X}, and Expert is the minimizer, with pure strategies set {Y}. If Nature plays {x\in X} and Expert plays {y\in Y} then Expert pays Nature {1} if {x\in T(y)} and {0} otherwise.

By von Neumann’s Minimax Theorem the game admits a value {v} in mixed strategies, so that the maximin equals the minimax. We claim that {v<\epsilon}. Indeed, let {\mu\in\Delta(X)} be a mixed strategy of Nature and let {y\in Y} be such that {T(y)=T(\mu)}. Then the expected payoff that Expert pays Nature if Nature plays {\mu} and Expert plays {y} is {\mu(T(y))=\mu(T(\mu))<\epsilon}, since the test does not reject the true expert with probability {1-\epsilon}.

Now let {\zeta\in\Delta(Y)} be a mixed optimal strategy of the Expert in the game. Then

\displaystyle \zeta\bigl(\{\mu\in Y|x\in T(\mu)\}\bigr)\leq v<\epsilon

for every pure strategy {x} of Nature, since the left hand side is the expected payoff that the Expert pays Nature if he plays {\zeta} and Nature plays {x}. \Box
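Since everything here is finite, the minimax step can even be carried out numerically. Below is a Python sketch (using numpy and scipy, with a toy test of my own invention): the test rejects a forecast unless the realized outcome is the forecast’s modal outcome, so {\mu(T(\mu))=1-\max_x\mu(\{x\})\leq 3/4<\epsilon} for {\epsilon=0.8}, and the four point masses form a finite {Y} with {T(Y)=T(\Delta(X))}. Solving the linear program recovers the charlatan’s {\zeta} (here the uniform mixture) and the value {3/4<\epsilon}.

    import numpy as np
    from scipy.optimize import linprog

    X = list(range(4))      # a toy finite set of realizations
    eps = 0.8

    def T(mu):
        """Reject at x unless x is the forecast's modal outcome (ties -> smallest index).
        Then mu(T(mu)) = 1 - max(mu) <= 3/4 < eps for every mu."""
        return {x for x in X if x != int(np.argmax(mu))}

    Y = [np.eye(len(X))[i] for i in range(len(X))]   # finite Y with T(Y) = T(Delta(X))
    A = np.array([[1.0 if x in T(mu) else 0.0 for mu in Y] for x in X])

    # Expert solves:  minimize v  subject to  A @ zeta <= v,  sum(zeta) = 1,  zeta >= 0.
    m = len(Y)
    res = linprog(c=np.r_[np.zeros(m), 1.0],
                  A_ub=np.c_[A, -np.ones(len(X))], b_ub=np.zeros(len(X)),
                  A_eq=[np.r_[np.ones(m), 0.0]], b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)], method="highs")
    zeta, value = res.x[:m], res.x[-1]
    print(zeta, value)      # the uniform zeta, and a value of 0.75 < eps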

The argument in the proof of Sandroni’s Theorem captures the essence of all the manipulability theorems we will see. We define a zero-sum game between two players, Nature and Expert. The minimax theorem is the core of the proof: the fact that the maximin is smaller than {\epsilon} follows from the assumption that the test is unlikely to reject the true expert. The fact that the minimax is smaller than {\epsilon} implies that the test is manipulable. In the middle, we use the most wonderful miracle that the minimax and maximin are equal.

— Fan’s Theorem —

Here is the more general minimax theorem which we will use later: in a zero-sum game, if one of the players has a compact set of pure strategies, and the payoff to that player is u.s.c. in his own strategy, then the game admits a value in mixed strategies.

Proposition 3 (Ky Fan) Consider a two-player zero-sum game in normal form with pure strategy sets {X,Y} and payoff function {h:X\times Y\rightarrow [0,1]}. If {X} is a compact metrizable space and {h(\cdot,y)} is upper semi-continuous for every {y\in Y} then the game has a value in mixed strategies, i.e.

\displaystyle \sup_{\mu\in \Delta(X)}\inf_{y\in Y}\int h(x,y) \mu(\text{d} x)=\inf_{\zeta\in\Delta_f(Y)}\sup_{x\in X}\int h(x,y)\zeta(\text{d} y).

and all suprema are attained.

The set {Y} is not topologized and {\Delta_f(Y)} is the set of distributions over {Y} with finite support, so the integral on the right side is just a summation. The challenge in using Fan’s Theorem is to find a topology over {X} which is sufficiently weak to make {X} compact and sufficiently strong to make the functions {h(\cdot,y)} upper semi-continuous.

— Non-manipulability —

Sandroni’s Theorem does not hold when {X} is infinite, precisely because the game we defined in the proof may have no value. We now take a game without a value and translate it into a non-manipulable test for an infinite set {X} of realizations. I only know of one game without a value (`I know a larger such game’ quipped Sergiu when I said this in the talk): two players simultaneously and independently pick natural numbers, and the player who chose the larger number wins. Here is the translation of the game into a test:

Example 1 Let {X=\mathbb{N}}. Let {T:\Delta(X)\rightarrow 2^X} be the test that is given by

\displaystyle T(\mu)=\{x\in{\mathbb N} | \mu([x,\infty))<\epsilon\}.

Then {T} does not reject the true expert with probability {1-\epsilon}, and for every {\zeta\in\Delta(\Delta(X))} there exists a realization {x\in X} such that

\displaystyle \zeta\bigl(\{\mu\in\Delta(X)|x\in T(\mu)\}\bigr)>1-\epsilon.

Not only is the test not manipulable: for every {\zeta\in \Delta(\Delta(X))} that a charlatan might employ to randomize his worthless forecast {\mu}, there is some realization {x} under which he fails with high probability.

Proof: For every {\mu\in\Delta(X)} let {t(\mu)} ({t} for tail) be given by {t(\mu)=\min\{t\in {\mathbb N}|\mu\left([t,\infty)\right)<\epsilon\}}. Then {T} can be equivalently written as {T(\mu)=[t(\mu),\infty)}.

We first show that the test has small probability to reject a true expert. Indeed,

\displaystyle \mu\left(T(\mu)\right)=\mu\left([t(\mu),\infty)\right)<\epsilon

for every {\mu\in\Delta(X)}, where the equality follows from the definition of {T} and the inequality from the definition of {t}.

Now let {\zeta\in\Delta(\Delta(X))}. Let {\bar\zeta\in\Delta(\mathbb{N})} be the push-forward of {\zeta} under {t} and let {x\geq t(\bar\zeta)}. Then

\displaystyle \begin{array}{rcl} \lefteqn{\zeta\bigl(\{\mu\in\Delta(X)|x\in T(\mu)\}\bigr)=}\\&&\zeta\bigl(\{\mu\in\Delta(X)|x\geq t(\mu)\}\bigr)=\bar\zeta\left([0,x]\right)\geq 1-\epsilon.\end{array}

The first equality follows from the definition of {T}, the second from the definition of {\bar\zeta} and the inequality from the choice of {x}. \Box
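The proof is constructive, at least when {\zeta} has finite support. Here is a Python sketch of the counter-strategy (my own illustration, with finitely supported distributions represented as dictionaries): it computes {t(\mu)} on the support of {\zeta}, the push-forward {\bar\zeta}, and a realization at which the charlatan fails with probability at least {1-\epsilon}.

    eps = 0.1

    def t(mu):
        """t(mu) = min{ t : mu([t, infinity)) < eps } for a finitely supported mu,
        represented as a dict {natural number: probability}."""
        tail = 0.0
        for x in sorted(mu, reverse=True):
            tail += mu[x]
            if tail >= eps:
                return x + 1
        return 0

    def failing_realization(zeta):
        """zeta is a list of (mu, probability) pairs. Returns a realization x at which
        a forecast drawn from zeta fails the test with probability at least 1 - eps."""
        pushforward = {}
        for mu, prob in zeta:
            pushforward[t(mu)] = pushforward.get(t(mu), 0.0) + prob
        return t(pushforward)              # any x >= t(pushforward) also works

    # A toy zeta supported on two forecasts.
    zeta = [({0: 0.5, 10: 0.5}, 0.3), ({k: 1 / 64 for k in range(64)}, 0.7)]
    x = failing_realization(zeta)
    print(x, sum(prob for mu, prob in zeta if x >= t(mu)))   # the second number is >= 1 - eps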

— Sequential Tests —

So, Sandroni’s Theorem only holds for finite sets {X}. We are now aiming at proving a manipulability theorem for an infinite set {X} by adding some structure on the set {X} and some restrictions on the test. From now on we assume that {X=\{0,1\}^{\mathbb N}}. The set {S=\{0,1\}} is the set of outcomes. Everything works for an arbitrary finite set of outcomes, but I stick with two outcomes for notational convenience. Note that {X} is a compact metrizable space in the product topology.

There is an almost equivalent representation of an element {\mu\in\Delta(X)} as the conditional probability that {s_n=1} given {s_0,\dots,s_{n-1}} for a {\mu}-randomly chosen {x=(s_0,s_1,\dots)}. Let {\mathcal{F}} be the set of functions {f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}; this gives rise to a natural map {f\in\mathcal{F}\mapsto\mu_f\in\Delta(X)}, which is surjective and continuous when the domain is equipped with the product topology and the range with the weak{^\ast} topology. It will be useful to identify {\mathcal{F}} with {\Delta(X)}, so I will think of a test as a function {T:\mathcal{F}\rightarrow 2^X}.

For every {f\in\mathcal{F}} and every realization {x=(s_0,s_1,\dots)}, the sequence of forecasts of {f} along {x} is given by {(p_0,p_1,\dots)} where {p_k=f(s_0,\dots,s_{k-1})}.

Definition 4 A test {T:\mathcal{F}\rightarrow 2^X} is sequential if {x\in T(f_1)} implies {x\in T(f_2)} for every {f_1,f_2\in\mathcal{F}} and every realization {x} such that the forecasts made by {f_1} and {f_2} along {x} are the same.

Equivalently, a sequential test is given by a subset {R\subseteq ([0,1]\times \{0,1\})^{\mathbb N}} ({R} for rejection): {x\in T(f)} iff {(p_0,s_0,p_1,s_1,\dots)\in R} for every realization {x=(s_0,s_1,\dots)} and every forecast {f\in\mathcal{F}}, where {(p_0,p_1,\dots)} is the sequence of predictions of {f} along {x}.

— Next Episode (Thursday 12:00) —

I will talk about Olszewski and Sandroni’s paper `The manipulability of future independent tests’ and my paper `Many inspections are manipulable’. The main goal is to prove that sequential tests are always manipulable.

There are two equivalent ways to understand the best response property of a Nash Equilibrium strategy. First, we can say that the player plays a mixed strategy whose expected payoff is maximal among all possible mixed strategies. Second, we can say that the player randomly chooses a pure strategy from the set of pure strategies whose expected payoff is maximal among all possible pure strategies.

So far so good, and every student of game theory is aware of this equivalence. What I think is less known is that the two perspectives are not identical for {\epsilon}-best response and {\epsilon}-equilibrium: a mixed strategy whose expected payoff is almost optimal might put some positive (though small) probability on a pure strategy which gives a horrible payoff. (For instance, with two pure strategies yielding payoffs {1} and {-100}, the mixed strategy that plays the bad one with probability {\epsilon/101} is still {\epsilon}-optimal.) In this post I am going to explain why I used to think the difference between the two perspectives is inconsequential, and why, following a conversation with Ayala Mashiah-Yaakovi about her work on subgame perfect equilibrium in Borel games, I changed my mind.


Amsrefs is a package for preparing bibliographic lists. If, like me, you use bibtex then you may find this post informative. If you enter your bibliographic items into the tex file manually, \emph-asizing titles and consulting the Chicago Manual of Style to check whether the publisher should appear before or after the publication year, then please have mercy on your co-authors and start eating with a fork and knife. You spill typos all over the place.

So I have tried using amsrefs recently. Pro: bibliographic items are entered in the tex file using a command similar to \bibitem, so there is no need to keep a separate bib file or to run bibtex. This is more convenient, especially if your folders are as messy as mine. Cons: bibliographic items are not sorted; they appear in the pdf in the same order they appear in the tex file. Worse, all the entries in your bibliographic list appear in the pdf document, even those you don’t cite in the main text. The referee will search for their name, find the paper in the list of references, then search for the citation and get pissed when it’s not there: apparently you know about their paper but have nothing to say about it. Another con: you are in charge of capitalization of the journal and paper titles. Chicago Manual of Style anybody?
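For readers who have never seen it, an amsrefs bibliography looks roughly like this (the entry below is a generic placeholder, not an actual reference):

    % in the preamble:
    \usepackage{amsrefs}

    % in place of thebibliography, at the end of the document:
    \begin{bibdiv}
    \begin{biblist}

    \bib{somelabel}{article}{
       author={Author, A.},
       title={Title of the paper},
       journal={Journal Name},
       volume={1},
       date={2011},
       pages={1--10},
    }

    \end{biblist}
    \end{bibdiv}

You then cite it with \cite{somelabel} as usual.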

Bottom line: I think I will return to bibtex. Am I missing anything ?
