In which I talk about Olszewski and Sandroni’s paper ‘Manipulability of future-independent tests’ and coupling of stochastic processes.

Link to previous episode

Recap Every day ${n=0,1,\dots}$ Nature randomizes the weather ${s_n\in\{0,1\}}$ for that day (${1}$ for rain, ${0}$ for no rain). An Expert delivers a forecast ${f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}$ which he claims governs Nature’s selection: according to the expert, ${f(s_0,\dots,s_{n-1})}$ is the probability of rain at day ${n}$ when the weather in the previous days was ${s_0,\dots,s_{n-1}}$. We denote by ${{\mathcal F}}$ the set of all forecasts. We compare the expert’s forecast to the actual infinite realization ${x=(s_0,s_1,\dots)\in X=\{0,1\}^{\mathbb N}}$ of the weather. A test is given by a function ${T:{\mathcal F}\rightarrow 2^X}$. An expert who provided a forecast ${f}$ is rejected if ${x\in T(f)}$. We say that the test does not reject the true expert with probability ${1-\epsilon}$ if ${\mu_f(T(f))<\epsilon}$ for every forecast ${f}$, where ${\mu_f\in\Delta(X)}$ is the distribution over realizations induced by ${f}$.

We now look at a special class of tests, which are sequential and reject the expert at a finite time. Such a test is given by a set ${R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{<{\mathbb N}}}$ of finite sequences of predictions and outcomes: let ${s_k\in\{0,1\}}$ be the outcome of day ${k}$ and ${p_k=f\left(s_0,\dots,s_{k-1}\right)}$ be the prediction about day ${k}$ generated by the forecast ${f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}$. The expert fails if after some day ${n\geq 0}$ it turns out that ${\left(p_0,s_0,\dots,p_n,s_n\right)\in R}$. Thus,

$\displaystyle T(f)=\bigl\{x=(s_0,s_1,\dots)\in X \bigm| (p_0,s_0,\dots,p_n,s_n)\in R\text{ for some }n\geq 0\bigr\}\ \ \ \ \ (1)$

Note that it is possible (for some choices of ${R}$ — see comment below) that the expert is never off the hook: there is always the potential that he will fail the test. (By contrast, in Sandroni’s Theorem with set of realizations ${X=\{0,1\}^n}$, at day ${n}$ there is a final verdict, and the expert has either passed or failed.)
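To make the sequential form (1) concrete, here is a minimal Python sketch of my own (the `in_reject_set` predicate and the likelihood-ratio example below are illustrations, not taken from the paper). The reject set ${R}$ is represented as a membership test on the finite record ${(p_0,s_0,\dots,p_n,s_n)}$:

```python
from typing import Callable, List, Sequence

# A forecast maps a finite history of outcomes to a probability of rain.
Forecast = Callable[[Sequence[int]], float]

def run_sequential_test(
    forecast: Forecast,
    realization: Sequence[int],                    # a finite prefix s_0, ..., s_N of x
    in_reject_set: Callable[[List[float]], bool],  # membership test for R
) -> bool:
    """Return True iff some prefix (p_0, s_0, ..., p_n, s_n) lands in R."""
    record: List[float] = []
    for n in range(len(realization)):
        p = forecast(realization[:n])              # prediction for day n
        record += [p, realization[n]]
        if in_reject_set(record):
            return True                            # the expert fails at day n
    return False

# Illustrative reject set: a likelihood-ratio test. Reject once the
# probability the forecast assigned to the observed outcomes drops below
# eps * 2**-(number of days); by Ville's inequality this rejects the true
# expert with probability at most eps.
def make_likelihood_R(eps: float) -> Callable[[List[float]], bool]:
    def in_R(record: List[float]) -> bool:
        days = len(record) // 2
        likelihood = 1.0
        for p, s in zip(record[::2], record[1::2]):
            likelihood *= p if s == 1 else 1.0 - p
        return likelihood < eps * 0.5 ** days
    return in_R

# A constant forecast of 70% rain, run against six consecutive dry days,
# fails this test at level eps = 0.05.
fails = run_sequential_test(lambda h: 0.7, [0] * 6, make_likelihood_R(0.05))
```

Note that the test never takes the expert off the hook: no matter how long the record has stayed outside ${R}$, a later prefix may still land in it.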

Theorem 1 (Olszewski and Sandroni (2009)) Let ${T}$ be a test of the form (1) that does not reject the true expert with probability ${1-\epsilon}$. Then for every ${\delta>0}$ there exists ${\zeta\in\Delta({\mathcal F})}$ such that

$\displaystyle \zeta\bigl(\bigl\{f\in{\mathcal F} \bigm| x\in T(f)\bigr\}\bigr)<\epsilon+\delta\ \ \ \ \ (2)$

for every ${x\in X}$.

Again, if the expert randomizes his forecast ${f}$ according to the distribution ${\zeta}$, he is unlikely to fail the test regardless of the actual realization ${x}$. In fact, Olszewski and Sandroni state and prove their theorem under a weaker condition on the test, which they call ‘future independence’. The proof is the same as the proof given below, so I can skip the definition of future independence without depriving you of any math.

The proof of the theorem follows the idea of the proof of Sandroni’s Theorem for the case of a finite set of realizations. We define a game between two players, Nature and Expert: Nature chooses a realization ${x}$ and Expert chooses a forecast ${f}$, and the payoff that Expert pays Nature is given by ${h(x,f)=\mathbf{1}_{T(f)}(x)}$, where ${\mathbf{1}_{T(f)}}$ is the indicator function of ${T(f)}$. In order to use Fan’s Theorem we will need a small twist on the game: we are going to restrict the Expert’s set of pure strategies so that the prediction ${f(s_0,\dots,s_{n-1})}$ of day ${n}$ lies in some finite grid. (It is a good idea at this point to figure out for yourself what goes wrong if we don’t add this restriction.)

Fix ${\delta>0}$. We denote by ${{\mathcal F}_\delta}$ the set of forecasts defined below, in which the expert’s predictions are restricted at every day to a finite grid in ${[0,1]}$:

$\displaystyle {\mathcal F}_\delta = \bigl\{f\in{\mathcal F} \bigm| f(s_0,\dots,s_{n-1}) = \delta\cdot k/2^{n+1}\text{ for some }k\in {\mathbb N}, \text{ for every }n\text{ and every }(s_0,\dots,s_{n-1})\bigr\}.$

The following Lemma, which I will prove later, will imply that the Expert player in the game does not lose much if we restrict his strategy set to ${{\mathcal F}_\delta}$.

Lemma 2 Let ${f,f'\in{\mathcal F}}$ be such that

$\displaystyle |f(s_0,\dots,s_{n-1})-f'(s_0,\dots,s_{n-1})|<\delta/2^{n+1}\ \ \ \ \ (3)$

for every ${n\in {\mathbb N}}$ and every ${(s_0,\dots,s_{n-1})}$. Then ${|\mu_f(B)-\mu_{f'}(B)|<\delta}$ for every Borel set ${B\subseteq X}$.
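For intuition, here is a small Python sketch of my own (not code from the paper) showing how to produce from an arbitrary ${f\in{\mathcal F}}$ a grid forecast ${f'\in{\mathcal F}_\delta}$ satisfying (3): round each day-${n}$ prediction to the nearest grid point.

```python
def project_to_grid(p: float, n: int, delta: float) -> float:
    """Nearest point of the day-n grid {delta * k / 2**(n+1) : k in N}
    that lies in [0, 1]. The rounding error is strictly less than
    delta / 2**(n+1), so the projected forecast satisfies condition (3)."""
    step = delta / 2 ** (n + 1)
    k = round(p / step)
    if k * step > 1.0:   # nearest multiple overshoots [0, 1]; step down once
        k -= 1
    return k * step
```

For example, with ${\delta=1/2}$ the day-${2}$ grid has spacing ${1/16}$, and a prediction of ${0.73}$ is rounded to ${0.75}$, an error well below the bound in (3).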

— Proof of Theorem —

Consider a two-player zero-sum game between Nature and Expert: Nature chooses a realization ${x\in X}$ and Expert, simultaneously and independently, chooses a forecast ${f\in{\mathcal F}_\delta}$. Then Expert pays Nature an amount ${h(x,f)=\mathbf{1}_{T(f)}(x)}$.

We first claim that the maximin of the game is at most ${\epsilon+\delta}$. Indeed, let ${\mu\in\Delta(X)}$ be a mixed strategy of Nature. Then ${\mu=\mu_f}$ for some ${f\in{\mathcal F}}$ (take ${f(s_0,\dots,s_{n-1})}$ to be the conditional probability under ${\mu}$ of rain at day ${n}$ given the weather ${s_0,\dots,s_{n-1}}$ in the previous days). Let ${f'\in{\mathcal F}_\delta}$ satisfy (3). If Nature plays ${\mu}$ and Expert responds with ${f'}$ then the expected payoff that Expert pays Nature is

$\displaystyle \mu(T(f'))=\mu_f(T(f'))\leq\mu_{f'}(T(f'))+\delta\leq \epsilon+\delta,$

where the equality follows from the definition of ${f}$, the first inequality from Lemma 2 and the second inequality from the fact that ${T}$ does not reject the truth with probability ${1-\epsilon}$.

If we can show that the minimax is smaller than ${\epsilon+\delta}$ then an optimal mixed strategy ${\zeta}$ of the Expert in the game will satisfy (2). All that is left to do is to check that the conditions of Fan’s Theorem are satisfied, so that the minimax equals the maximin. Indeed, the set ${{\mathcal F}_\delta}$ of pure strategies of the Expert is compact as a countable product of finite sets, each equipped with the discrete topology. Moreover, in this topology ${f\mapsto h(x,f)}$ is l.s.c. for every ${x\in X}$. (Skip the next paragraph if this is obvious.)

Lower semi-continuity means that ${\liminf_{m\rightarrow \infty}h(x,f^m)\geq h(x,f)}$ whenever ${f^m\rightarrow f}$. For indicator functions the inequality means that if ${h(x,f)=1}$ then ${h(x,f^m)=1}$ for sufficiently large ${m}$. So let ${x=(s_0,s_1,\dots)\in\{0,1\}^{\mathbb N}}$ and assume that ${h(x,f)=1}$. Then ${(p_0,s_0,\dots,p_n,s_n)\in R}$ for some ${n}$, where ${p_k=f(s_0,\dots,s_{k-1})}$. Convergence of ${f^m}$ to ${f}$ in the product topology is pointwise convergence, so ${f^m(\sigma)\rightarrow f(\sigma)}$ for every ${\sigma\in\{0,1\}^{<{\mathbb N}}}$. Since ${f^m\in {\mathcal F}_\delta}$ for every ${m}$, and each coordinate ranges over a finite grid, the last limit means that ${f^m(\sigma) = f(\sigma)}$ for sufficiently large ${m}$. Take ${m}$ large enough so that the equality is satisfied for every ${\sigma}$ of length ${n}$ or smaller. Then for such ${m}$ the predictions made by ${f^m}$ along ${x}$ equal those made by ${f}$ until day ${n}$. It follows that ${h(x,f^m)=1}$, as desired.

— Weather coupling —

The best way I know to prove Lemma 2 is by using coupling. This is a powerful technique to prove relations between two random variables by embedding them in the same probability space.

Imagine that Zeus randomizes the weather in Athens according to ${\mu_f}$ and Baal randomizes the weather in Askelon according to ${\mu_{f'}}$.

One way Zeus can perform this randomization task is by using a sequence ${U_0,U_1,U_2,\dots}$ of i.i.d. random variables distributed uniformly on ${[0,1]}$: after he has decided on the weather ${s_0,s_1,\dots,s_{n-1}\in\{0,1\}}$ of the first ${n}$ days, Zeus makes the ${n}$-th day rainy (i.e., decides that ${s_n=1}$) iff ${U_n\leq f(s_0,\dots,s_{n-1})}$. Assume now that Baal uses the same technique, only with ${f'}$ instead of ${f}$, to randomize the weather ${t_0,t_1,\dots}$ in Askelon, and that Baal uses the same sequence ${U_0,U_1,\dots}$ of i.i.d. uniform random variables that Zeus uses.

Then the distribution of the weather in Askelon is ${\mu_{f'}}$. Moreover, because of (3), the weather in Askelon is likely to be the same as the weather in Athens: by a union bound over the days, the probability that the weather differs at some day is smaller than ${\sum_{n\geq 0}\delta/2^{n+1}=\delta}$! Now ${\mu_{f}(B)}$ is the probability that the realized weather sequence ${s_0,s_1,\dots}$ in Athens is in ${B}$, and ${\mu_{f'}(B)}$ is the probability that the realized weather sequence ${t_0,t_1,\dots}$ in Askelon is in ${B}$. Since the probability that these sequences differ is smaller than ${\delta}$, it follows that ${|\mu_f(B)-\mu_{f'}(B)|<\delta}$, as desired.
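The coupling is easy to check numerically. A Monte Carlo sketch of my own (the constant forecasts, the horizon of 30 days, and ${\delta=0.2}$ are arbitrary illustrative choices): both cities consume the same uniforms, the two forecasts differ by less than ${\delta/2^{n+1}}$ at day ${n}$, and the fraction of runs in which the two weather sequences ever differ stays below ${\delta}$.

```python
import random

def sample_coupled(f, g, horizon, rng):
    """Draw one weather path for each forecast, feeding BOTH the same uniforms."""
    s, t = [], []
    for _ in range(horizon):
        u = rng.random()                  # the shared U_n
        s.append(1 if u <= f(s) else 0)   # Athens, driven by f
        t.append(1 if u <= g(t) else 0)   # Askelon, driven by f'
    return s, t

delta = 0.2
f = lambda history: 0.5
g = lambda history: 0.5 + delta / 2 ** (len(history) + 2)  # within delta/2**(n+1) of f

rng = random.Random(0)
trials = 20_000
differ = 0
for _ in range(trials):
    s, t = sample_coupled(f, g, 30, rng)
    differ += (s != t)
fraction = differ / trials   # the union bound even predicts < delta/2 here
```

Since ${|g-f|=\delta/2^{n+2}}$ at day ${n}$, the union bound gives a disagreement probability below ${\delta/2=0.1}$, and the simulated fraction lands close to that value.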