
In the last few posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter’ and `learning to make predictions about the future as if you knew the parameter’. The former seems to imply the latter almost by definition, but this is not so.

Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some {\mu\in [0,1]} then your prediction about the outcome tomorrow will be the expectation of {\mu}. But in a more general stationary environment, your prediction about the outcome tomorrow depends both on your current belief about the parameter and on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix, then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
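To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the original posts). For an i.i.d. coin with a uniform prior, the posterior predictive probability depends on the history only through the counts; for a two-state Markov chain with a uniform prior on each row of the transition matrix, the prediction also depends on today’s outcome.

```python
def iid_coin_prediction(history):
    """Posterior predictive P(next toss = 1) under a uniform prior on the
    coin's bias: Laplace's rule of succession, a function of counts alone."""
    return (sum(history) + 1) / (len(history) + 2)

def markov_prediction(history):
    """Posterior predictive P(next outcome = 1) for a two-state Markov chain
    with an independent uniform prior on each row of the transition matrix.
    Unlike the i.i.d. case, the prediction depends on today's outcome."""
    counts = [[1, 1], [1, 1]]           # Beta(1,1) pseudo-counts per row
    for s, t in zip(history, history[1:]):
        counts[s][t] += 1               # tally observed transitions s -> t
    today = history[-1]
    return counts[today][1] / (counts[today][0] + counts[today][1])
```

For instance, markov_prediction([1, 0, 1]) and markov_prediction([0, 1, 1]) differ even though the two histories have the same counts, whereas iid_coin_prediction returns 3/5 for both.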

The idea of `learning to make predictions’ is formalized through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins’ theorem to prove that rational players will end up playing approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird’ things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn’t follow my previous posts, I hope the following example will be intriguing in its own right.

Every day there is probability {1/2} of an eruption of war (W). If no war erupts then the outcome is either bad economy (B) or good economy (G), and it is a function of the number of peaceful days since the last war. The function from the number of peaceful days to the outcome is an unknown parameter of the process. Thus, a parameter is a function {\theta:\{1,2,\dots\}\rightarrow\{\text{B},\text{G}\}}. I am going to compare the predictions about the future made by two agents: Roxana, who knows {\theta}, and Ursula, who faces uncertainty about {\theta} represented by a uniform belief over the set of all parameters. Neither Roxana nor Ursula knows the future outcomes, and since both are rational decision makers, they both use Bayes’ rule to form beliefs about the unknown future given what they have seen in the past.
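For concreteness, here is a minimal Python sketch of the process (my own illustration; the particular {\theta} passed in is an arbitrary placeholder).

```python
import random

def simulate(theta, horizon):
    """Simulate the war/economy process: each day war (W) erupts with
    probability 1/2; otherwise the outcome is theta(k), where k is the
    number of peaceful days since the last war, counting today."""
    outcomes, k = [], 0
    for _ in range(horizon):
        if random.random() < 0.5:   # war erupts, run of peaceful days resets
            outcomes.append('W')
            k = 0
        else:                       # the k-th consecutive peaceful day
            k += 1
            outcomes.append(theta(k))
    return outcomes

# An arbitrary placeholder parameter: good economy on even run lengths.
print(simulate(lambda k: 'G' if k % 2 == 0 else 'B', 20))
```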

Consider Roxana first. In the terminology I introduced in previous posts, she faces no structural uncertainty. After a period of {k} consecutive peaceful days, Roxana believes that with probability {1/2} the outcome tomorrow will be W, and with probability {1/2} it will be {\theta(k+1)}.

Now consider Ursula. While she does not initially know {\theta}, as time goes by she learns it. What do I mean here by learning? Well, suppose Ursula starts observing the outcomes and she sees G,B,W,B,G,…. From this information Ursula deduces that {\theta(1)=\text{B}}: if a peaceful day immediately follows a war then the economy is bad. (The initial G,B reveal nothing about {\theta}, since Ursula does not know how many peaceful days preceded the start of her observations.) The next time a war pops up, Ursula will make a prediction about the outcome tomorrow that is as accurate as Roxana’s. Similarly, Ursula can deduce that {\theta(2)=\text{G}}. In this way Ursula gradually deduces the values of {\theta(k)} as she observes the process. However, and this is the punch line, for every {k\in\{1,2,3,\dots\}} there will be a day on which Ursula observes {k} consecutive peaceful days for the first time, and on this day her prediction about the next outcome will be {1/2} for war, {1/4} for good economy and {1/4} for bad economy. Thus there will always be infinitely many occasions on which Ursula’s predictions differ from Roxana’s.
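Here is a sketch of Ursula’s prediction rule in the same style as above (my own illustration). Under the uniform prior, values of {\theta} that she has already pinned down determine her prediction exactly as they would Roxana’s; at a run length she has never completed before, the relevant value of {\theta} is still uniform on {B, G}.

```python
def ursula_prediction(history):
    """Ursula's one-day-ahead prediction under a uniform prior on theta.
    Outcomes before the first observed war reveal nothing, since the time
    elapsed since the previous (unobserved) war is unknown; after that,
    the j-th peaceful day of a run reveals theta(j)."""
    learned, k = {}, None         # k = None until the first war is observed
    for outcome in history:
        if outcome == 'W':
            k = 0
        elif k is not None:
            k += 1
            learned[k] = outcome  # theta(k) is now known
    if k is None:                 # no war observed yet
        return {'W': 0.5, 'B': 0.25, 'G': 0.25}
    nxt = k + 1                   # tomorrow is peaceful day k+1, unless war
    if nxt in learned:            # prediction coincides with Roxana's
        return {'W': 0.5, learned[nxt]: 0.5}
    return {'W': 0.5, 'B': 0.25, 'G': 0.25}  # first visit to this run length
```

On the history above, ursula_prediction(['G', 'B', 'W', 'B', 'G']) returns {'W': 0.5, 'B': 0.25, 'G': 0.25}: she has learned {\theta(1)} and {\theta(2)}, but tomorrow she needs {\theta(3)}.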

So Ursula does learn the parameter, in the sense that she gradually deduces more and more values of {\theta(k)}. However, because at every point in time she may require a different value of {\theta(k)} (this is the difference between the stationary environment and the i.i.d. environment!), there will be infinitely many times at which she has not yet deduced the value of the parameter that she needs in order to make a prediction about the outcome tomorrow.

You may notice that Ursula does succeed in making predictions most of the time. In fact, the situations in which she fails become rarer and rarer, since they require observing longer and longer blocks of peaceful days. Nabil and I formalize this idea and show that it holds in every stationary environment with structural uncertainty: the observer makes predictions approximately as if he knew the parameter on almost every day. For that we use a weak notion of merging which was suggested by Lehrer and Smorodinsky. If you are interested, this is a good time to look at our paper.
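A quick empirical check of this claim, in the same sketch style as before (assuming for simplicity that observation starts right after a war): the days on which Ursula’s prediction differs from Roxana’s are exactly those on which {\theta(k+1)} is still unknown, where {k} is the current run of peaceful days, and their frequency vanishes as the horizon grows.

```python
import random

def failure_frequency(horizon, seed=0):
    """Fraction of the first `horizon` days on which Ursula's prediction
    about tomorrow differs from Roxana's, i.e. days on which theta(k+1)
    is still unknown, k being the current run of peaceful days."""
    rng = random.Random(seed)
    known, k, failures = set(), 0, 0
    for _ in range(horizon):
        if (k + 1) not in known:   # Ursula cannot yet match Roxana today
            failures += 1
        if rng.random() < 0.5:     # war erupts, run resets
            k = 0
        else:                      # peaceful day: theta(k+1) is revealed
            k += 1
            known.add(k)
    return failures / horizon

for t in (100, 10_000, 1_000_000):
    print(t, failure_frequency(t))
```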

Finally, the example given above is our adaptation of an example that first appeared in a paper by Boris Yakovlevich Ryabko. Ryabko’s paper is part of a relatively large literature on non-Bayesian prediction in stationary environments. I will explain the relationship between that literature and our paper in another post.

Department of self-promotion: sequential tests, Blackwell games and the axiom of determinacy.


Following Jeff, a couple of links to posts about Blackwell’s life and research: Jake Abernethy, Rajiv Sethi, Anand Sarwate, Kevin Bryan, Jesús Fernández-Villaverde (in Spanish), Andrew Gelman

Rajiv:

It takes a particular kind of strength to manage such a productive research career while tolerating the stresses and strains of personal insult, and carrying the aspirations of so many on one’s shoulders. Blackwell was more than a brilliant mathematician, he was also a human being of extraordinary personal fortitude.

Anand:

I’ll always remember what he told me when I handed him a draft of my thesis. “The best thing about Bayesians is that they’re always right.”

I will have more to say about the Stony Brook conference, but first a word about David Blackwell, who passed away last week. We game theorists know Blackwell for several seminal contributions. Blackwell’s approachability theorem is at the heart of Aumann and Maschler’s result about repeated games with incomplete information, which Eilon mentions below, and also of the calibration results which I mentioned in my presentation at Stony Brook (alas, I was too nervous and forgot to mention Blackwell as I had intended). Blackwell’s theory of comparison of experiments has been influential in the game-theoretic study of the value of information, and Olivier presented a two-person game analogue of Blackwell’s theorem in his talk. Another seminal contribution of Blackwell, together with Lester Dubins, is the theorem about merging of opinions, which is the major tool in the Ehuds’ theory of Bayesian learning in repeated games. And then there are his contributions to the theory of infinite games with Borel payoffs (now known as Blackwell games) and Blackwell and Ferguson’s solution to the Big Match game.


Blackwell and Girshick about the concept of strategy:

Imagine that you are to play the White pieces in a single game of chess, and that you discover you are unable to be present for the occasion. There is available a deputy, who will represent you on the occasion, and who will carry out your instructions exactly, but who is absolutely unable to make any decisions of his own volition. Thus, in order to guarantee that your deputy will be able to conduct the White pieces throughout the game, your instructions to him must envisage every possible circumstance in which he may be required to move, and must specify, for each such circumstance, what his choice is to be. Any such complete set of instructions constitutes what we shall call a strategy.

Now think about an infinite game, like the repeated prisoner’s dilemma. If we take seriously the idea of a strategy as a set of instructions, then not every function from past histories to an action is something we would like to call a strategy, because not every function can be described by a set of instructions! This should be clear even before we formalize what instructions mean, simply because the set of possible `instructions’ is countable, as every such instruction is just a sentence in English.
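To make the counting explicit (my gloss on the argument): in the repeated prisoner’s dilemma the set {H} of finite histories is countably infinite, so the set of functions from {H} to the two actions has cardinality {2^{\aleph_0}}, which is uncountable; the set of finite English sentences, and hence of possible sets of instructions, is merely countable. So all but countably many such functions admit no description at all.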

So what should be the formal definition of a strategy in these games, a definition that captures the intuition of a complete set of instructions specifying what your choice is to be in each possible circumstance? You know what I think.
