
In the last few posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about its parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter' and `learning to make predictions about the future as if you knew the parameter'. The former seems to imply the latter almost by definition, but this is not so.

Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin, then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some distribution {\mu} over {[0,1]}, then your prediction about the outcome tomorrow will be the expectation of {\mu}. But in a more general stationary environment, your prediction about the outcome tomorrow depends not only on your current belief about the parameter but also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix, then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
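To make the contrast concrete, here is a sketch of the posterior-predictive formula in the two cases (the notation is mine, not from the earlier posts: {x_1,\dots,x_t} are the observed outcomes and {\mu_t} is the posterior belief about the parameter after observing them). In the coin case the history matters only through {\mu_t}:

{\displaystyle \mathbb{P}\left(x_{t+1}=H\mid x_1,\dots,x_t\right)=\int_0^1 \theta\, d\mu_t(\theta),}

whereas in the Markov case, with {\mu_t} now a belief about the unknown transition matrix {Q}, the prediction depends on {\mu_t} and also on today's outcome {x_t}:

{\displaystyle \mathbb{P}\left(x_{t+1}=j\mid x_1,\dots,x_t\right)=\int Q(x_t,j)\, d\mu_t(Q).}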

The formulation of the idea of `learning to make predictions' goes through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins' theorem to prove that rational players will end up playing an approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird' things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn't follow my previous posts, I hope the following example will be intriguing in its own right.


Department of self-promotion: sequential tests, Blackwell games and the axiom of determinacy.


Following Jeff, a few links to posts about Blackwell's life and research: Jake Abernethy, Rajiv Sethi, Anand Sarwate, Kevin Bryan, Jesús Fernández-Villaverde (in Spanish), Andrew Gelman.

Rajiv:

It takes a particular kind of strength to manage such a productive research career while tolerating the stresses and strains of personal insult, and carrying the aspirations of so many on one’s shoulders. Blackwell was more than a brilliant mathematician, he was also a human being of extraordinary personal fortitude.

Anand:

I’ll always remember what he told me when I handed him a draft of my thesis. “The best thing about Bayesians is that they’re always right.”

I will have more to say about the Stony Brook conference, but first a word about David Blackwell, who passed away last week. We game theorists know Blackwell for several seminal contributions. Blackwell's approachability theorem is at the heart of Aumann and Maschler's result about repeated games with incomplete information, which Eilon mentions below, and also of the calibration results which I mentioned in my presentation in Stony Brook (alas, I was too nervous and forgot to mention Blackwell as I intended to). Blackwell's theory of comparison of experiments has been influential in the game-theoretic study of the value of information, and Olivier presented a two-person game analogue of Blackwell's theorem in his talk. Another seminal contribution of Blackwell, together with Lester Dubins, is the theorem about merging of opinions, which is the major tool in the Ehuds' theory of Bayesian learning in repeated games. And then there are his contributions to the theory of infinite games with Borel payoffs (now known as Blackwell games) and Blackwell and Ferguson's solution to the Big Match game.


Blackwell and Girshick on the concept of strategy:

Imagine that you are to play the White pieces in a single game of chess, and that you discover you are unable to be present for the occasion. There is available a deputy, who will represent you on the occasion, and who will carry out your instructions exactly, but who is absolutely unable to make any decisions of his own volition. Thus, in order to guarantee that your deputy will be able to conduct the White pieces throughout the game, your instructions to him must envisage every possible circumstance in which he may be required to move, and must specify, for each such circumstance, what his choice is to be. Any such complete set of instructions constitutes what we shall call a strategy.

Now think about an infinite game, like the repeated prisoner's dilemma. If we take the idea of a strategy as a set of instructions seriously, then not every function from past histories to actions is something we would like to call a strategy, because not every function can be described by a set of instructions! This should be clear even before we formalize what `instructions' means, simply because the set of possible `instructions' is countable (every instruction is just a finite sentence in English), while the set of functions from past histories to actions has the cardinality of the continuum.
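Spelled out, the counting argument is roughly this (a sketch, assuming the repeated prisoner's dilemma with action set {\{C,D\}}). The set of finite histories is countable, so the set of all functions from histories to actions has cardinality

{\displaystyle \#\left(\{C,D\}^{\mathrm{Histories}}\right)=2^{\aleph_0},}

while instructions are finite strings over a finite alphabet {A}, of which there are only

{\displaystyle \#\bigcup_{n\geq 1}A^n=\aleph_0<2^{\aleph_0}.}

So only countably many of these functions can be described by instructions, and uncountably many cannot.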

So what should be the formal definition of a strategy in these games, a definition that captures the intuition of a complete set of instructions specifying what your choice is to be in each possible circumstance? You know what I think.
