Six years ago, I decided to teach intermediate microeconomics. I described my views on how it should be taught in an earlier post. The notes for that course grew into a textbook that is now available in Europe and will be available in the US this April. I am particularly delighted to be able to sport Paolo Uccello's `The Hunt' on the cover. The publisher, Cambridge University Press, asked me to provide an explanation for why I had chosen it, and it appears on the rear cover. Should you make your way to Oxford, be sure to stop by the Ashmolean Museum to see it, the painting of course, in all its glory. I daydream that, like Samuelson's `Economics', it will sell bigly.
This post describes the main theorem in my new paper with Nabil. Scroll down for open questions following the theorem. The theorem asserts that a Bayesian agent in a stationary environment will learn to make predictions as if he knew the data generating process, so that, as time goes by, structural uncertainty dissipates. The standard example is when the sequence of outcomes is i.i.d. with an unknown parameter; as time goes by the agent learns the parameter.
The formulation of `learning to make predictions' goes through merging, which traces back to Blackwell and Dubins. I will not give Blackwell and Dubins' definition in this post but a weaker definition, suggested by Kalai and Lehrer.
A Bayesian agent observes an infinite sequence of outcomes from a finite set $A$. Let $\mu\in\Delta(A^{\mathbb{N}})$ represent the agent's belief about the future outcomes. Suppose that before observing every day's outcome the agent makes a probabilistic prediction about it. I denote by $\mu_n(a_0,\dots,a_{n-1})$ the element in $\Delta(A)$ which represents the agent's prediction about the outcome of day $n$ just after he observed the outcomes $a_0,\dots,a_{n-1}$ of previous days. In the following definition it is instructive to think about $\tilde\mu$ as the true data generating process, i.e., the process that generates the sequence of outcomes, which may be different from the agent's belief.
Definition 1 (Kalai and Lehrer) Let $\mu,\tilde\mu\in\Delta(A^{\mathbb{N}})$. Then $\mu$ merges with $\tilde\mu$ if for $\tilde\mu$-almost every realization $(a_0,a_1,\dots)$ it holds that
$$\lim_{n\to\infty}\bigl\|\mu_n(a_0,\dots,a_{n-1})-\tilde\mu_n(a_0,\dots,a_{n-1})\bigr\|=0.$$
Assume now that the agent's belief $\mu$ is stationary, and let $\mu=\int\theta\,\lambda(\mathrm{d}\theta)$ be its ergodic decomposition. Recall that in this decomposition $\theta$ ranges over ergodic beliefs and $\lambda$ represents structural uncertainty. Does the agent learn to make predictions? Using the definition of merging we can ask: does $\mu$ merge with $\theta$ for $\lambda$-almost every $\theta$? The answer, perhaps surprisingly, is no. I gave an example in my previous post.
Let me now move to a weaker definition of merging, first suggested by Lehrer and Smorodinsky. This definition requires the agent to make correct predictions only in almost every period.
Definition 2 Let $\mu,\tilde\mu\in\Delta(A^{\mathbb{N}})$. Then $\mu$ weakly merges with $\tilde\mu$ if for $\tilde\mu$-almost every realization $(a_0,a_1,\dots)$ it holds that
$$\lim_{n\to\infty,\ n\in T}\bigl\|\mu_n(a_0,\dots,a_{n-1})-\tilde\mu_n(a_0,\dots,a_{n-1})\bigr\|=0$$
for a set $T\subseteq\mathbb{N}$ of periods of density $1$ (that is, $\lim_{n\to\infty}|T\cap\{0,\dots,n-1\}|/n=1$).
The definition of weak merging is natural: a patient agent whose belief weakly merges with the true data generating process will make almost optimal decisions. Kalai, Lehrer and Smorodinsky discuss these notions of merging and also their relationship with Dawid's idea of calibration.
I am now in a position to state the theorem I have been talking about for two months:
Theorem 3 Let $\mu\in\Delta(A^{\mathbb{N}})$ be stationary, and let $\mu=\int\theta\,\lambda(\mathrm{d}\theta)$ be its ergodic decomposition. Then $\mu$ weakly merges with $\theta$ for $\lambda$-almost every $\theta$.
In words: An agent who has some structural uncertainty about the data generating process will learn to make predictions in most periods as if he knew the data generating process.
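The theorem is about general stationary beliefs, but its content is easy to watch numerically in the simplest ergodic decomposition: a mixture of two i.i.d. coins. Here is a minimal Python sketch (the parameter values, the 50-50 prior, and all names are my own illustrative choices, not anything from the paper); the agent's one-step predictions approach those of the true coin:

```python
import random

# Agent's belief: a 50-50 mixture of two i.i.d. coins (each coin is ergodic).
thetas = [0.3, 0.7]        # possible success probabilities (assumed values)
posterior = [0.5, 0.5]     # structural uncertainty lambda

random.seed(1)
true_theta = 0.7           # nature draws this ergodic component

for n in range(200):
    # One-step prediction: posterior-weighted probability of success.
    prediction = sum(p * th for p, th in zip(posterior, thetas))
    if n % 50 == 0:
        print(f"day {n}: predicted P(success) = {prediction:.4f} (truth {true_theta})")
    # Observe an outcome drawn from the true coin and update by Bayes' rule.
    success = random.random() < true_theta
    likelihoods = [th if success else 1 - th for th in thetas]
    weights = [p * l for p, l in zip(posterior, likelihoods)]
    total = sum(weights)
    posterior = [w / total for w in weights]
```

Of course, in the i.i.d. case the predictions converge in every period; the content of Theorem 3 is that in a general stationary environment one can still guarantee convergence along a set of periods of density one.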
Finally, here are the promised open questions. They deal with the two qualifications in the theorem. The first question is about the "$\lambda$-almost every $\theta$" in the theorem. As Larry Wasserman mentioned, this is unsatisfactory in some senses. So,
Question 1 Does there exist a stationary $\mu$ (equivalently, a belief $\lambda$ over ergodic beliefs) such that $\mu$ weakly merges with $\theta$ for every ergodic distribution $\theta$?
The second question is about strengthening weak merging to merging. We already know that this cannot be done for an arbitrary belief over ergodic processes, but what if $\lambda$ is concentrated on some natural family of processes, for example hidden Markov processes with a bounded number of hidden states? Here is the simplest setup for which I don't know the answer.
Question 2 The outcome of the stock market on every day is either U or D (up or down). An agent believes that this outcome is a stochastic function of an unobserved (hidden) state of the economy, which can be either G or B (good or bad): when the hidden state is B the outcome is U with probability $\beta$ (and D with probability $1-\beta$), and when the state is G the outcome is U with probability $\gamma$. The hidden state changes according to a Markov process with transition probabilities $p(G\mid B)=\rho$, $p(B\mid G)=\rho'$. The parameter is $(\beta,\gamma,\rho,\rho')$ and the agent has some prior $\lambda$ over the parameter. Does the agent's belief about outcomes merge with the truth for $\lambda$-almost every parameter?
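Whatever the answer, the Bayesian predictions in this setup can at least be computed, by running one hidden-state filter per parameter value and mixing the filters with the posterior weights. Here is a minimal simulation sketch; the grid prior, the particular parameter values, and all names are illustrative assumptions of mine, not part of the question (which concerns full merging and cannot be settled by a finite simulation):

```python
import random

random.seed(0)

# theta = (beta, gamma, rho, rho2): P(U|B), P(U|G), p(G|B), p(B|G).
grid = [(b, g, r, r2)
        for b in (0.2, 0.6) for g in (0.4, 0.8)
        for r in (0.1, 0.3) for r2 in (0.1, 0.3)]
truth = (0.6, 0.8, 0.3, 0.1)   # assumed true parameter, included in the grid

def step(belief_G, theta, outcome):
    """One step of the HMM forward filter: returns the likelihood of
    `outcome` and the updated P(state = G) for the next day."""
    beta, gamma, rho, rho2 = theta
    like_G = gamma if outcome == "U" else 1.0 - gamma
    like_B = beta if outcome == "U" else 1.0 - beta
    lik = belief_G * like_G + (1 - belief_G) * like_B
    post_G = belief_G * like_G / lik                  # condition on outcome
    next_G = post_G * (1 - rho2) + (1 - post_G) * rho  # hidden state moves
    return lik, next_G

def predict_U(belief_G, theta):
    beta, gamma, _, _ = theta
    return belief_G * gamma + (1 - belief_G) * beta

state = "G"
weights = {th: 1.0 / len(grid) for th in grid}   # the prior lambda (uniform)
beliefs = {th: 0.5 for th in grid}               # per-parameter state beliefs
true_belief = 0.5                                # filter of a truth-knowing oracle

for n in range(2001):
    agent = sum(w * predict_U(beliefs[th], th) for th, w in weights.items())
    oracle = predict_U(true_belief, truth)
    if n % 500 == 0:
        print(f"day {n}: agent P(U) = {agent:.3f}, oracle P(U) = {oracle:.3f}")
    # Draw today's outcome from the true hidden Markov process.
    beta, gamma, rho, rho2 = truth
    p_U = gamma if state == "G" else beta
    outcome = "U" if random.random() < p_U else "D"
    if state == "G":
        state = "B" if random.random() < rho2 else "G"
    else:
        state = "G" if random.random() < rho else "B"
    # Bayes update of the parameter weights and of every parameter's filter.
    total = 0.0
    for th in grid:
        lik, beliefs[th] = step(beliefs[th], th, outcome)
        weights[th] *= lik
        total += weights[th]
    for th in grid:
        weights[th] /= total
    _, true_belief = step(true_belief, truth, outcome)
```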
In the last posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter' and `learning to make predictions about the future as if you knew the parameter'. The former seems to imply the latter almost by definition, but this is not so.
Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some $\hat\lambda\in\Delta([0,1])$ then your prediction about the outcome tomorrow will be the expectation of $\hat\lambda$. But in a more general stationary environment, your prediction about the outcome tomorrow depends on your current belief about the parameter and also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
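To make the Markov point concrete, here is a minimal sketch (a two-outcome chain whose unknown transition rule I summarize, for brevity, by a single `stay' probability $q$; the grid prior and all names are illustrative choices of mine). The posterior over the transition rule is updated from the data, but the prediction for tomorrow is read off the posterior together with today's outcome:

```python
import random

random.seed(2)

# Unknown transition rule of a two-outcome Markov chain, summarized by a
# single "stay" probability q = P(tomorrow = today); grid prior over q.
qs = [0.1, 0.3, 0.5, 0.7, 0.9]
post = [1.0 / len(qs)] * len(qs)
true_q, today = 0.7, 0

for day in range(201):
    # The prediction uses the posterior over q AND today's outcome.
    p_same = sum(p * q for p, q in zip(post, qs))
    p_one = p_same if today == 1 else 1 - p_same
    if day % 50 == 0:
        print(f"day {day}: today = {today}, predicted P(tomorrow = 1) = {p_one:.3f}")
    tomorrow = today if random.random() < true_q else 1 - today
    # Bayes update of the belief about the transition rule.
    like = [q if tomorrow == today else 1 - q for q in qs]
    weights = [p * l for p, l in zip(post, like)]
    total = sum(weights)
    post = [w / total for w in weights]
    today = tomorrow

print("posterior over q:", [round(p, 3) for p in post])
```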
The formulation of the idea of `learning to make predictions' goes through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins' theorem to prove that rational players will end up playing an approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird' things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn't follow my previous posts, I hope the following example will be intriguing in its own right.
Every day there is a probability $p$ for eruption of war (W). If no war erupts then the outcome is either bad economy (B) or good economy (G), and it is a function of the number of peaceful days since the last war. The function from the number of peaceful days to the outcome is an unknown parameter of the process. Thus, a parameter is a function $f:\mathbb{N}\to\{B,G\}$. I am going to compare the predictions about the future made by two agents: Roxana, who knows $f$, and Ursula, who faces some uncertainty about $f$, represented by a uniform belief over the set of all parameters. Neither Roxana nor Ursula knows the future outcomes, and since both of them are rational decision makers, they both use Bayes' rule to form beliefs about the unknown future given what they have seen in the past.
Consider first Roxana. In the terminology I introduced in previous posts, she faces no structural uncertainty. After a period of $n$ consecutive peaceful days Roxana believes that with probability $p$ the outcome tomorrow will be W and with probability $1-p$ the outcome tomorrow will be $f(n+1)$.
Now consider Ursula. While she does not initially know $f$, as time goes by she learns it. What do I mean here by learning? Well, suppose Ursula starts observing the outcomes and she sees G,B,W,B,G,…. From this information Ursula deduces that $f(1)=B$, so that if a peaceful day follows a war then it has a bad economy. The next time a war pops up, Ursula will know how to make a prediction about the outcome tomorrow which is as accurate as Roxana's prediction. Similarly Ursula can deduce that $f(2)=G$. In this way Ursula gradually deduces the values of $f$ while she observes the process. However, and this is the punch line, for every $n$ there will be a time when Ursula observes $n$ consecutive peaceful days for the first time, and on that day her prediction about the next outcome will be $p$ for war, $(1-p)/2$ for good economy and $(1-p)/2$ for bad economy. Thus there will always be infinitely many occasions on which Ursula's prediction differs from Roxana's.
So, Ursula does learn the parameter, in the sense that she gradually deduces more and more values of $f$. However, because at every point in time she may require a different value of $f$ (this is the difference between the stationary environment and the i.i.d. environment!), there may be infinitely many times at which she has not yet been able to deduce the value of the parameter which she needs in order to make a prediction about the outcome tomorrow.
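Here is a minimal simulation of the example (the war probability $p=1/2$ and the horizon are my illustrative choices): Ursula's prediction differs from Roxana's exactly on the days whose peace-streak length she has not seen before, and such days keep arriving, though more and more rarely.

```python
import random

random.seed(3)
p = 0.5                      # known per-day probability of war (my choice)
peaceful = 0                 # length of the current streak of peaceful days
known = set()                # arguments n at which Ursula has deduced f(n)
disagree = []

for day in range(10_000):
    # Tomorrow is W with prob. p and f(peaceful + 1) otherwise. Ursula's
    # prediction matches Roxana's iff she already deduced f(peaceful + 1).
    if peaceful + 1 not in known:
        disagree.append(day)
    if random.random() < p:  # war erupts and the peace counter resets
        peaceful = 0
    else:                    # this peaceful day reveals f(peaceful + 1)
        known.add(peaceful + 1)
        peaceful += 1

print(len(disagree), "disagreement days out of 10000; the last few:", disagree[-5:])
```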
You may notice that Ursula does succeed in making predictions most of the time. In fact, the situations in which she fails become rarer and rarer, as she observes longer and longer blocks of peaceful days. Indeed, Nabil and I formalize this idea and show that this is the case in every stationary environment with structural uncertainty: the observer makes predictions approximately as if he knew the parameter on almost every day. For that, we use a weak notion of merging which was suggested by Lehrer and Smorodinsky. If you are interested, then this is a good time to look at our paper.
Finally, the example given above is our adaptation of an example that first appeared in a paper by Boris Yakovlevich Ryabko. Ryabko's paper is part of a relatively large literature about non-Bayesian prediction in stationary environments. I will explain the relationship between that literature and our paper in another post.
A Bayesian agent is observing a sequence of outcomes in $\{0,1\}$. The agent does not know the outcomes in advance, so he forms some belief $\mu$ over sequences of outcomes. Suppose that the agent believes that the number $k$ of successes in $n$ consecutive outcomes is distributed uniformly in $\{0,1,\dots,n\}$ and that all configurations with $k$ successes are equally likely:
$$\mu(a_0,\dots,a_{n-1})=\frac{1}{(n+1)\binom{n}{k}}$$
for every $(a_0,\dots,a_{n-1})\in\{0,1\}^n$, where $k=\#\{0\le i<n : a_i=1\}$.
You have seen this belief already, though maybe not in this form. It is the belief of an agent who tosses an i.i.d. coin and has some uncertainty over the parameter of the coin, given by a uniform distribution over $[0,1]$.
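To connect the two descriptions, integrate the i.i.d. likelihood of a fixed sequence with $k$ successes against the uniform prior; the classical Beta integral gives exactly the formula above:
$$\mu(a_0,\dots,a_{n-1})=\int_0^1\theta^k(1-\theta)^{n-k}\,\mathrm{d}\theta=\frac{k!\,(n-k)!}{(n+1)!}=\frac{1}{(n+1)\binom{n}{k}},$$
and summing over the $\binom{n}{k}$ configurations with $k$ successes shows that the number of successes is indeed uniform on $\{0,\dots,n\}$.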
In this post I am gonna make a fuss about the fact that as time goes by the agent learns the parameter of the coin. The word `learning' has several legitimate formalizations, and today I am talking about the oldest and probably the most important one: consistency of posterior beliefs. My focus is somewhat different from that of textbooks because 1) as in the first paragraph, my starting point is the belief about outcome sequences, before there are any parameters, and 2) I emphasize some aspects of consistency which are unsatisfactory in the sense that they don't really capture our intuition about learning. Of course this is all part of the grand marketing campaign for my paper with Nabil, which uses a different notion of learning, so this discussion of consistency is a bit of a sidetrack. But I have already come across some VIP who I suspect was unaware of the distinction between different formulations of learning, and it wasn't easy to treat his cocky blabbering in a respectful way. So it's better to start with the basics.
Let $A$ be a finite set of outcomes. Let $\mu$ be a belief over the set $A^{\mathbb{N}}$ of infinite sequences of outcomes, also called realizations. A decomposition of $\mu$ is given by a set $\Theta$ of parameters, a belief $\lambda$ over $\Theta$, and, for every $\theta\in\Theta$, a belief $\mu_\theta$ over $A^{\mathbb{N}}$, such that $\mu=\int\mu_\theta\,\lambda(\mathrm{d}\theta)$. The integral in the definition means that the agent can think about the process as a two-stage randomization: first a parameter $\theta$ is drawn according to $\lambda$, and then a realization $\omega$ is drawn according to $\mu_\theta$. Thus, a decomposition captures a certain way in which a Bayesian agent arranges his belief. Of course every belief $\mu$ admits many decompositions. The extreme decompositions are:
- The Trivial Decomposition. Take $\Theta=\{\bar\theta\}$ and $\mu_{\bar\theta}=\mu$.
- Dirac's Decomposition. Take $\Theta=A^{\mathbb{N}}$ and $\lambda=\mu$. A "parameter" in this case is a realization $\omega$, and $\mu_\omega=\delta_\omega$ is the measure that assigns probability 1 to the realization $\omega$.
Not all decompositions are equally exciting. We are looking for decompositions in which the parameter captures some `fundamental property' of the process. The two extreme cases mentioned above are usually unsatisfactory in this sense. In Dirac's decomposition, there are as many parameters as there are realizations; parameters simply copy realizations. In the trivial decomposition, there is a single parameter, which thus cannot discriminate between different interesting properties. For stationary processes, there is a natural decomposition in which the parameters distinguish between fundamental properties of the process. This is the ergodic decomposition, according to which the parameters are the ergodic beliefs. Recall that in this decomposition, the parameter captures the empirical distribution of blocks of outcomes in the infinite realization.
So what about learning? While observing a process, a Bayesian agent will update his belief about the parameter. We denote by $\lambda_n=\lambda(\cdot\mid a_0,\dots,a_{n-1})$ the posterior belief about the parameter $\theta$ at the beginning of period $n$, after observing the outcome sequence $a_0,\dots,a_{n-1}$. The notion of learning I want to talk about in this post is that this belief converges to a belief that is concentrated on the true parameter $\theta$. The example you should have in mind is the coin-toss example I started with: while observing the outcomes of the coin the agent becomes more and more certain about the true parameter of the coin, which means his posterior belief becomes concentrated around a belief that gives probability $1$ to the true parameter.
Definition 1 A decomposition of $\mu$ is consistent if for $\lambda$-almost every $\theta$ it holds that
$$\lambda_n\longrightarrow\delta_\theta$$
for $\mu_\theta$-almost every realization $(a_0,a_1,\dots)$.
In this definition, $\delta_\theta$ is the Dirac atomic measure on $\theta$, and the convergence is weak convergence of probability measures. No big deal if you don't know what that means, since it is exactly what you would expect.
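As a sanity check, consistency in the coin example is easy to watch numerically. A minimal sketch (the discretized prior, the true parameter $0.73$, and the checkpoints are my illustrative choices):

```python
import random

random.seed(4)
grid = [i / 100 for i in range(101)]        # discretized uniform prior on [0,1]
post = [1.0 / len(grid)] * len(grid)
true_theta = 0.73

for n in range(1, 1001):
    # Toss the true coin and update the posterior over the parameter grid.
    success = random.random() < true_theta
    weights = [p * (th if success else 1 - th) for p, th in zip(post, grid)]
    total = sum(weights)
    post = [w / total for w in weights]
    if n in (10, 100, 1000):
        near = sum(p for p, th in zip(post, grid) if abs(th - true_theta) < 0.05)
        print(f"after {n} tosses: posterior mass within 0.05 of truth = {near:.3f}")
```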
So, we have a notion of learning, and a seminal theorem of L.J. Doob (more on that below) implies that the ergodic decomposition of every stationary process is consistent. While this is not something that you will read in most textbooks (more on that below too), it is still well known. Why do Nabil and I dig further into the issue of learnability of the ergodic decomposition? Two reasons. First, one has to write papers. Second, there is something unsatisfactory about the concept of consistency as a formalization of learning. To see why, consider the belief that outcomes are i.i.d. with probability $1/2$ of success. This belief is ergodic, so from the perspective of the ergodic decomposition the agent `knows the process' and there is nothing else to learn. But let's look at Dirac's decomposition instead of the ergodic decomposition. Then the parameter space equals the space of all realizations. Suppose the true parameter (=realization) is $\omega$; then after observing the first $n$ outcomes of the process the agent's posterior belief about the parameter is concentrated on all $\omega'$ that agree with $\omega$ on the first $n$ coordinates. These posterior beliefs converge to $\delta_\omega$, so that Dirac's decomposition is also consistent! We may say that we learn the parameter, but "learning the parameter" in this environment is just recording the past. The agent does not gain any new insight about the future of the process from learning the parameter.
In my next post I will talk about other notions of learning, originating in a seminal paper of Blackwell and Dubins, which capture the idea that an agent who learns a parameter can make predictions as if he knew the parameter. Let me also say here that this post and the following ones are much influenced by a paper of Jackson, Kalai, and Smorodinsky. I will talk more about that paper in another post.
For the rest of this post I am going to make some comments about Bayesian consistency which, though again standard, I don't usually see in textbooks. In particular, I don't know of a reference for the version of Doob's Theorem which I give below, so if any reader can give me such a reference it will be helpful.
First, you may wonder whether every decomposition is consistent. The answer is no. For a trivial example, take a situation in which the beliefs $\mu_\theta$ are the same for every $\theta$. More generally, trouble arises when the realization does not pin down the parameter. Formally, let us say that a function $f:A^{\mathbb{N}}\to\Theta$ pins down or identifies the parameter if $f(\omega)=\theta$ for $\mu_\theta$-almost every realization $\omega$, for $\lambda$-almost every $\theta$. If such an $f$ exists then the decomposition is identifiable.
We have the following:
Theorem 2 (Doob's Theorem) A decomposition is identifiable if and only if it is consistent.
The `if' part follows immediately from the definitions. The `only if' part is deep, but not difficult: it follows immediately from the martingale convergence theorem. Indeed, Doob's Theorem is usually cited as the first application of martingale theory.
Statisticians rarely work with this abstract formulation of decomposition which I use. For this reason, the theorem is usually formulated only for the case that $\Theta=[0,1]$ and $\mu_\theta$ is i.i.d. with parameter $\theta$. In this case the fact that the decomposition is identifiable follows from the strong law of large numbers. Doob's Theorem then implies the standard consistency of the Bayesian estimator of the parameter $\theta$.
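In that classical case the identifying function can even be written down explicitly: by the strong law of large numbers, the empirical frequency of successes converges $\mu_\theta$-almost surely to $\theta$, so one may take
$$f(a_0,a_1,\dots)=\lim_{n\to\infty}\frac{\#\{0\le i<n : a_i=1\}}{n},$$
defined arbitrarily on the null set of realizations where the limit does not exist.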
When I give a presentation about expert testing, there is often a moment at which it dawns for the first time on somebody in the audience that I am not assuming that the processes are stationary or i.i.d. This is understandable. In most modeling sciences and in statistics, stationarity is a natural assumption about a stochastic process, and it is often made without being stated. In fact most processes one comes across are stationary or some derivative of a stationary process (think of white noise, i.i.d. sampling, or Markov chains in their steady state). On the other hand, most game theorists and micro-economists who work with uncertainty don't know what a stationary process is, even if they have heard the term (this is a good time for you to pause and ask yourself whether you know what a stationary process is). So a couple of introductory words about stationary processes is a good starting point from which to promote my paper with Nabil.
First, a definition: a stationary process is a sequence $\zeta_0,\zeta_1,\zeta_2,\dots$ of random variables such that the joint distribution of $(\zeta_n,\zeta_{n+1},\dots,\zeta_{n+k})$ is the same for all $n$-s. More explicitly, suppose that the variables assume values in some finite set $A$ of outcomes. Stationarity means that for every $a_0,\dots,a_k\in A$, the probability $\mathbb{P}(\zeta_n=a_0,\zeta_{n+1}=a_1,\dots,\zeta_{n+k}=a_k)$ is independent of $n$. As usual, one can talk in the language of random variables or in the language of distributions, which we Bayesianists also call beliefs. A belief $\mu\in\Delta(A^{\mathbb{N}})$ about the infinite future is stationary if it is the distribution of a stationary process.
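For a concrete check of the definition, here is a minimal sketch with a two-state Markov chain started from its stationary distribution (the numbers are my illustrative choices): the probability of any window of outcomes does not depend on when the window starts.

```python
from itertools import product

# Two-state Markov chain started from its stationary distribution.
P = [[0.9, 0.1],        # P[s][t] = P(next state = t | current state = s)
     [0.4, 0.6]]
pi = [0.8, 0.2]         # stationary distribution: pi P = pi

def prob_window(start, word):
    """P(zeta_start = word[0], ..., zeta_{start+k} = word[k])."""
    # Propagate the initial distribution `start` steps; since pi is
    # stationary it stays put, which is exactly what we want to verify.
    dist = pi[:]
    for _ in range(start):
        dist = [sum(dist[s] * P[s][t] for s in range(2)) for t in range(2)]
    p = dist[word[0]]
    for a, b in zip(word, word[1:]):
        p *= P[a][b]
    return p

for word in product(range(2), repeat=3):
    vals = [prob_window(n, word) for n in (0, 5, 17)]
    assert max(vals) - min(vals) < 1e-12   # shift-invariant, up to rounding
    print(word, round(vals[0], 6))
```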
Stationarity means that Bob, who starts observing the process at day $0$, does not view this specific day as having any cosmic significance. When Alice arrives two weeks later, at day $14$, and starts observing the process, she has the same belief about her future as Bob had when he first arrived (note that Bob's view at day $14$ about what comes ahead might be different from Alice's, since he has learned something in the meantime; more on that later). In other words, each agent can denote by $0$ the first day on which they start observing the process, but there is nothing in the process itself that day $0$ corresponds to. In fact, when talking about stationary processes it will clear our thinking if we think of them as having an infinite past and an infinite future $\dots,\zeta_{-2},\zeta_{-1},\zeta_0,\zeta_1,\zeta_2,\dots$. We just happen to pop up at day $0$.
The first example of a stationary process is an i.i.d. process, such as the outcomes of repeated tossing of a coin with some probability $\theta$ of success. If the probability of success is unknown then a Bayesian agent must have some prior $\lambda$ about $\theta$: the agent believes that $\theta$ is randomized according to $\lambda$ and then the outcomes are i.i.d. conditioned on $\theta$. A famous theorem of de Finetti (wikipedia) characterizes all beliefs that are `mixtures of i.i.d.' in this sense. All these beliefs are stationary.
Another example of a stationary process is a Markov process in its steady state. Again, we can generalize to situations in which the transition matrix is not known and one has some belief about it. Such situations are rather natural, but I don't think there is a nice characterization of the processes that are mixtures of Markov processes in this sense (that is, I don't know of a de Finetti theorem for Markov processes). A still more general example is a Markov process with some finite memory, for example when the outcome today depends on the history only through the outcomes of the last two days.
As an example of a stationary process which is not a Markov process of any finite memory, consider a hidden Markov model, according to which the outcome on every day is a function of an underlying, unobserved Markov process. If the hidden process is stationary then so is the observed process. This is an important property of stationary processes, which is obvious from the definition:
Theorem 1 Let $\zeta_0,\zeta_1,\dots$ be a stationary process with values in some finite set $B$. Then the process $f(\zeta_0),f(\zeta_1),\dots$ is stationary for every function $f:B\to A$.
As can be seen in all these examples, when one lives in a stationary environment one has some (possibly degenerate) uncertainty about the parameters of the process. For example, we have some uncertainty about the parameter of the coin, or the Markov chain, or the hidden Markov process. I still haven't defined, however, what I mean by the parameters of the process; what lurks behind this is the ergodic decomposition theorem, which is an analogue of de Finetti's theorem for stationary processes. I will talk about it in my next post. For now, let me say a word about the implications of uncertainty about parameters in economic modeling, which may account in part for the relative rareness of stationary processes in microeconomics (I will give another reason for that misfortune later):
Let Craig be a rational agent (=Bayesian expected utility maximizer) who lives in a stationary environment in which a coin is tossed every day. Craig has some uncertainty over the parameter of the coin, represented by a belief $\lambda$. Every day, before observing the outcome of the coin, Craig takes an action. Craig's payoff at every day depends on the action he took, the outcome of the coin, and possibly some other random objects which follow a stationary process observed by Craig. We observe the sequence of Craig's actions. This process is not, in general, a stationary process. The reason is that Craig's actions are functions of his posterior belief about the parameter of the coin, and this posterior belief does not follow a stationary process: as time goes by, Craig learns the parameter of the coin. His behavior at day $0$, when he doesn't know the parameter, is typically different from his behavior at a later day, when he already has a good idea about the parameter.
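A minimal sketch of Craig's non-stationarity (the action rule, the uniform prior, and the dates are my illustrative choices): take Craig's action to be his posterior mean for the parameter. At day $0$ the action is deterministic, while at later days its distribution spreads out toward the prior distribution of the parameter, so the action process cannot be stationary.

```python
import random

random.seed(5)

def actions_at(day, runs=2000):
    """Sample Craig's day-`day` action (posterior mean under a uniform
    prior, which equals (heads + 1) / (day + 2)) over many replications."""
    samples = []
    for _ in range(runs):
        theta = random.random()            # nature draws the parameter
        heads = sum(random.random() < theta for _ in range(day))
        samples.append((heads + 1) / (day + 2))
    return samples

for day in (0, 1, 100):
    s = actions_at(day)
    var = sum((x - 0.5) ** 2 for x in s) / len(s)
    print(f"day {day}: variance of Craig's action = {var:.4f}")
```

The variance of the action grows with the date, so the marginal distribution of the action depends on the calendar day, which is exactly what stationarity forbids.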
I said earlier that in a stationary environment, the point in time which we denote by $0$ does not correspond to anything about the process itself but only reflects the point in time at which we start observing the process. In this example this is indeed the case for Craig, who starts observing the coin process at time $0$. It is not true for us. Our subject matter is not the coin, but Craig. And time $0$ has a special meaning for Craig. Bottom line: rational agents in a stationary environment will typically not behave in a stationary way.
To be continued…