In the last few posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter', and `learning to make predictions about the future as if you knew the parameter'. The former seems to imply the latter almost by definition, but this is not so.
Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some $\mu$ then your prediction about the outcome tomorrow will be the expectation $\int \theta\,\mu(\mathrm{d}\theta)$ of $\mu$. But in a more general stationary environment, your prediction about the outcome tomorrow depends on your current belief about the parameter and also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix, then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
The formalization of the idea of `learning to make predictions' goes through the notion of merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins' theorem to prove that rational players will end up playing approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird' things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn't follow my previous posts, I hope the following example will be intriguing for its own sake.
Every day there is a probability $1/2$ for eruption of war (W). If no war erupts then the outcome is either bad economy (B) or good economy (G), and it is a function of the number of peaceful days since the last war. The function from the number of peaceful days to the outcome is an unknown parameter of the process. Thus, a parameter is a function $f:\{1,2,\dots\}\rightarrow\{B,G\}$, where $f(n)$ is the outcome on the $n$-th consecutive peaceful day after a war. I am going to compare the predictions about the future made by two agents: Roxana, who knows $f$, and Ursula, who faces some uncertainty about $f$, represented by a uniform belief over the set of all parameters. Neither Roxana nor Ursula knows the future outcomes, and since both of them are rational decision makers, they both use Bayes' rule to form beliefs about the unknown future given what they have seen in the past.
Consider first Roxana. In the terminology I introduced in previous posts, she faces no structural uncertainty. After a period of $n$ consecutive peaceful days, Roxana believes that with probability $1/2$ the outcome tomorrow will be W and with probability $1/2$ the outcome tomorrow will be $f(n+1)$.
Now consider Ursula. While she does not initially know $f$, as time goes by she learns it. What do I mean here by learning $f$? Well, suppose Ursula starts observing the outcomes and she sees G,B,W,B,G,…. From this information Ursula deduces that $f(1)=B$, so that if a peaceful day follows a war then it has a bad economy. Next time a war pops up Ursula will know to make a prediction about the outcome tomorrow which is as accurate as Roxana's prediction. Similarly Ursula can deduce that $f(2)=G$. This way Ursula gradually deduces the values of $f$ while she observes the process. However, and this is the punch line: for every $n$ there will be a time when Ursula observes $n$ consecutive peaceful days for the first time, and on this day her prediction about the next outcome will be $1/2$ for war, $1/4$ for good economy and $1/4$ for bad economy. Thus there will always be infinitely many occasions on which Ursula's prediction differs from Roxana's.
So, Ursula does learn the parameter in the sense that she gradually deduces more and more values of $f$. However, because at every point in time she may require a different value of $f$ (this is the difference between the stationary environment and the i.i.d. environment!), there will be infinitely many times at which she has not yet been able to deduce the value of the parameter which she needs in order to make a prediction about the outcome tomorrow.
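To make the punch line concrete, here is a minimal simulation sketch (my own illustration, not from the paper; it hard-codes the war probability $1/2$ used in the example and draws the parameter $f$ at random). Ursula's prediction differs from Roxana's exactly on the days when the current peaceful streak is about to reach a record length:

```python
import random

random.seed(1)

f = {}  # the true parameter: f[n] = outcome ('B' or 'G') on the n-th peaceful day

def true_f(n):
    # Draw the value of f lazily, uniformly, the first time it is needed.
    if n not in f:
        f[n] = random.choice(['B', 'G'])
    return f[n]

peace = 0        # length of the current peaceful streak
deduced = set()  # values n for which Ursula has already deduced f(n)
mismatch_days = []

T = 100_000
for day in range(T):
    # Roxana predicts: W w.p. 1/2, f(peace+1) w.p. 1/2.
    # Ursula agrees iff she has deduced f(peace+1); otherwise she
    # predicts W w.p. 1/2, G w.p. 1/4, B w.p. 1/4.
    if peace + 1 not in deduced:
        mismatch_days.append(day)
    if random.random() < 0.5:
        peace = 0                  # war erupts, streak resets
    else:
        true_f(peace + 1)          # peaceful day: outcome is revealed,
        deduced.add(peace + 1)     # so Ursula now knows f(peace+1)
        peace += 1

print(f"{len(mismatch_days)} mismatch days out of {T}")
print("first:", mismatch_days[:8], "... last:", mismatch_days[-3:])
```

On a run of this length only a handful of mismatch days appear, and the gaps between them grow roughly exponentially.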
You may notice that Ursula does succeed in making predictions most of the time. In fact, the occasions on which she fails, those on which she observes a longer block of peaceful days than ever before, become more and more rare. Indeed, Nabil and I formalize this idea and show that this is the case in every stationary environment with structural uncertainty: the observer makes predictions approximately as if he knew the parameter on almost every day. For that, we use a weak notion of merging which was suggested by Lehrer and Smorodinsky. If you are interested then this is a good time to look at our paper.
Finally, the example given above is our adaptation of an example that first appeared in a paper by Boris Yakovlevich Ryabko. Ryabko's paper is part of a relatively large literature about non-Bayesian predictions in stationary environments. I will explain the relationship between that literature and our paper in another post.
A Bayesian agent is observing a sequence of outcomes in $\{0,1\}$. The agent does not know the outcomes in advance, so he forms some belief $\mu$ over sequences of outcomes. Suppose that the agent believes that the number $k$ of successes in $n$ consecutive outcomes is distributed uniformly in $\{0,1,\dots,n\}$ and that all configurations with $k$ successes are equally likely:

$\mu\left(x_1,\dots,x_n\right)=\frac{1}{n+1}\binom{n}{k}^{-1}$

for every $(x_1,\dots,x_n)\in\{0,1\}^n$, where $k=x_1+\dots+x_n$.
You have seen this belief already, though maybe not in this form. It is the belief of an agent who tosses an i.i.d. coin and has some uncertainty over the parameter of the coin, given by a uniform distribution over $[0,1]$.
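To see why, note that under a uniform prior on the parameter, the probability of any particular sequence with $k$ successes in $n$ tosses is $\int_0^1\theta^k(1-\theta)^{n-k}\,\mathrm{d}\theta$, a Beta integral which equals exactly $\frac{1}{n+1}\binom{n}{k}^{-1}$. A quick numerical sketch of this identity (my own check, using scipy's integrator):

```python
from math import comb
from scipy.integrate import quad

for n in range(1, 8):
    for k in range(n + 1):
        # Probability of a fixed sequence with k successes under the uniform mixture.
        mixture, _ = quad(lambda t: t**k * (1 - t)**(n - k), 0, 1)
        closed_form = 1 / ((n + 1) * comb(n, k))
        assert abs(mixture - closed_form) < 1e-12
print("mixture probabilities match 1/((n+1)*C(n,k)) for all checked n, k")
```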
In this post I am gonna make a fuss about the fact that as time goes by the agent learns the parameter of the coin. The word `learning' has several legitimate formalizations, and today I am talking about the oldest and probably the most important one: consistency of posterior beliefs. My focus is somewhat different from that of textbooks because 1) as in the first paragraph, my starting point is the belief about outcome sequences, before there are any parameters, and 2) I emphasize some aspects of consistency which are unsatisfactory in the sense that they don't really capture our intuition about learning. Of course this is all part of the grand marketing campaign for my paper with Nabil, which uses a different notion of learning, so this discussion of consistency is a bit of a sidetrack. But I have already come across some VIP who I suspect was unaware of the distinction between different formulations of learning, and it wasn't easy to treat his cocky blabbering in a respectful way. So it's better to start with the basics.
Let $A$ be a finite set of outcomes. Let $\mu$ be a belief over the set $A^\infty$ of infinite sequences of outcomes, also called realizations. A decomposition of $\mu$ is given by a set $\Theta$ of parameters, a belief $\lambda$ over $\Theta$, and, for every $\theta\in\Theta$, a belief $\mu_\theta$ over $A^\infty$ such that $\mu=\int_\Theta \mu_\theta\,\lambda(\mathrm{d}\theta)$. The integral in the definition means that the agent can think about the process as a two-stage randomization: first a parameter $\theta$ is drawn according to $\lambda$, and then a realization $\omega$ is drawn according to $\mu_\theta$. Thus, a decomposition captures a certain way in which a Bayesian agent arranges his belief. Of course every belief $\mu$ admits many decompositions. The extreme decompositions are:
- The Trivial Decomposition. Take $\Theta=\{\bar\theta\}$ to be a singleton and $\mu_{\bar\theta}=\mu$.
- Dirac's Decomposition. Take $\Theta=A^\infty$ and $\lambda=\mu$. A “parameter” in this case is a measure $\delta_\omega$ that assigns probability 1 to the realization $\omega$.
Not all decompositions are equally exciting. We are looking for decompositions in which the parameter captures some `fundamental property' of the process. The two extreme cases mentioned above are usually unsatisfactory in this sense. In Dirac's decomposition, there are as many parameters as there are realizations; parameters simply copy realizations. In the trivial decomposition, there is a single parameter, which therefore cannot discriminate between different interesting properties. For stationary processes, there is a natural decomposition in which the parameters do distinguish between fundamental properties of the process. This is the ergodic decomposition, according to which the parameters are the ergodic beliefs. Recall that in this decomposition, a parameter captures the empirical distribution of blocks of outcomes in the infinite realization.
So what about learning? While observing the process, a Bayesian agent will update his belief about the parameter. We denote by $\lambda\left(\cdot\,|\,x_1,\dots,x_n\right)$ the posterior belief about the parameter $\theta$ at the beginning of period $n+1$, after observing the outcome sequence $x_1,\dots,x_n$. The notion of learning I want to talk about in this post is that this belief converges to a belief that is concentrated on the true parameter. The example you should have in mind is the coin toss example I started with: while observing the outcomes of the coin the agent becomes more and more certain about the true parameter of the coin, which means his posterior belief becomes concentrated around a belief that gives probability $1$ to the true parameter.
Definition 1 A decomposition of $\mu$ is consistent if for $\lambda$-almost every $\theta$ it holds that

$\lambda\left(\cdot\,|\,x_1,\dots,x_n\right)\rightarrow\delta_\theta$ as $n\rightarrow\infty$

for $\mu_\theta$-almost every realization $(x_1,x_2,\dots)$.
In this definition, $\delta_\theta$ is the Dirac atomic measure on $\theta$ and the convergence is weak convergence of probability measures. No big deal if you don't know what that means, since it is exactly what you expect.
So, we have a notion of learning, and a seminal theorem of L.J. Doob (more on that below) implies that the ergodic decomposition of every stationary process is consistent. While this is not something that you will read in most textbooks (more on that below too), it is still well known. Why do Nabil and I dig further into the issue of learnability of the ergodic decomposition? Two reasons. First, one has to write papers. Second, there is something unsatisfactory about the concept of consistency as a formalization of learning. To see why, consider the belief that outcomes are i.i.d. with probability $1/2$ of success. This belief is ergodic, so from the perspective of the ergodic decomposition the agent `knows the process' and there is nothing else to learn. But let's look at Dirac's decomposition instead of the ergodic decomposition. Then the parameter space equals the space of all realizations. Suppose the true parameter (=realization) is $\omega$; then after observing the first $n$ outcomes of the process the agent's posterior belief about the parameter is concentrated on all $\omega'$ that agree with $\omega$ on the first $n$ coordinates. These posterior beliefs converge to $\delta_\omega$, so that Dirac's decomposition is also consistent! We may say that we learn the parameter, but “learning the parameter” in this environment is just recording the past. The agent does not gain any new insight about the future of the process from learning the parameter.
In my next post I will talk about other notions of learning, originating in a seminal paper of Blackwell and Dubins, which capture the idea that an agent who learns a parameter can make predictions as if he knew the parameter. Let me also say here that this post and the following ones are much influenced by a paper of Jackson, Kalai, and Smorodinsky. I will say more about that paper in another post.
For the rest of this post I am going to make some comments about Bayesian consistency which, though again standard, I don't usually see in textbooks. In particular, I don't know of a reference for the version of Doob's Theorem which I give below, so if any reader can point me to such a reference it will be helpful.
First, you may wonder whether every decomposition is consistent. The answer is no. For a trivial example, take a situation where the beliefs $\mu_\theta$ are the same for every $\theta$. More generally, trouble arises when the realization does not pin down the parameter. Formally, let us say that a function $f:A^\infty\rightarrow\Theta$ pins down or identifies the parameter if for $\lambda$-almost every $\theta$ it holds that $f(\omega)=\theta$ for $\mu_\theta$-almost every realization $\omega$. If such an $f$ exists then the decomposition is identifiable.
We have the following
Theorem 2 (Doob’s Theorem) A decomposition is identifiable if and only if it is consistent.
The `if' part follows immediately from the definitions. The `only if' part is deep, but not difficult: it follows immediately from the martingale convergence theorem. Indeed, Doob's Theorem is usually cited as the first application of martingale theory.
Statisticians rarely work with the abstract formulation of decomposition which I use here. For this reason, the theorem is usually formulated only for the case that $\Theta=[0,1]$ and $\mu_\theta$ is i.i.d. with parameter $\theta$. In this case the fact that the decomposition is identifiable follows from the strong law of large numbers: the function $f$ that identifies the parameter maps a realization to its long-run frequency of successes. Doob's Theorem then implies the standard consistency of the Bayesian estimator of the parameter $\theta$.
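In code, the identifying function $f$ of this classical case is just the long-run frequency of successes. A sketch (my own illustration): draw $\theta$ from the uniform prior, draw a long realization from $\mu_\theta$, and check that the empirical frequency recovers $\theta$, as the strong law of large numbers promises.

```python
import random

random.seed(2)
n = 200_000
for _ in range(5):
    theta = random.random()   # parameter drawn from the uniform prior
    # f(omega): empirical frequency of successes along the realization
    freq = sum(random.random() < theta for _ in range(n)) / n
    print(f"theta = {theta:.4f}   f(omega) = {freq:.4f}")
```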
Four agents are observing infinite streams of outcomes in $\{0,1\}$. None of them knows the future outcomes, and as good Bayesianists they represent their beliefs about unknowns as probability distributions:
- Agent 1 believes that outcomes are i.i.d. with probability $1/2$ of success.
- Agent 2 believes that outcomes are i.i.d. with probability $\theta$ of success. She does not know $\theta$; she believes that $\theta$ is either $2/3$ or $1/3$, and attaches probability $1/2$ to each possibility.
- Agent 3 believes that outcomes follow a Markov process: every day's outcome equals yesterday's outcome with probability $2/3$.
- Agent 4 believes that outcomes follow a Markov process: every day's outcome equals yesterday's outcome with probability $\alpha$. She does not know $\alpha$; her belief about $\alpha$ is the uniform distribution over $[0,1]$.
I denote by $\mu_1,\mu_2,\mu_3,\mu_4$ the agents' beliefs about future outcomes.
We have an intuition that Agents 2 and 4 are in a different situation from Agents 1 and 3, in the sense that they are uncertain about some fundamental properties of the stochastic process they are facing. I will say that they have `structural uncertainty'. The purpose of this post is to formalize this intuition. More explicitly, I am looking for a property of a belief $\mu$ over $\{0,1\}^\infty$ that will distinguish between beliefs that reflect some structural uncertainty and beliefs that don't. This property is ergodicity.
Definition 1 Let $X_1,X_2,\dots$ be a stationary process with values in some finite set $A$ of outcomes. The process is ergodic if for every block $(a_1,\dots,a_k)$ of outcomes it holds that

$\lim_{n\rightarrow\infty}\frac{\#\left\{0\le t<n \,:\, X_{t+1}=a_1,\dots,X_{t+k}=a_k\right\}}{n}=\mathbb{P}\left(X_1=a_1,\dots,X_k=a_k\right)$ almost surely.

A belief $\mu$ is ergodic if it is the distribution of an ergodic process.
Before I explain the definition let me write the ergodicity condition for the special case of the block $(a)$ for some $a\in A$ (this is a block of size 1):

$\lim_{n\rightarrow\infty}\frac{\#\left\{1\le t\le n \,:\, X_t=a\right\}}{n}=\mathbb{P}\left(X_1=a\right)$ almost surely. (1)
On the right side of (1) we have the (subjective) probability that on day $1$ we will see the outcome $a$. Because of stationarity, this is also the probability that we will see the outcome $a$ on every other day. On the left side of (1) we have no probabilities at all. What is written there is the long-run frequency of appearances of the outcome $a$ in the realized sequence. This frequency is objective and has nothing to do with our beliefs. Therefore, the probability that a Bayesian agent with an ergodic belief attaches to observing some outcome is a number that can be measured from the process: just observe it long enough and check the frequency with which this outcome appears. In a way, for ergodic processes the frequentist and subjective interpretations of probability coincide, but there are legitimate caveats to this statement, which I am not gonna delve into because my subject matter is not the meaning of probability. For my purpose it's enough that ergodicity captures the intuition we have about the four agents I started with: Agents 1 and 2 both give probability $1/2$ to success on each day. This means that if they are sold a lottery ticket that gives a prize if there is a success at day, say, 172, they both price this lottery ticket the same way. However, Agent 1 is certain that in the long run the frequency of successes will be $1/2$, while Agent 2 is certain that it will be either $2/3$ or $1/3$. In fancy words, $\mu_1$ is ergodic and $\mu_2$ is not.
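The distinction is easy to see numerically. A minimal sketch (my own illustration, using the values $1/2$, $2/3$, $1/3$ from the example above): sample long realizations from $\mu_1$ and from $\mu_2$ and record the empirical frequency of successes in each.

```python
import random

random.seed(3)

def empirical_frequency(theta, n=100_000):
    """Frequency of successes in n i.i.d. tosses with success probability theta."""
    return sum(random.random() < theta for _ in range(n)) / n

# Agent 1 (mu_1): i.i.d. 1/2. The long-run frequency is ~1/2 on every realization.
print("mu_1:", [round(empirical_frequency(1/2), 3) for _ in range(4)])

# Agent 2 (mu_2): nature first picks theta in {2/3, 1/3} with equal probability,
# then outcomes are i.i.d. theta. The frequency is ~2/3 on some realizations
# and ~1/3 on others, never ~1/2: mu_2 is not ergodic.
print("mu_2:", [round(empirical_frequency(random.choice([2/3, 1/3])), 3)
                for _ in range(4)])
```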
So, ergodic processes capture our intuition of `processes without structural uncertainty'. What about situations with uncertainty? What mathematical creature captures this uncertainty? Agent 2's uncertainty seems to be captured by some probability distribution over two ergodic processes, the process `i.i.d. $2/3$' and the process `i.i.d. $1/3$': Agent 2 is uncertain which of these processes she is facing. Agent 4's uncertainty is captured by some probability distribution over a continuum of Markov (ergodic) processes. This is a general phenomenon:
Theorem 2 (The ergodic decomposition theorem) Let $\mathcal{E}$ be the set of ergodic distributions over $\{0,1\}^\infty$. Then for every stationary belief $\mu$ there exists a unique distribution $\lambda$ over $\mathcal{E}$ such that $\mu=\int_{\mathcal{E}}\nu\,\lambda(\mathrm{d}\nu)$.
The probability distribution $\lambda$ captures uncertainty about the structure of the process. In the case that $\mu$ is an ergodic process, $\lambda$ is degenerate and there is no structural uncertainty.
Two words of caution: First, my definition of ergodic processes is not the one you will see in textbooks. The equivalence to the textbook definition is an immediate consequence of the so-called ergodic theorem, which is a generalization of the law of large numbers to ergodic processes. Second, my use of the word `uncertainty' is not universally accepted. The term traces back at least to Frank Knight, who made the distinction between risk or “measurable uncertainty” and what is now called “Knightian uncertainty”, which cannot be measured. Since Knight wrote in English and not in Mathematish I don't know what he meant, but modern decision theorists, mesmerized by the Ellsberg Paradox, usually interpret risk as a Bayesian situation and Knightian uncertainty, or “ambiguity”, as a situation which falls outside the Bayesian paradigm. So if I understand correctly, they would view the situations of the four agents mentioned above as situations of risk only, without uncertainty. The way in which I use `structural uncertainty' has been used in several theory papers. See this paper of Jonathan and Nabil, and this one, and the paper which I am advertising in these posts, about the disappearance of uncertainty over time. (I am sure there are more.)
To be continued…
When I give a presentation about expert testing there is often a moment in which it dawns for the first time on somebody in the audience that I am not assuming that the processes are stationary or i.i.d. This is understandable. In most modeling sciences and in statistics stationarity is a natural assumption about a stochastic process and is often made without being stated. In fact most processes one comes across are stationary or some derivation of a stationary process (think of white noise, or i.i.d. sampling, or Markov chains in their steady state). On the other hand, most game theorists and micro-economists who work with uncertainty don't know what a stationary process is, even if they have heard the term (this is a good time to pause and ask yourself if you know what a stationary process is). So a couple of introductory words about stationary processes are a good starting point for promoting my paper with Nabil.
First, a definition: A stationary process is a sequence $X_1,X_2,\dots$ of random variables such that the joint distribution of $\left(X_n,X_{n+1},\dots\right)$ is the same for all $n$-s. More explicitly, suppose that the variables assume values in some finite set $A$ of outcomes. Stationarity means that for every $a_1,\dots,a_k\in A$, the probability $\mathbb{P}\left(X_n=a_1,\dots,X_{n+k-1}=a_k\right)$ is independent of $n$. As usual, one can talk in the language of random variables or in the language of distributions, which we Bayesianists also call beliefs. A belief $\mu$ about the infinite future is stationary if it is the distribution of a stationary process.
Stationarity means that Bob, who starts observing the process at day $0$, does not view this specific day as having any cosmic significance. When Alice arrives two weeks later at day $14$ and starts observing the process, she has the same belief about her future as Bob had when he first arrived (note that Bob's view at day $14$ about what lies ahead might be different from Alice's, since he has learned something in the meantime; more on that later). In other words, each agent can denote by $0$ the first day on which they start observing the process, but there is nothing in the process itself that day $0$ corresponds to. In fact, when talking about stationary processes it will clear our thinking if we think of them as having an infinite past and an infinite future $\dots,X_{-1},X_0,X_1,X_2,\dots$. We just happen to pop up at day $0$.
The first example of a stationary process is an i.i.d. process, such as the outcomes of repeated tossing of a coin with some probability $\theta$ of success. If the probability of success is unknown then a Bayesian agent must have some prior $\lambda$ about $\theta$: the agent believes that $\theta$ is randomized according to $\lambda$ and then the outcomes are i.i.d. conditioned on $\theta$. A famous theorem of de Finetti (Wikipedia) characterizes all beliefs that are `mixtures of i.i.d.' in this sense. All these beliefs are stationary.
Another example of a stationary process is a Markov process in its steady state. Again, we can generalize to situations in which the transition matrix is not known and one has some belief about it. Such situations are rather natural, but I don't think there is a nice characterization of the processes that are mixtures of Markov processes in this sense (that is, I don't know of a de Finetti theorem for Markov processes). A still more general example is a Markov process with some finite memory, for example one in which the outcome today depends on the history only through the outcomes of the last two days.
As an example of a stationary process which is not a Markov process of any finite memory, consider a hidden Markov model, according to which the outcome at every day is a function of an underlying, unobserved Markov process. If the hidden process is stationary then so is the observed process. This is an important property of stationary processes, which is obvious from the definition:

Theorem 1 Let $X_1,X_2,\dots$ be a stationary process with values in some finite set $A$. Then for every set $B$ and every function $f:A\rightarrow B$, the process $f(X_1),f(X_2),\dots$ is stationary.
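Here is a sketch of both claims at once (my own illustration, not from the post): a three-state hidden chain observed through a binary function. By Theorem 1 the observed process is stationary, yet it is not Markov: the conditional probability of tomorrow's observation given today's changes once we also condition on yesterday's.

```python
import random
from collections import Counter

random.seed(5)

def hidden_path(T):
    """Hidden chain on {0,1,2}: stays put w.p. 1/3, advances cyclically w.p. 2/3.
    Its steady-state distribution is uniform, so the chain is stationary."""
    x = random.randrange(3)
    out = []
    for _ in range(T):
        out.append(x)
        if random.random() < 2/3:
            x = (x + 1) % 3
    return out

f = lambda x: int(x >= 1)   # we observe only Y_t = f(X_t)

counts = Counter()
for _ in range(100_000):
    y = tuple(f(x) for x in hidden_path(3))
    counts[y] += 1

# If (Y_t) were Markov, these two conditional probabilities would be equal.
p_after_01 = counts[(0, 1, 1)] / (counts[(0, 1, 0)] + counts[(0, 1, 1)])
p_after_11 = counts[(1, 1, 1)] / (counts[(1, 1, 0)] + counts[(1, 1, 1)])
print(f"P(Y3=1 | Y1=0, Y2=1) = {p_after_01:.3f}")   # exactly 1
print(f"P(Y3=1 | Y1=1, Y2=1) = {p_after_11:.3f}")   # about 1/2
```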
As can be seen in all these examples, when one lives in a stationary environment one has some (possibly degenerate) uncertainty about the parameters of the process. For example, we have some uncertainty about the parameter of the coin or the Markov chain or the hidden Markov process. I still haven't defined, however, what I mean by the parameters of the process; what lurks behind is the ergodic decomposition theorem, which is an analogue of de Finetti's theorem for stationary processes. I will talk about it in my next post. For now, let me say a word about the implications of uncertainty about parameters in economic modeling, which may account in part for the relative rareness of stationary processes in microeconomics (I will give another reason for that misfortune later):
Let Craig be a rational agent (=Bayesian expected utility maximizer) who lives in a stationary environment in which a coin is tossed every day. Craig has some uncertainty over the parameter of the coin, represented by a belief $\lambda$. Every day, before observing the outcome of the coin, Craig takes an action. Craig's payoff at every day depends on the action he took, the outcome of the coin, and possibly some other random objects which follow a stationary process observed by Craig. We observe the sequence of Craig's actions. This process is generally not a stationary process. The reason is that Craig's actions are functions of his posterior belief about the parameter of the coin, and this posterior belief does not follow a stationary process: as time goes by, Craig learns the parameter of the coin. His behavior at day $0$, when he doesn't know the parameter, is typically different from his behavior at day $100$, when he already has a good idea about the parameter.
I said earlier that in a stationary environment, the point in time which we denote by $0$ does not correspond to anything about the process itself, but only reflects the point in time at which we start observing the process. In this example this is indeed the case with Craig, who starts observing the coin process at time $0$. It is not true for us. Our subject matter is not the coin, but Craig. And time $0$ has a special meaning for Craig. Bottom line: rational agents in a stationary environment will typically not behave in a stationary way.
To be continued…