When I give a presentation about expert testing, there is often a moment when it dawns for the first time on somebody in the audience that I am not assuming the processes are stationary or i.i.d. This is understandable. In most modeling sciences and in statistics, stationarity is a natural assumption about a stochastic process and is often made without being stated. In fact, most processes one comes across are stationary or derived from a stationary process (think white noise, i.i.d. sampling, or Markov chains in their steady state). On the other hand, most game theorists and micro-economists who work with uncertainty don't know what a stationary process is, even if they have heard the term. (This is a good time to pause and ask yourself whether you know what a stationary process is.) So a couple of introductory words about stationary processes is a good starting point to promote my paper with Nabil.

First, a definition: A stationary process is a sequence ${\zeta_0,\zeta_1,\dots}$ of random variables such that the joint distribution of ${(\zeta_n,\zeta_{n+1},\dots)}$ is the same for every ${n}$. More explicitly, suppose that the variables assume values in some finite set ${A}$ of outcomes. Stationarity means that for every ${a_0,\dots,a_k\in A}$, the probability ${\mathop{\mathbb P}(\zeta_n=a_0,\dots,\zeta_{n+k}=a_k)}$ is independent of ${n}$. As usual, one can talk in the language of random variables or in the language of distributions, which we Bayesianists also call beliefs. A belief ${\mu\in\Delta(A^\mathbb{N})}$ about the infinite future is stationary if it is the distribution of a stationary process.
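The definition can be checked empirically for a simple stationary process. Here is a minimal Monte Carlo sketch (the 30% coin and the specific days 0 and 14 are my own illustrative choices, not anything from the theory): for an i.i.d. coin, the estimated probability of a two-day window ${(\zeta_n,\zeta_{n+1})}$ should not depend on ${n}$.

```python
import random
from collections import Counter

random.seed(0)

# Simulate an i.i.d. coin (the simplest stationary process) and estimate
# P(zeta_n = a0, zeta_{n+1} = a1) at two different days n.
# Stationarity predicts the two estimates agree, up to sampling noise.
N_RUNS = 100_000
counts = {0: Counter(), 14: Counter()}
for _ in range(N_RUNS):
    path = [random.random() < 0.3 for _ in range(16)]  # a 30% coin
    for n in (0, 14):
        counts[n][(path[n], path[n + 1])] += 1

for window in [(True, True), (True, False)]:
    p0 = counts[0][window] / N_RUNS
    p14 = counts[14][window] / N_RUNS
    assert abs(p0 - p14) < 0.01  # same joint law at day 0 and day 14
```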

Stationarity means that Bob, who starts observing the process at day ${n=0}$, does not view this specific day as having any cosmic significance. When Alice arrives two weeks later, at day ${n=14}$, and starts observing the process, she has the same belief about her future as Bob had when he first arrived. (Note that Bob's view at day ${n=14}$ about what comes ahead might differ from Alice's, since he has learned something in the meantime; more on that later.) In other words, each agent can denote by ${0}$ the first day on which they start observing the process, but there is nothing in the process itself to which day ${0}$ corresponds. In fact, when talking about stationary processes it clears our thinking to think of them as having an infinite past and an infinite future ${\dots,\zeta_{-2},\zeta_{-1},\zeta_0,\zeta_1,\zeta_2,\dots}$. We just happen to pop up at day ${0}$.

The first example of a stationary process is an i.i.d. process, such as the outcomes of repeatedly tossing a coin with some probability ${\theta}$ of success. If the probability of success is unknown, then a Bayesian agent must have some prior ${\lambda\in \Delta([0,1])}$ about ${\theta}$: the agent believes that ${\theta}$ is randomized according to ${\lambda}$ and that the outcomes are i.i.d. conditioned on ${\theta}$. A famous theorem of De-Finetti (wikipedia) characterizes all beliefs that are `mixtures of i.i.d.' in this sense. All these beliefs are stationary.
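A mixture of i.i.d. coins can be simulated directly. In this sketch I take the prior ${\lambda}$ to be uniform on ${[0,1]}$ (my choice for illustration): draw ${\theta}$ once, then toss an i.i.d. ${\theta}$-coin. Stationarity of the mixture shows up in the fact that the unconditional probability of success is the same on every day, namely ${\mathbb E[\theta]=1/2}$.

```python
import random

random.seed(1)

# Mixture of i.i.d. coins: draw theta from a uniform prior (standing in
# for lambda), then toss an i.i.d. theta-coin. Unconditionally, the law
# of each day's outcome is the same: P(success on day n) = E[theta] = 1/2.
N_RUNS, HORIZON = 100_000, 20
success = [0] * HORIZON
for _ in range(N_RUNS):
    theta = random.random()            # theta ~ Uniform[0, 1]
    for n in range(HORIZON):
        if random.random() < theta:    # outcomes i.i.d. given theta
            success[n] += 1

freqs = [s / N_RUNS for s in success]
assert all(abs(f - 0.5) < 0.01 for f in freqs)  # same marginal every day
```

Of course the days are not independent unconditionally: an early string of successes makes later successes more likely, which is exactly the exchangeable correlation De-Finetti's theorem is about.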

Another example of a stationary process is a Markov process in its steady state. Again, we can generalize to situations in which the transition matrix is not known and one has some belief about it. Such situations are rather natural, but I don't think there is a nice characterization of the processes that are mixtures of Markov processes in this sense (that is, I don't know of a De-Finetti Theorem for Markov processes). A still more general example is a Markov process with some finite memory, for example when today's outcome depends on the history only through the outcomes of the last two days.
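For a concrete steady state, here is a two-state chain (the transition matrix is my own illustrative choice). The steady state ${\pi}$ solves ${\pi = \pi P}$; for two states it has a closed form, and starting the chain from ${\pi}$ makes the resulting process stationary.

```python
# Steady state of a two-state Markov chain: pi solves pi = pi P.
# Starting the chain from pi makes the process stationary.
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = [P[1][0] / (P[0][1] + P[1][0]),   # closed form for two states
      P[0][1] / (P[0][1] + P[1][0])]

# Check invariance: the distribution after one step of P is pi again.
next_dist = [pi[0] * P[0][0] + pi[1] * P[1][0],
             pi[0] * P[0][1] + pi[1] * P[1][1]]
assert all(abs(a - b) < 1e-12 for a, b in zip(pi, next_dist))
```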

As an example of a stationary process which is not a Markov process of any finite memory, consider a hidden Markov model, in which the outcome on every day is a function of an underlying, unobserved Markov process. If the hidden process is stationary then so is the observed process. This is an important property of stationary processes, which is obvious from the definition:

Theorem 1 Let ${\dots,\zeta_{-2},\zeta_{-1},\zeta_0,\zeta_1,\dots}$ be a stationary process with values in some finite set ${H}$. Then the process ${\dots,f(\zeta_{-2}),f(\zeta_{-1}),f(\zeta_0),f(\zeta_1),\dots}$ is stationary for every function ${f:H\rightarrow A}$.
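Theorem 1 can be seen in a simulation. In this sketch (chain, steady state, and observation map all chosen by me for illustration) a hidden two-state chain starts from its steady state and we observe only ${f(\zeta_n)}$; the marginal law of the observation is then the same on every day.

```python
import random

random.seed(2)

# Theorem 1 in action: run a hidden two-state Markov chain from its
# steady state and observe only f(state). The observed process inherits
# stationarity: the marginal law of f(zeta_n) is the same for every n.
P = {0: [0.9, 0.1], 1: [0.4, 0.6]}    # transition probabilities
pi = [0.8, 0.2]                       # steady state of P (pi = pi P)
f = lambda s: "H" if s == 0 else "T"  # the observation map f: H -> A

N_RUNS, HORIZON = 100_000, 10
heads = [0] * HORIZON
for _ in range(N_RUNS):
    state = 0 if random.random() < pi[0] else 1   # zeta_0 ~ pi
    for n in range(HORIZON):
        if f(state) == "H":
            heads[n] += 1
        state = 0 if random.random() < P[state][0] else 1

freqs = [h / N_RUNS for h in heads]
assert all(abs(fr - 0.8) < 0.01 for fr in freqs)  # P(H) = pi[0] each day
```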

As can be seen in all these examples, one who lives in a stationary environment has some (possibly degenerate) uncertainty about the parameters of the process. For example, we have some uncertainty about the parameter of the coin, the Markov chain, or the hidden Markov process. However, I still haven't defined what I mean by the parameters of the process; what lurks behind this is the ergodic decomposition theorem, which is an analogue of De-Finetti's Theorem for stationary processes. I will talk about it in my next post. For now, let me say a word about the implications of uncertainty about parameters for economic modeling, which may account in part for the relative rarity of stationary processes in microeconomics (I will give another reason for that misfortune later):

Let Craig be a rational agent (= Bayesian expected utility maximizer) who lives in a stationary environment in which a coin is tossed every day. Craig has some uncertainty about the parameter of the coin, represented by a belief ${\lambda\in\Delta([0,1])}$. Every day, before observing the outcome of the coin, Craig takes an action. Craig's payoff each day depends on the action he took, the outcome of the coin, and possibly some other random objects which follow a stationary process observed by Craig. We observe the sequence of Craig's actions. This is not, in general, a stationary process. The reason is that Craig's actions are functions of his posterior belief about the parameter of the coin, and this posterior belief does not follow a stationary process: as time goes by, Craig learns the parameter of the coin. His behavior at day ${0}$, when he doesn't know the parameter, is typically different from his behavior at day ${14}$, when he already has a good idea about it.
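The non-stationarity of Craig's actions can be made concrete. In this sketch I choose a specific prior and a specific (hypothetical) decision rule, neither of which is in the argument above: ${\lambda}$ is uniform, so his posterior mean after seeing ${k}$ heads in ${n}$ tosses is ${(k+1)/(n+2)}$, and suppose he bets "heads" exactly when that mean exceeds ${1/2}$. At day ${0}$ his action is deterministic; by day ${14}$ it tracks the realized coin, so the law of his action depends on the day.

```python
import random

random.seed(3)

# Craig's action process is not stationary. With a uniform prior over
# theta, the posterior mean after k heads in n tosses is (k+1)/(n+2).
# Hypothetical rule: bet "heads" iff the posterior mean exceeds 1/2.
N_RUNS = 50_000
bets_heads = [0, 0]                     # count of "heads" bets at days 0, 14
for _ in range(N_RUNS):
    theta = random.random()             # true coin, theta ~ Uniform[0, 1]
    k = sum(random.random() < theta for _ in range(14))
    bets_heads[0] += (0 + 1) / (0 + 2) > 0.5    # day 0: mean is exactly 1/2
    bets_heads[1] += (k + 1) / (14 + 2) > 0.5   # day 14: mean tracks theta

assert bets_heads[0] == 0               # day-0 action is deterministic
assert 0.3 < bets_heads[1] / N_RUNS < 0.7   # day-14 action is random
```

(With a uniform prior, ${k}$ is uniform on ${\{0,\dots,14\}}$, so the day-14 bet is "heads" with probability ${7/15}$: a different action distribution than at day ${0}$, which is the whole point.)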

I said earlier that in a stationary environment, the point in time which we denote by ${0}$ does not correspond to anything about the process itself but only reflects the point in time at which we start observing the process. In this example this is indeed the case for Craig, who starts observing the coin process at time ${0}$. It is not true for us. Our subject matter is not the coin, but Craig. And time ${0}$ has a special meaning for Craig. Bottom line: rational agents in a stationary environment will typically not behave in a stationary way.

To be continued…