`At every day {n\in\mathbb{N}} a player takes an action’. This is the starting point of many models of repeated interaction. We let time run to infinity to reflect the fact that players don’t have in mind a fixed termination point for the game. We do, however, fix the starting point {n=0}, which I think is in many cases unnatural: by the time I realize I know the bartender in my local Starbucks and maybe I should start tipping, I have already lost count of the number of times I have been there. This is why I would like to model the situation as a game with an infinite past. Also, it will be cool to have a paper that starts with `At every day {n\in\mathbb{Z}}’ for a change. But, as I am sure many game theorists have independently discovered, it is not clear how to proceed.

Suppose we have two players; you already know their names. Alice plays at even days and Bob plays at odd days. Each chooses an action from {\{0,1\}}. It is tempting to define a pure strategy for Alice as a collection of functions {f_n:\{0,1\}^{\mathbb{N}}\rightarrow\{0,1\}}: for every even {n\in\mathbb{Z}}, the argument of {f_n} is the sequence of Bob’s previous actions and the output is Alice’s action at day {n}. Strategies for Bob can be defined analogously. But consider:
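Since the past is infinite, a strategy can never be fed a finite history. One way to make this concrete (the lazy representation below is my own device, not part of any standard formalism) is to model a past as a function from `days ago’ to actions:

```python
# A sketch of the strategy formalism above.  A past assigns an action to
# each of the opponent's previous days, indexed by how many of their days
# ago it was -- there is no "first" day to start from.
from typing import Callable

Past = Callable[[int], int]            # past(k) = opponent's action k of their days ago (k >= 1)
Strategy = Callable[[int, Past], int]  # (current day n, opponent's past) -> action in {0, 1}

# An illustrative strategy for Alice: copy Bob's most recent action.
def copy_yesterday(n: int, past: Past) -> int:
    return past(1)

# A past in which Bob played 1 on his most recent day and 0 before that.
bobs_past: Past = lambda k: 1 if k == 1 else 0
print(copy_yesterday(0, bobs_past))  # -> 1
```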

Example 1 Assume that Alice’s strategy is `At every day {n} play {0} if Bob has already played {0} infinitely many times and play {1} otherwise’ and Bob’s strategy is `At every day {n} play {0} if Alice has already played {1} infinitely many times and play {1} otherwise’.

In Example 1 there is no play path that is consistent with both strategies. We described each player’s plan, but for some reason it cannot be that both players play according to their plans.
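One way to see why: with an infinite past, whether the opponent `has already played {x} infinitely many times’ does not depend on the current day, since the pasts of any two days differ by only finitely many actions. So each player ends up playing a constant action, and consistency reduces to a finite fixed-point check. Here is a minimal Python sketch of that argument (the reduction to two booleans is mine, not part of the example):

```python
# Example 1 reduced to a finite fixed-point check.  A consistent play is
# described by a pair of tail conditions
#   b0 = "Bob plays 0 infinitely often", a1 = "Alice plays 1 infinitely often"
# that reproduces itself under the two strategies.  (I read Bob's strategy
# as "play 0 if Alice played 1 infinitely often, 1 otherwise".)
from itertools import product

def step(b0: bool, a1: bool) -> tuple:
    alice_action = 0 if b0 else 1   # Alice's strategy, as a constant action
    bob_action = 0 if a1 else 1     # Bob's strategy, as a constant action
    # The constant actions in turn determine the tail conditions:
    return (bob_action == 0, alice_action == 1)

fixed_points = [(b0, a1) for b0, a1 in product((False, True), repeat=2)
                if step(b0, a1) == (b0, a1)]
print(fixed_points)  # -> []  : no play path is consistent with both strategies
```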

Example 2 Assume that both Bob and Alice play `tit for tat’, which means play the same action that your opponent played yesterday.

In Example 2 there are two play paths that are consistent with the strategy profile. Again we described each player’s plan exactly, but for some reason these plans don’t completely pin down what will actually happen in the game.
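The two paths are the all-{0} path and the all-{1} path: tit for tat forces the action at each day to equal the action at the previous day. A quick finite-window sketch (the truncation of the bi-infinite play to a window of consecutive days is my own device):

```python
# Tit for tat on a window of consecutive days of the bi-infinite play:
# the action at each day must equal the action at the previous day, so a
# path restricted to the window is consistent iff it is constant.
from itertools import product

def consistent_with_tit_for_tat(path):
    """path[k] is the action at day n0 + k for some (arbitrary) day n0."""
    return all(path[k] == path[k - 1] for k in range(1, len(path)))

WINDOW = 8
consistent = [p for p in product((0, 1), repeat=WINDOW)
              if consistent_with_tit_for_tat(p)]
print(consistent)  # -> the two constant paths, all-0 and all-1
```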

Following von Neumann and Morgenstern, for me to call something a game, it needs to have a normal-form representation, which means that every strategy profile should uniquely determine the outcome of the game. This is why I used to view these examples as a dead end. And, since there is no game, I wouldn’t think there is anything to say about rationality in this context. Or so I thought.

von Neumann and Morgenstern are not the only players here. Zermelo, for example, did not think about games in terms of their normal forms. And, as an interesting discussion over at the Thoughts, Arguments and Rants blog (which, because of my inferiority complex towards philosophers, I am too shy to join) shows, neither does everyone around the blogosphere. They start with an example by Andrew Bacon of a zero-sum game in which each player has a winning strategy, in the sense that they win in every play consistent with that strategy. Of course, the outcome is undefined if both players use their `winning strategies’. They use the example to argue about the possibility of being purely rational.

Incidentally, for those of us who don’t like infinite games because the world is finite or something, it might be interesting to note that Andrew describes his example of a game with infinite past using a Zeno-type argument, which I already said I don’t buy: for each {n=1,2,\dots}, at {1/n} hours past 12pm a player picks an action, so that looking back from any given move, infinitely many moves have already been made.