Unfortunately (or maybe fortunately), players tend to be boundedly rational: our computational power is bounded, our memory is bounded, and the number of people an organization can hire is bounded. So it makes sense to study games in which players can use only a restricted set of “simple” strategies. This topic was studied extensively in the late 80′s and 90′s in the context of repeated games. Eran advocated in previous posts for the set of computable strategies. This set indeed rules out complex strategies, but it still allows unbounded memory.

Two families of strategies in repeated games have been studied in the past to model players with bounded computational power: strategies with bounded recall and strategies implementable by finite automata. A strategy with recall k can recall only the last k action profiles chosen by the players; whatever happened in the more distant past is forgotten. An automaton is a finite state machine: it has a finite number of states, and at each stage one of them is designated the “current” state. At every stage the machine produces an output, which is a function of the current state, and then moves to a new state (the new “current” state) as a function of the current state and of its input. If the set of inputs is the set of action profiles, and the set of outputs is the set of actions of player i, then an automaton implements a strategy for player i. Unlike a strategy with recall k, an automaton can remember events from the far past, but because the number of its states is bounded, the number of events that it can remember is bounded.
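To make the definition concrete, here is a minimal sketch in Python of an automaton implementing a repeated-game strategy. The example (grim trigger in the repeated Prisoner’s Dilemma) and all names are illustrative, not taken from the post: the automaton’s input is the action profile of the previous stage, its output is the player’s own action.

```python
class Automaton:
    """A finite state machine implementing a strategy for one player.

    output:     maps each state to the player's action in that state.
    transition: maps (state, action profile) to the next state.
    """

    def __init__(self, initial, output, transition):
        self.state = initial
        self.output = output
        self.transition = transition

    def act(self):
        return self.output[self.state]

    def update(self, profile):
        self.state = self.transition[(self.state, profile)]


# Grim trigger for the repeated Prisoner's Dilemma: cooperate until the
# opponent defects once, then defect forever.  Two states suffice.
grim = Automaton(
    initial="coop",
    output={"coop": "C", "punish": "D"},
    transition={
        ("coop", ("C", "C")): "coop",
        ("coop", ("C", "D")): "punish",
        ("coop", ("D", "C")): "punish",
        ("coop", ("D", "D")): "punish",
        ("punish", ("C", "C")): "punish",
        ("punish", ("C", "D")): "punish",
        ("punish", ("D", "C")): "punish",
        ("punish", ("D", "D")): "punish",
    },
)

# Play 5 stages against an opponent who always defects.
history = []
for _ in range(5):
    profile = (grim.act(), "D")
    history.append(profile)
    grim.update(profile)

print(history)  # [('C','D'), ('D','D'), ('D','D'), ('D','D'), ('D','D')]
```

Note that grim trigger has unbounded recall in the bounded-recall sense (a defection arbitrarily far in the past still matters), yet only two states are needed, illustrating the difference between the two families.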

Prominent game theorists, such as Abreu, Aumann, Ben Porath, Kalai, Lehrer, Neyman, Papadimitriou, Rubinstein, Sabourian, and Sorin, have studied repeated games played by finite automata and repeated games played by players with bounded recall. Here I will restrict myself to finite automata.

In a nutshell, the theoretical literature divides into two strands:

1) Two-player zero-sum T-stage repeated games, where each player i is restricted to strategies that can be implemented by automata with n_i states. Here the question is how the value depends on T, n_1, and n_2. This strand of the literature allows one to answer questions like: how does the relative memory size of the two players affect the value of the game? Or, how much should a player invest in increasing his memory so that his payoff increases significantly?
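A toy simulation, with made-up numbers not taken from any of the papers above, shows why relative memory size matters in the zero-sum setting: in matching pennies, a player whose automaton cycles deterministically can be fully exploited by an opponent with enough states to track the cycle.

```python
def matching_pennies_payoff(a1, a2):
    """Payoff to player 1: +1 if the actions match, -1 otherwise."""
    return 1 if a1 == a2 else -1

# Player 2's automaton alternates H, T, H, T, ... regardless of input.
p2_cycle = ["H", "T"]

# Player 1, with at least two states, can mirror the cycle exactly.
T_stages = 100
total = 0
for t in range(T_stages):
    a2 = p2_cycle[t % 2]
    a1 = a2              # player 1 predicts player 2's next move
    total += matching_pennies_payoff(a1, a2)

print(total / T_stages)  # 1.0, far above the value 0 of the one-shot game
```

Against a sufficiently larger opponent, a small deterministic automaton is an open book; the literature in this strand quantifies exactly how large the memory gap must be before this kind of exploitation kicks in.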

2) Two-player non-zero-sum infinitely repeated games, where the players have lexicographic preferences: each player tries to maximize his long-run average payoff, but, subject to that, he tries to minimize the size of the automaton that he uses. Abreu and Rubinstein proved that the set of equilibrium payoffs shrinks significantly, and one does not obtain a folk theorem. Rather, the equilibrium payoffs are only those payoffs that are (a) feasible, (b) individually rational (w.r.t. the min-max value in pure strategies), and (c) supportable by coordinated play: e.g., whenever player 1 plays T, player 2 plays L, and whenever player 1 plays B, player 2 plays R.

In practice, the memory size of the players is not fixed: players can increase their memory at a given cost, and sometimes they can decrease their memory size, thereby reducing their expenses. This raises the following question: suppose that memory is costly, say, each memory cell costs x cents, and the utility is linear: the payoff of a player is the difference between, say, his long-run average payoff in the game and the cost of his memory (x times his memory size). Say that a vector y=(y1,y2) is a Bounded Computational Capacity equilibrium payoff if (a) it is the limit of equilibrium payoffs as the memory cost x goes to 0, and (b) the cost of the corresponding automata (those that implement the sequence of equilibria) goes to 0 as x goes to 0. What, then, is the set of Bounded Computational Capacity equilibrium payoffs?
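The linear utility above is easy to write down explicitly; here is a small numerical sketch, with illustrative numbers of my own choosing, showing how the memory-cost term vanishes as x goes to 0, which is the limit taken in the definition.

```python
def net_payoff(long_run_average, memory_size, x):
    """A player's utility under costly memory: long-run average payoff
    in the game minus x (cost per memory cell) times the memory size."""
    return long_run_average - x * memory_size

# A hypothetical 10-state automaton earning a long-run average of 3.0:
for x in (0.1, 0.01, 0.001):
    print(f"x = {x}: net payoff = {net_payoff(3.0, 10, x)}")
```

As x shrinks, the net payoff approaches the pure long-run average 3.0; condition (b) additionally demands that the automata chosen in equilibrium do not blow up so fast that their total cost x times size fails to vanish.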

It is interesting to note that a Bounded Computational Capacity equilibrium payoff need not be a Nash equilibrium payoff, and a Nash equilibrium payoff need not be a Bounded Computational Capacity equilibrium payoff. Do I have an example in which this actually happens? Unfortunately not.

It turns out that the set of mixed-strategy Bounded Computational Capacity equilibrium payoffs includes once again the set of feasible and individually rational payoffs (w.r.t. the min-max value in pure strategies). In this context, a mixed strategy is a probability distribution over pure automata (so a mixed strategy is a mixed automaton, and NOT a behavior automaton, whose output is a mixed action). A mixed automaton is equivalent to having a decision maker randomly choose an agent to play the game, where each agent implements a pure automaton. Mixed automata arise naturally when one player does not know which automaton the other player is going to use, and is therefore in fact facing a distribution over automata, that is, a mixed automaton.
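The "decision maker randomly chooses an agent" interpretation can be sketched in a few lines; the automaton names and the 0.7/0.3 distribution below are purely illustrative.

```python
import random

# A mixed automaton: a probability distribution over pure automata.
# The decision maker samples one pure automaton (one "agent") once,
# before play begins, and that agent plays the whole repeated game.
mixed_automaton = [("grim_trigger", 0.7), ("always_defect", 0.3)]

def sample_pure_automaton(mixed, rng):
    names, weights = zip(*mixed)
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_pure_automaton(mixed_automaton, rng) for _ in range(1000)]
print(draws.count("grim_trigger") / 1000)  # roughly 0.7
```

The randomization happens once, up front, which is exactly what distinguishes a mixed automaton from a behavior automaton that would randomize its output afresh at every stage.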

So, for those of us who know Abreu and Rubinstein (1988), it turns out that their result depends on two assumptions: (a) memory is free, and (b) players are restricted to pure strategies (that is, pure automata). These two requirements force the players to use simple strategies, and both will use automata of the same size. Once memory is costly and players can randomly choose their pure automaton, they regain their ability to choose complex strategies, thereby restoring the folk theorem.

Well, a small step for research, a tiny step for humanity.

## 2 comments

August 20, 2010 at 4:01 pm

ben

wow. thanks! a penetrating summary.

a friend of mine used to dismiss the noncomputable with the offhand ‘it is not of eternal significance’ which i love.

but fundamentally, who are you and why are you doing this to yourself?!

August 21, 2010 at 4:57 am

ben

an article about bounded computational capacity that makes me feel like I’m not smart enough to understand?

i can’t tell if i’m in on the joke or if i am the joke…