Every finite-horizon game in extensive form in which the number of actions at every node is finite has a subgame-perfect equilibrium. Indeed, consider a smallest subgame, that is, a subgame that contains no proper subgames. By Nash’s theorem this subgame has a Nash equilibrium in mixed strategies. Since this subgame contains no proper subgame, this Nash equilibrium is a subgame-perfect equilibrium of the subgame. We can then replace the whole subgame by a terminal node whose payoff coincides with the equilibrium payoff of the subgame, and continue inductively to construct a subgame-perfect equilibrium (in mixed strategies) of the whole game.
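To make the inductive construction concrete, here is a minimal Python sketch of the idea in the simplest special case: a finite game with perfect information, where every smallest subgame is a single decision node, so the Nash-equilibrium step reduces to picking an optimal action for the player who moves there. The tree encoding and the toy example are my own illustration, not anything from the post.

```python
# Backward induction on a finite game tree with perfect information: a minimal
# illustration of the inductive construction described above, in the special
# case where every smallest subgame is a single decision node.  The tree
# encoding (payoff tuples at leaves, dicts at decision nodes) is an
# illustrative convention of this sketch only.

def solve(node):
    """Return the subgame-perfect payoff vector of the subgame rooted at `node`,
    annotating each decision node with the action chosen there."""
    if isinstance(node, tuple):           # terminal node: payoffs are given
        return node
    mover = node["player"]
    best_action, best_value = None, None
    for action, child in node["actions"].items():
        value = solve(child)              # solve the smaller subgame first
        if best_value is None or value[mover] > best_value[mover]:
            best_action, best_value = action, value
    node["choice"] = best_action          # record the equilibrium action
    # Conceptually, the subgame rooted here has now been replaced by a
    # terminal node carrying the payoff vector `best_value`.
    return best_value


# A tiny two-player example: player 0 moves first, then player 1.
game = {"player": 0, "actions": {
    "L": {"player": 1, "actions": {"l": (2, 1), "r": (0, 0)}},
    "R": {"player": 1, "actions": {"l": (3, 0), "r": (1, 2)}},
}}

print(solve(game))                        # equilibrium payoffs: (2, 1)
print(game["choice"])                     # player 0's equilibrium action: "L"
```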

When uncountably many actions are available at some nodes, a subgame-perfect equilibrium need not exist (Harris, Reny, and Robson, 1995).

What happens in infinite-horizon games? When the horizon is infinite, a Nash equilibrium need not exist. Indeed, consider a one-player infinite-horizon game in which the player, Alice, has two possible actions, A and B, at every stage. Alice’s payoff is 1-(1/n), where n is the first stage in which she chooses the action B, provided n is finite; if Alice never chooses the action B, her payoff is 0. In this game Alice would like to choose the action B for the first time as late as possible, and whatever stage she plans to choose B, she is better off waiting one more stage. Therefore Alice has no optimal strategy, and there is no equilibrium.
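A few lines of Python make this tangible (the function name and the sampled stages are just my own illustration): the payoff 1-(1/n) is strictly increasing in n, its supremum 1 is never attained, and never choosing B yields only 0.

```python
# Alice's payoff when she first chooses B at stage n (n = 1, 2, ...);
# if she never chooses B, her payoff is 0.
def payoff(n):
    return 1 - 1 / n

# Whatever stage Alice picks, waiting one more stage is strictly better,
# so no choice of stopping stage is optimal.
for n in (1, 2, 10, 100, 1000):
    print(n, payoff(n), payoff(n + 1) > payoff(n))   # the comparison is always True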

Plainly this toy game does have an ε-equilibrium for every ε>0: choose B for the first time at the first integer stage n with n ≥ 1/ε. This yields a payoff of at least 1-ε, within ε of the supremum 1.
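For the record, the claim can be checked directly (a trivial sketch; taking the ceiling of 1/ε is just my way of making the stage an integer):

```python
from math import ceil

# Choosing B for the first time at stage ceil(1/eps) yields a payoff
# within eps of the supremum 1, i.e. an eps-optimal strategy.
for eps in (0.5, 0.1, 0.01, 0.003):
    n = ceil(1 / eps)
    print(eps, n, 1 - 1 / n >= 1 - eps)   # True in every case
```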

Does a subgame-perfect ε-equilibrium exist in every infinite-horizon perfect-information extensive-form game? It turns out that the answer is negative. The following example, which will appear in a new paper by János Flesch, Jeroen Kuipers, Ayala Mashiah-Yaakovi, Gijs Schoenmakers, Eran Shmaya, your humble servant, and Koos Vrieze, proves this point.

Alice and Bob play an infinite-stage game. In each stage one of them is the active player who chooses the action (and the other does nothing). The active player has two actions, A and B, that correspond to the identity of the next active player. That is, if the current active player chooses action A, then Alice is the active player in the next stage, while if the current active player chooses action B, then Bob is the active player in the next stage.

What are the payoffs?

  • If there is a stage t such that, from stage t and on, Alice is the active player, then the payoff to Alice is -1 and Bob’s payoff is 2. In such a case we say that the game absorbs with Alice.
  • If there is a stage t such that, from stage t and on, Bob is the active player, then the payoff to Alice is -2 and Bob’s payoff is 1. In such a case we say that the game absorbs with Bob.
  • Otherwise the payoff to both is 0.
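To fix ideas, here is a minimal Python sketch of the dynamics and the payoff rule. The strategy interface (a function from the history of active players to an action), the truncation of the infinite play after finitely many stages, and the convention of treating a player who holds the move throughout the final stretch as the player with whom the play absorbs are all my own simplifications; they only approximate the true tail condition in the text.

```python
# A minimal sketch of the Alice-Bob game described above.  Strategies are
# functions from the history of active players to an action ("A" or "B").
# An infinite play cannot be simulated, so we truncate after `stages` stages
# and treat a player who holds the move during the entire final `tail` stages
# as the one with whom the play "absorbs"; this is a crude finite
# approximation of the exact tail condition.

def play(alice, bob, first_active="Alice", stages=10_000, tail=1_000):
    history = []                     # sequence of active players
    active = first_active
    for _ in range(stages):
        history.append(active)
        strategy = alice if active == "Alice" else bob
        action = strategy(history)
        active = "Alice" if action == "A" else "Bob"
    tail_players = set(history[-tail:])
    if tail_players == {"Alice"}:
        return (-1, 2)               # play absorbs with Alice
    if tail_players == {"Bob"}:
        return (-2, 1)               # play absorbs with Bob
    return (0, 0)                    # neither player keeps the move forever

# Examples: Alice always keeps the move; both always pass it on; and so forth.
always_A = lambda history: "A"
always_B = lambda history: "B"
print(play(always_A, always_B))      # Alice holds the move forever: (-1, 2)
print(play(always_B, always_B))      # Bob ends up holding it forever: (-2, 1)
print(play(always_B, always_A))      # the move alternates forever: (0, 0)
```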

Let us argue that in this game there is no subgame-perfect ε-equilibrium, provided ε is sufficiently small. So fix ε>0 sufficiently small and suppose by contradiction that there is a subgame-perfect ε-equilibrium.

In any subgame in which Bob is the first active player he can guarantee 1 by always choosing B, since the play then absorbs with him. In fact, Bob can guarantee 1 in every subgame by always choosing B whenever he is active: either he is active at some stage, in which case the play absorbs with him and he gets 1, or he is never active, in which case the play absorbs with Alice and he gets 2. Consequently, Bob’s payoff in every subgame must be at least 1-ε.

Fix now a subgame in which Alice is the first active player. Can it be that the play absorbs with Alice with probability less than ε? If this happened, then, since Bob’s payoff in the subgame is at least 1-ε and he gets at most 2 when the play absorbs with Alice, the probability that the play absorbs with Bob would be at least 1-3ε. But then Alice’s payoff would be at most -2+6ε, way below the -1 she can guarantee by always playing A, contradicting the ε-equilibrium property of the subgame when ε is small. Thus, in any subgame in which Alice is the first active player, the play absorbs with Alice with probability at least ε.
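Spelling out the arithmetic (here p and q denote the probabilities, in that subgame, that the play absorbs with Alice and with Bob, respectively, and p < ε is the assumption being refuted):

```latex
% Bob's payoff in the subgame equals 2p + q and Alice's equals -p - 2q;
% the assumption being refuted is p < \varepsilon.
\begin{align*}
  1-\varepsilon \;\le\; 2p + q \;<\; 2\varepsilon + q
    \quad&\Longrightarrow\quad q \;>\; 1-3\varepsilon,\\
  \text{Alice's payoff} \;=\; -p - 2q \;\le\; -2q \;<\; -2(1-3\varepsilon)
    \;=\; -2+6\varepsilon \quad&<\; -1-\varepsilon
    \quad\text{whenever } \varepsilon<\tfrac17.
\end{align*}
```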

By Lévy’s zero-one law this conclusion implies that with probability 1 the play absorbs with Alice, and this holds in every subgame. In particular the play never absorbs with Bob: with probability 1 he never starts an infinite sequence of B’s, after any history. But then Alice has no incentive to play A: by always playing B she ensures that with probability 1 the play never absorbs at all (whether Bob ever keeps the move forever depends only on his own strategy, and we have just seen that with probability 1 he does not), so she gets 0 rather than the roughly -1 she receives when the play absorbs with her. This profitable deviation contradicts the assumption that we started with a subgame-perfect ε-equilibrium.
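For readers who have not met it, the form of Lévy’s zero-one law used here is the following (E is any event determined by the infinite play, and F_t is the information revealed by the first t stages):

```latex
% E is any event determined by the infinite play, and \mathcal{F}_t is the
% information revealed by the first t stages of the play.
\Pr\bigl(E \mid \mathcal{F}_t\bigr) \;\xrightarrow[\;t\to\infty\;]{}\; \mathbf{1}_E
\qquad \text{almost surely}.
```

In particular, an event whose conditional probability remains at least ε at infinitely many stages must occur almost surely.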

This example shows that, in contrast to the well-beloved concept of ε-equilibrium, we do not have a good concept of subgame-perfect ε-equilibrium in infinite-horizon games. In another post I will comment on the concept of trembling-hand equilibrium in infinite-horizon perfect-information games.