These days I am working on a paper that touches on the concepts of common p-belief and iterated p-belief. I was startled to learn that these two concepts, which look similar, are in fact different. After corresponding with Stephen Morris I finally understood why. I decided to share this revelation with my dedicated readers. Here is what I gathered. Of course, Stephen is to be thanked for his clear explanations to me; I am to be blamed for my incoherent babbling below.

More often than not, players have incomplete information about the game they are playing: a player may not know how the exact outcome of the game depends on the players’ pure strategies, he may not know the set of pure strategies of the other players, he may not know what the other players know about the game, and the players need not agree in their assessments concerning the outcome of the game. For example, it seems very likely at this point that the peace talks between Israel and the Palestinians will die next week. What will happen afterward? Another war? Only with the Palestinians, or also with Hezbollah? A Gandhi-style resistance? The players may not agree on the most probable outcome. The Palestinians may believe that they will choose a non-violent resistance and assign a low probability to the situation deteriorating into an all-out war, while Israel may estimate that the situation will turn into a war with high probability.

The first to provide a model to study such situations was John Harsanyi in 1967-1968. Here I will present an equivalent model, proposed by Aumann in 1974, for two players; the reader can easily generalize it to any number of players. In this model, there is a finite set Ω of “states of the world”. Each state of the world is a list of the statements that are true in that state, for example, “if I play T and John plays L I will get 5 and John will get 2”, and “John has two possible pure strategies, L and R”. To be concrete, suppose that the set of states of the world contains six states, which we number 1,2,3,4,5,6. Each player has some information regarding the true state of the world. This information is given by a partition of Ω. For example, the partition of player 1 could be F1 = { {1,2}, {3}, {4}, {5,6} }. If the true state of the world is ω, all that player 1 knows is that the true state of the world lies in the element of the partition that contains ω. For example, if the true state of the world is ω=1, then player 1 only knows that the true state of the world is either 1 or 2. Suppose that the partition of player 2 is F2 = { {1,3,4}, {2,5,6} }. The fact that the partitions of the players are different tells us that they have different information: when the true state of the world is ω=1, player 1 knows that the true state is either 1 or 2, whereas player 2 knows that the true state is 1, 3, or 4. In particular, player 1 does not know what player 2 knows (and vice versa).

Finally, we add a probability distribution P over Ω. The interpretation of P is that it is the common prior belief of the players over the states of the world, before they learn their information. So the situation evolves as follows. First a state of the world ω in Ω is chosen according to the probability distribution P. Then each player learns the element of his partition that contains the chosen state of the world. And then the game continues: if the situation describes a game between me and John, then John and I choose pure strategies from our strategy sets.

For example, if the true state of the world is ω=1, then I (player 1) learn that the true state of the world is in the set {1,2}, and John (player 2) learns that the true state of the world is in the set {1,3,4}. Suppose that the probability distribution P is the uniform distribution: P(ω) = 1/6 for each ω in Ω. Then after I learn my information, I assign probability 1/2 to the true state being 1 and probability 1/2 to the true state being 2; John, on the other hand, assigns probability 1/3 to each of the states 1, 3, and 4.

As usual, an event is a subset E of Ω. The probability that a player assigns to an event depends on his information. For example, if E = {1,3,4} then the probability that I assign to E is P1(E | 1) = P1(E | 2) = 1/2, P1(E | 3) = P1(E | 4) = 1, and P1(E | 5) = P1(E | 6) = 0. Indeed, if ω=1 then I assign equal probabilities to states 1 and 2; the event E obtains only if the true state is 1, and this happens with my subjective probability 1/2. If the true state of the world is 3 then I know that this is the true state of the world, and then I know that E obtains. John assigns a different probability to the event E, depending on his information. One can verify that P2(E | 1) = P2(E | 3) = P2(E | 4) = 1, while P2(E | 2) = P2(E | 5) = P2(E | 6) = 0.
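To make this bookkeeping concrete, here is a minimal Python sketch of the model (the helper names `cell` and `posterior` are my own, nothing standard) that reproduces the conditional probabilities just computed:

```python
from fractions import Fraction

OMEGA = {1, 2, 3, 4, 5, 6}
F1 = [{1, 2}, {3}, {4}, {5, 6}]          # player 1's partition
F2 = [{1, 3, 4}, {2, 5, 6}]              # player 2's partition
P = {w: Fraction(1, 6) for w in OMEGA}   # the uniform prior

def cell(partition, w):
    """The element of the partition that contains state w."""
    return next(c for c in partition if w in c)

def posterior(partition, E, w):
    """P_i(E | w): the probability the player assigns to event E at state w."""
    c = cell(partition, w)
    return sum(P[x] for x in c & E) / sum(P[x] for x in c)

E = {1, 3, 4}
print([str(posterior(F1, E, w)) for w in sorted(OMEGA)])  # ['1/2', '1/2', '1', '1', '0', '0']
print([str(posterior(F2, E, w)) for w in sorted(OMEGA)])  # ['1', '0', '1', '1', '0', '0']
```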

We now get to the notion of p-belief, defined by Monderer and Samet in 1989. Let p be a number between 0 and 1. We say that a player p-believes in the event E at a state of the world ω if the subjective probability that he assigns to E at ω is at least p. Formally, this translates to P_i(E | ω) ≥ p. We denote by $B_i^p(E)$ the set of all states of the world at which player i p-believes in the event E. As we saw above, in our example, $B_1^{0.6}(E) = \{3,4\}$.
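On top of the `posterior` helper from the sketch above, the operator $B_i^p$ is a one-liner. A self-contained version (the names are again my own):

```python
from fractions import Fraction

OMEGA = {1, 2, 3, 4, 5, 6}
F1 = [{1, 2}, {3}, {4}, {5, 6}]
F2 = [{1, 3, 4}, {2, 5, 6}]
P = {w: Fraction(1, 6) for w in OMEGA}

def posterior(partition, E, w):
    c = next(c for c in partition if w in c)                 # the cell containing w
    return sum(P[x] for x in c & E) / sum(P[x] for x in c)

def p_belief(partition, E, p):
    """B_i^p(E): the states at which the player assigns probability >= p to E."""
    return {w for w in OMEGA if posterior(partition, E, w) >= p}

print(p_belief(F1, {1, 3, 4}, Fraction(3, 5)))   # {3, 4}, as in the text
```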

In games we care about what one player believes about the beliefs of the other players. We would therefore like to talk about concepts of iterated p-belief. Recall that an event is common knowledge at a state of the world ω if both players know it at ω, both players know that both players know it at ω, both players know that both players know that both players know it at ω, etc., ad infinitum. An equivalent way to say this is that each player knows the event at ω, each player knows that the other player knows the event at ω, each player knows that the other player knows that he knows the event at ω, etc., ad infinitum. The two equivalent definitions can be written formally as follows. Denote by $K_i(E)$ the collection of all states of the world at which player i knows the event E; this is the set of all states of the world ω such that the element of player i's partition that contains ω is a subset of E. Indeed, if such an ω is the true state of the world, then the information that player i receives tells him that the true state lies in the element of his partition that contains ω, hence in E, and therefore he knows that the event E obtains. Denote K(E) = K_1(E) ∩ K_2(E): this is the collection of all states of the world at which both players know that E obtains.

The first definition of common knowledge is that both players know the event at ω, both players know that both players know the event at ω, both players know that both players know that both players know the event at ω, etc., ad infinitum. Formally this is: ω is in the set K(E) ∩ K(K(E)) ∩ K(K(K(E))) ∩ …

The second definition of common knowledge is that each player knows the event at ω, each player knows that the other player knows the event at ω, each player knows that the other player knows that he knows the event at ω, etc., ad infinitum. Formally this is: ω is in the set (K_1(E) ∩ K_1(K_2(E)) ∩ K_1(K_2(K_1(E))) ∩ …) ∩ (K_2(E) ∩ K_2(K_1(E)) ∩ K_2(K_1(K_2(E))) ∩ …).

Why are the two definitions equivalent? This follows from two properties of the knowledge operator: (1) K_i(A ∩ B) = K_i(A) ∩ K_i(B), and (2) K_i(K_i(A)) = K_i(A). Indeed, by (1), K(K(E)) = K_1(K_1(E)) ∩ K_1(K_2(E)) ∩ K_2(K_1(E)) ∩ K_2(K_2(E)), and by (2) the terms K_i(K_i(E)) reduce to K_i(E); carrying this simplification through all the levels of the first definition yields exactly the expression of the second definition.
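Both properties are easy to check by brute force over all 2^6 = 64 events of our six-state example; a quick sketch, assuming the partitions defined earlier:

```python
from itertools import chain, combinations

OMEGA = {1, 2, 3, 4, 5, 6}
F1 = [{1, 2}, {3}, {4}, {5, 6}]   # player 1's partition

def know(partition, E):
    """K_i(E): the states whose partition cell is contained in E."""
    return {w for w in OMEGA if next(c for c in partition if w in c) <= E}

# every subset of OMEGA is an event
events = [set(s) for s in chain.from_iterable(
    combinations(sorted(OMEGA), r) for r in range(len(OMEGA) + 1))]

# (1) K_i(A ∩ B) = K_i(A) ∩ K_i(B), for all events A, B
assert all(know(F1, A & B) == know(F1, A) & know(F1, B)
           for A in events for B in events)
# (2) K_i(K_i(A)) = K_i(A), for all events A
assert all(know(F1, know(F1, A)) == know(F1, A) for A in events)
print("both properties hold for player 1's knowledge operator")
```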

For beliefs, the two analogous definitions turn out not to be equivalent. Denote B^p(E) = B_1^p(E) ∩ B_2^p(E). This is the collection of states of the world at which both players assign probability at least p to the event E. The analogues of the two definitions of common knowledge are as follows:

An event E is common p-belief at a state of the world ω if both players p-believe in E at ω, both players p-believe that both players p-believe in E at ω, etc., ad infinitum: ω is in B^p(E) ∩ B^p(B^p(E)) ∩ B^p(B^p(B^p(E))) ∩ …

An event E is iterated p-belief at a state of the world ω if each player p-believes in E at ω, each player p-believes that the other player p-believes in E at ω, etc., ad infinitum: ω is in (B_1^p(E) ∩ B_1^p(B_2^p(E)) ∩ B_1^p(B_2^p(B_1^p(E))) ∩ …) ∩ (B_2^p(E) ∩ B_2^p(B_1^p(E)) ∩ B_2^p(B_1^p(B_2^p(E))) ∩ …).
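Since Ω is finite, both infinite intersections can actually be computed: each chain is generated by repeatedly applying a fixed map to a subset of Ω, so the iterates must eventually repeat, and from that point on no new sets appear. Here is a sketch of both computations (the function names and the cycle-detection scheme are my own devising, not Morris's):

```python
from fractions import Fraction

OMEGA = {1, 2, 3, 4, 5, 6}
F1 = [{1, 2}, {3}, {4}, {5, 6}]
F2 = [{1, 3, 4}, {2, 5, 6}]
P = {w: Fraction(1, 6) for w in OMEGA}

def p_belief(partition, E, p):
    """B_i^p(E), as in the earlier sketch."""
    def post(w):
        c = next(c for c in partition if w in c)
        return sum(P[x] for x in c & E) / sum(P[x] for x in c)
    return {w for w in OMEGA if post(w) >= p}

def orbit_intersection(start, step):
    """Intersect step(start), step(step(start)), ... Since step maps the finite
    family of subsets of OMEGA into itself, the iterates eventually repeat, and
    once a repeat occurs every later term has already been intersected."""
    seen, cur, result = set(), frozenset(start), frozenset(OMEGA)
    while True:
        cur = frozenset(step(cur))
        if cur in seen:
            return set(result)
        seen.add(cur)
        result &= cur

def common_p_belief(E, p):
    """States at which E is common p-belief: B^p(E) ∩ B^p(B^p(E)) ∩ ..."""
    return orbit_intersection(E, lambda C: p_belief(F1, C, p) & p_belief(F2, C, p))

def iterated_p_belief(E, p):
    """States at which E is iterated p-belief: the two alternating chains."""
    result = set(OMEGA)
    for me, you in ((F1, F2), (F2, F1)):
        T = lambda C, me=me, you=you: p_belief(me, p_belief(you, C, p), p)
        first = p_belief(me, E, p)
        result &= first                          # B_me(E)
        result &= orbit_intersection(E, T)       # B_me(B_you(E)), B_me(B_you(B_me(B_you(E)))), ...
        result &= orbit_intersection(first, T)   # B_me(B_you(B_me(E))), ...
    return result

# Run on the example worked out below (E = {1,2,3}, p = 0.6):
print(iterated_p_belief({1, 2, 3}, Fraction(3, 5)))   # {3}
print(common_p_belief({1, 2, 3}, Fraction(3, 5)))     # set()
```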

Surprisingly (at least to me at first sight), these two definitions are not equivalent. If an event is common p-belief at ω then it is iterated p-belief at ω, but the converse does not hold. The reason is that the equivalence for common knowledge used the two properties (1) and (2). The second, which translates to B_i^p(B_i^p(C)) = B_i^p(C), does hold in general. The first, which translates to B_i^p(A ∩ C) = B_i^p(A) ∩ B_i^p(C), does not: it may well be that A ∩ C is much smaller than both A and C, so that I assign probability at least p to each of the larger sets A and C, yet probability smaller than p to the intersection A ∩ C.
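Here is a quick numerical illustration of this failure, using player 2's partition from our example and two events of my own choosing (not coincidentally, they will reappear as E and G in Morris's example below):

```python
from fractions import Fraction

OMEGA = {1, 2, 3, 4, 5, 6}
F2 = [{1, 3, 4}, {2, 5, 6}]              # player 2's partition
P = {w: Fraction(1, 6) for w in OMEGA}

def p_belief(partition, E, p):
    def post(w):
        c = next(c for c in partition if w in c)
        return sum(P[x] for x in c & E) / sum(P[x] for x in c)
    return {w for w in OMEGA if post(w) >= p}

p, A, C = Fraction(3, 5), {1, 2, 3}, {3, 4}
print(p_belief(F2, A, p) & p_belief(F2, C, p))   # {1, 3, 4}
print(p_belief(F2, A & C, p))                    # set(): at states 1, 3, 4 player 2
                                                 # assigns only 1/3 to A ∩ C = {3}
```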

Example 3 in Morris (1999) shows that these two definitions indeed differ. Its setup is the one we have used throughout:

  • The set of states of the world is Ω = { 1,2,3,4,5,6 }.
  • The partition of player 1 is F1 = { {1,2}, {3}, {4}, {5,6} }.
  • The partition of player 2 is F2 = { {1,3,4}, {2,5,6} }.
  • The probability distribution P is P(ω) = 1/6 for each ω in Ω.

Consider the event E = {1,2,3}, and set p=0.6. Then

P1(E | 1) = P1(E | 2) = P1(E | 3) = 1, P1(E | 4) = P1(E | 5) = P1(E | 6) = 0,

P2(E | 1) = P2(E | 3) = P2(E | 4) = 2/3, P2(E | 2) = P2(E | 5) = P2(E | 6) = 1/3.

Therefore, B_1^p(E) = {1,2,3}, and B_2^p(E) = {1,3,4}.

Set F = {1,3,4}. Then

P1(F | 1) = P1(F | 2) = 1/2, P1(F | 3) = P1(F | 4) = 1, P1(F | 5) = P1(F | 6) = 0,

P2(F | 1) = P2(F | 3) = P2(F | 4) = 1, P2(F | 2) = P2(F | 5) = P2(F | 6) = 0.

Therefore, B_1^p(F) = {3,4} and B_2^p(F) = {1,3,4}.

Set G = {3,4}. Then

P1(G | 1) = P1(G | 2) = 0, P1(G | 3) = P1(G | 4) = 1, P1(G | 5) = P1(G | 6) = 0,

P2(G | 1) = P2(G | 3) = P2(G | 4) = 2/3, P2(G | 2) = P2(G | 5) = P2(G | 6) = 0.

Therefore, B_1^p(G) = {3,4}, and B_2^p(G) = {1,3,4}.

We deduce that B_1^p(E) = E, B_2^p(E) = F, B_1^p(F) = B_1^p(G) = G, and B_2^p(F) = B_2^p(G) = F. Every term in the two alternating chains in the definition of iterated p-belief is therefore one of the sets E, F, G, so the set of states of the world at which E is iterated p-belief is E ∩ F ∩ G = {3}. In particular, E is iterated p-belief at the state of the world ω=3.

Let us now compute, in the same way, the set of states of the world at which E is common p-belief. From the computations above, B^p(E) = B_1^p(E) ∩ B_2^p(E) = {1,2,3} ∩ {1,3,4} = {1,3}. Set H = {1,3}. Then

P1(H | 1) = P1(H | 2) = 1/2, P1(H | 3) = 1, P1(H | 4) = P1(H | 5) = P1(H | 6) = 0,

P2(H | 1) = P2(H | 3) = P2(H | 4) = 2/3, P2(H | 2) = P2(H | 5) = P2(H | 6) = 0.

Therefore, B_1^p(H) = {3} and B_2^p(H) = {1,3,4}, so that B^p(B^p(E)) = B^p(H) = {3}. Set J = {3}. Then B_1^p(J) = J and B_2^p(J) = Ø. Thus B^p(B^p(B^p(E))) = B^p(J) = Ø, so the intersection in the definition of common p-belief is empty: E is iterated p-belief at the state of the world ω=3, yet it is not common p-belief there. In fact, at ω=3 there is no non-trivial event that is common p-belief.
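For the skeptical reader, the whole chain can be reproduced mechanically; a self-contained sketch with the same helpers as before:

```python
from fractions import Fraction

OMEGA = {1, 2, 3, 4, 5, 6}
F1 = [{1, 2}, {3}, {4}, {5, 6}]
F2 = [{1, 3, 4}, {2, 5, 6}]
P = {w: Fraction(1, 6) for w in OMEGA}

def p_belief(partition, E, p):
    def post(w):
        c = next(c for c in partition if w in c)
        return sum(P[x] for x in c & E) / sum(P[x] for x in c)
    return {w for w in OMEGA if post(w) >= p}

p, level = Fraction(3, 5), {1, 2, 3}     # p = 0.6, start from E
for n in range(1, 4):
    level = p_belief(F1, level, p) & p_belief(F2, level, p)   # apply B^p
    print(n, sorted(level))
# 1 [1, 3]  -- B^p(E) = H
# 2 [3]     -- B^p(B^p(E)) = J
# 3 []      -- B^p(B^p(B^p(E))) is empty: E is not common p-belief anywhere
```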