
The goal of this post is to highlight a feature of the Bayesian Persuasion problem that seems useful but, as far as I can tell, is not explicitly stated anywhere.

Let \mathcal{S} \subset \mathbb{R} be the finite set of states and \mathcal{A} \subset \mathbb{R} the finite set of actions the receiver can take. Elements of each are denoted by \omega_{j} and a_{i}, respectively.

The value to the receiver (he/him) of choosing action a_i in state \omega_j is V_R(a_i, \omega_j). The payoff to the sender (she/her) when the receiver takes action a_i in state \omega_j is denoted V_S(a_i, \omega_j). Sender and receiver share a common prior p over \mathcal{S}.

The persuasion problem can be formulated in terms of choosing a distribution over state and action pairs such that an obedience constraint holds for the receiver. Call this the obedience formulation. The more popular formulation, in terms of finding a concave envelope, is, as will be seen below, equivalent.

The sender commits to a mapping from the state to an action recommendation. Given a recommendation, the receiver updates his prior over states. Once the algebraic dust settles, it turns out that all that matters is the joint probability of action and state. Thus, the sender’s problem reduces to choosing x(\omega_j,a_i), the joint probability of action a_i and state \omega_j. The sender’s optimization problem is:

\max_{x(\omega,a)} \sum_{i=1}^{|\mathcal{A}|}\sum_{j=1}^{|\mathcal{S}|}V_{S}(a_{i}, \omega_j)x(\omega_{j},a_{i})

subject to

\sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{i}, \omega_j)x(\omega_{j},a_{i}) \geq \sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{k}, \omega_j)x(\omega_{j},a_{i})\,\, \forall a_{i}, a_{k}

\sum_{i=1}^{|\mathcal{A}|}x(\omega_{j},a_{i}) = p(\omega_{j})\,\, \forall \omega_j \in \mathcal{S}

x(\omega_{j},a_{i}) \geq 0 \,\, \forall  \omega_j \in \mathcal{S} \& a_i \in \mathcal{A}

The first set of constraints are the obedience constraints (OC), which ensure that it is in the receiver’s interest to follow the sender’s recommendation.

The second ensures that the total probability weight assigned to actions recommended in state \omega_j matches the prior probability of state \omega_j being realized.
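To make the obedience formulation concrete, here is a minimal sketch that sets up and solves this LP with scipy.optimize.linprog. The payoff matrices VS and VR and the prior p are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoffs: VS[i, j] is the sender's payoff and VR[i, j] the
# receiver's payoff when action a_i is taken in state w_j.
VS = np.array([[0.0, 0.0],
               [1.0, 1.0]])
VR = np.array([[1.0, 0.0],
               [0.0, 1.0]])
p = np.array([0.5, 0.5])          # common prior over states
nA, nS = VS.shape

# Decision variable: x[i, j] = joint probability of state w_j and
# recommendation a_i, flattened row-major. linprog minimizes, so negate.
c = -VS.flatten()

# Obedience: for each i and each deviation k,
# sum_j (VR[k, j] - VR[i, j]) x[i, j] <= 0.
A_ub, b_ub = [], []
for i in range(nA):
    for k in range(nA):
        if k != i:
            row = np.zeros(nA * nS)
            row[i * nS:(i + 1) * nS] = VR[k] - VR[i]
            A_ub.append(row)
            b_ub.append(0.0)

# Marginals: for each state j, sum_i x[i, j] = p[j].
A_eq = np.zeros((nS, nA * nS))
for j in range(nS):
    A_eq[j, j::nS] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=p, bounds=(0, None))
print("sender's optimal value:", -res.fun)
print("optimal joint distribution x:\n", res.x.reshape(nA, nS))
```

In this toy example the receiver is indifferent between the two actions under the prior, so the sender can obediently recommend the action she prefers in every state.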

The difficulty is dealing with the OC constraints. Many relax them using the method of Lagrange multipliers and try to pin down the values of the multipliers. We take a different approach.

For each action a_i, let K_i be the polyhedral cone of non-negative vectors (x(\omega_1,a_i), \ldots, x(\omega_{|\mathcal{S}|},a_i)) defined by:

\sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{i}, \omega_j)x(\omega_{j},a_{i}) \geq \sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{k}, \omega_j)x(\omega_{j},a_{i})\,\, \forall a_{k}

By the Weyl-Minkowski theorem, K_i is characterized by its generators (extreme rays), which we will denote by G_i = \{g^1_i, \ldots, g^{t_i}_i\}. The generators are non-negative and non-zero, and can be normalized to sum to 1, which allows us to interpret them as probability distributions over the state space. For any generator (probability distribution) in G_i, action a_i is a best response of the receiver. Many of the qualitative features of a solution to the persuasion problem that have been identified in the literature are statements about the generators of K_i.
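For small instances the generators can be computed directly. The sketch below uses pycddlib (imported as cdd) to enumerate the extreme rays of K_i; it assumes pycddlib's classic Matrix/Polyhedron interface (pre-3.0 releases) and reuses numpy (np) and the hypothetical VR from the sketch above.

```python
import cdd  # pycddlib; the classic (pre-3.0) API is assumed here

def generators(VR, i):
    """Extreme rays of K_i, normalized to probability distributions."""
    nA, nS = VR.shape
    # H-representation rows [b, a_1, ..., a_nS] encode b + a . x >= 0:
    # one obedience inequality per deviation k, plus x >= 0.
    rows = [[0.0] + list(VR[i] - VR[k]) for k in range(nA) if k != i]
    rows += [[0.0] + list(e) for e in np.eye(nS)]
    mat = cdd.Matrix(rows, number_type='float')
    mat.rep_type = cdd.RepType.INEQUALITY
    ext = cdd.Polyhedron(mat).get_generators()
    # The V-representation of a cone lists the origin and the rays;
    # keep the rays and normalize each one to sum to 1.
    rays = []
    for k in range(ext.row_size):
        v = np.array(ext[k][1:])
        if v.sum() > 0:
            rays.append(v / v.sum())
    return rays

for g in generators(VR, 1):
    print(g)  # posteriors at which action a_1 is a best response
```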

As each vector in K_i can be expressed as a non-negative linear combination of its generators, we can rewrite the constraints as follows: \sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig^r_i(\omega_j) =  p(\omega_j) \,\, \forall \omega_j \in \mathcal{S}

\mu^r_i \geq 0\,\, \forall a_i \in \mathcal{A},\, r = 1, \ldots, t_i

Adding up these constraints over all states, we see that

\sum_{j=1}^{|\mathcal{S}|}\sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig_i^r(\omega_j) =  1. Since each generator sums to 1 over states, this means that \sum_{i}\sum_{r}\mu^r_i = 1. In other words, the prior distribution p(\cdot) can be expressed as a convex combination of other distributions. In this way we arrive at another popular formulation of the persuasion problem, in terms of finding a decomposition of the prior into a convex combination of possible posteriors. Notice that the set of possible posterior distributions that need to be considered is limited to the generators of the corresponding cones K_i. We illustrate why this is useful in a moment. For now, let me point out a connection to the other magical phrase that appears in the context of persuasion: concavification.

First, express the persuasion problem in terms of the weights assigned to the generators: \max \sum_{a_i \in \mathcal{A}}\sum_{r=1}^{t_i}[\sum_{\omega_j \in \mathcal{S}}V_S(a_i, \omega_j)g_i^r(\omega_j)]\mu^r_i

subject to \sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig^r_i(\omega_j) =  p(\omega_{j}) \,\, \forall \omega_j \in \mathcal{S}

\mu^r_i \geq 0\,\, \forall a_i \in \mathcal{A},\, r = 1, \ldots, t_i
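Continuing the running example, the sketch below assembles this LP from the generators computed earlier and solves it; its optimal value matches that of the obedience LP above.

```python
# Columns of G are the generators across all actions; obj holds the
# sender's expected payoff at each generator.
cols, obj = [], []
for i in range(nA):
    for g in generators(VR, i):
        cols.append(g)
        obj.append(VS[i] @ g)
G = np.column_stack(cols)      # one column per generator g_i^r
obj = np.array(obj)

res = linprog(-obj, A_eq=G, b_eq=p, bounds=(0, None))
print("sender's optimal value:", -res.fun)   # agrees with the obedience LP
print("weights mu on the generators:", res.x)
```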

Letting y be the vector of dual variables, the linear programming dual of the persuasion problem is:

\min \sum_{\omega_j \in \mathcal{S}}p(\omega_j)y_j

subject to \sum_{\omega_j \in \mathcal{S}}g^r_i (\omega_j)y_j  \geq \sum_{\omega_j \in \mathcal{S}}V_S(a_i, \omega_j)g_i^r(\omega_j)\,\, \forall a_i \in \mathcal{A},\, r = 1, \ldots, t_i
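A short sketch of the dual, using the G and obj assembled above; the dual variables y are free in sign, and by LP duality the optimal value coincides with that of the primal sketches.

```python
# Dual: min p . y subject to G^T y >= obj, y unrestricted in sign. The
# optimal y is an affine function of beliefs that dominates the sender's
# payoff at every generator; its value at the prior is the concavification.
res_dual = linprog(p, A_ub=-G.T, b_ub=-obj, bounds=(None, None))
print("dual value:", res_dual.fun)
```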

This dual problem characterizes the sender’s optimal value in terms of a concave envelope (since Kamenica & Gentzkow (2011), the most popular way to state a solution to the persuasion problem). Notice that the approach taken here shows clearly that the receiver’s preferences alone determine the set of posterior beliefs that will play a role. This idea is implicit in Lipnowski and Mathevet (2017). They introduce the notion of a posterior cover: a collection of sets of posterior beliefs, over each of which a given function is convex. When the given function is the best response correspondence of the receiver, this reduces precisely to the cones K_i of the obedience constraints.

To demonstrate the usefulness of the conic representation of the obedience constraints, let’s examine a persuasion problem with a one-dimensional state and action space described in Kolotilin and Wolitzky (2020). To simplify notation, assume that \mathcal{S} = \{1, 2, \ldots, |\mathcal{S}|\} and \mathcal{A} = \{1, \ldots, |\mathcal{A}|\}, and write \omega_j as j and a_i as i.

The goal of KW2020 is to provide a general approach to understanding qualitative properties of the optimal signal structure under two substantive assumptions. The first is that the sender’s utility is increasing in the receiver’s action. The second is called aggregate downcrossing, which implies that the receiver’s optimal action is increasing in his belief about the state.

1) V_S(i, j) > V_S(i-1, j)\,\, \forall i \in \mathcal{A} \setminus \{1\}\,\, \forall j \in \mathcal{S}

2) For all probability distributions q over \mathcal{S},

\sum_{j \in \mathcal{S}} [V_R(i,j)- V_R(i-1, j)]q_{j} \geq 0

\Rightarrow \sum_{j\in \mathcal{S}} [V_R(i', j)- V_R(i'-1, j)]q_{j} \geq 0\,\, \forall i' < i,\,\, i, i' \in \mathcal{A}
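Because condition 2 quantifies over all distributions q, it can be verified by solving small linear programs: for each pair i' < i, minimize the i'-th payoff difference over beliefs at which the i-th difference is non-negative, and check that the minimum is non-negative. A sketch, with a hypothetical three-action example:

```python
def aggregate_downcrossing(VR, tol=1e-9):
    """Check condition 2 for every pair i' < i via small LPs."""
    nA, nS = VR.shape
    diffs = VR[1:] - VR[:-1]   # row m is V_R(m+1, .) - V_R(m, .)
    for hi in range(len(diffs)):
        for lo in range(hi):
            # Minimize the lower difference over beliefs q in the simplex
            # at which the higher difference is non-negative.
            res = linprog(diffs[lo],
                          A_ub=[-diffs[hi]], b_ub=[0.0],
                          A_eq=[np.ones(nS)], b_eq=[1.0],
                          bounds=(0, None))
            if res.status == 0 and res.fun < -tol:
                return False
    return True

# A hypothetical 3-action, 2-state receiver payoff matrix.
VR3 = np.array([[1.0, 0.0],
                [0.8, 0.5],
                [0.0, 1.0]])
print(aggregate_downcrossing(VR3))   # True for this example
```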

A result in KW2020 is that it is without loss to assume each induced posterior distribution has at most binary support. We show this to be an immediate consequence of the properties of the generators of each K_i.

Given condition 1, the sender only cares about the downward obedience constraints. We show that the adjacent downward constraints suffice.

Note that for any i, i-1, i-2 \in \mathcal{A},

\sum_{j \in \mathcal{S}} [V_R(i, j)- V_R(i-2, j)]x(j,i)=

\sum_{j \in \mathcal{S}} [V_R(i, j)- V_R(i-1, j)]x(j,i)

+ \sum_{j \in \mathcal{S}} [V_R(i-1, j)- V_R(i-2, j)]x(j,i)

The first term on the right-hand side of the equality is non-negative by virtue of the adjacent obedience constraint holding. The second term is then non-negative by aggregate downcrossing, applied to the distribution proportional to x(\cdot,i). Hence, each K_i is described by a single inequality (together with non-negativity):

\sum_{j\in\mathcal{S}}[V_{R}(i, j)-V_{R}(i-1, j)]x(j,i) \geq 0.

It is straightforward to see that each generator takes one of the following two forms (a sketch enumerating them follows the list):

1) A single non-zero component with weight 1 assigned to a state j \in \mathcal{S} where V_{R}(i, j)-V_{R}(i-1, j) \geq 0.

2) Two non-zero components, one assigned to a state j \in \mathcal{S} where V_{R}(i, j)-V_{R}(i-1, j) > 0 and one to a state j'\in \mathcal{S} where V_{R}(i, j')-V_{R}(i-1, j') < 0. The value assigned to component j is \frac{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1}}{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1} + |V_{R}(i, j')-V_{R}(i-1, j')|^{-1}} while the value assigned to component j' is \frac{|V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1} + |V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}.
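These two forms can be enumerated directly. A minimal sketch, where c is the (hypothetical) vector of payoff differences c_j = V_R(i,j) - V_R(i-1,j):

```python
def single_inequality_generators(c):
    """Generators of {x >= 0 : c . x >= 0}, normalized to sum to 1."""
    nS = len(c)
    gens = []
    for j in range(nS):              # form 1: degenerate posteriors
        if c[j] >= 0:
            g = np.zeros(nS)
            g[j] = 1.0
            gens.append(g)
    for j in range(nS):              # form 2: binary-support posteriors
        for jp in range(nS):
            if c[j] > 0 and c[jp] < 0:
                g = np.zeros(nS)
                denom = 1.0 / abs(c[j]) + 1.0 / abs(c[jp])
                g[j] = (1.0 / abs(c[j])) / denom
                g[jp] = (1.0 / abs(c[jp])) / denom
                gens.append(g)       # here c . g = 0: the OC binds exactly
    return gens

c = np.array([2.0, -1.0, 1.0])       # hypothetical payoff differences
for g in single_inequality_generators(c):
    print(g, " obedient:", c @ g >= -1e-12)
```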

Note that items 1 and 2 correspond to Lemma 1 and Theorem 1 of KW2020 and follow immediately from the properties of the cones of the OC constraints.
