
The goal of this post is to highlight a feature of the Bayesian persuasion problem that seems useful but, as far as I can tell, is not explicitly stated anywhere.

Let \mathcal{S} \subset \mathbb{R} be the finite set of states and \mathcal{A} \subset \mathbb{R} the finite set of actions the receiver can take. Elements of each are denoted by \omega_{j} and a_{i}, respectively.

The value to the receiver (he/him) of choosing action a_i in state \omega_j is V_R(a_i, \omega_j). The payoff to the  sender (she/her) if the action that the receiver takes in state \omega_j is a_i is denoted V_S(a_i, \omega_j). Sender and receiver share  a common prior p over \mathcal{S}.

The persuasion problem can be formulated in terms of choosing a distribution over state and action pairs such that an obedience constraint holds for the receiver. Call this the obedience formulation. The more popular formulation, in terms of finding a concave envelope, is, as will be seen below, equivalent.

The sender commits to a mapping from the state to an action recommendation. Given a recommendation, the receiver can update their prior over states. Once the algebraic dust settles, it turns out all that matters is the joint probability of action and state. Thus, the sender’s problem reduces to choosing x(\omega_j,a_i), the joint probability of action a_i and state  \omega_j.  The sender’s optimization problem is:

\max_{x(\omega,a)} \sum_{i=1}^{|\mathcal{A}|}\sum_{j=1}^{|\mathcal{S}|}V_{S}(a_{i}, \omega_j)x(\omega_{j},a_{i})

subject to

\sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{i}, \omega_j)x(\omega_{j},a_{i}) \geq \sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{k}, \omega_j)x(\omega_{j},a_{i})\,\, \forall a_{i}, a_{k}

\sum_{i=1}^{|\mathcal{A}|}x(\omega_{j},a_{i}) = p(\omega_{j})\,\, \forall \omega_j \in \mathcal{S}

x(\omega_{j},a_{i}) \geq 0 \,\, \forall  \omega_j \in \mathcal{S} \& a_i \in \mathcal{A}

The first set of constraints are the obedience constraints (OC), which ensure that it is in the receiver’s interest to follow the sender’s recommendation.

The second ensures that the total probability weight assigned to actions recommended in state \omega_j matches the prior probability of state \omega_j being realized.
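Since the obedience formulation is just a linear program, it can be solved with off-the-shelf tools. Below is a minimal sketch, assuming the classic two-state, two-action example (a judge who wants to match the action to the state, a sender who always prefers conviction, and a prior of 0.3 on the guilty state); the numbers and names are illustrative, not from this post.

```python
# A numeric sketch of the obedience formulation as a linear program.
import numpy as np
from scipy.optimize import linprog

states = [0, 1]           # 0 = innocent, 1 = guilty
actions = [0, 1]          # 0 = acquit, 1 = convict
p = np.array([0.7, 0.3])  # common prior over states

# Receiver wants the action to match the state; sender prefers conviction.
V_R = lambda a, w: 1.0 if a == w else 0.0
V_S = lambda a, w: 1.0 if a == 1 else 0.0

nS, nA = len(states), len(actions)
idx = lambda j, i: j * nA + i  # flatten x(omega_j, a_i) into a vector

# Objective: maximize sum_{i,j} V_S(a_i, omega_j) x(omega_j, a_i).
c = np.zeros(nS * nA)
for j in states:
    for i in actions:
        c[idx(j, i)] = -V_S(i, j)  # linprog minimizes, so negate

# Obedience: for each recommendation a_i and deviation a_k,
# sum_j [V_R(a_k, w_j) - V_R(a_i, w_j)] x(w_j, a_i) <= 0.
A_ub, b_ub = [], []
for i in actions:
    for k in actions:
        if k == i:
            continue
        row = np.zeros(nS * nA)
        for j in states:
            row[idx(j, i)] = V_R(k, j) - V_R(i, j)
        A_ub.append(row)
        b_ub.append(0.0)

# Marginals: sum_i x(w_j, a_i) = p(w_j).
A_eq = np.zeros((nS, nS * nA))
for j in states:
    for i in actions:
        A_eq[j, idx(j, i)] = 1.0

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=p, bounds=[(0, None)] * (nS * nA))
print(round(-res.fun, 4))  # sender's optimal value (0.6 here)
```

With this data the LP recovers the familiar answer: the sender induces conviction with probability 0.6 by pooling all of the guilty mass with an equal mass of innocent states.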

The difficulty lies in dealing with the OC constraints. A common approach is to relax them using the method of Lagrange multipliers and then try to pin down the values of the multipliers. We take a different approach.

For each action a_i, let K_i be the polyhedral cone of non-negative vectors x(\cdot, a_i) defined by:

\sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{i}, \omega_j)x(\omega_{j},a_{i}) \geq \sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{k}, \omega_j)x(\omega_{j},a_{i})\,\, \forall a_{k}

By the Weyl-Minkowski theorem,  K_i is characterized by its generators (extreme rays), which we will denote by G_i = \{g^1_i, \ldots, g^{t_i}_i\}. The generators are non-negative and non-zero and can be normalized so that they sum to 1 which allows us to interpret them as probability distributions over the state space. For any generator (probability distribution) in G_i, action a_i is the best response of the receiver. Many of the qualitative features of a solution to the persuasion problem that have been identified in the literature are statements about the generators of K_i.

As each vector in K_i can be expressed as a non-negative linear combination of its generators, we can rewrite the constraints as follows:

\sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig^r_i(\omega_j) =  p(\omega_{j}) \,\, \forall \omega_j \in \mathcal{S}

\mu^r_i \geq 0\,\, \forall a_i \in \mathcal{A},\, r = 1, \ldots, t_i

Adding up these constraints, we see that

\sum_{j=1}^{|\mathcal{S}|}\sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig_i^r(\omega_j) =  1.

In other words, the prior distribution p(\cdot) can be expressed as a convex combination of other distributions. In this way we arrive at another popular formulation of the persuasion problem, in terms of finding a decomposition of the prior into a convex combination of possible posteriors. Notice that the set of possible posterior distributions that needs to be considered is limited to the generators of the corresponding cones K_i. We illustrate why this is useful in a moment. For now, let me point out a connection to the other magical phrase that appears in the context of persuasion: concavification.
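To make the decomposition concrete, consider a hypothetical two-state example (the numbers are illustrative, not from the text). Suppose \mathcal{S} = \{\omega_1, \omega_2\}, the prior is p = (0.7, 0.3), and the receiver best responds with a_1 at any posterior assigning at most 1/2 to \omega_2 and with a_2 otherwise. Then

p = 0.4\,(1, 0) + 0.6\,(0.5, 0.5)

with (1, 0) a generator of K_1 and (0.5, 0.5) a generator of K_2, so the decomposition weights are \mu_1 = 0.4 and \mu_2 = 0.6.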

First, express the persuasion problem in terms of the weight assigned to the generators:

\max \sum_{a_i \in \mathcal{A}}\sum_{r=1}^{t_i}[\sum_{\omega_j \in \mathcal{S}}V_S(a_i, \omega_j)g_i^r(\omega_j)]\mu^r_i

subject to

\sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig^r_i(\omega_j) =  p(\omega_{j}) \,\, \forall \omega_j \in \mathcal{S}

\mu^r_i \geq 0\,\, \forall a_i \in \mathcal{A},\, r = 1, \ldots, t_i

Letting y be the vector of dual variables, the linear programming dual of the persuasion problem is:

\min \sum_{\omega_j \in \mathcal{S}}p(\omega_j)y_j

subject to

\sum_{\omega_j \in \mathcal{S}}g^r_i (\omega_j)y_j  \geq \sum_{\omega_j \in \mathcal{S}}V_S(a_i, \omega_j)g_i^r(\omega_j)\,\, \forall a_i \in \mathcal{A},\, r = 1, \ldots, t_i

This dual problem characterizes the sender’s optimal value in terms of a concave envelope (since Kamenica & Gentzkow (2011), this is the most popular way to state a solution to the persuasion problem). Notice that the approach taken here shows clearly that the receiver’s preferences alone determine the set of posterior beliefs that will play a role. This idea is implicit in Lipnowski and Mathevet (2017). They introduce the notion of a posterior cover: a collection of sets of posterior beliefs, over each of which a given function is convex. When the given function is the best response correspondence of the receiver, this reduces precisely to the cones of the obedience constraints.

To demonstrate the usefulness of the conic representation of the obedience constraints, let’s examine a persuasion problem with one-dimensional state and action spaces described in Kolotilin and Wolitzky (2020). To simplify notation, assume that \mathcal{S} = \{1, 2, \ldots, |\mathcal{S}|\} and \mathcal{A} = \{1, \ldots, |\mathcal{A}|\}. Write \omega_j as j and a_i as i.

The goal of KW2020 is to provide a general approach to understanding qualitative properties of the optimal signal structure under two substantive assumptions. The first is that the sender’s utility is increasing in the receiver’s action. The second is called aggregate downcrossing, which implies that the receiver’s optimal action is increasing in his belief about the state.

1) V_S(i, j) > V_S(i-1, j)\,\, \forall i \in \mathcal{A} \setminus \{1\}\,\, \forall j \in \mathcal{S}

2) For all probability distributions q over \mathcal{S},

\sum_{j \in \mathcal{S}} [V_R(i,j)- V_R(i-1, j)]q_{j} \geq 0

\Rightarrow \sum_{j\in \mathcal{S}} [V_R(i', j)- V_R(i'-1, j)]q_{j} \geq 0\,\, \forall i' < i,\,\, i, i' \in \mathcal{A}

A result in KW2020 is that it is without loss of generality to assume that each induced posterior distribution has at most binary support. We show this to be an immediate consequence of the properties of the generators of each K_i.

Given condition 1, the sender only cares about the downward obedience constraints. We show that the adjacent downward constraints suffice.

Note that for any i, i-1, i-2 \in \mathcal{A},

\sum_{j \in \mathcal{S}} [V_R(i, j)- V_R(i-2, j)]x(i,j)=

\sum_{j \in \mathcal{S} } [V_R(i, j)- V_R(i-1, j)]x(i,j)

+ \sum_{j \in \mathcal{S} } [V_R(i-1, j)- V_R(i-2, j)]x(i,j)

The first term on the right hand side of the equality is non-negative by virtue of the adjacent obedience constraint holding. The second term is non-negative by aggregate downcrossing. Hence, each K_i is described by a single inequality:

\sum_{j\in\mathcal{S}}[V_{R}(i, j)-V_{R}(i-1, j)]x(i,j) \geq 0.

It is straightforward to see that each generator takes one of the following forms:

1) A single non-zero component with weight 1 assigned to a state j \in \mathcal{S} where V_{R}(i, j)-V_{R}(i-1, j) \geq 0.

2) Two non-zero components, one assigned to a state j \in \mathcal{S} where V_{R}(i, j)-V_{R}(i-1, j) \geq 0 and one to a state j'\in \mathcal{S} where V_{R}(i, j')-V_{R}(i-1, j') < 0. The value assigned to component j is

\frac{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1}}{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1}  + |V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}

while the value assigned to component j' is

\frac{|V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1}  + |V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}.

Note that items 1 and 2 correspond to Lemma 1 and Theorem 1 of KW2020 and follow immediately from the properties of the cone of the OC constraints.
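The two cases above are easy to verify numerically. Here is a minimal sketch, assuming made-up values for the adjacent utility differences d_j = V_R(i, j) - V_R(i-1, j); the array d and the helper below are illustrative, not from KW2020.

```python
# Construct a binary-support generator of the single-inequality cone
# sum_j d_j x_j >= 0 (intersected with the non-negative orthant).
import numpy as np

d = np.array([2.0, 0.5, -1.0, -4.0])  # hypothetical differences across four states

def generator(j, jp, d):
    """Binary-support generator with d[j] >= 0 > d[jp], normalized to sum
    to 1; the two weights are proportional to the inverse magnitudes of
    the corresponding utility differences."""
    wj, wjp = 1.0 / abs(d[j]), 1.0 / abs(d[jp])
    g = np.zeros(len(d))
    g[j] = wj / (wj + wjp)
    g[jp] = wjp / (wj + wjp)
    return g

g = generator(0, 2, d)
print(g)             # a probability distribution supported on two states
print(np.dot(d, g))  # ~0: the obedience inequality is tight on this ray
```

The generator is a probability distribution over states, and it makes the single obedience inequality hold with equality, exactly as in item 2.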

Serious infectious diseases are a prime example of a public bad (non-exclusive and non-congestible). We limit them by restricting behavior and/or getting individuals to internalize the externalities they generate. For example, one could mandate masks in public places. To be effective, this requires monitoring and punishment. Unpleasant, but we know how to do this. Or, one could hold those who don’t wear masks responsible for the costs they impose on those whom they infect. It is unclear exactly how we would implement this, so it is impractical. However, it is still interesting to speculate about how one might do this. Coase pointed out that if one could tie the offending behavior to something that was excludable, we would be in business.

To my mind, an obvious candidate is medical care. A feature of infectious diseases is that behavior which increases the risk of infection to others also increases it for oneself. Thus, those who wish to engage in behavior that increases the risk of infection should be allowed to do so provided they waive the right to medical treatment for a defined period should they contract the infection. If this is unenforceable, perhaps something `weaker’ would do: treatment will not be covered by insurance, or the subject will be accorded lowest priority when treatment capacity is scarce.

How exactly could such a scheme be implemented? To begin with, one needs to define which behaviors count, get the agent to explicitly waive the right when engaging in them, and then make sure medical facilities are made aware of the waiver. We have some ready-made behaviors that make this easy: going to a bar or gym, and indoor dining. The rough principle is any activity with R_0 > 1 whose access is controlled by a profit-seeking entity. The profit-seeking entity obtains the waiver from the interested agent as a condition of entry (this would have to be monitored by the state). The waiver releases the profit-seeking entity from liability. The waiver enters a database that is linked to health records (probably the biggest obstacle).

The efficacy of lockdowns was debated at the start of the pandemic and continues to this day. Sweden, famously, chose not to implement a lockdown. As Anders Tegnell remarked:

`Closedown, lockdown, closing borders — nothing has a historical scientific basis, in my view.’

Lyman Stone of the American Enterprise Institute expresses it more forcefully:

`Here’s the thing: there’s no evidence of lockdowns working. If strict lockdowns actually saved lives, I would be all for them, even if they had large economic costs. But the scientific and medical case for strict lockdowns is paper-thin.’

The quotes above reveal an imprecision at the heart of the debate. What exactly is a lockdown? Tegnell uses a variety of words that suggest a variety of policies that can be lumped together. Stone is careful to say there is no evidence in support of strict lockdowns, suggesting that `less’ strict lockdowns might be beneficial. So, let’s speculate about the other extreme: no lockdown, relaxed or otherwise.

The presence of infected individuals increases the cost of certain interactions. If infected individuals don’t internalize this cost, then, in the absence of any intervention, we will observe economic activity fall below its efficient level.

How will this manifest itself? Agents may adopt relatively low cost precautionary actions like wearing masks. On the consumer side they will substitute away from transactions that expose them to the risk of infection. For example, take out rather than sit down and delivery versus shopping in person. In short, we will see a drop in the demand for certain kinds of transactions. Absent subsidies for hospital care, we should expect an increase in price (or wait times) for medical care further incentivizing precautionary actions on the part of individuals.

The various models of network formation in the face of contagion that we have (e.g., Erol and Vohra (2020)) all suggest we will see changes in how individuals socialize. They will reduce the variety of their interactions and concentrate them in cliques of `trusted’ agents.

On the supply side, firms will have to make costly investments to reduce the risk of infection to customers and possibly workers. The ability of firms to pass these costs onto their customers or their suppliers will depend on the relative bargaining power of each. Restaurant workers, for example, may demand increased compensation for the risks they bear, but this will occur at the same time as a drop in demand for their services.

To summarize, a `no lockdown’ policy will, over time, resemble a lockdown policy. Thus, the question is whether there is a  coordinated lockdown policy that is superior to an uncoordinated one that emerges endogenously.

One of the delights of pursuing a relatively young discipline is that one meets its pioneers. As one grows old in the discipline, so do the pioneers, who eventually pass into `the undiscovered country from whose bourn no traveler returns.’ Overlooked, at least by me, was that one also meets, in the chrysalis stage, those who will eventually lead the discipline into the next century. It was the untimely passing of William Sandholm on July 6th of this year, that brought this to mind.

I first met Bill in 1998. I had just moved to MEDS and he was on his way out as a newly minted PhD. He, a shiny new penny, and myself on the way to becoming so much loose change.

Within a decade, Bill rose to prominence as an authority on Evolutionary Game Theory. His book, “Population Games and Evolutionary Dynamics” became the standard reference for population games. The concept of evolutionary implementation can be credited to him.

Bill was also a provider of public goods. He wrote and made freely available software for working with evolutionary dynamics, served on panels and editorial boards.

As I recall Bill, I am reminded of a line from Mary Chase’s play Harvey, uttered by the main character, Elwood Dowd:

Years ago my mother used to say to me, she’d say, ‘In this world, Elwood, you must be’ – she always called me Elwood – ‘In this world, Elwood, you must be oh so smart or oh so pleasant.’ Well, for years I was smart. I recommend pleasant. You may quote me.

Bill was both.

Will widely available and effective tests for COVID-19 awaken the economy from its COVID-induced coma? Paul Romer, for one, thinks so. But what will each person do with the information gleaned from the test? Should we expect someone who has tested positive for the virus to stay home and someone who has tested negative to go to work? If the first receives no compensation for staying home, she will leave for work. The second, anticipating that infected individuals have an incentive to go to work, will choose to stay home. As a result, the fraction of the population out and about will have an infection rate exceeding that in the population at large.

In a new paper by Rahul Deb, Mallesh Pai, Akhil Vohra and myself, we argue that widespread testing alone will not solve this problem. Testing in concert with subsidies will. We propose a model in which both testing and transfers are targeted. We use it to jointly determine where agents should be tested and how they should be incentivized. The idea is straightforward. In the absence of widespread testing to distinguish between those who are infected and those who are not, we must rely on individuals to sort themselves. They are in the best position to determine the likelihood they are infected (e.g., based on private information about exposures, how rigorously they have been distancing, etc.). Properly targeted testing with tailored transfers gives them the incentive to do so.

We also distinguish between testing at work and testing `at home’. An infected person who leaves home to be tested at work poses an infection risk to others who choose to go outside. Testing `at home’ should be interpreted as a way to test an individual without increasing exposure to others. Our model also suggests who should be tested at work and who should be tested at home.

On the 3rd of July, 1638, George Garrard  wrote Viscount Wentworth to tell him:

The Plague is in Cambridge; no Commencement at either of the Universities this year.

On October 2nd of that same year, Cambridge canceled all lectures. Even if history does not repeat (but historians do), one is tempted to look to the past for hints about the future.

From the Annals of Cambridge (compiled by Charles Henry Cooper) we learn that the plague, combined with the residency requirements for a degree at Oxford, prompted a rush of Oxford students to Cambridge to obtain their Masters of Arts degrees. We know this from an anonymous letter to Oxford’s Chancellor:

…..many of Batchelor of Arts of Oxford came this Year for their Degrees of Masters of Arts here, which this Year they could not obtain at Oxford, which I endeavored to prevent……..

This prompted a complaint to Cambridge. Its vice-chancellor replied,

I Pray receive this assurance from me, and I doubt not but the Practice of our University will make it good……

Oxford, in the meantime, maintained country homes for its scholars where they could hide from the Black Death. The plague lowered property values which allowed the colleges to expand their land holdings.

What effect on the intellectual life of the University? Anna Campbell’s 1931 book entitled `The Black Death and Men of Learning‘ estimates that about a third of European intellectual leaders perished during the plague and Universities were in a precarious position.

James Courtenay, writing in 1980 with access to more detailed data about Oxford, suggests a less bleak outcome.

The mortality rate was not particularly high, either of brilliant or of marginal scholars and masters. The enrollment levels across the next few decades do not seem to have been seriously affected.

He notes an argument for a drop in the quality of higher education but that would have been a response to a drop in the quality of primary education.

Some days ago I learnt that a job offer to a promising postdoc I advise evaporated. Not unexpected in these times, but disappointing nevertheless. There are now about 300 Universities with hiring pauses or freezes in place.

For Universities that are tuition driven, this is understandable. For those with large endowments, of which a large portion is unrestricted, this is puzzling. It is true that about 75% of all US university endowment funds are invested in equities and these have declined since the start of the pandemic. But the 3-month treasury rate is, at the time I write this, at 0.22%. Why aren’t they borrowing? More generally, why don’t we see consumption smoothing?

An interesting paper by Brown, Dimmock, Kang, and Weisbenner (2014) documents how University endowments respond to shocks. They write:

Our primary finding is that university endowments respond asymmetrically to contemporaneous positive and negative financial shocks. In response to contemporaneous positive shocks, endowments tend to leave current payouts unchanged. Such behavior is consistent with endowments following their stated payout policies, which are based on past endowment values and not current returns, in order to smooth payouts (e.g., pay out 5 percent of the past three-year average of endowment values).

However, following contemporaneous negative shocks, endowments actively reduce payout rates. Unlike their response to positive shocks, this behavior is inconsistent with endowments following their standard smoothing rules. This asymmetry in the response to positive and negative shocks is especially strong if we explicitly control for the payout rate that is implied by the universities’ stated payout rules (something we do for a subsample of the endowments for which we have sufficient information to precisely document their payout rules). We also fail to find consistent evidence that universities change endowment payouts to offset shocks to other sources of university revenues. These findings, which we confirm through several robustness checks, suggest that endowments’ behavior differs from that predicted by several normative models of endowment behavior.

They argue that their data supports the idea that Universities are engaged in endowment hoarding, i.e.,  maintenance of the endowment is treated as an end in itself. The Association for American Universities argues that endowment hoarding is a myth, see item 9 at this link.  Their response confirms the 3 year average rule but is silent on the asymmetric response to shocks reported above.

More generally, one might ask what is the purpose of a University endowment? Hansmann (1990) offers an interesting discussion of why a University even has an endowment (other enterprises are run through a mixture of debt and equity).  Tobin (1974) articulated one for modeling purposes which I suspect captures what many have in mind:

The trustees of an endowed institution are the guardians of the future against the claims of the present. Their task is to preserve equity among generations. The trustees of an endowed university … assume the institution to be immortal.

If one takes the principle of intergenerational equity seriously, then, would it not make sense to borrow from a better future into a worse present? Unless, of course, it is expected that the future will be even worse than today.

The race to publish COVID-19 related papers is on, and I am already behind. Instead, I will repurpose a paper by Eduard Talamas and myself on networks and infections which is due out in GEB.

It is prompted by the following question: if you are given the option to distribute—without cost to you or anyone else—a perfectly safe but only moderately effective vaccine for a viral infection, should you? That we’ve posed it means the answer must be no or at least maybe not.

Unsurprisingly, it has to do with incentives. When the risk of becoming infected from contact declines, individuals tend to be less circumspect about coming into contact with others. This is risk compensation, first suggested by Charles Adams  in 1879 and popularized by Sam Peltzman in the 1970’s.

Therefore, the introduction of a vaccine has two effects. On the one hand, it reduces the probability that an individual becomes infected upon contact. On the other hand, it decreases individuals’ incentives to take costly measures to avoid contact. If the second effect outweighs the first, there will be an increase in infections upon the introduction of a moderately effective vaccine.

These are statements about infection rates not welfare. Individuals make trade-offs. In this case between the risk of infection and the costs of avoiding it. Therefore, observing that an individual’s infection probability will increase upon introduction of a partially effective vaccine is insufficient to argue against introduction.

In our paper, Eduard and I show that the introduction of a vaccine whose effectiveness falls below some threshold could make everyone worse off, even when each individual is perfectly rational and bears the full cost of becoming infected. If the vaccine is highly effective, this outcome is reversed. This is because risky interactions can be strategic complements. An individual’s optimal amount of risky interactions can be increasing in the amount of risky interactions that others take.

To illustrate, call two individuals that engage in risky interactions partners. Every risky interaction that Ann’s partner Bob has with Chloe affects Ann’s incentives to have risky interactions with Chloe in two countervailing ways. It increases Chloe’s infection probability. But it also increases the probability that Ann is infected conditional on Chloe being infected—because if Chloe is infected, chances are that Ann’s partner Bob is also infected. Given that a risky interaction between Ann and Chloe only increases the probability that Ann becomes infected when Chloe is infected and Ann is not, the combination of these effects can lead to an increase in Ann’s incentives to engage with Chloe and her partners when Bob engages with Chloe.

One might ask: given the huge trove of papers on epidemiological models, surely this effect has been identified and discussed before? No, or at least not as far as we could tell. This is because we depart from a standard feature of these models. We allow agents to strategically choose their partners, instead of only allowing them to choose the number of partners and then having matches occur uniformly at random.

This morning, a missive from the Econometric Society arrived in my inbox announcing “two modest fees associated with the submission and publication of papers in its three journals.” As of May 1st, 2020, the Society will assess a submission fee of $50 and a page charge of $10 per page for accepted papers. With papers on the short side running to around 30 pages plus 10-page appendices, this comes out to about $400. By the standards of the natural sciences this is indeed modest.

At the low end, the American Meteorological Society charges $120 per page with no submission fee. In the middle tier, the largest open-access publishers — BioMed Central and PLoS — charge $1,350–2,250 to publish peer-reviewed articles in many of their journals, and their most selective offerings charge $2,700–2,900. At the luxury end of the market is the Proceedings of the National Academy of Sciences, which starts at $1,590 for 6 pages and rises up to $4,215 for a 12-page paper.

My colleague Aislinn Bohren has suggested rewarding referees with free page coupons: publish one page free for each five pages you referee. This may suffer the same fate as the Capitol Hill Baby Sitting co-operative.

In the short run the effect will be to drive papers to JET and GEB as not all academics have research budgets which will cover the fees. An alternative is to submit the paper for $50. If accepted, decline to have it published. Send it elsewhere and send a copy of the acceptance letter to one’s promotion and tenure committee. Voila, a new category in the CV: accepted at Econometrica but not published.



With the move to on-line classes after spring break in the wake of Covid-19, my University has allowed students to opt to take some, all or none of their courses as pass/fail this semester. By making it optional, students have the opportunity to engage in signaling. A student doing well entering into spring break may elect to take the course for a regular grade confident they will gain a high grade. A student doing poorly entering into spring break may elect to take the course pass/fail. It is easy to concoct a simplified model (think Grossman (1981) or Milgrom (1981)) where there is no equilibrium in which all students elect to take the course pass/fail. The student confident of being at the top of the grade distribution has an incentive to choose the regular grading option. The next student will do the same for fear of signaling they had a poor grade and so on down the line. In equilibrium all the private information will unravel.

This simple intuition ignores the heterogeneity in student conditions. It is possible that a student with a good score going into spring break may now face straitened circumstances after spring break. How they decide depends on what inferences they think others (employers, for example) will make about grades earned during this period. Should an employer simply ignore any grades earned during this period and should Universities issue Covid-19 adjusted GPAs? Should an employer conclude that a student with a poor grade is actually a good student (because they did not choose the pass/fail option) who has suffered bad luck?
