
Serious infectious diseases are a prime example of a public bad (non-exclusive and non-congestible). We limit them by restricting behavior and/or getting individuals to internalize the externalities they generate. For example, one could mandate masks in public places. To be effective this requires monitoring and punishment. Unpleasant, but we know how to do this. Or, one could hold those who don’t wear masks responsible for the costs they impose on those whom they infect. It is unclear exactly how we would implement this, so it is impractical. However, it is still interesting to speculate about how one might do it. Coase pointed out that if one could tie the offending behavior to something that was excludable, we would be in business.

To my mind an obvious candidate is medical care. A feature of infectious diseases is that behavior which increases the risk of infection to others also increases it for oneself. Thus, those who wish to engage in behavior that increases the risk of infection should be allowed to do so provided they waive the right to medical treatment for a defined period should they contract the infection. If this is unenforceable, perhaps something weaker would do: treatment will not be covered by insurance, or the subject will be accorded lowest priority when treatment capacity is scarce.

How exactly could such a scheme be implemented? To begin with, one needs to define which behaviors count, get the agent to explicitly waive the right when engaging in them, and then make sure medical facilities are made aware of it. We have some ready-made behaviors that make this easy: going to a bar or gym, and indoor dining. The rough principle is any activity with an ${R_0 > 1}$ whose access is controlled by a profit-seeking entity. The profit-seeking entity obtains the waiver from the interested agent as a condition of entry (this would have to be monitored by the state). The waiver releases the profit-seeking entity from liability. The waiver enters a database that is linked to health records (probably the biggest obstacle).

The efficacy of lockdowns was debated at the start of the pandemic and continues to be debated to this day. Sweden, famously, chose not to implement a lockdown. As Anders Tegnell remarked:
‘Closedown, lockdown, closing borders — nothing has a historical scientific basis, in my view.’

Lyman Stone of the American Enterprise Institute expresses it more forcefully:

‘Here’s the thing: there’s no evidence of lockdowns working. If strict lockdowns actually saved lives, I would be all for them, even if they had large economic costs. But the scientific and medical case for strict lockdowns is paper-thin.’

The quotes above reveal an imprecision at the heart of the debate. What exactly is a lockdown? Tegnell uses a variety of words that suggest a variety of policies that can be lumped together. Stone is careful to say there is no evidence in support of strict lockdowns, suggesting that ‘less strict’ lockdowns might be beneficial. So, let’s speculate about the other extreme: no lockdown, relaxed or otherwise.

The presence of infected individuals increases the cost of certain interactions. If infected individuals don’t internalize this cost, then, in the absence of any intervention, we will observe a reduction in economic activity below the efficient level.

How will this manifest itself? Agents may adopt relatively low cost precautionary actions like wearing masks. On the consumer side, they will substitute away from transactions that expose them to the risk of infection: for example, take-out rather than sit-down dining, and delivery rather than shopping in person. In short, we will see a drop in the demand for certain kinds of transactions. Absent subsidies for hospital care, we should expect an increase in the price of (or wait times for) medical care, further incentivizing precautionary actions on the part of individuals.

The various models of network formation in the face of contagion that we have (e.g., Erol and Vohra (2020)) all suggest we will see changes in how individuals socialize. They will reduce the variety of their interactions and concentrate them in cliques of ‘trusted’ agents.

On the supply side, firms will have to make costly investments to reduce the risk of infection to customers and possibly workers. The ability of firms to pass these costs on to their customers or their suppliers will depend on the relative bargaining power of each. Restaurant workers, for example, may demand increased compensation for the risks they bear, but this will occur at the same time as a drop in demand for their services.

To summarize, a ‘no lockdown’ policy will, over time, resemble a lockdown policy. Thus, the question is whether there is a coordinated lockdown policy that is superior to the uncoordinated one that emerges endogenously.

Will widely available and effective tests for COVID-19 awaken the economy from its COVID-induced coma? Paul Romer, for one, thinks so. But what will each person do with the information gleaned from the test? Should we expect someone who has tested positive for the virus to stay home and someone who has tested negative to go to work? If the first receives no compensation for staying home, she will leave for work. The second, anticipating that infected individuals have an incentive to go to work, will choose to stay home. As a result, the fraction of the population out and about will have an infection rate exceeding that in the population at large.

In a new paper by Rahul Deb, Mallesh Pai, Akhil Vohra and myself we argue that widespread testing alone will not solve this problem. Testing in concert with subsidies will. We propose a model in which both testing and transfers are targeted. We use it to jointly determine where agents should be tested and how they should be incentivized. The idea is straightforward. In the absence of widespread testing to distinguish between those who are infected and those who are not, we must rely on individuals to sort themselves. They are in the best position to determine the likelihood they are infected (e.g., based on private information about exposures, how rigorously they have been distancing, etc.). Properly targeted testing with tailored transfers gives them the incentive to do so.

We also distinguish between testing ‘at work’ and testing ‘at home’. An infected person who leaves home to be tested at work poses an infection risk to others who choose to go outside. Testing ‘at home’ should be interpreted as a way to test an individual without increasing exposure to others. Our model also suggests who should be tested at work and who should be tested at home.

An agent with an infectious disease confers a negative externality on the rest of the community. If the cost of infection is sufficiently high, they are encouraged, and in some cases required, to quarantine themselves. Is this the efficient outcome? One might wonder if a Coasian approach would generate it instead: define a right to walk around when infected which can be bought and sold. Alas, infection has the nature of a public bad, which is non-rivalrous and non-excludable. There is no efficient, incentive compatible, individually rational (IR) mechanism for the allocation of such public bads (or goods). So, something has to give. The mandatory quarantine of those who might be infected can be interpreted as relaxing the IR constraint for some.

If one is going to relax the IR constraint, it is far from obvious that it should be the IR constraint of the infected. What if the costs of being infected vary dramatically? Imagine a well-defined subset of the population bears a huge cost for infection while the cost for everyone else is minuscule. If that subset is small, then mandatory quarantine (and other mitigation strategies) could be far from efficient. It might be more efficient for the subset that bears the larger cost of infection to quarantine themselves from the rest of the community.

Over a Rabelaisian feast with convivial company, conversation turned to a Twitter contretemps between economic theorists known to us at the table. Its proximate cause was the design of the incentive auction for radio spectrum. The curious can dig around on Twitter for the cut and thrust. A summary of the salient economic issues might be helpful for those following the matter.

Three years ago, in the cruelest of months, the FCC conducted an auction to reallocate radio spectrum. It had a procurement phase in which spectrum would be purchased from current holders and a second phase in which it was resold to others. The goal was to shift spectrum, where appropriate, from current holders to others who might use this scarce resource more efficiently.

It is the procurement phase that concerns us. The precise details of the auction in this phase will not matter. Its design is rooted in Ausubel’s clinching auction by way of Bikhchandani et al (2011) culminating in Milgrom and Segal (2019).

The pricing rule of the procurement auction was chosen under the assumption that each seller owned a single license. If this assumption fails, it allows a seller with multiple licenses to engage in what is known as supply reduction to push up the price. Even if each seller initially owned a single license, a subset of sellers could benefit from merging their assets and coordinating their bids (or an outsider could come in and aggregate some sellers prior to the auction). A recent paper by my colleagues Doraszelski, Seim, Sinkinson and Wang offers estimates of how much sellers might have gained from strategic supply reduction.

Was the choice of pricing rule a design flaw? I say: compared to what? How about the VCG mechanism? It would award a seller owning multiple licenses the marginal product associated with their set of licenses. In general, if the assets held by sellers are substitutes for each other, the marginal product of a set will exceed the sum of the marginal products of its individual elements. Thus, the VCG auction would have left the seller with higher surplus than they would have obtained under the procurement auction assuming no supply reduction. As noted in Paul Milgrom’s book, when goods are substitutes, the VCG auction creates an incentive for mergers. This is formalized in Sher (2010). The pricing rule of the procurement auction could be modified to account for multiple ownership (see Bikhchandani et al (2011)) but it would have the same qualitative effect: a seller would earn a higher surplus than they would have obtained under the procurement auction assuming no supply reduction. A second point of comparison would be an auction explicitly designed to discourage mergers of this kind. If memory serves, this reduces the auction to a posted price mechanism.
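A toy numeric example of why substitutes create merger incentives under VCG. The buyer’s values below are hypothetical, sellers are assumed to have zero reservation values, and each seller’s VCG payment is taken to be the marginal product of its holdings.

```python
# Hypothetical buyer values over sets of two licenses, A and B.
v = {
    frozenset(): 0,
    frozenset("A"): 10,
    frozenset("B"): 10,
    frozenset("AB"): 12,  # substitutes: v(AB) < v(A) + v(B)
}

FULL = frozenset("AB")

def marginal_product(subset):
    # Value of everything minus value of everything except `subset`.
    return v[FULL] - v[FULL - subset]

mp_A = marginal_product(frozenset("A"))    # 12 - 10 = 2
mp_B = marginal_product(frozenset("B"))    # 12 - 10 = 2
mp_merged = marginal_product(FULL)         # 12 - 0 = 12
# Owned separately the sellers collect 2 + 2 = 4; merged they collect 12.
```

The superadditivity of marginal products under substitutes is exactly the merger incentive described above.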

Was there anything that could have been done to discourage mergers? The auction did have reserve prices, so an upper limit was set on how much would be paid for licenses. Legal action is a possibility, but it’s not clear whether that could have been pursued without delaying the auction.

Stepping back, one might ask a more basic question: should the reallocation of spectrum have been done by auction? Why not follow Coase and let the market sort it out? The orthodox answer is no because of hold-up and transaction costs. However, as Thomas Hazlett has argued, there are transaction costs on the auction side as well.

I don’t often go to empirical talks, but when I do, I fall asleep. Recently, while so engaged, I dreamt of the ‘replicability crisis’ in Economics (see Chang and Li (2015)). The penultimate line of their abstract is the following bleak assessment:

‘Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.’

Eager to help my empirical colleagues snatch victory from the jaws of defeat, I did what all theorists do: build a model. Here it is.

The journal editor is the principal and the agent is an author. Agent has a paper characterized by two numbers ${(v, p)}$. The first is the value of the findings in the paper assuming they are replicable. The second is the probability that the findings are indeed replicable. The expected benefit of the paper is ${pv}$. Assume that ${v}$ is common knowledge but ${p}$ is the private information of agent. The probability that agent is of type ${(v,p)}$ is ${\pi(v,p)}$.

Given a paper, the principal can at a cost ${K}$ inspect the paper. With probability ${p}$ the inspection process will replicate the findings of the paper. Principal proposes an incentive compatible direct mechanism. Agent reports their type, ${(v, p)}$. Let ${a(v, p)}$ denote the interim probability that agent’s paper is provisionally accepted. Let ${c(v, p)}$ be the interim probability of agent’s paper not being inspected given it has been provisionally accepted. If a provisionally accepted paper is not inspected, it is published. If a paper subject to inspection is successfully replicated, the paper is published. Otherwise it is rejected and, per custom, the outcome is kept private. Agent cares only about the paper being accepted. Hence, agent cares only about

$\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p.$

The principal cares about replicability of papers and suffers a penalty of ${R > K}$ for publishing a paper that is not replicable. Principal also cares about the cost of inspection. Therefore she maximizes

$\displaystyle \sum_{v,p}\pi(v,p)[pv - (1-p)c(v,p)R]a(v,p) - K \sum_{v,p}\pi(v,p)a(v,p)(1-c(v,p))$

$\displaystyle = \sum_{v,p}\pi(v,p)[pv-K]a(v,p) + \sum_{v,p}\pi(v,p)a(v,p)c(v,p)[K - (1-p)R].$

The incentive compatibility constraint is
$\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p \geq a(v, p')c(v,p') + a(v, p')(1-c(v,p'))p.$

Recall, an agent cannot lie about the value component of the type.
We cannot screen on ${p}$, so all that matters is the distribution of ${p}$ conditional on ${v}$. Let ${p_v = E(p|v)}$. For a given ${v}$ there are only three possibilities: always accept, always reject, or inspect and accept if the findings replicate. The first possibility has an expected payoff of

$\displaystyle vp_v - (1-p_v) R = (v+R) p_v - R$

for the principal. The second possibility has value zero. The third has value ${ vp_v -K }$.
The principal prefers to accept immediately over inspection if

$\displaystyle (v+R) p_v - R > vp_v - K \Rightarrow p_v > (R-K)/R.$

The principal will prefer inspection to rejection if ${vp_v \geq K}$. The principal prefers to accept rather than reject if ${p_v \geq R/(v+R)}$.
Under a suitable condition on ${p_v}$ as a function of ${v}$, the optimal mechanism can be characterized by two cutoffs ${\tau_2 > \tau_1}$. Choose ${\tau_2}$ to be the smallest ${v}$ such that

$\displaystyle p_v \geq \max\left( \frac{R}{v+R}, \frac{R-K}{R} \right).$

Choose ${\tau_1}$ to be the largest ${v}$ such that ${p_v \leq \min\left( K/v, R/(v+R) \right)}$.
A paper with ${v \geq \tau_2}$ will be accepted without inspection. A paper with ${v \leq \tau_1}$ will be rejected. A paper with ${v \in (\tau_1, \tau_2)}$ will be provisionally accepted and then inspected.
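A minimal sketch of the three-way comparison above. The penalty ${R}$, inspection cost ${K}$ and the map from ${v}$ to ${p_v}$ are hypothetical numbers, chosen only to exhibit the cutoff structure.

```python
# The journal's per-v decision: compare the payoffs of the three options.
# R (penalty for publishing a non-replicable paper) and K (inspection
# cost) are made-up parameters.

R, K = 10.0, 1.0

def best_action(v, p_v):
    payoffs = {
        "accept": (v + R) * p_v - R,   # publish without inspection
        "inspect": v * p_v - K,        # inspect, publish if replicated
        "reject": 0.0,
    }
    return max(payoffs, key=payoffs.get)

# Suppose higher-value papers are more likely to replicate:
for v, p_v in [(1.0, 0.3), (5.0, 0.5), (20.0, 0.8), (100.0, 0.95)]:
    print(v, best_action(v, p_v))
```

With these numbers, low-${v}$ papers are rejected, mid-${v}$ papers are inspected, and only papers with ${p_v > (R-K)/R = 0.9}$ are accepted outright.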

For empiricists, the advice would be to shoot for high ${v}$ and damn the ${p}$!

More seriously, the model points out that even a journal that cares about replicability and bears the cost of verifying this will publish papers that have a low probability of being replicable. Hence, the presence of published papers that are not replicable is not, by itself, a sign of something rotten in Denmark.

One could improve outcomes by making authors bear the costs of a paper not being replicated. This points to a larger question. Replication is costly. How should the cost of replication be apportioned? In my model, the journal bore the entire cost. One could pass it on to the authors but this may have the effect of discouraging empirical research. One could rely on third parties (voluntary, like civic associations, or professionals supported by subscription). Or, one could rely on competing partisan groups pursuing their agendas to keep the claims of each side in check. The last seems at odds with the romantic ideal of disinterested scientists but could be efficient. The risk is partisan capture of journals which would shut down cross-checking.

When analyzing a mechanism it is convenient to assume that it is direct. The revelation principle allows one to argue that this restriction is without loss of generality. Yet, there are cases where one prefers to implement the indirect version of a mechanism rather than its direct counterpart. The clock version of the English ascending auction and the sealed bid second price auction are the best known example (one hopes not the only one). There are few theorems (i.e., I could not immediately recall any) that uniquely characterize a particular indirect mechanism. It would be nice to have more. What might such a characterization depend upon?

1) Direct mechanisms require that agents report their types. A concern for privacy could be used to ‘kill off’ a direct mechanism. However, one would first have to rule out the use of trusted third parties (either humans or computers implementing cryptographic protocols).

2) Indirect mechanisms can sometimes be thought of as extensive form games, and one might look for refinements of solution concepts for extensive form games that have no counterpart in the direct version of the mechanism. The notion of obvious strategy-proofness that appears here is an example. However, indirect mechanisms may introduce equilibria, absent in the direct counterpart, that are compelling for the agents but unattractive for the designer’s purposes.

3) One feature of observed indirect mechanisms is that they use simple message spaces, but compensate by using multiple rounds of communication. Thus, a characterization would need a constraint on message spaces coupled with a constraint on the rounds of communication.

According to the NY Times, some Californians

would have to cut their water consumption by 35 percent under the terms of a preliminary plan issued by state officials on Tuesday to meet a 25 percent mandatory statewide reduction in urban water use.

There is an obvious way to achieve this: raise the price of water. If it’s obvious, why wasn’t it the first thing California did some years back when it was clear that water was scarce? In some cases the hands of the state are tied by water rights allocated for commercial purposes, so let’s focus on household consumption.

We know that the first tool the state reaches for is regulation. See, for example, the following memo from the California State Water Board. Interestingly, it begins by noting that the state is in the 4th year of a drought! Eventually, it becomes clear that regulation is insufficient, and then price increases are considered. In fact, the water reduction targets quoted from the NY Times above come from a report by the Governor of the state that also urges the use of

rate structures and other pricing mechanisms

to achieve reductions. Again, why is price last rather than first? Is this because the state must maintain a credible reputation for not exploiting its monopoly power over water?

If one is going to reduce consumption by raising prices, should it be an across-the-board price increase? Note that consumption is metered, so the exact amount purchased by a household is known to the provider. The state also has access to other information: location, home size, family size and income. In principle, the state could price discriminate. Here is an example from the Irvine Ranch Water District. Each household is given an initial ‘allotment’ that depends on household size and the area that is landscaped. Exceed the allotment and the price of water rises. For more details and the impact on consumption see the following paper. Is it obvious that this is the ‘correct’ mechanism?
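A sketch of such an increasing block-rate schedule. The allotment formula and both rates below are invented for illustration; they are not the Irvine Ranch district’s actual numbers.

```python
# Hypothetical increasing block-rate water bill: usage inside the
# household's allotment is cheap, usage above it is priced steeply.

def monthly_bill(usage, household_size, landscaped_sqft):
    # Allotment grows with household size and landscaped area (made up).
    allotment = 50.0 * household_size + 0.02 * landscaped_sqft
    low_rate, high_rate = 1.0, 4.0  # $/unit within vs. above the allotment
    within = min(usage, allotment)
    over = max(usage - allotment, 0.0)
    return low_rate * within + high_rate * over
```

The marginal price a household faces thus depends on its own characteristics, which is one form the price discrimination described above can take.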

Uber posts a price ${p}$ per ride and keeps a commission ${\alpha}$ on the price. Suppose Uber is the only ride matching service in town. If ${D(p)}$ is the demand function for rides at per ride price ${p}$ and ${S(w)}$ is the supply curve for drivers at wage ${w}$ per ride, Uber must choose ${\alpha}$ and ${p}$ to solve the following:

$\displaystyle \max_{\alpha, p} \alpha p D(p)$

subject to

$\displaystyle D(p) \leq S((1-\alpha)p)$

The last constraint comes from the assumption that Uber is committed to ensuring that every rider seeking a ride at the posted price gets one.

Suppose Uber did not link the payment to the driver to the price charged to the rider in this particular way. Then, Uber would solve

$\displaystyle \max_{p,w} pD(p) - wS(w)$

subject to

$\displaystyle D(p) \leq S(w)$

The first optimization problem is clearly more restrictive than the second. Hence the claim that Uber is not profit maximizing. Which raises the obvious puzzle: why is Uber using a revenue sharing scheme?
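To make the first program concrete, here is a grid-search sketch with hypothetical linear demand ${D(p) = 100 - p}$ and supply ${S(w) = 2w}$. Since the objective is increasing in ${\alpha}$, for each ${p}$ the constraint pins down the best feasible ${\alpha}$.

```python
# Grid-search sketch of max_{alpha, p} alpha * p * D(p) subject to
# D(p) <= S((1 - alpha) * p). The functional forms D(p) = 100 - p and
# S(w) = 2w are made up, chosen only for illustration.

def D(p):
    return max(100.0 - p, 0.0)

def solve_linked(step=0.001):
    best = (0.0, None, None)  # (profit, p, alpha)
    p = step
    while p < 100.0:
        # Objective rises with alpha, so take the largest alpha that
        # keeps supply 2 * (1 - alpha) * p at least equal to demand.
        alpha = min(1.0, max(0.0, 1.0 - D(p) / (2.0 * p)))
        profit = alpha * p * D(p)
        if profit > best[0]:
            best = (profit, p, alpha)
        p += step
    return best

profit, p, alpha = solve_linked()
# Analytically the optimum here is p = 200/3, alpha = 3/4, profit = 5000/3.
```

Whether the second, unlinked program does strictly better depends on the primitives; the point in the text is only that it can never do worse.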

Sydney Afriat arrived at Purdue in the late 1960s with a Bentley in tow. Mort Kamien described him as having walked out of the pages of an Ian Fleming novel. Why he brought the Bentley was a puzzle, as there were no qualified mechanics as far as the eye could see. In Indiana, that is a long way. Afriat would take his Bentley on long drives, only to be interrupted by mechanical difficulties that necessitated the Bentley being towed to wait for parts or specialized help.

I came upon Afriat when I learnt about the problem of rationalizability.  One has a model of choice and a collection of observations about what an agent selected. Can one rationalize the observed choices by the given model of choice? In Afriat’s seminal paper on the subject, the observations consisted of price-quantity pairs for a vector of goods and a budget. The goal was to determine if the observed choices were consistent with an agent maximizing a concave utility function subject to the budget constraint. Afriat’s paper has prompted many other papers asking the same question for different models of choice. There is an aspect of these papers, including Afriat’s, that I find puzzling.

To illustrate, consider rationalizing expected utility (Eran Shmaya suggested that ‘expected consumption’ might be more accurate). Let ${S = \{1,2, \ldots, n\}}$ be the set of possible states. We are given a sequence of observations ${\{x^{i},p^{i}\}_{i=1}^{m}}$ and a single budget ${b}$. Here ${x^i_j}$ represents consumption in state ${j}$ and ${p^i_j}$ is the unit price of consumption in state ${j}$ in observation ${i}$. We want to know if there is a probability distribution over states, ${v=(v_{1},...,v_{n})}$, such that each ${x^i}$ maximizes expected utility at prices ${p^i}$. In other words, ${x^i}$ solves

$\displaystyle \max \sum_{j=1}^{n}v_{j}x_{j}$

subject to

$\displaystyle \sum_{j=1}^{n}p^i_{j}x_{j}\leq b$

$\displaystyle x_{j}\geq 0\,\,\forall j \in S$

The solution to the above program is obvious: identify the variable with the largest ratio of objective coefficient to price and make it as large as possible. It is immediate that a collection of observations ${\{x^{i},p^{i}\}_{i=1}^{m}}$ can be rationalized by a suitable set ${\{v_{j}\}_{j=1}^{n}}$ of nonnegative ${v_{j}}$’s, not all zero, if the following system has a feasible solution:

$\displaystyle \frac{v_{r}}{p^i_r}\geq \frac{v_{j}}{p^i_{j}} \,\,\forall j, \,\, x^i_r> 0$

$\displaystyle \sum_{j \in S}v_{j}=1$

$\displaystyle v_{j}\geq 0\,\,\forall j \in S$

This completes the task as formulated by Afriat. A system of inequalities has been identified that, if feasible, means the given observations can be rationalized. How hard is this to do in other cases? As long as the model of choice involves optimization and the optimization problem is well behaved, in that first-order conditions, say, suffice to characterize optimality, it’s a homework exercise. One can do this all day, thanks to Afriat: concave, additively separable concave, etc.
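The feasibility question can indeed be handed straight to an LP solver. The sketch below assumes scipy is available and uses made-up observations; it simply asks whether some probability vector ${v}$ satisfies the system above.

```python
# Feasibility check via linear programming: find a probability vector v
# with v_r / p_r >= v_j / p_j whenever x_r > 0 in some observation.
# scipy is assumed available; the observations below are made up.
import numpy as np
from scipy.optimize import linprog

def rationalizable(observations):
    """observations: list of (x, p) pairs over n states."""
    n = len(observations[0][0])
    rows = []
    for x, p in observations:
        for r in range(n):
            if x[r] > 0:
                for j in range(n):
                    if j != r:
                        # v_j / p_j - v_r / p_r <= 0
                        row = [0.0] * n
                        row[j] = 1.0 / p[j]
                        row[r] = -1.0 / p[r]
                        rows.append(row)
    res = linprog(
        c=np.zeros(n),                      # pure feasibility problem
        A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
        A_eq=np.ones((1, n)), b_eq=[1.0],   # v sums to one
        bounds=[(0, None)] * n,
    )
    return res.status == 0  # 0: feasible point found, 2: infeasible

# Consumption always in state 1: rationalizable. Switching to state 2
# just as state 1 becomes cheaper: not rationalizable.
obs_ok = [([1.0, 0.0], [0.5, 1.0]), ([1.0, 0.0], [0.4, 1.0])]
obs_bad = [([1.0, 0.0], [0.5, 1.0]), ([0.0, 1.0], [0.25, 1.0])]
```

This is the whole homework exercise: write down the inequalities, then check feasibility.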

Interestingly, no rationalizability paper stops at the point of identifying the inequalities. Even Afriat’s paper goes a step further and proceeds to ‘characterize’ when the observations can be rationalized. But feasibility of the inequalities themselves is just such a characterization. What more is needed?

Perhaps the characterization involving inequalities lacks ‘interpretation’. Or, if the given system for a set of observations were infeasible, we may be interested in the obstacle to feasibility. Afriat’s paper gave a characterization in terms of the strong axiom of revealed preference, i.e., an absence of cycles of certain kinds. But that is precisely the Farkas alternative to the system of inequalities identified in Afriat. The absence-of-cycles condition follows from the fact that the initial set of inequalities is associated with the problem of finding a shortest path (see the chapter on rationalizability in my mechanism design book). Let me illustrate with the example above. It is equivalent to finding a nonnegative and nontrivial solution to

$\displaystyle \frac{v_{r}}{v_j}\geq \frac{p^i_{r}}{p^i_{j}} \,\,\forall j, \,\, x^i_r> 0$

Take logs:

$\displaystyle \ln{v_r} - \ln{v_j} \geq \ln{\frac{p^i_{r}}{p^i_{j}}} \,\,\forall j, \,\, x^i_r> 0$

This is exactly the dual to the problem of finding a shortest path in a suitable network (I believe that Afriat has a paper, which I’ve not found, that focuses on systems of the form ${x_s - x_r < b_{rs}}$). The cycle characterization would involve products of terms like ${\frac{p^i_{r}}{p^i_{j}}}$ being less than 1 (or greater than 1, depending on convention). So, what would this add?
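The shortest-path reading can be made concrete with Bellman-Ford: build an edge ${r \to j}$ of weight ${\ln(p^i_j/p^i_r)}$ for each observation with ${x^i_r > 0}$; a strictly positive ${v}$ exists iff the graph has no negative-weight cycle. A sketch, with made-up observations:

```python
# Bellman-Ford sketch of the cycle characterization. Each observation
# with x_r > 0 yields, for every j, the constraint
#   ln v_j - ln v_r <= ln(p_j / p_r),
# encoded as an edge r -> j with weight ln(p_j / p_r).
import math

def has_rationalizing_v(observations):
    n = len(observations[0][0])
    edges = []
    for x, p in observations:
        for r in range(n):
            if x[r] > 0:
                for j in range(n):
                    if j != r:
                        edges.append((r, j, math.log(p[j] / p[r])))
    # Virtual source at distance 0 to every node; n + 1 rounds suffice.
    dist = [0.0] * n
    for _ in range(n + 1):
        updated = False
        for r, j, w in edges:
            if dist[r] + w < dist[j] - 1e-12:
                dist[j] = dist[r] + w
                updated = True
        if not updated:
            return True   # converged: no negative cycle, a v exists
    return False          # still relaxing: negative cycle

obs_ok = [([1.0, 0.0], [0.5, 1.0]), ([1.0, 0.0], [0.4, 1.0])]
obs_bad = [([1.0, 0.0], [0.5, 1.0]), ([0.0, 1.0], [0.25, 1.0])]
```

A negative cycle, when present, is exactly a cycle along which the product of the ${p^i_r/p^i_j}$ terms exceeds 1 — the violation the cycle characterization describes.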