
Around the mid-2010s, Google introduced automated bidding. Other platforms have followed suit.

Rather than bidding directly for an `eyeball', an advertiser delegates the bidding to the platform. In order to inform the bids that the platform will submit on their behalf, the advertiser submits two numbers to the platform. One is their budget and the second is their ROI target, which can be thought of as  \frac{\#\,\, of\,\, clicks}{cost}. Hence, the ROI target is the inverse of the cost per click.

Some observers have remarked that auto-bidding is strange because one asks the auctioneer themselves to bid on one’s behalf. Others have been inspired to focus on the design of auctions when bidders have an ROI constraint. This, I think, is misguided. 

First, just because the auctioneer’s chosen bidding language uses an ROI target does not mean that an ROI constraint enters bidders’ preferences. One should never confuse the message space of a mechanism with the preferences of the agents.

Second, once a bidder has submitted a budget and an ROI target, the subsequent auction is an irrelevance. Why? Suppose I submit a budget of B. Then, my ROI target says that the platform must deliver  \frac{B}{cost\,\, per \,\, click} clicks. For example, at a budget of $100 and an ROI target of 2, I am telling the platform that I will give them $100 in return for 200 clicks. Now, the platform has access not to a finite stock of clicks but to a flow. They can, given time, satisfy every bid. In short, the platform will get your $100. The only issue is when. The automated auction is merely an elaborate device for determining the rate at which different bidders receive a click. One can think of far simpler procedures to do this. For example, round robin or depleting budgets at a uniform rate.
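To make the arithmetic concrete, here is a minimal sketch in Python; the numbers and the pacing rule are hypothetical, not any platform's actual procedure. The budget and ROI target pin down the clicks owed, and a uniform-depletion scheduler simply spreads delivery over the campaign window.

# Minimal sketch: clicks owed by a (budget, ROI target) pair, and a uniform
# budget-depletion schedule. Hypothetical numbers, not any platform's pacing rule.

def clicks_owed(budget, roi_target):
    # ROI target = clicks / cost, so the clicks owed equal budget * ROI target.
    return budget * roi_target

def uniform_depletion_schedule(budget, roi_target, periods):
    # Spread the owed clicks evenly over the campaign window.
    total = clicks_owed(budget, roi_target)
    return [total / periods] * periods

print(clicks_owed(100, 2))                     # 200 clicks for a $100 budget at an ROI target of 2
print(uniform_depletion_schedule(100, 2, 10))  # 20 clicks per period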

The goal of this post is to highlight a feature of the Bayesian Persuasion problem that seems useful but, as far as I can tell, is not explicitly stated anywhere.

Let \mathcal{S} \subset \mathbb{R} be the finite set of states and \mathcal{A} \subset \mathbb{R} the finite set of actions the receiver can take. Elements of each are denoted by \omega_{j} and a_{i}, respectively.

The value to the receiver (he/him) of choosing action a_i in state \omega_j is V_R(a_i, \omega_j). The payoff to the sender (she/her) when the receiver takes action a_i in state \omega_j is denoted V_S(a_i, \omega_j). Sender and receiver share a common prior p over \mathcal{S}.

The persuasion problem can be formulated in terms of choosing a distribution over state and action pairs such that an obedience constraint holds for the receiver. Call this the obedience formulation. The more popular formulation, in terms of finding a concave envelope, is, as will be seen below, equivalent.

The sender commits to a mapping from the state to an action recommendation. Given a recommendation, the receiver can update their prior over states. Once the algebraic dust settles, it turns out that all that matters is the joint probability of action and state. Thus, the sender’s problem reduces to choosing x(\omega_j,a_i), the joint probability of action a_i and state \omega_j. The sender’s optimization problem is:

\max_{x(\omega,a)} \sum_{i=1}^{|\mathcal{A}|}\sum_{j=1}^{|\mathcal{S}|}V_{S}(a_{i}, \omega_j)x(\omega_{j},a_{i})

subject to

\sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{i}, \omega_j)x(\omega_{j},a_{i}) \geq \sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{k}, \omega_j)x(\omega_{j},a_{i})\,\, \forall a_{i}, a_{k}

\sum_{i=1}^{|\mathcal{A}|}x(\omega_{j},a_{i}) = p(\omega_{j})\,\, \forall \omega_j \in \mathcal{S}

x(\omega_{j},a_{i}) \geq 0 \,\, \forall  \omega_j \in \mathcal{S} \& a_i \in \mathcal{A}

The first set of constraints are the obedience constraints (OC), which ensure that it is in the receiver’s interest to follow the sender’s recommendation.

The second ensures that the total probability weight assigned to actions recommended in state \omega_j matches the prior probability of state \omega_j being realized.
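For a small instance, the obedience formulation above can be handed directly to an off-the-shelf LP solver. Here is a minimal sketch using scipy.optimize.linprog on a made-up 2-state, 2-action example; the payoff matrices and the prior are purely illustrative.

# Minimal sketch of the obedience formulation as an LP (made-up 2-state, 2-action example).
import numpy as np
from scipy.optimize import linprog

S, A = 2, 2                                  # number of states, number of actions
p = np.array([0.5, 0.5])                     # common prior (made up)
V_R = np.array([[1.0, -1.0],                 # V_R[i, j]: receiver's payoff from action i in state j
                [-2.0, 1.0]])
V_S = np.array([[0.0, 0.0],                  # V_S[i, j]: sender's payoff (she always prefers action 1)
                [1.0, 1.0]])

# x[i, j] = joint probability of recommending action i in state j, flattened row by row.
c = -V_S.flatten()                           # linprog minimizes, so negate the sender's objective

# Obedience: sum_j (V_R[i, j] - V_R[k, j]) x[i, j] >= 0 for every pair i != k.
A_ub, b_ub = [], []
for i in range(A):
    for k in range(A):
        if k != i:
            row = np.zeros(A * S)
            row[i * S:(i + 1) * S] = -(V_R[i] - V_R[k])   # rewritten in <= 0 form
            A_ub.append(row)
            b_ub.append(0.0)

# Marginals: sum_i x[i, j] = p[j] for every state j.
A_eq = np.zeros((S, A * S))
for j in range(S):
    A_eq[j, j::S] = 1.0
b_eq = p

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (A * S))
print(res.x.reshape(A, S))   # optimal joint distribution over (action, state)
print(-res.fun)              # sender's value, about 0.833 for these made-up payoffs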

The difficulty is dealing with the OC constraints. Many relax them using the method of Lagrange multipliers and try to pin down the values of the multipliers. We take a different approach.

For each action a_i, let K_i be the polyhedral cone defined by:

\sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{i}, \omega_j)x(\omega_{j},a_{i}) \geq \sum_{j=1}^{|\mathcal{S}|}V_{R}(a_{k}, \omega_j)x(\omega_{j},a_{i})\,\, \forall a_{k}

By the Weyl-Minkowski theorem, K_i is characterized by its generators (extreme rays), which we will denote by G_i = \{g^1_i, \ldots, g^{t_i}_i\}. The generators are non-negative and non-zero and can be normalized so that they sum to 1, which allows us to interpret them as probability distributions over the state space. For any generator (probability distribution) in G_i, action a_i is the best response of the receiver. Many of the qualitative features of a solution to the persuasion problem that have been identified in the literature are statements about the generators of K_i.

As each vector in K_i can be expressed as a non-negative linear combination of its generators, we can rewrite the constraints as follows: \sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig^r_i(\omega_j) =  p(\omega_j) \,\, \forall j \in \mathcal{S}

\mu^r_i \geq 0\,\, \forall i \in \mathcal{A},r = 1, \ldots, t_i

If we add up the constraints, we see that

\sum_{j=1}^{|\mathcal{S}|}\sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig_i^r(\omega_j) =  1.

In other words, the prior distribution p(\cdot) can be expressed as a convex combination of other distributions. In this way we arrive at another popular formulation of the persuasion problem, in terms of decomposing the prior into a convex combination of possible posteriors. Notice that the set of possible posterior distributions that need to be considered is limited to the generators of the corresponding cones K_i. We illustrate why this is useful in a moment. For now, let me point out a connection to the other magical phrase that appears in the context of persuasion: concavification.

First, express the persuasion problem in terms of the weight assigned to the generators: \max \sum_{a_i \in \mathcal{A}}\sum_{r=1}^{t_i}[\sum_{\omega_j \in \mathcal{S}}V_S(a_i, \omega_j)g_i^r(\omega_j)]\mu^r_i

subject to \sum_{i=1}^{|\mathcal{A}|}\sum_{r=1}^{t_i}\mu^r_ig^r_i(\omega_j) =  p(\omega_{j}) \,\, \forall j \in \mathcal{S}

\mu^r_i \geq 0\,\, \forall i \in \mathcal{A},r = 1, \ldots, t_i

Letting y be the vector of dual variables, the linear programming dual of the persuasion problem is:

\min \sum_{\omega_j \in \mathcal{S}}p(\omega_j)y_j

subject to \sum_{\omega_j \in \mathcal{S}}g^r_i (\omega_j)y_j  \geq \sum_{\omega_j \in \mathcal{S}}V_S(a_i, \omega_j)g_i^r(\omega_j)\,\, \forall i \in \mathcal{A},\,\, r = 1, \ldots, t_i

This dual problem characterizes the sender’s optimal value in terms of a concave envelope (since Kamenica & Gentzkow (2011) this is the most popular way to state a solution to the persuasion problem). Notice that the approach taken here shows clearly that the receiver’s preferences alone determine the set of posterior beliefs that will play a role. This idea is implicit in Lipnowski and Mathevet (2017). They introduce the notion of a posterior cover: a collection of sets of posterior beliefs, over each of which a given function is convex. When the given function is the best response correspondence of the receiver, this reduces precisely to the cone of the obedience constraints.

To demonstrate the usefulness of the conic representation of the obedience constraints, let’s examine a persuasion problem with one-dimensional state and action spaces described in Kolotilin and Wolitzky (2020). To simplify notation, assume that \mathcal{S} = \{1, 2, \ldots, |\mathcal{S}|\} and \mathcal{A} = \{1, \ldots, |\mathcal{A}|\}. Write \omega_j as j and a_i as i.

The goal of KW2020 is to provide a general approach to understanding qualitative properties of the optimal signal structure under two substantive assumptions. The first is that the sender’s utility is increasing in the receiver’s action. The second is called aggregate downcrossing, which implies that the receiver’s optimal action is increasing in his belief about the state. Formally:

1) V_S(i, j) > V_S(i-1, j)\,\, \forall i \in \mathcal{A} \setminus \{1\}\,\, \forall j \in \mathcal{S}

2) For all probability distributions q over \mathcal{S},

\sum_{j \in \mathcal{S}} [V_R(i,j)- V_R(i-1, j)]q_{j} \geq 0

\Rightarrow \sum_{j\in \mathcal{S}} [V_R(i', j)- V_R(i'-1, j)]q_{j} \geq 0\,\, \forall i' < i,\,\, i, i' \in \mathcal{A}

A result in KW2020 is that it is without loss to assume each induced posterior distribution has at most binary support. We show this to be an immediate consequence of the properties of the generators of each K_i.

Given condition 1, the sender only cares about the downward obedience constraints. We show that the adjacent downward constraints suffice.

Note that for any i, i-1, i-2 \in \mathcal{A},

\sum_{j \in \mathcal{S}} [V_R(i, j)- V_R(i-2, j)]x(i,j)=

\sum_{j \in \mathcal{S} } [V_R(i, j)- V_R(i-1, j)]x(i,j)

+ \sum_{j \in \mathcal{S} } [V_R(i-1, j)- V_R(i-2, j)]x(i,j)

The first term on the right-hand side of the equality is non-negative by virtue of the adjacent obedience constraint holding. Given this, the second term is non-negative by aggregate downcrossing applied to the distribution obtained by normalizing x(i, \cdot). Hence, each K_i is described by a single inequality:

\sum_{j\in\mathcal{S}}[V_{R}(i, j)-V_{R}(i-1, j)]x(i,j) \geq 0.

It is straightforward to see that each generator takes one of the following forms:

1) A single non-zero component with weight 1 assigned to a state j \in \mathcal{S} where V_{R}(i, j)-V_{R}(i-1, j) \geq 0.

2) Two non-zero components, one assigned to a state j \in \mathcal{S} where V_{R}(i, j)-V_{R}(i-1, j) \geq 0 and one to a state j'\in \mathcal{S} where V_{R}(i, j')-V_{R}(i-1, j') < 0. The value assigned to component j is \frac{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1}}{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1} + |V_{R}(i, j')-V_{R}(i-1, j')|^{-1}} while the value assigned to component j' is \frac{|V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}{|V_{R}(i, j)-V_{R}(i-1, j)|^{-1} + |V_{R}(i, j')-V_{R}(i-1, j')|^{-1}}.

Note that items 1 and 2 correspond to  Lemma 1 & Theorem 1 of KW2020 and follow immediately from the properties of the cone of the OC constraints.
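For concreteness, here is a minimal sketch that enumerates the generators described in items 1 and 2 for a given action i, taking as input the vector of adjacent increments d_j = V_R(i,j) - V_R(i-1,j); the increments in the example are made up.

# Minimal sketch: generators of K_i when it is cut out by the single adjacent
# obedience inequality sum_j d[j] * x[j] >= 0 together with non-negativity,
# where d[j] = V_R(i, j) - V_R(i-1, j). The increments below are made up.

def generators(d):
    gens = []
    pos = [j for j, dj in enumerate(d) if dj >= 0]
    neg = [j for j, dj in enumerate(d) if dj < 0]
    # Item 1: a unit spike on any state with a non-negative increment.
    for j in pos:
        g = [0.0] * len(d)
        g[j] = 1.0
        gens.append(g)
    # Item 2: a two-point distribution mixing a strictly positive and a negative
    # increment state, with weights proportional to the inverse absolute increments.
    for j in (k for k in pos if d[k] > 0):
        for jp in neg:
            wj, wjp = 1.0 / abs(d[j]), 1.0 / abs(d[jp])
            g = [0.0] * len(d)
            g[j], g[jp] = wj / (wj + wjp), wjp / (wj + wjp)
            gens.append(g)
    return gens

print(generators([2.0, 1.0, -1.0, -3.0]))   # hypothetical increments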

Serious infectious diseases are a prime example of a public bad (non-exclusive and non-congestible). We limit them by restricting behavior and/or getting individuals to internalize the externalities they generate. For example, one could mandate masks in public places. To be effective this requires monitoring and punishment. Unpleasant, but we know how to do this. Or, one could hold those who don’t wear masks responsible for the costs they impose on those whom they infect. Unclear exactly how we would implement this, so impractical. However, it is still interesting to speculate about how one might do this. Coase pointed out that if one could tie the offending behavior to something that was excludable, we would be in business.

To my mind an obvious candidate is medical care. A feature of infectious diseases is that behavior which increases the risk of infection to others also increases it for oneself. Thus, those who wish to engage in behavior that increases the risk of infection should be allowed to do so provided they waive the right to medical treatment for a defined period should they contract the infection. If this is unenforceable, perhaps something `weaker' would do: treatment will not be covered by insurance, or the subject will be accorded lowest priority when treatment capacity is scarce.

How exactly could such a scheme be implemented? To begin with, one needs to define which behaviors count, get the agent to explicitly waive the right when engaging in them, and then make sure medical facilities are made aware of the waiver. We have some ready-made behaviors that make it easy: going to a bar or gym, and indoor dining. The rough principle is any activity with an R_0 > 1 whose access is controlled by a profit-seeking entity. The profit-seeking entity obtains the waiver from the interested agent as a condition of entry (this would have to be monitored by the state). The waiver releases the profit-seeking entity from liability. The waiver enters a database that is linked to health records (probably the biggest obstacle).

The efficacy of lockdowns was debated at the start of the pandemic and continues to be debated to this day. Sweden, famously, chose not to implement a lockdown. As Anders Tegnell remarked:
`Closedown, lockdown, closing borders — nothing has a historical scientific basis, in my view.’

Lyman Stone of the American Enterprise Institute expresses it more forcefully:

`Here’s the thing: there’s no evidence of lockdowns working. If strict lockdowns actually saved lives, I would be all for them, even if they had large economic costs. But the scientific and medical case for strict lockdowns is paper-thin.’

The quotes above reveal an imprecision at the heart of the debate. What exactly is a lockdown? Tegnell uses a variety of words that suggest a variety of policies that can be lumped together. Stone is careful to say there is no evidence in support of strict lockdowns, suggesting that `less' strict lockdowns might be beneficial. So, let’s speculate about the other extreme: no lockdown, relaxed or otherwise.

The presence of infected individuals increases the cost of certain interactions. If infected individuals don’t internalize this cost, then, in the absence of any intervention we will observe a reduction in the efficient level of economic activity.

How will this manifest itself? Agents may adopt relatively low cost precautionary actions like wearing masks. On the consumer side they will substitute away from transactions that expose them to the risk of infection. For example, take-out rather than sit-down dining, and delivery rather than shopping in person. In short, we will see a drop in the demand for certain kinds of transactions. Absent subsidies for hospital care, we should expect an increase in the price (or wait times) for medical care, further incentivizing precautionary actions on the part of individuals.

The various models of network formation in the face of contagion we have (e.g., Erol and Vohra (2020)) all suggest we will see changes in how individuals socialize. They will reduce the variety of their interactions and concentrate them in cliques of `trusted' agents.

On the supply side, firms will have to make costly investments to reduce the risk of infection to customers and possibly workers. The ability of firms to pass these costs onto their customers or their suppliers will depend on the relative bargaining power of each. Restaurant workers, for example, may demand increased compensation for the risks they bear, but this will occur at the same time as a drop in demand for their services.

To summarize, a `no lockdown’ policy will, over time, resemble a lockdown policy. Thus, the question is whether there is a  coordinated lockdown policy that is superior to an uncoordinated one that emerges endogenously.

Will widely available and effective tests for COVID-19 awaken the economy from its COVID-induced coma? Paul Romer, for one, thinks so. But what will each person do with the information gleaned from the test? Should we expect someone who has tested positive for the virus to stay home and someone who has tested negative to go to work? If the first receives no compensation for staying home, she will leave for work. The second, anticipating that infected individuals have an incentive to go to work, will choose to stay home. As a result, the fraction of the population out and about will have an infection rate exceeding that in the population at large.

In a new paper by Rahul Deb, Mallesh Pai, Akhil Vohra and myself we argue that widespread testing alone will not solve this problem. Testing in concert with subsidies will. We propose a model in which both testing and transfers are targeted. We use it to jointly determine where agents should be tested and how they should be incentivized. The idea is straightforward. In the absence of widespread testing to distinguish between those who are infected and those who are not, we must rely on individuals to sort themselves. They are in the best position to determine the likelihood they are infected (e.g., based on private information about exposures, how rigorously they have been distancing, etc.). Properly targeted testing with tailored transfers gives them the incentive to do so.

We also distinguish between testing at work and testing `at home’. An infected person who leaves home to be tested at work poses an infection risk to others who choose to go outside. Testing `at home’ should be interpreted as a way to test an individual without increasing exposure to others. Our model also suggests who should be tested at work and who should be tested at home.

An agent with an infectious disease confers a negative externality on the rest of the community. If the cost of infection is sufficiently high, they are encouraged and in some cases required to quarantine themselves. Is this the efficient outcome? One might wonder if a Coasian approach would generate it instead. Define a right to walk around when infected which can be bought and sold. Alas, infection has the nature of a public bad, which is non-rivalrous and non-excludable. There is no efficient, incentive compatible, individually rational (IR) mechanism for the allocation of such public bads (or goods). So, something has to give. The mandatory quarantine of those who might be infected can be interpreted as relaxing the IR constraint for some.

If one is going to relax the IR constraint it is far from obvious that it should be the IR constraint of the infected. What if the costs of being infected vary dramatically? Imagine a well defined subset of the population bears a huge cost for infection while the cost for everyone else is minuscule. If that subset is small, then, the mandatory quarantine (and other mitigation strategies) could be far from efficient. It might be more efficient for the subset that bears the larger cost of infection to quarantine themselves from the rest of the community.


Over a Rabelaisian feast with convivial company, conversation turned to a twitter contretemps between economic theorists known to us at table. Its proximate cause was the design of the incentive auction for radio spectrum. The curious can dig around on twitter for the cut and thrust. A summary of the salient economic issues might be helpful for those following the matter.

Three years ago, in the cruelest of months, the FCC conducted an auction to reallocate radio spectrum. It had a procurement phase in which spectrum would be purchased from current holders and a second phase in which it was resold to others. The goal was to shift spectrum, where appropriate, from current holders to others who might use this scarce resource more efficiently.

It is the procurement phase that concerns us. The precise details of the auction in this phase will not matter. Its design is rooted in Ausubel’s clinching auction by way of Bikhchandani et al (2011) culminating in Milgrom and Segal (2019).

The pricing rule of the procurement auction was chosen under the assumption that each seller owned a single license. If that assumption is invalid, the rule allows a seller with multiple licenses to engage in what is known as supply reduction to push up the price. Even if each seller initially owned a single license, a subset of sellers could benefit from merging their assets and coordinating their bids (or an outsider could come in and aggregate some sellers prior to the auction). A recent paper by my colleagues Doraszelski, Seim, Sinkinson and Wang offers estimates of how much sellers might have gained from strategic supply reduction.

Was the choice of price rule a design flaw? I say, compared to what? How about the VCG mechanism? It would award a seller owning multiple licenses the marginal product associated with their set of licenses. In general, if the assets held by sellers are substitutes for each other, the marginal product of a set will exceed the sum of the marginal products of its individual elements. Thus, the VCG auction would have left the seller with higher surplus than they would have obtained under the procurement auction assuming no supply reduction. As noted in Paul Milgrom’s  book, when goods are substitutes, the VCG auction creates an incentive for mergers. This is formalized in Sher (2010). The pricing rule of the procurement auction could be modified to account for multiple ownership (see Bikhchandani et al (2011)) but it would have the same qualitative effect. A seller would earn a higher surplus than they would have obtained under the procurement auction assuming no supply reduction. A second point of comparison would be to an auction that was explicitly designed to discourage mergers of this kind. If memory serves, this reduces the auction to a posted price mechanism.
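To see the merger incentive in miniature, here is a toy computation with made-up numbers and a brute-force VCG payment rule (not the incentive auction's actual pricing): the buyer needs two licenses, and the two low-cost sellers are paid more under VCG when they bid as a merged entity than when they bid separately.

# Toy illustration (made-up numbers, not the incentive auction's pricing rule):
# VCG payments in a procurement problem where the buyer needs NEED licenses and
# each seller owns a bundle of licenses with a cost of giving them up.
from itertools import combinations

NEED = 2

def min_cost(sellers, need):
    # Cheapest way to procure at least `need` licenses from the given sellers.
    best = float("inf")
    names = list(sellers)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            if sum(sellers[s][0] for s in combo) >= need:
                best = min(best, sum(sellers[s][1] for s in combo))
    return best

def vcg_payment(sellers, winner):
    # Payment = cost of the best solution excluding the winner,
    # minus the cost the other winners incur in the efficient solution.
    without = {s: v for s, v in sellers.items() if s != winner}
    return min_cost(without, NEED) - (min_cost(sellers, NEED) - sellers[winner][1])

separate = {"A": (1, 1), "B": (1, 2), "C": (1, 10), "D": (1, 12)}   # (licenses, cost)
merged   = {"AB": (2, 3), "C": (1, 10), "D": (1, 12)}

print(vcg_payment(separate, "A") + vcg_payment(separate, "B"))  # 20 when A and B bid separately
print(vcg_payment(merged, "AB"))                                # 22 when they merge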

Was there anything that could have been done to discourage mergers? The auction did have reserve prices, so an upper limit was set on how much would be paid for licenses. Legal action is a possibility, but it’s not clear whether that could have been pursued without delaying the auction.

Stepping back, one might ask a more basic question: should the reallocation of spectrum have been done by auction? Why not follow Coase and let the market sort it out? The orthodox answer is no because of hold-up and transaction costs. However, as Thomas Hazlett has argued, there are transaction costs on the auction side as well.


I don’t often go to empirical talks, but when I do, I fall asleep. Recently, while so engaged, I dreamt of the `replicability crisis’ in Economics (see Chang and Li (2015)). The penultimate line of their abstract is the following bleak assessment:

`Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.’

Eager to help my empirical colleagues snatch victory from the jaws of defeat, I did what all theorists do. Build a model. Here it is.

The journal editor is the principal and the agent is an author. Agent has a paper characterized by two numbers {(v, p)}. The first is the value of the findings in the paper assuming they are replicable. The second is the probability that the findings are indeed replicable. The expected benefit of the paper is {pv}. Assume that {v} is common knowledge but {p} is the private information of agent. The probability that agent is of type {(v,p)} is {\pi(v,p)}.

Given a paper, the principal can at a cost {K} inspect the paper. With probability {p} the inspection process will replicate the findings of the paper. Principal proposes an incentive compatible direct mechanism. Agent reports their type, {(v, p)}. Let {a(v, p)} denote the interim probability that agent’s paper is provisionally accepted. Let {c(v, p)} be the interim probability of agent’s paper not being inspected given it has been provisionally accepted. If a provisionally accepted paper is not inspected, it is published. If a paper subject to inspection is successfully replicated, the paper is published. Otherwise it is rejected and, per custom, the outcome is kept private. Agent cares only about the paper being accepted. Hence, agent cares only about

\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p.

The principal cares about replicability of papers and suffers a penalty of {R > K} for publishing a paper that is not replicable. Principal also cares about the cost of inspection. Therefore she maximizes

\displaystyle \sum_{v,p}\pi(v,p)[pv - (1-p)c(v,p)R]a(v,p) - K \sum_{v,p}\pi(v,p)a(v,p)(1-c(v,p))

\displaystyle = \sum_{v,p}\pi(v,p)[pv-K]a(v,p) + \sum_{v,p}\pi(v,p)a(v,p)c(v,p)[K - (1-p)R].

The incentive compatibility constraint is
\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p \geq a(v, p')c(v,p') + a(v, p')(1-c(v,p'))p.

Recall, an agent cannot lie about the value component of the type.
We cannot screen on {p}, so all that matters is the distribution of {p} conditional on {v}. Let {p_v = E(p|v)}. For a given {v} there are only 3 possibilities: accept always, reject always, inspect and accept. The first possibility has an expected payoff of

\displaystyle vp_v - (1-p_v) R = (v+R) p_v - R

for the principal. The second possibility has value zero. The third has value { vp_v -K }.
The principal prefers to accept immediately over inspection if

\displaystyle (v+R) p_v - R > vp_v - K \Rightarrow p_v > (R-K)/R.

The principal will prefer inspection to rejection if { vp_v \geq K}. The principal prefers to accept rather than reject if {p_v \geq R/(v+R)}.
Under a suitable condition on {p_v} as a function of {v}, the optimal mechanism can be characterized by two cutoffs {\tau_2 > \tau_1}. Choose {\tau_2} to be the smallest {v} such that

\displaystyle p_v \geq \max( R/(v+R), (R-K)/R ).

Choose {\tau_1} to be the largest {v} such that {p_v \leq \min( K/v, R/(v+R) )}.
A paper with {v \geq \tau_2} will be accepted without inspection. A paper with {v \leq \tau_1} will be rejected. A paper with {v \in (\tau_1, \tau_2)} will be provisionally accepted and then inspected.
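For concreteness, here is a minimal sketch of the principal's decision rule, comparing the three options derived above for each v; the values of R, K and of p_v are made up.

# Minimal sketch of the principal's decision for each value v (made-up parameters):
# accept without inspection, provisionally accept and inspect, or reject.

R, K = 10.0, 2.0      # penalty for publishing a non-replicable paper, inspection cost

def decision(v, p_v):
    accept  = (v + R) * p_v - R   # accept without inspection
    inspect = v * p_v - K         # provisionally accept, then inspect
    reject  = 0.0
    best = max(accept, inspect, reject)
    if best == accept:
        return "accept without inspection"
    if best == inspect:
        return "inspect"
    return "reject"

# With p_v increasing in v, the cutoff structure appears: reject low v,
# inspect intermediate v, accept high v without inspection.
for v, p_v in [(1.0, 0.3), (5.0, 0.5), (20.0, 0.7), (50.0, 0.9)]:
    print(v, decision(v, p_v))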

For empiricists, the advice would be to shoot for high {v} and damn the {p}!

More seriously, the model points out that even a journal that cares about replicability and bears the cost of verifying this will publish papers that have a low probability of being replicable. Hence, the presence of published papers that are not replicable is not, by itself, a sign of something rotten in Denmark.

One could improve outcomes by making authors bear the costs of a paper not being replicated. This points to a larger question. Replication is costly. How should the cost of replication be apportioned? In my model, the journal bore the entire cost. One could pass it on to the authors but this may have the effect of discouraging empirical research. One could rely on third parties (voluntary, like civic associations, or professionals supported by subscription). Or, one could rely on competing partisan groups pursuing their agendas to keep the claims of each side in check. The last seems at odds with the romantic ideal of disinterested scientists but could be efficient. The risk is partisan capture of journals which would shut down cross-checking.

When analyzing a mechanism it is convenient to assume that it is direct. The revelation principle allows one to argue that this restriction is without loss of generality. Yet, there are cases where one prefers to implement the indirect version of a mechanism rather than its direct counterpart. The clock version of the English ascending auction and the sealed-bid second-price auction are the best-known example (one hopes not the only one). There are few (i.e., I could not immediately recall any) theorems that uniquely characterize a particular indirect mechanism. It would be nice to have more. What might such a characterization depend upon?

1) Direct mechanisms require that agents report their types. A concern for privacy could be used to `kill’ off a direct mechanism. However, one would first have to rule out the use of trusted third parties (either human or computers implementing cryptographic protocols).

2) Indirect mechanisms can sometimes be thought of as extensive form games, and one might look for refinements of solution concepts for extensive form games that have no counterpart in the direct version of the mechanism. The notion of obvious strategy-proofness that appears here is an example. However, indirect mechanisms may introduce equilibria, absent in the direct counterpart, that are compelling for the agents but unattractive for the designer’s purposes.

3) One feature of observed indirect mechanisms is that they use simple message spaces but compensate by using multiple rounds of communication. Thus, a constraint on message spaces would be needed in a characterization, coupled with a constraint on the number of rounds of communication.

According to the NY Times,  some Californians

would have to cut their water consumption by 35 percent under the terms of a preliminary plan issued by state officials on Tuesday to meet a 25 percent mandatory statewide reduction in urban water use.

There is an obvious way to achieve this: raise the price of water. If it’s obvious, why wasn’t it the first thing California did some years back when it was clear that water was scarce? In some cases the hands of the state are tied by water rights allocated for commercial purposes, so let’s focus on household consumption.

We know that the first tool the state reaches for is regulation. See, for example, the following memo from the California State Water Board. Interestingly, it begins by noting that the state is in the 4th year of a drought! Eventually, it is clear that regulation is insufficient and then price increases are considered. In fact, the water reduction targets quoted from the NY Times above come from a report by the Governor of the state that also urges the use of

rate structures and other pricing mechanisms

to achieve reductions. Again, why is price last rather than first? Is this because the state must maintain a credible reputation for not exploiting its monopoly power with water?

If one is going to reduce consumption by raising prices, should it be an across-the-board price increase? Note that consumption is metered, so the exact amount that is purchased by a household is known to the provider. The state also has access to other information: location, home size, family size and income. In principle, the state could price discriminate. Here is an example from the Irvine Ranch Water District. Each household is given an initial `allotment' that depends on household size and the area that is landscaped. Exceed the allotment and the price of water rises. For more details and the impact on consumption see the following paper. Is it obvious that this is the `correct' mechanism?
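Whatever the answer, the scheme itself is easy to describe. Here is a minimal sketch of an allotment-based increasing-block tariff; the allotment formula and the rates are hypothetical, not the Irvine Ranch Water District's actual schedule.

# Minimal sketch of an allotment-based increasing-block water tariff.
# The allotment formula and rates are hypothetical, not IRWD's actual schedule.

def monthly_bill(usage, household_size, landscaped_area):
    # Hypothetical allotment: a per-person indoor allowance plus an outdoor
    # allowance that scales with landscaped area (all in hundreds of cubic feet).
    allotment = 4 * household_size + 0.02 * landscaped_area
    base_rate, penalty_rate = 1.5, 6.0   # dollars per unit within / above the allotment
    within = min(usage, allotment)
    excess = max(usage - allotment, 0.0)
    return base_rate * within + penalty_rate * excess

print(monthly_bill(usage=18, household_size=3, landscaped_area=500))  # stays within the allotment
print(monthly_bill(usage=30, household_size=3, landscaped_area=500))  # pays the penalty rate on the excess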

