You are currently browsing the category archive for the ‘Mechanism design’ category.

I don’t often go to empirical talks, but when I do, I fall asleep. Recently, while so engaged, I dreamt of the ‘replicability crisis’ in Economics (see Chang and Li (2015)). The penultimate line of their abstract is the following bleak assessment:

‘Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.’

Eager to help my empirical colleagues snatch victory from the jaws of defeat, I did what all theorists do: build a model. Here it is.

The journal editor is the principal and the author is the agent. The agent has a paper characterized by two numbers ${(v, p)}$. The first is the value of the findings in the paper assuming they are replicable. The second is the probability that the findings are indeed replicable. The expected benefit of the paper is ${pv}$. Assume that ${v}$ is common knowledge but ${p}$ is the private information of the agent. The probability that the agent is of type ${(v,p)}$ is ${\pi(v,p)}$.

Given a paper, the principal can, at a cost ${K}$, inspect the paper. With probability ${p}$ the inspection process will replicate the findings of the paper. The principal proposes an incentive compatible direct mechanism. The agent reports their type, ${(v, p)}$. Let ${a(v, p)}$ denote the interim probability that the agent’s paper is provisionally accepted. Let ${c(v, p)}$ be the interim probability of the agent’s paper not being inspected given that it has been provisionally accepted. If a provisionally accepted paper is not inspected, it is published. If a paper subject to inspection is successfully replicated, the paper is published. Otherwise it is rejected and, per custom, the outcome is kept private. The agent cares only about the paper being accepted. Hence, the agent cares only about $\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p.$

The principal cares about replicability of papers and suffers a penalty of ${R > K}$ for publishing a paper that is not replicable. Principal also cares about the cost of inspection. Therefore she maximizes $\displaystyle \sum_{v,p}\pi(v,p)[pv - (1-p)c(v,p)R]a(v,p) - K \sum_{v,p}\pi(v,p)a(v,p)(1-c(v,p))$ $\displaystyle = \sum_{v,p}\pi(v,p)[pv-K]a(v,p) + \sum_{v,p}\pi(v,p)a(v,p)c(v,p)[K - (1-p)R].$

The incentive compatibility constraint is $\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p \geq a(v, p')c(v,p') + a(v, p')(1-c(v,p'))p.$

Recall, an agent cannot lie about the value component of the type.
Because all types with the same ${v}$ rank outcomes identically, we cannot screen on ${p}$; all that matters is the distribution of ${p}$ conditional on ${v}$. Let ${p_v = E(p|v)}$. For a given ${v}$ there are only three possibilities: accept always, reject always, or inspect and accept if replicated. The first possibility has an expected payoff of $\displaystyle vp_v - (1-p_v) R = (v+R) p_v - R$

for the principal. The second possibility has value zero. The third has value ${ vp_v -K }$.
The principal prefers to accept immediately over inspection if $\displaystyle (v+R) p_v - R > vp_v - K \Rightarrow p_v > (R-K)/R.$

The principal will prefer inspection to rejection if ${ vp_v \geq K}$. The principal prefers to accept rather than reject if ${p_v \geq R/(v+R)}$.
Under a suitable condition on ${p_v}$ as a function of ${v}$, the optimal mechanism can be characterized by two cutoffs ${\tau_2 > \tau_1}$. Choose ${\tau_2}$ to be the smallest ${v}$ such that $\displaystyle p_v \geq \max( R/(v+R), (R-K)/R ).$

Choose ${\tau_1}$ to be the largest ${v}$ such that ${p_v \leq \min (K/v, R/(v+R))}$.
A paper with ${v \geq \tau_2}$ will be accepted without inspection. A paper with ${v \leq \tau_1}$ will be rejected. A paper with ${v \in (\tau_1, \tau_2)}$ will be provisionally accepted and then inspected.
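The three-way comparison behind these cutoffs is easy to mechanize. A minimal sketch in Python; the values ${R=10}$, ${K=2}$ and the example papers are hypothetical, chosen only to illustrate each region:

```python
def optimal_action(v, p_v, R, K):
    """Principal's best response for a paper of value v with conditional
    replication probability p_v = E(p | v), penalty R, inspection cost K."""
    payoffs = {
        "accept": (v + R) * p_v - R,   # publish without inspection
        "inspect": v * p_v - K,        # inspect; publish only if replicated
        "reject": 0.0,                 # turn the paper away
    }
    return max(payoffs, key=payoffs.get)

# With R = 10, K = 2, accept beats inspect exactly when p_v > (R-K)/R = 0.8.
print(optimal_action(5, 0.9, 10, 2))   # accept
print(optimal_action(5, 0.5, 10, 2))   # inspect
print(optimal_action(1, 0.5, 10, 2))   # reject
```

Sweeping `optimal_action` over a grid of ${v}$ values, holding ${p_v}$ increasing in ${v}$, recovers the two-cutoff structure described above.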

For empiricists, the advice would be to shoot for high ${v}$ and damn the ${p}$!

More seriously, the model points out that even a journal that cares about replicability and bears the cost of verifying this will publish papers that have a low probability of being replicable. Hence, the presence of published papers that are not replicable is not, by itself, a sign of something rotten in Denmark.

One could improve outcomes by making authors bear the costs of a paper not being replicated. This points to a larger question. Replication is costly. How should the cost of replication be apportioned? In my model, the journal bore the entire cost. One could pass it on to the authors but this may have the effect of discouraging empirical research. One could rely on third parties (voluntary, like civic associations, or professionals supported by subscription). Or, one could rely on competing partisan groups pursuing their agendas to keep the claims of each side in check. The last seems at odds with the romantic ideal of disinterested scientists but could be efficient. The risk is partisan capture of journals which would shut down cross-checking.

When analyzing a mechanism it is convenient to assume that it is direct. The revelation principle allows one to argue that this restriction is without loss of generality. Yet, there are cases where one prefers to implement the indirect version of a mechanism rather than its direct counterpart. The clock version of the English ascending auction and the sealed bid second price auction are the best known example (one hopes not the only one). There are few (i.e., I could not immediately recall any) theorems that uniquely characterize a particular indirect mechanism. It would be nice to have more. What might such a characterization depend upon?

1) Direct mechanisms require that agents report their types. A concern for privacy could be used to ‘kill’ off a direct mechanism. However, one would first have to rule out the use of trusted third parties (either humans or computers implementing cryptographic protocols).

2) An indirect mechanism can sometimes be thought of as an extensive form game, and one might look for refinements of solution concepts for extensive form games that have no counterpart in the direct version of the mechanism. The notion of obvious strategy-proofness that appears here is an example. However, indirect mechanisms may introduce equilibria, absent in the direct counterpart, that are compelling for the agents but unattractive for the designer’s purposes.

3) One feature of observed indirect mechanisms is that they use simple message spaces, but compensate by using multiple rounds of communication. Thus a characterization would need a constraint on message spaces coupled with a constraint on the rounds of communication.

According to the NY Times,  some Californians

would have to cut their water consumption by 35 percent under the terms of a preliminary plan issued by state officials on Tuesday to meet a 25 percent mandatory statewide reduction in urban water use.

There is an obvious way to achieve this: raise the price of water. If it’s obvious, why wasn’t it the first thing California did some years back when it became clear that water was scarce? In some cases the hands of the state are tied by water rights allocated for commercial purposes, so let’s focus on household consumption.

We know that the first tool the state reaches for is regulation. See, for example, the following memo from the California State Water Board. Interestingly, it begins by noting that the state is in the fourth year of a drought! Eventually, it becomes clear that regulation is insufficient, and then price increases are considered. In fact, the water reduction targets quoted from the NY Times above come from a report by the Governor of the state that also urges the use of

rate structures and other pricing mechanisms

to achieve reductions. Again, why is price last rather than first? Is this because the state must maintain a credible reputation for not exploiting its monopoly power over water?

If one is going to reduce consumption by raising prices, should it be an across-the-board price increase? Note that consumption is metered, so the exact amount purchased by a household is known to the provider. The state also has access to other information: location, home size, family size and income. In principle, the state could price discriminate. Here is an example from the Irvine Ranch Water District. Each household is given an initial ‘allotment’ that depends on household size and the area that is landscaped. Exceed the allotment and the price of water rises. For more details and the impact on consumption see the following paper. Is it obvious that this is the ‘correct’ mechanism?
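To make the allotment scheme concrete, here is a sketch of an increasing-block tariff in Python. The allotment, tier widths and rates below are hypothetical placeholders, not the district’s actual schedule:

```python
def water_bill(usage, allotment, base_rate, surcharge_tiers):
    """Bill usage up to the allotment at base_rate; bill any overage through
    successive (tier_width, rate) blocks at progressively higher rates."""
    bill = min(usage, allotment) * base_rate
    excess = max(usage - allotment, 0.0)
    for width, rate in surcharge_tiers:
        block = min(excess, width)     # amount billed in this tier
        bill += block * rate
        excess -= block
    return bill

# Hypothetical schedule: allotment of 10 units at $1, next 10 units at $2,
# everything beyond that at $5.
tiers = [(10.0, 2.0), (float("inf"), 5.0)]
print(water_bill(8.0, 10.0, 1.0, tiers))    # 8.0  (within allotment)
print(water_bill(25.0, 10.0, 1.0, tiers))   # 55.0 (10*1 + 10*2 + 5*5)
```

The marginal price a household faces thus depends on its own allotment, which is one way the provider can price discriminate using the information it holds.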

Uber posts a price ${p}$ per ride and keeps a commission ${\alpha}$ on the price. Suppose Uber is the only ride matching service in town. If ${D(p)}$ is the demand function for rides at per ride price ${p}$ and ${S(w)}$ is the supply curve for drivers at wage ${w}$ per ride, Uber must choose ${\alpha}$ and ${p}$ to solve the following: $\displaystyle \max_{\alpha, p} \alpha p D(p)$

subject to $\displaystyle D(p) \leq S((1-\alpha)p)$

The constraint comes from the assumption that Uber is committed to ensuring that every rider seeking a ride at the posted price gets one.

Suppose, Uber did not link the payment to driver to the price charged to rider in this particular way. Then, Uber would solve $\displaystyle \max_{p,w} pD(p) - wS(w)$

subject to $\displaystyle D(p) \leq S(w)$

The first optimization problem is clearly more restrictive than the second. Hence the claim that Uber is not profit maximizing. Which raises the obvious puzzle: why is Uber using a revenue sharing scheme?
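One way to see the gap is a brute-force check on a toy market. The linear demand and supply curves below are hypothetical, chosen only to make the two programs computable by grid search:

```python
# Hypothetical toy market: D(p) = 100 - p rides demanded, S(w) = w drivers.
def D(p): return max(100.0 - p, 0.0)
def S(w): return w

prices = [float(p) for p in range(101)]    # price/wage grid on [0, 100]
shares = [a / 100.0 for a in range(101)]   # commission grid on [0, 1]

# Linked scheme: choose (alpha, p) subject to D(p) <= S((1 - alpha) * p).
best_linked = max(
    a * p * D(p)
    for p in prices for a in shares
    if D(p) <= S((1.0 - a) * p)
)

# Unlinked scheme: choose (p, w) subject to D(p) <= S(w).
best_unlinked = max(
    p * D(p) - w * S(w)
    for p in prices for w in prices
    if D(p) <= S(w)
)

# The linked problem is a restriction of the unlinked one.
print(best_linked <= best_unlinked)   # True
```

On this grid the unlinked optimum (price 75, wage 25) earns 1250, while the commission scheme does no better, consistent with the claim that the first program is the more restrictive one.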

Sydney Afriat arrived at Purdue in the late 60’s with a Bentley in tow. Mort Kamien described him as having walked out of the pages of an Ian Fleming novel. Why he brought the Bentley was a puzzle, as there were no qualified mechanics as far as the eye could see. In Indiana, that is a long way. Afriat would take his Bentley on long drives, only to be interrupted by mechanical difficulties that necessitated the Bentley being towed to wait for parts or specialized help.

I came upon Afriat when I learnt about the problem of rationalizability.  One has a model of choice and a collection of observations about what an agent selected. Can one rationalize the observed choices by the given model of choice? In Afriat’s seminal paper on the subject, the observations consisted of price-quantity pairs for a vector of goods and a budget. The goal was to determine if the observed choices were consistent with an agent maximizing a concave utility function subject to the budget constraint. Afriat’s paper has prompted many other papers asking the same question for different models of choice. There is an aspect of these papers, including Afriat’s, that I find puzzling.

To illustrate, consider rationalizing expected utility (Eran Shmaya suggested that ‘expected consumption’ might be more accurate). Let ${S = \{1,2 \ldots, n\}}$ be the set of possible states. We are given a sequence of observations ${\{x^{i},p^{i}\}_{i=1}^{m}}$ and a single budget ${b}$. Here ${x^i_j}$ represents consumption in state ${j}$ and ${p^i_j}$ is the unit price of consumption in state ${j}$ in observation ${i}$. We want to know if there is a probability distribution over states, ${v=(v_{1},...,v_{n})}$, such that each ${x^i}$ maximizes expected utility at prices ${p^i}$. In other words, ${x^i}$ solves $\displaystyle \max \sum_{j=1}^{n}v_{j}x_{j}$

subject to $\displaystyle \sum_{j=1}^{n}p^i_{j}x_{j}\leq b$ $\displaystyle x_{j}\geq 0\,\,\forall j \in S$

The solution to the above program is obvious: identify the variable with the largest ratio of objective coefficient to price and make it as large as possible. It is immediate that a collection of observations ${\{x^{i},p^{i}\}_{i=1}^{m}}$ can be rationalized by a suitable set ${\{v_{j}\} _{j=1}^{n}}$ of nonnegative ${v_{j}}$‘s, not all zero, if the following system has a feasible solution: $\displaystyle \frac{v_{r}}{p^i_r}\geq \frac{v_{j}}{p^i_{j}} \,\,\forall j, \,\, x^i_r> 0$ $\displaystyle \sum_{j \in S}v_{j}=1$ $\displaystyle v_{j}\geq 0\,\,\forall j \in S$
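The system is easy to check mechanically. A small sketch that verifies whether a candidate probability vector ${v}$ satisfies the inequalities above; the observations in the example are hypothetical:

```python
def rationalizes(v, observations, tol=1e-9):
    """Check the system above: in each observation (x, p), consumption is
    positive only in states attaining the largest ratio v[j] / p[j]."""
    for x, p in observations:
        best = max(v[j] / p[j] for j in range(len(v)))
        for j in range(len(v)):
            if x[j] > 0 and v[j] / p[j] < best - tol:
                return False
    return True

# Two states, uniform candidate weights v = (0.5, 0.5).
obs = [((10.0, 0.0), (1.0, 2.0)),   # state 0 has the larger ratio: OK
       ((0.0, 10.0), (2.0, 1.0))]   # state 1 has the larger ratio: OK
print(rationalizes((0.5, 0.5), obs))                         # True
print(rationalizes((0.5, 0.5), [((0.0, 5.0), (1.0, 2.0))]))  # False
```

Searching over candidate ${v}$ vectors (or handing the inequalities to an LP solver) then answers the rationalizability question for a given data set.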

This completes the task as formulated by Afriat. A system of inequalities has been identified whose feasibility means the given observations can be rationalized. How hard is this to do in other cases? As long as the model of choice involves optimization and the optimization problem is well behaved, in that first order conditions, say, suffice to characterize optimality, it’s a homework exercise. One can do this all day, thanks to Afriat: concave, additively separable concave, etc.

Interestingly, no rationalizability paper stops at the point of identifying the inequalities. Even Afriat’s paper goes a step farther and proceeds to ‘characterize’ when the observations can be rationalized. But feasibility of the inequalities is itself just such a characterization. What more is needed?

Perhaps the characterization involving inequalities lacks ‘interpretation’. Or, if the given system for a set of observations were infeasible, we may be interested in the obstacle to feasibility. Afriat’s paper gave a characterization in terms of the strong axiom of revealed preference, i.e., an absence of cycles of certain kinds. But that is precisely the Farkas alternative to the system of inequalities identified above. The absence-of-cycles condition follows from the fact that the initial set of inequalities is associated with the problem of finding a shortest path (see the chapter on rationalizability in my mechanism design book). Let me illustrate with the example above. It is equivalent to finding a non-negative and non-trivial solution to $\displaystyle \frac{v_{r}}{v_j}\geq \frac{p^i_{r}}{p^i_{j}} \,\,\forall j, \,\, x^i_r> 0$

Take logs: $\displaystyle \ln{v_r} - \ln{v_j} \geq \ln{\frac{p^i_{r}}{p^i_{j}}} \,\,\forall j, \,\, x^i_r> 0$

This is exactly the dual to the problem of finding a shortest path in a suitable network (I believe that Afriat has a paper, which I’ve not found, that focuses on systems of the form ${b_{rs} > x_s - x_r}$). The cycle characterization would involve products of terms like ${\frac{p^i_{r}}{p^i_{j}}}$ being less than 1 (or greater than 1, depending on convention). So, what would this add?
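The shortest-path connection also gives an algorithmic feasibility test: after taking logs, the system is a set of difference constraints, feasible exactly when the constraint graph has no negative cycle, which Bellman–Ford detects. A sketch under the setup above; the observations at the bottom are hypothetical:

```python
from math import log

def feasible(constraints, n):
    """Difference constraints u[j] - u[r] <= w, given as triples (r, j, w)
    on nodes 0..n-1; feasible iff the graph has no negative cycle."""
    dist = [0.0] * n  # implicit source at distance 0 to every node
    for _ in range(n):
        updated = False
        for r, j, w in constraints:
            if dist[r] + w < dist[j] - 1e-12:
                dist[j] = dist[r] + w
                updated = True
        if not updated:
            return True   # relaxation converged: a feasible potential exists
    return False          # still relaxing after n rounds: negative cycle

def rationalizable(observations, n):
    """For each observation (x, p) and each consumed state r (x[r] > 0),
    impose ln v[j] - ln v[r] <= ln(p[j] / p[r]) for every other state j."""
    cons = [(r, j, log(p[j] / p[r]))
            for x, p in observations
            for r in range(n) if x[r] > 0
            for j in range(n) if j != r]
    return feasible(cons, n)

obs_ok = [((1.0, 0.0), (1.0, 2.0)), ((0.0, 1.0), (2.0, 1.0))]
obs_bad = [((1.0, 0.0), (1.0, 2.0)), ((0.0, 1.0), (1.0, 3.0))]
print(rationalizable(obs_ok, 2))    # True
print(rationalizable(obs_bad, 2))   # False
```

In the infeasible example the cycle multiplies the price ratios 2 and 1/3, whose product is below 1: exactly the cycle condition described in the text.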

In my salad days, school masters would assign boys returning from the summer hols an essay: ‘What I did during the summer’. Yes, masters and boys. I served a portion of my youth in a misbegotten penal colony upon a ‘wind blasted heath’. The only females present were masters’ wives, matrons and the French mistress. No, not that kind; the kind that offers instruction in French. As you can see, to the lascivious minds of boys, there was no end to the double entendres. However, I digress.

Over the summer, Thanh Nguyen and I completed a paper about stable matchings. The abstract is reproduced below.

The National Resident Matching Program strives for a stable matching of medical students to teaching hospitals. With the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a ‘nearby’ instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Specifically, given a reported capacity $k_h$ for each hospital $h$, we find a redistribution of the slot capacities $k'_h$ satisfying $|k_h-k'_h|\le 4$ for all hospitals $h$ and $\sum_h k_h\le \sum_h k'_h \le \sum_h k_h+9$, such that a stable matching exists with respect to $k'$. Our approach is general and applies to other types of complementarities, as well as matchings with side constraints and contracts.

In other words, with the addition of at most 9 additional slots, one can guarantee the existence of a stable matching. This is independent of the size of the market or the doctors’ preferences (it does assume responsive preferences on the part of hospitals). The key tool is Scarf’s lemma, which is a wonderful device for converting results about cardinal matching problems into results about ordinal matching problems. For more on this, consult the paper by Király and Pap, who should be credited with a formulation of Scarf’s lemma that makes its usefulness evident.

The news of Stanley Reiter’s passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age), and survived the Battle of the Bulge. On the wings of the GI Bill he went through City College, which in those days was the gate through which many outsiders passed on their way to the intellectual aristocracy.

Perhaps a minute to recall what Stan left behind.

Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best known example of this is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.

Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.

In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete optimizing”, SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce tours at least as good as those reported in earlier papers. The ideas were extended with Don Rice to a local search heuristic for non-concave mixed integer programs, along with a computational study of its performance.

Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then, on to Northwestern to build two groups: one in the Economics department and another (in collaboration with Mort Kamien) in the business school.

I had the opportunity to participate in a delightful workshop on mechanism design and the informed principal organized by Thomas Troeger and Tymofiy Mylovanov. The setting was a charming ‘schloss’ (manse rather than castle) an hour and a half outside of Mannheim. They had gathered together a murderers’ row of speakers and auditors. Suffice it to say, I was the infimum of the group and lucky to be there.

One (among many) remarkable talks was given by Roger Myerson on his 1983 paper entitled ‘Mechanism Design by an Informed Principal’. Kudos to Thomas and Tymofiy for coming up with the idea of doing this. It brought to mind some couplets from Locksley Hall:

When the centuries behind me like a fruitful land reposed;
When I clung to all the present for the promise that it closed:

When I dipt into the future far as human eye could see;
Saw the Vision of the world and all the wonder that would be.—

By the way, the last pair of lines appears on the dedication plaque that graces the USS Voyager (of the Star Trek franchise).

What did Roger do? He tried as best as possible, given the gulf of time, to explain why he had chosen the tack that he did in the paper (axiomatic) and his hope for how it would influence research on the subject.

A principal with private information must propose a mechanism to an agent. However, the choice of mechanism will reveal something of the principal’s private information to the agent. Thus, the problem of mechanism design in this setting is not a straight optimization problem. It is, at a high level, a signaling game. The signals are the set of mechanisms that the principal can propose. Thus, one seeks an equilibrium of this game. But which equilibrium?

In section 7 of the paper, Roger approaches the question axiomatically in the spirit of Nash bargaining. Indeed, Roger made just such an analogy in his talk. Nash did not have in mind any particular bargaining protocol, but a conviction that any reasonable protocol must satisfy some natural invariance conditions. Some decades later Rubinstein arrived with a bargaining protocol to justify Nash’s conviction. So, Roger sought the same here and expressed the wish to one day see a vindication of his hopes.

Lest you think the audience accepted Roger’s axioms uncritically, Thomas Troeger pointed out that Roger’s axiom 1 ruled out some possibly natural settings like Rothschild and Stiglitz. Roger argued that it was right and proper to rule this out, and battle was joined!

The Nan-Shan, a Siamese steamer under the command of Captain James MacWhirr, sails, on his orders, into a typhoon in the South China Sea. Conrad described the captain as

“Having just enough imagination to carry him through each successive day”.

On board, an all-white crew and 200 Chinese laborers, returning home with seven years’ wages stowed in

“a wooden chest with a ringing lock and brass on the corners, containing the savings of his labours: some clothes of ceremony, sticks of incense, a little opium maybe, bits of nameless rubbish of conventional value, and a small hoard of silver dollars, toiled for in coal lighters, won in gambling-houses or in petty trading, grubbed out of earth, sweated out in mines, on railway lines, in deadly jungle, under heavy burdens—amassed patiently, guarded with care, cherished fiercely.”

Ship and souls, driven by MacWhirr’s will, survive the typhoon. The wooden chest does not. Its contents strewn below deck, the silver dollars are mixed together. It falls to MacWhirr to determine how the dollars are to be apportioned among the Chinese laborers to forestall an uprising.

“It seems that after he had done his thinking he made that Bun Hin fellow go down and explain to them the only way they could get their money back. He told me afterwards that, all the coolies having worked in the same place and for the same length of time, he reckoned he would be doing the fair thing by them as near as possible if he shared all the cash we had picked up equally among the lot. You couldn’t tell one man’s dollars from another’s, he said, and if you asked each man how much money he brought on board he was afraid they would lie, and he would find himself a long way short. I think he was right there. As to giving up the money to any Chinese official he could scare up in Fuchau, he said he might just as well put the lot in his own pocket at once for all the good it would be to them. I suppose they thought so, too.”

My former colleague Gene Mumy, writing in the JPE, argued that MacWhirr’s solution was arbitrary. We know what MacWhirr’s response would have been:

” The old chief says that this was plainly the only thing that could be done. The skipper remarked to me the other day, ‘There are things you find nothing about in books.’ I think that he got out of it very well for such a stupid man.”

Mumy, undeterred, proposed instead a pivotal mechanism (Clarke, Groves, Tideman, Tullock, etc.). For each agent, compute the difference between the total amount of money and the sum of all other claims. If an agent claims at most this amount, they receive their claim. If their claim exceeds this amount, they are penalized. Mumy showed that truth telling is a full information Nash equilibrium of the mechanism.
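One possible reading of the scheme, sketched in code. The penalty rule here, a claim above its ceiling is capped at the ceiling and fined, is my own placeholder for illustration, not the exact specification in Mumy’s paper:

```python
def pivotal_division(total, claims, fine):
    """Each agent's ceiling is the total minus the sum of the others' claims.
    Claims at or below the ceiling are paid in full; larger claims are capped
    at the ceiling and fined (the fine size is a hypothetical parameter)."""
    payouts = []
    for c in claims:
        ceiling = total - (sum(claims) - c)
        payouts.append(c if c <= ceiling else ceiling - fine)
    return payouts

# If everyone is truthful, the claims exhaust the total, each ceiling equals
# the agent's own claim, and everyone is paid exactly what they claimed.
print(pivotal_division(100.0, [40.0, 60.0], 5.0))   # [40.0, 60.0]
# Overclaiming against truthful opponents triggers the penalty.
print(pivotal_division(100.0, [50.0, 60.0], 5.0))   # [35.0, 45.0]
```

Under this reading, a unilateral overclaim lowers the deviator’s ceiling to their true share and costs them the fine, which is why truth telling is an equilibrium under full information.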

Saryadar, in a comment in the JPE, criticized Mumy’s solution on the grounds that it rules out pre-play communication on the part of the agents. Such communication could allow agents to transmit threats (‘I’m claiming everything’) that, if credible, change the equilibrium outcome. He also hints that the assumption of common knowledge of the contributions is hard to swallow.

Schweinzer and Shimoji revisit the problem with the observation that truth telling is not the only Nash equilibrium of the mechanism proposed by Mumy. Instead, they treat it as a problem of implementation under incomplete information. The captain is assumed to know the total amount of money to be divided; the agents are not. They propose a mechanism and identify a sufficient condition on beliefs under which truth telling is the unique rationalizable strategy for each agent. The mechanism is in the spirit of a scoring rule and relies on randomization. I think MacWhirr might have objected on the grounds that the money represented the entire savings of the laborers.