

When discussing the allocation of indivisible objects, I point to randomization as a method. To emphasize that it is not merely a theoretical nicety, but is used to allocate objects of high value, I give the example of suicide lotteries. I first came across them in Marcus Clarke’s `For the Term of His Natural Life’. It counts as the first Australian novel. Its hero, an Englishman, Rufus Dawes, is transported to Australia for a crime he did not commit. In the course of his tribulations, Dawes is shipped to a penal settlement on Norfolk Island, 800 miles east of Australia; even a prison needs a further prison. Robert Hughes, in `The Fatal Shore’, his heart-rending and eloquent account of the founding of Australia, describes Norfolk Island in these words:

`…a place of breathtaking barbarity… On Norfolk Island an Irishman named William Riley received 100 lashes for “Singing a Song” (no doubt a rebel one) and 50 for asking a warder for a chew of tobacco. Deranged by cruelty and misery, some men would opt for a lifetime at the bottom of the carceral heap by blinding themselves; thus, they reasoned, they would be left alone.’

It is in this portion of his book that Hughes recalls an eyewitness account of a suicide lottery of the type mentioned in Clarke’s novel. Here is Clarke’s succinct description of it:

The scheme of escape hit upon by the convict intellect was simply this. Three men being together, lots were drawn to determine whom should be murdered. The drawer of the longest straw was the “lucky” man. He was killed. The drawer of the next longest straw was the murderer. He was hanged. The unlucky one was the witness. He had, of course, an excellent chance of being hung also, but his doom was not so certain, and he therefore looked upon himself as unfortunate.

Clarke and Hughes deviate slightly on the precise incentives that would drive participation in the scheme. As many of the convicts on Norfolk Island were Irish, the scheme was concocted as a way to circumvent the Catholic prohibition on suicide. Hughes suggests that, after the murder, witness and culprit would be shipped back to the mainland for trial. Conditions there were better, so for both there was a brief respite and a greater opportunity for escape.

It’s an arresting story that one is loath to give up. But one is compelled to ask: is it true? If yes, was it common? Tim Causer of King’s College London went back to look at the records and says the answers are `maybe’ and `no’. Here is his summing up:

`Capital offences committed with apparent suicidal intent are an important part of Norfolk Island’s history, but they need to be understood more fully. It should be recognised just how rare they were, that ‘suicide lotteries’ are embellishments upon actual cases of state-assisted suicide and repeating the myth only reinforces the sensationalised interpretation of Norfolk Island’s history.’

You can find the full account here.

Many people say (actually, just one) that the Republicans have a plan to remove Trump from the Presidency, should he win in November, using the 25th Amendment. Section 4 of the amendment reads:

`Whenever the Vice President and a majority of either the principal officers of the executive departments or of such other body as Congress may by law provide, transmit to the President pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the President is unable to discharge the powers and duties of his office, the Vice President shall immediately assume the powers and duties of the office as Acting President.’

The VP is Pence. The President pro tempore of the Senate is the senior senator of the majority party, and Paul Ryan is the Speaker of the House.
The President can object, at which point Congress resolves the matter. Specifically,

`….two-thirds vote of both Houses that the President is unable to discharge the powers and duties of his office, the Vice President shall continue to discharge the same as Acting President; otherwise, the President shall resume the powers and duties of his office.’


Platooning, driverless cars and ride hailing services have all been suggested as ways to reduce congestion. In this post I want to examine coordination via ride hailing services as a way to reduce congestion. Assume that large numbers of riders decide to rely on ride hailing services. Because the services use Google Maps or Waze for route selection, it would be possible to coordinate their route choices to reduce congestion.

To think through the implications of this, it’s useful to revisit an example of Arthur Pigou’s. There is a measure 1 of travelers, all of whom wish to leave the same origin ({s}) for the same destination ({t}). There are two possible paths from {s} to {t}. The `top’ one has a travel time of 1 unit, independent of the measure of travelers who use it. The `bottom’ one has a travel time that grows linearly with the measure of travelers who employ it. Thus, if a fraction {x} of travelers take the bottom path, each incurs a travel time of {x} units.

A central planner, say, Uber, interested in minimizing total travel time will route half of all travelers through the top and the remainder through the bottom. Total travel time will be {0.5 \times 1 + 0.5 \times 0.5 = 0.75}. The only Nash equilibrium of the path selection game is for all travelers to choose the bottom path yielding a total travel time of {1}. Thus, if the only choice is to delegate my route selection to Uber or make it myself, there is no equilibrium where all travelers delegate to Uber.
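
As a check, here is a minimal numeric sketch of Pigou’s example (the script and its variable names are mine, purely for illustration):

```python
import numpy as np

# Pigou's example: a measure 1 of travelers; the top path always takes
# 1 unit, the bottom path takes x units when a fraction x uses it.
x = np.linspace(0, 1, 10001)        # fraction routed on the bottom path
total_time = (1 - x) * 1 + x * x    # total travel time over all travelers

print(x[np.argmin(total_time)], total_time.min())   # 0.5 0.75

# Selfish routing: the bottom path never takes more than 1 unit, so no
# traveler loses by switching to it. The unique Nash equilibrium is
# x = 1, with total travel time 1.
```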

Now suppose there are two competing ride hailing services. Assume a fraction {\alpha} of travelers is signed up with Uber and a fraction {1-\alpha} with Lyft. To avoid annoying corner cases, {\alpha \in [1/3, 2/3]}. Each firm routes its users so as to minimize the total travel time its users incur. Uber will choose a fraction {\lambda_1} of its subscribers to use the top path, the remainder using the bottom path. Lyft will choose a fraction {\lambda_2} of its subscribers to use the top path, the remainder using the bottom path.

A straightforward calculation reveals that the only Nash equilibrium of the Uber vs. Lyft game is {\lambda_1 = 1 - \frac{1}{3 \alpha}} and {\lambda_2 = 1 - \frac{1}{3(1-\alpha)}}. An interesting case is when {\alpha = 2/3}, i.e., Uber has a dominant market share. In this case {\lambda_2 = 0}, i.e., Lyft sends none of its users through the top path. Uber, on the other hand, will send half its users via the top path and the remainder via the bottom. Assuming Uber randomly assigns its users to top and bottom with equal probability, the average travel time for an Uber user will be

\displaystyle 0.5 \times 1 + 0.5 \times [0.5 \times (2/3) + 1/3] = 5/6.

The travel time for a Lyft user will be

\displaystyle [0.5 \times (2/3) + 1/3] = 2/3.

Total travel time will be {7/9}, less than in the Nash equilibrium outcome. However, Lyft would offer travelers a lower travel time than Uber. This is because Uber, which has the bulk of travelers, must use the top path to reduce total travel time. If this were the case, travelers would switch from Uber to Lyft. This conclusion ignores prices, which at present are not part of the model.
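
The equilibrium can be verified numerically. Here is a hedged sketch (the functions and parameter names are my own, not part of the original analysis) that alternates best responses for {\alpha = 2/3}:

```python
import numpy as np

a = 2 / 3                        # Uber's market share (alpha)
grid = np.linspace(0, 1, 1001)   # candidate top-path fractions

def bottom_load(l1, l2):
    # measure of travelers on the congestible bottom path
    return a * (1 - l1) + (1 - a) * (1 - l2)

def uber_time(l1, l2):
    # total travel time of Uber's users: top takes 1 unit,
    # bottom takes bottom_load(l1, l2) units
    return a * (l1 + (1 - l1) * bottom_load(l1, l2))

def lyft_time(l1, l2):
    return (1 - a) * (l2 + (1 - l2) * bottom_load(l1, l2))

l1, l2 = 0.5, 0.5
for _ in range(100):             # iterate best responses to a fixed point
    l1 = grid[np.argmin([uber_time(g, l2) for g in grid])]
    l2 = grid[np.argmin([lyft_time(l1, g) for g in grid])]

print(l1, l2)                    # 0.5 0.0, matching 1 - 1/(3a) and 1 - 1/(3(1-a))
print(uber_time(l1, l2) / a)     # 5/6, average time of an Uber user
print(bottom_load(l1, l2))       # 2/3, the travel time of a Lyft user
print(uber_time(l1, l2) + lyft_time(l1, l2))   # 7/9, total travel time
```

The grid search recovers the first-order conditions behind the formulas above; with {\alpha = 2/3}, Lyft hits the corner {\lambda_2 = 0}.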

Suppose we include prices and assume that travelers now evaluate a ride hailing service based on the delivered price, that is, price plus travel time. Thus, we are assuming that all travelers value time at $1 a unit. The volume of customers served by Uber and Lyft is no longer fixed, and each will focus on minimizing the average travel time per customer. A plausible guess is that there will be an equal-price equilibrium where travelers divide evenly between the two services, i.e., {\alpha = 0.5}. Each service will route {1/3} of its customers through the top and the remainder through the bottom. Average travel time per customer will be {7/9}. However, travel time on the bottom will be {2/3}, giving every customer routed via the top an incentive to opt out and drive their own car on the bottom path.
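
Continuing the sketch, the symmetric case works out as follows (again my own illustrative code):

```python
a = 0.5                          # equal market shares
l = 1 - 1 / (3 * a)              # each service's top-path fraction, = 1/3
x = 2 * a * (1 - l)              # load (and travel time) on the bottom, = 2/3

avg_time = l * 1 + (1 - l) * x   # (1/3)(1) + (2/3)(2/3) = 7/9
print(l, x, avg_time)

# A customer routed on the top path takes 1 unit; by opting out and
# driving the bottom path herself she would take (just over) 2/3.
print(x < 1)                     # True: the opt-out incentive
```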

What this simple-minded analysis highlights is that the benefits of coordination may be hard to achieve if travelers can opt out and drive themselves. To minimize congestion, the ride hailing services must limit traffic on the bottom path, the one that is congestible. However, doing so makes that path attractive in terms of travel time, encouraging travelers to opt out.

Credit for the game that bears his name is due to Borel. It appears in a 1921 paper in French. An English translation (by Leonard Savage) may be found in a 1953 Econometrica.



The first appearance in print of a version of the game with Colonel Blotto’s name attached is, I believe, in The Weekend Puzzle Book by Caliban (June 1924). Caliban was the pen name of Hubert Phillips, one-time head of Economics at the University of Bristol and a puzzle contributor to The New Statesman.

Blotto itself is a slang word for inebriation. It does not, apparently, derive from the word `blot’, meaning to absorb liquid. One account credits a French manufacturer of delivery tricycles (Blotto Freres) that were infamous for their instability. This inspired Laurel and Hardy to title one of their movies Blotto. In it they get blotto on cold tea, thinking it whiskey.

Over time, the Colonel has been promoted: to General in 2006 and to Field Marshal in 2011.

Colleagues outside of Economics often marvel at the coordinated nature of the Economics job market. The job market is so efficient that the profession no longer wastes resources by having everyone read each candidate’s job market paper. That task is assigned to one person (Tyler Cowen), who reports back to the rest of us. In case you missed the report, here it is.

Economics is not alone in having a coordinated job market. Philosophy has one, but it has begun to show signs of unraveling. The ability to interview via Skype, for example, has reduced the value, in the eyes of many, of a preliminary interview at the annual meeting. In response, the American Philosophical Association posted the following statement regarding the job market calendar:

For tenure-track/continuing positions advertised in the second half of the calendar year, we recommend an application deadline of November 1 or later. It is further recommended that positions be advertised at least 30 days prior to the application deadline to ensure that candidates have ample time to apply.

In normal circumstances a prospective employee should have at least two weeks for consideration of a written offer from the hiring institution, and responses to offers of a position whose duties begin in the succeeding fall should not be required before February 1.

When advertising in PhilJobs: Jobs for Philosophers, advertisers will be asked to confirm that the hiring institution will follow the above guidelines. If an advertiser does not do so, the advertisement will include a notice to that effect.

It’s natural to wonder if the Economics market is not far behind. Skype interviews are already taking place. The current setup requires a department to evaluate and select candidates for preliminary interviews within a month (roughly mid November to mid December), which is hardly conducive to mature reflection (and argument).

I don’t often go to empirical talks, but when I do, I fall asleep. Recently, while so engaged, I dreamt of the `replicability crisis’ in Economics (see Chang and Li (2015)). The penultimate line of their abstract is the following bleak assessment:

`Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.’

Eager to help my empirical colleagues snatch victory from the jaws of defeat, I did what all theorists do. Build a model. Here it is.

The journal editor is the principal and the agent is an author. Agent has a paper characterized by two numbers {(v, p)}. The first is the value of the findings in the paper assuming they are replicable. The second is the probability that the findings are indeed replicable. The expected benefit of the paper is {pv}. Assume that {v} is common knowledge but {p} is the private information of agent. The probability that agent is of type {(v,p)} is {\pi(v,p)}.

Given a paper, the principal can, at a cost {K}, inspect the paper. With probability {p} the inspection process will replicate the findings of the paper. Principal proposes an incentive compatible direct mechanism. Agent reports their type, {(v, p)}. Let {a(v, p)} denote the interim probability that agent’s paper is provisionally accepted. Let {c(v, p)} be the interim probability of agent’s paper not being inspected given it has been provisionally accepted. If a provisionally accepted paper is not inspected, it is published. If a paper subject to inspection is successfully replicated, the paper is published. Otherwise it is rejected and, per custom, the outcome is kept private. Agent cares only about the paper being accepted. Hence, agent cares only about

\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p.

The principal cares about replicability of papers and suffers a penalty of {R > K} for publishing a paper that is not replicable. Principal also cares about the cost of inspection. Therefore she maximizes

\displaystyle \sum_{v,p}\pi(v,p)[pv - (1-p)c(v,p)R]a(v,p) - K \sum_{v,p}\pi(v,p)a(v,p)(1-c(v,p))

\displaystyle = \sum_{v,p}\pi(v,p)[pv-K]a(v,p) + \sum_{v,p}\pi(v,p)a(v,p)c(v,p)[K - (1-p)R].

The incentive compatibility constraint is
\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p \geq a(v, p')c(v,p') + a(v, p')(1-c(v,p'))p.

Recall, an agent cannot lie about the value component of the type.
We cannot screen on {p}, so all that matters is the distribution of {p} conditional on {v}. Let {p_v = E(p|v)}. For a given {v} there are only 3 possibilities: always accept without inspection, always reject, or inspect and accept if the findings replicate. The first possibility has an expected payoff of

\displaystyle vp_v - (1-p_v) R = (v+R) p_v - R

for the principal. The second possibility has value zero. The third has value { vp_v -K }.
The principal prefers to accept immediately over inspection if

\displaystyle (v+R) p_v - R > vp_v - K \Rightarrow p_v > (R-K)/R.

The principal will prefer inspection to rejection if { vp_v \geq K}. The principal prefers to accept rather than reject if {p_v \geq R/(v+R)}.
Under a suitable condition on {p_v} as a function of {v}, the optimal mechanism can be characterized by two cutoffs {\tau_2 > \tau_1}. Choose {\tau_2} to be the smallest {v} such that

\displaystyle p_v \geq \max( R/(v+R), (R-K)/R ).

Choose {\tau_1} to be the largest {v} such that {p_v \leq \min( K/v, R/(v+R) )}.
A paper with {v \geq \tau_2} will be accepted without inspection. A paper with {v \leq \tau_1} will be rejected. A paper with {v \in (\tau_1, \tau_2)} will be provisionally accepted and then inspected.
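
The cutoff structure is easy to see numerically. In the sketch below, the parameter values and the increasing {p_v} are hypothetical choices of mine, not part of the model:

```python
R, K = 10.0, 1.0              # penalty for a non-replicable paper; inspection cost

def p_of_v(v):
    # hypothetical expected replicability E(p|v), increasing in v
    return v / (v + 5.0)

def decision(v):
    p = p_of_v(v)
    accept = (v + R) * p - R  # publish without inspection
    inspect = v * p - K       # inspect; publish only if it replicates
    reject = 0.0
    best = max(accept, inspect, reject)
    if best == accept:
        return 'accept'
    return 'inspect' if best == inspect else 'reject'

for v in [0.5, 2, 5, 20, 100]:
    print(v, decision(v))     # reject, reject, inspect, inspect, accept
```

Low-{v} papers are rejected, papers in a middle band are inspected, and sufficiently high-{v} papers are published without inspection: precisely the two-cutoff rule above.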

For empiricists, the advice would be to shoot for high {v} and damn the {p}!

More seriously, the model points out that even a journal that cares about replicability and bears the cost of verifying this will publish papers that have a low probability of being replicable. Hence, the presence of published papers that are not replicable is not, by itself, a sign of something rotten in Denmark.

One could improve outcomes by making authors bear the costs of a paper not being replicated. This points to a larger question. Replication is costly. How should the cost of replication be apportioned? In my model, the journal bore the entire cost. One could pass it on to the authors but this may have the effect of discouraging empirical research. One could rely on third parties (voluntary, like civic associations, or professionals supported by subscription). Or, one could rely on competing partisan groups pursuing their agendas to keep the claims of each side in check. The last seems at odds with the romantic ideal of disinterested scientists but could be efficient. The risk is partisan capture of journals which would shut down cross-checking.

When analyzing a mechanism it is convenient to assume that it is direct. The revelation principle allows one to argue that this restriction is without loss of generality. Yet, there are cases where one prefers to implement the indirect version of a mechanism rather than its direct counterpart. The clock version of the English ascending auction and the sealed-bid second-price auction are the best-known example (one hopes not the only one). There are few (i.e., I could not immediately recall any) theorems that uniquely characterize a particular indirect mechanism. It would be nice to have more. What might such a characterization depend upon?

1) Direct mechanisms require that agents report their types. A concern for privacy could be used to `kill’ off a direct mechanism. However, one would first have to rule out the use of trusted third parties (either human or computers implementing cryptographic protocols).

2) Indirect mechanisms can sometimes be thought of as extensive form games, and one might look for refinements of solution concepts for extensive form games that have no counterpart in the direct version of the mechanism. The notion of obvious strategy-proofness that appears here is an example. However, indirect mechanisms may introduce equilibria, absent in the direct counterpart, that are compelling for the agents but unattractive for the designer’s purposes.

3) One feature of observed indirect mechanisms is that they use simple message spaces, but compensate by using multiple rounds of communication. Thus a constraint on message spaces, coupled with a constraint on the rounds of communication, would be needed in a characterization.

From Kris Shaw, a TA for my ECON 101 class, I learnt that the band Van Halen once required that brown M&M’s not darken their dressing room door. Why? Maybe it was a lark. Perhaps a member of the band (or two) could not resist chuckling over the idea of a minor factotum appointed to the task of sorting the M&Ms. When the minor factotum was asked what they did that day, the response was bound to elicit guffaws. However, the minor factotum might have made it a point not to wash their hands before sorting the M&Ms. Then, who would be laughing harder?

A copy of the M&M rider can be found here, along with Van Halen’s explanation of why the rider was included:

……the group has said the M&M provision was included to make sure that promoters had actually read its lengthy rider. If brown M&M’s were in the backstage candy bowl, Van Halen surmised that more important aspects of a performance–lighting, staging, security, ticketing–may have been botched by an inattentive promoter.

So the rider, apparently, helps screen whether the promoter pays attention to detail. I think the explanation is problematic. It suggests that it is hard to monitor the effort expended by the promoter on important things, staging for example. So, monitor something completely irrelevant instead. But then the strategic promoter should shirk on the staging and expend effort on the M&Ms.


Duppe and Weintraub date the birth of Economic Theory to June 1949, when Koopmans organized the Cowles Commission Activity Analysis Conference. It is also counted as conference zero of the Mathematical Programming Symposium. I mention this because the connections between Economic Theory, Mathematical Programming and Operations Research had at one time been very strong. The conference, for example, was conceived of by Tjalling Koopmans, Harold Kuhn, George Dantzig, Albert Tucker, Oskar Morgenstern, and Wassily Leontief with the support of the Rand Corporation.

One of the last remaining links to this period who straddled, like a Colossus, both Economic Theory and Operations Research, Herbert Eli Scarf, passed away on November 15th, 2015.

Scarf came to Economics and Operations Research by way of Princeton’s mathematics department. Among his classmates were Gomory of cutting plane fame, Milnor of topology fame, and Shapley. Subsequently, he went on to Rand (Dantzig, Bellman, Ford & Fulkerson). While there he met Samuel Karlin and Kenneth Arrow, who introduced him to inventory theory. It was in this subject that Scarf made the first of many important contributions: the optimality of (S, s) policies. He would go on to establish the equivalence of the core and competitive equilibrium (jointly with Debreu), identify a sufficient condition for non-emptiness of the core of an NTU game (now known as Scarf’s Lemma), anticipate the application of Groebner bases in integer programming (neighborhood systems) and, of course, write his magnificent `Computation of Economic Equilibria’.

Exegi monumentum aere perennius regalique situ pyramidum altius, quod non imber edax, non Aquilo impotens possit diruere aut innumerabilis annorum series et fuga temporum. Non omnis moriar…

I have finished a monument more lasting than bronze and higher than the royal structure of the pyramids, which neither the destructive rain, nor the wild North wind is able to destroy, nor the countless series of years and flight of ages. I will not wholly die…