According to the NY Times, some Californians
would have to cut their water consumption by 35 percent under the terms of a preliminary plan issued by state officials on Tuesday to meet a 25 percent mandatory statewide reduction in urban water use.
There is an obvious way to achieve this: raise the price of water. If it's so obvious, why wasn't it the first thing California did some years back when it was clear that water was scarce? In some cases the hands of the state are tied by water rights allocated for commercial purposes, so let's focus on household consumption.
We know that the first tool the state reaches for is regulation. See, for example, the following memo from the California State water board. Interestingly, it begins by noting that the state is in the 4th year of a drought! Eventually, it is clear that regulation is insufficient and then, price increases are considered. In fact, the water reduction targets quoted from the NY Times above come from a report by the Governor of the state that also urges the use of
rate structures and other pricing mechanisms
to achieve reductions. Again, why is price last rather than first? Is this because the state must maintain a credible reputation for not exploiting its monopoly power over water?
If one is going to reduce consumption by raising prices, should it be an across-the-board price increase? Note that consumption is metered, so the exact amount purchased by a household is known to the provider. The state also has access to other information: location, home size, family size and income. In principle, the state could price discriminate. Here is an example from the Irvine Ranch Water District. Each household is given an initial `allotment' that depends on household size and the area that is landscaped. Exceed the allotment and the price of water rises. For more details and the impact on consumption see the following paper. Is it obvious that this is the `correct' mechanism?
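To make the flavor of such a scheme concrete, here is a minimal sketch of an increasing block rate. The allotment formula, the tier widths and the prices below are placeholders, not the Irvine Ranch district's actual schedule (see the paper mentioned above for that).

```python
# Minimal sketch of an increasing block (tiered) water rate in the spirit of the
# Irvine Ranch scheme. The allotment formula and the prices are hypothetical.

def monthly_allotment(household_size, landscaped_sqft):
    """Base allotment in gallons, growing with household size and landscaped area."""
    return 2000 * household_size + 0.5 * landscaped_sqft

def water_bill(gallons_used, household_size, landscaped_sqft,
               base_rate=0.002, tier_rates=(0.004, 0.008)):
    """Charge the base rate up to the allotment, then successively higher rates
    on each block above it (each block here is one allotment wide)."""
    allotment = monthly_allotment(household_size, landscaped_sqft)
    rate_schedule = (base_rate,) + tier_rates
    bill, remaining = 0.0, gallons_used
    for rate in rate_schedule:
        block = min(remaining, allotment)
        bill += rate * block
        remaining -= block
        if remaining <= 0:
            return bill
    return bill + rate_schedule[-1] * remaining  # anything beyond the last tier

# A family of four with 1000 sq ft of landscaping using 15,000 gallons:
print(water_bill(15_000, household_size=4, landscaped_sqft=1000))
```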
Uber posts a price per ride and keeps a commission on the price. Suppose Uber is the only ride-matching service in town. If $D(p)$ is the demand function for rides at a per-ride price $p$ and $S(w)$ is the supply curve for drivers at a per-ride wage $w$, Uber must choose the fare and its commission to solve the following:
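(A sketch of the program; the revenue-share variable $\delta$, the fraction of the fare passed on to the driver, is notation introduced here.)

\[
\max_{p,\ \delta \in [0,1]} \ (1-\delta)\, p\, D(p) \quad \text{subject to} \quad S(\delta p) \ \ge\ D(p).
\]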
The last constraint comes from the assumption that Uber is committed to ensuring that every rider seeking a ride at the posted price gets one.
Suppose Uber did not link the payment to the driver to the price charged to the rider in this particular way. Then, Uber would solve
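(Again a sketch in the same notation, now with the driver's per-ride wage $w$ chosen freely.)

\[
\max_{p,\ w} \ (p - w)\, D(p) \quad \text{subject to} \quad S(w) \ \ge\ D(p).
\]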
The first optimization problem is clearly more restrictive than the second. Hence, the claim that Uber is not profit maximizing. Which raises the obvious puzzle: why is Uber using a revenue-sharing scheme?
Sydney Afriat arrived at Purdue in the late 60s with a Bentley in tow. Mort Kamien described him as having walked out of the pages of an Ian Fleming novel. Why he brought the Bentley was a puzzle, as there were no qualified mechanics as far as the eye could see. In Indiana, that is a long way. Afriat would take his Bentley on long drives only to be interrupted by mechanical difficulties that necessitated the Bentley being towed to wait for parts or specialized help.
I came upon Afriat when I learnt about the problem of rationalizability. One has a model of choice and a collection of observations about what an agent selected. Can one rationalize the observed choices by the given model of choice? In Afriat’s seminal paper on the subject, the observations consisted of price-quantity pairs for a vector of goods and a budget. The goal was to determine if the observed choices were consistent with an agent maximizing a concave utility function subject to the budget constraint. Afriat’s paper has prompted many other papers asking the same question for different models of choice. There is an aspect of these papers, including Afriat’s, that I find puzzling.
To illustrate, consider rationalizing expected utility (Eran Shmaya suggested that `expected consumption' might be more accurate). Let $S$ be the set of possible states. We are given a sequence of observations $\{(x^i, p^i)\}_{i=1}^n$ and a single budget $b$. Here $x^i_s$ represents consumption in state $s$ and $p^i_s$ is the unit price of consumption in state $s$ in observation $i$. We want to know if there is a probability distribution over states, $\pi$, such that each $x^i$ maximizes expected utility. In other words, $x^i$ solves
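(A sketch of the program, in the notation just introduced.)

\[
\max_{x \ge 0} \ \sum_{s \in S} \pi_s x_s \quad \text{subject to} \quad \sum_{s \in S} p^i_s x_s \ \le\ b.
\]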
The solution to the above program is obvious. Identify the variable with the largest ratio of objective coefficient to constraint coefficient and make it as large as possible. It is immediate that a collection of observations can be rationalized by a suitable set of non-zero and nonnegative $\pi_s$'s if the following system has a feasible solution:
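(A sketch of the system: every state actually consumed in observation $i$ must offer the weakly largest ratio of probability to price.)

\[
\frac{\pi_s}{p^i_s} \ \ge\ \frac{\pi_t}{p^i_t} \quad \text{for all } i, \text{ all } s \text{ with } x^i_s > 0 \text{ and all } t \in S, \qquad \pi_s \ge 0, \quad \pi \neq 0.
\]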
This completes the task as formulated by Afriat. A system of inequalities has been identified that, if feasible, means the given observations can be rationalized. How hard is this to do in other cases? As long as the model of choice involves optimization and the optimization problem is well behaved in that first order conditions, say, suffice to characterize optimality, it's a homework exercise. One can do this all day, thanks to Afriat: concave, additively separable concave, etc., etc.
Interestingly, no rationalizability paper stops at the point of identifying the inequalities. Even Afriat’s paper goes a step farther and proceeds to `characterize’ when the observations can be rationalized. But, feasibility of the inequalities themselves is just such a characterization. What more is needed?
Perhaps the characterization involving inequalities lacks `interpretation'. Or, if the given system for a set of observations is infeasible, we may be interested in the obstacle to feasibility. Afriat's paper gave a characterization in terms of the strong axiom of revealed preference, i.e., an absence of cycles of certain kinds. But that is precisely the Farkas alternative to the system of inequalities identified in Afriat. The absence-of-cycles condition follows from the fact that the initial set of inequalities is associated with the problem of finding a shortest path (see the chapter on rationalizability in my mechanism design book). Let me illustrate with the example above. It is equivalent to finding a non-negative and non-trivial solution to
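(A sketch: taking logarithms in the system above turns it into a system of difference constraints.)

\[
\log \pi_s \ -\ \log \pi_t \ \ge\ \log p^i_s \ -\ \log p^i_t \quad \text{for all } i, \text{ all } s \text{ with } x^i_s > 0 \text{ and all } t \in S.
\]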
This is exactly the dual to the problem of finding a shortest path in a suitable network (I believe that Afriat has a paper, which I've not found, that focuses on systems of this form). The cycle characterization would involve products of ratios of the $p^i_s$'s around a cycle being less than 1 (or greater than 1, depending on the convention). So, what would this add?
In my salad days, school masters would assign boys returning from the summer hols an essay: `What I did during the summer'. Yes, masters and boys. I served a portion of my youth in a `misbegotten penal colony upon a wind blasted heath'. The only females present were masters' wives, matrons and the French mistress. No, not that kind, the kind that offers instruction in French. As you can see, to the lascivious minds of boys, there was no end to the double entendres. However, I digress.
The National Resident Matching Program strives for a stable matching of medical students to teaching hospitals. In the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a `nearby' instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Specifically, given the reported capacity of each hospital, we find a redistribution of the slot capacities, differing from the reported capacities by only a small amount at each hospital, such that a stable matching exists with respect to the new capacities. Our approach is general and applies to other types of complementarities, as well as matchings with side constraints and contracts.
In other words, with the addition of at most 9 additional slots, one can guarantee the existence of a stable matching. This is independent of the size of the market or of doctors' preferences (it does assume responsive preferences on the part of hospitals). The key tool is Scarf's lemma, which is a wonderful device for converting results about cardinal matching problems into results about ordinal matching problems. For more on this, consult the paper by Király and Pap, who should be credited with a formulation of Scarf's lemma that makes its usefulness evident.
The news of Stanley Reiter’s passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age), and survived the Battle of the Bulge. On the wings of the GI bill he went through City College, which in those days, was the gate through which many outsiders passed on their way to the intellectual aristocracy.
Perhaps a minute to recall what Stan left behind.
Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best-known example of this is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.
Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.
In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete optimizing”, SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce tours at least as good as those reported in earlier papers. The ideas were extended, with Don Rice, into a local-search heuristic for non-concave mixed integer programs, along with a computational study of its performance.
Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then, on to Northwestern to build two groups: one in the Economics department and another (in collaboration with Mort Kamien) in the business school.
I had the opportunity to participate in a delightful workshop on mechanism design and the informed principal organized by Thomas Troeger and Tymofiy Mylovanov. The setting was a charming `schloss' (manse rather than castle) an hour and a half outside of Mannheim. They had gathered together a murderers' row of speakers and auditors. Suffice it to say I was the infimum of the group and lucky to be there.
One (among many) remarkable talks was given by Roger Myerson on his 1983 paper entitled `Mechanism Design by an Informed Principal‘. Kudos to Thomas and Tymofiy for coming up with the idea of doing this. It brought to mind some couplets from Locksley Hall:
When the centuries behind me like a fruitful land reposed;
When I clung to all the present for the promise that it closed:
When I dipt into the future far as human eye could see;
Saw the Vision of the world and all the wonder that would be.—
By the way, the last pair of lines appears on the dedication plaque that graces the USS Voyager (of the Star Trek franchise).
What did Roger do? He tried as best as possible, given the gulf of time, to explain why he had chosen the tack that he did in the paper (axiomatic) and his hope for how it would influence research on the subject.
A principal with private information must propose a mechanism to an agent. However, the choice of mechanism will reveal something of the principal’s private information to the agent. Thus, the problem of mechanism design in this setting is not a straight optimization problem. It is, at a high level, a signaling game. The signals are the set of mechanisms that the principal can propose. Thus, one seeks an equilibrium of this game. But which equilibrium?
In section 7 of the paper, Roger approaches the question axiomatically in the spirit of Nash bargaining. Indeed, Roger made just such an analogy in his talk. Nash did not have in mind any particular bargaining protocol, but a conviction that any reasonable protocol must satisfy some natural invariance conditions. Some decades later Rubinstein arrives with a bargaining protocol to justify Nash’s conviction. So, Roger sought the same here and expressed the wish to see this year a vindication of his hopes.
Lest you think the audience accepted Roger's axioms uncritically, Thomas Troeger pointed out that Roger's axiom 1 ruled out some possibly natural settings like Rothschild & Stiglitz. Roger argued that it was right and proper to rule this out, and battle was joined!
The Nan-Shan, a Siamese steamer under the control of Captain James MacWhirr, on his orders, sails into a typhoon in the South China Sea. Conrad described the captain as
“Having just enough imagination to carry him through each successive day”.
On board, an all-white crew and 200 Chinese laborers, returning home with seven years’ wages stowed in
“a wooden chest with a ringing lock and brass on the corners, containing the savings of his labours: some clothes of ceremony, sticks of incense, a little opium maybe, bits of nameless rubbish of conventional value, and a small hoard of silver dollars, toiled for in coal lighters, won in gambling-houses or in petty trading, grubbed out of earth, sweated out in mines, on railway lines, in deadly jungle, under heavy burdens—amassed patiently, guarded with care, cherished fiercely.”
Ship and souls, driven by MacWhirr's will, survive the typhoon. The wooden chests do not. With their contents strewn below deck, the silver dollars are mixed together. It falls to MacWhirr to determine how the dollars are to be apportioned among the Chinese laborers to forestall an uprising.
“It seems that after he had done his thinking he made that Bun Hin fellow go down and explain to them the only way they could get their money back. He told me afterwards that, all the coolies having worked in the same place and for the same length of time, he reckoned he would be doing the fair thing by them as near as possible if he shared all the cash we had picked up equally among the lot. You couldn’t tell one man’s dollars from another’s, he said, and if you asked each man how much money he brought on board he was afraid they would lie, and he would find himself a long way short. I think he was right there. As to giving up the money to any Chinese official he could scare up in Fuchau, he said he might just as well put the lot in his own pocket at once for all the good it would be to them. I suppose they thought so, too.”
My former colleague Gene Mumy, writing in the JPE, argued that MacWhirr's solution was arbitrary. We know what MacWhirr's response would have been:
”The old chief says that this was plainly the only thing that could be done. The skipper remarked to me the other day, ‘There are things you find nothing about in books.’ I think that he got out of it very well for such a stupid man.”
Mumy, undeterred, proposed instead a pivotal mechanism (Clarke, Groves, Tideman, Tullock, etc.). For each agent, compute the difference between the total amount of money and the sum of all the other agents' claims. If an agent's claim is at most this amount, he receives his claim. If his claim exceeds this amount, he is penalized. Mumy showed that truth-telling is a full-information Nash equilibrium of the mechanism.
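A minimal sketch of a division rule in this spirit may help fix ideas. The allocation rule follows the description above; the penalty for over-claiming is only a placeholder, not the exact rule in Mumy's paper.

```python
# Sketch of a pivotal-style division rule in the spirit of Mumy's proposal.
# The penalty below is a placeholder, not the exact rule in the JPE paper.

def divide(total, claims, penalty=1.0):
    """Pay each claimant his claim if it does not exceed the total minus the
    sum of the others' claims; otherwise apply a (placeholder) penalty."""
    payments = []
    for claim in claims:
        others = sum(claims) - claim
        slack = total - others          # what is left if everyone else is paid in full
        if claim <= slack:
            payments.append(claim)      # consistent claim: paid in full
        else:
            payments.append(max(slack - penalty, 0.0))  # over-claiming is penalized
    return payments

# Example: three laborers, 300 dollars recovered, truthful claims sum to the total.
print(divide(300, [100, 120, 80]))   # -> [100, 120, 80]
print(divide(300, [150, 120, 80]))   # the over-claimer (and the shortfall) is penalized
```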
Saryadar, in a comment in the JPE, criticizes Mumy's solution on the grounds that it rules out pre-play communication on the part of the agents. Such communication could allow agents to transmit threats (`I'm claiming everything') that, if credible, change the equilibrium outcome. He also hints that the assumption of common knowledge of the contributions is hard to swallow.
Schweinzer and Shimoji revisit the problem with the observation that truth-telling is not the only Nash equilibrium of the mechanism proposed by Mumy. Instead, they treat it as a problem of implementation under incomplete information. The captain is assumed to know the total amount of money to be divided, but the agents do not. They propose a mechanism and identify a sufficient condition on beliefs under which truth-telling is the unique rationalizable strategy for each agent. The mechanism is in the spirit of a scoring rule, and relies on randomization. I think MacWhirr might have objected on the grounds that the money represented the entire savings of the laborers.
Conrad describes the aftermath.
“We finished the distribution before dark. It was rather a sight: the sea running high, the ship a wreck to look at, these Chinamen staggering up on the bridge one by one for their share, and the old man still booted, and in his shirt-sleeves, busy paying out at the chartroom door, perspiring like anything, and now and then coming down sharp on myself or Father Rout about one thing or another not quite to his mind. He took the share of those who were disabled to them on the No. 2 hatch. There were three dollars left over, and these went to the three most damaged coolies, one to each. We turned-to afterwards, and shovelled out on deck heaps of wet rags, all sorts of fragments of things without shape, and that you couldn’t give a name to, and let them settle the ownership themselves.”
Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. Thus, the license is sold for a single upfront fee. No royalties or other form of contingent payment. As far as I can tell the design is an open ascending auction.
In an earlier pair of posts I discussed a class of combinatorial auctions in which agents have binary quadratic valuations. To formulate the problem of finding a welfare-maximizing allocation, let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility that agent $i$ enjoys from the bundle it consumes by
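(A sketch in notation introduced here: $a^i_j$ is agent $i$'s value for object $j$ alone and $b^i_{jk}$ is the pairwise interaction term.)

\[
u_i(x) \;=\; \sum_{j} a^i_j\, x_{ij} \;+\; \sum_{j \neq k} b^i_{jk}\, x_{ij}\, x_{ik}.
\]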
The problem of maximizing total welfare is
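(Continuing with the same sketch notation, and requiring that each object go to at most one agent:)

\[
\max \;\; \sum_i \sum_j a^i_j\, x_{ij} \;+\; \sum_i \sum_{j \neq k} b^i_{jk}\, x_{ij}\, x_{ik} \quad \text{subject to} \quad \sum_i x_{ij} \le 1 \;\; \forall j, \qquad x_{ij} \in \{0,1\}.
\]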
I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency: for each pair of objects $(j,k)$, the coefficients $b^i_{jk}$ have the same sign for every agent $i$, and this holds for all pairs $(j,k)$.
Let $G$ be a graph whose vertices are the objects; for any pair $(j,k)$ such that $b^i_{jk} \neq 0$, introduce an edge $(j,k)$. Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $b^i_{jk}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. The second condition is that $G$ be a tree.
The following is the relaxation that they consider:
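(One standard way to write such a relaxation, replacing each product $x_{ij} x_{ik}$ by a variable $y^i_{jk}$; this sketch may differ in its details from the formulation in their paper.)

\[
\max \;\; \sum_i \sum_j a^i_j\, x_{ij} \;+\; \sum_i \sum_{(j,k)} b^i_{jk}\, y^i_{jk}
\]
\[
\text{subject to} \quad \sum_i x_{ij} \le 1 \;\; \forall j, \qquad y^i_{jk} \le x_{ij}, \;\; y^i_{jk} \le x_{ik} \;\; \forall (j,k) \in E^+,
\]
\[
y^i_{jk} \ge x_{ij} + x_{ik} - 1, \;\; y^i_{jk} \ge 0 \;\; \forall (j,k) \in E^-, \qquad x_{ij} \ge 0.
\]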
Denote by $P$ the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of $P$ are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: “why the bleep don’t you just read the proof we have.”
Let $C_1, \ldots, C_q$ be the maximal connected components of $G$ after deletion of the edges in $E^-$ (call this graph $G^+$). The proof will be by induction on the number of negative edges. The case $E^- = \emptyset$ follows from total unimodularity. I prove this later.
Suppose $E^-$ is non-empty. Let $x^*$ be an optimal solution to our linear program. We can choose $x^*$ to be an extreme point of $P$. As $G$ is a tree, there must exist a component, say $C_1$, incident to exactly one negative edge. Denote by $P_1$ the polyhedron $P$ restricted to just the vertices of $C_1$ and by $P_2$ the polyhedron restricted to just the vertices in the complement of $C_1$. By the induction hypothesis, both $P_1$ and $P_2$ are integral polyhedra. Each extreme point of $P_1$ ($P_2$) assigns a vertex of $C_1$ (of the complement of $C_1$) to a particular agent. Let $X_1$ be the set of extreme points of $P_1$. If in extreme point $v \in X_1$, vertex $j$ is assigned to agent $i$ we write $v_{ij} = 1$ and zero otherwise. Similarly with the extreme points $X_2$ of $P_2$: $w_{ij} = 1$ means that $w$ assigns vertex $j$ to agent $i$. Let $c_v$ be the objective function value of the assignment $v$, and similarly $c_w$ for each $w \in X_2$.
Now $x^*$ restricted to $C_1$ can be expressed as a convex combination $\sum_{v \in X_1} \lambda_v v$. Similarly, $x^*$ restricted to the complement of $C_1$ can be expressed as $\sum_{w \in X_2} \mu_w w$. We can now reformulate our linear program as follows:
The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because the relevant pair of $\lambda$'s cannot both be 1, and similarly with the $\mu$'s). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can easily be modified to cover the case where every cycle in $G$ contains a positive, even number of negative edges.
Return to the case $E^- = \emptyset$. Consider the polyhedron $P$ restricted to just one component of $G^+$. It will have the form:
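(A sketch of that form, in the relaxation notation above; only positive edges appear.)

\[
\sum_i x_{ij} \;\le\; 1 \;\; \forall j, \qquad y^i_{jk} \;\le\; x_{ij}, \quad y^i_{jk} \;\le\; x_{ik} \;\; \forall (j,k) \in E^+, \qquad x, y \;\ge\; 0.
\]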
Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset, $R$, of rows/constraints. The goal is to partition them into two sets $B$ and $B'$ so that, column by column, the sum of the non-zero entries of the rows in $B$ and the sum of the non-zero entries of the rows in $B'$ differ by at most one.
Observe that the rows associated with the constraints $\sum_i x_{ij} \le 1$ are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows to satisfy the GH theorem. These remaining rows come in pairs, one for each endpoint of a positive edge (and each agent). If one row of such a pair is present in $R$ but the other is absent (or vice-versa), we are free to assign the one present in any way that satisfies the GH theorem. The difficulty will arise when both rows of a pair are present in $R$. To ensure that the GH theorem is satisfied we may have to ensure that the two rows of such a pair are separated.
When $R$ is the set of all constraints we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows to $B$ and $B'$, making sure that after each assignment the conditions of the GH theorem are satisfied for the rows assigned so far. It will be clear that this procedure can also be applied when only a subset of the constraints is present (indeed, satisfying the GH theorem will be easier in this case).
Fix an agent $i$. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex of the component (which is a tree) to be a root and direct all edges `away' from the root (when $R$ is a proper subset of the constraints, we delete from the tree any edge for which at most one of its associated pair of rows appears in $R$). Label the root $+$. Label all its neighbors $-$, label the neighbors of the neighbors $+$, and so on. If a vertex was labeled $+$, assign its associated row to the set $B$, otherwise to the set $B'$. This produces a partition of the constraints considered so far satisfying GH.
Initially, all leaves and edges of the tree are unmarked. Trace out a path from the root to one of the leaves and mark that leaf. Each unmarked directed edge $(j,k)$ on this path corresponds to a pair of rows, one associated with the endpoint $j$ and one with the endpoint $k$. Assign the first to the same set as the label of $j$. Assign the second to the same set as the label of vertex $k$. Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge $(j,k)$. If we repeat this procedure again with another path from the root to an unmarked leaf, we will violate the GH theorem. To see why, suppose the tree contains the edge $(j,k)$ as well as $(k,l)$. Suppose $k$ was labeled $+$ on the first iteration and the edge $(j,k)$ was marked. This means the row associated with endpoint $k$ of that edge was assigned to $B$. Subsequently, the row associated with endpoint $k$ of the edge $(k,l)$ will also be assigned to $B$, which will produce a partition that violates the GH theorem. We can avoid this problem by flipping the labels on all the vertices before repeating the path-tracing procedure.
What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:
Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.
I’m skeptical. To see why, replace electricity by air travel.
Let $V$ be the set of vertices and $E$ the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in $E$. Let $A$ be the set of directed arcs that result. Hence, $(i,j) \in A$ means that the edge between $i$ and $j$ is directed from $i$ to $j$. Notice, if $(i,j) \in A$, then $(j,i) \notin A$.
Associated with each $(i,j) \in A$ is a number $x_{ij}$ that we interpret as a flow of electricity. If $x_{ij} > 0$ we interpret this to be a flow from $i$ to $j$. If $x_{ij} < 0$ we interpret this as a flow from $j$ to $i$.
- $r_{ij}$: the resistance on link $(i,j)$.
- $c_i$: the unit cost of injecting current into node $i$.
- $v_i$: the marginal value of current consumed at node $i$.
- $d_i$: the amount of current consumed at node $i$.
- $s_i$: the amount of current injected at node $i$.
- $K_{ij}$: the capacity of link $(i,j)$.
Current must satisfy two conditions. The first is conservation of flow at each node:
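(A sketch in the notation above.)

\[
s_i \;+ \sum_{j:\,(j,i)\in A} x_{ji} \;=\; d_i \;+ \sum_{j:\,(i,j)\in A} x_{ij} \qquad \forall\, i \in V.
\]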
The second is Ohm's law: there exist node potentials $\{\theta_i\}_{i \in V}$ such that
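(Again a sketch in the same notation.)

\[
\theta_i - \theta_j \;=\; r_{ij}\, x_{ij} \qquad \forall\, (i,j) \in A.
\]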
Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add resistances in series, add their reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
At each node $i$ there is a power supplier with a constant marginal cost of production of $c_i$ for up to $S_i$ units. At each node $i$ there is also a consumer with a constant marginal value of $v_i$ for up to $D_i$ units. A natural optimization problem to consider is
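(A sketch in the notation above; $S_i$ and $D_i$ are the supply and demand caps just introduced.)

\[
\max \;\; \sum_{i \in V} v_i\, d_i \;-\; \sum_{i \in V} c_i\, s_i
\]

subject to flow conservation at every node, Ohm's law for some potentials $\theta$, $|x_{ij}| \le K_{ij}$ for every link, and $0 \le d_i \le D_i$, $0 \le s_i \le S_i$ for every node.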
This is the problem of finding a flow that maximizes surplus.
Let $\mathcal{C}$ be the set of cycles of the network. Observe that each $C \in \mathcal{C}$ corresponds to a set of arcs in $A$ once we ignore the orientation of the edges. For each cycle $C$, let $C^+$ denote the edges in $C$ that are traversed in accordance with their orientation. Let $C^-$ be the set of edges in $C$ that are traversed in the opposing orientation.
We can project out the $\theta$ variables and reformulate as
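(A sketch of the reformulation: the existence of node potentials is equivalent to requiring that resistance-weighted flows sum to zero around every cycle.)

\[
\max \;\; \sum_{i \in V} v_i\, d_i \;-\; \sum_{i \in V} c_i\, s_i
\]

subject to flow conservation at every node,

\[
\sum_{(i,j) \in C^+} r_{ij}\, x_{ij} \;-\; \sum_{(i,j) \in C^-} r_{ij}\, x_{ij} \;=\; 0 \qquad \forall\, C \in \mathcal{C},
\]

$|x_{ij}| \le K_{ij}$, $0 \le d_i \le D_i$, and $0 \le s_i \le S_i$.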
Recall the scenario we ended with in part 1: three nodes, 1, 2 and 3, with a link between every pair, and suppose the resistances $r_{ij}$ are the same on every link. Only the link $(1,3)$ has a capacity constraint, of 600. There are suppliers at nodes 1 and 2 with constant marginal costs $c_1 < c_2$; both suppliers, and the links $(1,2)$ and $(2,3)$, have unlimited capacity. At node 3, the marginal value is $v_3$ (with $v_3 > c_2$) for up to 1500 units and zero thereafter. The optimization problem is
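(A sketch of the instance, with arcs oriented $1 \to 2$, $1 \to 3$, $2 \to 3$ and the common resistance normalized to 1.)

\[
\max \;\; v_3\, d_3 \;-\; c_1\, s_1 \;-\; c_2\, s_2
\]

subject to

\[
x_{12} + x_{13} = s_1, \qquad x_{23} - x_{12} = s_2, \qquad x_{13} + x_{23} = d_3,
\]
\[
x_{12} + x_{23} = x_{13} \;\;\text{(the cycle constraint)}, \qquad |x_{13}| \le 600, \qquad 0 \le d_3 \le 1500, \quad s_1, s_2 \ge 0.
\]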
Notice, for every unit of flow sent along $(1,3)$, half a unit of flow must be sent along $(1,2)$ and $(2,3)$ as well to satisfy the cycle flow constraint.
The solution to this problem (assuming $c_1 < c_2 < v_3$) is $s_1 = 300$, $s_2 = 1200$, $x_{12} = -300$, $x_{13} = 600$, $x_{23} = 900$ and $d_3 = 1500$. What is remarkable about this is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along $(1,3)$ and the remaining 900 units along the path through node 2. This flow violates the cycle constraint.
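Here is a small numerical check of this instance, solving the linear program above with scipy. The marginal costs and the marginal value are placeholder numbers ($c_1 = 10$, $c_2 = 20$, $v_3 = 100$) chosen only to satisfy $c_1 < c_2 < v_3$; the capacity of 600 and the demand cap of 1500 come from the example.

```python
# Numerical check of the three-node example. Costs and value are hypothetical.
from scipy.optimize import linprog

c1, c2, v3 = 10.0, 20.0, 100.0   # placeholder marginal costs and value

# Variables: [x12, x13, x23, s1, s2, d3]
objective = [0, 0, 0, c1, c2, -v3]           # linprog minimizes, so negate the surplus

A_eq = [
    [1, 1, 0, -1, 0, 0],    # node 1: x12 + x13 = s1
    [-1, 0, 1, 0, -1, 0],   # node 2: x23 - x12 = s2
    [0, 1, 1, 0, 0, -1],    # node 3: x13 + x23 = d3
    [1, -1, 1, 0, 0, 0],    # cycle (equal resistances): x12 + x23 = x13
]
b_eq = [0, 0, 0, 0]

bounds = [(None, None), (-600, 600), (None, None),   # only link (1,3) is capacitated
          (0, None), (0, None), (0, 1500)]

res = linprog(objective, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(dict(zip(["x12", "x13", "x23", "s1", "s2", "d3"], res.x)))
# Expected (up to rounding): x12=-300, x13=600, x23=900, s1=300, s2=1200, d3=1500
```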
In this example, when generator 1 injects electricity into the network to serve customer 3’s demand, a positive amount of that electricity must flow along every path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3’s demands. However, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently to supply power. The shared infrastructure requires that they co-ordinate what they inject into the system. This need for coordination is the argument for a clearing house not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.
The externalities caused by electricity flows are not proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual to the surplus maximization problem.
Let $y_i$ be the dual variable associated with the flow balance constraint at node $i$. Let $\mu_C$ be the dual variable associated with the cycle constraint for cycle $C$. Let $\nu_{ij}$ and $\bar{\nu}_{ij}$ be associated with the link capacity constraints. Let $\sigma_i$ and $\tau_i$ be associated with the remaining two constraints, the caps on injection and on consumption. These can be interpreted as the profit of supplier $i$ and the surplus of customer $i$ respectively. For completeness, the dual would be:
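(A sketch of the dual of the program sketched above, with the balance constraint at each node oriented so that $y_i$ plays the role of a nodal price; the algebra and sign conventions are one choice among several.)

\[
\min \;\; \sum_{(i,j) \in A} K_{ij}\,(\nu_{ij} + \bar{\nu}_{ij}) \;+\; \sum_{i \in V} S_i\, \sigma_i \;+\; \sum_{i \in V} D_i\, \tau_i
\]

subject to

\[
\nu_{ij} - \bar{\nu}_{ij} + y_i - y_j + \sum_{C:\,(i,j)\in C^+} r_{ij}\, \mu_C \;-\; \sum_{C:\,(i,j)\in C^-} r_{ij}\, \mu_C \;=\; 0 \qquad \forall\, (i,j) \in A,
\]
\[
\sigma_i \;\ge\; y_i - c_i, \qquad \tau_i \;\ge\; v_i - y_i, \qquad \nu, \bar{\nu}, \sigma, \tau \;\ge\; 0 \qquad \forall\, i \in V.
\]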
Now $y_i$ has a natural interpretation as a price: it is paid for consumption at node $i$ and received for supply injected at node $i$. The $\nu_{ij}$'s and $\bar{\nu}_{ij}$'s can be interpreted as the price of capacity. However, $\mu_C$ is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market to generate these prices.