You are currently browsing the category archive for the ‘Mechanism design’ category.

The news of Stanley Reiter’s passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age), and survived the Battle of the Bulge. On the wings of the GI Bill he went through City College, which, in those days, was the gate through which many outsiders passed on their way to the intellectual aristocracy.

But in the importance and noise of to-morrow
When the brokers are roaring like beasts on the floor of the Bourse

Perhaps a minute to recall what Stan left behind.

Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best known example of this is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.

Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.

In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete optimizing”, SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce a tour at least as good as the tours reported in earlier papers. The ideas were extended, with Don Rice, to a local search heuristic for non-concave mixed integer programs, along with a computational study of its performance.

Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then, on to Northwestern to build two groups: one in the Economics department and another (in collaboration with Mort Kamien) in the business school.

I had the opportunity to participate in a delightful workshop on mechanism design and the informed principal organized by Thomas Troeger and Tymofiy Mylovanov. The setting was a charming ‘schloss’ (manse rather than castle) an hour and a half outside of Mannheim. They had gathered together a murderers’ row of speakers and auditors. Suffice it to say I was the infimum of the group and lucky to be there.

One of the (many) remarkable talks was given by Roger Myerson on his 1983 paper entitled ‘Mechanism Design by an Informed Principal’. Kudos to Thomas and Tymofiy for coming up with the idea of doing this. It brought to mind some couplets from Locksley Hall:

When the centuries behind me like a fruitful land reposed;
When I clung to all the present for the promise that it closed:

When I dipt into the future far as human eye could see;
Saw the Vision of the world and all the wonder that would be.—

By the way, the last pair of lines appears on the dedication plaque that graces the USS Voyager (of the Star Trek franchise).

What did Roger do? He tried as best as possible, given the gulf of time, to explain why he had chosen the tack that he did in the paper (axiomatic) and his hope for how it would influence research on the subject.

A principal with private information must propose a mechanism to an agent. However, the choice of mechanism will reveal something of the principal’s private information to the agent. Thus, the problem of mechanism design in this setting is not a straight optimization problem. It is, at a high level, a signaling game. The signals are the set of mechanisms that the principal can propose. Thus, one seeks an equilibrium of this game. But which equilibrium?

In section 7 of the paper, Roger approaches the question axiomatically in the spirit of Nash bargaining. Indeed, Roger made just such an analogy in his talk. Nash did not have in mind any particular bargaining protocol, but a conviction that any reasonable protocol must satisfy some natural invariance conditions. Some decades later Rubinstein arrives with a bargaining protocol to justify Nash’s conviction. So, Roger sought the same here and expressed the wish to see this year a vindication of his hopes.

Lest you think the audience accepted Roger’s axioms uncritically, Thomas Troeger pointed out that Roger’s axiom 1 ruled out some possibly natural settings like Rothschild & Stiglitz. Roger argued that it was right and proper to rule this out, and battle was joined!

The Nan-Shan, a Siamese steamer under the command of Captain James MacWhirr, sails on his orders into a typhoon in the South China Sea. Conrad described the captain as

“Having just enough imagination to carry him through each successive day”.

On board, an all-white crew and 200 Chinese laborers, returning home with seven years’ wages stowed in

“a wooden chest with a ringing lock and brass on the corners, containing the savings of his labours: some clothes of ceremony, sticks of incense, a little opium maybe, bits of nameless rubbish of conventional value, and a small hoard of silver dollars, toiled for in coal lighters, won in gambling-houses or in petty trading, grubbed out of earth, sweated out in mines, on railway lines, in deadly jungle, under heavy burdens—amassed patiently, guarded with care, cherished fiercely.”

Ship and souls, driven by MacWhirr’s will, survive the typhoon. The wooden chest does not. Its contents strewn below deck, the silver dollars are mixed together. It falls to MacWhirr to determine how the dollars are to be apportioned among the Chinese laborers so as to forestall an uprising.

“It seems that after he had done his thinking he made that Bun Hin fellow go down and explain to them the only way they could get their money back. He told me afterwards that, all the coolies having worked in the same place and for the same length of time, he reckoned he would be doing the fair thing by them as near as possible if he shared all the cash we had picked up equally among the lot. You couldn’t tell one man’s dollars from another’s, he said, and if you asked each man how much money he brought on board he was afraid they would lie, and he would find himself a long way short. I think he was right there. As to giving up the money to any Chinese official he could scare up in Fuchau, he said he might just as well put the lot in his own pocket at once for all the good it would be to them. I suppose they thought so, too.”

My former colleague Gene Mumy, writing in the JPE, argued that MacWhirr’s solution was arbitrary. We know what MacWhirr’s response would have been:

” The old chief says that this was plainly the only thing that could be done. The skipper remarked to me the other day, ‘There are things you find nothing about in books.’ I think that he got out of it very well for such a stupid man.”

Mumy, undeterred, proposed instead a pivotal mechanism (in the spirit of Clarke, Groves, Tideman and Tullock). For each agent, compute the difference between the total amount of money and the sum of all the other agents’ claims. If an agent claims at most this amount, they receive their claim. If their claim exceeds this amount, they are penalized. Mumy showed that truth telling is a full information Nash equilibrium of the mechanism.
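To make the rule concrete, here is a minimal sketch in Python of the mechanism as I read Mumy’s description. The form of the penalty is not specified above, so the `penalty` argument below is a placeholder of my own.

```python
def pivotal_division(total, claims, penalty=0.0):
    """Sketch of the division rule: an agent's threshold is the total recovered
    minus the sum of everyone else's claims; a claim at or below the threshold
    is paid in full, a larger claim is penalized."""
    payouts = []
    for i, claim in enumerate(claims):
        others = sum(claims) - claim
        threshold = total - others
        payouts.append(claim if claim <= threshold else -penalty)
    return payouts

# Truthful reports sum to the total, so each threshold equals the agent's own
# contribution and every claim is paid in full.
print(pivotal_division(100, [40, 35, 25]))              # [40, 35, 25]
# A unilateral overclaim pushes every claim over its threshold; in particular
# the deviator is penalized rather than paid, so the deviation does not pay.
print(pivotal_division(100, [50, 35, 25], penalty=10))  # [-10, -10, -10]
```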

Saryadar, in a comment in the JPE, criticized Mumy’s solution on the grounds that it rules out pre-play communication among the agents. Such communication could allow agents to transmit threats (I’m claiming everything) that, if credible, change the equilibrium outcome. He also hints that the assumption of common knowledge of the contributions is hard to swallow.

Schweinzer and Shimoji revisit the problem with the observation that truth telling is not the only Nash equilibrium of the mechanism proposed by Mumy. Instead, they treat it as a problem of implementation under incomplete information. The captain is assumed to know the total amount of money to be divided; the agents do not. They propose a mechanism and identify a sufficient condition on beliefs under which truth telling is the unique rationalizable strategy for each agent. The mechanism is in the spirit of a scoring rule and relies on randomization. I think MacWhirr might have objected on the grounds that the money represented the entire savings of the laborers.

Conrad describes the aftermath.

“We finished the distribution before dark. It was rather a sight: the sea running high, the ship a wreck to look at, these Chinamen staggering up on the bridge one by one for their share, and the old man still booted, and in his shirt-sleeves, busy paying out at the chartroom door, perspiring like anything, and now and then coming down sharp on myself or Father Rout about one thing or another not quite to his mind. He took the share of those who were disabled to them on the No. 2 hatch. There were three dollars left over, and these went to the three most damaged coolies, one to each. We turned-to afterwards, and shovelled out on deck heaps of wet rags, all sorts of fragments of things without shape, and that you couldn’t give a name to, and let them settle the ownership themselves.”

Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. The license is sold for a single upfront fee; no royalties or other form of contingent payment. As far as I can tell the design is an open ascending auction.

In an earlier pair of posts I discussed a class of combinatorial auctions when agents have binary quadratic valuations. To formulate the problem of finding a welfare maximizing allocation let {x^k_j = 1} if object {j \in M} is given to agent {k \in N} and zero otherwise. Denote the utility of agent {k \in N} from consuming bundle {S \subseteq M} by

\displaystyle u_k(S) = \sum_{i \in S}u^k_i + \sum_{i, j \in S}w^k_{ij}.

The problem of maximizing total welfare is

\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{i \neq j}w^k_{ij}x^k_ix^k_j

subject to

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M

\displaystyle x^k_i \in \{0,1\}\,\, \forall i \in M, k \in N
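Since the formulation above is compact, a brute force sketch may help make it concrete. The numbers below are made up; the code simply enumerates every assignment of objects to agents (or to no one) and returns a welfare maximizing one, with each pairwise term {w^k_{ij}} keyed by an unordered pair and counted once.

```python
from itertools import product

def welfare(assignment, u, w, agents):
    """Total welfare of an assignment mapping each object to an agent or None."""
    total = 0.0
    for k in agents:
        bundle = [i for i, owner in assignment.items() if owner == k]
        total += sum(u[k][i] for i in bundle)
        total += sum(w[k].get((i, j), 0.0) for i in bundle for j in bundle if i < j)
    return total

def max_welfare(objects, agents, u, w):
    best, best_assignment = float("-inf"), None
    # An owner of None means the object goes unassigned, as the <= 1 constraint allows.
    for owners in product(list(agents) + [None], repeat=len(objects)):
        assignment = dict(zip(objects, owners))
        value = welfare(assignment, u, w, agents)
        if value > best:
            best, best_assignment = value, assignment
    return best, best_assignment

objects, agents = [1, 2, 3], ["a", "b"]
u = {"a": {1: 3, 2: 3, 3: 0}, "b": {1: 0, 2: 2, 3: 4}}
w = {"a": {(1, 2): 2, (2, 3): -5}, "b": {(1, 2): 2, (2, 3): -5}}
print(max_welfare(objects, agents, u, w))  # expect welfare 12: agent a gets {1,2}, b gets {3}
```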

I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency. For each {i,j \in M}, the sign of {w^k_{ij} } and {w^r_{ij}} for any {k, r \in N} is the same. Furthermore, this applies to all pairs {i, j \in M}.

Let {G^w} be a graph with vertex set {M} and for any {i,j \in M} such that {w^k_{ij} \neq 0} introduce an edge {(i,j)}. Because of the sign consistency condition we can label the edges of {G^w} as being positive or negative depending on the sign of {w^k_{ij}}. Let {E^+ = \{(i,j): w^k_{ij} \geq 0\}} and {E^- = \{(i,j): w^k_{ij} < 0\}}. The second condition is that {G^w} be a tree.

The following is the relaxation that they consider:

\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{(i,j) \in E^+ \cup E^-}w^k_{ij}z^k_{ij}

subject to

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M

\displaystyle z^k_{ij} \leq x^k_i, x^k_j\,\, \forall k \in N, (i,j) \in E^+

\displaystyle z^k_{ij} \geq x^k_i + x^k_j - 1\,\, \forall k \in N, (i,j) \in E^-

\displaystyle 0 \leq x^k_i, z^k_{ij} \leq 1\,\, \forall i \in M, k \in N

Denote by {P} the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of {P} are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say :“why the bleep don’t you just read the proof we have.”
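Before the proof, a quick numerical sanity check on a toy instance of my own: two agents, three objects forming a sign-consistent tree with one positive edge {(1,2)} and one negative edge {(2,3)}, and the relaxation above solved with SciPy. If the claim (and my setup) is right, the optimal vertex should come back 0-1.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: x^a_1, x^a_2, x^a_3, x^b_1, x^b_2, x^b_3, z^a_12, z^a_23, z^b_12, z^b_23
u = [3, 3, 0, 0, 2, 4]          # made-up linear utilities
w = [2, -5, 2, -5]              # w^a_12, w^a_23, w^b_12, w^b_23 (sign consistent)
c = [-v for v in u + w]         # linprog minimizes, so negate

A_ub = [
    # sum_k x^k_i <= 1 for each object i
    [1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
    # z <= x at both ends of the positive edge (1,2), for each agent
    [-1, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, -1, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, -1, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, -1, 0, 0, 0, 1, 0],
    # z >= x_2 + x_3 - 1 on the negative edge (2,3), for each agent
    [0, 1, 1, 0, 0, 0, 0, -1, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0, 0, -1],
]
b_ub = [1, 1, 1, 0, 0, 0, 0, 1, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 10)
print(np.round(res.x, 6), -res.fun)  # expect a 0-1 vector with welfare 12
```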

Let {{\cal C}} be the maximal connected components of {G^w} after deletion of the edges in {E^-} (call this {G^+}). The proof will be by induction on {|{\cal C}|}. The case {|{\cal C}| = 1} follows from total unimodularity. I prove this later.

Suppose {|{\cal C}| > 1}. Let {({\bar z}, {\bar x})} be an optimal solution to our linear program. We can choose {({\bar z}, {\bar x})} to be an extreme point of {P}. As {G^w} is a tree, there must exist a {C \in {\cal C}} incident to exactly one negative edge, say {(p,q)}. Denote by {P'} the polyhedron {P} restricted to just the vertices of {C} and by {Q} the polyhedron {P} restricted to just the vertices in the complement of {C}. By the induction hypothesis, both {P'} and {Q} are integral polyhedra. Each extreme point of {P'} ({Q}) assigns a vertex of {C} (of the complement of {C}) to a particular agent. Let {X_1, X_2, \ldots, X_a} be the set of extreme points of {P'}. If in extreme point {X_r}, vertex {p} is assigned to agent {k} we write {X_{rk} = 1} and zero otherwise. Similarly with the extreme points {Y_1, Y_2, \ldots, Y_b} of {Q}. Thus, {Y_{rk} = 1} if {Y_r} assigns vertex {q} to agent {k}. Let {v(X_{r})} be the objective function value of the assignment {X_r}, and similarly for {v(Y_{r})}.

Now {({\bar z}, {\bar x})} restricted to {P'} can be expressed as {\sum_r\lambda_{r}X_{r}}. Similarly, {({\bar z}, {\bar x})} restricted to {Q} can be expressed as {\sum_r\mu_{r}Y_{r}}. We can now reformulate our linear program as follows:

\displaystyle \max \sum_r\lambda_{r}v(X_{r}) + \sum_r\mu_{r}v(Y_{r}) -\sum_{k \in N} |w^k_{pq}|y^k_{pq}

subject to

\displaystyle -\sum_r\lambda_{r} = -1

\displaystyle -\sum_r\mu_{r} = -1

\displaystyle \sum_{r: X_{rk} = 1}\lambda_{r} + \sum_{r: Y_{rk} = 1}\mu_{r} \leq y^k_{pq} + 1\,\, \forall k \in N

\displaystyle \lambda_{r}, \mu_{r}, y^k_{pq} \geq 0\,\, \forall r, k

The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because {X_{rk}} and {X_{rk'}} cannot both be 1, and similarly with the {Y}‘s). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can easily be modified to handle the case where every cycle in {G^w} contains a positive even number of negative edges.

Return to the case {|{\cal C}| = 1}. Consider the polyhedron {P} restricted to just one {C \in {\cal C}}. It will have the form:

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in C

\displaystyle z^k_{ij} \leq x^k_i, x^k_j\,\, \forall k \in N, (i,j) \in E^+ \cap C

\displaystyle 0 \leq x^k_i, z^k_{ij} \leq 1\,\, \forall i \in C, k \in N

Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset, {S}, of rows/constraints. The goal is to partition them into two sets {L} and {R} so that, column by column, the sum of the entries in {L} and the sum of the entries in {R} differ by at most one.

Observe that the rows associated with the constraints {\sum_{k \in N}x^k_i \leq 1} are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows so as to satisfy the GH theorem. If {z^k_{ij} - x^k_i \leq 0} is present in {S} but {z^k_{ij} - x^k_j \leq 0} is absent (or vice-versa), we are free to assign the row associated with {z^k_{ij} - x^k_i \leq 0} in any way that satisfies the GH theorem. The difficulty arises when rows involving all of {z^k_{ij}}, {x^k_i} and {x^k_j} are present in {S}. To ensure that the GH condition is satisfied we may have to ensure that the rows associated with {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0} are separated.

When {S} is the set of all constraints we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows to {L} and {R} making sure that after each assignment the conditions of the GH theorem are satisfied for the rows that have been assigned. It will be clear that this procedure can also be applied when only a subset of constraints are present (indeed, satisfying the GH theorem will be easier in this case).

Fix an agent {k \in N}. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in {C} (which is a tree) to be the root and direct all edges ‘away’ from the root (when {S} is a proper subset of the constraints, delete from {C} any edge {(i,j)} for which at most one of the pair {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0} appears in {S}). Label the root {L}. Label all its neighbors {R}, label the neighbors of the neighbors {L} and so on. If vertex {i \in C} was labeled {L}, assign the row {\sum_{k \in N}x^k_i \leq 1} to the set {L}, otherwise to the set {R}. This produces a partition of the constraints of the form {\sum_{k \in N}x^k_i \leq 1} satisfying GH.

Initially, all leaves and edges of {C} are unmarked. Trace out a path from the root to one of the leaves of {C} and mark that leaf. Each unmarked directed edge {(i,j)} on this path corresponds to the pair {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0}. Assign {z^k_{ij} - x^k_i \leq 0} to the set that is the label of vertex {i}. Assign {z^k_{ij} - x^k_j \leq 0} to the set that is the label of vertex {j}. Notice that in making these assignments the conditions of the GH theorem continue to be satisfied. Mark the edge {(i,j)}. If we simply repeat this procedure with another path from the root to an unmarked leaf, we may violate the GH theorem. To see why, suppose the tree contains the edge {(i,j)} as well as {(i,t)}. Suppose {i} was labeled {L} on the first iteration and {(i,j)} was marked. This means {z^k_{ij} - x^k_{i} \leq 0} was assigned to {L}. Subsequently {z^k_{it} - x^k_i \leq 0} will also be assigned to {L}, which produces a partition that violates the GH theorem. We can avoid this problem by flipping the labels on all the vertices before repeating the path-tracing procedure.
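As a sanity check on the {|{\cal C}| = 1} claim, here is a brute force test in Python on a tiny single-agent instance of my own (a path on three vertices, both edges positive), using the characterization that a matrix is totally unimodular if and only if every square submatrix has determinant in {-1, 0, 1}.

```python
import itertools
import numpy as np

# Columns: x_1, x_2, x_3, z_12, z_23 (one agent, path 1-2-3, both edges positive).
# Rows: x_i <= 1 for each vertex, and z_e - x_i <= 0 for each edge-vertex incidence.
A = np.array([
    [ 1,  0,  0, 0, 0],   # x_1 <= 1
    [ 0,  1,  0, 0, 0],   # x_2 <= 1
    [ 0,  0,  1, 0, 0],   # x_3 <= 1
    [-1,  0,  0, 1, 0],   # z_12 - x_1 <= 0
    [ 0, -1,  0, 1, 0],   # z_12 - x_2 <= 0
    [ 0, -1,  0, 0, 1],   # z_23 - x_2 <= 0
    [ 0,  0, -1, 0, 1],   # z_23 - x_3 <= 0
])

def is_totally_unimodular(A):
    """Brute force: every square submatrix must have determinant in {-1, 0, 1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = int(round(np.linalg.det(A[np.ix_(rows, cols)])))
                if d not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(A))  # expect True
```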

What is the institutional detail that makes electricity special? It’s in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:

Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.

I’m skeptical. To see why, replace electricity by air travel.

Let {V} be the set of vertices and {E^*} the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in {E^*}. Let {E} be the set of directed arcs that result. Hence, {(i,j) \in E} means that the edge {(i,j)} is directed from {i} to {j}. Notice, if {(i,j) \in E}, then {(j,i) \not \in E}.

Associated with each {(i,j) \in E} is a number {x_{ij}} that we interpret as a flow of electricity. If {x_{ij} > 0} we interpret this to be a flow from {i} to {j}. If {x_{ij} < 0} we interpret this as a flow from {j} to {i}.

  1. {\rho_{ij}} is the resistance on link {(i,j)}.
  2. {c_i} unit cost of injecting current into node {i}.
  3. {v_i} marginal value of current consumed at node {i}.
  4. {d_i} amount of current consumed at node {i}.
  5. {s_i} amount of current injected at node {i}.
  6. {K_{ij}} capacity of link {(i,j)}.

Current must satisfy two conditions. The first is conservation of flow at each node:

\displaystyle s_i + \sum_{j: (j,i) \in E}x_{ji} = \sum_{j: (i,j) \in E}x_{ij} + d_i\,\, \forall i \in V

The second is Ohm’s law. There exist node potentials {\{\phi_i\}_{i \in V}} such that

\displaystyle \rho_{ij}x_{ij} = \phi_i - \phi_j\,\, \forall (i,j) \in E.

Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add resistances in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm’s law as an optimization problem. It’s not relevant for the economics, but charming nonetheless.
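Here is a small numerical illustration (a three-node example of my own): build the weighted Laplacian with conductances {1/\rho_{ij}}, pin down a reference potential, solve for the remaining potentials under a unit injection, and read off the flows and the effective resistance. The answer matches the schoolboy series/parallel calculation.

```python
import numpy as np

# Nodes 0, 1, 2 with resistances rho_01 = 1, rho_12 = 1, rho_02 = 2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]
n = 3

# Weighted graph Laplacian with conductances 1 / rho.
L = np.zeros((n, n))
for i, j, rho in edges:
    g = 1.0 / rho
    L[i, i] += g
    L[j, j] += g
    L[i, j] -= g
    L[j, i] -= g

# Inject 1 unit of current at node 0, withdraw it at node 2, and ground node 2
# (phi_2 = 0) to remove the Laplacian's null space.
b = np.array([1.0, 0.0, -1.0])
phi = np.zeros(n)
phi[:2] = np.linalg.solve(L[:2, :2], b[:2])

# Ohm's law: the flow on each oriented edge is (phi_i - phi_j) / rho_ij.
flows = {(i, j): (phi[i] - phi[j]) / rho for i, j, rho in edges}
print(flows)
# Effective resistance between 0 and 2 for a unit injection: series 1 + 1 = 2,
# in parallel with 2, gives 1, the schoolboy answer.
print(phi[0] - phi[2])  # expect 1.0
```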

At each node {i \in V} there is a power supplier with a constant marginal cost of production of {c_i} up to {S_i} units. At each {i \in V} there is a consumer with a constant marginal value of {v_i} up to {D_i} units. A natural optimization problem to consider is

\displaystyle \max \sum_{i \in V}[v_id_i - c_is_i]

subject to

\displaystyle \sum_{j: (i,j) \in E}x_{ij} -\sum_{j: (j,i) \in E}x_{ji} - s_i + d_i= 0\,\, \forall i \in V

\displaystyle \rho_{ij}x_{ij} = \phi_i - \phi_j\,\, \forall (i,j) \in E

\displaystyle -K_{ij} \leq x_{ij} \leq K_{ij}\,\, \forall (i,j) \in E

\displaystyle 0 \leq s_i \leq S_i\,\, \forall i \in V

\displaystyle 0 \leq d_i \leq D_i\,\, \forall i \in V

This is the problem of finding a flow that maximizes surplus.

Let {{\cal C}} be the set of cycles in {(V, E^*)}. Observe that each {C \in {\cal C}} corresponds to a cycle in {(V, E)} if we ignore the orientation of the edges. For each cycle {C \in {\cal C}}, let {C^+} denote the edges in {E} that are traversed in accordance with their orientation. Let {C^-} be the set of edges in {C} that are traversed in the opposing orientation.

We can project out the {\phi} variables and reformulate as

\displaystyle \max \sum_{i \in V}[v_id_i - c_is_i]

subject to

\displaystyle \sum_{j: (i,j) \in E}x_{ij} -\sum_{j: (j,i) \in E}x_{ji} - s_i + d_i= 0\,\, \forall i \in V

\displaystyle \sum_{(i,j) \in C^+}\rho_{ij}x_{ij} - \sum_{(i,j) \in C^-}\rho_{ij}x_{ij} = 0\,\, \forall \,\, C \in {\cal C}

\displaystyle -K_{ij} \leq x_{ij} \leq K_{ij}\,\, \forall (i,j) \in E

\displaystyle 0 \leq s_i \leq S_i\,\, \forall i \in V

\displaystyle 0 \leq d_i \leq D_i\,\, \forall i \in V

Recall the scenario we ended with in part 1. Let {V = \{1, 2, 3\}}, {E = \{(1,3), (1,2), (2,3)\}} and in addition suppose {\rho_{ij} = 1} for all {(i,j)}. Only {(1,3)} has a capacity constraint, of 600. Let {D_1 = D_2 = 0} and {S_3 = 0}. Also {c_1 = 20} and {c_2 = 40}, and each generator has unlimited capacity. At node 3, the marginal value is {V} (assumed sufficiently large) up to 1500 units and zero thereafter. The optimization problem is

\displaystyle \max Vd_3 - 20s_1 - 40 s_2

subject to

\displaystyle x_{12} + x_{13} - s_1 = 0

\displaystyle x_{23} - s_2 - x_{12} = 0

\displaystyle d_3 - x_{13} - x_{23} = 0

\displaystyle x_{13} - x_{23} - x_{12} = 0

\displaystyle -600 \leq x_{13} \leq 600

\displaystyle 0 \leq d_3 \leq 1500

Notice, for every unit of flow sent along {(1,3)}, half a unit of flow must be sent along {(1,2)} and {(2,3)} as well to satisfy the cycle flow constraint.

The solution to this problem is {x_{13} = 600}, {x_{12} = -300}, {x_{23} = 900}, {s_1 = 300}, {s_2 = 1200} and {d_3 = 1500}. What is remarkable is that not all of customer 3’s demand is met by the lowest cost producer, even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along {(1,3)} and 900 units along {(1,2) \rightarrow (2,3)}. This flow violates the cycle constraint.
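For anyone who wants to check the arithmetic, here is a sketch that re-solves the example with SciPy’s LP solver. I take the marginal value at node 3 to be 100, an arbitrary ‘sufficiently large’ number; the variable order is (x13, x12, x23, s1, s2, d3) and the objective is negated because linprog minimizes.

```python
from scipy.optimize import linprog

c = [0, 0, 0, 20, 40, -100]     # minimize 20 s1 + 40 s2 - 100 d3

A_eq = [
    [1, 1, 0, -1, 0, 0],        # node 1: x12 + x13 - s1 = 0
    [0, -1, 1, 0, -1, 0],       # node 2: x23 - x12 - s2 = 0
    [-1, 0, -1, 0, 0, 1],       # node 3: d3 - x13 - x23 = 0
    [1, -1, -1, 0, 0, 0],       # cycle:  x13 - x12 - x23 = 0 (unit resistances)
]
b_eq = [0, 0, 0, 0]

bounds = [(-600, 600),          # x13 limited by the line capacity
          (None, None),         # x12 free
          (None, None),         # x23 free
          (0, None),            # s1 >= 0, unlimited capacity
          (0, None),            # s2 >= 0, unlimited capacity
          (0, 1500)]            # d3 up to 1500

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(dict(zip(["x13", "x12", "x23", "s1", "s2", "d3"], res.x)))
# Expect x13 = 600, x12 = -300, x23 = 900, s1 = 300, s2 = 1200, d3 = 1500.
```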

In this example, when generator 1 injects electricity into the network to serve customer 3’s demand, a positive amount of that electricity must flow along every path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3’s demand. Moreover, to accommodate generator 2, it must actually reduce its own injection! Hence, customer 3 cannot contract with generators 1 and 2 independently to supply power. The shared infrastructure requires that they coordinate what they inject into the system. This need for coordination is the argument for a clearing house, not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.

The externalities caused by electricity flows are not, by themselves, proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual of the surplus maximization problem.

Let {y_i} be the dual variable associated with the flow balance constraint at node {i}. Let {\lambda_C} be associated with the cycle constraints. Let {\nu_{ij}} and {\theta_{ij}} be associated with the link capacity constraints. Let {\mu_i} and {\sigma_i} be associated with the remaining two constraints. These can be interpreted as the profit of supplier {i} and the surplus of customer {i} respectively. For completeness the dual is:

\displaystyle \min \sum_{(i,j) \in E}[\nu_{ij} + \theta_{ij}]K_{ij} + \sum_{i \in V}[S_i \mu_i + D_i \sigma_i]

subject to

\displaystyle -\theta_{ij} + \nu_{ij} + \rho_{ij}\sum_{C^+ \ni (i,j)}\lambda_C - \rho_{ij}\sum_{C^- \ni (i,j)}\lambda_C + y_i - y_j = 0\,\, \forall (i,j) \in E

\displaystyle \mu_i - y_i \geq -c_i\,\, \forall i \in V

\displaystyle \sigma_i + y_i \geq v_i\,\, \forall i \in V

\displaystyle \nu_{ij}, \theta_{ij}, \mu_i, \sigma_i \geq 0\,\, \forall i \in V,\,\,\forall (i,j) \in E

Now {y_i} has a natural interpretation as the price paid for consumption at node {i} and received for supply injected at node {i}. {\nu_{ij}} and {\theta_{ij}} can be interpreted as prices for link capacity, and {\mu_i} as the price of supplier {i}’s capacity. However, {\lambda_C} is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market generate these prices.

In this, the second lecture, I focus on electricity markets. I’ll divide the summary of that lecture into two parts.

Until the 1980s electricity markets around the world operated as regulated monopolies. Generation (power plants) and distribution (the wires) were combined into a single entity. Beginning with Chile, a variety of Latin American countries started to privatize their electricity markets. So, imagine you were a bright young thing in the early 1980s, freshly baptised in the waters of Lake Michigan off Hyde Park. The General approaches you and says: I want a free market in electricity, make it so (Quiero un mercado libre de la electricidad, que así sea). What would you recommend?

Obviously, privatize the generators by selling them off, perhaps at auction (or, given one’s pedigree, allocate them at random and allow the owners to trade among themselves). What about the wires that carry electricity from one place to another? Tricky. The owner of the wires will have monopoly power, unless there are multiple parallel wires; however, that would lead to inefficient duplication of resources. As a first pass, let’s leave the wires in Government hands. Not obviously wrong. We do that with the road network: the Government owns and maintains it and for a fee grants access to all.

So, competition to supply power but central control of the wires. Assuming an indifferent and benign authority controlling the wires, what will the market for generation look like? To fix ideas, consider a simple case. Two generators {\{1, 2\}} and a customer {3}.

Generator 1 has an unlimited supply and a constant marginal cost of production of $20 a unit. Generator 2 has an unlimited supply and a constant marginal cost of production of $40 a unit. Customer 3 has a constant marginal value of {V} up to 1500 units and zero thereafter. Assume {V} to be sufficiently large to make all subsequent statements true. Initially there are only two wires, one from generator 1 to customer 3 and the other from generator 2 to customer 3. Suppose {\{1,2,3\}} are all price takers. Then, the Walrasian price for this economy will be $20. For customer 3 this is clearly a better outcome than unregulated monopoly, where the price would be {V}. What if the price taking assumption is not valid? An alternative model would be Bertrand competition between 1 and 2. The outcome would be a hair’s breadth below $40. Worse than the Walrasian outcome but still better than unregulated monopoly. It would seem that deregulation is a good idea and, as the analysis above suggests, there is no necessity for a market to be designed. There is a catch. Is the unregulated monopolist the right benchmark? Surely, a regulated monopolist would be better. It’s not clear that one does better than the regulated monopolist.

Now let’s add a wrinkle. Suppose the wire between 1 and 3 has a capacity of 600 units. There are two ways to think of this capacity constraint. The first is as a capacity constraint on generator 1 that we have chosen to model as a constraint on the wire {(1,3)}. The second is that it is indeed a constraint on the wire {(1,3)}. The difference is not cosmetic, as we shall see in a moment.

Suppose it’s a constraint on generator 1’s capacity. Then, under the price taking assumption, the Walrasian price in this economy will be $40. An alternative model of competition would be Bertrand-Edgeworth. In general the equilibria are mixed but, whatever the mixture, the expected price per unit that customer 3 pays cannot exceed $40. In both cases, the outcome is better for customer 3 than under an unregulated monopolist.

Assume now the capacity constraint is on the wire instead. Under the price taking assumption, at a price of $20 a unit, generator 1 is indifferent between supplying any non-negative amount. Generator 2 supplies nothing. However there is no way for supply to meet demand. Why is this? In the usual Walrasian set up each agent reports their supply and demand correspondence based on posted prices and their own information only. To obtain a sensible answer in this case, generator 1 must be aware of the capacity of the network into which its supply will be injected. As the next scenario we consider shows, this is not easy when it comes to electricity.

Suppose there is now a link joining generators 1 and 2 with no capacity constraint. There is still a 600 unit capacity constraint on the link between 1 and 3. One might think that, in this scenario, customer 3 can receive all its demand from generator 1. It turns out that this is not possible because of the way electricity flows in networks.

A recent paper by Bergemann, Brooks and Morris (BBM) supposes a monopolist free to segment the market in any way she can (without worrying about arbitrage) and asks: what is the achievable set of pairs of producer and consumer surplus? BBM give a simple and satisfying answer to this question. This post attempts a short proof of their characterization.

A monopolist faces a market consisting of buyers with valuations {\{v_1, v_2, \ldots, v_n\}}. Order them so that {v_1 < v_2 < \ldots < v_n}. The number of buyers with valuation {v_i} is {d_i}; assume the buyers are divisible. A segmentation of the market is a partition of the buyers into up to {n} markets with the property that the profit maximizing price in market {j = 1, \ldots, n} is {v_j}. If we let {x_{ij}} be the number of buyers with valuation {v_i} in market {j}, then any segmentation is characterized by the following:

\displaystyle \sum_{j=1}^{n}x_{ij} = d_i\,\, \forall i = 1, \ldots, n

\displaystyle v_j\sum_{r \geq j}x_{rj} \geq v_t \sum_{r \geq t}x_{rj}\,\, \forall j = 1, \ldots, n, \,\, \forall t \neq j

\displaystyle x_{ij} \geq 0\,\, \forall i,j

Denote by {X(d)} the set of feasible segmentations. Let {P = \sum_{j=1}^{n}v_j\sum_{r= j}^nx_{rj}} be the profit earned by the monopolist under the segmentation {x \in X(d)}. The consumer surplus of buyers under the segmentation {x \in X(d)} is {C = \sum_{j=1}^{n}\sum_{r = j}^n[v_r-v_j]x_{rj}.}

It is easy to see that {P \leq \sum_{j=1}^nv_jd_j = \Pi_1(d)}. The upper bound is attained by the segmentation that assigns all buyers with valuation {v_j} to the {j^{th}} market and no others; this corresponds to first degree price discrimination. It is also easy to see that {P \geq \max_{1 \leq j \leq n} v_j\sum_{r \geq j}d_r = \Pi_0(d)}. The lower bound comes from the segmentation that assigns all customers to market {k}, where {v_k} is the profit maximizing monopoly price without discrimination. BBM show the following:

Theorem {(P,C)} is feasible iff {\Pi_0(d) \leq P \leq \Pi_1(d)} and {P + C \leq \sum_jv_jd_j}.
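Before turning to the proof, a small numerical illustration of the two benchmarks in the theorem, with made-up numbers:

```python
# Made-up market: valuations v with masses d.
v = [1, 3, 7]
d = [4, 3, 1]
n = len(v)

# Pi_1(d): profit under perfect (first degree) discrimination = total surplus.
pi_1 = sum(v[j] * d[j] for j in range(n))

# Pi_0(d): profit of the best uniform monopoly price, no discrimination.
pi_0 = max(v[j] * sum(d[r] for r in range(j, n)) for j in range(n))

print(pi_0, pi_1)  # 12 and 20 for these numbers
# Per the theorem, (P, C) is feasible exactly when
# pi_0 <= P <= pi_1 and P + C <= sum_j v_j d_j.
```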

That {\Pi_0(d) \leq P \leq \Pi_1(d)} is straightforward. The hard part is to show two things.

1) For any {P} such that {\Pi_0(d) \leq P \leq \Pi_1(d)} there is a {C} such that {P + C = \sum_{j=1}^nv_jd_j}.

2) There exists an {x \in X(d)} such that {P = \Pi_0(d)} and {C = 0}.

To prove the first item on this list (which BBM note in the paper is easy), call a segmentation {x \in X(d)} upper triangular if {x_{ij} = 0} for all {i < j}. In an upper triangular segmentation every buyer present in market {j} has valuation at least {v_j} and so purchases; hence {P + C = \sum_{j=1}^nv_jd_j}. Now suppose {x \in X(d)} is not upper triangular. Then there is a market {k} and a largest index {t < k} with {x_{tk} > 0}. Note {v_k > v_t}.

Let {r = \arg \max_{1 \leq q \leq t}v_q\sum_{i=q}^tx_{ik}}. We construct a new segmentation {y \in X(d)} from {x} by shifting the buyers in market {k} with values below {v_k} into market {r}. As the profit maximizing price for just this portion of buyers is {v_r}, moving them into market {r} leaves the profit maximizing price in market {r} unchanged. Formally:

{y_{ij} = x_{ij}} for all {i} and {j \neq k, r}.
{y_{ik} = x_{ik}} for all {i \geq k}.
{y_{ik} = 0} for all {i \leq t}.
{y_{ir} = x_{ir} +x_{ik}} for all {i \leq t}.

Under segmentation {y}, both {P} and {C} increased in value, contradicting the initial choice of {x}.

To prove the second item on the list, among all feasible segmentations such that {P = \Pi_0(d)}, choose one that minimizes {C}, say {x}. Call {x} lower triangular if {x_{ij} = 0} for all {i > j}. I show that {x} must be lower triangular from which it follows that {C = 0}. The proof will be by induction on {n} the number of distinct valuations.

The case {n=2} is straightforward. Suppose first that {\Pi_0(d) = v_1(d_1 + d_2)}. The following segmentation, as can be verified, does the trick:

{x_{22} = d_2}
{ x_{12} = \alpha} where {v_1[\alpha + d_2] = v_2d_2}
{x_{21} = 0}
{x_{11} = d_1 - \alpha}

If {\Pi_0(d) = v_2d_2}, the segmentation that assigns all buyers to market 2 has the requisite property.
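A quick numeric check of the {n=2} construction, with made-up numbers chosen so that {\Pi_0(d) = v_1(d_1 + d_2)}:

```python
v1, v2 = 1.0, 3.0
d1, d2 = 4.0, 1.0
assert v1 * (d1 + d2) >= v2 * d2     # the uniform monopoly price is v_1

alpha = (v2 - v1) * d2 / v1          # solves v_1 (alpha + d_2) = v_2 d_2
x = {(2, 2): d2, (1, 2): alpha, (2, 1): 0.0, (1, 1): d1 - alpha}

# In market 2 (price v_2) only the v_2 buyers purchase; charging v_1 instead
# would earn exactly the same, so v_2 remains an optimal price there.
profit = v2 * x[(2, 2)] + v1 * x[(1, 1)]
surplus = 0.0                        # every purchaser pays exactly her value
print(alpha, profit, v1 * (d1 + d2), surplus)  # profit equals Pi_0(d), C = 0
```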

Now consider the case of arbitrary {n} and suppose first that {\Pi_0(d) > v_1\sum_{i=1}^nd_i}. Given an instance {d} on {n} valuations construct an instance {d'} on {n-1} valuations {\{v'_2, \ldots, v'_{n}\}} by setting {d'_i = d_i} for all {i \geq 2}. It is easy to see that {\Pi_0(d') = \Pi_0(d)}, i.e., the optimal monopoly profit with no discrimination remains unchanged. By the induction hypothesis there is a segmentation {y \in X(d')} that is lower triangular. To conclude the argument we must show how to convert {y} into a legitimate segmentation for {d}.

{x_{ij} = y_{ij}} for {i,j \geq 2}.
{x_{1i} = \mu_i} where {v_1[\mu_i + \sum_{r=2}^nx_{ri}] \leq v_ix_{ii}} for all {i \geq 1} and {\sum_{i=1}^n\mu_i = d_1}.
If the {\mu_i}‘s can indeed be chosen as specified, then {x \in X(d)} is lower triangular and the corresponding {P = \Pi_0(d)} and {C = 0}. To verify that appropriate {\mu_i}‘s exist, it is enough to check that
\displaystyle d_1 \leq \sum_{i=1}^n\frac{v_i}{v_1}x_{ii} - \sum_{i=1}^n \sum_{r=2}^nx_{ri} = \frac{1}{v_1}\Pi_0(d') - \sum_{r=2}^nd_r

which follows from the hypothesis that {v_1\sum_{r=1}^nd_r \leq \Pi_0(d)}.

To conclude, suppose now that {\Pi_0(d) = v_1\sum_{i=1}^nd_i}. Construct a new instance {d'} on {n-1} valuations {\{v'_1, v'_3 \ldots, v'_{n}\}} by setting {d'_i = d_i} for all {i \geq 3} and {d'_1 = d_1 + d_2}. Notice, {\Pi_0(d') = \Pi_0(d)}. By the induction hypothesis there is a segmentation {y \in X(d')} that is lower triangular. To conclude the argument we must show how to convert {y} into a legitimate segmentation for {d}.

{x_{ij} = y_{ij}} for {i,j \geq 3}.
{x_{2i} = \mu_i} for all {i \geq 1} where {v_2[\mu_i + \sum_{r \geq 3}x_{ri}] \leq v_ix_{ii}}, {\mu_i \leq y_{1i}} and {\sum_{i=1}^n\mu_i = d_2}.
{x_{1i} = y_{1i} - \mu_i} for all {i \geq 1}.

If the {\mu_i}‘s can be chosen as specified then {x} is lower triangular, lies in {X(d)}, and the corresponding {P = \Pi_0(d)} and {C = 0}. Verifying that appropriate {\mu_i}‘s exist can be done in the same way as in the previous case.

In part two, as promised, I turn to the welfare maximization problem. To formulate the problem of finding a welfare maximizing allocation let {x^k_j = 1} if object {j \in M} is given to agent {k \in N} and zero otherwise. Denote the utility of agent {k \in N} from consuming bundle {S \subseteq M} by

\displaystyle u_k(S) = \sum_{i \in S}u^k_i + \sum_{i, j \in S}w^k_{ij}.

The problem of maximizing total welfare is

\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{i \neq j}w^k_{ij}x^k_ix^k_j

subject to

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M

\displaystyle x^k_i \in \{0,1\}\,\, \forall i \in M, k \in N

Welfare maximization with BQP preferences is in general NP-hard. One proof relies on a reduction from the multiway cut problem. Given a graph {G=(V,E)} with edge weight {c_{ij}} for each {(i,j) \in E}, and a set of terminal vertices {T = \{v_1, v_2, \ldots, v_n\}}, a multiway cut is a set of edges whose removal disconnects every pair of terminal vertices. The problem of finding the multiway cut of minimum total weight is called the multiway cut problem. When {T} consists of only two terminals the problem reduces to the well known minimum cut problem. For three or more terminals, it is known that the problem is NP-hard even on planar graphs.

We can obtain the multiway cut problem by setting {w^k_{ij} = c_{ij}} for all {k \in N} and {(i,j) \in E}, with {u^k_i = 0} except that each terminal is forced into a distinct agent’s bundle (say, by giving agent {k} a sufficiently large utility for the {k^{th}} terminal). Any pair {(i,j)} such that {x^k_ix^r_j = 1} for {k \neq r} will be part of a multiway cut. This reduction implies that welfare maximization is NP-hard even when {w^k_{ij} \geq 0} for all {k \in N} and {i,j \in M}. This is in contrast to the case of surplus maximization.

Candogan, Ozdaglar and Parrilo (2013), the paper that prompted this post, identify a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency. For each {i,j \in M} the sign of {w^k_{ij}} and {w^r_{ij}} for any {k, r \in N} is the same. Furthermore, this applies to all pairs {i, j \in M}. Sign consistency by itself is not sufficient to obtain a solvable instance. Another condition is needed. Let {G^w} be a graph with vertex set {M} and for any {i,j \in M} such that {w^k_{ij} \neq 0} introduce an edge {(i,j)}. The second condition is that {G^w} be a tree. Interestingly, Erdős and Székely (1995) show that a generalization of the multiway cut problem, which corresponds to welfare maximization under sign consistency and {u^k_i = 0} for all {k \in N} and {i \in M}, is polynomially solvable when the underlying graph {G} is a tree. The Candogan, Ozdaglar and Parrilo (COP) proof is based on a dynamic programming argument similar to the one used in Erdős and Székely (1994).

The key result in COP is that the following natural linearization of the welfare maximization problem admits an integral optimal solution:

\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{i \neq j}w^k_{ij}y^k_{ij}

subject to

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M

\displaystyle y^k_{ij} \leq \min\{x^k_i, x^k_j\}\,\, \forall w^k_{ij} > 0

\displaystyle y^k_{ij} \geq x^k_i + x^k_j -1\,\, \forall w^k_{ij} < 0

\displaystyle 0 \leq x^k_i, y^k_{ij} \leq 1\,\, \forall i, j \in M, k \in N

There is a connection between the welfare maximization problem and packing subtrees into trees that I want to highlight. It suggests a possible avenue by which one might enlarge the class of preferences COP consider.

Because of the sign consistency condition we can label the edges of {G^w} as being positive or negative depending on the sign of {w^k_{ij}}. Let {E^+ = \{(i,j): w^k_{ij} \geq 0\}} and {E^- = \{(i,j): w^k_{ij} < 0\}}. Let {{\cal C}} be the maximal connected components of {G^w} after deletion of the edges in {E^-} (call this {G^+}). For any {C \in {\cal C}} and {S \subseteq C} let

\displaystyle v_k(S) = \sum_{i \in S}u^k_i + \sum_{i,j \in S}w^k_{ij}.

By the way, for each {C \in {\cal C}} and {k \in N}, {v_k} is supermodular over the subsets of {C}. Let {z_k(S) = 1} if {S \subseteq C} is assigned to agent {k} and zero otherwise. The problem of finding a welfare maximizing allocation can be expressed as follows:

\displaystyle \max \sum_{k \in N}\sum_{C \in {\cal C}}\sum_{S \subseteq C}v_k(S)z_k(S) - \sum_{k \in N}\sum_{(i,j) \in E^-}|w^k_{ij}|y^k_{ij}

subject to

\displaystyle \sum_{C \in {\cal C}}\sum_{S \subseteq C: S \ni i}\sum_{k \in N}z_k(S) \leq 1\,\, \forall i \in M

\displaystyle y^k_{ij} \geq \sum_{S \ni i}z_k(S) + \sum_{S \ni j}z_k(S) - 1\,\, \forall k \in N, (i,j) \in E^-

\displaystyle z_k(S) \in \{0,1\}\,\, \forall k \in N, S \subseteq C \in {\cal C}

\displaystyle y^k_{ij} \in \{0,1\}\,\, \forall k \in N, (i,j) \in E^-

In the above program, we can, without loss of generality, restrict attention to subsets {S} that are subtrees (connected subgraphs) of {G^+}. To see why, suppose that in some optimal solution to the above integer program, {z_k(S) = 1} where {S} is not a subtree. Then we can write {S = A \cup B} where {A} and {B} are disjoint and lie in the same component {C} as {S} does. Furthermore, it must be the case that there is no edge {(i,j) \in E^+} with {i \in A} and {j \in B}. Therefore, {v_k(S) = v_k(A) + v_k(B)}. That means we can construct a new optimal solution by setting {z_k(S) = 0} and raising {z_k(A)} and {z_k(B)} to 1. Note that in the original solution {z_k(A), z_k(B) = 0} by virtue of the first constraint. As long as some chosen {S} is not a subtree we can repeat this argument.

Hence, if we let {{\cal T}} be the set of subtrees of {G^+}, the welfare maximization problem can be expressed as follows:

\displaystyle \max \sum_{k \in N}\sum_{T \in {\cal T}}v_k(T)z_k(T) - \sum_{k \in N}\sum_{(i,j) \in E^-}|w^k_{ij}|y^k_{ij}

subject to

\displaystyle \sum_{T \in {\cal T}: T \ni i}\sum_{k \in N}z_k(T) \leq 1\,\, \forall i \in M

\displaystyle y^k_{ij} \geq \sum_{T \ni i}z_k(T) + \sum_{T \ni j}z_k(T) - 1\,\, \forall k \in N, (i,j) \in E^-

\displaystyle z_k(T) \in \{0,1\}\,\, \forall k \in N, T \in {\cal T}

\displaystyle y^k_{ij} \in \{0,1\}\,\, \forall k \in N, (i,j) \in E^-

It’s important to emphasize that no {T \in {\cal T}} contains a negative edge.

Were it not for the second set of constraints, integrality would follow from Barany, Edmonds and Wolsey (1986). I’m not aware of this variation of the tree packing problem having been considered. A follow-up paper by Aghezzaf and Wolsey (1994) comes close, in the sense of allowing for a piecewise linear concave objective function.

In 1961, Clarence Earl Gideon was charged with breaking and entering with intent to commit petty larceny. Appearing without counsel, Gideon invoked reverent authority:

The United States Supreme Court says I am entitled to be represented by Counsel.

Those words set in train a chain of events that confirmed, three years later, the right of indigent defendants in criminal proceedings, upon request, to have counsel appointed both during trial and on appeal. Qualified counsel, however, is scarce and there is an abundance of indigent defendants. Over the years the state has chosen to solve the problem of matching counsel to indigent defendant by fiat. Judge Posner in US vs Ely justifies this as follows:

There are practical reasons for not giving indigent criminal defendants their choice of counsel. Appointed counsel are not paid at munificent rates under the Criminal Justice Act, 18 U.S.C. § 3006A(d); in the Central District of Illinois, in the most recent year for which data are available (1980), the average fee per case under the Act was only $426.31. Director of Adm. Off. of U.S. Cts., 1982 Ann.Rep. 511 (Exh. C-1). The best criminal lawyers who accept appointments therefore limit the amount of time they are willing to devote to this relatively unremunerative type of work; some criminal lawyers, indeed, only reluctantly agree to serve as appointed counsel, under pressure by district judges to whom they feel a sense of professional obligation. The services of the criminal defense bar cannot be auctioned to the highest bidder among the indigent accused — by definition, indigents are not bidders. But these services must be allocated somehow; indigent defendants cannot be allowed to paralyze the system by all flocking to one lawyer.

Time to sharpen pencils and put on the thinking cap. For a graduate student looking for a topic in market design, I cannot think of a more interesting question than how to match counsel to indigent defendants. One has: moral hazard (on the part of the attorney), asymmetry of information (how does one distinguish between a good lawyer and a bad one?), informed third parties with divided interests (judges who appoint counsel but may be more interested in a speedy trial than a vigorous defense), budget constraints (on the part of defendants) and competing objectives (speedy resolution vs. correct adjudication). For a description of the institution as it is currently structured, and a proposal to revise it based on vouchers, see Friedman and Schulhofer (yes, Friedman fils).
