
When analyzing a mechanism it is convenient to assume that it is direct. The revelation principle allows one to argue that this restriction is without loss of generality. Yet, there are cases where one prefers to implement the indirect version of a mechanism rather than its direct counterpart. The clock version of the English ascending auction and the sealed bid second price auction are the most well known examples (one hopes not the only ones). There are few (i.e. I could not immediately recall any) theorems that uniquely characterize a particular indirect mechanism. It would be nice to have more. What might such a characterization depend upon?

1) Direct mechanisms require that agents report their types. A concern for privacy could be used to `kill’ off a direct mechanism. However, one would first have to rule out the use of trusted third parties (either human or computers implementing cryptographic protocols).

2) Indirect mechanisms can sometimes be thought of as extensive form games, and one might look for refinements of solution concepts for extensive form games that have no counterpart in the direct version of the mechanism. The notion of obviously dominant strategies that appears here is an example. However, indirect mechanisms may introduce equilibria, absent in the direct counterpart, that are compelling for the agents but unattractive for the designer's purposes.

3) One feature of observed indirect mechanisms is that they use simple message spaces, but compensate by using multiple rounds of communication. Thus a characterization would need a constraint on message spaces coupled with a constraint on the number of rounds of communication.

Because I have white hair, and that so sparse as to resemble the Soviet harvest of 1963, I am asked for advice. Just recently I was asked about `hot' research topics in the sharing economy. `You mean a pure exchange economy?', said I in reply. Because I have white hair etc., I sometimes forget to bite my tongue.

Returning to topic, the Economist piece I linked to above gets it about right. With a fall in certain transaction costs, trades that were otherwise infeasible are realized. At a high level there is nothing more to be said beyond what we know already about exchange economies.

A closer look suggests something of interest in the role of the mediator (eBay, Uber) responsible for the reduction in transaction costs. They are not indifferent Walrasian auctioneers but self interested ones. eBay and Uber provide an interesting contrast in `intrusiveness'. The first reduces the costs of search, and alleviates the lemons problem and moral hazard by providing information and managing payments. It does not, however, set prices. These are left to participants to decide. In sum, eBay, it appears, tries to eliminate the textbook obstacles to a perfectly competitive market. Uber also does these things, but more: it chooses prices and the supplier who will meet the reported demand. One might think eBay does not because of the multitude of products it would have to track. The same is true for Uber. A product on Uber is a triple of origin, destination and time of day. The rider and driver may not be thinking about things in this way, but Uber certainly must in deciding prices and which supplier will be chosen to meet the demand. Why doesn't Uber allow riders to post bids and drivers to post asks?
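A back-of-the-envelope sketch of what such a bid/ask market might look like for a single product (one origin-destination-time triple). The greedy matching rule and midpoint pricing below are illustrative assumptions, not a description of anything Uber does:

```python
def clear_double_auction(bids, asks):
    """bids, asks: lists of prices for one product (an origin-destination-time
    triple). Greedily match the highest bids to the lowest asks while gains
    from trade remain; price each trade at the midpoint (an assumption)."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    trades = []
    for b, a in zip(bids, asks):
        if b < a:          # no further gains from trade
            break
        trades.append((b + a) / 2)
    return trades

# riders' bids and drivers' asks for one (origin, destination, time) product
trades = clear_double_auction([10, 8, 5], [4, 6, 9])
# two trades clear, both at the midpoint price 7.0
```

Even this toy version makes Uber's design choice visible: posted bids and asks would have to be cleared product by product, for every origin-destination-time triple.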

Fox News and CNN have been in knots recently about how to allocate scarce debate slots to the many Republican pretenders to the oval office. Should poll numbers be the criterion? What about gender, race, state office, etc.? There is a simple and transparent way to allocate the slots: an auction. Neither CNN nor Fox is a charity. If the debates are a public good, there is no reason why Fox and CNN alone should forgo profits to host them. They should, instead, auction off the rights to participate in the debates.

What design questions would have to be answered? First, how much time to allocate to the debate, i.e., what is the supply of the resource to be allocated? Second, should one auction slots or time? For example, should it be 10 slots of equal time lengths in a 2 hour time period? Or, should one allow the bidders to bid on the actual amount of time they would get in the 2 hour period? Are candidates the only ones allowed to bid, or can anyone submit bids on their behalf? Can one bid to deny time to the candidates? Actually, as these are not debates at all, but parallel news conferences, perhaps one should auction off the right to answer questions. In real time. That would be entertainment!
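To make the slot-auction idea concrete, here is a minimal sketch of one possible design, assuming slots of equal length and unit demand: sell the $k$ slots at a uniform price equal to the highest losing bid (a $k+1$st price auction, under which truthful bidding is a dominant strategy). The candidates and figures are invented for illustration:

```python
def allocate_slots(bids, k):
    """Uniform-price auction for k identical debate slots.
    bids: dict mapping candidate -> bid; returns (winners, clearing_price).
    Each winner pays the highest rejected bid (the k+1st price rule)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:k]]
    # clearing price = highest losing bid; zero if slots outnumber bidders
    price = ranked[k][1] if len(ranked) > k else 0
    return winners, price

bids = {"A": 120, "B": 95, "C": 80, "D": 60}
winners, price = allocate_slots(bids, k=2)
# A and B win; each pays 80, the highest losing bid
```

Auctioning unequal amounts of time, or the right to answer individual questions, would of course require a richer (combinatorial) design than this.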

According to the NY Times, some Californians

would have to cut their water consumption by 35 percent under the terms of a preliminary plan issued by state officials on Tuesday to meet a 25 percent mandatory statewide reduction in urban water use.

There is an obvious way to achieve this: raise the price of water. If it's obvious, why wasn't it the *first* thing California did some years back when it became clear that water was scarce? In some cases the hands of the state are tied by water rights allocated for commercial purposes, so let's focus on household consumption.

We know that the first tool the state reaches for is regulation. See, for example, the following memo from the California State water board. Interestingly, it begins by noting that the state is in the 4th year of a drought! Eventually, it becomes clear that regulation is insufficient, and only then are price increases considered. In fact, the water reduction targets quoted from the NY Times above come from a report by the Governor of the state that also urges the use of

rate structures and other pricing mechanisms

to achieve reductions. Again, why is price last rather than first? Is this because the state must maintain a credible reputation for not exploiting its monopoly power over water?

If one is going to reduce consumption by raising prices, should it be an across the board price increase? Note that consumption is metered, so the exact amount purchased by a household is known to the provider. The state also has access to other information: location, home size, family size and income. In principle, the state could price discriminate. Here is an example from the Irvine Ranch Water district. Each household is given an initial `allotment' that depends on household size and the area that is landscaped. Exceed the allotment and the price of water rises. For more details and the impact on consumption see the following paper. Is it obvious that this is the `correct' mechanism?
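A sketch of the kind of schedule described above. The tier breakpoints and multipliers are made up for illustration; they are not the Irvine Ranch district's actual rates:

```python
def water_bill(usage, allotment, base_rate=1.0):
    """Increasing block rates in the spirit of allocation-based pricing.
    Usage within the allotment pays the base rate; overuse pays escalating
    multiples of it. Tiers and multipliers here are hypothetical."""
    tiers = [(1.0, base_rate),               # up to 100% of allotment
             (1.5, 2 * base_rate),           # 100% to 150% of allotment
             (float("inf"), 4 * base_rate)]  # beyond 150% of allotment
    bill, prev_cap = 0.0, 0.0
    for frac, rate in tiers:
        cap = frac * allotment
        if usage > prev_cap:
            bill += (min(usage, cap) - prev_cap) * rate
        prev_cap = cap
    return bill

# a household with a 100-unit allotment using 160 units pays
# 100*1 + 50*2 + 10*4 = 240
```

The design question in the post is exactly which features (household size, landscaped area, income) should enter the allotment and the multipliers.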

Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. Thus, the license is sold for a single upfront fee: no royalties or other form of contingent payment. As far as I can tell the design is an open ascending auction.

In an earlier pair of posts I discussed a class of combinatorial auctions in which agents have binary quadratic valuations. To formulate the problem of finding a welfare maximizing allocation let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by

$$u_i(S) = \sum_{j \in S} v^i_j + \sum_{\{j,k\} \subseteq S} w^i_{jk}.$$

The problem of maximizing total welfare is

$$\max \sum_i \sum_j v^i_j x_{ij} + \sum_i \sum_{\{j,k\}} w^i_{jk} x_{ij} x_{ik}$$

subject to

$$\sum_i x_{ij} \leq 1 \quad \forall j, \qquad x_{ij} \in \{0, 1\} \quad \forall i, j.$$

I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called **sign consistency**: for each pair $\{j,k\}$, the sign of $w^i_{jk}$ and $w^r_{jk}$ for any two agents $i \neq r$ is the same. Furthermore, this applies to all pairs $\{j,k\}$.

Let $G^w$ be a graph with one vertex for each object, and for any pair $\{j,k\}$ such that $w^i_{jk} \neq 0$ introduce an edge $(j,k)$. Because of the sign consistency condition we can label the edges of $G^w$ as being positive or negative depending on the sign of $w^i_{jk}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. The second condition is that $G^w$ be a tree.

The following is the relaxation that they consider:

$$\max \sum_i \sum_j v^i_j x_{ij} + \sum_i \sum_{(j,k)} w^i_{jk} y^i_{jk}$$

subject to

$$\sum_i x_{ij} \leq 1 \quad \forall j$$

$$y^i_{jk} \leq x_{ij}, \quad y^i_{jk} \leq x_{ik} \quad \forall (j,k) \in E^+$$

$$y^i_{jk} \geq x_{ij} + x_{ik} - 1, \quad y^i_{jk} \geq 0 \quad \forall (j,k) \in E^-$$

$$0 \leq x_{ij} \leq 1 \quad \forall i, j$$

Denote by $P$ the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of $P$ are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: "why the bleep don't you just read the proof we have."

Let $T_1, \ldots, T_q$ be the maximal connected components of the tree after deletion of the negative edges (call the resulting forest $G^+$). The proof will be by induction on $q$. The case $q = 1$ follows from total unimodularity. I prove this later.

Suppose $q \geq 2$. Let $x^*$ be an optimal solution to our linear program. We can choose $x^*$ to be an extreme point. As the underlying graph is a tree, there must exist a component incident to exactly one negative edge. Denote by $P_1$ the polyhedron restricted to just the vertices of that component and by $P_2$ the polyhedron restricted to just the vertices in its complement. By the induction hypothesis, both $P_1$ and $P_2$ are integral polyhedra. Each extreme point of $P_1$ ($P_2$) assigns a vertex of the component (of its complement) to a particular agent. Let $X$ be the set of extreme points of $P_1$. If in extreme point $x \in X$, vertex $j$ is assigned to agent $i$ we write $x_{ij} = 1$ and zero otherwise. Similarly with $Y$, the set of extreme points of $P_2$: $y \in Y$ assigns vertex $j$ to agent $i$ if $y_{ij} = 1$. Let $c_x$ be the objective function value of the assignment $x$, and similarly $c_y$ for each $y \in Y$.

Now $x^*$ restricted to the component can be expressed as a convex combination of the extreme points of the first polyhedron; similarly, $x^*$ restricted to the complement can be expressed as a convex combination of the extreme points of the second. We can now reformulate our linear program as follows:

subject to

The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because and cannot both be 1, similarly with the 's). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can be easily modified to the case where every cycle must contain a positive even number of negative edges.

Return to the base case of a single component. Consider the polyhedron restricted to just that one component. It will have the form:

Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset, $R$, of rows/constraints. The goal is to partition them into two sets $R_1$ and $R_2$ so that, column by column, the sum of the non-zero entries in $R_1$ and the sum of the non-zero entries in $R_2$ differ by at most one.

Observe that the rows associated with constraints are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows to satisfy the GH theorem. If is present in but is absent (or vice-versa), we are free to assign the row associated with in any way to satisfy the GH theorem. The difficulty will arise when both , and are present in . To ensure that the GH theorem is satisfied we may have to ensure that the rows associated with and be separated.

When the chosen subset is the set of all constraints, we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows and making sure that after each assignment the conditions of the GH theorem are satisfied for the rows that have been assigned. It will be clear that this procedure can also be applied when only a subset of constraints is present (indeed, satisfying the GH theorem will be easier in this case).

Fix an agent. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in (which is a tree) to be a root and direct all edges `away' from the root (when is a subset of the constraints we delete from any edge in which at most one from the pair and appears in ). Label the root. Label all its neighbors with the opposite label, label the neighbors of the neighbors with the first label, and so on. If a vertex received the first label, assign its row to the one set, otherwise to the other. This produces a partition of the constraints satisfying GH.

Initially, all leaves and edges of the tree are unmarked. Trace out a path from the root to one of the leaves and mark that leaf. Each unmarked directed edge on this path corresponds to the pair and . Assign to the same set that is the label of . Assign to the same set that is the label of vertex . Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge. If we repeat this procedure again with another path from the root to an unmarked leaf, we will violate the GH theorem. To see why, suppose the tree contains edge as well as . Suppose was labeled on the first iteration and was marked. This means was assigned to . Subsequently will also be assigned to, which will produce a partition that violates the GH theorem. We can avoid this problem by flipping the labels on all the vertices before repeating the path tracing procedure.
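Since the argument leans on the Ghouila-Houri characterization, here is a brute-force checker one can use to sanity-check total unimodularity claims on small matrices (exponential, so toy sizes only; the function name is my own):

```python
from itertools import combinations, product

def ghouila_houri_tu(A):
    """Brute-force Ghouila-Houri test: A (a list of rows with entries in
    {-1, 0, 1}) is totally unimodular iff every subset of its rows can be
    split into two parts whose signed column sums differ by at most one
    in every column. Exponential in the number of rows."""
    m, n = len(A), len(A[0])
    for r in range(1, m + 1):
        for rows in combinations(range(m), r):
            ok = False
            # sign +1 puts a row in R1, sign -1 in R2
            for signs in product([1, -1], repeat=r):
                if all(abs(sum(s * A[i][j] for s, i in zip(signs, rows))) <= 1
                       for j in range(n)):
                    ok = True
                    break
            if not ok:
                return False
    return True

# a matrix with consecutive ones in each row (an interval matrix) is TU:
# ghouila_houri_tu([[1, 1, 0], [0, 1, 1], [1, 1, 1]])  -> True
# the vertex-edge incidence matrix of a triangle is not:
# ghouila_houri_tu([[1, 1, 0], [1, 0, 1], [0, 1, 1]])  -> False
```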

What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:

Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.

I’m skeptical. To see why, replace electricity by air travel.

Let $V$ be the set of vertices and $E$ the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in $E$. Let $A$ be the set of directed arcs that result. Hence, $(i,j) \in A$ means that the edge $\{i,j\}$ is directed from $i$ to $j$. Notice, if $(i,j) \in A$, then $(j,i) \notin A$.

Associated with each $(i,j) \in A$ is a number $x_{ij}$ that we interpret as a flow of electricity. If $x_{ij} > 0$ we interpret this to be a flow from $i$ to $j$. If $x_{ij} < 0$ we interpret this as a flow from $j$ to $i$.

- $r_{ij}$: the resistance on link $\{i,j\}$.
- $c_i$: unit cost of injecting current into node $i$.
- $v_i$: marginal value of current consumed at node $i$.
- $d_i$: amount of current consumed at node $i$.
- $s_i$: amount of current injected at node $i$.
- $K_{ij}$: capacity of link $\{i,j\}$.

Current must satisfy two conditions. The first is conservation of flow at each node:

$$s_i + \sum_{j: (j,i) \in A} x_{ji} = d_i + \sum_{j: (i,j) \in A} x_{ij} \quad \forall i \in V.$$

The second is Ohm's law: there exist node potentials $\mu_i$ such that

$$x_{ij} = \frac{\mu_i - \mu_j}{r_{ij}} \quad \forall (i,j) \in A.$$

Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add them in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
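The schoolboy rules can be checked numerically: build the weighted Laplacian of the network, inject a unit current, solve for the node potentials, and read off the effective resistance. A minimal sketch (the function name and grounding convention are my own):

```python
import numpy as np

def network_resistance(n, edges, s, t):
    """Effective resistance between nodes s and t in a resistive network
    on nodes 0..n-1. edges: list of (i, j, r). Injects 1 A at s, withdraws
    it at t, and solves the Kirchhoff/Ohm system for node potentials; the
    resistance is the resulting potential drop."""
    L = np.zeros((n, n))              # weighted graph Laplacian
    for i, j, r in edges:
        g = 1.0 / r                   # conductance of the link
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    b = np.zeros(n); b[s] = 1.0
    L[t, :] = 0; L[t, t] = 1; b[t] = 0    # ground node t (potential 0)
    mu = np.linalg.solve(L, b)
    return mu[s] - mu[t]

# series: 2 + 3 = 5 ohms
# network_resistance(3, [(0, 1, 2.0), (1, 2, 3.0)], 0, 2)  -> 5.0
# parallel: 1/(1/2 + 1/3) = 1.2 ohms
# network_resistance(2, [(0, 1, 2.0), (0, 1, 3.0)], 0, 1)  -> 1.2
```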

At each node $i$ there is a power supplier with constant marginal cost of production $c_i$ for up to $S_i$ units. At each $i$ there is a consumer with constant marginal value $v_i$ for up to $D_i$ units. A natural optimization problem to consider is

$$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$$

subject to

$$s_i + \sum_{j: (j,i) \in A} x_{ji} = d_i + \sum_{j: (i,j) \in A} x_{ij} \quad \forall i \in V$$

$$x_{ij} = \frac{\mu_i - \mu_j}{r_{ij}} \quad \forall (i,j) \in A$$

$$-K_{ij} \leq x_{ij} \leq K_{ij} \quad \forall (i,j) \in A$$

$$0 \leq s_i \leq S_i, \quad 0 \leq d_i \leq D_i \quad \forall i \in V$$

This is the problem of finding a flow that maximizes surplus.

Let $C$ be the set of cycles in $E$. Observe that each cycle in $A$ corresponds to a cycle in $E$ if we ignore the orientation of the edges. For each cycle $c \in C$, let $c^+$ denote the edges in $c$ that are traversed in accordance with their orientation. Let $c^-$ be the set of edges in $c$ that are traversed in the opposing orientation.

We can project out the $\mu$ variables and reformulate as

$$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$$

subject to

$$s_i + \sum_{j: (j,i) \in A} x_{ji} = d_i + \sum_{j: (i,j) \in A} x_{ij} \quad \forall i \in V$$

$$\sum_{(i,j) \in c^+} r_{ij} x_{ij} - \sum_{(i,j) \in c^-} r_{ij} x_{ij} = 0 \quad \forall c \in C$$

$$-K_{ij} \leq x_{ij} \leq K_{ij} \quad \forall (i,j) \in A$$

$$0 \leq s_i \leq S_i, \quad 0 \leq d_i \leq D_i \quad \forall i \in V$$

Recall the scenario we ended with in part 1. Let , and in addition suppose for all . Only has a capacity constraint of 600. Let and . Also and and each have unlimited capacity. At node 3, the marginal value is for up to 1500 units and zero thereafter. The optimization problem is

subject to

Notice, for every unit of flow sent along the direct link, half a unit of flow must be sent along each of the two links of the indirect path as well to satisfy the cycle flow constraint.

The solution to this problem is , , , , and . What is remarkable about this is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along one route and 900 units along the other. This flow violates the cycle constraint.

In this example, when generator 1 injects electricity into the network to serve customer 3’s demand, a positive amount of that electricity must flow along *every* path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3’s demands. However, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently to supply power. The shared infrastructure requires that they co-ordinate what they inject into the system. This need for coordination is the argument for a clearing house not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.
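The coupling is easy to verify numerically. Assuming a triangle network with unit resistance on every link (consistent with the post's observation that each unit on the direct link forces half a unit on each link of the indirect path), a unit injected for consumption two nodes away splits two-thirds/one-third over the two paths, whatever the contracts say. Here nodes 0, 1, 2 stand in for the post's nodes 1, 2, 3:

```python
import numpy as np

def dc_flows(inject):
    """inject: net current injected at nodes 0, 1, 2 (must sum to zero).
    Returns the Ohm's-law flows on edges (0,1), (0,2), (1,2) of a triangle
    with unit resistance on every link."""
    L = np.array([[2., -1., -1.],      # graph Laplacian of the triangle
                  [-1., 2., -1.],
                  [-1., -1., 2.]])
    L[2, :] = [0, 0, 1]                # ground node 2 (potential 0)
    b = np.array(inject, dtype=float); b[2] = 0
    mu = np.linalg.solve(L, b)         # node potentials
    return {(0, 1): mu[0] - mu[1],
            (0, 2): mu[0] - mu[2],
            (1, 2): mu[1] - mu[2]}

flows = dc_flows([1, 0, -1])   # 1 unit from node 0 to node 2
# two-thirds travels the direct link, one-third each hop of the other path
```

Superposing a second injection at node 1 changes every one of these flows, which is exactly why the two generators cannot contract with the customer independently.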

The externalities caused by electricity flows are not proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual of the surplus maximization problem.

Let be the dual variable associated with the flow balance constraint. Let be associated with the cycle constraints. Let and be associated with link capacity constraints. Let and be associated with the remaining two constraints. These can be interpreted as the profit of supplier and the surplus of customer respectively. For completeness the dual would be:

subject to

Now the difference in node duals has a natural interpretation as a price to be paid for consumption at one node of supply injected at another. The capacity duals can be interpreted as the price of capacity. However, the cycle dual is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market generate these prices.

In part two, as promised, I turn to the welfare maximization problem. To formulate the problem of finding a welfare maximizing allocation let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by

$$u_i(S) = \sum_{j \in S} v^i_j + \sum_{\{j,k\} \subseteq S} w^i_{jk}.$$

The problem of maximizing total welfare is

$$\max \sum_i \sum_j v^i_j x_{ij} + \sum_i \sum_{\{j,k\}} w^i_{jk} x_{ij} x_{ik}$$

subject to

$$\sum_i x_{ij} \leq 1 \quad \forall j, \qquad x_{ij} \in \{0, 1\} \quad \forall i, j.$$

Welfare maximization with BQP preferences is in general NP-hard. One proof relies on a reduction from the multiway cut problem. Given a graph with an edge weight $w_e$ for each edge $e$, and a set $T$ of terminal vertices, a **multiway cut** is a set of edges whose removal disconnects every pair of terminal vertices. The problem of finding the multiway cut of minimum total weight is called the multiway cut problem. When $T$ consists of only two terminals ($|T| = 2$) the problem reduces to the well known minimum cut problem. For $|T| \geq 3$, it is known that the problem is hard even on planar graphs.
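For toy instances, the definition can be checked directly with a brute-force solver (exponential in the number of edges, so illustration only; function names are my own):

```python
from itertools import combinations

def min_multiway_cut(n, edges, terminals):
    """Minimum-weight multiway cut on a graph with vertices 0..n-1 by
    exhaustive search. edges: list of (u, v, weight) tuples."""
    def separates(kept):
        # label connected components of the graph on the kept edges
        adj = {v: [] for v in range(n)}
        for u, v, _ in kept:
            adj[u].append(v); adj[v].append(u)
        comp, label = {}, 0
        for start in range(n):
            if start not in comp:
                stack = [start]; comp[start] = label
                while stack:
                    x = stack.pop()
                    for y in adj[x]:
                        if y not in comp:
                            comp[y] = label; stack.append(y)
                label += 1
        # a multiway cut puts every terminal in its own component
        return len({comp[t] for t in terminals}) == len(terminals)

    best = float("inf")
    for r in range(len(edges) + 1):
        for cut in combinations(edges, r):
            kept = [e for e in edges if e not in cut]
            if separates(kept):
                best = min(best, sum(w for _, _, w in cut))
    return best

# path 0 - 1 - 2 with terminals {0, 2}: cutting the cheaper edge suffices
# min_multiway_cut(3, [(0, 1, 3), (1, 2, 2)], [0, 2])  -> 2
```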

We can obtain the multiway cut problem by making each terminal correspond to an agent and setting the pairwise terms $w^i_{jk}$ equal to the negated edge weights. Any pair $\{j,k\}$ whose members are assigned to different agents will be part of a multiway cut. This reduction implies that welfare maximization is NP-hard even when $w^i_{jk} \leq 0$ for all $i$ and $\{j,k\}$. This is in contrast to the case of surplus maximization.

Candogan, Ozdaglar and Parrilo (2013), the paper that prompted this post, identify a solvable instance of the welfare maximization problem. They impose two conditions. The first is called **sign consistency**: for each pair $\{j,k\}$, the sign of $w^i_{jk}$ and $w^r_{jk}$ for any two agents $i \neq r$ is the same. Furthermore, this applies to all pairs $\{j,k\}$. Sign consistency by itself is not sufficient to obtain a solvable instance. Another condition is needed. Let $G^w$ be a graph with one vertex for each object, and for any pair $\{j,k\}$ such that $w^i_{jk} \neq 0$ introduce an edge $(j,k)$. The second condition is that $G^w$ be a tree. Interestingly, Erdős and Székely (1995) show that a generalization of the multiway cut problem, which corresponds to welfare maximization under sign consistency with $w^i_{jk} \leq 0$ for all $i$ and $\{j,k\}$, is polynomially solvable when the underlying graph is a tree. The Candogan, Ozdaglar and Parrilo (COP) proof is based on a dynamic programming argument similar to the one used in Erdős and Székely (1994).

The key result in COP is that the following natural linearization of the welfare maximization problem admits an integral optimal solution:

$$\max \sum_i \sum_j v^i_j x_{ij} + \sum_i \sum_{(j,k)} w^i_{jk} y^i_{jk}$$

subject to

$$\sum_i x_{ij} \leq 1 \quad \forall j$$

$$y^i_{jk} \leq x_{ij}, \quad y^i_{jk} \leq x_{ik} \quad \forall (j,k) \in E^+$$

$$y^i_{jk} \geq x_{ij} + x_{ik} - 1, \quad y^i_{jk} \geq 0 \quad \forall (j,k) \in E^-$$

$$0 \leq x_{ij} \leq 1 \quad \forall i, j$$

There is a connection between the welfare maximization problem and packing subtrees into trees that I want to highlight. It suggests a possible avenue by which one might enlarge the class of preferences COP consider.

Because of the sign consistency condition we can label the edges of $G^w$ as being positive or negative depending on the sign of $w^i_{jk}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. Let $T_1, \ldots, T_q$ be the maximal connected components of $G^w$ after deletion of the edges in $E^-$ (call the resulting forest $G^+$). For any component $T_r$, any $S \subseteq T_r$ and any agent $i$, let $u_i(S)$ be agent $i$'s utility from the bundle $S$, as defined earlier.

By the way, for each agent $i$ and component $T_r$, the function $u_i$ is supermodular over the subsets of $T_r$. Let $z_i(S) = 1$ if $S$ is assigned to agent $i$ and zero otherwise. The problem of finding a welfare maximizing allocation can be expressed as follows:

subject to

In the above program, we can, without loss, restrict attention to subsets that are subtrees (connected subgraphs) of $G^+$. To see why, suppose that in some optimal solution to the above integer program, $z_i(S) = 1$ where $S$ is not a subtree. Then, we can write $S = A \cup B$ where both $A$ and $B$ are in the same component of $G^+$ as $S$ is. Furthermore, it must be the case that there is no edge with one endpoint in $A$ and the other in $B$. Therefore, $u_i(S) = u_i(A) + u_i(B)$. That means we can construct a new optimal solution by setting $z_i(S) = 0$ and raising $z_i(A)$ and $z_i(B)$ to 1. Note, in the original solution $z_i(A) = z_i(B) = 0$ by virtue of the first constraint. As long as some chosen subset is not a subtree we can repeat this argument.

Hence, if we let $\mathcal{T}$ be the set of subtrees of $G^+$, the welfare maximization problem can be expressed as follows:

subject to

It's important to emphasize that no subtree in $\mathcal{T}$ contains a negative edge.

Were it not for the second set of the constraints, integrality would follow from Barany, Edmonds and Wolsey (1986). I’m not aware of this variation of the tree packing problem having been considered. A follow up paper by Aghezzaf and Wolsey (1994) comes close in the sense of allowing for a piecewise linear concave objective function.

A paper by Azevedo, Weyl and White in a recent issue of Theoretical Economics caught my eye. It establishes the existence of Walrasian prices in an economy with indivisible goods, a continuum of agents and quasilinear utility. The proof uses Kakutani's fixed point theorem. Here is an argument based on an observation about extreme points of linear programs. It shows that there is a way to scale up the number of agents and goods so that in the scaled up economy a Walrasian equilibrium exists.

First, the observation. Consider the linear program $\max \{cx : Ax \leq b, x \geq 0\}$. The matrix $A$ and the RHS vector $b$ are rational. Let $x^*$ be an optimal extreme point solution and $\Delta$ the absolute value of the determinant of the optimal basis. Then, $\Delta x^*$ must be an integral vector. Equivalently, if in our original linear program we scale the constraints by $\Delta$, the new linear program has an optimal solution that is integral.
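The observation in miniature, using exact rational arithmetic: an extreme point solves $Bx = b$ for some basis $B$, so by Cramer's rule each coordinate is an integer divided by $\det(B)$, and scaling by $|\det(B)|$ clears the denominators. The basis below is a toy example chosen for illustration:

```python
from fractions import Fraction

# a 2x2 basis B and right-hand side b with integer entries
B = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(3)]]
b = [Fraction(1), Fraction(1)]

det = B[0][0] * B[1][1] - B[0][1] * B[1][0]     # det(B) = 5
x = [(b[0] * B[1][1] - B[0][1] * b[1]) / det,   # Cramer's rule
     (B[0][0] * b[1] - b[0] * B[1][0]) / det]
# x = (2/5, 1/5): fractional, but det * x = (2, 1) is integral
scaled = [det * xi for xi in x]
assert all(s.denominator == 1 for s in scaled)
```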

Now, apply this to the existence question. Let $N$ be a set of agents, $G$ a set of distinct goods and $u_i(S)$ the utility that agent $i$ enjoys from consuming the bundle $S \subseteq G$. Note, no restrictions on $u_i$ beyond non-negativity and quasi-linearity.

As utilities are quasi-linear we can formulate the problem of finding the efficient allocation of goods to agents as an integer program. Let $x_i(S) = 1$ if the bundle $S$ is assigned to agent $i$ and 0 otherwise. The program is

$$\max \sum_{i \in N} \sum_{S \subseteq G} u_i(S) x_i(S)$$

subject to

$$\sum_{S \subseteq G} x_i(S) \leq 1 \quad \forall i \in N$$

$$\sum_{i \in N} \sum_{S \ni j} x_i(S) \leq 1 \quad \forall j \in G$$

$$x_i(S) \in \{0, 1\} \quad \forall i \in N,\ S \subseteq G$$

If we drop the integer constraints we have an LP. Let $x^*$ be an optimal solution to that LP. Complementary slackness allows us to interpret the dual variables associated with the second set of constraints as Walrasian prices for the goods. Also, any bundle $S$ such that $x^*_i(S) > 0$ must be in agent $i$'s demand correspondence.

Let $\Delta$ be the absolute value of the determinant of the optimal basis. We can write $x^*_i(S) = \frac{n_i(S)}{\Delta}$ for all $i$ and $S$, where $n_i(S)$ is an integer. Now construct an enlarged economy as follows.

Scale up the supply of each good by a factor of $\Delta$. Replace each agent by $\Delta$ clones. It should be clear now where this is going, but let's dot the i's. To formulate the problem of finding an efficient allocation in this enlarged economy let $x^k_i(S) = 1$ if bundle $S$ is allocated to the $k$-th clone of agent $i$ and zero otherwise. Let $u^k_i$ be the utility function of the $k$-th clone of agent $i$. Here is the corresponding integer program:

$$\max \sum_{i \in N} \sum_{k=1}^{\Delta} \sum_{S \subseteq G} u^k_i(S) x^k_i(S)$$

subject to

$$\sum_{S \subseteq G} x^k_i(S) \leq 1 \quad \forall i \in N,\ k = 1, \ldots, \Delta$$

$$\sum_{i \in N} \sum_{k=1}^{\Delta} \sum_{S \ni j} x^k_i(S) \leq \Delta \quad \forall j \in G$$

$$x^k_i(S) \in \{0, 1\}$$

It's easy to see that a feasible solution is, for each $i$ and $S$ such that $n_i(S) > 0$, to give $n_i(S)$ of the clones of agent $i$ the bundle $S$. The optimal dual variables from the relaxation of the first program complement this solution, which verifies optimality. Thus, Walrasian prices that support the efficient allocation in the augmented economy exist.

In this post I describe an alternative proof of a nifty result that appears in a forthcoming paper by Goeree, Kushnir, Moldovanu and Shi. They show (under private values) that given any Bayesian incentive compatible mechanism, M, there is a dominant strategy mechanism that gives each agent the same expected surplus as M provides.

For economy of exposition only, suppose 2 agents and a finite set of possible outcomes. Suppose also the same type space for both, with a given density function over types. To avoid clutter, assume the uniform distribution. Nothing in the subsequent analysis relies on this.

When agent 1 reports type $s$ and agent 2 reports type $t$, denote by $q_a(s,t)$ the probability that outcome $a$ is selected. The $q_a(s,t)$'s must be non-negative and satisfy

$$\sum_a q_a(s,t) = 1 \quad \forall s, t.$$

Associated with each agent is a vector that determines, along with her type, the utility she enjoys from a given allocation. In particular, given the allocation rule $q$, the utility that agent 1 of type $s$ enjoys when the other agent reports type $t$ is linear in the $q_a(s,t)$'s. A similar expression holds for agent 2.

Let $Q_1(s,t)$ and $Q_2(s,t)$ denote the corresponding weighted sums of the $q_a(s,t)$'s for agents 1 and 2. Interpret the $Q$'s as the `quantity' of goods that each agent receives. Dominant strategy incentive compatibility implies that $Q_1(s,t)$ should be monotone increasing in $s$ for each fixed $t$ and that $Q_2(s,t)$ should be monotone increasing in $t$ for each fixed $s$. The interim `quantities' will be the expectations of $Q_1$ and $Q_2$ over the other agent's reported type. Bayesian incentive compatibility (BIC) implies that these interim quantities should be monotone. To determine if given BIC interim `quantities' can be implemented via dominant strategies, we need to know if the following system is feasible:

System (1-6) is feasible iff the optimal objective function value of the following program is zero:

subject to

Let be an optimal solution to this program.

Suppose, for a contradiction there is a pair such that . I will argue that there must exist an such that . Suppose not, then for each , either or and (at optimality, it cannot be that and are both non-zero). In this case

. This last term is negative, a contradiction. Therefore, there is a such that but .

Let , and denote by the point . Observe that is in the convex hull, of for all . Thus choosing ‘s amounts to choosing a point . Equivalently, choosing a point gives rise to a set of ‘s. For convenience assume that is in the strict interior of for all and that is full dimensional. This avoids having to deal with secondary arguments that obscure the main idea.

Recall that implies that . Take all points and shift them horizontally to the right by . Call these new points . Observe that for all . Next, take all points and shift them horizontally to the left by to form new points . These points are also in . Leave all other points unchanged.

Because the vertical coordinates of all points were left unchanged, (8) and (10) are satisfied by this choice of points. Because and were shifted in opposite directions along the horizontal, (9) is still true. Finally, because all points in and were shifted by the same amount, (7) continues to hold.

The shift leftwards of reduces while the rightward shift of reduces . Thus, we get a new solution with higher objective function value, a contradiction.

If and are not in the interior of but on the boundary, then horizontal shifts alone may place them outside of . In the case of this can only happen if . In this case, shift and to the right by as well and then downwards by the same amount. This would have to be matched by a corresponding upward shift by some point . Similarly with .

Thanks to Alexey Kushnir and Ahmad Peivandi for comments.
