
Around the mid-2010s, Google introduced automated bidding. Other platforms have followed suit.

Rather than bidding directly for an `eyeball', an advertiser delegates the bidding to the platform. In order to inform the bids that the platform will submit on their behalf, the advertiser submits two numbers to the platform. One is their budget and the second is their ROI target, which can be thought of as the number of clicks they require per dollar spent. Hence, the ROI target is the inverse of the cost per click.

Some observers have remarked that auto-bidding is strange because one asks the auctioneer themselves to bid on one’s behalf. Others have been inspired to focus on the design of auctions when bidders have an ROI constraint. This, I think, is misguided.

First, just because the auctioneer's chosen bidding language uses an ROI target does not mean that an ROI constraint enters bidders' preferences. One should never confuse the message space of a mechanism with the preferences of the agents.

Second, once a bidder has submitted a budget and an ROI target, the subsequent auction is an irrelevance. Why? Suppose I submit a budget of $B$ and an ROI target of $r$. Then, my ROI target says that the platform must deliver $rB$ clicks in return for my budget. For example, at a budget of $100 and an ROI target of 2, I am telling the platform that I will give them $100 in return for 200 clicks. Now, the platform has access *not* to a finite number of clicks but a *flow*. They can, given time, satisfy *every* bid. In short, the platform will get your $100. The only issue is when. The automated auction is merely an elaborate device for determining the rate at which different bidders receive a click. One can think of far simpler procedures to do this. For example, a round robin, or depleting budgets at a uniform rate.
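The arithmetic of the bidding language is a one-liner. A minimal sketch, using the hypothetical figures from the example above (the function name is mine):

```python
def clicks_owed(budget, roi_target):
    """Clicks the platform must eventually deliver: budget times ROI target.
    The ROI target is clicks per dollar, i.e. the inverse of the cost per click."""
    return budget * roi_target

# $100 at an ROI target of 2 obliges the platform to deliver 200 clicks,
# at an implied cost per click of $0.50.
assert clicks_owed(100, 2) == 200
assert 100 / clicks_owed(100, 2) == 0.5
```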

Over a Rabelaisian feast with convivial company, conversation turned to a Twitter contretemps between economic theorists known to us at table. Its proximate cause was the design of the incentive auction for radio spectrum. The curious can dig around on Twitter for the cut and thrust. A summary of the salient economic issues might be helpful for those following the matter.

Three years ago, in the cruelest of months, the FCC conducted an auction to reallocate radio spectrum. It had a procurement phase in which spectrum would be purchased from current holders and a second phase in which it was resold to others. The goal was to shift spectrum, where appropriate, from current holders to others who might use this scarce resource more efficiently.

It is the procurement phase that concerns us. The precise details of the auction in this phase will not matter. Its design is rooted in Ausubel's clinching auction by way of Bikhchandani *et al.* (2011), culminating in Milgrom and Segal (2019).

The pricing rule of the procurement auction was chosen under the assumption that each seller owned a single license. If this assumption is invalid, it allows a seller with multiple licenses to engage in what is known as supply reduction to push up the price. Even if each seller initially owned a single license, a subset of sellers could benefit from merging their assets and coordinating their bids (or an outsider could come in and aggregate some sellers prior to the auction). A recent paper by my colleagues Doraszelski, Seim, Sinkinson and Wang offers estimates of how much sellers might have gained from strategic supply reduction.

Was the choice of pricing rule a design flaw? I say: compared to what? How about the VCG mechanism? It would award a seller owning multiple licenses the marginal product associated with their set of licenses. In general, if the assets held by sellers are substitutes for each other, the marginal product of a set will exceed the sum of the marginal products of its individual elements. Thus, the VCG auction would have left the seller with a higher surplus than they would have obtained under the procurement auction assuming no supply reduction. As noted in Paul Milgrom's book, when goods are substitutes, the VCG auction creates an incentive for mergers. This is formalized in Sher (2010). The pricing rule of the procurement auction could be modified to account for multiple ownership (see Bikhchandani *et al.* (2011)) but it would have the same qualitative effect: a seller would earn a higher surplus than they would have obtained under the procurement auction assuming no supply reduction. A second point of comparison would be an auction explicitly designed to discourage mergers of this kind. If memory serves, this reduces the auction to a posted price mechanism.
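The substitutes point can be seen in a tiny numeric sketch. The valuation below is hypothetical (two licenses, a buyer who values either one at 10 but the pair at only 12): under VCG each seller is paid their marginal product, and the pair's marginal product exceeds the sum of the parts.

```python
# Hypothetical substitutes valuation: the buyer's value for sets of two licenses.
v = {frozenset(): 0, frozenset('a'): 10, frozenset('b'): 10, frozenset('ab'): 12}
ALL = frozenset('ab')

def marginal_product(S):
    """Value of the full set minus the value of everything outside S."""
    return v[ALL] - v[ALL - S]

# Each license alone adds little at the margin...
assert marginal_product(frozenset('a')) == 2
assert marginal_product(frozenset('b')) == 2
# ...but the pair's marginal product is 12 > 2 + 2, so a merged seller
# holding both licenses extracts more under VCG than two separate sellers.
assert marginal_product(ALL) == 12
```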

Was there anything that could have been done to discourage mergers? The auction did have reserve prices, so an upper limit was set on how much would be paid for licenses. Legal action is a possibility, but it's not clear whether that could have been pursued without delaying the auction.

Stepping back, one might ask a more basic question: should the reallocation of spectrum have been done by auction? Why not follow Coase and let the market sort it out? The orthodox answer is no because of hold-up and transaction costs. However, as Thomas Hazlett has argued, there are transaction costs on the auction side as well.

Apparently, it is quite the rage to festoon one’s slides with company logos, particularly of the frightful five. At present this is done for free. It suggests a new business. A platform that matches advertisers to faculty. Faculty can offer up their slides and advertisers can bid for the right to place their logos on the slides.

When analyzing a mechanism it is convenient to assume that it is direct. The revelation principle allows one to argue that this restriction is without loss of generality. Yet, there are cases where one prefers to implement the indirect version of a mechanism rather than its direct counterpart. The clock version of the English ascending auction and the sealed bid second price auction are the best known example (one hopes not the only one). There are few (i.e., I could not immediately recall any) theorems that uniquely characterize a particular indirect mechanism. It would be nice to have more. What might such a characterization depend upon?

1) Direct mechanisms require that agents report their types. A concern for privacy could be used to `kill’ off a direct mechanism. However, one would first have to rule out the use of trusted third parties (either human or computers implementing cryptographic protocols).

2) Indirect mechanisms can sometimes be thought of as extensive form games, and one might look for refinements of solution concepts for extensive form games that have no counterpart in the direct version of the mechanism. The notion of an obviously dominant strategy is an example. However, indirect mechanisms may introduce equilibria, absent in the direct counterpart, that are compelling for the agents but unattractive for the designer's purposes.

3) One feature of observed indirect mechanisms is that they use simple message spaces, but compensate by using multiple rounds of communication. Thus a characterization would need a constraint on message spaces coupled with a constraint on the number of rounds of communication.

Because I have white hair, and that so sparse as to resemble the Soviet harvest of 1963, I am asked for advice. Just recently I was asked about `hot' research topics in the sharing economy. `You mean a pure exchange economy?', said I in reply. Because I have white hair etc., I sometimes forget to bite my tongue.

Returning to topic, the Economist piece I linked to above gets it about right. With a fall in certain transaction costs, trades that were otherwise infeasible are realized. At a high level there is nothing more to be said beyond what we know already about exchange economies.

A closer look suggests something of interest in the role of the mediator (eBay, Uber) responsible for the reduction in transaction costs. They are not indifferent Walrasian auctioneers but self-interested ones. eBay and Uber provide an interesting contrast in `intrusiveness'. The first reduces the costs of search, and alleviates the lemons problem and moral hazard by providing information and managing payments. It does not, however, set prices. These are left to participants to decide. In sum, eBay, it appears, tries to eliminate the textbook obstacles to a perfectly competitive market. Uber also does these things, but more: it chooses prices and the supplier who will meet the reported demand. One might think eBay does not because of the multitude of products it would have to track. The same is true for Uber. A product on Uber is a triple of origin, destination and time of day. The rider and driver may not be thinking about things in this way, but Uber certainly must in deciding prices and which supplier will be chosen to meet the demand. Why doesn't Uber allow riders to post bids and drivers to post asks?

Fox News and CNN have been in knots recently about how to allocate scarce debate slots to the many Republican pretenders to the Oval Office. Should poll numbers be the criterion? What about gender, race, state office, etc.? There is a simple and transparent way to allocate the slots: an auction. Neither CNN nor Fox is a charity. If the debates are a public good, there is no reason why Fox and CNN alone should forgo profits to host them. They should, instead, auction off the rights to participate in the debates.

What design questions would have to be answered? First, how much time to allocate to the debate, i.e., what is the supply of the resource to be allocated? Second, should one auction slots or time? For example, should it be 10 slots of equal length in a 2 hour period? Or should one allow the bidders to bid on the actual amount of time they would get in the 2 hour period? Are candidates the only ones allowed to bid, or can anyone submit bids on their behalf? Can one bid to deny time to the candidates? Actually, as these are not debates at all, but parallel news conferences, perhaps one should auction off the right to answer questions. In real time. That would be entertainment!

According to the NY Times, some Californians

would have to cut their water consumption by 35 percent under the terms of a preliminary plan issued by state officials on Tuesday to meet a 25 percent mandatory statewide reduction in urban water use.

There is an obvious way to achieve this: raise the price of water. If it's obvious, why wasn't it the *first* thing California did some years back when it was clear that water was scarce? In some cases the hands of the state are tied by water rights allocated for commercial purposes, so let's focus on household consumption.

We know that the first tool the state reaches for is regulation. See, for example, the following memo from the California State Water Board. Interestingly, it begins by noting that the state is in the 4th year of a drought! Eventually, it became clear that regulation was insufficient, and only then were price increases considered. In fact, the water reduction targets quoted from the NY Times above come from a report by the Governor of the state that also urges the use of

rate structures and other pricing mechanisms

to achieve reductions. Again, why is price last rather than first? Is this because the state must maintain a credible reputation for not exploiting its monopoly power over water?

If one is going to reduce consumption by raising prices, should it be an across-the-board price increase? Note that consumption is metered, so the exact amount purchased by a household is known to the provider. The state also has access to other information: location, home size, family size and income. In principle, the state could price discriminate. Here is an example from the Irvine Ranch Water District. Each household is given an initial `allotment' that depends on household size and the area that is landscaped. Exceed the allotment and the price of water rises. For more details and the impact on consumption see the following paper. Is it obvious that this is the `correct' mechanism?
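The allotment scheme is an increasing block tariff. A minimal sketch in the spirit of the Irvine Ranch design; the rates and allotment here are hypothetical, not the district's actual schedule:

```python
def water_bill(usage, allotment, base_rate=2.0, penalty_rate=8.0):
    """Increasing block price: units within the household's allotment are
    billed at base_rate, units beyond it at the higher penalty_rate.
    All rates are hypothetical illustrations."""
    within = min(usage, allotment)
    excess = max(usage - allotment, 0)
    return within * base_rate + excess * penalty_rate

assert water_bill(80, 100) == 160.0    # under the allotment: 80 * 2
assert water_bill(120, 100) == 360.0   # 100 * 2 + 20 * 8
```

The marginal price a household faces thus depends on its own observable characteristics, which is precisely the price discrimination the post describes.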

Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. Thus, the license is sold for a single upfront fee. No royalties or other form of contingent payment. As far as I can tell, the design is an open ascending auction.

In an earlier pair of posts I discussed a class of combinatorial auctions when agents have binary quadratic valuations. To formulate the problem of finding a welfare maximizing allocation, let $x_{ij} = 1$ if object $i$ is given to agent $j$ and zero otherwise. Denote the utility of agent $j$ from consuming bundle $S$ by

$u_j(S) = \sum_{i \in S} a_{ij} + \sum_{\{i,k\} \subseteq S} w^j_{ik}.$

The problem of maximizing total welfare is

$\max \sum_{j} \sum_{i} a_{ij} x_{ij} + \sum_{j} \sum_{\{i,k\}} w^j_{ik} x_{ij} x_{kj}$

subject to

$\sum_j x_{ij} \leq 1 \,\, \forall i, \qquad x_{ij} \in \{0,1\} \,\, \forall i, j.$

I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called **sign consistency**: for each pair $\{i,k\}$, the sign of $w^j_{ik}$ and $w^{j'}_{ik}$ is the same for any pair of agents $j, j'$. Furthermore, this applies to all pairs $\{i,k\}$.

Let $G$ be a graph whose vertex set is the set of objects, and for any pair $\{i,k\}$ such that $w^j_{ik} \neq 0$ introduce an edge $(i,k)$. Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $w^j_{ik}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. The second condition is that $G$ be a tree.

The following is the relaxation that they consider:

$\max \sum_{i,j} a_{ij} x_{ij} + \sum_{j} \sum_{(i,k)} w^j_{ik} y^j_{ik}$

subject to

$\sum_j x_{ij} \leq 1 \,\, \forall i$
$y^j_{ik} \leq x_{ij}, \ y^j_{ik} \leq x_{kj} \,\, \forall (i,k) \in E^+$
$y^j_{ik} \geq x_{ij} + x_{kj} - 1, \ y^j_{ik} \geq 0 \,\, \forall (i,k) \in E^-$
$x_{ij} \geq 0 \,\, \forall i, j.$

Denote by $P$ the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of $P$ are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: “why the bleep don’t you just read the proof we have.”

Let $G_1, \ldots, G_q$ be the maximal connected components of $G$ after deletion of the edges in $E^-$ (call this graph $G'$). The proof will be by induction on $|E^-|$. The case $|E^-| = 0$ follows from total unimodularity. I prove this later.

Suppose $|E^-| \geq 1$. Let $(x^*, y^*)$ be an optimal solution to our linear program. We can choose it to be an extreme point of $P$. As $G$ is a tree, there must exist a component, say $G_1$, incident to exactly one negative edge, say $(a,b)$ with $a \in G_1$. Denote by $P_1$ the polyhedron $P$ restricted to just the vertices of $G_1$ and by $P_2$ the polyhedron restricted to just the vertices in the complement of $G_1$. By the induction hypothesis, both $P_1$ and $P_2$ are integral polyhedrons. Each extreme point of $P_1$ ($P_2$) assigns a vertex of $G_1$ (the complement of $G_1$) to a particular agent. Let $X_1$ be the set of extreme points of $P_1$. If in extreme point $v \in X_1$, vertex $i$ is assigned to agent $j$ we write $v_{ij} = 1$ and zero otherwise. Similarly with the extreme points $u \in X_2$ of $P_2$: $u_{ij} = 1$ if $u$ assigns vertex $i$ to agent $j$. Let $c_v$ be the objective function value of the assignment $v$, and similarly $c_u$.

Now $(x^*, y^*)$ restricted to $P_1$ can be expressed as $\sum_{v \in X_1} \lambda_v v$. Similarly, $(x^*, y^*)$ restricted to $P_2$ can be expressed as $\sum_{u \in X_2} \mu_u u$. We can now reformulate our linear program as follows:

$\max \sum_{v \in X_1} c_v \lambda_v + \sum_{u \in X_2} c_u \mu_u + \sum_j w^j_{ab} y^j_{ab}$

subject to

$\sum_{v \in X_1} \lambda_v = 1$
$\sum_{u \in X_2} \mu_u = 1$
$y^j_{ab} \geq \sum_{v \in X_1} v_{aj} \lambda_v + \sum_{u \in X_2} u_{bj} \mu_u - 1 \,\, \forall j$
$\lambda_v, \mu_u, y^j_{ab} \geq 0.$

The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because $v_{aj}$ and $v_{aj'}$ cannot both be 1, and similarly with the $u$'s). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can be easily modified to the case where every cycle in $G$ contains a positive even number of negative edges.

Return to the case $|E^-| = 0$. Consider the polyhedron $P$ restricted to just one component $G_r$. It will have the form:

$\sum_j x_{ij} \leq 1 \,\, \forall i \in G_r, \qquad y^j_{ik} \leq x_{ij}, \ y^j_{ik} \leq x_{kj} \,\, \forall (i,k) \in E^+ \cap G_r, \qquad x, y \geq 0.$

Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset, $R$, of rows/constraints. The goal is to partition them into two sets $B$ and $C$ so that, column by column, the sum of the non-zero entries in the rows of $B$ and the sum of the non-zero entries in the rows of $C$ differ by at most one.

Observe that the rows associated with the constraints $\sum_j x_{ij} \leq 1$ are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows to satisfy the GH theorem. If $y^j_{ik} \leq x_{ij}$ is present in $R$ but $y^j_{ik} \leq x_{kj}$ is absent (or vice-versa), we are free to assign the associated row in any way to satisfy the GH theorem. The difficulty arises when both $y^j_{ik} \leq x_{ij}$ and $y^j_{ik} \leq x_{kj}$ are present in $R$. To ensure that the GH theorem is satisfied we may have to ensure that the rows associated with $y^j_{ik} \leq x_{ij}$ and $y^j_{ik} \leq x_{kj}$ be separated.

When $R$ is the set of all constraints we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows to $B$ and $C$, making sure that after each assignment the conditions of the GH theorem are satisfied for the rows that have been assigned. It will be clear that this procedure can also be applied when only a subset of constraints is present (indeed, satisfying the GH theorem will be easier in this case).

Fix an agent $j$. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in $G_r$ (which is a tree) to be a root and direct all edges `away' from the root (when $R$ is a subset of the constraints we delete from $G_r$ any edge $(i,k)$ for which at most one from the pair $y^j_{ik} \leq x_{ij}$ and $y^j_{ik} \leq x_{kj}$ appears in $R$). Label the root $B$. Label all its neighbors $C$, label the neighbors of the neighbors $B$, and so on. If vertex $i$ was labeled $B$, assign the row $\sum_j x_{ij} \leq 1$ to the set $B$; otherwise assign it to the set $C$. This produces a partition of the constraints of the form $\sum_j x_{ij} \leq 1$ satisfying GH.

Initially, all leaves and edges of $G_r$ are unmarked. Trace out a path from the root to one of the leaves of $G_r$ and mark that leaf. Each unmarked directed edge $(i,k)$ on this path corresponds to the pair $y^j_{ik} \leq x_{ij}$ and $y^j_{ik} \leq x_{kj}$. Assign $y^j_{ik} \leq x_{ij}$ to the same set that is the label of vertex $i$. Assign $y^j_{ik} \leq x_{kj}$ to the same set that is the label of vertex $k$. Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge $(i,k)$. If we repeat this procedure again with another path from the root to an unmarked leaf, we will violate the GH theorem. To see why, suppose the tree contains edge $(i,k)$ as well as $(i,k')$. Suppose $i$ was labeled $B$ on the first iteration and edge $(i,k)$ was marked. This means $y^j_{ik} \leq x_{ij}$ was assigned to $B$. Subsequently, $y^j_{ik'} \leq x_{ij}$ will also be assigned to $B$, which will produce a partition that violates the GH theorem. We can avoid this problem by flipping the labels on all the vertices before repeating the path tracing procedure.
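The alternating root labeling that the procedure relies on is just a two-coloring of the tree by depth parity. A small illustrative sketch of that step only (not the full proof; the function name and adjacency-dict representation are mine):

```python
from collections import deque

def alternate_labels(tree, root):
    """Label tree vertices 'B'/'C' by parity of their depth from the root,
    as in the root-labeling step: root gets 'B', neighbors 'C', and so on.
    `tree` is an adjacency dict mapping each vertex to its neighbors."""
    label = {root: 'B'}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in tree[u]:
            if w not in label:
                label[w] = 'C' if label[u] == 'B' else 'B'
                queue.append(w)
    return label

# A small path 1-2-3: labels alternate along every edge,
# so the endpoints of each edge land in different sets.
labels = alternate_labels({1: [2], 2: [1, 3], 3: [2]}, root=1)
assert labels == {1: 'B', 2: 'C', 3: 'B'}
```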

What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:

Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.

I’m skeptical. To see why, replace electricity by air travel.

Let $V$ be the set of vertices and $E$ the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in $E$. Let $A$ be the set of directed arcs that result. Hence, $(i,j) \in A$ means that the edge $\{i,j\}$ is directed from $i$ to $j$. Notice, if $(i,j) \in A$, then $(j,i) \notin A$.

Associated with each $(i,j) \in A$ is a number $x_{ij}$ that we interpret as a flow of electricity. If $x_{ij} > 0$ we interpret this to be a flow from $i$ to $j$. If $x_{ij} < 0$ we interpret this as a flow from $j$ to $i$.

- Let $r_{ij}$ be the resistance on link $(i,j)$.
- $c_i$ = unit cost of injecting current into node $i$.
- $v_i$ = marginal value of current consumed at node $i$.
- $d_i$ = amount of current consumed at node $i$.
- $s_i$ = amount of current injected at node $i$.
- $K_{ij}$ = capacity of link $(i,j)$.

Current must satisfy two conditions. The first is conservation of flow at each node:

$\sum_{k: (i,k) \in A} x_{ik} - \sum_{k: (k,i) \in A} x_{ki} = s_i - d_i \,\, \forall i \in V.$

The second is Ohm's law. There exist node potentials $\{\phi_i\}_{i \in V}$ such that

$\phi_i - \phi_j = r_{ij} x_{ij} \,\, \forall (i,j) \in A.$

Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add resistances in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
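The schoolboy rules themselves are two one-liners. A minimal sketch (function names are mine):

```python
def series(*resistances):
    """Resistances in series add."""
    return sum(resistances)

def parallel(*resistances):
    """In parallel, the reciprocals add."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Two 1-ohm resistors: 2 ohms in series, half an ohm in parallel.
assert series(1.0, 1.0) == 2.0
assert parallel(1.0, 1.0) == 0.5
# A 1-ohm link in parallel with a 2-ohm two-link path: 2/3 of an ohm.
assert abs(parallel(1.0, series(1.0, 1.0)) - 2.0 / 3.0) < 1e-9
```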

At each node $i \in V$ there is a power supplier with constant marginal cost of production $c_i$, of up to $S_i$ units. At each $i \in V$ there is a consumer with constant marginal value $v_i$, of up to $D_i$ units. A natural optimization problem to consider is

$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$

subject to

$\sum_{k: (i,k) \in A} x_{ik} - \sum_{k: (k,i) \in A} x_{ki} = s_i - d_i \,\, \forall i \in V$
$\phi_i - \phi_j = r_{ij} x_{ij} \,\, \forall (i,j) \in A$
$-K_{ij} \leq x_{ij} \leq K_{ij} \,\, \forall (i,j) \in A$
$0 \leq s_i \leq S_i, \ 0 \leq d_i \leq D_i \,\, \forall i \in V.$

This is the problem of finding a flow that maximizes surplus.

Let $\mathcal{C}$ be the set of cycles in $(V, E)$. Observe that each $C \in \mathcal{C}$ corresponds to a cycle in $A$ if we ignore the orientation of the edges. For each cycle $C$, let $C^+$ denote the edges in $C$ that are traversed in accordance with their orientation. Let $C^-$ be the set of edges in $C$ that are traversed in the opposing orientation.

We can project out the $\phi$ variables and reformulate as

$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$

subject to

$\sum_{k: (i,k) \in A} x_{ik} - \sum_{k: (k,i) \in A} x_{ki} = s_i - d_i \,\, \forall i \in V$
$\sum_{(i,j) \in C^+} r_{ij} x_{ij} - \sum_{(i,j) \in C^-} r_{ij} x_{ij} = 0 \,\, \forall C \in \mathcal{C}$
$-K_{ij} \leq x_{ij} \leq K_{ij} \,\, \forall (i,j) \in A$
$0 \leq s_i \leq S_i, \ 0 \leq d_i \leq D_i \,\, \forall i \in V.$

Recall the scenario we ended with in part 1. Let $V = \{1, 2, 3\}$ and $A = \{(1,2), (1,3), (2,3)\}$, and in addition suppose $r_{ij} = 1$ for all $(i,j) \in A$. Only link $(1,3)$ has a capacity constraint, of 600. There are generators at nodes 1 and 2 with marginal costs $c_1 < c_2$, each with unlimited capacity, and links $(1,2)$ and $(2,3)$ each have unlimited capacity. At node 3, the marginal value is $v_3$ up to 1500 units and zero thereafter. The optimization problem is

$\max v_3 d_3 - c_1 s_1 - c_2 s_2$

subject to

$x_{12} + x_{13} = s_1$
$x_{23} - x_{12} = s_2$
$x_{13} + x_{23} = d_3$
$x_{12} + x_{23} - x_{13} = 0$
$-600 \leq x_{13} \leq 600$
$0 \leq d_3 \leq 1500, \ s_1, s_2 \geq 0.$

Notice, for every unit of flow sent along $(1,3)$, half a unit of flow must be sent along $(1,2)$ and $(2,3)$ as well to satisfy the cycle flow constraint.
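The half-unit split can be checked by hand. A sketch, assuming unit resistances, one unit injected at node 1 and withdrawn at node 3 (the function name is mine):

```python
from fractions import Fraction as F

def unit_flow_split():
    """Arcs (1,2), (1,3), (2,3), all resistance 1. Inject one unit at node 1,
    withdraw it at node 3. Conservation gives x12 = 1 - x13 and x23 = 1 - x13;
    substituting into the cycle constraint x13 = x12 + x23 gives 3*x13 = 2."""
    x13 = F(2, 3)
    x12 = 1 - x13
    x23 = 1 - x13
    assert x13 == x12 + x23  # the cycle (voltage) constraint holds
    return x12, x13, x23

x12, x13, x23 = unit_flow_split()
# Two thirds of the unit goes directly; for every unit on (1,3),
# half a unit rides (1,2) and (2,3).
assert (x12, x13, x23) == (F(1, 3), F(2, 3), F(1, 3))
assert x12 / x13 == F(1, 2)
```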

The solution to this problem is $s_1 = 300$, $s_2 = 1200$, $d_3 = 1500$, $x_{12} = -300$, $x_{13} = 600$ and $x_{23} = 900$. What is remarkable about this is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along $(1,3)$ and 900 units along $(1,2)$ and $(2,3)$. This flow violates the cycle constraint.
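A brute-force check of the dispatch logic, under hypothetical costs $c_1 = 20 < c_2 = 40$ (only the ordering matters): with unit resistances, each injection splits two-thirds onto its direct link to node 3 and one-third onto the two-link path, so the flow on the capacitated link $(1,3)$ is $\tfrac{2}{3} q_1 + \tfrac{1}{3} q_2$.

```python
from fractions import Fraction as F

c1, c2 = 20, 40          # hypothetical marginal costs, c1 < c2
best = None
for q1 in range(0, 1501):            # injection at node 1; q2 = 1500 - q1
    q2 = 1500 - q1
    x13 = F(2, 3) * q1 + F(1, 3) * q2  # flow on the capacitated link (1,3)
    if x13 <= 600:                     # capacity on link (1,3)
        cost = c1 * q1 + c2 * q2
        if best is None or cost < best[0]:
            best = (cost, q1, q2)

cost, q1, q2 = best
# The cheap generator is capped at 300 by the physics, not by its own capacity.
assert (q1, q2) == (300, 1200)
```

The binding constraint is $\tfrac{2}{3} q_1 + \tfrac{1}{3}(1500 - q_1) \leq 600$, i.e. $q_1 \leq 300$, which is exactly why generator 2 ends up supplying most of the load.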

In this example, when generator 1 injects electricity into the network to serve customer 3's demand, a positive amount of that electricity must flow along *every* path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3's demand. Moreover, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently for the supply of power. The shared infrastructure requires that they coordinate what they inject into the system. This need for coordination is the argument for a clearing house, not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.

The externalities caused by electricity flows are not proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual of the surplus maximization problem.

Let $p_i$ be the dual variable associated with the flow balance constraint at node $i$. Let $\mu_C$ be associated with the cycle constraints. Let $\nu^+_{ij}$ and $\nu^-_{ij}$ be associated with the link capacity constraints. Let $\beta_i$ and $\gamma_i$ be associated with the remaining two constraints, $s_i \leq S_i$ and $d_i \leq D_i$. These can be interpreted as the profit of supplier $i$ and the surplus of customer $i$ respectively. For completeness the dual would be:

$\min \sum_{(i,j) \in A} K_{ij} (\nu^+_{ij} + \nu^-_{ij}) + \sum_{i \in V} S_i \beta_i + \sum_{i \in V} D_i \gamma_i$

subject to

$p_i + \gamma_i \geq v_i \,\, \forall i \in V$
$\beta_i \geq p_i - c_i \,\, \forall i \in V$
$p_i - p_j + \sum_{C: (i,j) \in C^+} r_{ij} \mu_C - \sum_{C: (i,j) \in C^-} r_{ij} \mu_C + \nu^+_{ij} - \nu^-_{ij} = 0 \,\, \forall (i,j) \in A$
$\nu^+_{ij}, \nu^-_{ij}, \beta_i, \gamma_i \geq 0.$

Now $p_i$ has a natural interpretation as a price to be paid for consumption at node $i$ and received for supply injected at node $i$. $\nu^+_{ij}$ and $\nu^-_{ij}$ can be interpreted as the price of capacity. However, $\mu_C$ is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market generate these prices.
