Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. The license is sold for a single upfront fee; no royalties or other form of contingent payment. As far as I can tell the design is an open ascending auction.
In an earlier pair of posts I discussed a class of combinatorial auctions in which agents have binary quadratic valuations. To formulate the problem of finding a welfare-maximizing allocation, let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by
$$u_i(S) = \sum_{j \in S} u^i_j + \sum_{\{j,k\} \subseteq S} w^i_{jk}.$$
The problem of maximizing total welfare is
$$\max \sum_i \sum_j u^i_j x_{ij} + \sum_i \sum_{\{j,k\}} w^i_{jk} x_{ij} x_{ik}$$
subject to
$$\sum_i x_{ij} \le 1 \;\; \forall j, \qquad x_{ij} \in \{0,1\} \;\; \forall i, j.$$
I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency: for each pair $\{j,k\}$, the signs of $w^i_{jk}$ and $w^{i'}_{jk}$ agree for every pair of agents $i, i'$.
Let $G$ be a graph whose vertex set is the set of objects; for any $j, k$ such that $w^i_{jk} \neq 0$ introduce an edge $(j,k)$. Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $w^i_{jk}$. Let $E^+$ and $E^-$ denote the sets of positive and negative edges respectively. The second condition is that $G$ be a tree.
The following is the relaxation that they consider:
$$\max \sum_i \sum_j u^i_j x_{ij} + \sum_i \sum_{(j,k) \in E} w^i_{jk} y^i_{jk}$$
subject to
$$y^i_{jk} \le x_{ij}, \quad y^i_{jk} \le x_{ik} \quad \forall i, \; (j,k) \in E^+$$
$$y^i_{jk} \ge x_{ij} + x_{ik} - 1, \quad y^i_{jk} \ge 0 \quad \forall i, \; (j,k) \in E^-$$
$$\sum_i x_{ij} \le 1 \;\; \forall j, \qquad 0 \le x_{ij} \le 1 \;\; \forall i, j.$$
Denote by $P$ the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of $P$ are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: "why the bleep don't you just read the proof we have."
Let $C_1, \ldots, C_q$ be the maximal connected components of $G$ after deletion of the edges in $E^-$ (call this graph $G^+$). The proof will be by induction on $|E^-|$. The case $|E^-| = 0$ follows from total unimodularity; I prove this later.
Suppose $|E^-| \ge 1$. Let $x^*$ be an optimal solution to our linear program. We can choose $x^*$ to be an extreme point of $P$. As $G$ is a tree, there must exist a component, say $C_1$, incident to exactly one negative edge, say $(a,b)$ with $a \in C_1$. Denote by $P_1$ the polyhedron restricted to just the vertices of $C_1$ and by $P_2$ the polyhedron restricted to just the vertices in the complement of $C_1$. By the induction hypothesis, both $P_1$ and $P_2$ are integral polyhedra. Each extreme point of $P_1$ ($P_2$) assigns a vertex of $C_1$ (the complement of $C_1$) to a particular agent. Let $X$ be the set of extreme points of $P_1$. If in extreme point $v \in X$, vertex $j$ is assigned to agent $i$ we write $v_{ij} = 1$ and zero otherwise. Similarly with the set $Y$ of extreme points of $P_2$: $w_{ij} = 1$ if $w \in Y$ assigns vertex $j$ to agent $i$. Let $c_v$ be the objective function value of the assignment $v$, and similarly $c_w$ for $w \in Y$.
Now $x^*$ restricted to $C_1$ can be expressed as $\sum_{v \in X} \lambda_v v$. Similarly, $x^*$ restricted to the complement of $C_1$ can be expressed as $\sum_{w \in Y} \mu_w w$. We can now reformulate our linear program as follows:
$$\max \sum_{v \in X} c_v \lambda_v + \sum_{w \in Y} c_w \mu_w + \sum_i w^i_{ab} z_i$$
subject to
$$\sum_{\{v \in X : v_{ia} = 1\}} \lambda_v + \sum_{\{w \in Y : w_{ib} = 1\}} \mu_w - z_i \le 1 \quad \forall i$$
$$\sum_{v \in X} \lambda_v = 1, \qquad \sum_{w \in Y} \mu_w = 1, \qquad \lambda, \mu, z \ge 0.$$
The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because $v_{ia}$ and $w_{ib}$ cannot both be 1; similarly with the $w$'s). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can be easily modified to the case where every cycle in $G$ must contain an even number of negative edges.
Return to the case $|E^-| = 0$. Consider the polyhedron restricted to just one component $C_r$. It will have the form:
$$y^i_{jk} \le x_{ij}, \quad y^i_{jk} \le x_{ik} \quad \forall i, \; (j,k) \in E^+ \cap C_r$$
$$\sum_i x_{ij} \le 1 \quad \forall j \in C_r, \qquad x, y \ge 0.$$
Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset, $R$, of rows/constraints. The goal is to partition $R$ into two sets $B$ and $B'$ so that, column by column, the sum of the non-zero entries in the rows of $B$ and the sum of the non-zero entries in the rows of $B'$ differ by at most one in absolute value.
Observe that the rows associated with the constraints $\sum_i x_{ij} \le 1$ are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows to satisfy the GH theorem. If the row $y^i_{jk} \le x_{ij}$ is present in $R$ but $y^i_{jk} \le x_{ik}$ is absent (or vice versa), we are free to assign that row in any way that satisfies the GH theorem. The difficulty arises when both rows are present in $R$: to ensure that the GH theorem is satisfied we may have to ensure that the two rows are separated.
When $R$ is the set of all constraints we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows, making sure that after each assignment the conditions of the GH theorem are satisfied for the rows assigned so far. It will be clear that this procedure can also be applied when only a subset of the constraints is present (indeed, satisfying the GH theorem is easier in this case).
Fix an agent $i$. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in $C_r$ (which is a tree) to be the root and direct all edges `away' from the root (when only a subset of the constraints is present, delete from $C_r$ any edge $(j,k)$ for which at most one of the pair of rows $y^i_{jk} \le x_{ij}$ and $y^i_{jk} \le x_{ik}$ appears in the subset). Label the root $B$. Label all its neighbors $B'$, label the neighbors of the neighbors $B$, and so on, alternating down the tree. If vertex $j$ was labeled $B$, assign the row $\sum_i x_{ij} \le 1$ to the set $B$, otherwise to $B'$. This produces a partition of the constraints of the form $\sum_i x_{ij} \le 1$ satisfying GH.
Initially, all leaves and edges of $C_r$ are unmarked. Trace out a path from the root to one of the leaves of $C_r$ and mark that leaf. Each unmarked directed edge $(j,k)$ on this path corresponds to the pair of rows $y^i_{jk} \le x_{ij}$ and $y^i_{jk} \le x_{ik}$. Assign the row $y^i_{jk} \le x_{ij}$ to the set that labels vertex $j$, and the row $y^i_{jk} \le x_{ik}$ to the set that labels vertex $k$. Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge $(j,k)$. If we simply repeat this procedure with another path from the root to an unmarked leaf, we may violate the GH theorem. To see why, suppose the tree contains the edge $(j,k)$ as well as $(k,t)$. Suppose $k$ was labeled $B$ on the first iteration and edge $(j,k)$ was marked. This means the row $y^i_{jk} \le x_{ik}$ was assigned to $B$. Subsequently the row $y^i_{kt} \le x_{ik}$ would also be assigned to $B$, which produces a partition that violates the GH theorem (both rows have a $-1$ in the column of $x_{ik}$). We can avoid this problem by flipping the labels on all the vertices before repeating the path-tracing procedure.
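The alternating labels used in this argument are just a breadth-first 2-coloring of the tree by depth parity. A minimal sketch (the function name and adjacency-dict representation are my own):

```python
from collections import deque

def parity_label(tree, root):
    """BFS a tree (adjacency dict) from `root`, labeling vertices 0 ('B') or
    1 ("B'") according to the parity of their depth, as in the GH argument."""
    label = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in tree[u]:
            if v not in label:          # unvisited neighbor: flip the label
                label[v] = 1 - label[u]
                q.append(v)
    return label

# A small path-shaped tree: 1 - 2 - 3 - 4.
tree = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = parity_label(tree, root=1)
# Adjacent vertices always receive opposite labels.
assert all(labels[u] != labels[v] for u in tree for v in tree[u])
```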
What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:
Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.
I’m skeptical. To see why, replace electricity by air travel.
Let $V$ be the set of vertices and $E$ the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in $E$. Let $A$ be the set of directed arcs that result. Hence, $(i,j) \in A$ means that the edge $\{i,j\}$ is directed from $i$ to $j$. Notice, if $(i,j) \in A$, then $(j,i) \notin A$.
Associated with each $(i,j) \in A$ is a number $f_{ij}$ that we interpret as a flow of electricity. If $f_{ij} > 0$ we interpret this to be a flow from $i$ to $j$. If $f_{ij} < 0$ we interpret this as a flow from $j$ to $i$.
- $r_{ij}$: the resistance on link $(i,j)$.
- $c_i$: the unit cost of injecting current into node $i$.
- $v_i$: the marginal value of current consumed at node $i$.
- $d_i$: the amount of current consumed at node $i$.
- $s_i$: the amount of current injected at node $i$.
- $K_{ij}$: the capacity of link $(i,j)$.
Current must satisfy two conditions. The first is conservation of flow at each node:
$$\sum_{j: (j,i) \in A} f_{ji} + s_i = \sum_{j: (i,j) \in A} f_{ij} + d_i \quad \forall i \in V.$$
The second is Ohm's law: there exist node potentials $\phi$ such that
$$f_{ij} = \frac{\phi_i - \phi_j}{r_{ij}} \quad \forall (i,j) \in A.$$
Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add resistances in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
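A quick way to see the schoolboy rules at work: build the conductance-weighted Laplacian of the network, inject a unit current, solve the conservation-plus-Ohm's-law equations for the node potentials, and read off the effective resistance. A sketch, with the function name my own:

```python
import numpy as np

def effective_resistance(n, edges, a, b):
    """Effective resistance between nodes a and b of a resistive network.
    `edges` is a list of (i, j, r) with resistance r on edge {i, j}.
    We inject one unit of current at a, withdraw it at b, ground node b,
    solve the node-potential equations, and return phi_a - phi_b."""
    L = np.zeros((n, n))                 # weighted Laplacian (conductances)
    for i, j, r in edges:
        g = 1.0 / r
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    current = np.zeros(n)
    current[a], current[b] = 1.0, -1.0
    keep = [k for k in range(n) if k != b]  # ground node b (phi_b = 0)
    phi = np.zeros(n)
    phi[keep] = np.linalg.solve(L[np.ix_(keep, keep)], current[keep])
    return phi[a] - phi[b]

# Schoolboy rules: 2 and 3 ohms in series give 5; in parallel, 1/(1/2 + 1/3) = 1.2.
series = effective_resistance(3, [(0, 1, 2.0), (1, 2, 3.0)], 0, 2)
parallel = effective_resistance(2, [(0, 1, 2.0), (0, 1, 3.0)], 0, 1)
```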
At each node $i \in V$ there is a power supplier with constant marginal cost of production $c_i$, up to a capacity of $S_i$ units. At each node $i \in V$ there is a consumer with constant marginal value $v_i$, up to $D_i$ units. A natural optimization problem to consider is
$$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$$
subject to
$$\sum_{j: (j,i) \in A} f_{ji} + s_i = \sum_{j: (i,j) \in A} f_{ij} + d_i \quad \forall i \in V$$
$$f_{ij} = \frac{\phi_i - \phi_j}{r_{ij}} \quad \forall (i,j) \in A$$
$$-K_{ij} \le f_{ij} \le K_{ij} \quad \forall (i,j) \in A$$
$$0 \le d_i \le D_i, \quad 0 \le s_i \le S_i \quad \forall i \in V.$$
This is the problem of finding a flow that maximizes surplus.
Let $C$ be the set of cycles in $(V, E)$. Observe that each such cycle corresponds to a cycle in $A$ if we ignore the orientation of the arcs. For each cycle $Z \in C$, let $Z^+$ denote the arcs in $Z$ that are traversed in accordance with their orientation, and let $Z^-$ be the arcs in $Z$ that are traversed in the opposing orientation.
We can project out the $\phi$ variables and reformulate as
$$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$$
subject to
$$\sum_{j: (j,i) \in A} f_{ji} + s_i = \sum_{j: (i,j) \in A} f_{ij} + d_i \quad \forall i \in V$$
$$\sum_{(i,j) \in Z^+} r_{ij} f_{ij} - \sum_{(i,j) \in Z^-} r_{ij} f_{ij} = 0 \quad \forall Z \in C$$
$$-K_{ij} \le f_{ij} \le K_{ij} \quad \forall (i,j) \in A$$
$$0 \le d_i \le D_i, \quad 0 \le s_i \le S_i \quad \forall i \in V.$$
Recall the scenario we ended with in part 1. Let $V = \{1, 2, 3\}$, $A = \{(1,2), (1,3), (2,3)\}$, and in addition suppose $r_{ij} = 1$ for all $(i,j) \in A$. Only link $(1,3)$ has a capacity constraint, of 600. Generators sit at nodes 1 and 2 with $c_1 < c_2$, and each has unlimited capacity. At node 3, the marginal value is $v_3$ for up to 1500 units and zero thereafter. The optimization problem is
$$\max v_3 d_3 - c_1 s_1 - c_2 s_2$$
subject to
$$f_{12} + f_{13} = s_1, \qquad f_{23} - f_{12} = s_2, \qquad f_{13} + f_{23} = d_3$$
$$f_{12} + f_{23} - f_{13} = 0$$
$$-600 \le f_{13} \le 600, \qquad 0 \le d_3 \le 1500, \qquad s_1, s_2 \ge 0.$$
Notice, for every unit of flow sent along $(1,3)$, half a unit of flow must also be sent along $(1,2)$ and $(2,3)$ to satisfy the cycle flow constraint.
The solution to this problem is $s_1 = 300$, $s_2 = 1200$, $f_{13} = 600$, $f_{12} = -300$, $f_{23} = 900$, and $d_3 = 1500$. What is remarkable is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along $(1,3)$ and route the remaining 900 units along the path $1 \to 2 \to 3$. This flow violates the cycle constraint.
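The example can be checked numerically. A sketch using `scipy.optimize.linprog`, where the cost and value figures $c_1 = 20$, $c_2 = 40$, $v_3 = 100$ are my own illustrative assumptions (the post only pins down $c_1 < c_2$ and a high marginal value at node 3):

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [f12, f13, f23, s1, s2, d3].
c1, c2, v3 = 20.0, 40.0, 100.0           # illustrative numbers (assumed)
obj = np.array([0, 0, 0, c1, c2, -v3])   # minimize cost minus consumption value

A_eq = np.array([
    [ 1,  1, 0, -1,  0,  0],   # conservation at node 1: f12 + f13 = s1
    [-1,  0, 1,  0, -1,  0],   # conservation at node 2: f23 - f12 = s2
    [ 0,  1, 1,  0,  0, -1],   # conservation at node 3: f13 + f23 = d3
    [ 1, -1, 1,  0,  0,  0],   # Kirchhoff cycle law:   f12 + f23 - f13 = 0
])
b_eq = np.zeros(4)
bounds = [(None, None), (-600, 600), (None, None),  # only link (1,3) is capacitated
          (0, None), (0, None), (0, 1500)]

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
f12, f13, f23, s1, s2, d3 = res.x
# The cheap generator ends up capped at 300 units by the cycle constraint.
```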
In this example, when generator 1 injects electricity into the network to serve customer 3's demand, a positive amount of that electricity must flow along every path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3's demand. However, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently to supply power. The shared infrastructure requires that they co-ordinate what they inject into the system. This need for coordination is the argument for a clearing house, not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.
The externalities caused by electricity flows are not a proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual of the surplus maximization problem.
Let $\pi_i$ be the dual variable associated with the flow balance constraint at node $i$. Let $\gamma_Z$ be associated with the cycle constraints. Let $\mu^+_{ij}$ and $\mu^-_{ij}$ be associated with the link capacity constraints. Let $\alpha_i$ and $\beta_i$ be associated with the remaining two constraints, $s_i \le S_i$ and $d_i \le D_i$. These can be interpreted as the profit of supplier $i$ and the surplus of customer $i$ respectively. For completeness the dual would be:
subject to
Now $\pi_j - \pi_i$ has a natural interpretation as a price to be paid for consumption at node $j$ of supply injected at node $i$. $\mu^+_{ij}$ and $\mu^-_{ij}$ can be interpreted as the price of capacity. However, $\gamma_Z$ is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market generate these prices.
In part two, as promised, I turn to the welfare maximization problem. To formulate the problem of finding a welfare-maximizing allocation, let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by
$$u_i(S) = \sum_{j \in S} u^i_j + \sum_{\{j,k\} \subseteq S} w^i_{jk}.$$
The problem of maximizing total welfare is
$$\max \sum_i \sum_j u^i_j x_{ij} + \sum_i \sum_{\{j,k\}} w^i_{jk} x_{ij} x_{ik}$$
subject to
$$\sum_i x_{ij} \le 1 \;\; \forall j, \qquad x_{ij} \in \{0,1\} \;\; \forall i, j.$$
Welfare maximization with BQP preferences is in general NP-hard. One proof relies on a reduction from the multiway cut problem. Given a graph with edge weight $c_e$ for each edge $e$, and a set of terminal vertices $T = \{t_1, \ldots, t_K\}$, a multiway cut is a set of edges whose removal disconnects every pair of terminal vertices. The problem of finding the multiway cut of minimum total weight is called the multiway cut problem. When $T$ consists of only two terminals ($K = 2$) the problem reduces to the well known minimum cut problem. For $K \ge 3$, it is known that the problem is hard even on planar graphs.
We can obtain the multiway cut problem as a special case by taking one agent per terminal and making all the pairwise terms non-positive. Any pair $\{j,k\}$ whose endpoints are assigned to different agents will be part of a multiway cut. This reduction implies that welfare maximization when $w^i_{jk} \le 0$ for all $i$ and $\{j,k\}$ is NP-hard. This is in contrast to the case of surplus maximization.
Candogan, Ozdaglar and Parrilo (2013), the paper that prompted this post, identify a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency: for each pair $\{j,k\}$, the signs of $w^i_{jk}$ and $w^{i'}_{jk}$ agree for every pair of agents $i, i'$. Sign consistency by itself is not sufficient to obtain a solvable instance. Another condition is needed. Let $G$ be a graph whose vertex set is the set of objects; for any $j, k$ such that $w^i_{jk} \ne 0$ introduce an edge $(j,k)$. The second condition is that $G$ be a tree. Interestingly, Erdős and Székely (1994) show that a generalization of the multiway cut problem, which corresponds to welfare maximization under sign consistency with $w^i_{jk} \le 0$ for all $i$ and $\{j,k\}$, is polynomially solvable when the underlying graph is a tree. The Candogan, Ozdaglar and Parrilo (COP) proof is based on a dynamic programming argument similar to the one used in Erdős and Székely (1994).
The key result in COP is that the following natural linearization of the welfare maximization problem admits an integral optimal solution (here $E^+$ and $E^-$ denote the positive and negative edges of $G$):
$$\max \sum_i \sum_j u^i_j x_{ij} + \sum_i \sum_{(j,k) \in E} w^i_{jk} y^i_{jk}$$
subject to
$$y^i_{jk} \le x_{ij}, \quad y^i_{jk} \le x_{ik} \quad \forall i, \; (j,k) \in E^+$$
$$y^i_{jk} \ge x_{ij} + x_{ik} - 1, \quad y^i_{jk} \ge 0 \quad \forall i, \; (j,k) \in E^-$$
$$\sum_i x_{ij} \le 1 \;\; \forall j, \qquad 0 \le x_{ij} \le 1 \;\; \forall i, j.$$
There is a connection between the welfare maximization problem and packing subtrees into trees that I want to highlight. It suggests a possible avenue by which one might enlarge the class of preferences COP consider.
Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $w^i_{jk}$. Let $E^+$ and $E^-$ be the positive and negative edges respectively. Let $C_1, \ldots, C_q$ be the maximal connected components of $G$ after deletion of the edges in $E^-$ (call this graph $G^+$). For any agent $i$ and $S \subseteq C_r$ let
$$u_i(S) = \sum_{j \in S} u^i_j + \sum_{\{j,k\} \subseteq S} w^i_{jk}.$$
By the way, for each $i$ and $C_r$, $u_i$ is supermodular over the subsets of $C_r$. Let $y_i(S) = 1$ if $S$ is assigned to agent $i$ and zero otherwise. The problem of finding a welfare maximizing allocation can be expressed as follows:
$$\max \sum_i \sum_r \sum_{S \subseteq C_r} u_i(S) y_i(S) + \sum_i \sum_{(j,k) \in E^-} w^i_{jk} z^i_{jk}$$
subject to
$$\sum_i \sum_{S \ni j} y_i(S) \le 1 \quad \forall j$$
$$z^i_{jk} \ge \sum_{S \ni j} y_i(S) + \sum_{S \ni k} y_i(S) - 1 \quad \forall i, \; (j,k) \in E^-$$
$$y_i(S) \in \{0,1\}, \qquad z^i_{jk} \ge 0.$$
In the above program, we can, without loss, restrict attention to subsets that are subtrees (connected subgraphs) of $G^+$. To see why, suppose that in some optimal solution to the above integer program $y_i(S) = 1$ where $S$ is not a subtree. Then, we can write $S = S' \cup S''$ where both $S'$ and $S''$ are in the same component of $G^+$ as $S$ is. Furthermore, it must be the case that there is no edge $(j,k) \in E^+$ such that $j \in S'$ and $k \in S''$. Therefore, $u_i(S) = u_i(S') + u_i(S'')$. That means we can construct a new optimal solution by setting $y_i(S) = 0$ and raising $y_i(S')$ and $y_i(S'')$ to 1. Note, in the original solution $y_i(S') = y_i(S'') = 0$ by virtue of the first constraint. As long as some chosen $S$ is not a subtree we can repeat this argument.
Hence, if we let $T_r$ be the set of subtrees of $C_r$, the welfare maximization problem can be expressed as follows:
$$\max \sum_i \sum_r \sum_{S \in T_r} u_i(S) y_i(S) + \sum_i \sum_{(j,k) \in E^-} w^i_{jk} z^i_{jk}$$
subject to
$$\sum_i \sum_{S \ni j} y_i(S) \le 1 \quad \forall j$$
$$z^i_{jk} \ge \sum_{S \ni j} y_i(S) + \sum_{S \ni k} y_i(S) - 1 \quad \forall i, \; (j,k) \in E^-$$
$$y_i(S) \in \{0,1\}, \qquad z^i_{jk} \ge 0.$$
It's important to emphasize that no $S \in T_r$ contains a negative edge.
Were it not for the second set of constraints, integrality would follow from Barany, Edmonds and Wolsey (1986). I'm not aware of this variation of the tree packing problem having been considered. A follow-up paper by Aghezzaf and Wolsey (1994) comes close in the sense of allowing for a piecewise linear concave objective function.
A paper by Azevedo, Weyl and White in a recent issue of Theoretical Economics caught my eye. It establishes existence of Walrasian prices in an economy with indivisible goods, a continuum of agents and quasilinear utility. The proof uses Kakutani's theorem. Here is an argument based on an observation about extreme points of linear programs. It shows that there is a way to scale up the number of agents and goods so that in the scaled up economy a Walrasian equilibrium exists.
First, the observation. Consider the linear program $\max \{cx : Ax \le b, x \ge 0\}$. The matrix $A$ and the RHS vector $b$ are rational. Let $x^*$ be an optimal extreme point solution and $\Delta$ the absolute value of the determinant of the optimal basis. Then, $\Delta x^*$ must be an integral vector. Equivalently, if in our original linear program we scale the constraints by $\Delta$, the new linear program has an optimal solution that is integral.
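A toy illustration of the observation, using a hypothetical two-variable LP of my own whose optimal basis has determinant of absolute value 2: the optimal extreme point is half-integral, and scaling the right-hand side by 2 makes it integral.

```python
import numpy as np
from scipy.optimize import linprog

# max x1  s.t.  x1 + x2 <= 1,  x1 - x2 <= 0,  x >= 0
# has the fractional extreme point (1/2, 1/2); the optimal basis
# [[1, 1], [1, -1]] has determinant -2, so |det| = 2.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([1.0, 0.0])
res = linprog(c=[-1.0, 0.0], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

delta = 2  # |det| of the optimal basis
# Scale the RHS by delta: the optimal extreme point becomes integral.
res_scaled = linprog(c=[-1.0, 0.0], A_ub=A, b_ub=delta * b, bounds=[(0, None)] * 2)
```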
Now, apply this to the existence question. Let $N$ be a set of agents, $G$ a set of distinct goods, and $u_i(S)$ the utility that agent $i$ enjoys from consuming the bundle $S \subseteq G$. Note, no restrictions on $u_i$ beyond non-negativity and quasi-linearity.
As utilities are quasi-linear we can formulate the problem of finding the efficient allocation of goods to agents as an integer program. Let $x_i(S) = 1$ if the bundle $S$ is assigned to agent $i$ and 0 otherwise. The program is
$$\max \sum_{i \in N} \sum_{S \subseteq G} u_i(S) x_i(S)$$
subject to
$$\sum_{S \subseteq G} x_i(S) \le 1 \quad \forall i \in N$$
$$\sum_{i \in N} \sum_{S \ni j} x_i(S) \le 1 \quad \forall j \in G$$
$$x_i(S) \in \{0,1\} \quad \forall i \in N, \; S \subseteq G.$$
If we drop the integer constraints we have an LP. Let $x^*$ be an optimal solution to that LP. Complementary slackness allows us to interpret the dual variables $\{p_j\}_{j \in G}$ associated with the second constraint as Walrasian prices for the goods. Also, any bundle $S$ such that $x^*_i(S) > 0$ must be in agent $i$'s demand correspondence.
Let $\Delta$ be the absolute value of the determinant of the optimal basis. We can write $x^*_i(S) = m_i(S)/\Delta$ for all $i$ and $S$, where $m_i(S)$ is an integer. Now construct an enlarged economy as follows.
Scale up the supply of each good by a factor of $\Delta$. Replace each agent by $\Delta$ clones. It should be clear now where this is going, but let's dot the i's. To formulate the problem of finding an efficient allocation in this enlarged economy, let $y^t_i(S) = 1$ if bundle $S$ is allocated to the $t$-th clone of agent $i$ and zero otherwise. Let $u^t_i = u_i$ be the utility function of the $t$-th clone of agent $i$. Here is the corresponding integer program:
$$\max \sum_{i \in N} \sum_{t=1}^{\Delta} \sum_{S \subseteq G} u^t_i(S) y^t_i(S)$$
subject to
$$\sum_{S \subseteq G} y^t_i(S) \le 1 \quad \forall i, t$$
$$\sum_{i \in N} \sum_{t=1}^{\Delta} \sum_{S \ni j} y^t_i(S) \le \Delta \quad \forall j \in G$$
$$y^t_i(S) \in \{0,1\}.$$
It's easy to see that a feasible solution is to give, for each $i$ and $S$ such that $x^*_i(S) > 0$, $m_i(S)$ of the clones of agent $i$ the bundle $S$. The optimal dual variables from the relaxation of the first program complement this solution, which verifies optimality. Thus, Walrasian prices that support the efficient allocation in the augmented economy exist.
In this post I describe an alternative proof of a nifty result that appears in a forthcoming paper by Goeree, Kushnir, Moldovanu and Shi. They show (under private values) that given any Bayesian incentive compatible mechanism $M$, there is a dominant strategy mechanism that gives each agent the same expected surplus as $M$ provides.
For economy of exposition only, suppose 2 agents and a finite set of possible outcomes, $A$. Suppose also the same type space $T$ for both. Let $f$ be the density function over types. To avoid clutter, assume the uniform distribution, i.e., $f(t) = 1/|T|$. Nothing in the subsequent analysis relies on this.
When agent 1 reports type $s$ and agent 2 reports type $t$, denote by $q_a(s,t)$ the probability that outcome $a \in A$ is selected. The $q$'s must be non-negative and satisfy $\sum_{a \in A} q_a(s,t) = 1$ for all $(s,t)$.
Associated with each agent $i$ is a vector $v^i = (v^i_a)_{a \in A}$ that determines, along with her type, the utility she enjoys from a given allocation. In particular, given the allocation rule $q$, the utility that agent 1 of type $t$ enjoys when the other agent reports type $s$ is $t \sum_{a \in A} v^1_a q_a(t,s)$. A similar expression holds for agent 2.
Let $Q_1(s,t) = \sum_{a \in A} v^1_a q_a(s,t)$ and $Q_2(s,t) = \sum_{a \in A} v^2_a q_a(s,t)$. Interpret the $Q$'s as the `quantity' of goods that each agent receives. Dominant strategy incentive compatibility implies that $Q_1(s,t)$ should be monotone increasing in $s$ for each fixed $t$, and that $Q_2(s,t)$ should be monotone increasing in $t$ for each fixed $s$. The interim `quantities' will be $q_1(s) = \frac{1}{|T|}\sum_{t \in T} Q_1(s,t)$ and $q_2(t) = \frac{1}{|T|}\sum_{s \in T} Q_2(s,t)$. Bayesian incentive compatibility (BIC) implies that the interim quantities should be monotone. To determine if given BIC interim quantities can be implemented via dominant strategies, we need to know if the following system is feasible:
System (1)-(6) is feasible if and only if the optimal objective function value of the following program is zero:
subject to
Let $q^*$ be an optimal solution to this program.
Suppose, for a contradiction, that there is a pair $(s,t)$ at which the dominant-strategy monotonicity condition fails. I will argue that there must exist an $s'$ at which the opposite slack holds. Suppose not; then for each $s'$, at optimality it cannot be that both of the corresponding deviation variables are non-zero, and in this case the objective reduces to a term that is negative, a contradiction. Therefore the desired $s'$ exists.
Let $D = \{(v^1_a, v^2_a) : a \in A\}$, and denote by $z(s,t)$ the point $(Q_1(s,t), Q_2(s,t))$. Observe that $z(s,t)$ is in the convex hull, $C$, of $D$ for all $(s,t)$. Thus choosing the $q$'s amounts to choosing a point $z(s,t) \in C$ for each $(s,t)$. Equivalently, choosing a point in $C$ for each $(s,t)$ gives rise to a set of $q$'s. For convenience assume that $z(s,t)$ is in the strict interior of $C$ for all $(s,t)$ and that $C$ is full-dimensional. This avoids having to deal with secondary arguments that obscure the main idea.
Take all points $z(s',t)$ and shift them horizontally to the right by $\epsilon$. Call these new points $z'(s',t)$, and observe that $z'(s',t) \in C$ for all $t$. Next, take all points $z(s,t)$ and shift them horizontally to the left by $\epsilon$ to form new points $z'(s,t)$. These points are also in $C$. Leave all other points unchanged.
Because the vertical coordinates of all points were left unchanged, (8) and (10) are satisfied by this choice of points. Because $z(s',t)$ and $z(s,t)$ were shifted in opposite directions along the horizontal, (9) is still true. Finally, because all the shifted points were moved by the same amount, (7) continues to hold.
The leftward shift of the $z(s,t)$'s reduces one of the deviation terms in the objective, while the rightward shift of the $z(s',t)$'s reduces the other. Thus, we get a new solution with higher objective function value, a contradiction.
If $z(s,t)$ and $z(s',t)$ are not in the interior of $C$ but on the boundary, then horizontal shifts alone may place them outside of $C$. This can only happen if the relevant point lies on the corresponding boundary face of $C$. In that case, shift the point horizontally as before and then downwards by the same amount; this would have to be matched by a corresponding upward shift of some other point in the same row. Similarly with $z(s',t)$.
Thanks to Alexey Kushnir and Ahmad Peivandi for comments.
This session was devoted to three different perspectives on Gale and Shapley's stable matching problem. The first is traditional, wherein nothing more than a command of the English language coupled with patience for moderately long chains of reasoning is needed. The second is Adachi's lattice formulation of the question (from his OR Letters paper), with an example to illustrate how it can be generalized to other settings. The third is based on Vande Vate's characterization of the convex hull of stable matchings. I also stuck in a new result that fills a much-needed gap in the literature, which I hope is correct (if incorrect, perhaps it will stimulate others).
1. Stable Matchings
The stable matching problem was introduced by Gale and Shapley as a model of how to assign students to colleges. One can view it as the ordinal version of the SS model. The problem has a set $M$ of men and a set $W$ of women. Each $m \in M$ has a strict preference ordering over the elements of $W$ and each $w \in W$ has a strict preference ordering over the men. The preference ordering of agent $i$ is denoted $\succ_i$, and $w \succ_m w'$ means agent $m$ ranks $w$ above $w'$. One can accommodate the possibility of an agent choosing to remain single by including for each man (woman) a dummy woman (man) in the set $W$ ($M$) that corresponds to being single (or matched with oneself). With this construction we can assume $|M| = |W|$.
A matching is called unstable if there are two men $m, m'$ and two women $w, w'$ such that
- $m$ is matched to $w$,
- $m'$ is matched to $w'$,
- and $w' \succ_m w$ and $m \succ_{w'} m'$.
The pair $(m, w')$ is called a blocking pair. A matching that has no blocking pairs is called stable.
Given the preferences of the men and women, it's always possible to find a stable matching using the deferred acceptance algorithm.
Deferred Acceptance Algorithm, male-propose version
- First, each man proposes to his top ranked choice.
- Next, each woman who has received at least two proposals keeps (tentatively) her top ranked proposal and rejects the rest.
- Then, each man who has been rejected, proposes to his top ranked choice amongst the women who have not rejected him.
- Again each woman who has at least two proposals (including ones from previous rounds), keeps her top ranked proposal and rejects the rest.
- Process repeats until no man has a woman to propose to or each woman has at most one proposal.
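The steps above can be sketched as follows (the dictionary-based representation is my own):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Male-propose deferred acceptance. Preferences are dicts mapping each
    agent to a list of agents on the other side, best first. Returns {man: woman}."""
    rank = {w: {m: k for k, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}    # index of next woman to try
    engaged = {}                                 # woman -> man (tentative)
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m                       # w keeps her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers the new proposer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                       # rejected; m proposes again later
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
match = deferred_acceptance(men, women)
# x keeps b over a, so a settles for y.
```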
Theorem 1 The male propose algorithm terminates in a stable matching.
Proof: When it terminates, it clearly does so in a matching. Suppose the matching is not stable. Then, there exists a blocking pair $(m, w')$ with $m$ matched to $w$, say, and $w'$ matched to $m'$. Since $(m, w')$ is blocking and $w' \succ_m w$, in the proposal algorithm $m$ would have proposed to $w'$ before $w$. Since $m$ was not matched with $w'$ by the algorithm, it must be because $w'$ received a proposal from a man that she ranked higher than $m$. Since the algorithm matches her to $m'$, it follows that $m' \succ_{w'} m$. This contradicts the fact that $(m, w')$ is a blocking pair.
One could describe an algorithm in which the females propose; the outcome would also be a stable matching, possibly different from the one returned by the male-propose version. Thus, not only is a stable matching guaranteed to exist, but there can be more than one. If there can be more than one stable matching, is there a reason to prefer one to another?
Denote a matching by $\mu$. The woman matched to man $m$ in the matching $\mu$ is denoted $\mu(m)$. Similarly, $\mu(w)$ is the man matched to woman $w$. A matching $\mu$ is male-optimal if there is no stable matching $\nu$ such that $\nu(m) \succ_m \mu(m)$ or $\nu(m) = \mu(m)$ for all $m$, with $\nu(m) \succ_m \mu(m)$ for at least one $m$. Similarly define female-optimal.
Theorem 2 The stable matching produced by the (male-proposal) Deferred Acceptance Algorithm is male-optimal.
Proof: Let $\mu$ be the matching returned by the male-propose algorithm. Suppose $\mu$ is not male-optimal. Then, there is a stable matching $\nu$ such that $\nu(m) \succ_m \mu(m)$ or $\nu(m) = \mu(m)$ for all $m$, with $\nu(m) \succ_m \mu(m)$ for at least one $m$. Therefore, in the application of the proposal algorithm there must be an iteration where some man $m$ proposes to $\nu(m)$ before $\mu(m)$, since $\nu(m) \succ_m \mu(m)$, and is rejected by woman $\nu(m)$. Consider the first such iteration. Since woman $\nu(m)$ rejects $m$ she must have received a proposal from a man $m'$ she prefers to man $m$. Since this is the first iteration at which a male is rejected by his partner under $\nu$, it follows that man $m'$ ranks woman $\nu(m)$ higher than $\nu(m')$. Summarizing, $m' \succ_{\nu(m)} m$ and $\nu(m) \succ_{m'} \nu(m')$, implying that $\nu$ is not stable, a contradiction.
You can replace the word “male” by the word “female” in the statement of the theorem. There is no stable matching that is simultaneously optimal with respect to males and females.
A stable matching is immune to a pair of agents opting out of the matching (the blocking pair). We could ask that no subset of agents should have an incentive to opt out of the matching. Formally, a matching $\mu'$ dominates a matching $\mu$ if there is a set $S$ of men and women that satisfies the following two conditions:
- $\mu'(m) \in S$ and $\mu'(w) \in S$ for all $m, w \in S$,
- and $\mu'(m) \succ_m \mu(m)$ and $\mu'(w) \succ_w \mu(w)$ for all $m, w \in S$.
Stability is a special case of this dominance condition when we restrict attention to sets $S$ consisting of a single couple. The set of undominated matchings is called the core of the matching game.
Theorem 3 The core of the matching game is the set of all stable matchings.
Now assume the preference orderings of the agents are private information to the agent.
Theorem 4 The direct mechanism associated with the male propose algorithm is strategy proof for the males.
Proof: If not, there is a preference profile $\pi$ for the men such that some man, say $m_1$, can misreport his preferences and obtain a better match. Formally, let $\mu$ be the stable matching obtained by applying the male proposal algorithm to the profile $\pi$. Suppose $m_1$ reports the preference ordering $\pi^*_{m_1}$ instead. Let $\nu$ be the matching that results when the male-proposal algorithm is applied to the profile $\pi^* = (\pi^*_{m_1}, \pi_{-m_1})$. For a contradiction suppose $\nu(m_1) \succ_{m_1} \mu(m_1)$. For convenience write $\nu(m) \succeq_m \mu(m)$ to mean that $\nu(m) \succ_m \mu(m)$ or $\nu(m) = \mu(m)$.
Let $R$ be the set of men better off under matching $\nu$ than matching $\mu$. We show that for any $m \in R$, $\nu(m) \in \mu(R)$; in other words, $\nu(R) = \mu(R)$. For a contradiction suppose $w = \nu(m) \notin \mu(R)$ for some $m \in R$. In particular $w \ne \mu(m)$. Now, because $m \in R$, $\nu(m) \succ_m \mu(m)$, and stability of $\mu$ with respect to $\pi$ implies $\mu(w) \succ_w m$. Stability of $\nu$ with respect to $\pi^*$ requires that $\nu(\mu(w)) \succ_{\mu(w)} w$, otherwise $(\mu(w), w)$ would block $\nu$. Therefore, $\mu(w) \in R$, contradicting $w \notin \mu(R)$.
Now, each woman in $\mu(R)$ rejects a man in $R$ at some iteration during execution of the male-propose algorithm on $\pi$, because each $m \in R$ eventually proposes to $\mu(m)$, whom he ranks below $\nu(m)$. Let $m'$ be the last man in $R$ to make a proposal during execution of the male-propose algorithm. This proposal is made to $\mu(m')$, who, by choice of $m'$, must have rejected some man in $R$ at some earlier iteration. This means that when $m'$ proposes to $\mu(m')$, she must reject a pending proposal from a man $m''$ such that $m'' \notin R$. As $m'' \notin R$, we have $\mu(m'') \succeq_{m''} \nu(m'')$. Therefore, $(m'', \mu(m'))$ form a blocking pair for $\nu$ with respect to $\pi^*$ (since $\mu(m') \succ_{m''} \mu(m'') \succeq_{m''} \nu(m'')$).
The mechanism associated with the male propose algorithm is not strategy proof for the females.
2. A Lattice Formulation
We describe a proof (due to Adachi; Fleiner also independently but later) of the existence of stable matchings using Tarski’s fixed point theorem.
Call an assignment of women to men such that each man is assigned to at most one woman (but a woman may be assigned to more than one man) a male semi-matching. The analogous object for women will be called a female semi-matching. For example, assigning each man his first choice would be a male semi-matching. Assigning each woman her third choice would be an example of a female semi-matching.
A pair of male and female semi-matchings will be called a semi-matching denoted , , etc. An example of a semi-matching would consist of each man being assigned his first choice and each woman being assigned her last choice.
The woman assigned to man $m$ under the semi-matching $\mu$ will be denoted $\mu(m)$. If man $m$ is assigned to no woman under $\mu$, then $\mu(m) = m$. Similarly for $\mu(w)$. Next, define a partial order over the set of semi-matchings. Write $\mu \succeq \mu'$ if
- $\mu(m) \succ_m \mu'(m)$ or $\mu(m) = \mu'(m)$ for all $m$, and,
- $\mu'(w) \succ_w \mu(w)$ or $\mu(w) = \mu'(w)$ for all $w$.
Therefore $\mu \succeq \mu'$ if all the men are weakly better off under $\mu$ than in $\mu'$ and all the women are weakly worse off under $\mu$ than in $\mu'$.
Next, define the meet and join operations. Given two semi-matchings $\mu$ and $\mu'$, define $\mu \vee \mu'$ as follows:
- $(\mu \vee \mu')(m) = \mu(m)$ if $\mu(m) \succ_m \mu'(m)$, otherwise $(\mu \vee \mu')(m) = \mu'(m)$,
- $(\mu \vee \mu')(w) = \mu(w)$ if $\mu'(w) \succ_w \mu(w)$, otherwise $(\mu \vee \mu')(w) = \mu'(w)$.
Define $\mu \wedge \mu'$ as follows:
- $(\mu \wedge \mu')(m) = \mu(m)$ if $\mu'(m) \succ_m \mu(m)$, otherwise $(\mu \wedge \mu')(m) = \mu'(m)$,
- $(\mu \wedge \mu')(w) = \mu(w)$ if $\mu(w) \succ_w \mu'(w)$, otherwise $(\mu \wedge \mu')(w) = \mu'(w)$.
With these definitions it is easy to check that the set of semi-matchings forms a complete lattice.
Now define a function $T$ on the set of semi-matchings that will turn out to be non-decreasing. Given a semi-matching $\mu$, define $T(\mu)$ to be the following semi-matching:
- $T(\mu)(m)$ is man $m$'s most preferred woman from the set $\{w : m \succeq_w \mu(w)\}$. If this set is empty, set $T(\mu)(m) = m$.
- $T(\mu)(w)$ is woman $w$'s most preferred man from the set $\{m : w \succeq_m \mu(m)\}$. If this set is empty, set $T(\mu)(w) = w$.
It is clear that $T$ maps semi-matchings into semi-matchings.
Theorem 5 There is a semi-matching $\mu$ such that $T(\mu) = \mu$, and any such $\mu$ is a stable matching.
Proof: We use Tarski's theorem. To apply it we need to check that $T$ is non-decreasing. Suppose $\mu \succeq \mu'$. Pick any $m$. From the definition of the partial order, the women are weakly worse off under $\mu$ than in $\mu'$. Thus
$$\{w : m \succeq_w \mu'(w)\} \subseteq \{w : m \succeq_w \mu(w)\},$$
and so $T(\mu)(m) \succ_m T(\mu')(m)$ or $T(\mu)(m) = T(\mu')(m)$. When considering the women, for each $w$ we must have (because the men are weakly better off under $\mu$ than under $\mu'$):
$$\{m : w \succeq_m \mu(m)\} \subseteq \{m : w \succeq_m \mu'(m)\}.$$
Therefore, $T(\mu) \succeq T(\mu')$.
As the conditions of Tarski's theorem hold, there is a semi-matching $\mu$ such that $T(\mu) = \mu$. We show that the semi-matching $\mu$ is a stable matching.
By the definition of a semi-matching, $\mu(m)$ is single-valued for every $m$, as is $\mu(w)$ for every $w$. To show that $\mu$ is a matching, suppose not. Then there is a pair, say $(m, w)$, such that $\mu(m) = w$ but $\mu(w) \ne m$. Since $T(\mu) = \mu$, it follows that $w$ is $m$'s top ranked choice in $\{w' : m \succeq_{w'} \mu(w')\}$, so $m \succeq_w \mu(w)$; and $\mu(w)$ is $w$'s top ranked choice in $\{m' : w \succeq_{m'} \mu(m')\}$. Since $\mu(m) = w$, man $m$ belongs to this last set, so $\mu(w) \succeq_w m$, and as preferences are strict and $\mu(w) \ne m$, in fact $\mu(w) \succ_w m$. This contradicts $m \succeq_w \mu(w)$, so we get a contradiction.
To show that $\mu$ is stable, suppose not. Then, there is a blocking pair $(m, w)$: $w \succ_m \mu(m)$ and $m \succ_w \mu(w)$. Now $\mu(m) = T(\mu)(m)$ is man $m$'s top ranked choice from $\{w' : m \succeq_{w'} \mu(w')\}$. But this set contains $w$ (since $m \succ_w \mu(w)$), and $w$ is ranked higher by man $m$ than $\mu(m)$, a contradiction.
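To compute the fixed point, one can iterate $T$ starting from the semi-matching that is best for the men and worst for the women (the top of the lattice); monotonicity guarantees the iterates descend to a fixed point. A sketch, with the representation and helper names my own:

```python
def adachi_fixed_point(men_prefs, women_prefs):
    """Iterate Adachi's map T from the men-best/women-worst semi-matching
    until a fixed point, which is a stable matching. Preferences are dicts
    mapping each agent to a list of the other side, best first; agents
    implicitly rank themselves (staying single) last."""
    M, W = list(men_prefs), list(women_prefs)
    all_prefs = {**men_prefs, **women_prefs}
    rank = {a: {b: k for k, b in enumerate(prefs + [a])}
            for a, prefs in all_prefs.items()}

    def T(mu):
        nu = {}
        for m in M:   # m's best woman among those who weakly prefer m to mu(w)
            cands = [w for w in men_prefs[m] if rank[w][m] <= rank[w][mu[w]]]
            nu[m] = cands[0] if cands else m
        for w in W:   # symmetrically for the women
            cands = [m for m in women_prefs[w] if rank[m][w] <= rank[m][mu[m]]]
            nu[w] = cands[0] if cands else w
        return nu

    # Top of the lattice: every man gets his favorite, every woman is single.
    mu = {**{m: men_prefs[m][0] for m in M}, **{w: w for w in W}}
    while True:
        nxt = T(mu)
        if nxt == mu:       # fixed point reached (monotone descent terminates)
            return {m: mu[m] for m in M}
        mu = nxt

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
stable = adachi_fixed_point(men, women)
```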
2.1. Splittable Stable Marriage
To show how versatile Adachi’s lattice approach is, we examine an extension due to Sethuraman and Teo called the splittable stable marriage problem.
We have two disjoint sets of agents, say "sellers" (indexed by $i$ or $i'$) and "buyers" (indexed by $j$ or $j'$). Imagine a "market" for a product that is infinitely divisible. Seller $i$ has $s_i$ units to sell; buyer $j$ wishes to buy $b_j$ units; and the quantity of sale between $i$ and $j$ can be at most $c_{ij}$ ("capacity" constraint). Each agent has a strict preference ordering over agents on the opposite side. For convenience assume each agent ranks themselves last. Assume that the number of buyers is equal to the number of sellers, $n$.
A splittable marriage is defined as an $(n+1) \times (n+1)$ matrix $X = (x_{ij})$, with rows indexed by the sellers together with a dummy seller $0$ and columns indexed by the buyers together with a dummy buyer $0$, such that:
$$\sum_j x_{ij} = s_i \quad \forall i \ne 0, \qquad (1)$$
$$\sum_i x_{ij} = b_j \quad \forall j \ne 0, \qquad (2)$$
$$0 \le x_{ij} \le c_{ij} \quad \forall i, j. \qquad (3)$$
Here $x_{ij}$ is the quantity of the product sold by seller $i$ to buyer $j$, if neither $i$ nor $j$ is $0$; if $i = 0$ and $j \ne 0$, then $x_{0j}$ is the additional quantity that buyer $j$ would like to buy but cannot; and if $i \ne 0$ and $j = 0$, then $x_{i0}$ is the additional quantity that seller $i$ would like to sell but cannot.
A blocking pair for a splittable marriage $X$ is a pair of agents $i$ and $j$ such that
- $x_{ij} < c_{ij}$;
- $x_{ij'} > 0$ for some $j'$ ranked below $j$ by seller $i$ (possibly $i$ herself); and
- $x_{i'j} > 0$ for some $i'$ ranked below $i$ by buyer $j$ (possibly $j$ himself).
If blocks , the agents and can do better by trading more with each other, and giving up some of the quantities they transact with less-preferred agents. A splittable stable marriage is a splittable marriage that is not blocked.
Next we define the appropriate generalization of semi-matching. A demand-list for seller $i$ is a row vector $(x_{i0}, x_{i1}, \ldots, x_{in})$ that satisfies (1) and (3). Similarly, a demand-list for buyer $j$ is a column vector $(x_{0j}, x_{1j}, \ldots, x_{nj})$ that satisfies (2) and (3).
The demand-lists of sellers and buyers can be encoded by matrices and . Assume matrices and have their rows indexed by the sellers and the columns indexed by the buyers. Specifically, has rows and columns, whereas has rows and columns.
The seller demand-list matrix is a valid “matching” of the sellers, but need not be a valid matching of the sellers to the buyers. Similarly for the buyer demand-list matrix .
Let and be a collection of seller and buyer demand-lists respectively. Define partial orders and on the sets and respectively. For any two elements ,
Similarly, for any two elements ,
(Notice the change in the direction of the inequality in the definition of .)
Given any , let be defined as follows:
Similarly, let be defined as follows:
The matrices and can be computed as follows: for each seller, compute the entries corresponding to his row, starting with the most preferred buyer. Clearly, and are in , and are, respectively, the greatest lower bound and the least upper bound of and . Again, the meet and join of any two elements of can be defined in a similar way. Thus and are lattices. By replacing “min” and “max” with “inf” and “sup” respectively, we can infer that meets and joins exist for arbitrary subsets of elements of or ; hence and are complete lattices. The lattice of interest is the product lattice with the partial order defined as follows:
From standard results from lattice theory we see that is a complete lattice.
Call demand-lists and compatible if for all . Any splittable stable marriage can be viewed as a pair of compatible demand-lists: for any seller and buyer , set and to be ; allocate the remaining supply and unfulfilled demand to the last entries in each row and column of and respectively. The converse is false.
Next we need an appropriate monotone function on the lattice . As intuition, suppose the buyers form a coalition and “agree” to hand over their demand-list to the sellers. How should the sellers respond? Clearly, the sellers would “update” their demand-lists as follows:
Notice that appears on both sides of (1). One way to perform this update is as follows: for each seller, update the entries corresponding to his row, starting with the most preferred buyer. In this manner, the “old” demand-list of is never used in the update. We call this the canonical computation.
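Since the update rule Eq. (1) itself was lost in transcription, the following Python sketch only illustrates the shape of the canonical computation: one seller's row is filled greedily in preference order, subject to supply, capacity, and the buyers' current offers. All names and numbers here are hypothetical.

```python
def update_seller_row(supply, prefs, cap, buyer_offer):
    """Fill one seller's demand-list row greedily, most preferred buyer first.

    supply         -- the seller's total quantity for sale
    prefs          -- buyers in the seller's strict preference order
    cap[j]         -- the pair capacity with buyer j
    buyer_offer[j] -- the quantity buyer j currently directs at this seller
                      (read off the buyers' demand-list matrix)
    Leftover supply goes to the seller's own 'self' slot, ranked last.
    """
    row, remaining = {}, supply
    for j in prefs:
        q = min(remaining, cap[j], buyer_offer[j])
        row[j] = q
        remaining -= q
    row["self"] = remaining
    return row
```

Because the row is filled most preferred buyer first, the "old" entries of the row are never consulted, matching the canonical computation described above.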
Similar considerations suggest:
Again, Eq.(2) can be computed as follows: for each buyer, update the entries corresponding to his column, starting with the most preferred seller.
Equations (1) and (2) have identical right hand sides. Unsurprisingly, compatible demand-lists arising from solutions of Equations (1) and (2) correspond to splittable stable marriages.
Lemma 6 If and are solutions to Equations (1) and (2), then for defines a splittable stable marriage.
Proof: By the hypothesis of the lemma, and are identical. Thus,
So if , either or .
We next show that the computations of Eqs.(1) and (2) define monotone maps.
Theorem 7 For , let be the seller demand-list obtained from Eq.~(1). Then,
Proof: Consider seller and buyer . Since , by definition,
Therefore
If is ‘s most preferred buyer, follows from the definition. Inductively, we can argue along these lines that .
Similarly:
Theorem 8 For , let be the buyer demand-list obtained from Eq.~(2). Then,
Consider the function defined on as follows:
We claim that is monotone. Why? Let . Then , and so ; similarly, , and so . Thus . By Tarski’s theorem, has a fixed point. This fixed point is clearly a splittable stable marriage.
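Tarski-style iteration is easy to demonstrate on a toy finite lattice. The lattice and map below are purely illustrative, not the demand-list lattice itself:

```python
def tarski_fixed_point(f, bottom, leq):
    """Iterate a monotone map f upward from the bottom of a finite lattice.

    On a finite lattice this Kleene-style iteration reaches the least
    fixed point whose existence Tarski's theorem guarantees.
    """
    x = bottom
    while True:
        y = f(x)
        assert leq(x, y)          # monotonicity keeps the iterates increasing
        if y == x:
            return x
        x = y

# toy lattice: subsets of {0, 1, 2} ordered by inclusion
f = lambda s: frozenset(s | {0} | ({1} if 0 in s else set()))
lfp = tarski_fixed_point(f, frozenset(), lambda a, b: a <= b)
```

Starting from the empty set, the iterates are {} → {0} → {0, 1}, and {0, 1} is the least fixed point.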
One can generalize even further. Let and be two monotone, integer-valued submodular functions. Define a splittable matching to be any . Preferences would be defined similarly. Each buyer seeks to get as much as possible from their most preferred seller, subject to feasibility, etc.
3. An LP Formulation
Vande Vate identified a collection of linear inequalities that describe the convex hull of stable matchings. For each man and woman let if man is matched with woman and zero otherwise.
Then, every stable matching must satisfy the following.
Let be the polyhedron defined by these inequalities.
The first two constraints of ensure that each agent is matched with exactly one other agent of the opposite sex. The third constraint ensures stability by ruling out blocking pairs (call it the blocking constraint). To see why, suppose and . Then man is matched to a woman that he ranks below . Similarly, woman is matched to a man she ranks below . This would make the pair a blocking pair.
Lemma 9 Suppose . Let . Then, implies that
Proof: Consider . The dual to this program is
subject to
Set and . Substituting this into the dual constraints yields:
Choose any and set . This choice of is clearly dual feasible. It has an objective function value of and so, is dual optimal. The lemma now follows by complementary slackness.
The proof below is due to Sethuraman and Teo.
Theorem 10 Suppose . Then, is the convex hull of stable matchings.
Proof: Choose any weight vector and let
With each member of we associate an interval . For each , partition the associated interval into subintervals of length for all . Arrange these subintervals left to right by man ‘s decreasing preference over . For each woman , partition the associated interval into subintervals of length for all . Arrange these subintervals left to right by woman ‘s increasing preference over .
Lemma 6 means the subintervals spanned by in and coincide. Pick a random number uniformly in (0, 1] and construct a semi-matching in the following way:
- Match to if lies in the subinterval of spanned by .
- Match to if lies in the subinterval of spanned by .
By Lemma 6, is matched to iff is matched to . Furthermore, no two men can be matched to the same woman, and similarly no two women can be matched to the same man. So the above semi-matching gives rise to a perfect matching. To show this matching is stable, consider man who is matched to but prefers . Then, in , the subinterval corresponding to is to the left of the subinterval corresponding to . Because is matched to , it means that is to the right of the subinterval corresponding to in . Therefore, is to the left of the subinterval corresponding to in . In other words, is matched to someone she prefers to . Set iff man is matched to woman by the randomized scheme above. Then
Thus, we have exhibited a probability distribution over stable matchings whose expected objective function value coincides with . It follows then, that there is a stable matching with objective function value .
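To make the interval construction concrete, here is an illustrative Python sketch on a 2×2 instance with two stable matchings; the fractional point puts weight 1/2 on each, and the rounding rule follows the text (men's subintervals in decreasing preference order, women's in increasing order). All names are made up:

```python
def round_to_matching(x, men_pref, women_pref, U):
    """Round a point x of the stable matching polytope via a common draw U.

    men_pref[m] and women_pref[w] list partners best-first.  Each man's
    unit interval is cut into pieces of length x[m][w] in decreasing
    preference order, each woman's in increasing (worst-first) order,
    and the single draw U selects one piece per interval.
    """
    def pick(pieces, U):
        lo = 0.0
        for name, w in pieces:
            if lo < U <= lo + w:
                return name
            lo += w

    match_m = {m: pick([(w, x[m][w]) for w in men_pref[m]], U)
               for m in men_pref}
    match_w = {w: pick([(m, x[m][w]) for m in reversed(women_pref[w])], U)
               for w in women_pref}
    return match_m, match_w

men_pref = {"m1": ["w1", "w2"], "m2": ["w2", "w1"]}
women_pref = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}   # best first
x = {"m1": {"w1": 0.5, "w2": 0.5}, "m2": {"w1": 0.5, "w2": 0.5}}
```

For U ≤ 1/2 the draw returns the man-optimal stable matching and for U > 1/2 the woman-optimal one; in both cases the men's and women's selections agree, as the coinciding-subinterval property requires.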
3.1. A Subgradient Algorithm
The LP approach requires that we know that . It would be nice to verify this independently of the proposal algorithm. Here I will outline (correctness not warranted) how one can show directly from the LP that . If woman is man ‘s first choice, set . If woman is man ‘s second choice, set , and so on. Consider:
Let be the multiplier associated with the blocking constraints and the multiplier associated with the constraint for all . Consider the following Lagrangean relaxation of the optimization problem:
subject to
This is easy to solve. For each man choose the woman that maximizes
In case of a tie, chooses the top ranked woman.
Initially set . Let be the value of the multipliers at start of iteration . Denote by the optimal choice of given the multipliers . If we will say that man proposed to woman . Given we adjust by as follows:
- Set
- Set for all .
- Set
- Set
Stop once is an assignment. By complementary slackness this must be an optimal stable assignment. Call the dual adjusted rank.
Suppose at iteration , .
Case 1 If among all such that , man is woman ‘s top ranked choice, .
Because is woman ‘s top ranked choice it follows that . Because it follows that .
Case 2 If among all such that , man is not woman ‘s top ranked choice, .
Because is not woman ‘s top ranked choice it follows that . Because it follows that .
Case 3 If and for all such that , then .
Clearly and . Also, .
Case 4 If and for at least one such that , then .
Clearly and . Also, .
Case 5 If , then .
Clearly, . Also, .
Let . I show first that . Consider first the case . Suppose man proposed to woman and man is woman ‘s top choice among those who have proposed. Then the dual adjusted rank of woman by man declines by exactly 1 (see case 1). The dual adjusted rank of all other women decline by at least 1 (case 2 to 4). Note that case 5 does not apply in this iteration. Therefore, in the next iteration man will continue to propose to woman . More generally, any woman who receives a proposal in iteration continues to receive a proposal in iteration . Suppose this holds until iteration . Suppose man proposes to woman in iteration and man is woman ‘s top choice among those who have proposed at iteration . By cases (1-4) the dual adjusted rank for all woman ranked below declines by at least 1. Consider a woman . By case 5
However, because any woman who received a proposal before iteration continues to receive a proposal at iteration . Thus the dual adjusted rank of all women ranked above declines by at least 1. Because man did not propose to these women in iteration their dual adjusted rank must be at least 2 smaller than . Thus, woman continues to have the highest dual adjusted rank for man and at iteration , man will continue to propose to woman .
Next, I show there must be some such that . If not, then, it must be the case that . Because there must be a woman who is overdemanded. Consider a man who has proposed to her at iteration who is not her top choice among those proposing to her. Notice the following:
- By case 2,
- By case 5 for all such that , .
- By case 5, for all such that , .
- By case 3, for all , . Note, for all such it must be that .
To summarize, the dual adjusted rank of all women such that , goes down by at least 2. That means in all subsequent iterations, man must propose to a woman ranked at or above in . Eventually, there is a woman, , say, that man keeps proposing to. This means that the dual adjusted rank of all women in declines by at least 2 while the dual adjusted rank of all women outside of declines by exactly 1. Eventually, some woman outside of must have a dual adjusted rank that is largest and man will propose to her, contradiction.
A mathematical detour today. Will state and prove some basic results in combinatorial optimization that have applications in matching and auction theory. The first is the max-flow-min-cut theorem. I’ll use it to prove Hall’s marriage theorem as well as derive the Border-Matthews characterization of implementable interim allocation rules. It has other applications. Itai Sher, for example, uses it in the analysis of a special case of the Glazer-Rubinstein model of persuasion. Mobius et al use it in their study of capital flows in social networks (the Peruvian village paper). Olszewski and myself have a forthcoming paper where we use it to talk about team formation.
Then, it's on to polymatroids, which can be used to show that the assignment problem retains its integrality property in the presence of additional constraints. These additional constraints arise in, for example, school choice problems. See the paper by Budish, Che, Kojima and Milgrom for a list of examples.
1. The Maximum Flow Problem
Let be a network with vertex set and edge set . Assume, for convenience, that edges are `bi-directional'. In there are two special vertices: one called , the source, and the other , the sink. Each edge has a capacity that is the maximum flow that can traverse (in either direction). Assume is integral for all . To avoid trivialities, assume there is at least one path in from to .
The maximum flow problem is to find the maximum amount of flow that can be sent from , through the network, to . That there is only one source-sink pair is not a restriction. An instance with multiple sources and sinks can be converted into one with a single pair through the introduction of a dummy super source and super sink.
A cut in is a set of edges whose removal disconnects from . Equivalently, a cut is a set of edges that intersects every path from to in . A cut can also be defined in terms of a partition of into two sets and such that and . A cut would be the set of edges with one endpoint in and the other in . The capacity of a cut is the sum of the capacities of the edges in the cut.
The maximum flow is clearly bounded above by the capacity of any cut. Indeed, the max flow must be bounded above by the minimum capacity cut. Remarkably, this upper bound is tight, i.e., the max flow is the min cut. Note that the min cut must be integer valued but it is not obvious that the maximum flow should be integer valued.
We can formulate the problem of finding a maximum flow as a linear program. To do this, let denote the amount of flow on edge from to . Then, the maximum flow problem can be expressed as follows:
subject to
The first constraint says that the flow, , out of must use edges of the form . The second constraint imposes flow conservation at each intermediate vertex, i.e., flow in must equal flow out. The third constraint ensures that the total flow entering must use edges of the form .
The constraint matrix is a network matrix and so TUM. You can see this by noting that each variable appears in at most two constraints with opposite sign. Once as an `incoming’ edge and once as an `outgoing’ edge. The variable also appears exactly twice with opposite sign.
TUM of the constraint matrix means that the flow on each edge and as well as the amount of flow is integer valued.
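The integral max flow can of course also be computed combinatorially rather than via the LP. Here is a standard Edmonds–Karp sketch; it uses directed edges for simplicity (rather than the bidirectional convention above) and illustrative capacities:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: augment along shortest s-t paths in the residual graph.

    cap is a dict of dicts of integer capacities (directed edges).  Every
    augmentation is then integral, so the maximum flow is integer valued,
    mirroring the TUM argument in the text.
    """
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)   # reverse residual edges
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                 # BFS: shortest augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)          # bottleneck capacity
        for u, v in path:
            res[u][v] -= b
            res[v][u] += b
        flow += b

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
val = max_flow(cap, "s", "t")
```

On this toy network the maximum flow is 5, which equals the capacity of the cut formed by the two edges into the sink.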
Theorem 1 The maximum flow is equal to the capacity of the minimum cut.
Proof: Let be the value of the maximum flow. By LP duality:
To see where it came from, observe that we have one dual variable for each and one variable for each . Recall the bidirectional assumption. So, is different from . Thus one should have two constraints for each pair :
I have simply folded these into a single constraint . Notice that at optimality at most one of and can be positive which is why this folding is legitimate.
Notice also that we can WLOG fix and . This allows us to write the dual as:
By TUM of the constraint matrix, we know that an optimal solution to this program that is integral exists. An integer solution to this program corresponds to a cut. Let and . Observe
which is the capacity of the cut associated with the partition. It is easy to see that every cut corresponds to a feasible integer solution to the program.
I can’t resist offering a probabilistic proof due to Bertsimas, Teo and myself. Let be an optimal solution to the dual program. Suppose, for a contradiction, that it is fractional. Consider the following randomization scheme:
- Position node in according to the value of .
- Generate a single random variable uniformly distributed in . Round all nodes with to and all nodes with to .
- Set .
The randomization scheme clearly returns a feasible cut. More importantly, the randomization scheme defines a probability distribution over the set of cuts. Now let's compute the expected capacity of the cut generated by the randomization scheme:
Thus, the expected capacity of our random cut is the same as the optimal objective function value. Because the randomization scheme is a distribution over cuts, it follows that there must be a cut with capacity equal to .
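The expectation in the last step is mechanical to verify. A small sketch, assuming the normalization of the duals with the source at 1 and the sink at 0 (the toy numbers are illustrative): a uniform threshold U puts every node with value at least U on the source side, so an edge is cut with probability equal to the gap between its endpoints' values.

```python
def expected_cut_capacity(p, cap):
    """Expected capacity of the threshold-rounding cut.

    p[v] is node v's dual value in [0, 1] (assumed p[source] = 1,
    p[sink] = 0); edge (u, v) is separated with probability
    |p[u] - p[v]|, so the expectation is a simple weighted sum.
    """
    return sum(c * abs(p[u] - p[v]) for (u, v), c in cap.items())
```

On a path s → a → t with p = (1, 0.4, 0) and capacities (2, 3), the expected cut capacity is 2·0.6 + 3·0.4 = 2.4.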
1.1. Hall Marriage Theorem
Recall, that in the DGS auction from the previous session, an important step relied on the Hall Marriage theorem. Here, I will use maxflow-mincut to prove it.
Let be a bipartite graph with vertices and edge set . Suppose . Note edges run between and only. A perfect matching is a subset of edges such that each vertex is incident to exactly one edge in . A matching is a subset of edges such that each vertex is incident to at most one edge in .
For any , let be the set of neighbors of . That is .
Theorem 2 A bipartite graph, contains a perfect matching iff. for all .
Proof: This is clearly necessary. So, suppose for all and for a contradiction assume that the graph does not contain a perfect matching.
Introduce a dummy source vertex with an edge directed to each vertex in . To each edge with , assign a capacity of 1 unit. Introduce a dummy sink with an edge directed from each vertex of to . To each edge of the form assign a capacity of 1 unit. To each edge in assign a capacity of .
Consider now a maximum flow from to . By Theorem 1 we can assume the flow is integral, i.e., the flow on each edge is an integer amount. Second, it is easy to see that no edge will contain more than 1 unit of flow. Third, the set of edges, in that carry a non-zero amount of flow must be a matching. Why? No more than one unit of flow can leave any and no more than one unit of flow can enter an . Fourth, each matching corresponds to a feasible flow. Thus, the value of the maximum flow gives us the size of a maximum cardinality matching. Because we assumed no perfect matching, the value of the maximum flow must be less than . i.e., .
Now a maximum flow must equal the value of a minimum cut. Clearly, no minimum cut can contain edges in because they have infinite capacity. So, the minimum cut must consist of edges of the form and . Let and be the set of vertices incident to the edges in a minimum cut. The capacity of this cut will be . Because they form a cut, removing them from the graph would delete all paths from to . Do that.
Consider any . There cannot be a such that . If not, there would be a path . That means every neighbor of a vertex resides in . Hence, by assumption:
a contradiction.
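The equivalence just proved is easy to check by brute force on small graphs. Here is a minimal Python sketch (graph and names illustrative) comparing Hall's condition against a direct search for a perfect matching:

```python
from itertools import combinations, permutations

def has_perfect_matching(B, W, E):
    """Brute-force search for a perfect matching (assumes |B| = |W|)."""
    return any(all((b, w) in E for b, w in zip(B, perm))
               for perm in permutations(W))

def hall_condition(B, W, E):
    """Check |N(S)| >= |S| for every nonempty subset S of B."""
    for k in range(1, len(B) + 1):
        for S in combinations(B, k):
            N = {w for b in S for w in W if (b, w) in E}
            if len(N) < len(S):
                return False
    return True
```

For instance, {(1,"a"), (2,"b"), (1,"b")} passes both tests, while {(1,"a"), (2,"a")} fails both: the set {1, 2} has the lone neighbor "a".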
The Hall marriage theorem is easily generalized to something called Gale's demand theorem. Suppose each vertex in is a demand vertex, demanding units of a homogeneous good. Each vertex is a supply vertex, supplying units of that same good. Supply can be shipped to demand nodes only along the edges in . Is there a feasible way to match supply with demand? Yes, iff. for all we have
1.2. Border’s Theorem
For simplicity assume two agents: let be the type space of agent 1, the type space of agent 2, and suppose there is a single good. Suppose, also for simplicity, that types are independent draws. Denote by the probability of agent having type . Let be the probability that the good is allocated to agent when agent 1 reports and agent 2 reports . Feasibility requires that
is an allocation rule. Denote by the interim allocation probability to agent 1 when she reports . Note that
A similar expression applies for . Call an interim allocation probability implementable if there exists an allocation rule that corresponds to it. Here is the question originally posed by Steve Matthews (who conjectured the answer): characterize the implementable ‘s. The high-level idea first. Roll up all the inequalities relating to and feasibility into matrix form:
Denote by the set of that satisfy the system above. The purely mathematical question is this: for which ‘s does there exist at least one such that ? In other words, we want to identify the set :
The set is called the reduced form. Geometrically, is just the projection of onto the -space. By the Farkas lemma, the set can be characterized in the following way. Let . is a cone. Then,
It would seem that is described by a continuum of inequalities. However, because is a cone, it suffices to consider only the generators of the cone and there are finitely many of them. The challenge now reduces to identifying the generators of . When the matrices and have suitable structure this is possible and is the case here. For convenience assume and . Here are all the inequalities:
Now some appropriate scaling:
Now a change of variables to make things nice and neat: .
This system corresponds to the flow balance conditions for a flow problem on a bipartite graph. For each profile of types introduce a supply vertex with supply . For each introduce a demand vertex with demand . From each supply vertex of the form , introduce two edges. One directed from to demand vertex ; the flow on this edge is . The other edge directed into demand vertex ; the flow on this edge is .
The first four inequalities say that the flow out of a supply vertex cannot exceed its available supply. The second four ensure that the flow into a demand vertex matches the demand.
Using this flow interpretation we can use Gale’s generalization of the Hall marriage theorem to determine when a feasible flow exists. Existence of a feasible flow means that is implementable. Let and . Then, for all such and ,
is necessary and sufficient for to be implementable. The left hand side of this inequality is the expected quantity of the good allocated to agents when their types come from and . The right hand side is the probability that at least one agent has a type in . In fact we can rewrite this inequality to read
Here, and are the complements of and .
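The sets and symbols in the condition above were lost in transcription; assuming the standard two-agent form with independent types, a brute-force check of the inequality over all pairs of subsets might look like this (all names illustrative):

```python
from itertools import chain, combinations, product

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def border_holds(T1, T2, f, g, q1, q2):
    """For every pair of subsets S1, S2 of the type spaces, the expected
    interim allocation on S1 and S2 must not exceed the probability that
    at least one agent draws a type in the chosen sets."""
    for S1, S2 in product(subsets(T1), subsets(T2)):
        lhs = sum(f[s] * q1[s] for s in S1) + sum(g[t] * q2[t] for t in S2)
        rhs = 1 - (1 - sum(f[s] for s in S1)) * (1 - sum(g[t] for t in S2))
        if lhs > rhs + 1e-9:
            return False
    return True
```

With uniform binary types, the rule "always allocate to agent 1" (interim probabilities 1 and 0) passes, while pretending both agents always receive the good fails, as it must.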
As you will see later, one can generalize this analysis to cases where the allocation rule must satisfy additional constraints. For details see Che, Kim and Mierendorff as well as Federgruen and Groenvelt.
2. Polymatroids
Let be a ground set. A real valued function defined on subsets of is
- non-decreasing if , and
- is submodular if
- is supermodular if is submodular.
Let be a submodular function on . The polytope:
is the polymatroid associated with . Notice that iff . Polymatroids arise in a number of economic settings. One that we have just discussed is the Border inequalities. Suppose there is a single object to allocate among bidders whose types are independent draws from the same distribution (with density ). Restrict attention to allocation rules that do not depend on the identity of the agent. Let be the interim allocation probability if an agent reports type . Then, is implementable iff.
Call the right hand side of the above and consider the change of variables :
Now is a monotone polymatroid. So, the feasible region described by the Border inequalities is a polymatroid. The theorem below tells us that every extremal interim allocation probability can be implemented as a priority rule. That is, types are assigned priorities and in any profile the good is allocated to the agent with the highest priority. This connection to priority rules is useful for analyzing optimal auctions in dynamic settings (see my survey paper on dynamic mechanism design) as well as competition between auctions (see Pai’s jmp).
Theorem 3 Let be a non-decreasing, integer valued, submodular function on with . Then, the polymatroid has all integral extreme points.
Proof: Choose a weight vector and order the elements of by decreasing weight:
let and for all . Consider the vector obtained by the following greedy procedure:
- for
- for .
I show that . Note that is integral and non-negative because is non-decreasing. To verify that , observe:
The dual to our optimization problem is:
subject to
To show that is an optimal primal solution we identify a dual feasible solution with an objective function value of . Set for . Set and for .
Notice that for all and is feasible in the dual because:
for all and
if .
The dual objective function value is:
Now suppose has a fractional extreme point. Choose to be the unique weight vector that is optimized at that point. However, as shown above, every weight vector has an integral point in that optimizes it, contradiction.
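The greedy procedure in the proof is easy to run. A minimal Python sketch follows; the submodular function f(S) = min(|S|, 2) is just an illustrative example:

```python
def greedy_polymatroid(w, f, ground):
    """Maximize w.x over the polymatroid of f by the greedy rule.

    Process elements in decreasing weight order; element j receives the
    marginal value f(S + j) - f(S) of the prefix S processed so far
    (and 0 once weights turn non-positive), so x is integral whenever
    f is integer valued.
    """
    x, S = {}, frozenset()
    for j in sorted(ground, key=lambda j: -w[j]):
        if w[j] > 0:
            x[j] = f(S | {j}) - f(S)
            S = S | {j}
        else:
            x[j] = 0
    return x
```

With weights (3, 2, 1) and f(S) = min(|S|, 2), the greedy solution picks one unit each of the two heaviest elements and nothing else, for an objective value of 5, which is optimal since no feasible point has coordinate sum above 2.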
If we drop the requirement that be non-decreasing then can still be solved by using the greedy algorithm because
Here , which is submodular and non-decreasing. Now I describe a non-constructive proof of Theorem 3 which uses an `uncrossing' trick that is good to know. Let be an optimal solution to the polymatroid optimization problem. Let . Then, we can assume that is laminar. If true, this means that the optimal basis in the dual is TUM. But this is also an optimal basis for the primal. Thus, there is an optimal primal solution that is integral. OK, suppose is not laminar. Then there exist such that . Construct a new dual solution as follows: increase by each; decrease , by each. This new dual solution is feasible. The change in objective function value is
which cannot exceed zero by submodularity. This is the uncrossing step. Repeated applications produce an optimal dual solution which is laminar.
The following remarkable result on polymatroid intersection is due to Edmonds.
Theorem 4 Let and be non-decreasing, integer valued, submodular functions on with . Then is integral.
Proof: Pick an arbitrary integral and consider . The dual to this linear program is
subject to
Let be an optimal dual solution. Let and . By uncrossing we can assume that each of these is laminar. Recall from the first session that the constraint matrix associated with can be arranged so that it has exactly one non-zero entry in each row. Similarly with . Hence, the constraint matrix formed from the concatenation of and (which is also the optimal basis) is TUM.
From the proof we get another result for free. Let and be two laminar collections on the same ground set . Consider the following polyhedron:
As long as the ‘s are integral, this polyhedron is integral.
How is this useful? Return to the SS model. For any set of edges (of the form with and ), define to be the number of vertices in that the edges in are incident to. Similarly, define to be the number of vertices in that the edges in are incident to. Note and are submodular. Here is an equivalent formulation of the problem of finding an efficient assignment:
subject to
Why is this useful? Notice, by choosing and differently we can model more constraints than just the number of edges incident to a vertex is at most 1.
Bit the bullet and started a (reading) course on market design. The plan is to begin with some lectures that cover the usual material, followed by readings. First session was on the Shapley-Shubik (SS) assignment model. A summary can be obtained by clicking on mktdesign1
The readings cover the following topics:
1) Credit Default Swaps
2) Bankruptcy
The Economics of Bankruptcy Reform, by Philippe Aghion, Oliver Hart, and John Moore. Journal of Law, Economics, & Organization, Vol. 8, No. 3 (Oct. 1992), pp. 523-546
A New Approach to Corporate Reorganizations, by Lucian Bebchuk, Harvard Law Review, 1988
An Analysis of Secured Debt, by Stulz and Johnson, JFE, 1985
3) Healthcare Markets
Ho, Katherine. 2009. “Insurer-Provider Networks in the Medical Care Market.” American Economic Review, 99(1): 393–430.
4) IPOs
5) Affirmative Action
Hickman
Fryer & Loury
Chung
6) Wages, centralized matching, unraveling
Bulow & Levin
Roth
Halaburda
1. Shapley-Shubik Model
There is a set of buyers and a set of sellers each selling one unit of a good (could be divisible or not). Let be the monetary value that buyer assigns to seller ‘s object. In other words, we assume quasi-linear preferences. No buyer wants more than one of any good. By adding dummy sellers or buyers we can ensure that .
Depending on the ‘s you can assign a couple of interpretations to the SS model. First, . Here is the value to buyer of acquiring a good irrespective of who sells it. In effect the sellers all sell the same good. is the opportunity cost to seller . Under this interpretation, SS is a model of bilateral trade.
Second, is the output generated when worker is matched with firm . Sometimes one assumes a specific functional form for , for example, . Here each agent (firm or worker) has a type and their joint output is a function of their types. Third, each seller is an advertising slot and each buyer an advertiser. Then, is the click-through rate on slot and is the value per click to advertiser . Because billions (and if one were Carl Sagan one would repeat the word `billions') of $'s are transacted this way, it is obligatory to mention this example in a class on market design to give the whole enterprise a certain gravitas.

There is a natural optimization problem one can associate with SS, which is to find an assignment of goods to buyers that maximizes the total value, so that no buyer consumes more than one good and no seller gives up more than one good. In other words, find an efficient allocation. This is called the assignment problem or maximum weight bipartite matching. The problem has a long history going back to Monge (1700s). Indeed, well before Becker, it was Monge who considered the problem under the assumption that . Since Monge's time, the literature has split. One branch, via Kantorovich, interprets the problem in terms of couplings of measures; this has blossomed into an active and influential area of mathematics called optimal transport of measure. It has important applications in PDEs and concentration of measure. There are also some applications in economic theory. I won't go down that branch. The other, via Koopmans and Hitchcock, sticks to the discrete setup described here. That is the branch I will follow.
2. Shapley-Shubik (Divisible)
So that the problem of finding an efficient allocation can be expressed as an LP, suppose the goods are divisible and each buyer's utility for goods is linear in quantity. However, no buyer wants to consume more than one unit in total (unit demand). Let be the quantity of good allocated to buyer . For any and , let
subject to
The first constraint arises from the unit demand assumption. The second ensures no seller supplies more than they have. These constraints imply that for all and .
Let be the dual variable associated with each constraint and the dual variable associated with each constraint . The dual to the above program is:
subject to
The dual has an interesting interpretation. Think of as the price of object . Given a collection of prices, the optimal solution to the dual is found by setting each to . Thus, each represents the maximum surplus that agent can receive from the consumption of a single object at prices .
At prices , buyer must solve:
subject to
to determine its demand. It is easy to check that the optimal solution is any convex combination of objects that yield maximum surplus to the buyer.
Suppose is an optimal primal solution and an optimal dual solution. Then, the prices `support' the efficient allocation in the following sense. Suppose we post a price for each . Ask each buyer to name their demand correspondence at the posted prices. Then, it is possible to give each agent one of their demanded bundles and not exceed supply. To see why, recall complementary slackness:
So, if it follows that . Hence, in this economy there is a set of prices that can be posted, one for each good, so as to balance supply with demand. In other words, LP duality gives us the existence of Walrasian prices in this economy.
The set of feasible dual solutions forms a lattice with respect to the partial order if and only if . In this partial order there is a smallest element, which corresponds to the smallest Walrasian price. This can be determined by choosing from among all optimal dual solutions one that minimizes (equivalently maximizes total surplus to buyers).
Hold fixed and consider as a function of subsets of . Then, is non-decreasing. It is also submodular. Why? From the dual we see that it is obtained by minimizing a linear (i.e. submodular) function over a lattice. As submodularity is preserved under minimization, it follows that is submodular in . A similar argument shows that is supermodular in . This submodularity/supermodularity is useful in deriving certain incentive properties of the efficient allocation.
One can associate a TU (transferable utility) cooperative game with the assignment model. For any coalition of buyers () and sellers () let be the value of the coalition. The core will be the set of solutions to the following:
The core has the usual interpretation. It is the set of allocations that cannot be blocked by any coalition of buyers and sellers.
Consider the following dual program (DP):
subject to
It is straightforward to check that every optimal solution to (DP) is in and every point in corresponds to an optimal solution to (DP). Why is this interesting? It says that most of the constraints that define the core are redundant. It suffices to consider only pairs of agents consisting of one buyer and one seller. If the core is the set of efficient allocations immune to deviations by any subset of agents, then equivalence to (DP) says that it suffices to consider only pairwise deviations.
From the core, consider the following two relations:
Negate the second and add to the first:
The term on the right hand side is the buyer's marginal product. Hence, no point in the core can give any buyer more than their marginal product. Submodularity implies that there is a point in the core that gives each buyer exactly their marginal product (an old result of Shapley's). What is this point? It is the one that corresponds to the minimal Walrasian prices, i.e., the optimal solution to (DP) that maximizes the total surplus to buyers.
Suppose sellers are non-strategic and we run a Vickrey auction to allocate the goods amongst the buyers. Then, each buyer would walk away with a surplus equal to their marginal product (which is what the Vickrey auction gives buyers to give them the incentive to reveal their valuations). Thus, in this assignment economy, minimal Walrasian prices correspond to the prices that would be charged in a Vickrey auction.
3. Shapley-Shubik (Indivisible)
Now suppose the goods in SS are indivisible. I'm going to show that the results outlined above carry through unchanged. To do that I need to tell you about totally unimodular matrices.
3.1. Total Unimodularity
A matrix is called totally unimodular (TUM) iff the determinant of each square submatrix is $1$, $-1$ or $0$. If a matrix is TUM then so is its transpose. If $A$ and $B$ are TUM, then so is the block-diagonal matrix formed from $A$ and $B$.
Example 1 The following matrix
is TUM. The following is not:
Every proper square submatrix has determinant with absolute value 0 or 1, but the determinant of the entire matrix is 2.
Theorem 1 Let $A$ be a TUM matrix and $b$ an integral vector. Then, every extreme point of $\{x \geq 0 : Ax \leq b\}$ is integral.
Proof: To every extreme point $x$ of the polyhedron there corresponds a non-singular square submatrix $B$ of $A$ such that $x = B^{-1}b$ (padding with zeros as needed). By Cramer's rule we can write $B^{-1} = \mathrm{adj}(B)/\det(B)$, where $\mathrm{adj}(B)$ is the adjoint of $B$. Since $B$ has all integer entries, $\mathrm{adj}(B)$ has all integer entries. Since $A$ is TUM and $B$ is non-singular, it follows that $|\det(B)| = 1$. Hence $B^{-1}$ has all integer entries, and thus $x$ is integral.
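For small matrices one can check total unimodularity directly from the definition, by enumerating every square submatrix. The following brute-force sketch is my own illustration (the two example matrices below merely stand in for the ones displayed above; the second has every proper square submatrix with determinant of absolute value 0 or 1, yet full determinant 2):

```python
from itertools import combinations

import numpy as np

def is_tum(A, tol=1e-9):
    """Brute-force TUM test: every square submatrix must have
    determinant 0, 1 or -1. Exponential time -- toy examples only."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                # d must be close to 0 or to +/-1
                if min(abs(d), abs(abs(d) - 1)) > tol:
                    return False
    return True

tum_example = [[1, 0], [1, 1]]
not_tum = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]  # determinant is 2
print(is_tum(tum_example), is_tum(not_tum))
```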
Paul Seymour gave a complete (polytime) characterization of TUM matrices. The upshot of it is that most TUM matrices are network matrices. A matrix is a network matrix if every entry is $0$, $1$ or $-1$ and each column contains at most two non-zero entries, which must be of opposite sign.
Theorem 2 If $A$ is a network matrix, then $A$ is TUM.
Proof: The proof is by induction on the size of a submatrix. Consider a square submatrix of $A$, call it $C$. If each column of $C$ has exactly two non-zero entries then $\det(C) = 0$. Why? Summing up the rows of $C$ gives zero, meaning that the rows of $C$ are linearly dependent. If there is a column of $C$ that contains exactly one non-zero entry, then expand the determinant of $C$ along the minor associated with this entry. By induction the determinant must have value $0$, $1$ or $-1$. A column with all zeros means that $\det(C) = 0$.
3.2. Back to SS
I'll show that the constraint matrix of the assignment program is TUM. This would mean that there is always an efficient allocation that is integral, i.e., one that assigns each good wholly to a single agent.
Fix a good $j$ and an agent $i$, and consider the column associated with the variable $x_{ij}$. The variable appears with a coefficient of 1 in exactly two rows: one corresponding to agent $i$ and the other corresponding to object $j$. Let $R_1$ consist of all rows corresponding to objects and $R_2$ of all rows corresponding to agents. Multiply all the rows in $R_1$ by $-1$. We now have a constraint matrix in which each column contains exactly two non-zero entries of opposite sign.
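The network-matrix structure is easy to see in code. A small sketch (the helper `assignment_matrix` and the dimensions are my own, not from the post):

```python
import numpy as np

def assignment_matrix(m, n):
    """Constraint matrix of the assignment LP with m agents and n
    goods: one row per agent, one row per good, one column per
    variable x_ij. Each column has a 1 in agent i's row and a 1 in
    good j's row."""
    A = np.zeros((m + n, m * n), dtype=int)
    for i in range(m):
        for j in range(n):
            col = i * n + j
            A[i, col] = 1        # agent i's row
            A[m + j, col] = 1    # good j's row
    return A

A = assignment_matrix(3, 2)
A[3:] *= -1  # negate the rows corresponding to goods
# every column now has exactly one +1 and one -1: a network matrix
assert all((A[:, c] == 1).sum() == 1 and (A[:, c] == -1).sum() == 1
           for c in range(A.shape[1]))
print("network matrix check passed")
```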
3.3. Ascending Auctions
The equivalence between minimal Walrasian prices and Vickrey prices in SS means that the Vickrey outcome can be obtained from a tâtonnement process that terminates in the minimal Walrasian prices. Two have been proposed in the literature: one by Crawford and Knoer, the other by Demange, Gale and Sotomayor (see also Leonard). Both are variations of dual ascent algorithms for solving the LP formulation of the assignment model.
I'll outline Demange, Gale and Sotomayor (DGS). Assume that the valuations are integral. In each iteration we have a vector of prices, initially all zero. In each iteration buyers report their demand correspondence, that is, the set of goods that maximize their surplus at current prices. Let $D_i$ be the demand correspondence of buyer $i$. Consider now the bipartite graph between buyers and goods defined as follows: an edge between buyer $i$ and good $j$ exists iff $j \in D_i$. If this graph contains a perfect matching, stop: at current prices demand equals supply. A perfect matching means that there is a way to give each buyer $i$ a good in $D_i$ such that no good is allocated to more than one buyer. If a perfect matching does not exist, then by the Hall marriage theorem (I will state and prove this later), there must exist a set $S$ of goods such that the number of buyers demanding only goods in $S$ exceeds $|S|$. The set $S$ is called overdemanded. Identify a minimally overdemanded set and increase the price of each good in this set by 1 unit. Repeat.
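The iteration can be sketched in code. This is an illustrative toy, not the authors' exact procedure: it raises prices on an overdemanded set obtained from a failed augmenting-path search (the goods reachable from an unmatched buyer), whereas DGS prescribe a minimally overdemanded set.

```python
def dgs_auction(v):
    """Sketch of a DGS-style ascending auction. v[i][j] is buyer i's
    integer value for good j. Prices start at zero; while the demand
    sets admit no perfect matching, the prices of an overdemanded
    set of goods are raised by one unit."""
    m, n = len(v), len(v[0])
    p = [0] * n
    while True:
        # demand correspondence of each buyer at current prices
        # (empty if every good gives negative surplus)
        demand = []
        for i in range(m):
            best = max(v[i][j] - p[j] for j in range(n))
            demand.append([j for j in range(n) if v[i][j] - p[j] == best]
                          if best >= 0 else [])
        owner = [None] * n  # owner[j] = buyer currently matched to good j

        def augment(i, seen):
            for j in demand[i]:
                if j not in seen:
                    seen.add(j)
                    if owner[j] is None or augment(owner[j], seen):
                        owner[j] = i
                        return True
            return False

        overdemanded = None
        for i in range(m):
            seen = set()
            if demand[i] and not augment(i, seen):
                # goods reachable from the unmatched buyer form an
                # overdemanded set
                overdemanded = seen
                break
        if overdemanded is None:  # demand equals supply: stop
            assignment = {owner[j]: j for j in range(n) if owner[j] is not None}
            return p, assignment
        for j in overdemanded:
            p[j] += 1

# two buyers who both strictly prefer good 0 at zero prices
print(dgs_auction([[2, 1], [2, 1]]))
```

In the example, good 0 is overdemanded at zero prices, so its price rises to 1; at prices $(1, 0)$ both buyers are indifferent and a perfect matching exists, giving the minimal Walrasian prices.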
4. Bilateral Trade
Let's put this machinery to work on bilateral trade. This is the case where each buyer wants at most one unit of a homogeneous good and each seller has one unit to sell. Suppose $b_i$ is the private information of buyer $i$ and $s_j$ is the private information of seller $j$.
- The core of the associated tu game is non-empty.
- The point in the core that maximizes the total surplus to buyers makes it a dominant strategy for them to reveal their private information.
- The point in the core that maximizes the total profit to the sellers makes it a dominant strategy for them to reveal their private information.
- In general, there is no point in the core that is jointly `best’ for buyers and sellers. Hence, there is no way to obtain an outcome in the core for which it is a dominant strategy for both sides to reveal their private information.
We have the outlines of an archetypal matching paper. First, show that a stable or core outcome exists. Second, show that when only one side is strategic, one can implement the core outcome in an incentive compatible way. Third, observe that it is impossible to implement the core outcome when both sides are strategic. Fourth, show that as the economy gets large, one can implement something asymptotically close to the core in an incentive compatible way (or implement a core outcome in an asymptotically incentive compatible way). So, let's do the fourth.
The idea is due to Preston McAfee. Order the buyers so that $b_1 \geq b_2 \geq \cdots$. Order the sellers so that $s_1 \leq s_2 \leq \cdots$. The efficient allocation can be computed by determining the largest $k$ such that $b_k \geq s_k$; buyer $i$ is matched with seller $i$ for all $i \leq k$. McAfee suggests stopping at $k-1$: buyers $1$ through $k-1$ trade, each charged a price of $b_k$, and sellers $1$ through $k-1$ are each paid $s_k$. What each trading agent pays or receives does not depend on their own report, so reporting their private information is a dominant strategy. Further, the mechanism runs a slight surplus and is individually rational. However, it is not efficient: the efficiency loss is the surplus of the dropped $k$-th trade. Assuming the $b_i$'s and $s_j$'s are independent draws from a distribution with bounded support, the percentage loss in efficiency approaches zero as the number of traders grows.
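The trade-reduction rule just described can be sketched as follows. This is a simplified version of McAfee's mechanism as outlined above; the function name and the example numbers are mine.

```python
def trade_reduction(b, s):
    """Trade-reduction double auction. b: buyer values, s: seller
    costs. Finds the largest k with b_(k) >= s_(k) (sorting buyers
    descending, sellers ascending), drops the k-th trade, and uses
    b_(k) as the buyers' price and s_(k) as the sellers' price, so
    no trading agent's payment depends on their own report.
    Returns (number of trades, buyer price, seller price)."""
    b = sorted(b, reverse=True)
    s = sorted(s)
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None  # no mutually beneficial trade
    return k - 1, b[k - 1], s[k - 1]

# efficient outcome has 3 trades; the mechanism executes 2,
# buyers pay 6, sellers receive 5, budget surplus 2*(6-5) = 2
print(trade_reduction([10, 8, 6, 2], [1, 3, 5, 9]))
```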
Alternatively, one can implement the Vickrey outcome. In this case each trading buyer pays $\max(b_{k+1}, s_k)$ and each trading seller receives $\min(b_k, s_{k+1})$. The deficit of the Vickrey auction will grow with the number of traders. One can then use properties of order statistics and the Kolmogorov-Smirnov bounds to show that the deficit, as a fraction of total surplus, goes to zero as the market grows.
5. More on TUM
Recall the constraints for the assignment model:
An integer solution, $x$, to these constraints defines a permutation matrix whose $(i,j)$ entry is $x_{ij}$. A fractional solution to these constraints is a doubly stochastic matrix. TUM of the constraint matrix means that every doubly stochastic matrix is a convex combination of permutation matrices. This is known as the Birkhoff-von Neumann theorem. Alternatively, the assignment constraints define the convex hull of the permutation matrices.
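A sketch of the Birkhoff-von Neumann decomposition (my own illustration): repeatedly find a perfect matching on the positive entries (one exists by the Hall marriage theorem, since row and column sums stay equal) and peel off the largest multiple of that permutation.

```python
import numpy as np

def birkhoff_decompose(D, tol=1e-9):
    """Write a doubly stochastic matrix D as a convex combination of
    permutation matrices. Returns a list of (weight, permutation)
    pairs, where permutation[r] is the column assigned to row r."""
    D = np.array(D, dtype=float)
    n = D.shape[0]
    terms = []
    while D.max() > tol:
        # perfect matching on the support, via augmenting paths
        match = [-1] * n  # column -> row

        def augment(r, seen):
            for c in range(n):
                if D[r, c] > tol and c not in seen:
                    seen.add(c)
                    if match[c] == -1 or augment(match[c], seen):
                        match[c] = r
                        return True
            return False

        for r in range(n):
            augment(r, set())
        perm = [0] * n  # row -> column
        for c, r in enumerate(match):
            perm[r] = c
        # peel off the permutation; at least one entry drops to zero
        theta = min(D[r, perm[r]] for r in range(n))
        terms.append((theta, tuple(perm)))
        for r in range(n):
            D[r, perm[r]] -= theta
    return terms

D = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
for weight, perm in birkhoff_decompose(D):
    print(weight, perm)
```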
For our purposes, TUM means that every fractional solution to the assignment constraints can be interpreted as a lottery over integer assignments. Thus, the assignment constraints give us a succinct description of the set of all lotteries over integer assignments. This will be useful in applications when we search for randomized allocation rules satisfying other properties.
Network matrices are an important class of TUM matrices. There are matrices that don’t appear to be network matrices but after certain elementary row operations can be converted into network matrices. One such class is the set of 0-1 matrices with the consecutive 1’s property (C1). A 0-1 matrix has the consecutive 1’s property if there is a permutation of its rows such that the non-zero entries in each column are consecutive. The following matrix is C1:
C1 matrices arise in a variety of applications (interval graphs, cyclic staffing). Fulkerson and Gross were the first to give a polytime algorithm for recognizing C1 matrices. The following is due to Veinott and Wagner (1962).
Theorem 3 If $A$ is a C1 matrix, then $A$ is TUM.
Proof: Suppose the rows of $A$ have already been permuted so that the columns have the consecutive 1's property, and that $A$ is $m \times n$. Define $B$ to be the following $m \times m$ matrix:
- For all $i$, $B_{ii} = 1$.
- For $i \geq 2$, $B_{i,i-1} = -1$.
- For all other $i$ and $j$, $B_{ij} = 0$.
Here is an example of $B$:
To complete the proof it suffices to verify that $B$ is TUM (it is triangular with unit diagonal) and that $BA$ is a network matrix. Note that pre-multiplying $A$ by $B$ corresponds to negating row $i-1$ of $A$ and adding it to row $i$, for each $i$.
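The row-differencing trick can be illustrated in code. This is a sketch under my own conventions (the example matrix is mine, and subtracting each row from the next, with an appended zero row, is one way to realize the construction): each interval of 1's in a column contributes a single $+1$ where it starts and a single $-1$ just after it ends, which is exactly the network-matrix pattern.

```python
import numpy as np

def c1_to_network(A):
    """Given a 0-1 matrix A whose columns have consecutive 1's,
    return the row-difference matrix: append a zero row, then
    subtract each row from the one below it. Every column of the
    result has exactly one +1 and one -1."""
    A = np.asarray(A)
    n = A.shape[1]
    padded = np.vstack([A, np.zeros((1, n), dtype=int)])
    shifted = np.vstack([np.zeros((1, n), dtype=int), A])
    return padded - shifted

A = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 1, 1],
              [0, 1, 0]])
N = c1_to_network(A)
print(N)  # each column of N has exactly one +1 and one -1
```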
I turn now to a class of C1 matrices that will be useful later.
Let $E$ be a ground set of elements and $\mathcal{F}$ a collection of subsets of $E$. $\mathcal{F}$ is called laminar if for any $A, B \in \mathcal{F}$ either $A \subseteq B$, $B \subseteq A$, or $A \cap B = \emptyset$. If one drops the possibility that $A \cap B = \emptyset$, so that any two members are nested, then $\mathcal{F}$ is called a chain.
Given a collection of subsets, we can represent it using a 0-1 matrix as follows: a column for each member of $\mathcal{F}$ and a row for each element of $E$. Set the $(i,j)$ entry to 1 if the set corresponding to column $j$ contains element $i$. It's easy to see that if $\mathcal{F}$ is laminar, then the matrix is C1. Call a 0-1 matrix that arises in this way a laminar matrix.
In fact, a laminar matrix is `equivalent' to a 0-1 matrix with at most one non-zero entry in each row. Here is how. Pick any two columns, and let $A$ and $B$ be the sets they correspond to in $\mathcal{F}$. Suppose $A \subset B$. Negate the column for $A$ and add it to the column for $B$. Note that this can at most flip the sign of the determinant of any square submatrix. Repeat. The result is a 0-1 matrix whose columns are disjoint, i.e., with at most one non-zero entry in each row.
Part One: Least Unique-Bid Auctions
In recent years a new auction method has become widely used on the internet. This method is called the Least Unique-Bid Auction (LUBA), and it goes as follows. An object is offered for sale, say an iPhone or a car. The bids are made in cents. Each participant can make as many bids as he or she wishes, paying 50 cents for each bid. So if I bid on the numbers 1, 2, 5 and 12 (all in units of cents), I pay 2 dollars for participation. Once the time runs out and all bidders have placed their bids, one counts the number of bidders who bid on each number. The winning bid is the minimal number that was bid by a single bidder. This bidder is the winner, and he pays his bid (in cents). So, if Anne bid on the numbers {1,2,3,6,12}, Bill bid on the numbers {1,2,3,4,5,6,7}, and Catherine bid on the numbers {3,4,5,7,13,14,15,16}, then the number 12 wins, and Anne gets the object. The auctioneer gets 2.5 dollars from Anne for her 5 bids plus 12 cents for her winning bid, 3.5 dollars from Bill, and 4 dollars from Catherine.
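The static winner-determination rule is simple to state in code. A minimal sketch (function name mine), using the example from the text:

```python
from collections import Counter

def luba_winner(bids):
    """Least Unique-Bid Auction winner: the smallest amount (in
    cents) bid by exactly one participant. bids maps each bidder's
    name to the set of numbers they bid on. Returns (winning bid,
    winner), or None if no amount was bid exactly once."""
    count = Counter()
    who = {}
    for name, numbers in bids.items():
        for x in numbers:
            count[x] += 1
            who[x] = name  # last bidder; only used when count is 1
    unique = [x for x, c in count.items() if c == 1]
    if not unique:
        return None
    w = min(unique)
    return w, who[w]

# the example from the text: 12 is the least unique bid, Anne wins
bids = {"Anne": {1, 2, 3, 6, 12},
        "Bill": {1, 2, 3, 4, 5, 6, 7},
        "Catherine": {3, 4, 5, 7, 13, 14, 15, 16}}
print(luba_winner(bids))  # → (12, 'Anne')
```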
In practice the auction is dynamic and not static; for each bid that you place, you know at each time period whether (a) it is currently the winner (no one else bid the same number, and there is no lower number that was bid by a single bidder), (b) it is a potential winner (no one else bid the same number, but there is at least one lower number that was bid by a single bidder), or (c) not a potential winner (somebody else also bid on that number).
One must admit that this type of auction is ingenious. The selling price is extremely low, usually less than 2% of the object's market value, so people are drawn to participate. The names of the winners are listed on the site, and there are recurring names, so people conclude that there are profitable strategies. One can actually think of such strategies: for example, randomly choose numbers, and when you find a potential winner, bid on all numbers below it, trying to kill all lower potential winning numbers. So why not participate and make money?
Bottom line: the auctioneer makes a lot of money.
Part Two: LUBA and the court
A couple of months ago a lawyer called me. He wanted to sue a company that operates a certain type of LUBA in Israel in a class action, on the grounds that this is a lottery, a gambling game, and not a game of skill. By law, in Israel only the state can run lotteries. He asked me to provide an expert opinion that LUBA is a lottery. I apologized; I cannot do that. I believe that strategic bidders can make money in LUBA, and therefore, just like blackjack, LUBA is not a lottery. In fact, I wrote a paper with Marco Scarsini and Nicolas Vieille arguing that LUBA is not a lottery. Having said that, I believe that LUBA is worse than a lottery: in a lottery, all participants have the same probability of winning. This is not the case with LUBA: the presence of strategic bidders essentially kills the chances of non-strategic bidders, or partially strategic bidders, of winning the auction.
Part Three: Moral
So LUBA may not be illegal, but it seems that there is something wrong with it. I discussed this issue with Zvika Neeman yesterday. Just like a pyramid scheme or day trading by non-professionals, LUBA is a method of getting money out of people who may not realize the strategic complexity of the situation that they face.
Part Four: Complexity Fraud
Merriam-Webster defines fraud as:
“a : Intentional perversion of truth in order to induce another to part with something of value or to surrender a legal right. b : An act of deceiving or misrepresenting.”
Dictionary.com defines it as:
“Deceit, trickery, sharp practice, or breach of confidence, perpetrated for profit or to gain some unfair or dishonest advantage.”
There are many types of fraud. I argue that there should be one additional type: complexity fraud. Sometimes people are asked to participate in complex interactions, paying some amount for participation in the hope of getting a reward at the end. The rules are all set in advance, so nobody can later argue that he or she did not know them. But most people are not well versed in game theory, and we all have bounded computational capacity. Therefore, when the interaction is complex, people cannot analyze it and rationally decide whether they want to participate. People tend to be optimistic; they over-estimate their ability and smartness. If the strategic analysis of the method were explained to them, and if they were faced with the statistics, they would turn away from it. If this is the case, then hiding the strategic analysis and the complexity of the situation is, in my view, as deceptive as any other fraud.
I am not a lawyer, and I do not know what the court will think of my arguments. I hope that congressmen and parliament members worldwide will look into them, and change the law accordingly.