You are currently browsing the monthly archive for January 2012.
Here, Joe Biden recounting the debate before the Bin Laden operation.
[The President] said, “I have to make this decision; what is your opinion?” He started with the National Security Adviser and the Secretary of State, and he ended with me. Every single person in that room hedged their bet, except Leon Panetta. Leon said GO. Everyone else said 49, 51, this…
It got to me. “Joe, what do you think?” I said, “You know, I didn’t know we had so many economists around the table. We owe the man a direct answer.”
Me: First, is `economist’ a synonym for somebody who can’t give a direct answer? I thought we had a better reputation. Second, the way I see it, 49–51 is actually a direct answer and a bold prediction. It means that if they start making independent attempts to kill Bin Laden, and substantially less or substantially more than half of these attempts succeed, then the president has all the reasons to give these guys the boot.
A mathematical detour today. I will state and prove some basic results in combinatorial optimization that have applications in matching and auction theory. The first is the max-flow-min-cut theorem. I’ll use it to prove Hall’s marriage theorem as well as derive the Border-Matthews characterization of implementable interim allocation rules. It has other applications. Itai Sher, for example, uses it in the analysis of a special case of the Glazer-Rubinstein model of persuasion. Mobius et al use it in their study of capital flows in social networks (the Peruvian village paper). Olszewski and I have a forthcoming paper where we use it to talk about team formation.
Then, it’s on to polymatroids, which can be used to show that the assignment problem retains its integrality property in the presence of additional constraints. These additional constraints arise in, for example, school choice problems. See the paper by Budish, Che, Kojima and Milgrom for a list of examples.
1. The Maximum Flow Problem
Let $G = (V, E)$ be a network with vertex set $V$ and edge set $E$. Assume, for convenience, that edges are `bi-directional’. In $V$ there are two special vertices. One called $s$, the source, and the other $t$, the sink. Each edge $e$ has a capacity $c_e$ that is the maximum flow that can traverse $e$ (in either direction). Assume $c_e$ is integral for all $e \in E$. To avoid trivialities assume there is at least one path in $G$ from $s$ to $t$.
The maximum flow problem is to find the maximum amount of flow that can be sent from $s$, through the network, to $t$. That there is only one source-sink pair is not a restriction. An instance with multiple sources and sinks can be converted into one with a single pair through the introduction of a dummy super source and super sink.
A cut in $G$ is a set of edges whose removal disconnects $s$ from $t$. Equivalently, a cut is a set of edges that intersects every path from $s$ to $t$ in $G$. A cut can also be defined in terms of a partition of $V$ into two sets $S$ and $T$ such that $s \in S$ and $t \in T$. The cut would be the set of edges with one endpoint in $S$ and the other in $T$. The capacity of a cut is the sum of the capacities of the edges in the cut.
The maximum flow is clearly bounded above by the capacity of any cut. Indeed, the max flow must be bounded above by the minimum capacity cut. Remarkably, this upper bound is tight, i.e., the max flow equals the min cut. Note that the min cut must be integer valued, but it is not obvious that the maximum flow should be integer valued.
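To make the flow computation concrete, here is a minimal Python sketch of the augmenting-path (Edmonds-Karp) method; the instance and its labels are invented for illustration:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting path
    in the residual graph. `capacity` is a dict of dicts of integers."""
    residual = {u: {} for u in capacity}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse residual edge
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:       # no augmenting path left: flow is maximal
            return flow
        # Recover the path, push the bottleneck amount along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# A made-up instance: by inspection the cut of the two edges into t has capacity 5.
capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
val = max_flow(capacity, 's', 't')
```

On this instance the maximum flow matches the capacity of the cut separating $t$ from the rest, as the theorem below guarantees.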
We can formulate the problem of finding a maximum flow as a linear program. To do this, let $x_{ij}$ denote the amount of flow on edge $(i,j)$ from $i$ to $j$, and let $F$ denote the total flow. Then, the maximum flow problem can be expressed as follows:
$$\max F$$
subject to
$$\sum_{(s,j) \in E} x_{sj} = F$$
$$\sum_{(j,i) \in E} x_{ji} - \sum_{(i,j) \in E} x_{ij} = 0 \quad \forall i \neq s, t$$
$$\sum_{(j,t) \in E} x_{jt} = F$$
$$0 \leq x_{ij} \leq c_{ij} \quad \forall (i,j) \in E$$
The first constraint says that the flow, $F$, out of $s$ must use edges of the form $(s,j)$. The second constraint imposes flow conservation at each vertex, i.e., flow in must equal flow out. The third constraint ensures that the total flow entering $t$ must use edges of the form $(j,t)$.
The constraint matrix is a network matrix and so TUM. You can see this by noting that each variable $x_{ij}$ appears in at most two constraints with opposite sign: once as an `incoming’ edge and once as an `outgoing’ edge. The variable $F$ also appears exactly twice with opposite sign.
TUM of the constraint matrix means that the flow on each edge as well as the total amount of flow is integer valued.
Theorem 1 The maximum flow is equal to the capacity of the minimum cut.
Proof: Let $F^*$ be the value of the maximum flow. By LP duality:
$$F^* = \min \Big\{ \sum_{(i,j) \in E} c_{ij} \lambda_{ij} : \lambda_{ij} \geq |y_i - y_j| \,\,\forall (i,j) \in E, \,\, y_s - y_t \geq 1 \Big\}.$$
To see where it came from, observe that we have one dual variable $y_i$ for each $i \in V$ and one variable $\lambda_{ij}$ for each $(i,j) \in E$. Recall the bidirectional assumption. So, $(i,j)$ is different from $(j,i)$. Thus one should have two constraints for each pair $i, j$:
$$\lambda_{ij} \geq y_i - y_j \quad \text{and} \quad \lambda_{ji} \geq y_j - y_i.$$
I have simply folded these into a single constraint $\lambda_{ij} \geq |y_i - y_j|$. Notice that at optimality at most one of $\lambda_{ij}$ and $\lambda_{ji}$ can be positive, which is why this folding is legitimate.
Notice also, that we can WLOG fix $y_s = 1$ and $y_t = 0$. This allows us to write the dual as:
$$\min \Big\{ \sum_{(i,j) \in E} c_{ij} \lambda_{ij} : \lambda_{ij} \geq |y_i - y_j| \,\,\forall (i,j) \in E, \,\, y_s = 1, \, y_t = 0 \Big\}.$$
By TUM of the constraint matrix, we know that an optimal solution to this program that is integral exists. An integer solution to this program corresponds to a cut. Let $S = \{i : y_i \geq 1\}$ and $T = \{i : y_i \leq 0\}$. Observe
$$\sum_{(i,j) \in E} c_{ij} \lambda_{ij} \geq \sum_{i \in S, j \in T} c_{ij},$$
which is the capacity of the cut associated with the partition. It is easy to see that every cut corresponds to a feasible integer solution to the program.
I can’t resist offering a probabilistic proof due to Bertsimas, Teo and myself. Let $(y, \lambda)$ be an optimal solution to the dual program. Suppose, for a contradiction, that it is fractional. Consider the following randomization scheme:
- Position node $i$ in $[0,1]$ according to the value of $y_i$.
- Generate a single random variable $U$ uniformly distributed in $(0,1]$. Round all nodes with $y_i \geq U$ up to 1 and all nodes with $y_i < U$ down to 0.
- Set $\lambda_{ij} = 1$ if exactly one of $y_i$, $y_j$ is rounded to 1, and $\lambda_{ij} = 0$ otherwise.
The randomization scheme clearly returns a feasible cut. More importantly, the randomization scheme defines a probability distribution over the set of cuts. Now let’s compute the expected capacity of the cut generated by the randomization scheme:
$$\sum_{(i,j) \in E} c_{ij} \Pr[\min(y_i, y_j) < U \leq \max(y_i, y_j)] = \sum_{(i,j) \in E} c_{ij} |y_i - y_j| = \sum_{(i,j) \in E} c_{ij} \lambda_{ij},$$
where the last equality holds because at optimality $\lambda_{ij} = |y_i - y_j|$. Thus, the expected capacity of our random cut is the same as the optimal objective function value. Because the randomization scheme is a distribution over cuts, it follows that there must be a cut with capacity equal to $\sum_{(i,j) \in E} c_{ij}\lambda_{ij}$.
1.1. Hall Marriage Theorem
Recall that in the DGS auction from the previous session, an important step relied on the Hall marriage theorem. Here, I will use max-flow-min-cut to prove it.
Let $G = (B \cup S, E)$ be a bipartite graph with vertex sets $B$ and $S$ and edge set $E$. Suppose $|B| = |S|$. Note edges run between $B$ and $S$ only. A perfect matching is a subset $M$ of edges such that each vertex is incident to exactly one edge in $M$. A matching is a subset $M$ of edges such that each vertex is incident to at most one edge in $M$.
For any $A \subseteq B$, let $N(A)$ be the set of neighbors of $A$. That is, $N(A) = \{j \in S : (i,j) \in E \text{ for some } i \in A\}$.
Theorem 2 A bipartite graph $G = (B \cup S, E)$ contains a perfect matching iff. $|N(A)| \geq |A|$ for all $A \subseteq B$.
Proof: The condition is clearly necessary. So, suppose $|N(A)| \geq |A|$ for all $A \subseteq B$ and, for a contradiction, assume that the graph does not contain a perfect matching.
Introduce a dummy source vertex $s$ with an edge directed to each vertex in $B$. To each edge $(s, i)$ with $i \in B$, assign a capacity of 1 unit. Introduce a dummy sink $t$ with an edge directed from each vertex of $S$ to $t$. To each edge of the form $(j, t)$ assign a capacity of 1 unit. To each edge in $E$ assign a capacity of $\infty$.
Consider now a maximum flow from $s$ to $t$. By Theorem 1 we can assume the flow is integral, i.e., the flow on each edge is an integer amount. Second, it is easy to see that no edge will carry more than 1 unit of flow. Third, the set of edges in $E$ that carry a non-zero amount of flow must be a matching. Why? No more than one unit of flow can leave any $i \in B$ and no more than one unit of flow can enter any $j \in S$. Fourth, each matching corresponds to a feasible flow. Thus, the value of the maximum flow gives us the size of a maximum cardinality matching. Because we assumed no perfect matching, the value of the maximum flow must be less than $|B|$.
Now a maximum flow must equal the value of a minimum cut. Clearly, no minimum cut can contain edges in $E$ because they have infinite capacity. So, the minimum cut must consist of edges of the form $(s,i)$ and $(j,t)$. Let $A \subseteq B$ and $C \subseteq S$ be the sets of vertices incident to the edges in a minimum cut. The capacity of this cut is $|A| + |C| < |B|$. Because they form a cut, removing them from the graph would delete all paths from $s$ to $t$. Do that.
Consider any $i \in B \setminus A$. There cannot be a $j \in S \setminus C$ such that $(i,j) \in E$. If not, there would be a path $s \rightarrow i \rightarrow j \rightarrow t$. That means every neighbor of a vertex in $B \setminus A$ resides in $C$. Hence, by assumption:
$$|B| - |A| = |B \setminus A| \leq |N(B \setminus A)| \leq |C| < |B| - |A|,$$
a contradiction.
The Hall marriage theorem is easily generalized to something called Gale’s demand theorem. Suppose each vertex $j \in B$ is a demand vertex, demanding $d_j$ units of a homogeneous good. Each vertex $i \in S$ is a supply vertex, supplying $s_i$ units of that same good. Supply can be shipped to demand nodes only along the edges in $E$. Is there a feasible way to match supply with demand? Yes, iff. for all $A \subseteq B$ we have
$$\sum_{j \in A} d_j \leq \sum_{i \in N(A)} s_i.$$
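Gale’s condition is easy to check by brute force on small instances. Here is a sketch in Python; the instance (vertex names, demands, supplies) is made up:

```python
from itertools import chain, combinations

def gale_condition(demands, supplies, edges):
    """Check Gale's condition: for every set A of demand vertices, the total
    demand in A must not exceed the total supply of A's neighbors N(A)."""
    buyers = list(demands)
    subsets = chain.from_iterable(
        combinations(buyers, r) for r in range(1, len(buyers) + 1))
    for A in subsets:
        neighbors = {i for j in A for i in edges[j]}
        if sum(demands[j] for j in A) > sum(supplies[i] for i in neighbors):
            return False
    return True

# Hypothetical instance: demand vertices x, y; supply vertices u, v.
# x can only be served by u, so the binding set is A = {x}.
ok = gale_condition({'x': 2, 'y': 3}, {'u': 2, 'v': 3}, {'x': ['u'], 'y': ['u', 'v']})
bad = gale_condition({'x': 3, 'y': 3}, {'u': 2, 'v': 3}, {'x': ['u'], 'y': ['u', 'v']})
```

Hall’s theorem is the special case in which every demand and every supply equals 1.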
1.2. Border’s Theorem
For simplicity assume two agents; let $T_1$ be the type space of agent 1, $T_2$ the type space of agent 2, and suppose there is a single good. Suppose, also for simplicity, that types are independent draws. Denote by $f_i(t_i)$ the probability of agent $i$ having type $t_i$. Let $a_i(t_1, t_2)$ be the probability that the good is allocated to agent $i$ when agent 1 reports $t_1$ and agent 2 reports $t_2$. Feasibility requires that
$$a_1(t_1, t_2) + a_2(t_1, t_2) \leq 1, \quad a_i(t_1, t_2) \geq 0 \quad \forall (t_1, t_2) \in T_1 \times T_2.$$
$a$ is an allocation rule. Denote by $q_1(t_1)$ the interim allocation probability to agent 1 when she reports $t_1$. Note that
$$q_1(t_1) = \sum_{t_2 \in T_2} f_2(t_2) a_1(t_1, t_2).$$
A similar expression applies for $q_2(t_2)$. Call an interim allocation probability $q$ implementable if there exists an allocation rule $a$ that corresponds to it. Here is the question originally posed by Steve Matthews (who conjectured the answer): characterize the implementable $q$’s. The high level idea first. Roll up all the inequalities relating $q$ to $a$ and feasibility into matrix form:
$$Aa + Bq \leq b, \quad a \geq 0.$$
Denote by $P$ the set of $(a, q)$ that satisfy the system above. The purely mathematical question is this: for which $q$’s does there exist at least one $a$ such that $(a, q) \in P$? In other words, we want to identify the set $Q$:
$$Q = \{q : \exists \, a \geq 0 \text{ such that } (a, q) \in P\}.$$
The set $Q$ is called the reduced form. Geometrically, $Q$ is just the projection of $P$ onto the $q$-space. By the Farkas lemma, the set $Q$ can be characterized in the following way. Let $C = \{\mu \geq 0 : \mu^T A \geq 0\}$. $C$ is a cone. Then,
$$Q = \{q : \mu^T (b - Bq) \geq 0 \,\, \forall \mu \in C\}.$$
It would seem that $Q$ is described by a continuum of inequalities. However, because $C$ is a cone, it suffices to consider only the generators of the cone, and there are finitely many of them. The challenge now reduces to identifying the generators of $C$. When the matrices $A$ and $B$ have suitable structure this is possible, and that is the case here. For convenience assume each agent has two possible types. Here are all the inequalities:
$$a_1(t_1, t_2) + a_2(t_1, t_2) \leq 1 \quad \forall (t_1, t_2) \in T_1 \times T_2,$$
$$\sum_{t_2} f_2(t_2) a_1(t_1, t_2) = q_1(t_1) \quad \forall t_1, \qquad \sum_{t_1} f_1(t_1) a_2(t_1, t_2) = q_2(t_2) \quad \forall t_2.$$
Now some appropriate scaling: multiply the feasibility constraint for profile $(t_1, t_2)$ by $f_1(t_1) f_2(t_2)$ and the interim constraints for $t_i$ by $f_i(t_i)$:
$$f_1(t_1) f_2(t_2) a_1(t_1, t_2) + f_1(t_1) f_2(t_2) a_2(t_1, t_2) \leq f_1(t_1) f_2(t_2),$$
$$\sum_{t_2} f_1(t_1) f_2(t_2) a_1(t_1, t_2) = f_1(t_1) q_1(t_1), \qquad \sum_{t_1} f_1(t_1) f_2(t_2) a_2(t_1, t_2) = f_2(t_2) q_2(t_2).$$
Now a change of variables to make things nice and neat: $x_i(t_1, t_2) = f_1(t_1) f_2(t_2) a_i(t_1, t_2)$.
This system corresponds to the flow balance conditions for a flow problem on a bipartite graph. For each profile of types $(t_1, t_2)$ introduce a supply vertex with supply $f_1(t_1) f_2(t_2)$. For each agent-type pair $(i, t_i)$ introduce a demand vertex with demand $f_i(t_i) q_i(t_i)$. From each supply vertex of the form $(t_1, t_2)$, introduce two edges. One directed from $(t_1, t_2)$ to demand vertex $(1, t_1)$; the flow on this edge is $x_1(t_1, t_2)$. The other edge directed into demand vertex $(2, t_2)$; the flow on this edge is $x_2(t_1, t_2)$.
The first four inequalities say that the flow out of a supply vertex cannot exceed its available supply. The second four ensure that the flow into a demand vertex matches the demand.
Using this flow interpretation we can use Gale’s generalization of the Hall marriage theorem to determine when a feasible flow exists. Existence of a feasible flow means that $q$ is implementable. Let $S_1 \subseteq T_1$ and $S_2 \subseteq T_2$. Then, for all such $S_1$ and $S_2$,
$$\sum_{t_1 \in S_1} f_1(t_1) q_1(t_1) + \sum_{t_2 \in S_2} f_2(t_2) q_2(t_2) \leq 1 - \Big[\sum_{t_1 \notin S_1} f_1(t_1)\Big] \Big[\sum_{t_2 \notin S_2} f_2(t_2)\Big]$$
is necessary and sufficient for $q$ to be implementable. The left hand side of this inequality is the expected quantity of the good allocated to agents when their types come from $S_1$ and $S_2$. The right hand side is the probability that at least one agent has a type in the relevant set. In fact we can rewrite this inequality to read
$$\sum_{t_1 \in S_1} f_1(t_1) q_1(t_1) + \sum_{t_2 \in S_2} f_2(t_2) q_2(t_2) \leq 1 - F_1(S_1^c) F_2(S_2^c),$$
where $F_i(S) = \sum_{t_i \in S} f_i(t_i)$. Here, $S_1^c$ and $S_2^c$ are the complements of $S_1$ and $S_2$.
As you will see later, one can generalize this analysis to cases where the allocation rule $a$ must satisfy additional constraints. For details see Che, Kim and Mierendorff as well as Federgruen and Groenvelt.
2. Polymatroids
Let $E$ be a ground set. A real valued function $f$ defined on subsets of $E$ is
- non-decreasing if $f(S) \leq f(T)$ whenever $S \subseteq T$, and
- submodular if $f(S) + f(T) \geq f(S \cup T) + f(S \cap T)$ for all $S, T \subseteq E$,
- supermodular if $-f$ is submodular.
Let $f$ be a submodular function on $E$. The polytope:
$$P(f) = \Big\{x \geq 0 : \sum_{e \in S} x_e \leq f(S) \,\, \forall S \subseteq E\Big\}$$
is the polymatroid associated with $f$. Notice that $P(f) \neq \emptyset$ iff $f \geq 0$. Polymatroids arise in a number of economic settings. One that we have just discussed are the Border inequalities. Suppose a single object to allocate among $n$ bidders whose types are independent draws from the same distribution (with density $f$). Restrict attention to allocation rules that do not depend on the identity of the agent. Let $q(t)$ be the interim allocation probability if an agent reports type $t$. Then, $q$ is implementable iff.
$$n \sum_{t \in S} f(t) q(t) \leq 1 - \Big[1 - \sum_{t \in S} f(t)\Big]^n \quad \forall S \subseteq T.$$
Call the right hand side of the above $g(S)$ and consider the change of variables $y_t = n f(t) q(t)$:
$$\sum_{t \in S} y_t \leq g(S) \quad \forall S \subseteq T.$$
Now $g$ is a monotone submodular function with $g(\emptyset) = 0$. So, the feasible region described by the Border inequalities is a polymatroid. The theorem below tells us that every extremal interim allocation probability can be implemented as a priority rule. That is, types are assigned priorities and in any profile the good is allocated to the agent with the highest priority. This connection to priority rules is useful for analyzing optimal auctions in dynamic settings (see my survey paper on dynamic mechanism design) as well as competition between auctions (see Pai’s jmp).
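One can verify the monotonicity and submodularity of $g$ numerically. A small Python check, with a made-up symmetric example (3 bidders, two equally likely types):

```python
from itertools import combinations

def g(S, f, n):
    """g(S) = 1 - (1 - F(S))^n: the probability that at least one of n
    i.i.d. draws from the type density f lands in the set S."""
    F = sum(f[t] for t in S)
    return 1 - (1 - F) ** n

# Hypothetical example: n = 3 symmetric bidders, two equally likely types.
f = {'lo': 0.5, 'hi': 0.5}
n = 3
types = list(f)
subsets = [frozenset(c) for r in range(len(types) + 1)
           for c in combinations(types, r)]
# Enumerate all pairs of sets and test the two defining properties.
monotone = all(g(S, f, n) <= g(T, f, n) + 1e-12
               for S in subsets for T in subsets if S <= T)
submodular = all(g(S, f, n) + g(T, f, n) >= g(S | T, f, n) + g(S & T, f, n) - 1e-12
                 for S in subsets for T in subsets)
```

The reason it works in general: $g$ is a concave increasing function of the modular quantity $F(S) = \sum_{t \in S} f(t)$, and such compositions are submodular.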
Theorem 3 Let $f$ be a non-decreasing, integer valued, submodular function on $E$ with $f(\emptyset) = 0$. Then, the polymatroid $P(f)$ has all integral extreme points.
Proof: Choose a weight vector $w$ and order the elements of $E = \{e_1, \ldots, e_m\}$ by decreasing weight:
$$w_{e_1} \geq w_{e_2} \geq \cdots \geq w_{e_k} > 0 \geq w_{e_{k+1}} \geq \cdots \geq w_{e_m}.$$
Let $S^0 = \emptyset$ and $S^j = \{e_1, \ldots, e_j\}$ for all $j \geq 1$. Consider the vector $x^*$ obtained by the following greedy procedure:
- $x^*_{e_j} = f(S^j) - f(S^{j-1})$ for $j \leq k$
- $x^*_{e_j} = 0$ for $j > k$.
I show that $x^*$ is an optimal solution to $\max \{wx : x \in P(f)\}$. Note $x^*$ is integral and non-negative because $f$ is integer valued and non-decreasing. To verify that $x^* \in P(f)$ observe, for any $S \subseteq E$:
$$\sum_{e_j \in S} x^*_{e_j} = \sum_{e_j \in S,\, j \leq k} [f(S^j) - f(S^{j-1})] \leq \sum_{e_j \in S,\, j \leq k} [f(S^j \cap S) - f(S^{j-1} \cap S)] \leq f(S),$$
where the first inequality is by submodularity and the second follows by telescoping and monotonicity.
The dual to our optimization problem is:
$$\min \sum_{S \subseteq E} y_S f(S)$$
subject to
$$\sum_{S \ni e} y_S \geq w_e \quad \forall e \in E, \qquad y_S \geq 0 \quad \forall S \subseteq E.$$
To show that $x^*$ is an optimal primal solution we identify a dual feasible solution with an objective function value of $wx^*$. Set $y_{S^j} = w_{e_j} - w_{e_{j+1}}$ for $j < k$. Set $y_{S^k} = w_{e_k}$ and $y_S = 0$ for all other $S$.
Notice that $y_S \geq 0$ for all $S$ and $y$ is feasible in the dual because:
$$\sum_{S \ni e_j} y_S = \sum_{r=j}^{k} y_{S^r} = w_{e_j}$$
for all $j \leq k$ and
$$\sum_{S \ni e_j} y_S = 0 \geq w_{e_j}$$
if $j > k$.
The dual objective function value is:
$$\sum_{j < k} (w_{e_j} - w_{e_{j+1}}) f(S^j) + w_{e_k} f(S^k) = \sum_{j \leq k} w_{e_j} [f(S^j) - f(S^{j-1})] = wx^*.$$
Now suppose $P(f)$ has a fractional extreme point. Choose $w$ to be a weight vector that is uniquely optimized at that point. However, as shown above, every weight vector is optimized by an integral point in $P(f)$, a contradiction.
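The greedy procedure in the proof is short enough to code directly. A Python sketch, with a made-up rank-style function $f$:

```python
def greedy_polymatroid(weights, f):
    """Maximize w.x over the polymatroid P(f): sort elements by decreasing
    weight and give each element with positive weight its marginal value
    f(S^j) - f(S^{j-1}); elements with non-positive weight get 0."""
    order = sorted(weights, key=weights.get, reverse=True)
    x, prefix = {}, frozenset()
    for e in order:
        if weights[e] <= 0:
            x[e] = 0
            continue
        x[e] = f(prefix | {e}) - f(prefix)
        prefix = prefix | {e}
    return x

# Hypothetical f: non-decreasing, submodular, f(empty) = 0.
f = lambda S: min(len(S), 2)
x = greedy_polymatroid({'a': 3, 'b': 2, 'c': 1}, f)
value = 3 * x['a'] + 2 * x['b'] + 1 * x['c']
```

Here the greedy order is a, b, c; the third element gets marginal value $f(\{a,b,c\}) - f(\{a,b\}) = 0$, so the optimum loads only the first two coordinates.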
If we drop the requirement that $f$ be non-decreasing then $\max\{wx : x \in P(f)\}$ can still be solved by using the greedy algorithm because
$$P(f) = P(\bar{f}), \quad \text{where} \quad \bar{f}(S) = \min_{T \supseteq S} f(T).$$
Here $\bar{f}$ is submodular and non-decreasing. Now I describe a non-constructive proof of Theorem 3 which uses an `uncrossing’ trick that is good to know. Let $y$ be an optimal solution to the dual of the polymatroid optimization problem and let $\mathcal{T} = \{S : y_S > 0\}$. Then, we can assume that $\mathcal{T}$ is laminar. If true, this means that the optimal basis in the dual is TUM. But this is also an optimal basis for the primal. Thus, there is an optimal primal solution that is integral. Ok, suppose $\mathcal{T}$ is not laminar. Then there exist $S, T \in \mathcal{T}$ such that $S \cap T \neq \emptyset$ and neither contains the other. Construct a new dual solution as follows: Increase $y_{S \cup T}$ and $y_{S \cap T}$ by $\epsilon > 0$ each. Decrease $y_S$, $y_T$ by $\epsilon$ each. This new dual solution is feasible. The change in objective function value is
$$\epsilon [f(S \cup T) + f(S \cap T) - f(S) - f(T)],$$
which cannot exceed zero by submodularity. This is the uncrossing step. Repeated applications produce an optimal dual solution whose support is laminar.
The following remarkable result on polymatroid intersection is due to Edmonds.
Theorem 4 Let $f$ and $g$ be non-decreasing, integer valued, submodular functions on $E$ with $f(\emptyset) = g(\emptyset) = 0$. Then $P(f) \cap P(g)$ is integral.
Proof: Pick an arbitrary integral $w$ and consider $\max\{wx : x \in P(f) \cap P(g)\}$. The dual to this linear program is
$$\min \sum_{S \subseteq E} y_S f(S) + \sum_{S \subseteq E} z_S g(S)$$
subject to
$$\sum_{S \ni e} (y_S + z_S) \geq w_e \quad \forall e \in E, \qquad y, z \geq 0.$$
Let $(y, z)$ be an optimal dual solution. Let $\mathcal{F} = \{S : y_S > 0\}$ and $\mathcal{G} = \{S : z_S > 0\}$. By uncrossing we can assume that each of these is laminar. Recall from the first session that the constraint matrix associated with $\mathcal{F}$ can be arranged so that it has exactly one non-zero entry in each row. Similarly with $\mathcal{G}$. Hence, the constraint matrix formed from the concatenation of $\mathcal{F}$ and $\mathcal{G}$ (which contains the optimal basis) is TUM.
From the proof we get another result for free. Let $\mathcal{F}$ and $\mathcal{G}$ be two laminar collections on the same ground set $E$. Consider the following polyhedron:
$$\Big\{x \geq 0 : \sum_{e \in S} x_e \leq b_S \,\, \forall S \in \mathcal{F} \cup \mathcal{G}\Big\}.$$
As long as the $b_S$’s are integral, this polyhedron is integral.
How is this useful? Return to the SS model. For any set of edges $K$ (of the form $(i,j)$ with $i \in B$ and $j \in S$) define $f(K)$ to be the number of vertices in $B$ that the edges in $K$ are incident to. Similarly, define $g(K)$ to be the number of vertices in $S$ that edges in $K$ are incident to. Note $f$ and $g$ are submodular. Here is an equivalent formulation of the problem of finding an efficient assignment:
$$\max \sum_{(i,j)} v_{ij} x_{ij}$$
subject to
$$\sum_{e \in K} x_e \leq f(K) \,\, \forall K, \qquad \sum_{e \in K} x_e \leq g(K) \,\, \forall K, \qquad x \geq 0.$$
Why is this useful? Notice, by choosing $f$ and $g$ differently we can model more constraints than just that the number of edges incident to a vertex is at most 1.
I bit the bullet and started a (reading) course on market design. The plan is to begin with some lectures covering the usual material, followed by readings. The first session was on the Shapley-Shubik (SS) assignment model. A summary can be obtained by clicking on mktdesign1
The readings cover the following topics:
1) Credit Default Swaps
2) Bankruptcy
Aghion, Philippe, Oliver Hart, and John Moore, “The Economics of Bankruptcy Reform,” Journal of Law, Economics, & Organization, Vol. 8, No. 3 (Oct., 1992), pp. 523-546
Bebchuk, Lucian, “A New Approach to Corporate Reorganizations,” Harvard Law Review, 1988
Stulz and Johnson, “An Analysis of Secured Debt,” Journal of Financial Economics, 1985
3) Healthcare Markets
Ho, Katherine 2009. “Insurer-Provider Networks in the Medical Care Market.” American Economic Review, 99(1): 393–430.
4) IPO’s
5) Affirmative Action
Hickman
Fryer & Loury
Chung
6) Wages, centralized matching, unraveling
Bulow & Levin
Roth
Halaburda
1. Shapley-Shubik Model
There is a set $B$ of buyers and a set $S$ of sellers, each seller selling one unit of a good (which could be divisible or not). Let $v_{ij}$ be the monetary value that buyer $i \in B$ assigns to seller $j$’s object. In other words, we assume quasi-linear preferences. No buyer wants more than one of any good. By adding dummy sellers or buyers we can ensure that $|B| = |S|$.
Depending on the $v_{ij}$’s you can assign a couple of interpretations to the SS model. First, $v_{ij} = u_i - c_j$. Here $u_i$ is the value to buyer $i$ of acquiring a good irrespective of who sells it. In effect the sellers all sell the same good. $c_j$ is the opportunity cost to seller $j$. Under this interpretation, SS is a model of bilateral trade.
Second, $v_{ij}$ is the output generated when worker $i$ is matched with firm $j$. Sometimes one assumes a specific functional form for $v_{ij}$, for example, $v_{ij} = a_i b_j$. Here each agent (firm or worker) has a type and their joint output is a function of their types. Third, each seller is an advertising slot and each buyer an advertiser. Then, $v_{ij} = \alpha_j u_i$, where $\alpha_j$ is the click through rate on slot $j$ and $u_i$ is the value per click to advertiser $i$. Because billions (and if one were Carl Sagan one would repeat the word `billions’) of dollars are transacted this way, it is obligatory to mention this example in a class on market design to give the whole enterprise a certain gravitas.
There is a natural optimization problem one can associate with SS, which is to find an assignment of goods to buyers that maximizes the total value, so that no buyer consumes more than one good and no seller gives up more than one good. In other words, find an efficient allocation. This is called the assignment problem or maximum weight bipartite matching. The problem has a long history going back to Monge (1700’s). Indeed, well before Becker, it was Monge who considered the problem under the assumption that $v_{ij} = a_i b_j$. Since Monge’s time, the literature has split. One branch, via Kantorovich, interprets the problem in terms of coupling of measures and this has blossomed into an active and influential area of mathematics called optimal transport of measure. It has important applications in PDE’s and concentration of measure. There are also some applications in economic theory. I won’t go down that branch. The other, via Koopmans and Hitchcock, sticks to the discrete set up described here. That is the branch I will follow.
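For tiny instances the assignment problem can be solved by sheer enumeration, which is a useful sanity check against the LP formulation below. A Python sketch with made-up values:

```python
from itertools import permutations

def best_assignment(v):
    """Brute-force the assignment problem: v[i][j] is buyer i's value for
    seller j's good; try every one-to-one assignment (small n only)."""
    n = len(v)
    best_value, best_perm = float('-inf'), None
    for perm in permutations(range(n)):
        value = sum(v[i][perm[i]] for i in range(n))
        if value > best_value:
            best_value, best_perm = value, perm
    return best_value, best_perm

# Hypothetical values: rows are buyers, columns are sellers' goods.
v = [[5, 3, 1],
     [4, 4, 2],
     [1, 2, 6]]
total, match = best_assignment(v)
```

Polynomial-time methods (the Hungarian algorithm, or the LP below) of course replace this enumeration for realistic sizes.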
2. Shapley-Shubik (Divisible)
So that the problem of finding an efficient allocation can be expressed as an LP, suppose the goods are divisible and each buyer’s utility for goods is linear in quantity. However, no buyer wants to consume more than one unit in total (unit demand). Let $x_{ij}$ be the quantity of good $j$ allocated to buyer $i$. For any $B' \subseteq B$ and $S' \subseteq S$, let
$$V(B', S') = \max \sum_{i \in B'} \sum_{j \in S'} v_{ij} x_{ij}$$
subject to
$$\sum_{j \in S'} x_{ij} \leq 1 \quad \forall i \in B'$$
$$\sum_{i \in B'} x_{ij} \leq 1 \quad \forall j \in S'$$
$$x_{ij} \geq 0 \quad \forall i \in B', j \in S'$$
The first constraint arises from the unit demand assumption. The second ensures no seller supplies more than they have. These constraints imply that $x_{ij} \leq 1$ for all $i$ and $j$.
Let $s_i$ be the dual variable associated with each constraint $\sum_{j \in S'} x_{ij} \leq 1$ and $p_j$ the dual variable associated with each constraint $\sum_{i \in B'} x_{ij} \leq 1$. The dual to the above program is:
$$\min \sum_{i \in B'} s_i + \sum_{j \in S'} p_j$$
subject to
$$s_i + p_j \geq v_{ij} \quad \forall i \in B', j \in S', \qquad s, p \geq 0.$$
The dual has an interesting interpretation. Think of $p_j$ as the price of object $j$. Given a collection of prices, the optimal solution to the dual is found by setting each $s_i$ to $\max_j \{v_{ij} - p_j, 0\}$. Thus, each $s_i$ represents the maximum surplus that agent $i$ can receive from the consumption of a single object at prices $\{p_j\}$.
At prices $\{p_j\}$, buyer $i$ must solve:
$$\max \sum_{j \in S'} (v_{ij} - p_j) x_{ij}$$
subject to
$$\sum_{j \in S'} x_{ij} \leq 1, \qquad x_{ij} \geq 0,$$
to determine its demand. It is easy to check that the optimal solution is any convex combination of objects that yield maximum surplus to the buyer.
Suppose $x^*$ is an optimal primal solution and $(s, p)$ an optimal dual solution. Then, the prices $p$ `support’ the efficient allocation $x^*$ in the following sense. Suppose we post a price $p_j$ for each $j \in S'$. Ask each buyer to name their demand correspondence at the posted prices. Then, it is possible to give each agent one of their demanded bundles and not exceed supply. To see why, recall complementary slackness:
$$x^*_{ij} > 0 \Rightarrow s_i + p_j = v_{ij}.$$
So, if $x^*_{ij} > 0$ it follows that $s_i = v_{ij} - p_j = \max_k \{v_{ik} - p_k, 0\}$. Hence, in this economy there is a set of prices, one posted for each good, that balance supply with demand. In other words, LP duality gives us the existence of Walrasian prices in this economy.
The set of feasible dual solutions forms a lattice with respect to the partial order $(s, p) \succeq (s', p')$ if and only if $p_j \geq p'_j$ for all $j$. In this partial order there is a smallest element, which corresponds to the smallest Walrasian price. This can be determined by choosing from among all optimal dual solutions one that minimizes $\sum_j p_j$ (equivalently, maximizes the total surplus $\sum_i s_i$ to buyers).
Hold $S$ fixed and consider $V(B', S)$ as a function of subsets $B'$ of $B$. Then, $V$ is non-decreasing. It is also submodular. Why? From the dual we see that it is obtained by minimizing a linear (i.e. submodular) function over a lattice. As submodularity is preserved under minimization, it follows that $V(B', S)$ is submodular in $B'$. A similar argument delivers a corresponding supermodularity property. This submodularity/supermodularity is useful in deriving certain incentive properties of the efficient allocation.
One can associate a tu co-op game with the assignment model. For any coalition of buyers ($B' \subseteq B$) and sellers ($S' \subseteq S$) let $V(B', S')$ be the value of the coalition. The core, $C$, will be the set of solutions to the following:
$$\sum_{i \in B} s_i + \sum_{j \in S} p_j = V(B, S),$$
$$\sum_{i \in B'} s_i + \sum_{j \in S'} p_j \geq V(B', S') \quad \forall B' \subseteq B, \, S' \subseteq S.$$
The core has the usual interpretation. It is the set of allocations that cannot be blocked by any coalition of buyers and sellers.
Consider the following dual program (DP):
$$\min \sum_{i \in B} s_i + \sum_{j \in S} p_j$$
subject to
$$s_i + p_j \geq v_{ij} \quad \forall i \in B, j \in S, \qquad s, p \geq 0.$$
It is straightforward to check that every optimal solution to (DP) is in $C$ and every point in $C$ corresponds to an optimal solution to (DP). Why is this interesting? It says that most of the constraints that define the core are redundant. It suffices to consider only pairs of agents consisting of one buyer and one seller. If the core is the set of efficient allocations immune to deviations by any subset of agents, then equivalence to (DP) says that it suffices to consider only pairwise deviations.
From the core, consider the following two relations:
$$\sum_{i \in B} s_i + \sum_{j \in S} p_j = V(B, S),$$
$$\sum_{i \in B \setminus k} s_i + \sum_{j \in S} p_j \geq V(B \setminus k, S).$$
Negate the second and add to the first:
$$s_k \leq V(B, S) - V(B \setminus k, S).$$
The term on the right hand side is buyer $k$’s marginal product. Hence, no point in the core can give any buyer more than their marginal product. Submodularity of $V$ implies that there is a point in the core that gives each buyer their marginal product (an old result of Shapley’s). What is this point? It is the one that corresponds to the minimal Walrasian price, i.e., the optimal solution to (DP) that maximizes $\sum_{i \in B} s_i$.
Suppose sellers are non-strategic and we run a Vickrey auction to allocate the goods amongst the buyers. Then, each buyer would walk away with a surplus equal to their marginal product (which is what the Vickrey auction gives buyers to give them the incentive to reveal their valuations). Thus, in this assignment economy, minimal Walrasian prices correspond to the prices that would be charged in a Vickrey auction.
3. Shapley-Shubik (Indivisible)
Now suppose the goods in SS are indivisible. I’m going to show the results outlined above carry through unchanged. To do that I need to tell you about totally unimodular matrices.
3.1. Total Unimodularity
A matrix is called totally unimodular (TUM) iff the determinant of each square submatrix has value 1, -1 or 0. If a matrix is TUM then so is its transpose. If $A$ and $B$ are TUM, then so is the block diagonal matrix formed from $A$ and $B$.
Example 1 The following matrix
$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
is TUM. The following is not:
$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$$
Every proper square submatrix has determinant with absolute value 0 or 1, but the determinant of the entire matrix is 2.
Theorem 1 Let $A$ be a TUM matrix. Let $b$ be an integral vector. Then, every extreme point of $\{x : Ax \leq b, x \geq 0\}$ is integral.
Proof: To every extreme point $x$ of $\{x : Ax \leq b, x \geq 0\}$ there corresponds a non-singular square submatrix $B$ of the constraint matrix such that the non-zero components of $x$ are given by $B^{-1} b$. By Cramer’s rule, we can write $B^{-1} = \mathrm{adj}(B)/\det(B)$, where $\mathrm{adj}(B)$ is the adjoint of $B$. Since $B$ has all integer entries, $\mathrm{adj}(B)$ has all integer entries. Since $A$ is TUM and $B$ is non-singular, it follows that $\det(B) = \pm 1$. Hence $B^{-1}$ has all integer entries. Thus, $x$ is integral.
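The TUM property can be checked directly (if expensively) by enumerating square submatrices. A Python sketch, using the two example matrices above:

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row (tiny matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_tum(A):
    """A matrix is TUM iff every square submatrix has determinant -1, 0 or 1."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

good = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]   # the TUM example above
bad = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]    # full determinant is 2
```

This is exponential in the matrix size; Seymour’s characterization, mentioned below, is what makes polynomial-time recognition possible.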
Paul Seymour gave a complete (polytime) characterization of TUM matrices. The upshot of it is that most TUM matrices are network matrices. $A$ is a network matrix if $a_{ij} \in \{0, 1, -1\}$ for all $i$ and $j$ and each column contains at most two non-zero entries, which are of opposite sign.
Theorem 2 If $A$ is a network matrix, then $A$ is TUM.
Proof: The proof is by induction on the size of a submatrix. Consider a square submatrix of $A$, call it $C$. If each column of $C$ has exactly two non-zero entries then $\det(C) = 0$. Why? Summing up the rows of $C$ gives us zero, meaning that the rows of $C$ are linearly dependent. If there is a column of $C$ that contains exactly one non-zero entry, then compute the determinant of $C$ by expanding along the minor associated with this entry. By induction the determinant must have value 0, 1, or -1. A column with all zeros means that $\det(C) = 0$.
3.2. Back to SS
I’ll show that the constraint matrix for the assignment program that defines $V(B, S)$ is TUM. This would mean that there is always an efficient allocation which is integral.
Fix a good $j$ and agent $i$. Consider the column associated with the variable $x_{ij}$. The variable $x_{ij}$ appears with a coefficient of 1 in exactly two rows. One is the row corresponding to agent $i$ and the other the row corresponding to object $j$. Let $R_1$ consist of all rows corresponding to objects and $R_2$ the set of all rows corresponding to agents. Multiply all the rows in $R_1$ by -1. We now have a constraint matrix where each column contains exactly two non-zero entries of opposite sign.
3.3. Ascending Auctions
The equivalence between minimal Walrasian prices and Vickrey prices in SS means that the Vickrey outcome can be obtained from a tâtonnement process that terminates in the minimal Walrasian price. Two such processes have been proposed in the literature. One by Crawford and Knoer and the other by Demange, Gale and Sotomayor (see also Leonard). Both are variations of dual ascent algorithms for solving the LP formulation of the assignment model.
I’ll outline Demange, Gale and Sotomayor (DGS). Assume that the $v_{ij}$’s are integral. In each iteration we have a vector of prices $p$. Initially all prices are set to zero. In each iteration buyers report their demand correspondence, that is, the set of goods that maximize their surplus at current prices. Let $D_i$ be the demand correspondence of buyer $i$. Consider now the bipartite graph defined on $B \cup S$ as follows: an edge $(i,j)$ exists iff $j \in D_i$. If this graph contains a perfect matching, stop. At current prices demand equals supply. A perfect matching means that there is a way to give each $i$ a $j \in D_i$ such that no good is allocated to more than one buyer. If a perfect matching does not exist, by the Hall marriage theorem (I will state and prove this later), there must exist a set $A \subseteq B$ such that $|N(A)| < |A|$. The set of goods $N(A)$ is called overdemanded. Identify a minimally overdemanded set and increase the price of each good in this set by 1 unit. Repeat.
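The loop is easy to sketch for tiny instances. The Python below brute-forces the matching and the Hall violator, and assumes every buyer always demands some real good (so no dummy zero-price good is modeled); picking the smallest violating neighborhood stands in for a minimally overdemanded set:

```python
from itertools import permutations, combinations

def demand(i, prices, v):
    """Buyer i's demand correspondence: the goods maximizing v[i][j] - prices[j]."""
    surplus = [v[i][j] - prices[j] for j in range(len(prices))]
    best = max(surplus)
    return {j for j, s in enumerate(surplus) if s == best}

def has_perfect_matching(D, n):
    # Brute force: look for an assignment of distinct demanded goods (small n only).
    return any(all(p[i] in D[i] for i in range(n)) for p in permutations(range(n)))

def dgs_auction(v):
    """Price-raising loop: while no perfect matching exists in the demand
    graph, find buyers violating Hall's condition and raise the price of
    every good they demand by one unit."""
    n = len(v)
    prices = [0] * n
    while True:
        D = [demand(i, prices, v) for i in range(n)]
        if has_perfect_matching(D, n):
            return prices
        # A Hall violator A (|N(A)| < |A|) exists; N(A) is overdemanded.
        overdemanded = []
        for r in range(1, n + 1):
            for A in combinations(range(n), r):
                N = set().union(*(D[i] for i in A))
                if len(N) < len(A):
                    overdemanded.append(N)
        O = min(overdemanded, key=len)   # a smallest overdemanded set
        for j in O:
            prices[j] += 1

# Made-up instance: both buyers prefer good 0, so its price must rise.
prices = dgs_auction([[4, 1], [3, 1]])
```

On this instance the price of good 0 climbs until the weaker buyer is indifferent, which is exactly the minimal Walrasian price.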
4. Bilateral Trade
Let’s put this machinery to work on bilateral trade. This is the case when $v_{ij} = u_i - c_j$. Suppose $u_i$ is the private information of buyer $i$ and $c_j$ is the private information of seller $j$.
- The core of the associated tu game is non-empty.
- The point in the core that maximizes the total surplus to buyers makes it a dominant strategy for them to reveal their private information.
- The point in the core that maximizes the total profit to the sellers makes it a dominant strategy for them to reveal their private information.
- In general, there is no point in the core that is jointly `best’ for buyers and sellers. Hence, there is no way to obtain an outcome in the core for which it is a dominant strategy for both sides to reveal their private information.
We have the outlines of an archetypal matching paper. First, show that a stable or core outcome exists. Second, show that when only one side is strategic, one can implement the core outcome in an incentive compatible way. Third, observe that it is impossible to implement the core outcome when both sides are strategic. Fourth, show that as the economy gets large, one can implement something asymptotically close to the core in an incentive compatible way (or implement a core outcome in an asymptotically incentive compatible way). So, lets do the fourth.
The idea is due to Preston McAfee. Order the buyers so that $u_1 \geq u_2 \geq \cdots$. Order the sellers so that $c_1 \leq c_2 \leq \cdots$. The efficient allocation can be computed by determining the largest $k$ such that $u_k \geq c_k$. Buyer $i$ is matched with seller $i$ for all $i \leq k$. McAfee suggests stopping at trade $k-1$. Charge all trading buyers a price of $u_k$. Pay all trading sellers $c_k$. What each agent pays or receives does not depend on their reports. So, reporting their private information is a dominant strategy. Further, the mechanism runs a slight surplus and is individually rational. However, it is not efficient. The efficiency loss is $u_k - c_k$. Assuming the $u$’s and $c$’s are independent draws from a distribution with bounded support, the percentage loss in efficiency approaches zero as $n \rightarrow \infty$.
Alternatively, one can implement the Vickrey outcome. In this case each buyer pays $\max\{u_{k+1}, c_k\}$ and each seller receives $\min\{c_{k+1}, u_k\}$. One can then use properties of order statistics and the Kolmogorov-Smirnov bounds to show that the deficit of the Vickrey auction goes to zero as $n \rightarrow \infty$.
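McAfee’s trade reduction rule is a few lines of code. A Python sketch with made-up values and costs:

```python
def mcafee_trade_reduction(buyer_values, seller_costs):
    """The trade reduction idea: find the efficient number of trades k,
    execute only k - 1 of them, with buyers paying u_k and sellers paid
    c_k (the value and cost of the k-th, excluded, pair)."""
    u = sorted(buyer_values, reverse=True)   # u_1 >= u_2 >= ...
    c = sorted(seller_costs)                 # c_1 <= c_2 <= ...
    k = 0
    while k < min(len(u), len(c)) and u[k] >= c[k]:
        k += 1                               # k = efficient number of trades
    if k == 0:
        return 0, None, None                 # no profitable trades at all
    return k - 1, u[k - 1], c[k - 1]         # trades run, buyer price, seller price

trades, buyer_price, seller_price = mcafee_trade_reduction([10, 8, 5, 2], [1, 3, 6, 9])
```

Here two trades are efficient, one is executed; since $u_k \geq c_k$, buyers pay at least what sellers receive, so the mechanism never runs a deficit.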
5. More on TUM
Recall the constraints for the assignment model:
$$\sum_j x_{ij} = 1 \quad \forall i, \qquad \sum_i x_{ij} = 1 \quad \forall j, \qquad x_{ij} \geq 0.$$
An integer solution, $x$, to these constraints defines a permutation matrix whose $(i,j)$ entry is $x_{ij}$. A fractional solution to these constraints is a doubly stochastic matrix. TUM of the constraint matrix means that every doubly stochastic matrix is a convex combination of permutation matrices. This is known as the Birkhoff-von Neumann theorem. Alternatively, the assignment constraints define the convex hull of the permutation matrices.
For our purposes, TUM means that every fractional solution to the assignment constraints can be interpreted as a lottery over integer assignments. Thus, the assignment constraints give us a succinct description of the set of all lotteries over integer assignments. This will be useful in applications when we search for randomized allocation rules satisfying other properties.
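The decomposition into a lottery is constructive. A Python sketch of the standard peeling argument (brute-force permutation search, so tiny matrices only):

```python
from itertools import permutations
from fractions import Fraction

def birkhoff_decompose(D):
    """Birkhoff-von Neumann peeling: repeatedly find a permutation supported
    on the positive entries of the doubly stochastic matrix D, subtract the
    smallest entry it uses, and record the (weight, permutation) pair."""
    n = len(D)
    D = [[Fraction(x) for x in row] for row in D]
    decomposition = []
    while any(x > 0 for row in D for x in row):
        # A supported permutation exists by Hall's theorem / TUM.
        perm = next(p for p in permutations(range(n))
                    if all(D[i][p[i]] > 0 for i in range(n)))
        weight = min(D[i][perm[i]] for i in range(n))
        decomposition.append((weight, perm))
        for i in range(n):
            D[i][perm[i]] -= weight
    return decomposition

# The uniform 2x2 doubly stochastic matrix: a 50-50 lottery over the
# two permutation matrices.
D = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(1, 2), Fraction(1, 2)]]
parts = birkhoff_decompose(D)
```

Each pass zeroes out at least one entry, so the loop terminates with at most $n^2 - n + 1$ permutations in the lottery.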
Network matrices are an important class of TUM matrices. There are matrices that don’t appear to be network matrices but after certain elementary row operations can be converted into network matrices. One such class is the set of 0-1 matrices with the consecutive 1’s property (C1). A 0-1 matrix has the consecutive 1’s property if there is a permutation of its rows such that the non-zero entries in each column are consecutive. The following matrix is C1:
$$\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
C1 matrices arise in a variety of applications (interval graphs, cyclic staffing). Fulkerson and Gross were the first to give a polytime algorithm for recognizing C1 matrices. The following is due to Veinott and Wagner (1962).
Theorem 3 If $A$ is a C1 matrix, then $A$ is TUM.
Proof: Suppose the rows of $A$ have already been permuted so that the columns have the consecutive 1’s property, and suppose that $A$ is $m \times n$. Define $T$ to be the following $(m+1) \times m$ matrix:
- For all $i \leq m$, $T_{ii} = 1$.
- For $i \leq m$, $T_{i+1, i} = -1$.
- For all other $i$ and $j$, $T_{ij} = 0$.
Here is an example of $T$ (for $m = 3$):
$$\begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \end{pmatrix}$$
To complete the proof it suffices to verify that $T$ is TUM and $TA$ is a network matrix. Note that pre-multiplying $A$ by $T$ corresponds to subtracting row $i-1$ of $A$ from row $i$: a column of $A$ with consecutive 1’s in rows $a$ through $b$ becomes a column with a single 1 in row $a$ and a single -1 in row $b+1$.
I turn now to a class of C1 matrices that will be useful later.
Let $E$ be a ground set of elements and $\mathcal{F}$ a collection of subsets of $E$. $\mathcal{F}$ is called laminar if for any $S, T \in \mathcal{F}$ either $S \subseteq T$, $T \subseteq S$ or $S \cap T = \emptyset$. If one drops the alternative that $S \cap T = \emptyset$, then $\mathcal{F}$ is called a chain.
Given a collection $\mathcal{F}$ of subsets, we can represent it using a 0-1 matrix $A$ as follows: a column for each member of $\mathcal{F}$ and a row for each element of $E$. Set $A_{ij} = 1$ if the set corresponding to column $j$ contains element $i$. It's easy to see that if $\mathcal{F}$ is laminar, then $A$ is C1. Call a 0-1 matrix that arises in this way a laminar matrix.
In fact, $A$ is `equivalent' to a 0-1 matrix with exactly one non-zero entry in each row. Here is how. Pick any two columns of $A$, $j$ and $j'$. Let $S$ and $S'$ be the sets they correspond to in $\mathcal{F}$, and suppose $S' \subset S$. Negate column $j'$ and add it to column $j$; column $j$ now represents $S \setminus S'$. Note that this can at most flip the sign of the determinant of any square submatrix of $A$. Repeat. The result is a 0-1 matrix whose columns are disjoint, i.e., exactly one non-zero entry in each row.
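Both the definition and the column operation are easy to sanity-check in code (a sketch with names of my choosing): `is_laminar` tests every pair of sets, and `disjointify` repeats the operation above, replacing $S$ by $S \setminus S'$ whenever $S' \subset S$, until no strict containments remain.

```python
def is_laminar(family):
    """Every pair of sets is nested or disjoint."""
    sets = [frozenset(s) for s in family]
    return all(a <= b or b <= a or not (a & b)
               for i, a in enumerate(sets) for b in sets[i + 1:])

def disjointify(family):
    """While some S' is a strict subset of some S, replace S by S - S'.
    Each step shrinks a set, so this terminates; for a laminar family
    of distinct sets the result is pairwise disjoint."""
    sets = [set(s) for s in family]
    done = False
    while not done:
        done = True
        for i in range(len(sets)):
            for j in range(len(sets)):
                if i != j and sets[j] < sets[i]:
                    sets[i] = sets[i] - sets[j]
                    done = False
    return sets

F = [{1, 2, 3, 4}, {1, 2}, {3}, {5, 6}]
assert is_laminar(F)
assert not is_laminar([{1, 2}, {2, 3}])
D = disjointify(F)
# after the column operations the sets are pairwise disjoint
assert all(not (a & b) for i, a in enumerate(D) for b in D[i + 1:])
```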
So, wikipedia is dark today in protest of an initiative in congress to block sites that link to sites that infringe on copyrighted intellectual property. Ever noticed before how many times a day you use wikipedia?
Here is what I don't get about this whole idea of "copyrighted intellectual property". Is it advocated on moral grounds or on economic grounds? I mean, when Bob sneaks into Alice's vineyard and eats the grapes without permission, we view it as a moral atrocity; it's just a wicked thing to do; it invokes the wrath of the gods; Moses explicitly forbade it. To be sure, it's hard to pin down what exactly makes the vineyard belong to Alice without getting into a recursive definition of ownership, and if we try tracing the vineyard back from one legitimate owner to another we arrive at the first man who just fenced a piece of land and said "This is mine". But here the economic argument kicks in: most of us don't begrudge this initial act of illegitimate fencing because the bastard who committed it was the founder of civil society. We like the idea of civil society. We like prosperity and growth. Without protection of private property we would have none of these.
But what about protection of "intellectual property"? Clearly this is not a necessary condition for a civil society. It's also not a necessary condition for the production of knowledge and culture. We had Plato and Archimedes and Cicero and Shakespeare and Newton before it occurred to anybody that Bob has to get Alice's permission to reproduce a code that Alice wrote. Come to think of it, when did the concept of intellectual property pop up anyway? Waitaminute, let me just look it up on wikipedia. Oops… What did we ever do before wikipedia?
The White House thinks that intellectual property is justified on economic grounds:
Online piracy is a real problem that harms the American economy, and threatens jobs for significant numbers of middle class workers and hurts some of our nation’s most creative and innovative companies and entrepreneurs. It harms everyone from struggling artists to production crews, and from startup social media companies to large movie studios.
I wonder if this assertion is backed by some empirical research. I realize some people lose their jobs because of online piracy. Some people also lost their jobs following the introduction of ATMs. But we view ATMs as a positive development since they made a certain service way cheaper. My guess is that the same is true about intellectual piracy: it makes distribution of culture and knowledge cheaper and therefore also makes the production of culture and knowledge cheaper. True, some companies, particularly the established ones, are damaged by intellectual theft. Other companies, particularly startups, benefit. One may argue that intellectual piracy destroys the incentive to produce and therefore no new culture or knowledge will be produced absent some protection for intellectual property. But this is a claim that can be empirically checked, no? We live in a world of file sharing and user-generated (often stolen) content sites. Are fewer books written?
Embassy Suite hotel Saturday morning. Photo by Jacob Leshno.
Update (May 2017): McLennan and Tourky seem to be the first to make the argument in this post (link to their paper).
They say that when Alfred Tarski came up with his theorem that the axiom of choice is equivalent to the statement that, for every infinite set $A$, $A$ and $A \times A$ have the same cardinality, he first tried to publish it in the French PNAS. Both venerable referees rejected the paper: Fréchet argued there is no novelty in an equivalence between two well known theorems; Lebesgue argued that there is no interest in an equivalence between two false statements. I don't know if this ever happened but it's a cool story. I like to think about it every time a paper of mine is rejected and the referees contradict each other.
Back to game theory, one often hears that the existence of Nash equilibrium is equivalent to Brouwer's fixed point theorem. Of course we all know that Brouwer implies Nash, but the other direction is trickier and less known. I heard a satisfying argument for the first time a couple of months ago from Rida. I don't know whether this is a folk theorem or somebody's theorem, but it is pretty awesome and should appear in every game theory textbook.
So, assume Nash's Theorem, let $C$ be a compact convex set in $\mathbb{R}^n$ and let $f : C \to C$ be a continuous function. We claim that $f$ has a fixed point. Indeed, consider the two-player normal-form game in which the set of pure strategies of every player is $C$, and the payoffs under the strategy profile $(x, y)$ are $-\|x - f(y)\|^2$ for player I and $-\|x - y\|^2$ for player II. Since strategy sets are compact and the payoff functions are continuous, the game has an equilibrium in mixed strategies. In fact, the equilibrium strategies must be pure. (Indeed, for every mixed strategy of player II, player I has a unique best response, the one concentrated on the barycenter of the image of that strategy under $f$; and similarly for player II.) But if $(x, y)$ is a pure equilibrium then $y = x$ and $x = f(y)$, so it is immediate that $f(x) = x$.
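Spelled out (writing the payoffs as $u_1(x,y) = -\|x - f(y)\|^2$ and $u_2(x,y) = -\|x - y\|^2$, my notation for the construction described above), the best-response conditions at a pure profile $(x, y)$ read:

```latex
% Player II imitates player I; player I chases f of player II's choice.
y \in \operatorname*{argmax}_{y' \in C} \, -\|x - y'\|^2
  \quad\Longrightarrow\quad y = x,
\qquad
x \in \operatorname*{argmax}_{x' \in C} \, -\|x' - f(y)\|^2
  \quad\Longrightarrow\quad x = f(y).
```

Combining the two gives $x = f(y) = f(x)$, i.e., a fixed point of $f$.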
Update: I should add that I believe the transition from existence of mixed Nash equilibrium in games with finite strategy sets to existence in games with compact strategy sets and continuous payoffs is not hard. In the case of the game defined above, if $\{c_1, c_2, \dots\}$ is a dense subset of $C$ and $\sigma_n$ is a mixed equilibrium profile of the finite game with the same payoff functions in which both players are restricted to the pure strategy set $\{c_1, \dots, c_n\}$, then an accumulation point of the sequence $(\sigma_n)$ in the weak topology is a mixed strategy equilibrium in the original game.