A mathematical detour today. I will state and prove some basic results in combinatorial optimization that have applications in matching and auction theory. The first is the max-flow-min-cut theorem. I’ll use it to prove Hall’s marriage theorem as well as derive the Border-Matthews characterization of implementable interim allocation rules. It has other applications. Itai Sher, for example, uses it in the analysis of a special case of the Glazer-Rubinstein model of persuasion. Mobius et al. use it in their study of capital flows in social networks (the Peruvian village paper). Olszewski and I have a forthcoming paper in which we use it to study team formation.

Then, it’s on to polymatroids, which can be used to show that the assignment problem retains its integrality property in the presence of additional constraints. These additional constraints arise in, for example, school choice problems. See the paper by Budish, Che, Kojima and Milgrom for a list of examples.

1. The Maximum Flow Problem

Let {G= (V,E)} be a network with vertex set {V} and edge set {E}. Assume, for convenience, that edges are `bi-directional’. In {V} there are two special vertices. One, called {s}, is the source and the other, {t}, is the sink. Each edge {(i,j) \in E} has a capacity {c_{ij}} that is the maximum flow that can traverse {(i,j)} (in either direction). Assume {c_{ij}} is integral for all {(i,j) \in E}. To avoid trivialities, assume there is at least one path in {G} from {s} to {t}.

The maximum flow problem is to find the maximum amount of flow that can be sent from {s}, through the network, to {t}. That there is only one source-sink pair is not a restriction. An instance with multiple sources and sinks can be converted into one with a single {s-t} pair through the introduction of a dummy super source and super sink.

A cut in {G} is a set of edges whose removal disconnects {s} from {t}. Equivalently, a cut is a set of edges that intersects every path from {s} to {t} in {G}. A cut can also be defined in terms of a partition of {V} into two sets {S} and {T} such that {s \in S} and {t \in T}. A cut would be the set of edges with one endpoint in {S} and the other in {T}. The capacity of a cut is the sum of the capacities of the edges in the cut.

The maximum {s-t} flow is clearly bounded above by the capacity of any cut. Indeed, the max {s-t} flow must be bounded above by the minimum capacity cut. Remarkably, this upper bound is tight, i.e., the max flow is the min cut. Note that the min cut must be integer valued but it is not obvious that the maximum flow should be integer valued.

We can formulate the problem of finding a maximum flow as a linear program. To do this, let {x_{ij}} denote the amount of flow on edge {(i,j)} from {i} to {j}. Then, the maximum flow problem can be expressed as follows:

\displaystyle  \max f

subject to

\displaystyle  f - \sum_{(s,j) \in E}x_{sj} = 0

\displaystyle \sum_{i \in V: (i,j) \in E}x_{ij} - \sum_{k \in V: (j,k) \in E}x_{jk} = 0 \quad \forall j \in V\setminus\{s,t\}

\displaystyle  -f + \sum_{(i,t) \in E}x_{it} = 0

\displaystyle 0 \leq x_{ij} \leq c_{ij} \quad \forall (i,j) \in E

The first constraint says that the flow, {f}, out of {s} must use edges of the form {(s,j)}. The second constraint imposes flow conservation at each vertex, i.e., flow in must equal flow out. The third constraint ensures that the total flow {f} entering {t} must use edges of the form {(i,t)}.

The constraint matrix is a network matrix and so TUM. You can see this by noting that each variable {x_{ij}} appears in at most two constraints with opposite sign. Once as an `incoming’ edge and once as an `outgoing’ edge. The variable {f} also appears exactly twice with opposite sign.

TUM of the constraint matrix means that there is an optimal solution in which the flow on each edge, as well as the total flow {f}, is integer valued.
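
To make this concrete, here is a minimal Python sketch of an augmenting-path (Edmonds-Karp style) computation of the maximum flow. The dictionary encoding of capacities and the small example network are my own illustration, and edges are treated as directed for simplicity. With integral capacities each augmentation pushes an integral amount, so the flow returned is integral, consistent with the TUM argument.

    from collections import deque

    def max_flow(capacity, s, t):
        """Edmonds-Karp sketch: repeatedly augment along shortest residual paths.

        capacity: dict mapping a directed edge (i, j) to its integer capacity
        (a hypothetical encoding chosen for this illustration)."""
        residual, nodes = {}, set()
        for (i, j), c in capacity.items():
            residual[(i, j)] = residual.get((i, j), 0) + c
            residual.setdefault((j, i), 0)
            nodes.update([i, j])
        flow_value = 0
        while True:
            # breadth-first search for an s-t path with positive residual capacity
            parent, queue = {s: None}, deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v in nodes:
                    if v not in parent and residual.get((u, v), 0) > 0:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return flow_value          # no augmenting path left: flow is maximum
            path, v = [], t
            while parent[v] is not None:   # walk back from t to s
                path.append((parent[v], v))
                v = parent[v]
            delta = min(residual[e] for e in path)
            for (u, v) in path:            # push delta units along the path
                residual[(u, v)] -= delta
                residual[(v, u)] += delta
            flow_value += delta

    # made-up example network
    caps = {('s', 'a'): 2, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 1, ('b', 't'): 2}
    print(max_flow(caps, 's', 't'))        # 3, the capacity of the cut {(a,t), (b,t)}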

Theorem 1 The maximum flow is equal to the capacity of the minimum cut.

Proof: Let {f^*} be the value of the maximum flow. By LP duality:

\displaystyle  f^* = \min \sum_{(i,j) \in E}c_{ij}z_{ij}

\displaystyle  z_{ij} \geq |y_i - y_j| \quad \forall (i,j) \in E

\displaystyle y_s - y_t = 1

\displaystyle  z_{ij} \geq 0 \quad \forall (i,j) \in E

To see where this dual came from, observe that we have one dual variable {y_i} for each {i \in V} and one variable {z_{ij}} for each {(i,j) \in E}. Recall the bidirectional assumption. So, {z_{ij}} is different from {z_{ji}}. Thus one should have two constraints for each pair {(i,j)}:

\displaystyle z_{ij} \geq y_i - y_j

\displaystyle z_{ji} \geq y_j - y_i

I have simply folded these into a single constraint {z_{ij} \geq |y_i - y_j|}. Notice that at optimality at most one of {z_{ij}} and {z_{ji}} can be positive which is why this folding is legitimate.

Notice also that we can WLOG fix {y_s = 1} and {y_t = 0}. This allows us to write the dual as:

\displaystyle  f^* = \min \sum_{(i,j) \in E}c_{ij}z_{ij}

\displaystyle z_{ij} \geq |y_i - y_j| \quad \forall (i,j) \in E

\displaystyle y_s = 1

\displaystyle  y_t = 0

\displaystyle  z_{ij} \geq 0 \quad \forall (i,j) \in E

By TUM of the constraint matrix, we know that an optimal solution to this program that is integral exists. An integer solution to this program corresponds to a cut. Let {S = \{i: y_i =1\}} and {T = \{i: y_i = 0\}}. Observe

\displaystyle f^* = \sum_{i \in S, j \in T}c_{ij},

which is the capacity of the cut associated with the partition. It is easy to see that every cut corresponds to a feasible integer solution to the program.{\Box}
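
To see the dual at work, here is a sketch (assuming scipy is available) that solves the dual program above with scipy.optimize.linprog on the same made-up instance used in the max-flow sketch earlier. The absolute value constraint is unfolded into its two linear pieces, and on this instance the solver returns the integral 0/1 labelling of vertices that TUM predicts, i.e., a minimum cut.

    from scipy.optimize import linprog

    nodes = ['s', 'a', 'b', 't']
    caps = {('s', 'a'): 2, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 1, ('b', 't'): 2}
    edges = list(caps)
    ny = len(nodes)

    # variables: y_i for each node followed by z_ij for each edge
    c = [0.0] * ny + [float(caps[e]) for e in edges]     # minimize sum c_ij z_ij
    A_ub, b_ub = [], []
    for k, (i, j) in enumerate(edges):
        for (a, b) in ((i, j), (j, i)):                  # z_ij >= y_a - y_b, both orientations
            row = [0.0] * (ny + len(edges))
            row[nodes.index(a)] = 1.0
            row[nodes.index(b)] = -1.0
            row[ny + k] = -1.0
            A_ub.append(row)
            b_ub.append(0.0)
    A_eq = [[1.0 if v == 's' else 0.0 for v in nodes] + [0.0] * len(edges),
            [1.0 if v == 't' else 0.0 for v in nodes] + [0.0] * len(edges)]
    b_eq = [1.0, 0.0]                                    # y_s = 1, y_t = 0
    bounds = [(None, None)] * ny + [(0, None)] * len(edges)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method='highs')
    print(res.fun, dict(zip(nodes, res.x[:ny])))         # cut value 3.0; y_i = 1 marks the S side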

I can’t resist offering a probabilistic proof due to Bertsimas, Teo and myself. Let {(y^*, z^*)} be an optimal solution to the dual program. Suppose, for a contradiction, that it is fractional. Consider the following randomization scheme:

  1. Position node {i} in {[0,1]} according to the value of {y^*_i}.
  2. Generate a single random variable {U} uniformly distributed in {[0,1]}. Round all nodes {i} with {y^*_i \leq U} to {y_i=0} and all nodes {i} with {y^*_i > U} to {y_i=1}.
  3. Set {z_{ij} = |y_i-y_j|}.

The randomization scheme clearly returns a feasible cut. More importantly, the randomization scheme defines a probability distribution over the set of cuts. Now let’s compute the expected capacity of the cut generated by the randomization scheme:

\displaystyle \sum_{(i,j) \in E}c_{ij} P\{\min (y^*_i, y^*_j) \leq U \leq \max (y^*_i, y^*_j)\}

\displaystyle = \sum_{(i,j) \in E}c_{ij}|y^*_i - y^*_j| = \sum_{(i,j) \in E}c_{ij}z^*_{ij}.

Thus, the expected capacity of our random cut is the same as the optimal objective function value {f^*}. Because the randomization scheme is a distribution over cuts, some cut must have capacity at most {f^*}; since no cut can have capacity below the maximum flow, that cut has capacity exactly {f^*}.
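
Here is a sketch of the randomization scheme in Python. The fractional dual solution and the capacities below are hypothetical numbers chosen only for illustration, so the empirical average is compared with {\sum_{(i,j) \in E} c_{ij}|y^*_i - y^*_j|} rather than with {f^*}.

    import random

    def round_cut(y_star, capacity, u):
        """One draw of the threshold rounding: nodes with y* above u go to the S side."""
        side = {i: 1 if yi > u else 0 for i, yi in y_star.items()}
        # capacity of the resulting cut: edges whose endpoints land on opposite sides
        return sum(c for (i, j), c in capacity.items() if side[i] != side[j])

    # hypothetical fractional dual solution with y_s = 1 and y_t = 0, as in the text
    y_star = {'s': 1.0, 'a': 0.6, 'b': 0.3, 't': 0.0}
    capacity = {('s', 'a'): 2, ('s', 'b'): 1, ('a', 'b'): 1, ('a', 't'): 1, ('b', 't'): 3}

    lp_value = sum(c * abs(y_star[i] - y_star[j]) for (i, j), c in capacity.items())
    draws = [round_cut(y_star, capacity, random.random()) for _ in range(100000)]
    print(lp_value, sum(draws) / len(draws))   # the two numbers should be close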

1.1. Hall Marriage Theorem

Recall that in the DGS auction from the previous session, an important step relied on the Hall marriage theorem. Here, I will use max-flow-min-cut to prove it.

Let {(V_1, V_2, E)} be a bipartite graph with vertices {V_1 \cup V_2} and edge set {E}. Suppose {|V_1| = |V_2| = n}. Note edges run between {V_1} and {V_2} only. A perfect matching is a subset of edges {M} such that each vertex is incident to exactly one edge in {M}. A matching is a subset of edges {M} such that each vertex is incident to at most one edge in {M}.

For any {S \subseteq V_1}, let {N(S) \subseteq V_2} be the set of neighbors of {S}. That is {N(S) = \{j \in V_2: (i,j) \in E,\,\, i \in S\}}.

Theorem 2 A bipartite graph {(V_1, V_2, E)} contains a perfect matching iff. {|S| \leq |N(S)|} for all {S \subseteq V_1}.

Proof: This is clearly necessary. So, suppose { |S| \leq |N(S)|} for all {S \subseteq V_1} and for a contradiction assume that the graph does not contain a perfect matching.

Introduce a dummy source vertex {s} with an edge directed to each vertex in {V_1}. To each edge {(s,i)} with {i \in V_1}, assign a capacity of 1 unit. Introduce a dummy sink {t} with an edge directed from each vertex of {V_2} to {t}. To each edge of the form {(j,t)} assign a capacity of 1 unit. To each edge in {E} assign a capacity of {\infty}.

Consider now a maximum flow from {s} to {t}. Because the constraint matrix is TUM, we can assume the flow is integral, i.e., the flow on each edge is an integer amount. Second, it is easy to see that no edge will carry more than 1 unit of flow. Third, the set of edges {M} in {E} that carry a non-zero amount of flow must be a matching. Why? No more than one unit of flow can leave any {i \in V_1} and no more than one unit of flow can enter any {j \in V_2}. Fourth, each matching corresponds to a feasible flow. Thus, the value of the maximum flow gives us the size of a maximum cardinality matching. Because we assumed no perfect matching, the value of the maximum flow must be less than {n}, i.e., {|M| < n}.

Now a maximum flow must equal the value of a minimum cut. Clearly, no minimum cut can contain edges in {E} because they have infinite capacity. So, the minimum cut must consist of edges of the form {(s,i)} and {(j,t)}. Let {A_1 \subseteq V_1} and {A_2 \subseteq V_2} be the set of vertices incident to the edges in a minimum cut. The capacity of this cut will be {|A_1| + |A_2| < n}. Because they form a cut, removing them from the graph would delete all paths from {s} to {t}. Do that.

Consider any {i \in V_1 \setminus A_1}. There cannot be a {j \in V_2 \setminus A_2} such that {(i,j) \in E}. If there were, there would be a path {s \rightarrow i \rightarrow j \rightarrow t}. That means every neighbor of a vertex {i \in V_1 \setminus A_1} resides in {A_2}. Hence, by assumption:

\displaystyle |A_2| \geq |V_1 \setminus A_1| = n - |A_1| \Rightarrow |A_1| + |A_2| \geq n,

a contradiction. {\Box}

The Hall marriage theorem is easily generalized to something called Gale’s demand theorem. Suppose each vertex {i \in V_1} is a demand vertex, demanding {d_i} units of a homogeneous good. Each vertex {j \in V_2} is a supply vertex, supplying {s_j} units of that same good. Supply can be shipped to demand nodes only along the edges in {E}. Is there a feasible way to match supply with demand? Yes, iff. for all {S \subseteq V_1} we have

\displaystyle \sum_{i \in S}d_i \leq \sum_{j \in N(S)}s_j.
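
A brute-force check of Gale’s condition is straightforward; with all demands and supplies equal to 1 it reduces to Hall’s condition. The bipartite instance below is a made-up illustration.

    from itertools import combinations

    def gale_feasible(demand, supply, edges):
        """Check sum_{i in S} d_i <= sum_{j in N(S)} s_j for every set S of demand vertices."""
        demanders = list(demand)
        for r in range(1, len(demanders) + 1):
            for S in combinations(demanders, r):
                neighbours = {j for (i, j) in edges if i in S}
                if sum(demand[i] for i in S) > sum(supply[j] for j in neighbours):
                    return False
        return True

    # made-up instance with unit demands and supplies, i.e., Hall's condition
    demand = {'i1': 1, 'i2': 1, 'i3': 1}
    supply = {'j1': 1, 'j2': 1, 'j3': 1}
    edges = {('i1', 'j1'), ('i2', 'j1'), ('i3', 'j2'), ('i3', 'j3')}
    print(gale_feasible(demand, supply, edges))   # False: N({i1, i2}) = {j1} is too small

The check enumerates all subsets of {V_1}, which is fine for small instances; the flow construction in the proof gives a polynomial-time alternative.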

1.2. Border’s Theorem

For simplicity assume two agents and a single good; let {T} be the type space of agent 1 and {S} the type space of agent 2. Suppose, also for simplicity, that types are independent draws. Denote by {f_i(x)} the probability of agent {i} having type {x}. Let {a_i(t,s)} be the probability that the good is allocated to agent {i} when agent 1 reports {t} and agent 2 reports {s}. Feasibility requires that

\displaystyle a_1(t, s) + a_2(t, s) \leq 1 \,\, \forall t, s

\displaystyle a_i(t,s) \geq 0\,\, \forall i,\,\, \forall t, s

{a} is an allocation rule. Denote by {q_1(t)} the interim allocation probability to agent 1 when she reports {t}. Note that

\displaystyle q_1(t) = \sum_{s \in S}a_1(t,s)f_2(s)

A similar expression applies for {q_2(s)}. Call an interim allocation probability implementable if there exists an allocation rule that corresponds to it. Here is the question originally posed by Steve Matthews (who conjectured the answer): characterize the implementable {q}‘s. The high level idea first. Roll up all the inequalities relating {a} to {q} and feasibility into matrix form:

\displaystyle {\cal I}q - {\cal P}a = 0

\displaystyle {\cal A}a \leq 1

\displaystyle a \geq 0

Denote by {F} the set of {(q,a)} that satisfy the system above. The purely mathematical question is this: for which {q}‘s does there exist at least one {a} such that {(q,a) \in F}? In other words, we want to identify the set {K}:

\displaystyle K = \{q: \exists a\,\, with\,\, (q,a) \in F\}.

The set {K} is called the reduced form. Geometrically, {K} is just the projection of {F} onto the {q}-space. By the Farkas lemma, the set {K} can be characterized in the following way. Let {C = \{(u,v): v \geq 0, -u{\cal P} + v{\cal A} \geq 0\}}. {C} is a cone. Then,

\displaystyle K = \{q: u \cdot q \leq v \cdot 1\,\, \forall (u,v) \in C\}.

It would seem that {K} is described by a continuum of inequalities. However, because {C} is a cone, it suffices to consider only the generators of the cone {C} and there are finitely many of them. The challenge now reduces to identifying the generators of {C}. When the matrices {{\cal A}} and {{\cal P}} have suitable structure this is possible and is the case here. For convenience assume {T = \{t, t'\}} and {S = \{s,s'\}}. Here are all the inequalities:

\displaystyle  a_1(t,s) + a_2(t,s) \leq 1

\displaystyle  a_1(t',s) + a_2(t',s) \leq 1

\displaystyle  a_1(t',s') + a_2(t',s') \leq 1

\displaystyle  a_1(t,s') + a_2(t,s') \leq 1

\displaystyle  a_1(t,s)f_2(s) + a_1(t,s')f_2(s') = q_1(t)

\displaystyle  a_1(t',s)f_2(s) + a_1(t',s')f_2(s') = q_1(t')

\displaystyle  a_2(t,s)f_1(t) + a_2(t',s)f_1(t') = q_2(s)

\displaystyle  a_2(t,s')f_1(t) + a_2(t',s')f_1(t') = q_2(s')

Now some appropriate scaling:

\displaystyle  f_1(t)f_2(s) a_1(t,s) + f_1(t)f_2(s)a_2(t,s) \leq f_1(t)f_2(s)

\displaystyle  f_1(t')f_2(s)a_1(t',s) + f_1(t')f_2(s)a_2(t',s) \leq f_1(t')f_2(s)

\displaystyle  f_1(t')f_2(s')a_1(t',s') + f_1(t')f_2(s')a_2(t',s') \leq f_1(t')f_2(s')

\displaystyle  f_1(t)f_2(s')a_1(t,s') + f_1(t)f_2(s')a_2(t,s') \leq f_1(t)f_2(s')

\displaystyle  a_1(t,s)f_1(t)f_2(s) + a_1(t,s')f_1(t)f_2(s') = f_1(t)q_1(t)

\displaystyle  a_1(t',s)f_1(t')f_2(s) + a_1(t',s')f_1(t')f_2(s') = f_1(t')q_1(t')

\displaystyle  a_2(t,s)f_1(t)f_2(s) + a_2(t',s)f_1(t')f_2(s) = f_2(s)q_2(s)

\displaystyle  a_2(t,s')f_1(t)f_2(s') + a_2(t',s')f_1(t')f_2(s') = f_2(s')q_2(s')

Now a change of variables to make things nice and neat: {x_i(u,v) = a_i(u,v)f_1(u)f_2(v)}.

\displaystyle  x_1(t,s) + x_2(t,s) \leq f_1(t)f_2(s)

\displaystyle x_1(t',s) + x_2(t',s) \leq f_1(t')f_2(s)

\displaystyle  x_1(t',s') + x_2(t',s') \leq f_1(t')f_2(s')

\displaystyle  x_1(t,s') + x_2(t,s') \leq f_1(t)f_2(s')

\displaystyle  x_1(t,s)+ x_1(t,s') = f_1(t)q_1(t)

\displaystyle  x_1(t',s) + x_1(t',s') = f_1(t')q_1(t')

\displaystyle  x_2(t,s)+ x_2(t',s) = f_2(s)q_2(s)

\displaystyle  x_2(t,s') + x_2(t',s') = f_2(s')q_2(s')

This system corresponds to the flow balance conditions for a flow problem on a bipartite graph. For each profile of types {(u,v)} introduce a supply vertex with supply {f_1(u)f_2(v)}. For each {u \in T} introduce a demand vertex with demand {f_1(u)q_1(u)}, and for each {v \in S} a demand vertex with demand {f_2(v)q_2(v)}. From each supply vertex {(u,v)}, introduce two edges. One is directed from {(u,v)} into the demand vertex for {u}; the flow on this edge is {x_1(u,v)}. The other is directed into the demand vertex for {v}; the flow on this edge is {x_2(u,v)}.

The first four inequalities say that the flow out of a supply vertex cannot exceed its available supply. The second four ensure that the flow into a demand vertex matches the demand.

With this flow interpretation in hand, we can use Gale’s generalization of the Hall marriage theorem to determine when a feasible flow exists. Existence of a feasible flow means that {q} is implementable. Let {A \subseteq T} and {B \subseteq S}. Then, for all such {A} and {B},

\displaystyle \sum_{t \in A}f_1(t)q_1(t) + \sum_{s \in B}f_2(s)q_2(s) \leq \sum_{(u,v) \in (A \times S) \cup (T \times B)}f_1(u)f_2(v)

is necessary and sufficient for {q} to be implementable. The left hand side of this inequality is the expected quantity of the good allocated to the agents when their types come from {A} and {B}. The right hand side is the total supply of the profiles adjacent to {A} and {B}, i.e., the probability that agent 1 has a type in {A} or agent 2 has a type in {B}. In fact we can rewrite this inequality to read

\displaystyle \sum_{t \in A}f_1(t)q_1(t) + \sum_{s \in B}f_2(s)q_2(s) \leq 1 - \sum_{(u,v) \in A^c \times B^c}f_1(u)f_2(v).

Here, {A^c} and {B^c} are the complements of {A} and {B}.
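
As a sanity check, here is a small Python routine that tests this condition by brute force over all {A \subseteq T} and {B \subseteq S} in the two-agent case. The type spaces, distributions and interim rule below are hypothetical numbers; the particular {q} comes from the rule `give the good to agent 1 exactly when she reports {t}, otherwise to agent 2’.

    from itertools import chain, combinations

    def subsets(keys):
        keys = list(keys)
        return chain.from_iterable(combinations(keys, r) for r in range(len(keys) + 1))

    def implementable(f1, f2, q1, q2):
        """Check the two-agent condition for every pair of sets A x B (including empty sets)."""
        for A in subsets(f1):
            for B in subsets(f2):
                lhs = sum(f1[t] * q1[t] for t in A) + sum(f2[s] * q2[s] for s in B)
                mass_outside = sum(f1[u] for u in f1 if u not in A) * \
                               sum(f2[v] for v in f2 if v not in B)
                if lhs > 1 - mass_outside + 1e-9:
                    return False
        return True

    # hypothetical example with T = {t, t'} and S = {s, s'}
    f1 = {'t': 0.5, "t'": 0.5}
    f2 = {'s': 0.5, "s'": 0.5}
    q1 = {'t': 1.0, "t'": 0.0}       # agent 1 gets the good exactly when she reports t
    q2 = {'s': 0.5, "s'": 0.5}       # otherwise agent 2 gets it, whatever she reports
    print(implementable(f1, f2, q1, q2))   # True, and several inequalities hold with equality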

As you will see later one can generalize this analysis to cases where the allocation rule, {a} must satisfy additional constraints. For details see Che, Kim and Mierendorff as well as Federgruen and Groenvelt.

2. Polymatroids

Let {E} be a ground set. A real valued function {f} defined on subsets of {E} is

  • non-decreasing if {S \subseteq T \Rightarrow f(S) \leq f(T)},
  • submodular if {\forall S, T \subseteq E}

    \displaystyle f(S) + f(T) \geq f(S \cup T) + f(S \cap T),

  • supermodular if {-f} is submodular.

Let {f} be a submodular function on {E}. The polytope:

\displaystyle P(f) = \{x \in \Re^n_+: \sum_{j \in S}x_j \leq f(S)\quad \forall S \subseteq E\}

is the polymatroid associated with {(E,f)}. Notice that {P(f) \neq \emptyset} iff {f(S) \geq 0\,\, \forall S \subseteq E}. Polymatroids arise in a number of economic settings. One that we have just discussed is given by the Border inequalities. Suppose there is a single object to allocate among {n} bidders whose types are independent draws from the same distribution (with density {\pi}). Restrict attention to allocation rules that do not depend on the identity of the agent. Let {q(t)} be the interim allocation probability if an agent reports type {t}. Then, {q} is implementable iff.

\displaystyle n \sum_{t \in S}\pi(t)q(t) \leq 1 - \Big[\sum_{t \not \in S}\pi(t)\Big]^n\,\, \forall S \subseteq T.

Call the right hand side of the above {f(S)} and consider the change of variables {x_t = n\pi(t)q(t)}:

\displaystyle \sum_{t \in S}x_t \leq f(S)\,\, \forall S \subseteq T.

Now {f} is non-decreasing and submodular. So, the feasible region described by the Border inequalities is a polymatroid. The theorem below tells us that every extremal interim allocation probability can be implemented as a priority rule. That is, types are assigned priorities and at each profile the good is allocated to the agent whose type has the highest priority. This connection to priority rules is useful for analyzing optimal auctions in dynamic settings (see my survey paper on dynamic mechanism design) as well as competition between auctions (see Pai’s jmp).
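
Here is a quick numerical illustration (with a made-up three-type distribution and two bidders) that this {f} is non-decreasing and submodular, so the Border inequalities do indeed cut out a polymatroid.

    from itertools import combinations

    def border_f(S, pi, n):
        """f(S) = 1 - (probability that a single draw falls outside S)^n."""
        outside = sum(p for t, p in pi.items() if t not in S)
        return 1 - outside ** n

    # hypothetical symmetric instance: three types, two bidders
    pi = {'low': 0.5, 'mid': 0.3, 'high': 0.2}
    n = 2
    types = list(pi)
    sets = [frozenset(c) for r in range(len(types) + 1) for c in combinations(types, r)]
    for S in sets:
        for T in sets:
            assert border_f(S, pi, n) + border_f(T, pi, n) >= \
                   border_f(S | T, pi, n) + border_f(S & T, pi, n) - 1e-12   # submodular
            if S <= T:
                assert border_f(S, pi, n) <= border_f(T, pi, n) + 1e-12      # non-decreasing
    print("f is non-decreasing and submodular on this instance")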

Theorem 3 Let {f} be a non-decreasing, integer valued, submodular function on {E} with {f( \emptyset) =0}. Then, the polymatroid {P(f)} has all integral extreme points.

Proof: Choose a weight vector {c} and order the elements of {E} by decreasing weight:

\displaystyle  c_1 \geq c_2 \geq \ldots \geq c_k \geq 0 > c_{k+1} \geq \ldots \geq c_n.

Let {S^0 = \emptyset} and {S^j = \{1, 2, \ldots, j\}} for all {j \in E}. Consider the vector {x} obtained by the following greedy procedure:

  1. {x_j = f(S^j) - f(S^{j-1})} for {1 \leq j \leq k}
  2. {x_j = 0} for {j \geq k+1}.

I show that {x \in \arg \max\{cz : z \in P(f)\}.} Note {x} is integral because {f} is integer valued, and non-negative because {f} is non-decreasing. To verify that {x \in P(f)} observe:

\displaystyle  \sum_{j \in T}x_j = \sum_{j \in T\cap S^k}[f(S^j) - f(S^{j-1})] \leq \sum_{j\in T\cap S^k}[f(S^j \cap T) - f(S^{j-1} \cap T)] \leq f(S^k \cap T) - f(\emptyset) \leq f(T).

The dual to our optimization problem is:

\displaystyle  \min \sum_{S \subseteq E}f(S)y_S

subject to

\displaystyle  \sum_{S \ni j}y_S \geq c_j\quad \forall j \in E

\displaystyle  y_S \geq 0 \quad \forall S \subseteq E

To show that {x} is an optimal primal solution we identify a dual feasible solution with an objective function value of {cx}. Set {y_{S^j} = c_j - c_{j+1}} for {1 \leq j < k}, set {y_{S^k} = c_k}, and set {y_S = 0} for every other {S \subseteq E}.

Notice that {y_S \geq 0} for all {S \subseteq E} and is feasible in the dual because:

\displaystyle \sum_{S \ni j}y_S = y_{S^j} + \ldots + y_{S^k} = c_j

for all {j \leq k} and

\displaystyle \sum_{S \ni j}y_S \geq 0 \geq c_j

if {j \geq k+1}.

The dual objective function value is:

\displaystyle \sum_{j=1}^{k-1}(c_j - c_{j+1})f(S^j) + c_kf(S^k) = \sum_{j=1}^kc_j[f(S^j) - f(S^{j-1})] = cx.

Now suppose {P(f)} has a fractional extreme point. Choose {c} to be a weight vector that is maximized over {P(f)} uniquely at that point. However, as shown above, every weight vector is maximized at some integral point of {P(f)}, a contradiction.{\Box}
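
Here is a minimal sketch of the greedy procedure from the proof. The submodular function used below, a simple coverage function, is just an illustrative stand-in; any non-decreasing, integer valued submodular {f} with {f(\emptyset) = 0}, supplied as a Python callable, would do.

    def greedy_polymatroid(c, f, E):
        """Maximize c.x over P(f): sort by weight, assign marginal values while c_j >= 0."""
        order = sorted(E, key=lambda j: c[j], reverse=True)
        x, S = {j: 0 for j in E}, set()
        for j in order:
            if c[j] < 0:
                break                        # elements with negative weight get x_j = 0
            x[j] = f(S | {j}) - f(S)         # the marginal value f(S^j) - f(S^{j-1})
            S.add(j)
        return x

    # illustrative submodular f: the number of 'slots' covered by a set of elements
    covers = {'a': {1, 2}, 'b': {2, 3}, 'c': {3}, 'd': {4}}
    f = lambda S: len(set().union(*(covers[j] for j in S))) if S else 0
    c = {'a': 5, 'b': 4, 'c': 3, 'd': -1}
    print(greedy_polymatroid(c, f, list(covers)))   # {'a': 2, 'b': 1, 'c': 0, 'd': 0}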

If we drop the requirement that {f} be non-decreasing then {\max\{cz: z \in P(f)\}} can still be solved by using the greedy algorithm because

\displaystyle \max\{cz: z \in P(f)\} = \max \{cz: z \in P(f_{mon})\}.

Here {f_{mon}(S) = \min\{f(T): S \subseteq T \subseteq E\}}, which is submodular and non-decreasing. Now I describe a non-constructive proof of Theorem 3 which uses an `uncrossing’ trick that is good to know. Let {y^*} be an optimal solution to the dual of the polymatroid optimization problem. Let {{\cal F} = \{S: y^*_S > 0\}}. Then, we can assume that {{\cal F}} is laminar. If true, this means that the optimal basis in the dual is TUM. But this is also an optimal basis for the primal. Thus, there is an optimal primal solution that is integral. OK, suppose {{\cal F}} is not laminar. Then there exist {S, T \in {\cal F}} such that {S \cap T \neq \emptyset} and neither is contained in the other. Construct a new dual solution as follows: increase {y^*_{S \cup T}} and {y^*_{S \cap T}} by {\epsilon} each, and decrease {y^*_S} and {y^*_T} by {\epsilon} each, where {0 < \epsilon \leq \min(y^*_S, y^*_T)}. This new dual solution is feasible. The change in objective function value is

\displaystyle \epsilon f(S \cup T) + \epsilon f(S \cap T) - \epsilon f(S) - \epsilon f(T)

which cannot exceed zero by submodularity. This is the uncrossing step. Repeated application produces an optimal dual solution whose support is laminar.
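
A sketch of one way to carry out the uncrossing on a dual solution stored as a dictionary from frozensets to weights; the particular dual values are hypothetical, and the loop simply repeats the step above until no crossing pair remains.

    def crossing_pair(support):
        """Return two sets that cross: they overlap and neither contains the other."""
        for S in support:
            for T in support:
                if S & T and not S <= T and not T <= S:
                    return S, T
        return None

    def uncross(y):
        y = dict(y)
        while True:
            pair = crossing_pair([S for S, v in y.items() if v > 0])
            if pair is None:
                return y                          # the support is now laminar
            S, T = pair
            eps = min(y[S], y[T])                 # keeps the new solution non-negative
            # by submodularity this swap does not increase the dual objective
            for A, d in ((S, -eps), (T, -eps), (S | T, +eps), (S & T, +eps)):
                y[A] = y.get(A, 0) + d

    # hypothetical dual solution whose support is not laminar
    y = {frozenset('ab'): 1.0, frozenset('bc'): 0.5}
    print(uncross(y))   # the weight on {b,c} moves onto {a,b,c} and {b}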

The following remarkable result on polymatroid intersection is due to Edmonds.

Theorem 4 Let {f} and {g} be non-decreasing, integer valued, submodular functions on {E} with {f( \emptyset) = g(\emptyset) =0}. Then {P(f) \cap P(g)} is integral.

Proof: Pick an arbitrary integral {c} and consider {\max \{cx: x \in P(f) \cap P(g)\}}. The dual to this linear program is

\displaystyle  \min \sum_{S \subseteq E}f(S)y_S + \sum_{S \subseteq E}g(S)z_S

subject to

\displaystyle  \sum_{S \ni j}y_S + \sum_{S \ni j}z_S \geq c_j\quad \forall j \in E

\displaystyle  y_S, z_S \geq 0 \quad \forall S \subseteq E

Let {(y^*,z^*)} be an optimal dual solution. Let {{\cal F} = \{S: y^*_S > 0\}} and {{\cal G} = \{S: z^*_S > 0\}}. By uncrossing we can assume that each of these is laminar. Recall from the first session that the constraint matrix associated with {{\cal F}} can be arranged so that it has exactly one non-zero entry in each row. Similarly with {{\cal G}}. Hence, the constraint matrix formed from the concatenation of {{\cal F}} and {{\cal G}} (which is also the optimal basis) is TUM. {\Box}

From the proof we get another result for free. Let {{\cal F}} and {{\cal G}} be two laminar collections on the same ground set {E}. Consider the following polyhedron:

\displaystyle  \sum_{j \in T} x_j \leq b_T\,\, \forall T \in {\cal F}

\displaystyle \sum_{j \in T} x_j \leq b'_T\,\, \forall T \in {\cal G}

As long as the {b}‘s are integral, this polyhedron is integral.

How is this useful? Return to the SS model. For any set of edges {K} (of the form {(i,j)} with {i \in B} and {j \in S}) define {f(K)} to be the number of vertices in {B} that the edges in {K} are incident to. Similarly, define {g(K)} to be the number of vertices in {S} that edges in {K} are incident to. Note {f} and {g} are non-decreasing and submodular. Here is an equivalent formulation of the problem of finding an efficient assignment:

\displaystyle \max \sum_{i \in B}\sum_{j \in S}u_{ij}x_{ij}

subject to

\displaystyle  x \in P(f) \cap P(g)

Why is this useful? Notice that by choosing {f} and {g} differently we can model constraints beyond `each vertex is incident to at most one edge’.
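
For instance, here is a small sketch (using scipy.optimize.linprog on a made-up {2 \times 2} instance with hypothetical utilities) of the formulation above, with {f} and {g} the vertex-coverage functions just described, written out through all the inequalities defining {P(f)} and {P(g)}. The optimum it returns is integral, as Theorem 4 promises.

    from itertools import combinations
    from scipy.optimize import linprog

    B, S = ['b1', 'b2'], ['s1', 's2']
    edges = [(i, j) for i in B for j in S]
    u = {('b1', 's1'): 3, ('b1', 's2'): 1, ('b2', 's1'): 2, ('b2', 's2'): 2}  # made-up utilities

    def f(K):   # number of vertices in B covered by the edge set K
        return len({i for (i, j) in K})

    def g(K):   # number of vertices in S covered by the edge set K
        return len({j for (i, j) in K})

    # one inequality sum_{e in K} x_e <= f(K), and one with g(K), for each set of edges K
    A_ub, b_ub = [], []
    for r in range(1, len(edges) + 1):
        for K in combinations(edges, r):
            row = [1.0 if e in K else 0.0 for e in edges]
            for h in (f, g):
                A_ub.append(row)
                b_ub.append(h(K))

    res = linprog([-u[e] for e in edges], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(edges), method='highs')
    print(dict(zip(edges, res.x)))   # integral optimum: x = 1 on (b1,s1) and (b2,s2)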