In an earlier pair of posts I discussed a class of combinatorial auctions in which agents have binary quadratic valuations. To formulate the problem of finding a welfare-maximizing allocation, let {x^k_i = 1} if object {i \in M} is given to agent {k \in N} and zero otherwise. Denote the utility of agent {k \in N} from consuming bundle {S \subseteq M} by

\displaystyle u_k(S) = \sum_{i \in S}u^k_i + \sum_{i, j \in S: i \neq j}w^k_{ij}.

The problem of maximizing total welfare is

\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{i \neq j}w^k_{ij}x^k_ix^k_j

subject to

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M

\displaystyle x^k_i \in \{0,1\}\,\, \forall i \in M, k \in N
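
Here is a small brute-force check of this program on a made-up instance (the data, and the convention that each pairwise term {w^k_{ij}} is stored once per unordered pair, are mine, just for illustration):

```python
from itertools import product

N = [0, 1]                                   # agents
M = [0, 1, 2]                                # objects
u = {(0, 0): 1, (0, 1): 2, (0, 2): 1,
     (1, 0): 2, (1, 1): 1, (1, 2): 2}        # linear terms u^k_i
w = {(0, 0, 1): 3, (1, 0, 1): 1,             # pairwise terms w^k_{ij}, stored once
     (0, 1, 2): -2, (1, 1, 2): -1}           # per unordered pair {i, j}

def welfare(assignment):
    """assignment[i] is the agent receiving object i, or None if unallocated."""
    total = 0
    for k in N:
        bundle = [i for i in M if assignment[i] == k]
        total += sum(u[(k, i)] for i in bundle)
        total += sum(w.get((k, i, j), 0) for i in bundle for j in bundle if i != j)
    return total

# enumerate all allocations (each object to an agent or to nobody)
best = max(product(N + [None], repeat=len(M)), key=welfare)
print(best, welfare(best))
```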

I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable class of instances of the welfare maximization problem. They impose two conditions. The first, called sign consistency, requires that for each pair {i, j \in M} the terms {w^k_{ij}} and {w^r_{ij}} have the same sign for all agents {k, r \in N}.

Let {G^w} be the graph with vertex set {M} that contains the edge {(i,j)} whenever {w^k_{ij} \neq 0} for some {k \in N}. Because of the sign consistency condition we can label each edge of {G^w} as positive or negative according to the sign of {w^k_{ij}}. Let {E^+ = \{(i,j): w^k_{ij} > 0\}} and {E^- = \{(i,j): w^k_{ij} < 0\}}. The second condition is that {G^w} be a tree.
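
A small sketch of how one might check the two conditions on a toy instance (the data and helper names are mine, not from the paper):

```python
from itertools import combinations

def build_signed_edges(M, N, w):
    """Split the pairs with nonzero w^k_{ij} into E^+ and E^-, checking sign consistency."""
    E_plus, E_minus = [], []
    for i, j in combinations(M, 2):
        signs = {1 if w[(k, i, j)] > 0 else -1
                 for k in N if w.get((k, i, j), 0) != 0}
        if len(signs) > 1:
            raise ValueError(f"sign consistency fails on pair {(i, j)}")
        if signs == {1}:
            E_plus.append((i, j))
        elif signs == {-1}:
            E_minus.append((i, j))
    return E_plus, E_minus

def is_tree(M, edges):
    """A graph on vertex set M is a tree iff it has |M| - 1 edges and is connected."""
    if len(edges) != len(M) - 1:
        return False
    adj = {i: [] for i in M}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, stack = {M[0]}, [M[0]]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == len(M)

M, N = [0, 1, 2], [0, 1]
w = {(0, 0, 1): 3, (1, 0, 1): 1, (0, 1, 2): -2, (1, 1, 2): -1}
E_plus, E_minus = build_signed_edges(M, N, w)
print(E_plus, E_minus, is_tree(M, E_plus + E_minus))   # [(0, 1)] [(1, 2)] True
```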

The following is the relaxation that they consider:

\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{(i,j) \in E^+ \cup E^-}w^k_{ij}z^k_{ij}

subject to

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M

\displaystyle z^k_{ij} \leq x^k_i, x^k_j\,\, \forall k \in N, (i,j) \in E^+

\displaystyle z^k_{ij} \geq x^k_i + x^k_j - 1\,\, \forall k \in N, (i,j) \in E^-

\displaystyle 0 \leq x^k_i, z^k_{ij} \leq 1\,\, \forall i \in M, (i,j) \in E^+ \cup E^-, k \in N
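
For concreteness, here is a sketch of the relaxation on the same kind of toy instance, solved with scipy.optimize.linprog (the instance is made up; linprog minimizes, so the welfare objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

N, M = [0, 1], [0, 1, 2]
E_plus, E_minus = [(0, 1)], [(1, 2)]
u = {(0, 0): 1, (0, 1): 2, (0, 2): 1, (1, 0): 2, (1, 1): 1, (1, 2): 2}
w = {(0, 0, 1): 3, (1, 0, 1): 1, (0, 1, 2): -2, (1, 1, 2): -1}

# index the variables x^k_i and z^k_{ij}
x_idx = {(k, i): t for t, (k, i) in enumerate([(k, i) for k in N for i in M])}
z_idx = {(k, i, j): len(x_idx) + t
         for t, (k, i, j) in enumerate([(k, i, j) for k in N for (i, j) in E_plus + E_minus])}
n = len(x_idx) + len(z_idx)

# linprog minimizes, so negate the welfare objective
c = np.zeros(n)
for (k, i), t in x_idx.items():
    c[t] = -u[(k, i)]
for key, t in z_idx.items():
    c[t] = -w[key]

A, b = [], []
for i in M:                                  # each object goes to at most one agent
    row = np.zeros(n)
    for k in N:
        row[x_idx[(k, i)]] = 1
    A.append(row); b.append(1)
for k in N:
    for (i, j) in E_plus:                    # z^k_{ij} <= x^k_i and z^k_{ij} <= x^k_j
        for v in (i, j):
            row = np.zeros(n)
            row[z_idx[(k, i, j)]] = 1
            row[x_idx[(k, v)]] = -1
            A.append(row); b.append(0)
    for (i, j) in E_minus:                   # z^k_{ij} >= x^k_i + x^k_j - 1
        row = np.zeros(n)
        row[x_idx[(k, i)]] = 1
        row[x_idx[(k, j)]] = 1
        row[z_idx[(k, i, j)]] = -1
        A.append(row); b.append(1)

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, 1)] * n)
print(np.round(res.x, 6))   # should be 0/1 on sign-consistent tree instances
```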

Denote by {P} the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of {P} are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: “why the bleep don’t you just read the proof we have.”

Let {{\cal C}} be the set of maximal connected components of {G^w} after deletion of the edges in {E^-} (call the resulting graph {G^+}). The proof will be by induction on {|{\cal C}|}. The case {|{\cal C}| = 1} follows from total unimodularity; I prove this later.

Suppose {|{\cal C}| > 1}. Let {({\bar z}, {\bar x})} be an optimal solution to our linear program. We can choose {({\bar z}, {\bar x})} to be an extreme point of {P}. As {G^w} is a tree, there must exist a {C \in {\cal C}} incident to exactly one negative edge, say {(p,q)} with {p \in C}. Denote by {P'} the polyhedron {P} restricted to just the vertices of {C} and by {Q} the polyhedron {P} restricted to just the vertices in the complement of {C}. By the induction hypothesis, both {P'} and {Q} are integral polyhedra. Each extreme point of {P'} (respectively {Q}) assigns each vertex of {C} (of the complement of {C}) to at most one agent. Let {X_1, X_2, \ldots, X_a} be the set of extreme points of {P'}. If in extreme point {X_r}, vertex {p} is assigned to agent {k} we write {X_{rk} = 1} and zero otherwise. Similarly with the extreme points {Y_1, Y_2, \ldots, Y_b} of {Q}. Thus, {Y_{rk} = 1} if {Y_r} assigns vertex {q} to agent {k}. Let {v(X_{r})} be the objective function value of the assignment {X_r}; similarly for {v(Y_{r})}.

Now {({\bar z}, {\bar x})} restricted to {P'} can be expressed as a convex combination {\sum_r\lambda_{r}X_{r}} of the extreme points of {P'}. Similarly, {({\bar z}, {\bar x})} restricted to {Q} can be expressed as {\sum_r\mu_{r}Y_{r}}. We can now reformulate our linear program as follows:

\displaystyle \max \sum_r\lambda_{r}v(X_{r}) + \sum_r\mu_{r}v(Y_{r}) -\sum_{k \in N} |w^k_{pq}|y^k_{pq}

subject to

\displaystyle -\sum_r\lambda_{r} = -1

\displaystyle -\sum_r\mu_{r} = -1

\displaystyle \sum_{r: X_{rk} = 1}\lambda_{r} + \sum_{r: Y_{rk} = 1}\mu_{r} \leq y^k_{pq} + 1\,\, \forall k \in N

\displaystyle \lambda_{r}, \mu_{r}, y^k_{pq} \geq 0\,\, \forall r, k

The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because {X_{rk}} and {X_{rk'}} cannot both be 1, and similarly for the {Y}'s). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can easily be modified to the case where every cycle in {G^w} contains a positive even number of negative edges.
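
To see the sign pattern concretely, rewrite the coupling constraint with {y^k_{pq}} moved to the left-hand side (this is only a restatement of the constraint above):

\displaystyle \sum_{r: X_{rk} = 1}\lambda_{r} + \sum_{r: Y_{rk} = 1}\mu_{r} - y^k_{pq} \leq 1\,\, \forall k \in N

In this form each {\lambda_r} receives a {-1} from its convexity constraint and a {+1} from at most one coupling constraint (the one for the unique agent {k} with {X_{rk} = 1}, if any); the same holds for each {\mu_r}; and each {y^k_{pq}} appears with a single {-1}. Every column therefore has at most two nonzero entries, and when there are two they have opposite signs.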

Return to the case {|{\cal C}| = 1}. Consider the polyhedron {P} restricted to just one {C \in {\cal C}}. It will have the form:

\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in C

\displaystyle z^k_{ij} \leq x^k_i, x^k_j\,\, \forall k \in N, (i,j) \in E^+ \cap (C \times C)

\displaystyle 0 \leq x^k_i, z^k_{ij} \leq 1\,\, \forall i \in C, k \in N

Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset {S} of rows (constraints). The goal is to partition them into two sets {L} and {R} so that, column by column, the sum of the entries of the rows in {L} and the sum of the entries of the rows in {R} differ by at most one.
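
As a sanity check, here is a brute-force verification of the GH condition on a tiny instance of this system (my own example: one agent and a path on three objects with positive edges {(0,1)} and {(1,2)}):

```python
from itertools import combinations, product

# columns are ordered [x_0, x_1, x_2, z_01, z_12]
rows = [
    [ 1,  0,  0, 0, 0],   # x_0 <= 1  (one agent, so the packing constraint is just x_0 <= 1)
    [ 0,  1,  0, 0, 0],   # x_1 <= 1
    [ 0,  0,  1, 0, 0],   # x_2 <= 1
    [-1,  0,  0, 1, 0],   # z_01 - x_0 <= 0
    [ 0, -1,  0, 1, 0],   # z_01 - x_1 <= 0
    [ 0, -1,  0, 0, 1],   # z_12 - x_1 <= 0
    [ 0,  0, -1, 0, 1],   # z_12 - x_2 <= 0
]
n_cols = len(rows[0])

def gh_split_exists(subset):
    """Is there an L/R split of `subset` whose signed column sums all lie in {-1, 0, 1}?"""
    for signs in product((1, -1), repeat=len(subset)):        # +1 = L, -1 = R
        if all(abs(sum(s * rows[r][c] for s, r in zip(signs, subset))) <= 1
               for c in range(n_cols)):
            return True
    return False

# Ghouila-Houri: the matrix is totally unimodular iff every subset of rows admits such a split.
print(all(gh_split_exists(sub)
          for size in range(len(rows) + 1)
          for sub in combinations(range(len(rows)), size)))    # True
```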

Observe that the rows associated with the constraints {\sum_{k \in N}x^k_i \leq 1} have disjoint supports, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows so as to satisfy the GH theorem. If {z^k_{ij} - x^k_i \leq 0} is present in {S} but {z^k_{ij} - x^k_j \leq 0} is absent (or vice-versa), we are free to assign the row associated with {z^k_{ij} - x^k_i \leq 0} in whatever way is needed to satisfy the GH theorem. The difficulty arises when both {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0} are present in {S}. To satisfy the GH theorem we may have to ensure that the rows associated with {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0} are separated.

When {S} is the set of all constraints, we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows to {L} and {R}, making sure that after each assignment the conditions of the GH theorem hold for the rows assigned so far. It will be clear that this procedure can also be applied when only a subset of the constraints is present (indeed, satisfying the GH theorem is easier in that case).

Fix an agent {k \in N}; the following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in {C} (which is a tree) to be a root and direct all edges away from the root (when {S} is a proper subset of the constraints, delete from {C} any edge {(i,j)} for which at most one of the pair {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0} appears in {S}). Label the root {L}, label all its neighbors {R}, label the neighbors of the neighbors {L}, and so on. If vertex {i \in C} was labeled {L}, assign the row {\sum_{k \in N}x^k_i \leq 1} to the set {L}, otherwise to the set {R}. This produces a partition of the constraints of the form {\sum_{k \in N}x^k_i \leq 1} satisfying GH.
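
A minimal sketch of this alternating labelling on a made-up tree (the adjacency list and the choice of root are arbitrary):

```python
from collections import deque

adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}   # a small tree C on vertices {0, 1, 2, 3}
root = 0                                       # arbitrary choice of root

label = {root: "L"}
queue = deque([root])
while queue:
    v = queue.popleft()
    for nb in adj[v]:
        if nb not in label:
            label[nb] = "R" if label[v] == "L" else "L"   # alternate along each edge
            queue.append(nb)

print(label)   # the row sum_k x^k_i <= 1 is assigned to the set label[i]
```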

Initially, all leaves and edges of {C} are unmarked. Trace out a path from the root to one of the leaves of {C} and mark that leaf. Each unmarked directed edge {(i,j)} on this path corresponds to the pair {z^k_{ij} - x^k_i \leq 0} and {z^k_{ij} - x^k_j \leq 0}. Assign {z^k_{ij} - x^k_i \leq 0} to the set that is the label of vertex {i}, and assign {z^k_{ij} - x^k_j \leq 0} to the set that is the label of vertex {j}. Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge {(i,j)}. If we were to repeat this procedure unchanged with another path from the root to an unmarked leaf, we could violate the GH theorem. To see why, suppose the tree contains the edge {(i,j)} as well as {(i,t)}. Suppose {i} was labeled {L} on the first iteration and {(i,j)} was marked. This means {z^k_{ij} - x^k_{i} \leq 0} was assigned to {L}. Subsequently {z^k_{it} - x^k_i \leq 0} would also be assigned to {L}, which can produce a partition that violates the GH theorem. We avoid this problem by flipping the labels on all the vertices before repeating the path tracing procedure.