A recent paper by Bergemann, Brooks and Morris (BBM) supposes a monopolist free to segment the market in any way she pleases (without worrying about arbitrage), and asks: what is the achievable set of pairs of producer and consumer surplus? BBM give a simple and satisfying answer to this question. This post attempts a short proof of their characterization.

A monopolist faces a market consisting of buyers with valuations ${\{v_1, v_2, \ldots, v_n\}}$. Order them so that ${v_1 < v_2 < \ldots < v_n}$. The number of buyers with valuation ${v_i}$ is ${d_i}$; assume buyers are divisible. A segmentation of the market is a partition of the buyers into up to ${n}$ markets with the property that the profit maximizing price in market ${j = 1, \ldots, n}$ is ${v_j}$. If we let ${x_{ij}}$ be the number of buyers with valuation ${v_i}$ in market ${j}$, then any segmentation is characterized by the following:

$\displaystyle \sum_{j=1}^{n}x_{ij} = d_i\,\, \forall i = 1, \ldots, n$

$\displaystyle v_j\sum_{r \geq j}x_{rj} \geq v_t \sum_{r \geq t}x_{rj}\,\, \forall j = 1, \ldots, n, \,\, \forall t \neq j$

$\displaystyle x_{ij} \geq 0\,\, \forall i,j$

Denote by ${X(d)}$ the set of feasible segmentations. Let ${P = \sum_{j=1}^{n}v_j\sum_{r= j}^nx_{rj}}$ be the profit earned by the monopolist under the segmentation ${x \in X(d)}$. The consumer surplus of buyers under the segmentation ${x \in X(d)}$ is ${C = \sum_{j=1}^{n}\sum_{r = j}^n[v_r-v_j]x_{rj}.}$

It is easy to see that ${P \leq \sum_{j=1}^nv_jd_j = \Pi_1(d)}$. The upper bound follows from the segmentation that assigns all buyers with valuation ${v_j}$ to the ${j^{th}}$ market and no others. This corresponds to first degree price discrimination. It is also easy to see that ${P \geq \max_{1 \leq j \leq n} v_j\sum_{r \geq j}d_r = \Pi_0(d)}$. The lower bound comes from the segmentation that assigns all buyers to market ${k}$, where ${v_k}$ is the profit maximizing monopoly price without discrimination. BBM show the following:
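Since both bounds are just arithmetic on the instance, they are easy to compute directly. Here is a minimal Python sketch (the helper name `profit_bounds` and the numbers are mine, not from BBM):

```python
# Compute the two profit bounds for a market given by sorted valuations v
# (v[0] < v[1] < ...) and masses d: Pi_1 from first degree discrimination,
# Pi_0 from the best uniform monopoly price.

def profit_bounds(v, d):
    n = len(v)
    # Pi_1: perfect discrimination extracts every buyer's full value.
    pi_1 = sum(v[i] * d[i] for i in range(n))
    # Pi_0: a uniform price v[j] sells to every buyer with value >= v[j].
    pi_0 = max(v[j] * sum(d[r] for r in range(j, n)) for j in range(n))
    return pi_0, pi_1

print(profit_bounds([1, 2, 3], [3, 2, 1]))  # → (6, 10)
```

Any feasible segmentation's profit lands in this interval, which is the range the theorem refers to.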

Theorem ${(P,C)}$ is feasible iff ${\Pi_0(d) \leq P \leq \Pi_1(d)}$ and ${P + C \leq \sum_jv_jd_j}$.

That ${\Pi_0(d) \leq P \leq \Pi_1(d)}$ is straightforward. The hard part is to show two things.

1) For any ${P}$ such that ${\Pi_0(d) \leq P \leq \Pi_1(d)}$ there is a ${C}$ such that ${P + C = \sum_{j=1}^nv_jd_j}$.

2) There exists an ${x \in X(d)}$ such that ${P = \Pi_0(d)}$ and ${C = 0}$.

To prove the first item (which BBM note in the paper is easy) on this list, call a segmentation ${x \in X(d)}$ upper triangular if ${x_{ij} = 0}$ for all ${i < j}$. Under an upper triangular segmentation every buyer purchases, so ${P + C = \sum_{j=1}^nv_jd_j}$. Suppose, then, that ${x \in X(d)}$ maximizes ${P + C}$ but is not upper triangular: there is a market ${k}$ and a type ${t < k}$ with ${x_{tk} > 0}$; choose ${t}$ to be the largest such index. Note ${v_k > v_t}$.

Let ${r = \arg \max_{1 \leq q \leq t}v_q\sum_{i=q}^tx_{ik}}$. We construct a new segmentation ${y \in X(d)}$ from ${x}$ by shifting the buyers in market ${k}$ with values below ${v_k}$ into market ${r}$. As the profit maximizing price for this portion of buyers alone is ${v_r}$, moving them into market ${r}$ leaves the profit maximizing price there unchanged. Formally:

${y_{ij} = x_{ij}}$ for all ${i}$ and ${j \neq k, r}$.
${y_{ik} = x_{ik}}$ for all ${i \geq k}$.
${y_{ik} = 0}$ for all ${i \leq t}$.
${y_{ir} = x_{ir} +x_{ik}}$ for all ${i \leq t}$.

Under segmentation ${y}$, both ${P}$ and ${C}$ increase in value, contradicting the initial choice of ${x}$.

To prove the second item on the list, among all feasible segmentations such that ${P = \Pi_0(d)}$, choose one, say ${x}$, that minimizes ${C}$. Call ${x}$ lower triangular if ${x_{ij} = 0}$ for all ${i > j}$. I show that ${x}$ must be lower triangular, from which it follows that ${C = 0}$. The proof is by induction on ${n}$, the number of distinct valuations.

The case ${n=2}$ is straightforward. Suppose first that ${\Pi_0(d) = v_1(d_1 + d_2)}$. The following segmentation, as can be verified, does the trick:

${x_{22} = d_2}$
${ x_{12} = \alpha}$ where ${v_1[\alpha + d_2] = v_2d_2}$
${x_{21} = 0}$
${x_{11} = d_1 - \alpha}$

If ${\Pi_0(d) = v_2d_2}$, the segmentation that assigns all buyers to market 2 will have the requisite property.
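The ${n=2}$ construction can be checked numerically. A hedged sketch (the instance and the helper name `two_type_segmentation` are made up for illustration; it assumes the case ${\Pi_0(d) = v_1(d_1 + d_2)}$):

```python
# Verify the n = 2 segmentation above on a concrete instance in which the
# uniform monopoly price is v1, i.e. Pi_0 = v1 * (d1 + d2).

def two_type_segmentation(v1, v2, d1, d2):
    # alpha solves v1 * (alpha + d2) = v2 * d2: low types are added to
    # market 2 until price v1 there ties with price v2.
    alpha = (v2 - v1) * d2 / v1
    assert alpha <= d1, "needs Pi_0 = v1*(d1+d2), i.e. v1*d1 >= (v2-v1)*d2"
    # Market 1: d1 - alpha low types at price v1.
    # Market 2: d2 high types (who buy) plus alpha low types (who do not) at price v2.
    profit = v1 * (d1 - alpha) + v2 * d2
    # Every purchasing buyer pays exactly her value, so consumer surplus is 0.
    surplus = 0
    return alpha, profit, surplus

v1, v2, d1, d2 = 1, 2, 3, 2
alpha, P, C = two_type_segmentation(v1, v2, d1, d2)
print(alpha, P, C)  # alpha = 2.0, P = v1*(d1+d2) = 5.0, C = 0
```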

Now consider the case of arbitrary ${n}$ and suppose first that ${\Pi_0(d) > v_1\sum_{i=1}^nd_i}$. Given an instance ${d}$ on ${n}$ valuations construct an instance ${d'}$ on ${n-1}$ valuations ${\{v'_2, \ldots, v'_{n}\}}$ by setting ${d'_i = d_i}$ for all ${i \geq 2}$. It is easy to see that ${\Pi_0(d') = \Pi_0(d)}$, i.e., the optimal monopoly profit with no discrimination remains unchanged. By the induction hypothesis there is a segmentation ${y \in X(d')}$ that is lower triangular. To conclude the argument we must show how to convert ${y}$ into a legitimate segmentation for ${d}$.

${x_{ij} = y_{ij}}$ for ${i,j \geq 2}$.
${x_{1i} = \mu_i}$ where ${v_1[\mu_i + \sum_{r=2}^nx_{ri}] \leq v_ix_{ii}}$ for all ${i \geq 1}$ and ${\sum_{i=1}^n\mu_i = d_1}$.
If the ${\mu_i}$'s can indeed be chosen as specified, then ${x \in X(d)}$ is lower triangular and the corresponding ${P = \Pi_0(d)}$ and ${C = 0}$. To verify that appropriate ${\mu_i}$'s exist, it is enough to check that
$\displaystyle d_1 \leq \sum_{i=1}^n\frac{v_i}{v_1}x_{ii} - \sum_{i=1}^n \sum_{r=2}^nx_{ri} = \frac{1}{v_1}\Pi_0(d') - \sum_{r=2}^nd_r$

which follows from the hypothesis that ${\Pi_0(d) > v_1\sum_{r=1}^nd_r}$.

To conclude, suppose now that ${\Pi_0(d) = v_1\sum_{i=1}^nd_i}$. Construct a new instance ${d'}$ on ${n-1}$ valuations ${\{v'_1, v'_3, \ldots, v'_{n}\}}$ by setting ${d'_i = d_i}$ for all ${i \geq 3}$ and ${d'_1 = d_1 + d_2}$. Notice ${\Pi_0(d') = \Pi_0(d)}$. By the induction hypothesis there is a segmentation ${y \in X(d')}$ that is lower triangular. To conclude the argument we must show how to convert ${y}$ into a legitimate segmentation for ${d}$.

${x_{ij} = y_{ij}}$ for ${i,j \geq 3}$.
${x_{2i} = \mu_i}$ for all ${i \geq 1}$ where ${v_2[\mu_i + \sum_{r \geq 3}x_{ri}] \leq v_ix_{ii}}$, ${\mu_i \leq y_{1i}}$ and ${\sum_{i=1}^n\mu_i = d_2}$.
${x_{1i} = y_{1i} - \mu_i}$ for all ${i \geq 1}$.

If the ${\mu_i}$'s can be chosen as specified, then ${x}$ is lower triangular, lies in ${X(d)}$, and the corresponding ${P = \Pi_0(d)}$ and ${C = 0}$. Verifying that appropriate ${\mu_i}$'s exist can be done in the same way as the previous case.

This week saw the start of my second attempt at a graduate course on market design. Although it is a reading course (in the sense that students present papers), I began with a survey of what I think are the main ideas one should be familiar with. With over 20 people in the room, I was sweating bullets, having expected at most half that. I will post the substance of the first lecture at a later date. In the next session I will use electricity markets as a vehicle to show how one should think about the questions to be answered, as well as the kinds of issues one faces when designing a market.

The list of papers to be covered appears below, along with the instructions I gave the students on presentations.

Presentation Rules: I expect every presentation to accomplish the following (in addition to the usual of having well prepared slides, understanding and answering questions etc.):

1. Provide background for the issue to be discussed.
2. An unambiguous statement of the issue and an explanation for why its resolution is not obvious.
3. A description of the resolution proposed by the author(s) of the paper. This is where one discusses the model in the paper.
4. An exposition of the analysis of the model. A good exposition will focus on the main point and perhaps even identify a telling example that conveys it as well as the full model does. The point of such an exposition is to show the role of each assumption in arriving at the authors' main conclusion.
5. A critique. Did the authors address the right question? Is the way they chose to frame the issue sensible? Is the model persuasive? If not, why not? Simply saying it is not general enough is insufficient. No model, if it is to be tractable, can be fully general.
6. Suggestions for future research.

Topics & Papers

1. IPOs

Jagannathan and Sherman: Why Do IPO Auctions Fail?

Ritter: Equilibrium in the IPO market

Ljungqvist: IPO Underpricing: A Survey
Ritter and Welch: A review of IPO activity, pricing and allocations

2. Health Care & Insurance Markets

Cochrane, J.: Time Consistent Health Insurance, JPE 1995.
Cochrane, J.: After the ACA: Freeing the market for health care
Fang & Gavazza: Dynamic Inefficiencies in an Employment-Based Health Insurance System: Theory and Evidence
Handel, Hendel and Whinston: Equilibria in Health Exchanges: Adverse Selection vs. Reclassification Risk
Levine, Kremer and Albright: Making Markets for Vaccines
Kremer: Creating Markets for New Vaccines Part 1: rationale
Kremer: Creating Markets for New Vaccines Part 2: design issues

Rothschild, M. & J. Stiglitz: Equilibrium in Competitive Insurance Markets, QJE 1976.
Fang, H: Insurance Markets in China

3. Market for Cybersecurity Insurance

Kesan, Majuca & Yurcik: Cyberinsurance as a market based solution to the problem of cybersecurity

4. Affirmative Action

Hickman: Effort, Race Gaps and Affirmative Action: A Game Theoretic Analysis of College Admissions
Chung: Affirmative Action as an Implementation Problem
Fryer and Loury: Valuing Identity: The simple economics of Affirmative action policies

Fang and Moro: Theories of Statistical Discrimination and Affirmative Action: a survey

5. Assigning Counsel

Friedman & Schulhofer: Rethinking Indigent Defense: Promoting Effective Representation Through Consumer Sovereignty and Freedom of Choice for All Criminal Defendants

6. The Role of Politics

Acemoglu: Why not a political Coase theorem?
Acemoglu: Modeling Inefficient Institutions

In part two, as promised, I turn to the welfare maximization problem. To formulate the problem of finding a welfare maximizing allocation let ${x^k_j = 1}$ if object ${j \in M}$ is given to agent ${k \in N}$ and zero otherwise. Denote the utility of agent ${k \in N}$ from consuming bundle ${S \subseteq M}$ by

$\displaystyle u_k(S) = \sum_{i \in S}u^k_i + \sum_{i, j \in S}w^k_{ij}.$

The problem of maximizing total welfare is

$\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{i \neq j}w^k_{ij}x^k_ix^k_j$

subject to

$\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M$

$\displaystyle x^k_i \in \{0,1\}\,\, \forall i \in M, k \in N$

Welfare maximization with BQP preferences is in general NP-hard. One proof relies on a reduction from the multiway cut problem. Given a graph ${G=(V,E)}$ with edge weight ${c_{ij}}$ for each ${(i,j) \in E}$ and a set of terminal vertices ${T = \{v_1, v_2, \ldots, v_k\}}$, a multiway cut is a set of edges whose removal disconnects every pair of terminal vertices. The problem of finding the multiway cut of minimum total weight is called the multiway cut problem. When ${T}$ consists of only two terminals (${k = 2}$), the problem reduces to the well known minimum cut problem. For ${k \geq 3}$ the problem is NP-hard even on planar graphs.

We can obtain the multiway cut problem by setting ${u^k_i = 0}$ for all ${k \in N}$ and ${i \in M}$ and ${w^k_{ij} = c_{ij}}$ for all ${k \in N}$ and ${(i,j) \in E}$. Any pair ${(i,j)}$ such that ${x^k_ix^r_j = 1}$ for ${k \neq r}$ will be part of a multiway cut. This reduction implies that welfare maximization is NP-hard even when ${w^k_{ij} \geq 0}$ for all ${k \in N}$ and ${i,j \in M}$, in contrast to the case of surplus maximization.
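To make the reduction concrete, here is a brute-force sketch on a tiny graph. One liberty of mine: the reduction above sets all ${u^k_i = 0}$, while the sketch pins each terminal to its own agent directly inside the enumeration (equivalently, one could give agent ${k}$ a large bonus for its terminal); the graph and the helper name are made up for illustration.

```python
from itertools import product

# Brute-force check that welfare maximization with u = 0 and w^k_ij = c_ij
# recovers the multiway cut objective: edges kept inside one agent's bundle
# earn their weight, edges split between agents are cut.

def min_multiway_cut_via_welfare(nodes, edges, terminals):
    """edges: dict (i, j) -> weight; terminal k is pinned to agent k."""
    k = len(terminals)
    free = [v for v in nodes if v not in terminals]
    total = sum(edges.values())
    best_welfare = 0
    for assignment in product(range(k), repeat=len(free)):
        owner = {t: a for a, t in enumerate(terminals)}
        owner.update(dict(zip(free, assignment)))
        # welfare counts the edges whose endpoints share an owner
        welfare = sum(c for (i, j), c in edges.items() if owner[i] == owner[j])
        best_welfare = max(best_welfare, welfare)
    return total - best_welfare  # min cut weight = total weight - max welfare

nodes = [0, 1, 2, 3]
edges = {(0, 3): 2, (1, 3): 3, (2, 3): 1}  # a star with center 3
print(min_multiway_cut_via_welfare(nodes, edges, [0, 1, 2]))  # → 3
```

On the star, the best option keeps the heaviest edge (1, 3) inside agent 1's bundle and cuts the other two, for a cut of weight 3.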

Candogan, Ozdaglar and Parrilo (2013), the paper that prompted this post, identify a solvable instance of the welfare maximization problem. They impose two conditions. The first, called sign consistency, requires that for every pair ${i,j \in M}$, the terms ${w^k_{ij}}$ and ${w^r_{ij}}$ have the same sign for all ${k, r \in N}$. Sign consistency by itself is not sufficient to obtain a solvable instance. Another condition is needed. Let ${G^w}$ be a graph with vertex set ${M}$ and, for any ${i,j \in M}$ such that ${w^k_{ij} \neq 0}$, an edge ${(i,j)}$. The second condition is that ${G^w}$ be a tree. Interestingly, Erdős and Székely (1994) show that a generalization of the multiway cut problem, which corresponds to welfare maximization under sign consistency with ${u^k_i = 0}$ for all ${k \in N}$ and ${i \in M}$, is polynomially solvable when the underlying graph ${G}$ is a tree. The Candogan, Ozdaglar and Parrilo (COP) proof is based on a dynamic programming argument similar to the one used in Erdős and Székely (1994).

The key result in COP is that the following natural linearization of the welfare maximization problem admits an integral optimal solution.

$\displaystyle \max \sum_{k \in N}\sum_{i \in M}u^k_ix^k_i + \sum_{k \in N}\sum_{i \neq j}w^k_{ij}y^k_{ij}$

subject to

$\displaystyle \sum_{k \in N}x^k_i \leq 1\,\, \forall i \in M$

$\displaystyle y^k_{ij} \leq \min\{x^k_i, x^k_j\}\,\, \forall w^k_{ij} > 0$

$\displaystyle y^k_{ij} \geq x^k_i + x^k_j -1\,\, \forall w^k_{ij} < 0$

$\displaystyle x^k_i \in \{0,1\}\,\, \forall i \in M, k \in N$

There is a connection between the welfare maximization problem and packing subtrees into trees that I want to highlight. It suggests a possible avenue by which one might enlarge the class of preferences COP consider.

Because of the sign consistency condition we can label the edges of ${G^w}$ as being positive or negative depending on the sign of ${w^k_{ij}}$. Let ${E^+ = \{(i,j): w^k_{ij} \geq 0\}}$ and ${E^- = \{(i,j): w^k_{ij} < 0\}}$. Let ${{\cal C}}$ be the maximal connected components of ${G^w}$ after deletion of the edges in ${E^-}$ (call this ${G^+}$). For any ${C \in {\cal C}}$ and ${S \subseteq C}$ let

$\displaystyle v_k(S) = \sum_{i \in S}u^k_i + \sum_{i,j \in S}w^k_{ij}.$

By the way, for each ${C \in {\cal C}}$ and ${k \in N}$, ${v_k}$ is supermodular over the subsets of ${C}$. Let ${z_k(S) = 1}$ if ${S \subseteq C}$ is assigned to agent ${k}$ and zero otherwise. The problem of finding a welfare maximizing allocation can be expressed as follows:

$\displaystyle \max \sum_{k \in N}\sum_{C \in {\cal C}}\sum_{S \subseteq C}v_k(S)z_k(S) - \sum_{k \in N}\sum_{(i,j) \in E^-}|w^k_{ij}|y^k_{ij}$

subject to

$\displaystyle \sum_{S \ni i}\sum_{k \in N}z_k(S) \leq 1\,\, \forall i \in M,\,\, S \subseteq C \in {\cal C}$

$\displaystyle y^k_{ij} \geq \sum_{S \ni i}z_k(S) + \sum_{S \ni j}z_k(S) - 1\,\, \forall k \in N, (i,j) \in E^-$

$\displaystyle z_k(S) \in \{0,1\}\,\, \forall k \in N, S \subseteq C \in {\cal C}$

$\displaystyle y^k_{ij} \in \{0,1\}\,\, \forall k \in N, (i,j) \in E^-$

In the above program we can, without loss of generality, restrict attention to subsets ${S}$ that are subtrees (connected subgraphs) of ${G^+}$. To see why, suppose that in some optimal solution to the above integer program, ${z_k(S) = 1}$ where ${S}$ is not a subtree. Then we can write ${S = A \cup B}$, where ${A}$ and ${B}$ are disjoint and lie in the same component of ${{\cal C}}$ as ${S}$. Furthermore, there is no edge ${(i,j) \in E^+}$ with ${i \in A}$ and ${j \in B}$. Therefore ${v_k(S) = v_k(A) + v_k(B)}$. That means we can construct a new optimal solution by setting ${z_k(S) = 0}$ and raising ${z_k(A)}$ and ${z_k(B)}$ to 1. Note that in the original solution ${z_k(A) = z_k(B) = 0}$ by virtue of the first constraint. As long as some selected ${S}$ is not a subtree we can repeat this argument.

Hence, if we let ${{\cal T}}$ be the set of subtrees of ${G^+}$, the welfare maximization problem can be expressed as follows:

$\displaystyle \max \sum_{k \in N}\sum_{T \in {\cal T}}v_k(T)z_k(T) - \sum_{k \in N}\sum_{(i,j) \in E^-}|w^k_{ij}|y^k_{ij}$

subject to

$\displaystyle \sum_{T \ni i}\sum_{k \in N}z_k(T) \leq 1\,\, \forall i \in M$

$\displaystyle y^k_{ij} \geq \sum_{T \ni i}z_k(T) + \sum_{T \ni j}z_k(T) - 1\,\, \forall k \in N, (i,j) \in E^-$

$\displaystyle z_k(T) \in \{0,1\}\,\, \forall k \in N, T \in {\cal T}$

$\displaystyle y^k_{ij} \in \{0,1\}\,\, \forall k \in N, (i,j) \in E^-$

It is important to emphasize that no ${T \in {\cal T}}$ contains a negative edge.

Were it not for the second set of constraints, integrality would follow from Barany, Edmonds and Wolsey (1986). I'm not aware of this variation of the tree packing problem having been considered. A follow-up paper by Aghezzaf and Wolsey (1994) comes close, in the sense of allowing a piecewise linear concave objective function.

In 1961, Clarence Earl Gideon was charged with breaking and entering with intent to commit petty larceny. Appearing without counsel, Gideon invoked reverent authority:

The United States Supreme Court says I am entitled to be represented by Counsel.

Those words set in train a chain of events that confirmed, two years later, the right of indigent defendants in criminal proceedings, upon request, to have counsel appointed both during trial and on appeal. Qualified counsel, however, is scarce and indigent defendants are abundant. Over the years the state has chosen to solve the problem of matching counsel to indigent defendants by fiat. Judge Posner, in US v. Ely, justifies this as follows:

There are practical reasons for not giving indigent criminal defendants their choice of counsel. Appointed counsel are not paid at munificent rates under the Criminal Justice Act, 18 U.S.C. § 3006A(d); in the Central District of Illinois, in the most recent year for which data are available (1980), the average fee per case under the Act was only \$426.31. Director of Adm. Off. of U.S. Cts., 1982 Ann.Rep. 511 (Exh. C-1). The best criminal lawyers who accept appointments therefore limit the amount of time they are willing to devote to this relatively unremunerative type of work; some criminal lawyers, indeed, only reluctantly agree to serve as appointed counsel, under pressure by district judges to whom they feel a sense of professional obligation. The services of the criminal defense bar cannot be auctioned to the highest bidder among the indigent accused — by definition, indigents are not bidders. But these services must be allocated somehow; indigent defendants cannot be allowed to paralyze the system by all flocking to one lawyer.

Time to sharpen pencils and put on the thinking cap. For a graduate student looking for a topic in market design, I cannot think of a more interesting question than how to match counsel to indigent defendants. One has: moral hazard (on the part of the attorney), asymmetric information (how does one distinguish a good lawyer from a bad one?), informed third parties with divided interests (judges who appoint counsel but may be more interested in a speedy trial than a vigorous defense), budget constraints (on the part of defendants) and competing objectives (speedy resolution vs. correct adjudication). For a description of the institution as currently structured, and a proposal to revise it based on vouchers, see Friedman and Schulhofer (yes, Friedman fils).

In the analysis of combinatorial auctions two problems arise:

1. Surplus maximization: given prices for each item and a utility function over bundles of goods, find a bundle that maximizes the utility of a bundle less the price paid.
2. Welfare maximization: determine an allocation of goods to agents so as to maximize total welfare.

In general both problems are NP-hard, and therefore a number of scholars have sought to identify conditions on preferences (i.e., on ${u}$) that yield efficiently solvable instances of both problems. In this post I discuss a class of preferences, motivated by Cramton, Ausubel, McAfee and McMillan (1997), related to binary quadratic programs (BQP). A recent paper by Candogan, Ozdaglar and Parrilo (2013) prompted this two part post. I will say more about their paper in part 2. In this part I focus on surplus maximization.

Say that ${u}$ represents BQP preferences if

$\displaystyle u(S) = \sum_{i \in S}u_i + \sum_{i, j \in S}w_{ij}\,\, \forall S \subseteq M$

Interpret ${u_i}$ as the utility of consuming object ${i \in M}$ alone, and ${w_{ij}}$ as the added benefit from consuming both ${i}$ and ${j}$ (if ${w_{ij} > 0}$) or the disutility of doing so (if ${w_{ij} < 0}$). The surplus maximization problem for BQP preferences is

$\displaystyle \max_{S \subseteq M} \sum_{i \in S}u_i + \sum_{i, j \in S}w_{ij} - \sum_{i \in S}p_i$

which can be rewritten as a binary quadratic program

$\displaystyle \max \sum_{i \in M}(u_i - p_i)x_i + \sum_{i \neq j}w_{ij}x_ix_j$

subject to

$\displaystyle x_i \in \{0,1\}\,\, \forall i \in M$

This problem is NP-hard as it encodes the independent set problem. To see why, let ${G = (V, E)}$ be a graph; set ${u_i - p_i = 1}$ for all ${i \in M}$, ${w_{ij} = -K}$ for all ${(i,j) \in E}$ where ${K}$ is a large positive number, and ${w_{ij} = 0}$ for ${(i,j) \not\in E}$.
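A brute-force check of this encoding on a 4-cycle (exponential in ${|M|}$, so only for toy instances; the graph, the penalty ${K = 100}$ and the helper name `max_bqp_value` are mine):

```python
from itertools import chain, combinations

# Enumerate all bundles and maximize the BQP objective; w is counted once
# per unordered pair {i, j}, with pairs stored as (i, j) for i < j.

def max_bqp_value(M, net_u, w):
    subsets = chain.from_iterable(combinations(M, r) for r in range(len(M) + 1))
    def value(S):
        return (sum(net_u[i] for i in S)
                + sum(w.get((i, j), 0) for i in S for j in S if i < j))
    return max(map(value, subsets))

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (0, 3)]   # a 4-cycle
net_u = {i: 1 for i in V}              # u_i - p_i = 1 for every object
w = {e: -100 for e in E}               # K = 100 penalizes adjacent pairs
print(max_bqp_value(V, net_u, w))      # → 2, the max independent set size
```

With ${K}$ larger than ${|M|}$, no optimal bundle contains an edge, so the optimum equals the size of a maximum independent set, here {0, 2} or {1, 3}.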

If ${w_{ij} \geq 0}$ for all ${i,j \in M}$ (pairs of goods are complements) there is a straightforward linearization of the surplus maximization problem:

$\displaystyle \max \sum_{i \in M}(u_i - p_i)x_i + \sum_{i \neq j}w_{ij}y_{ij}$

subject to

$\displaystyle y_{ij} \leq x_i\,\, \forall i,j \in M$

$\displaystyle y_{ij} \leq x_j\,\, \forall i,j \in M$

$\displaystyle 0 \leq y_{ij}, x_i \leq 1\,\, \forall i, j \in M$

Note that the constraint matrix is totally unimodular. Therefore, the surplus maximization problem can be solved as a linear program.

Thus, if ${w_{ij} \geq 0}$ for all ${i,j}$, surplus maximization is easy. With no restriction on the ${w_{ij}}$'s, surplus maximization is NP-hard. Is there anything between these two extremes? Yes; Hansen and Simeone (1986) give us one case. To describe it, associate a graph with the ${w_{ij}}$'s: one vertex for each ${i \in M}$ and, for each pair ${(i,j)}$ such that ${w_{ij} \neq 0}$, an edge ${(i,j)}$. Call this graph ${G^w}$. Say that ${G^w}$ is sign balanced if it does not contain any cycle with an odd number of negative edges (edges with ${w_{ij} < 0}$). This implies that the vertices can be partitioned into two sets ${G_1}$ and ${G_2}$ such that deleting all negative edges disconnects ${G_1}$ from ${G_2}$, and ${\delta(G_1,G_2)}$, the set of edges between them, consists exactly of the negative edges.
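Sign balance is easy to test: 2-color the vertices, keeping the endpoints of a positive edge on the same side and the endpoints of a negative edge on opposite sides; a coloring conflict is exactly a cycle with an odd number of negative edges. A sketch (the helper name `is_sign_balanced` and the examples are mine, not from Hansen and Simeone):

```python
from collections import deque

# BFS 2-coloring: side 0 vertices play the role of G1, side 1 of G2.
# A positive edge forces equal sides, a negative edge forces opposite sides.

def is_sign_balanced(vertices, signed_edges):
    """signed_edges: dict (i, j) -> +1 or -1, the sign of w_ij."""
    adj = {v: [] for v in vertices}
    for (i, j), s in signed_edges.items():
        adj[i].append((j, s))
        adj[j].append((i, s))
    side = {}
    for start in vertices:
        if start in side:
            continue
        side[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for nbr, s in adj[v]:
                want = side[v] if s > 0 else 1 - side[v]
                if nbr not in side:
                    side[nbr] = want
                    queue.append(nbr)
                elif side[nbr] != want:
                    return False  # odd number of negative edges on a cycle
    return True

triangle = [0, 1, 2]
print(is_sign_balanced(triangle, {(0, 1): 1, (1, 2): -1, (2, 0): -1}))  # → True
print(is_sign_balanced(triangle, {(0, 1): 1, (1, 2): 1, (2, 0): -1}))   # → False
```

A triangle with two negative edges is balanced; with one negative edge it is not.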

Consider the following linear relaxation of the surplus maximization problem:

$\displaystyle \max \sum_{i \in M}(u_i - p_i)x_i + \sum_{i \neq j}w_{ij}y_{ij}$

subject to

$\displaystyle y_{ij} \leq x_i\,\, \forall w_{ij} \geq 0$

$\displaystyle y_{ij} \leq x_j\,\, \forall w_{ij} \geq 0$

$\displaystyle y_{ij} \geq x_i + x_j -1\,\, \forall w_{ij} < 0$

$\displaystyle 0 \leq y_{ij}, x_i \leq 1\,\, \forall i,j$

If ${G^w}$ is sign balanced, Hansen and Simeone showed that the constraint matrix of this linear relaxation is totally unimodular. Therefore, the relaxation is exact. Here is a short argument that establishes integrality of the relaxation based on randomized rounding (due to Bertsimas, Teo and Vohra (1999)).

1. Let ${({\bar x}, {\bar y})}$ be an optimal solution to the relaxation.
2. Generate a single random number ${U}$ uniformly in ${[0,1]}$.
3. Round ${x_{i}}$ to 1 if ${i \in G_1}$ and ${{\bar x}_i \geq U}$, or ${i\in G_2}$ and ${{\bar x}_i \geq 1-U}$.

Since ${1-U}$ is also uniformly distributed in ${[0,1]}$, ${P(x_i =1) = {\bar x}_i}$. For ${i,j}$ both in ${ G_1}$ or both in ${G_2}$, ${P(x_i x_j=1) = \min \{{\bar x}_i,{\bar x}_j\}.}$ For ${i\in G_1}$ and ${j\in G_2}$,

$\displaystyle P(x_i x_j=1) = P(U\leq {\bar x}_i,\, 1-U\leq {\bar x}_j) = \max\{0, {\bar x}_i+{\bar x}_j-1\}.$

At optimality, ${{\bar y}_{ij}= \min\{{\bar x}_i, {\bar x}_j\}}$ if ${w_{ij} \geq 0}$, and ${{\bar y}_{ij}=\max\{0, {\bar x}_i+{\bar x}_j-1\}}$ if ${w_{ij} < 0}$. The expected objective function value of the random integral solution generated in this way is

$\displaystyle \sum_{i\neq j} w_{ij} E[x_i x_j] + \sum_i [u_i-p_i] E[x_i]= \sum_{i \neq j} w_{ij} {\bar y}_{ij}+\sum_i [u_i-p_i] {\bar x}_i.$

Notice this is the optimal objective function value of the linear relaxation. Since some realization of the random integral solution must attain the expectation, the relaxation is exact, and this completes the argument.
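The rounding step can be sanity-checked numerically. In the sketch below the uniform draw is replaced by a fine deterministic grid of ${U}$ values, so the marginal probabilities can be verified exactly up to ${1/N}$; the fractional solution ${\bar x}$ and the partition into ${G_1}$ and ${G_2}$ are made-up inputs.

```python
# Dependent rounding driven by a single value U: a vertex i in G1 is rounded
# up when xbar_i >= U, a vertex in G2 when xbar_i >= 1 - U.

def round_once(xbar, G1, U):
    return {i: int((xbar[i] >= U) if i in G1 else (xbar[i] >= 1 - U))
            for i in xbar}

xbar = {0: 0.3, 1: 0.8, 2: 0.6}   # fractional LP values (made up)
G1 = {0, 1}                       # vertex 2 lies in G2

N = 10000
grid = [(k + 0.5) / N for k in range(N)]   # stand-in for U ~ Uniform[0, 1]
marg = {i: sum(round_once(xbar, G1, U)[i] for U in grid) / N for i in xbar}
both = sum(1 for U in grid
           if round_once(xbar, G1, U)[0] and round_once(xbar, G1, U)[2]) / N

print(marg)   # each marginal equals the LP value xbar_i (up to 1/N)
print(both)   # P(x_0 = x_2 = 1) = max(0, 0.3 + 0.6 - 1) = 0
```

The cross-partition pair (0, 2) is rounded up together with probability ${\max\{0, {\bar x}_0 + {\bar x}_2 - 1\} = 0}$, exactly the value ${\bar y_{02}}$ takes when ${w_{02} < 0}$.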

In part 2 I turn to the welfare maximization question.

On his way to receive the Nobel, physicist Peter Higgs (of Higgs-Boson) was interviewed by the Grauniad. He said:

What should one make of this statement? Looking at Higgs' description of his record at the time, it seems very clear that he would not have made it through the ranks now. Bad for Higgs. But is it bad for science, as some have concluded? The article suggests so. But there is a gap in the logic. It implies, for example, that had Newton succumbed to the plague, calculus and the laws of motion would never have been discovered. This we know is false. The question of how best to engineer a climate conducive to research is not resolved by such reflections. It is not obvious that long periods of inactivity punctuated by bursts of magisterial insight are any better or worse than continued activity generating small insights that eventually cumulate into something larger.

Two public service announcements. The first about post-doc positions. The second about nominations for SIGecom dissertation awards.

Postdoc Positions

Penn’s newly-launched Warren Center for Network and Data Sciences seeks nominations for 2014 Warren Postdoctoral Fellowships. Warren Fellow candidates should have research interests in the subjects supported by the center, which include but are not limited to network science (including the study of social, technological, economic, organizational and biological networks, as well as underlying foundational areas such as graph theory, game theory, mechanism design) and data science (including machine learning, statistics, data privacy and security). The ideal candidate will have a strongly interdisciplinary research agenda with a demonstrated track record, and would be nominated by faculty affiliates of the Warren Center in two or more Penn departments who will act as hosts, advisors and collaborators of the candidate.

Warren Postdoctoral Fellows will receive generous and competitive stipends and research support, and will participate in a vibrant community of Warren Center faculty affiliates, postdocs and students. We expect to fund multiple Warren Fellows for the 2014-15 academic year. Fellowships are for a one-year period, extendable to two by mutual agreement.

Dissertation Awards

SIGecom Doctoral Dissertation Award for dissertations defended in 2013. Nomination deadline end of February, 2014.

You tell us
It looks bad for our cause.
The darkness gets deeper. The powers get less.
Now, after we worked for so many years
We are in a more difficult position than at the start.
But the enemy stands there, stronger than ever before.
His powers appear to have grown. He has taken on
an aspect of invincibility.
We however have made mistakes; there is no denying it.
Our numbers are dwindling.
Our slogans are in disarray. The enemy has twisted
Part of our words beyond recognition.

What is now false of what we said:
Some or all?
Whom do we still count on? Are we just left over, thrown out
Of the living stream? Shall we remain behind
Understanding no one and understood by none?

Have we got to be lucky?

I think that today's (Nov. 24, 2013) interim agreement in Geneva between the powers and Iran to "freeze" Iran's advance toward nuclear capability is another example of a culture-gap mutual misreading in bargaining: in particular, the powers ascribe excess importance to the contracted-upon words as conveying their literal content, rather than viewing the written agreement as only one thin layer of a complex and evolving communication and engagement between the parties.

At the same time, the agreement is in line with my suggestion here in the last paragraph of my post on Iran from two years ago:

• A potentially better strategy would be to encourage Iran to follow its own interest by transparently staying only on the brink of military nuclear capability, and at the same time to admit Iran as a de facto member of the “nuclear club”. If, then, Iran nevertheless prefers to curtail transparency and renounce international recognition of its power, it will not only suffer the consequences of undermining its own interests, but might ignite an escalatory pace in which it is likely to suffer much more.

This strategy has, of course, its pros and cons, like any other exercise in classical brinkmanship.