In this, the second lecture, I focus on electricity markets. I’ll divide the summary of that lecture into two parts.
Until the 1980s electricity markets around the world operated as regulated monopolies. Generation (power plants) and distribution (the wires) were combined into a single entity. Beginning with Chile, a variety of Latin American countries started to privatize their electricity markets. So, imagine you were a bright young thing in the early 1980s, freshly baptised in the waters of Lake Michigan off Hyde Park. The General approaches you and says: I want a free market in electricity, make it so ("Quiero un mercado libre de electricidad, que así sea"). What would you recommend?
Obviously, privatize the generators by selling them off, perhaps at auction (or, given one's pedigree, allocate them at random and allow the owners to trade among themselves). What about the wires that carry electricity from one place to another? Tricky. The owner of a wire will have monopoly power, unless there are multiple parallel wires. However, that would lead to inefficient duplication of resources. As a first pass, let's leave the wires in Government hands. Not obviously wrong. We do that with the road network: the Government owns and maintains it and for a fee grants access to all.
So, competition to supply power but central control of the wires. Assuming an indifferent and benign authority controlling the wires, what will the market for generation look like? To fix ideas, consider a simple case: two generators, 1 and 2, and a customer, 3.
Generator 1 has an unlimited supply and a constant marginal cost of production of $20 a unit. Generator 2 has an unlimited supply and a constant marginal cost of production of $40 a unit. Customer 3 has a constant marginal value of $V$ for up to 1500 units and zero thereafter. Assume $V$ to be sufficiently large to make all subsequent statements true. Initially there are only two wires, one from generator 1 to customer 3 and the other from generator 2 to customer 3. Suppose 1, 2 and 3 are all price takers. Then, the Walrasian price for this economy will be $20. For customer 3 this is clearly a better outcome than unregulated monopoly, where the price would be $V$. What if the price taking assumption is not valid? An alternative model would be Bertrand competition between 1 and 2. So, the outcome would be a `hair's breadth' below $40. Worse than the Walrasian outcome but still better than unregulated monopoly. It would seem that deregulation is a good idea and, as the analysis above suggests, there is no necessity for a market to be designed. There is a catch. Is an unregulated monopolist the right benchmark? Surely, a regulated monopolist would be better. It is not clear that deregulation does better than the regulated monopolist.
Now let's add a wrinkle. Suppose the wire between 1 and 3 has capacity 600 units. There are two ways to think of this capacity constraint. The first is as a capacity constraint on generator 1 that we have chosen to model as a constraint on the wire. The second is that it is indeed a constraint on the wire itself. The difference is not cosmetic, as we shall see in a moment.
Suppose it is a constraint on generator 1's capacity. Then, under the price taking assumption, the Walrasian price in this economy will be $40. An alternative model of competition would be Bertrand-Edgeworth. In general the equilibria of such models are in mixed strategies but, whatever the mixture, the expected price per unit that customer 3 pays cannot exceed $40 a unit. In both cases, the outcome is better for customer 3 than under an unregulated monopolist.
Assume now that the capacity constraint is on the wire instead. Under the price taking assumption, at a price of $20 a unit, generator 1 is indifferent between supplying any non-negative amount. Generator 2, whose marginal cost exceeds the price, supplies nothing. However, there is no way for supply to meet demand: at most 600 units can reach customer 3 over the wire. Why is this? In the usual Walrasian set up each agent reports their supply and demand correspondence based on posted prices and their own information only. To obtain a sensible answer in this case, generator 1 must be aware of the capacity of the network into which its supply will be injected. As the next scenario we consider shows, this is not easy when it comes to electricity.
Suppose there is now a link joining generators 1 and 2 with no capacity constraint. There is still a 600 unit capacity constraint on the link between 1 and 3. One might think that, in this scenario, customer 3 can receive all of its demand from generator 1. It turns out that this is not possible because of the way electricity flows in networks: power injected at one node travels over every path to the point of withdrawal, not just the path we might intend it to take.
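To see concretely what goes wrong, here is a minimal sketch of the standard DC load flow approximation on this three-node network. The equal line impedances are an assumption I am adding for illustration (the lecture's example does not specify them). Power injected at node 1 and withdrawn at node 3 divides over the two available paths, 1-3 directly and 1-2-3, in inverse proportion to their impedance, so two thirds of generator 1's output crosses the constrained line:

```python
# DC load flow on the 3-node loop: lines 1-3, 2-3 and 1-2, all with
# equal impedance (an illustrative assumption). Node 3 demands 1500.
def flows(p1, p2, demand=1500):
    assert abs(p1 + p2 - demand) < 1e-9, "generation must equal demand"
    f13 = (2 * p1 + p2) / 3   # flow on the capacitated line 1-3
    f12 = (p1 - p2) / 3       # flow on the new line 1-2 (negative = 2 to 1)
    f23 = demand - f13        # flow on line 2-3
    return f13, f12, f23

# If generator 1 alone served the demand, line 1-3 would carry
# two thirds of 1500 = 1000 units, well above its 600 unit capacity:
print(flows(1500, 0))
# The limit f13 <= 600 forces (p1 + 1500)/3 <= 600, i.e. p1 <= 300:
print(flows(300, 1200))       # line 1-3 exactly at capacity
```

Under these assumptions the striking implication is that the new link makes things worse for generator 1: without the link it could deliver 600 units over the direct wire, but with the link its deliverable output falls to 300, because a third of everything generator 2 produces also flows over the congested line.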
From the Fens of East Anglia comes a curious tale of a gift to Cambridge to endow a chair in honor of Stephen Hawking. The donor, Dennis Avery, put forward $6 million, of which $2 million is to cover the costs of the Hawking chair. The balance is to be managed by a charity, the Avery-Tsai Foundation, to, in the words of J. K. M. Sanders (Pro-Vice-Chancellor for Institutional Affairs),
“……advance education and promote research in the science of cosmology at the University of Cambridge for the public benefit, and in particular to support the University in securing the best possible candidate as the Stephen W. Hawking Professor of Cosmology.”
There are some unusual terms attached to the gift, described below. Mr. Avery, however, passed away soon after the terms were negotiated. Renegotiation now being impossible, the University must decide whether to accept the gift as constructed or not at all.
What makes the gift unusual?
1) Monies held by the foundation can be used to `top up’ the salary, so paying the individual an amount in excess of what the University might pay its top chair holders (which I believe is around 130K pounds).
2) The chair is to be housed in the Department of Applied Mathematics and Theoretical Physics (DAMTP). The gift requires DAMTP to certify each year to the Trustees that the base salary of the Hawking Professor is at least the average of other Professors in the Department.
3) The holder is subject to annual review and the monies from the foundation are contingent upon the outcome of that review.
4) It limits the tenure of the Professor to seven years, renewable for five and exceptionally for a further five.
Let the fireworks begin. From the head of DAMTP, P. H. Haynes, a supporter of the gift:
“The unusual detailed arrangements surrounding this Professorship have rightly triggered significant debate amongst my Departmental colleagues and they have required detailed and robust discussion between Department, School, and the University.”
Professor Goldstein of DAMTP responds with an objection to the circumvention of University rules, offering a delightful analogy to line integrals:
“In the field of thermodynamics there is the concept of a ‘state function’, a quantity that is independent of the path by which a system is brought to a given point. This is one of those. It does not matter whether the payment goes through the University payroll or not if the University itself is signing off on the agreement and the funds are in its endowment. The choice of path certainly does not matter in the court of public opinion. How can the University contemplate an arrangement whose purpose is to circumvent its own rules?”
He goes on to point out that it is inconceivable that a holder of the chair would accept a cut in pay upon expiration of the term. He is, by his own admission, rendered:
“….almost speechless at Paragraph 9 of the deed, which asserts that the Department must certify each year to the Trustees that the base salary of the Hawking Professor is at least the average of other Professors in the Department. First, the requirement itself indicates a profound level of distrust of the Department’s operations. But second, how can it possibly be fair to tie one Professor’s salary to that of others? All their hard work over their career to date is used to define a starting point for his salary, independent of his qualifications. Moreover, if the Department chose to pay that minimum (which it might in light of other financial burdens), then the Stephen Hawking Professor would automatically get a raise if any other Professor did. This cannot be fair. I thought we strove to have a meritocracy in this University.”
From an emeritus professor of medieval history, G. R. Evans, clearly a guardian of ancient rights:
“….opening the doors to allowing outside bodies or donors to fund Professorships has led to the opening of further doors and only those with long constitutional memories may remember how it all began. I speak today just to put a reminder into the record, for this proposal has a constitutional context and if it is accepted, it will undoubtedly have constitutional consequences.”
Dr. A. Pesci of the DAMTP opens with this gem:
“This Chair looks to me like that pair of shoes at the Christmas sale. They looked beautiful and were half price. They were also two sizes too small and buying the matching dress would lead to bankruptcy. Hence, if one buys them, they would have to be left vacant, for if one wears them, they would cause enormous irreversible long-term damage.”
The link to a full summary of the discussion can be found here. It is both interesting reading and amusing when compared with the earliest and best pamphlet on University politics that I know of: Microcosmographia Academica. I give one example from it:
The Principle of the Dangerous Precedent is that you should not now do an admittedly right action for fear you, or your equally timid successors, should not have the courage to do right in some future case, which, ex hypothesi, is essentially different, but superficially resembles the present one. Every public action which is not customary, either is wrong, or, if it is right, is a dangerous precedent. It follows that nothing should ever be done for the first time.
I began the first class with a discussion of the definition of market design. Here, as best I can, I recall my remarks.
The flippant answer is that it's whatever Alvin Roth says it is. A serious answer is the same; witness the following from Econometrica 2002:
…..design involves a responsibility for detail; this creates a need to deal with complications. Dealing with complications requires not only careful attention to the institutional details of a particular market, it also requires new tools, to supplement the traditional analytical toolbox of the theorist.
Two items to highlight: institutional details and new tools. So, it is not `generic' theorizing, and one might wish to (should?) use experiments and data to support an analysis.
The NBER offers the following:
“Market design” examines the reasons why market institutions fail and considers the properties of alternative mechanisms, in terms of efficiency, fairness, incentives, and complexity. Research on market design is influenced by ideas from industrial organization and microeconomic theory; it brings together theoretical, empirical, and experimental methods, with an aim of studying policy-relevant tradeoffs with practical consequences.
Notice the concern with market failure, the clearly normative perspective and the intent to influence policy. Finally, there is the Journal of Economic Literature which now recognizes it as a subfield (JEL classification D47) and offers this definition:
Covers studies concerning the design and evolution of economic institutions, the design of mechanisms for economic transactions (such as price determination and dispute resolution), and the interplay between a market’s size and complexity and the information available to agents in such an environment.
In this definition no methodological stand is taken nor is there an explicit concern for practical consequences. Two additional lines, that I do not reproduce, exclude straight mechanism design and straight empirical work.
Where does that leave us? Clearly, decision theory, repeated games and refinements are not market design. What about the study of tax policy to encourage investments in human capital? What about labor economics, in particular the thread related to search and matching? Regulating oligopoly and merger policy? By my reading they count. Are the labor economists all going to label their papers D47? Unlikely. We generally know whom we are writing for, so perhaps the flippant answer I gave first is the correct one!
Do I have a proposal? Yes: Richard Whately's 1831 suggestion for what to call what we now dub economics, namely catallactics, the science of exchanges.
Next, what principles, if any, are there to guide the market designer? Roth’s 2007 Hahn lecture suggests three:
1) Thickness
By which he means encouraging coordination on a single venue for trade. Why? Presumably because it reduces search costs and increases the speed of execution. One way (not the only one) of formalizing the notion of thickness is stability, or the core.
2) Reduce Congestion
Roth argues that as a market thickens, it produces congestion. This runs counter to the benefits of thickness: increasing speed of execution. What he has in mind are markets where transactions are heterogeneous and offers are personalized. During the time in which an offer is being evaluated, other potential opportunities may evaporate. I'm not yet convinced that thickness produces congestion in this sense. I cannot see why the time to evaluate an offer and conclude a transaction should depend on the thickness of the market. However, in order to benefit from the increased opportunities that thickness provides, it makes sense that one would want to increase the speed at which offers are screened. I think the correct way to phrase this might be in terms of bottlenecks. As the market thickens, steps in the trading process that were not bottlenecks might become so. Not removing them defeats the gains to be had from thickness.
3) Discourage Welfare Reducing Strategic Behavior
Equivalently, make participation simple.
While these three items are useful rules of thumb in thinking about design goals, they do not cover the details the designer should pay attention to in achieving them. I've tried to list what I think these are below.
a) Carefully decide on the asset to be traded.
I'm channeling Coase. To illustrate, recall the use of auctions to allocate spectrum rights, a frequently heralded example of successful market design. The use of auctions takes as given that the asset to be traded is the right to a range of frequencies. Those rights protect the holder from `harmful interference'. However, as Coase himself observed, an entirely different asset should be the object of analysis:
What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies or the ether.
As one can imagine, this might lead to a very different way of organizing the market for wireless communication. For an illustration of how different views on spectrum property rights affect market outcomes, see the spirited piece on the LightSquared debacle by Hazlett and Skorup. An important takeaway from this piece is the (constructive/destructive) role that government plays in markets.
b) Nature of contracts.
What kinds of contracts are feasible? Must they stipulate a uniform price? Can they be perpetual (indentured servitude is typically outlawed)? We know, for example, that Walrasian prices can implement certain outcomes but not others.
c) What is the medium of exchange?
In some settings we have money, in others it is ruled out. That money is ruled out does not eliminate other mediums of exchange. Prisoners have been known to use cigarettes. One can trade years of service for preferential postings. One can exchange kidneys for kidneys. What about livers for kidneys, or health insurance for livers?
d) What is the measure of performance and what is the status quo?
Typically one is concerned with proposing a set of changes to an existing institution. What is the measure by which we decide that a proposed change is a good one? I rope into this question the decision about which agents' preferences matter in the design. In the literature on resident matching the focus is on stability (thickness). However, one might also focus on the effect on wages. Does the use of a stable mechanism depress wages for interns? If one focuses on wages, then one must specify what the status quo is. For example, is it one where wages are set by a perfectly competitive centralized market? Or is it an imperfectly competitive one where wages are set by bilateral contracting? How would one model this (see Bulow and Levin for an example)? Even if the status quo were a monopoly, is it regulated or unregulated?
In choosing a measure of performance one will bump up against the `universal interconnectedness of all things’. In ancient times this challenge was called `regulating the second best’. Imagine a polluting monopolist charging a uniform price. If we replace our monopolist by a perfectly competitive market we reduce the distortion caused by high prices but increase pollution (assuming it increases with output). To make headway one must be prepared to draw a boundary and ignore what happens outside of it.
e) Why does a market need to be designed?
Many markets are the product of evolution rather than intelligent design. So, why is it necessary to design a market? One answer is that there is an externality that is difficult to price without a high degree of coordination. Electricity markets are offered as an example. In the second lecture we will examine this in more detail.
A recent paper by Bergemann, Brooks and Morris (BBM) supposes a monopolist free to segment the market in any way she can (without worrying about arbitrage) and asks: what is the achievable set of pairs of producer and consumer surplus? BBM give a simple and satisfying answer to this question. This post attempts a short proof of their characterization.
A monopolist faces a market consisting of buyers with valuations $v_1, \ldots, v_n$. Order them so that $v_1 < v_2 < \cdots < v_n$. The number of buyers with valuation $v_i$ is $N_i$, and assume the buyers are divisible. A segmentation of the market is a partition of the buyers into up to $n$ markets with the property that the profit maximizing price in market $i$ is $v_i$. If we let $y_{ij}$ be the number of buyers with valuation $v_j$ in market $i$, then any segmentation is characterized by the following:

$$v_i \sum_{j \geq i} y_{ij} \geq v_k \sum_{j \geq k} y_{ij} \quad \forall i, k, \qquad \sum_i y_{ij} = N_j \quad \forall j, \qquad y_{ij} \geq 0 \quad \forall i, j.$$
Denote by $S$ the set of feasible segmentations. Let

$$\pi(y) = \sum_i v_i \sum_{j \geq i} y_{ij}$$

be the profit earned by the monopolist under the segmentation $y$. The consumer surplus of buyers under the segmentation $y$ is

$$CS(y) = \sum_i \sum_{j \geq i} (v_j - v_i) y_{ij}.$$

It is easy to see that $\pi(y) + CS(y) \leq \sum_j v_j N_j$ for all $y \in S$. The upper bound follows from the segmentation that assigns all buyers with valuation $v_i$ to the $i$-th market and no others. This corresponds to first degree price discrimination. It is also easy to see that $\pi(y) \geq \pi^*$, the optimal profit of a non-discriminating monopolist. The lower bound comes from the segmentation that assigns all customers to market $i^*$, where $v_{i^*}$ is the profit maximizing monopoly price without discrimination. BBM show the following:

Theorem: A pair $(\pi, CS)$ with $CS \geq 0$ is feasible iff $\pi \geq \pi^*$ and $\pi + CS \leq \sum_j v_j N_j$.
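Before turning to the proof, the theorem is easy to probe numerically. The following minimal sketch (mine, not BBM's; it assumes SciPy is available and uses a made-up instance) maximizes consumer surplus over the polytope $S$ subject to $\pi(y) = \pi^*$, and checks that the answer agrees with the consumer-optimal vertex $\sum_j v_j N_j - \pi^*$ of the triangle:

```python
# Verify the consumer-optimal vertex of the BBM triangle by linear
# programming, assuming SciPy is available. Instance values are made up.
import numpy as np
from scipy.optimize import linprog

v = np.array([1.0, 2.0, 3.0])   # valuations v_1 < v_2 < v_3
N = np.array([3.0, 2.0, 1.0])   # mass of buyers at each valuation
n = len(v)
pi_star = max(v[k] * N[k:].sum() for k in range(n))  # uniform monopoly profit

# Variables y_{ij} flattened as index i*n + j.
c = np.zeros(n * n)             # objective: maximize CS (minimize -CS)
for i in range(n):
    for j in range(i, n):
        c[i * n + j] = -(v[j] - v[i])

A_ub, b_ub = [], []             # price v_i optimal in market i:
for i in range(n):              # v_k * sum_{j>=k} y_ij <= v_i * sum_{j>=i} y_ij
    for k in range(n):
        row = np.zeros(n * n)
        for j in range(k, n):
            row[i * n + j] += v[k]
        for j in range(i, n):
            row[i * n + j] -= v[i]
        A_ub.append(row); b_ub.append(0.0)

A_eq, b_eq = [], []             # all buyers assigned: sum_i y_ij = N_j
for j in range(n):
    row = np.zeros(n * n)
    row[j::n] = 1.0
    A_eq.append(row); b_eq.append(N[j])
profit = np.zeros(n * n)        # profit pinned to pi*
for i in range(n):
    profit[i * n + i: (i + 1) * n] = v[i]
A_eq.append(profit); b_eq.append(pi_star)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq),
              b_eq=b_eq, bounds=[(0, None)] * (n * n))
print("max CS at pi = pi*:", -res.fun)              # approx 4.0 here
print("efficient surplus - pi*:", v @ N - pi_star)  # also 4.0
```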
That $\pi(y) \geq \pi^*$ is straightforward. The hard part is to show two things.

1) For any $y \in S$ such that $\pi(y) + CS(y) < \sum_j v_j N_j$ there is a $y' \in S$ such that $\pi(y') > \pi(y)$ and $CS(y') \geq CS(y)$.

2) There exists a $y \in S$ such that $\pi(y) = \pi^*$ and $CS(y) = 0$.

Item 1 implies that the Pareto frontier of the feasible set lies on the line $\pi + CS = \sum_j v_j N_j$. Combining it with item 2, the first degree segmentation, and the convexity of the feasible set (it is the image of the polytope $S$ under a linear map) yields the triangle described in the theorem; BBM supply the remaining details, in particular the existence of an efficient segmentation with profit exactly $\pi^*$.
To prove the first item (which BBM note in the paper is easy) on this list, call a segmentation upper triangular if $y_{ij} = 0$ for all $j < i$. Note that in an upper triangular segmentation every buyer purchases, so $\pi(y) + CS(y) = \sum_j v_j N_j$.

Suppose $y \in S$ is not upper triangular: some market $i$ contains a portion of buyers with values below $v_i$, and these buyers do not purchase. Let $v_k$ (with $k < i$) be the profit maximizing price for this portion of buyers considered in isolation. We construct a new segmentation $\hat{y}$ from $y$ by shifting the buyers in market $i$ with values below $v_i$ into market $k$. As the profit maximizing price just for this portion of buyers is $v_k$, moving them into market $k$ leaves the profit maximizing price in market $k$ unchanged: revenue in the merged market is the sum of two revenue functions, each maximized at $v_k$. Formally:

- $\hat{y}_{kj} = y_{kj} + y_{ij}$ for all $j < i$;
- $\hat{y}_{ij} = 0$ for all $j < i$;
- $\hat{y}_{ij} = y_{ij}$ for all $j \geq i$;
- $\hat{y}_{rj} = y_{rj}$ for all $r \neq i, k$ and all $j$.

Under segmentation $\hat{y}$, both $\pi$ and $CS$ increased in value: the shifted buyers with values at least $v_k$ now purchase, which raises profit strictly, and none of them pays more than her value, so consumer surplus does not fall. In particular, had $y$ been chosen to maximize $\pi + CS$ over $S$, this would contradict the initial choice of $y$; and applied to an arbitrary inefficient $y$, the construction delivers exactly the domination claimed in item 1.
To prove the second item on the list, among all feasible segmentations such that $\pi(y) = \pi^*$, choose one that minimizes $CS(y)$, say $y^*$. Call $y$ lower triangular if $y_{ij} = 0$ for all $j > i$. In a lower triangular segmentation the only buyers in market $i$ who purchase are those with valuation exactly $v_i$, so $CS(y) = 0$. I show that $y^*$ can be taken to be lower triangular, from which it follows that $CS(y^*) = 0$. The proof will be by induction on $n$, the number of distinct valuations: for every instance there is a lower triangular segmentation with profit exactly $\pi^*$.

The case $n = 2$ is straightforward. Suppose first that $v_1(N_1 + N_2) \geq v_2 N_2$, so that the uniform monopoly price is $v_1$. The following segmentation, as can be verified, does the trick:

$$y_{11} = N_1 - t, \quad y_{21} = t, \quad y_{22} = N_2, \quad y_{12} = 0,$$

where

$$t = \frac{(v_2 - v_1) N_2}{v_1},$$

chosen so that in market 2 the monopolist is indifferent between charging $v_2$ and $v_1$ (the hypothesis $v_1(N_1 + N_2) \geq v_2 N_2$ guarantees $t \leq N_1$). If $v_2 N_2 \geq v_1(N_1 + N_2)$, the segmentation that assigns all buyers to market 2 will have the requisite property.
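As a quick sanity check of the base case, the arithmetic can be verified directly (the numbers below are made up, chosen to satisfy the first hypothesis):

```python
# Check the n = 2 base case: lower triangular segmentation with
# profit pi* = v1*(N1+N2) and zero consumer surplus.
v1, v2, N1, N2 = 1.0, 3.0, 10.0, 2.0
assert v1 * (N1 + N2) >= v2 * N2         # uniform monopoly price is v1

t = (v2 - v1) * N2 / v1                  # v1-buyers diluted into market 2
profit = v1 * (N1 - t) + v2 * N2         # market 1 sells at v1, market 2 at v2
print(profit == v1 * (N1 + N2))          # True: profit equals pi*
print(v2 * N2 == v1 * (t + N2))          # True: market 2 indifferent at v1, v2
```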
Now consider the case of an arbitrary number $n$ of valuations, and suppose first that the uniform monopoly price exceeds $v_1$, i.e., $i^* > 1$. Given an instance $(N_1, \ldots, N_n)$ on $n$ valuations, construct an instance on the $n-1$ valuations $v_2 < \cdots < v_n$ by setting $N'_j = N_j$ for all $j \geq 2$. It is easy to see that $\pi^*$, i.e., the optimal monopoly profit with no discrimination, remains unchanged: the price $v_1$ was not optimal, and dropping the lowest valuation buyers leaves the profit from every other price as it was. By the induction hypothesis there is a segmentation $y'$ for the smaller instance that is lower triangular with $\pi(y') = \pi^*$. To conclude the argument we must show how to convert $y'$ into a legitimate segmentation $y$ for the original instance; we do so by distributing the $N_1$ buyers with valuation $v_1$ among the markets as non-purchasers:

- $y_{ij} = y'_{ij}$ for all $i, j \geq 2$;
- $y_{i1} = \epsilon_i \geq 0$ for all $i \geq 2$, with $\sum_{i \geq 2} \epsilon_i = N_1$;
- $y_{1j} = 0$ for all $j$.

Adding $\epsilon_i$ buyers of valuation $v_1$ to market $i$ changes the profit from one price only, namely $v_1$, so $v_i$ remains optimal in market $i$ provided

$$v_i \sum_{j \geq i} y'_{ij} \geq v_1 \Big( \epsilon_i + \sum_{j \geq 2} y'_{ij} \Big).$$

If the $\epsilon_i$'s can indeed be chosen as specified, then $y$ is lower triangular and the corresponding $\pi(y) = \pi^*$ and $CS(y) = 0$. To verify that appropriate $\epsilon_i$'s exist, it is enough to check that

$$\sum_{i \geq 2} \Big[ \frac{v_i}{v_1} \sum_{j \geq i} y'_{ij} - \sum_{j \geq 2} y'_{ij} \Big] \geq N_1,$$

i.e., that $\pi^* \geq v_1 \sum_j N_j$, which follows from the hypothesis that $v_1$ is not the uniform monopoly price.
To conclude, suppose now that $i^* = 1$. Construct a new instance on the $n-1$ valuations $v_1 < \cdots < v_{n-1}$ by setting $N'_j = N_j$ for all $j \leq n-2$ and $N'_{n-1} = N_{n-1} + N_n$. Notice, $\pi^*$ is unchanged: for every price $v_k$ with $k \leq n-1$ the profit is the same in both instances, and $v_1$ remains optimal. By the induction hypothesis there is a segmentation $y'$ for the merged instance that is lower triangular with $\pi(y') = \pi^*$. To conclude the argument we must show how to convert $y'$ into a legitimate segmentation for the original instance. Since $y'$ is lower triangular, all $N_{n-1} + N_n$ buyers with the merged valuation sit in market $n-1$. Open a market $n$, move the $N_n$ buyers with valuation $v_n$ into it, and dilute it with lower valuation buyers drawn from the markets of $y'$:

- $y_{nn} = N_n$, and $y_{nj} = \delta_j \geq 0$ for all $j < n$;
- $y_{n-1,n-1} = y'_{n-1,n-1} - N_n - \delta_{n-1}$;
- $y_{ij} = y'_{ij}$ for all other pairs $i, j$, less whatever mass the $\delta_j$'s remove.

The $\delta_j$'s must be chosen so that the price $v_n$ is profit maximizing in market $n$, i.e.,

$$v_n N_n \geq v_k \Big( N_n + \sum_{k \leq j < n} \delta_j \Big) \quad \forall k < n,$$

so that the prices in the markets from which the $\delta_j$'s are drawn remain optimal, and so that total profit is unchanged. If the $\delta_j$'s can be chosen as specified then $y$ is lower triangular, in $S$, and the corresponding $\pi(y) = \pi^*$ and $CS(y) = 0$. Verifying that the appropriate $\delta_j$'s exist can be done in the same way as the previous case.
This week saw the start of my second attempt at a graduate course on market design. Although it is a reading course (in the sense that students present papers), I began with a survey of what I think are the main ideas one should be familiar with. With over 20 people in the room, I was sweating bullets, having expected at most half that. I will post the substance of the first lecture at a later date. In the next session I will use electricity markets as a vehicle to show how one should think about the questions to be answered, as well as the kinds of issues one faces when designing a market.
The list of papers to be covered appears below, along with the instructions I gave the students on presentations.
Presentation Rules: I expect every presentation to accomplish the following (in addition to the usual of having well prepared slides, understanding and answering questions etc.):
- Provide background for the issue to be discussed.
- An unambiguous statement of the issue and an explanation for why its resolution is not obvious.
- A description of the resolution proposed by the author(s) of the paper. This is where one discusses the model in the paper.
- An exposition of the analysis of the model. A good exposition will focus on the main point and perhaps even identify a telling example that conveys it as well as the full model does. The point of such an exposition is to show the role of each assumption in arriving at the authors' main conclusion.
- A critique. Did the authors address the right question? Is the way they chose to frame the issue sensible? Is the model persuasive? If not, why not? Simply saying it is not general enough is insufficient. No model, if it is to be tractable, can be fully general.
- Suggestions for future research.
Topics & Papers
1. IPOs
Jagannathan and Sherman: Why Do IPO Auctions Fail?
Ritter: Equilibrium in the IPO market
Background Reading (to be read by all)
Ljungqvist: IPO Underpricing: A Survey
Ritter and Welch: A review of IPO activity, pricing and allocations
2. Health Care & Insurance Markets
Cochrane, J.: Time Consistent Health Insurance, JPE 1995.
Cochrane, J.: After the ACA: Freeing the market for health care
Fang & Gavazza: Dynamic Inefficiencies in an Employment-Based Health Insurance System: Theory and Evidence
Handel, Hendel and Whinston: Equilibria in Health Exchanges: Adverse Selection vs. Reclassification Risk
Levine, Kremer and Albright: Making Markets for Vaccines
Kremer: Creating Markets for New Vaccines Part 1: rationale
Kremer: Creating Markets for New Vaccines Part 2: design issues
Background Reading (to be read by all)
Rothschild, M. & J. Stiglitz: Equilibrium in Competitive Insurance Markets, QJE 1976.
Fang, H: Insurance Markets in China
3. Market for Cybersecurity Insurance
Kesan, Majuca & Yurcik: Cyberinsurance as a market based solution to the problem of cybersecurity
4. Affirmative Action
Hickman: Effort, Race Gaps and Affirmative Action: A Game Theoretic Analysis of College Admissions
Chung: Affirmative Action as an Implementation Problem
Fryer and Loury: Valuing Identity: The simple economics of Affirmative action policies
Background Reading (to be read by all)
Fang and Moro: Theories of Statistical Discrimination and Affirmative Action: a survey
5. Assigning Counsel
Friedman & Schulhofer: Rethinking Indigent Defense: Promoting Effective Representation Through Consumer Sovereignty and Freedom of Choice for All Criminal Defendants
6. The Role of Politics
Acemoglu: Why not a political Coase theorem?
Acemoglu: Modeling Inefficient Institutions
In part two, as promised, I turn to the welfare maximization problem. To formulate the problem of finding a welfare maximizing allocation, let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by

$$u^i(S) = \sum_{j \in S} u^i_j + \sum_{\{j,k\} \subseteq S} u^i_{jk}.$$

The problem of maximizing total welfare is

$$\max \sum_i \Big[ \sum_j u^i_j x_{ij} + \sum_{j < k} u^i_{jk} x_{ij} x_{ik} \Big]$$

subject to

$$\sum_i x_{ij} \leq 1 \quad \forall j, \qquad x_{ij} \in \{0,1\} \quad \forall i, j.$$
Welfare maximization with BQP preferences is in general NP-hard. One proof relies on a reduction from the multiway cut problem. Given a graph $G = (V, E)$ with an edge weight $w_e$ for each $e \in E$, and a set $T$ of terminal vertices, a multiway cut is a set of edges whose removal disconnects every pair of terminal vertices. The problem of finding the multiway cut of minimum total weight is called the multiway cut problem. When $T$ consists of only two terminals ($|T| = 2$) the problem reduces to the well known minimum cut problem. For $|T| \geq 3$, it is known that the problem is NP-hard even on planar graphs.
We can obtain the multiway cut problem by introducing one agent for each terminal and one object for each vertex of $G$: set $u^i_{t_i} = B$, a suitably large number, for each agent $i$ and its terminal $t_i$ (with $u^i_j = 0$ for every other object), and set $u^i_{jk} = w_{jk}$ for all agents $i$ and edges $\{j,k\} \in E$. Any pair $\{j,k\} \in E$ whose endpoints are assigned to different agents will be part of a multiway cut, so a welfare maximizing allocation keeps as much edge weight as possible within bundles and thereby minimizes the weight of the cut. This reduction implies that welfare maximization is NP-hard even when $u^i_{jk} \geq 0$ for all $i$ and all $j, k$. This is in contrast to the case of surplus maximization.
Candogan, Ozdaglar and Parrilo (2013), the paper that prompted this post, identify a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency: for each pair of objects $j, k$, the sign of $u^i_{jk}$ and $u^{i'}_{jk}$ is the same for any two agents $i$ and $i'$. Furthermore, this applies to all pairs $j, k$. Sign consistency by itself is not sufficient to obtain a solvable instance. Another condition is needed. Let $G$ be a graph with one vertex for each object and, for any pair $j, k$ such that $u^i_{jk} \neq 0$, an edge $\{j,k\}$. The second condition is that $G$ be a tree. Interestingly, Erdős and Székely (1994) show that a generalization of the multiway cut problem, which corresponds to a special case of welfare maximization under sign consistency, is polynomially solvable when the underlying graph $G$ is a tree. The Candogan, Ozdaglar and Parrilo (COP) proof is based on a dynamic programming argument similar to the one used by Erdős and Székely.
The key result in COP is that the following natural linearization of the welfare maximization problem admits an integral optimal solution, where $z^i_{jk}$ stands in for the product $x_{ij} x_{ik}$:

$$\max \sum_i \sum_j u^i_j x_{ij} + \sum_i \sum_{j < k} u^i_{jk} z^i_{jk}$$

subject to

$$\sum_i x_{ij} \leq 1 \quad \forall j$$
$$z^i_{jk} \leq x_{ij}, \quad z^i_{jk} \leq x_{ik} \quad \forall i \text{ and } j,k \text{ with } u^i_{jk} > 0$$
$$z^i_{jk} \geq x_{ij} + x_{ik} - 1, \quad z^i_{jk} \geq 0 \quad \forall i \text{ and } j,k \text{ with } u^i_{jk} < 0$$
$$0 \leq x_{ij} \leq 1 \quad \forall i, j.$$
There is a connection between the welfare maximization problem and packing subtrees into trees that I want to highlight. It suggests a possible avenue by which one might enlarge the class of preferences COP consider.
Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $u^i_{jk}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. Let $G_1, \ldots, G_r$ be the maximal connected components of $G$ after deletion of the edges in $E^-$. For any agent $i$ and subset $S$ of objects let

$$w_i(S) = \sum_{j \in S} u^i_j + \sum_{\{j,k\} \subseteq S} u^i_{jk}.$$

By the way, for each agent $i$ and component $G_h$, $w_i$ is supermodular over the subsets of $G_h$ (within a component all the pairwise terms are non-negative). Let $x^i_S = 1$ if the set $S$ is assigned to agent $i$ and zero otherwise. The problem of finding a welfare maximizing allocation can be expressed as follows:

$$\max \sum_i \sum_S w_i(S) x^i_S$$

subject to

$$\sum_i \sum_{S \ni j} x^i_S \leq 1 \quad \forall j$$
$$\sum_{S \ni j,\, S \not\ni k} x^i_S + \sum_{S \ni k,\, S \not\ni j} x^i_S \leq 1 \quad \forall i \text{ and } \{j,k\} \in E^-$$
$$x^i_S \in \{0,1\}.$$
In the above program we can, without loss, restrict attention to subsets that are subtrees (connected subgraphs) of $G$. To see why, suppose that in some optimal solution to the above integer program, $x^i_S = 1$ where $S$ is not a subtree. Then we can write $S = A \cup B$ where both $A$ and $B$ are non-empty and, because the subgraph induced by $S$ is disconnected, it must be the case that there is no edge $\{j,k\}$ of $G$ such that $j \in A$ and $k \in B$. Therefore, $w_i(S) = w_i(A) + w_i(B)$. That means we can construct a new optimal solution by setting $x^i_S = 0$ and raising $x^i_A$ and $x^i_B$ to 1. Note, in the original solution $x^i_A = x^i_B = 0$ by virtue of the first constraint, and since no edge, in particular no negative edge, joins $A$ and $B$, the second set of constraints remains satisfied. As long as some selected set is not a subtree we can repeat this argument.
Hence, if we let $\mathcal{T}$ be the set of subtrees of $G$, the welfare maximization problem can be expressed as follows:

$$\max \sum_i \sum_{S \in \mathcal{T}} w_i(S) x^i_S$$

subject to

$$\sum_i \sum_{S \in \mathcal{T}: j \in S} x^i_S \leq 1 \quad \forall j$$
$$\sum_{S \ni j,\, S \not\ni k} x^i_S + \sum_{S \ni k,\, S \not\ni j} x^i_S \leq 1 \quad \forall i \text{ and } \{j,k\} \in E^-$$
$$x^i_S \in \{0,1\}.$$

It is important to emphasize the role of the negative edges here: the objective ignores pairwise terms between distinct sets assigned to the same agent, and the second set of constraints ensures this is harmless, since no agent may select two subtrees that straddle a negative edge. Were it not for the second set of the constraints, integrality would follow from Barany, Edmonds and Wolsey (1986). I'm not aware of this variation of the tree packing problem having been considered. A follow up paper by Aghezzaf and Wolsey (1994) comes close in the sense of allowing for a piecewise linear concave objective function.
In 1961, Clarence Earl Gideon was charged with breaking and entering with intent to commit petty larceny. Appearing without counsel, Gideon invoked reverent authority:
The United States Supreme Court says I am entitled to be represented by Counsel.
Those words set in train a chain of events that confirmed, three years later, the right of indigent defendants in criminal proceedings, upon request, to have counsel appointed both during trial and on appeal. Qualified counsel, however, is scarce and there is an abundance of indigent defendants. Over the years the state has chosen to solve the problem of matching counsel to indigent defendant by fiat. Judge Posner, in US v. Ely, justifies this as follows:
There are practical reasons for not giving indigent criminal defendants their choice of counsel. Appointed counsel are not paid at munificent rates under the Criminal Justice Act, 18 U.S.C. § 3006A(d); in the Central District of Illinois, in the most recent year for which data are available (1980), the average fee per case under the Act was only $426.31. Director of Adm. Off. of U.S. Cts., 1982 Ann.Rep. 511 (Exh. C-1). The best criminal lawyers who accept appointments therefore limit the amount of time they are willing to devote to this relatively unremunerative type of work; some criminal lawyers, indeed, only reluctantly agree to serve as appointed counsel, under pressure by district judges to whom they feel a sense of professional obligation. The services of the criminal defense bar cannot be auctioned to the highest bidder among the indigent accused — by definition, indigents are not bidders. But these services must be allocated somehow; indigent defendants cannot be allowed to paralyze the system by all flocking to one lawyer.
Time to sharpen pencils and put on the thinking cap. For a graduate student looking for a topic in market design, I cannot think of a more interesting question than how to match counsel to indigent defendants. One has: moral hazard (on the part of the attorney), asymmetry of information (how does one distinguish between a good lawyer and a bad one?), informed third parties with divided interests (judges who appoint counsel, but may be more interested in a speedy trial than a vigorous defense), budget constraints (on the part of defendants) and competing objectives (speedy resolution vs. correct adjudication). For a description of the institution as it is currently structured, and a proposal to revise it based on vouchers, see Friedman and Schulhofer (yes, Friedman fils).
In the analysis of combinatorial auctions two problems arise:
- Surplus maximization: given a price $p_i$ for each item $i$ and a utility function $u$ over bundles of goods, find a bundle that maximizes the utility of the bundle less the price paid.
- Welfare maximization: determine an allocation of goods to agents so as to maximize total welfare.
In general both problems are NP-hard and therefore a number of scholars have sought to identify conditions on preferences (i.e., on $u$) that yield efficiently solvable instances of both problems. In this post I discuss a class of preferences, motivated by Cramton, Ausubel, McAfee and McMillan (1997), related to binary quadratic programs (BQP). A recent paper by Candogan, Ozdaglar and Parrilo (2013) prompted this two part post. I will say more about this paper in part 2. In this part I focus on surplus maximization.
Say that $u$ represents BQP preferences if

$$u(S) = \sum_{i \in S} u_i + \sum_{\{i,j\} \subseteq S} u_{ij}.$$

Interpret $u_i$ as the utility of consuming object $i$ alone, and $u_{ij}$ as the added benefit from consuming both $i$ and $j$ (if $u_{ij} > 0$), otherwise the disutility of doing so. The surplus maximization problem for BQP preferences is

$$\max_S \Big[ u(S) - \sum_{i \in S} p_i \Big],$$

which can be rewritten as a binary quadratic program

$$\max \sum_i (u_i - p_i) x_i + \sum_{i < j} u_{ij} x_i x_j$$

subject to

$$x_i \in \{0, 1\} \quad \forall i.$$
This problem is NP-hard as it encodes the independent set problem. To see why, let $G = (V, E)$ be a graph and let $u_i - p_i = 1$ for all $i \in V$, and $u_{ij} = -M$ for all $\{i,j\} \in E$, where $M$ is a large positive number. An optimal solution then selects a maximum independent set of $G$.
If $u_{ij} \geq 0$ for all $i, j$ (pairs of goods are complements) there is a straightforward linearization of the surplus maximization problem:

$$\max \sum_i (u_i - p_i) x_i + \sum_{i < j} u_{ij} z_{ij}$$

subject to

$$z_{ij} \leq x_i, \quad z_{ij} \leq x_j \quad \forall i < j$$
$$0 \leq x_i, z_{ij} \leq 1 \quad \forall i, j.$$

Note that the constraint matrix is totally unimodular. Therefore, the surplus maximization problem can be solved as a linear program.
Thus, if $u_{ij} \geq 0$ for all $i, j$, surplus maximization is easy. With no restriction on the $u_{ij}$'s, surplus maximization is NP-hard. Is there anything between these two extremes? Yes, Hansen and Simeone (1986) give us one case. To describe this case, we associate a graph with the $u_{ij}$'s: one vertex for each object $i$ and, for each pair $i, j$ such that $u_{ij} \neq 0$, an edge $\{i,j\}$. Call this graph $G$. Say that $G$ is sign balanced if it does not contain any cycle with an odd number of negative edges (edges with $u_{ij} < 0$). This implies that the vertices of $G$ can be partitioned into two sets $V_1$ and $V_2$ such that every edge inside $V_1$ or inside $V_2$ is positive and every edge between $V_1$ and $V_2$ is negative; deleting the negative edges disconnects $V_1$ from $V_2$.
Consider the following linear relaxation of the surplus maximization problem:

$$\max \sum_i (u_i - p_i) x_i + \sum_{i < j} u_{ij} z_{ij}$$

subject to

$$z_{ij} \leq x_i, \quad z_{ij} \leq x_j \quad \text{for } u_{ij} > 0$$
$$z_{ij} \geq x_i + x_j - 1, \quad z_{ij} \geq 0 \quad \text{for } u_{ij} < 0$$
$$0 \leq x_i \leq 1 \quad \forall i.$$

If $G$ is sign balanced, Hansen and Simeone showed that the constraint matrix of this linear relaxation is totally unimodular. Therefore, the relaxation is exact. Here is a short argument that establishes integrality of the relaxation based on randomized rounding (due to Bertsimas, Teo and Vohra (1999)).
- Let $(x, z)$ be an optimal solution to the relaxation.
- Generate a single random number $U$ uniformly distributed in $[0, 1]$.
- Round $x_j$ to 1 if $j \in V_1$ and $x_j \geq U$, or if $j \in V_2$ and $x_j \geq 1 - U$; round $x_j$ to 0 otherwise.
Since $1 - U$ is also uniformly distributed in $[0, 1]$, each $x_j$ is rounded to 1 with probability exactly $x_j$. For $i$ and $j$ both in $V_1$ or both in $V_2$,

$$\Pr[x_i \text{ and } x_j \text{ both round to } 1] = \min(x_i, x_j).$$

For $i \in V_1$ and $j \in V_2$,

$$\Pr[x_i \text{ and } x_j \text{ both round to } 1] = \max(x_i + x_j - 1, 0).$$

At optimality, $z_{ij} = \min(x_i, x_j)$ if $u_{ij} > 0$, and $z_{ij} = \max(x_i + x_j - 1, 0)$ if $u_{ij} < 0$. By sign balance, positive edges join vertices on the same side of the partition and negative edges join vertices on opposite sides, so the expected objective function value of the random integral solution generated in this way is

$$\sum_i (u_i - p_i) x_i + \sum_{i < j} u_{ij} z_{ij}.$$

Notice this is the optimal objective function value of the linear relaxation, and this completes the argument.
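To make the rounding argument concrete, here is a minimal sketch (mine; it assumes SciPy is available) that solves the relaxation on a made-up sign-balanced instance and then applies the single-draw rounding described above:

```python
# Hansen-Simeone relaxation plus single-U randomized rounding.
import numpy as np
from scipy.optimize import linprog

n = 3
net = np.array([2.0, 1.0, 1.5])            # u_i - p_i for each object
V1, V2 = {0, 1}, {2}                       # sign-balanced bipartition
pos = {(0, 1): 1.0}                        # u_ij > 0, inside V1
neg = {(0, 2): -2.0, (1, 2): -0.5}         # u_ij < 0, across V1 and V2

edges = list(pos) + list(neg)
m = len(edges)
u = np.array([pos[e] if e in pos else neg[e] for e in edges])
c = np.concatenate([-net, -u])             # variables: x_1..x_n, then z_e

A_ub, b_ub = [], []
for t, (i, j) in enumerate(edges):
    if (i, j) in pos:                      # z_ij <= x_i and z_ij <= x_j
        for k in (i, j):
            row = np.zeros(n + m); row[n + t] = 1.0; row[k] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    else:                                  # z_ij >= x_i + x_j - 1 (z >= 0 via bounds)
        row = np.zeros(n + m); row[i] = 1.0; row[j] = 1.0; row[n + t] = -1.0
        A_ub.append(row); b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * (n + m))
x = res.x[:n]

U = np.random.rand()                       # one uniform draw rounds every x_j
X = [int(x[j] >= U if j in V1 else x[j] >= 1 - U) for j in range(n)]
print("LP value:", -res.fun, "rounded bundle:", X)
```

Because the relaxation is exact for sign-balanced instances, the LP solution here is already integral; the rounding step is the content of the argument above, and in expectation it recovers the LP value.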
In part 2 I turn to the welfare maximization question.