Over a lunch of burgers and envy, Mallesh Pai and I discussed an odd feature of medical residencies. This post is a summary of that discussion. It began with this question: who should pay for the apprenticeship portion of a doctor's training? In the US, the apprenticeship, residency, is covered by Medicare. This was `enshrined' in the 1965 act that established Medicare:
Educational activities enhance the quality of care in an institution, and it is intended, until the community undertakes to bear such education costs in some other way, that a part of the net cost of such activities (including stipends of trainees, as well as compensation of teachers and other costs) should be borne to an appropriate extent by the hospital insurance program.
House Report, Number 213, 89th Congress, 1st Session, 32 (1965), and Senate Report, Number 404, Pt. 1, 89th Congress, 1st Session, 36 (1965).
Each year about $9.5 billion in Medicare funds and another $2 billion in Medicaid dollars go towards residency programs. There is also state government support (multiplied by Federal matching funds). At 100K residents a year, this translates into about $100K per resident. The actual amount each program receives per resident can vary (we've seen figures in the range of $50K to $150K) because of the formula used to compute the subsidy. In 1997, Congress capped the amount that Medicare would provide, which results in about 30K medical school graduates competing for about 22.5K slots.
Why should the costs of apprenticeship be borne by the government? Lawyers also undertake 7 years of study before they apprentice. The cost of their apprenticeship is borne by the organization that hires them out of law school. What makes physicians different?
There are two arguments we are aware of. First, were one to rely on the market to supply physicians, it is possible that we might get too few (think of booms and busts) in some periods. Assuming sufficient risk aversion on the part of society, there will be an interest in ensuring a sufficient supply of physicians. Note that similar arguments are also used to justify farm subsidies. In other words, insurance against shortfalls. Interestingly, we know of no lawyer with the `dershowitz' to make such a claim. Perhaps Dick the Butcher (Henry VI, Part 2, Act 4) has cowed them.
The second is summarized in the following from Gbadebo and Reinhardt:
“Thus, it might be argued … that the complete self-financing of medical education with interest-bearing debt … would so commercialize the medical profession as to rob it of its traditional ethos to always put the interest of patients above its own. Indeed, it can be argued that even the current extent of partial financing of their education by medical students has so indebted them as to place the profession’s traditional ethos in peril.”
Note, the Scottish master said as much:
“We trust our health to the physician: our fortune and sometimes our life and reputation to the lawyer and attorney. Such confidence could not safely be reposed in people of a very mean or low condition. Their reward must be such, therefore, as may give them that rank in the society which so important a trust requires. The long time and the great expense which must be laid out in their education, when combined with this circumstance, necessarily enhance still further the price of their labour.”
Interestingly, he includes lawyers.
If we turn the clock back to before WWII, hospitals paid for trainees (since internships were based in hospitals, not medical schools) and recovered the costs from patient charges. Interns were inexpensive and provided cheap labor. After WWII, the GI Bill provided subsidies for graduate medical education, residency slots increased, and institutions were able to pass along the costs to insurers. Medicare opened up the spigot and residencies became firmly ensconced in the system. Not only do they provide training, but they allow hospitals to perform a variety of other functions, such as care for the indigent, at lower cost than otherwise.
Ignoring the complications associated with the complementary activities that surround residency programs, who should pay for the residency? Three obvious candidates: insurers, hospitals and the doctors themselves. From Coase we know that in a world without frictions, it does not matter. With frictions, who knows?
Having Medicare pay makes residency slots an endowment to the institution. The slots assigned to a hospital will not reflect what's best for the intern or the healthcare system. Indeed, a recent report from the Institute of Medicine summarizes some of these distortions. However, their response is to urge better rules governing the distribution of monies.
If hospitals themselves pay, it is unclear what the effect might be. For example, as residents cost less than doctors, large hospitals may bulk up on residents and reduce their reliance on doctors. However, assuming no increase in the supply of residents, wages for residents will rise, and so on. If insurers pay, there might be overprovision of residents.
What about doctors? To practice, a doctor must have a license. The renewal fee on a medical license is, at the top end (California), around $450 a year. In Florida it is about half that amount. There are currently about 800K active physicians in the US. To recover $10 billion (the current cost of residency programs) one would have to raise the fee by about $12,500 a year. The average annual salary for the least remunerative specialties is around $150K; at the high end, about $400K. From these summary statistics, it does not appear that such an increase will break the bank, or corrupt physicians, particularly if it is pegged as a percentage rather than a flat amount. The monies collected can be funneled to the program in which the physician completed his or her residency.
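The arithmetic here can be checked in a few lines (the inputs are the rounded figures quoted above):

```python
# Back-of-the-envelope check on recovering residency costs via license fees.
# The inputs are the rounded figures quoted in the text above.
residency_cost = 10e9        # annual cost of residency programs, in dollars
active_physicians = 800e3    # approximate number of active US physicians

fee_increase = residency_cost / active_physicians
print(fee_increase)          # 12500.0 dollars per physician per year

# As a share of the salary range mentioned above:
for salary in (150e3, 400e3):
    print(round(100 * fee_increase / salary, 1))  # 8.3 then 3.1 (percent)
```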
In 1937, representatives of the Plywood Trust called upon Comrade Professor Leonid Vitalievich Kantorovich with a problem. The trust produced 5 varieties of plywood using 8 different machines. How, they asked, should they allocate their limited supply of raw materials to the various machines so as to produce the maximum output of plywood in the required proportions? As problems go, it was, from this remove, unremarkable. What is remarkable is that the Comrade Professor agreed to take it on. The so-called representatives might have been NKVD. Why? Uncle Joe's first act upon taking power in 1929 was to purge the economists, or more precisely the Jewish ones. This was well before the purge of the communist party in 1936. Why the economists? They complained about waste in a planned economy `dizzy with success.' Yet, here were the apparatchiks of the Trust asking the Comrade Professor to reduce waste.
Kantorovich writes that, at the time, he was burnt out by pure mathematics. Combined with concern at the rise of Hitler, he felt compelled to do something practical. And so he turned his mind to the problem of the Plywood Trust. Francis Spufford, in his delightful work of `faction' called Red Plenty, imagines what Kantorovich might have been thinking.
He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made.
If he was right — and he was sure he was, in essentials — then anyone applying the new method to any production situation in the huge family of situations resembling the one at the Plywood Trust, should be able to count on a measureable percentage improvement in the quantity of product they got from a given amount of raw materials. Or you could put that the other way around: they would make a measureable percentage saving on the raw materials they needed to make a given amount of product.
He didn’t know yet what sort of percentage he was talking about, but just suppose it was 3%. It might not sound like much, only a marginal gain, an abstemious eking out of a little bit more from the production process, at a time when all the newspapers showed miners ripping into fat mountains of solid metal, and the output of plants booming 50%, 75%, 150%. But it was predictable. You could count on the extra 3% year after year. Above all it was free. It would come merely by organising a little differently the tasks people were doing already. It was 3% of extra order snatched out of the grasp of entropy. In the face of the patched and mended cosmos, always crumbling of its own accord, always trying to fall down, it built; it gained 3% more of what humanity wanted, free and clear, just as a reward for thought. Moreover, he thought, its applications did not stop with individual factories, with getting 3% more plywood, or 3% more gun barrels, or 3% more wardrobes. If you could maximise, minimise, optimise the collection of machines at the Plywood Trust, why couldn’t you optimise a collection of factories, treating each of them, one level further up, as an equation? You could tune a factory, then tune a group of factories, till they hummed, till they purred. And that meant —
An English description of Kantorovich's paper appeared in the July 1960 issue of Management Science. The opening line of the paper is:
The immense tasks laid down in the plan for the third Five Year Plan period require that we achieve the highest possible production on the basis of the optimum utilization of the existing reserves of industry: materials, labor and equipment.
The paper contains a formulation of the Plywood Trust's problem as a linear program; a recognition of the existence of an optimal solution at an extreme point, as well as the hopelessness of enumerating extreme points as a solution method. Kantorovich then goes on to propose his method, which he calls the method of resolving multipliers. Essentially, Kantorovich proposes that one solve the dual and then use complementary slackness to recover the primal. One might wonder how Kantorovich's contribution differs from the contributions of Koopmans and Dantzig. That is another story, and as fair a description of the issues as I know can be found in Roy Gardner's 1990 piece in the Journal of Economic Literature. I reproduce one choice remark:
Thus, the situation of Kantorovich is rather like that of the discoverer Columbus. He really never touched the American mainland, and he didn’t give its name, but he was the first one in the area.
As an aside, David Gale is the one often forgotten in this discussion. If the Nobel committee had awarded the prize for linear programming, Dantzig and Gale would have been included. Had Gale lived long enough, he might have won it again for matching, making him the third person to have won the prize twice in the same field. The others are John Bardeen and Frederick Sanger.
Continuing with Spufford’s imaginings:
— and that meant that you could surely apply the method to the entire Soviet economy, he thought. He could see that this would not be possible under capitalism, where all the factories had separate owners, locked in wasteful competition with one another. There, nobody was in a position to think systematically. The capitalists would not be willing to share information about their operations; what would be in it for them? That was why capitalism was blind, why it groped and blundered. It was like an organism without a brain. But here it was possible to plan for the whole system at once. The economy was a clean sheet of paper on which reason was writing. So why not optimise it? All he would have to do was to persuade the appropriate authorities to listen.
Implementation of Kantorovich's solution at the Plywood Trust led to success. Inspired, Kantorovich sent a letter to Gosplan urging adoption of his methods. Here the fact that Kantorovich solved the dual first rather than the primal is important. Kantorovich interpreted his resolving multipliers (shadow prices today) as objectively determined prices. Kantorovich's letter to Gosplan urged a replacement of the price system in place by his resolving multipliers. Kantorovich intended to implement optimal production plans through appropriate prices. Gosplan responded that reform was unnecessary. Kantorovich narrowly missed a trip to the Gulag and stopped practicing economics, for a while. Readers wanting a fuller sense of what mathematical life was like in this period should consult this piece by G. G. Lorentz.
After the war, Kantorovich took up linear programming again. At Leningrad, he headed a team to reduce the scrap metal produced at the Egorov railroad-car plant. The resulting reduction in waste reduced the supply of scrap iron for steel mills, disrupting their production! Kantorovich escaped punishment by the Leningrad regional party because of his work on atomic reactors.
Kantorovich's interpretation of resolving multipliers, which he renamed objectively determined valuations, put him at odds with the prevailing labor theory of value. In the post-Stalin era, he was criticized for being under the sway of Böhm-Bawerk, author of the notion of subjective utility. Aron Katsenelinboigen relates a joke played by one of these critics on Kantorovich. A production problem was presented to Kantorovich in which the labor supply constraint would be slack at optimality. Its `objectively determined valuation' was therefore zero, contradicting the labor theory of value.
Nevertheless, Kantorovich survived. This last verse from the Ballad of L. V. Kantorovich, authored by Joseph Lakhman, explains why:
Then came a big scholar with a solution.
Alas, too clever a solution.
`Objectively determined valuations’-
That’s the panacea for each and every doubt!
Truth be told, the scholar got his knuckles rapped
For such unusual advice
That threatened to overturn the existing order.
After some thought, however, the conclusion was reached
That the valuations had been undervalued
This is the first of a series of posts about stability and equilibrium in trading networks. I will review and recall established results from network flows and point out how they immediately yield results about equilibria, stability and the core of matching markets with quasi-linear utility. It presumes familiarity with optimization and the recent spate of papers on matchings with contracts.
The simplest trading network one might imagine would involve buyers () and sellers () of a homogeneous good and a set of edges between them. No edges between sellers and no edges between buyers. The absence of an edge linking a buyer and a seller means that the two cannot trade directly. Suppose each buyer has a constant marginal value up to some amount and zero thereafter. Each seller has a constant marginal cost up to some capacity and infinity thereafter.
Under the quasi-linear assumption, the problem of finding the efficient set of trades to execute can be formulated as a linear program. Let for denote the amount of the good purchased by buyer from seller . Then, the following program identifies the efficient allocation:
This is, of course, an instance of the (discrete) transportation problem. The general version of the transportation problem is obtained by replacing each coefficient of the objective function by an arbitrary number. This version of the transportation problem is credited to the mathematician F. L. Hitchcock, who published it in 1941. Hitchcock's most famous student is Claude Shannon.
The `continuous' version of the transportation problem was formulated by Gaspard Monge and described in his 1781 paper on the subject. His problem was to split two equally large volumes (representing the initial location and the final location of the earth to be shipped) into infinitely many small particles and then `match them with each other so that the sum of the products of the lengths of the paths used by the particles and the volume of the particles is minimized.' The costs in Monge's problem have a property, since called the Monge property, that is the same as submodularity/supermodularity. This paper describes the property and some of its algorithmic implications. Monge's formulation was subsequently picked up by Kantorovich, and the study of it blossomed into the specialty now called optimal transport, with applications to PDEs and concentration of measure. That is not the thread I will follow here.
Returning to the Hitchcock, or rather the discrete, formulation of the transportation problem, let be the dual variables associated with the first set of constraints (the supply side) and the dual variables associated with the second, or demand, set of constraints. The dual is
We can interpret as the unit price of the good sourced from seller and as the surplus that buyer will enjoy at prices . Three things are immediate from the duality theorem, complementary slackness and dual feasibility.
- If is a solution to the primal and an optimal solution to the dual, then, the pair form a Walrasian equilibrium.
- The set of optimal dual prices, i.e., Walrasian prices live in a lattice.
- The dual is a (compact) representation of the TU (transferable utility) core of the co-operative game associated with this economy.
- Suppose the only bilateral contracts we allow between a buyer and a seller are when an edge links them. Furthermore, a contract can specify only a quantity to be shipped and a price to be paid. Then, we can interpret the set of optimal primal and dual solutions to be the set of contracts that cannot be blocked (suitably defined) by any buyer-seller pair.
- Because the constraint matrix of the transportation problem is totally unimodular, the previous statements hold even if the goods are indivisible.
As these are standard, I will not reprove them here. Note also that none of these conclusions depends upon the particular form of the coefficients in the objective function of the primal. We could replace them by arbitrary coefficients, interpreted as the joint gains from trade (per unit) to be shared by buyer and seller.
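To make the first two bullets concrete, here is a toy instance (two sellers, two buyers, all numbers invented): solving the primal with an off-the-shelf LP solver and reading the duals off the supply constraints yields prices that support a Walrasian equilibrium.

```python
import numpy as np
from scipy.optimize import linprog

# Toy transportation instance; all numbers are invented for illustration.
cost = np.array([10.0, 20.0])   # sellers' constant marginal costs
cap = np.array([5.0, 5.0])      # sellers' capacities
val = np.array([30.0, 15.0])    # buyers' constant marginal values
dem = np.array([6.0, 3.0])      # buyers' demand ceilings

# Variables x = (x11, x12, x21, x22): quantity from seller i to buyer j.
gain = (val[None, :] - cost[:, None]).ravel()   # per-unit gains from trade
A_ub = np.array([
    [1, 1, 0, 0],   # seller 1 capacity
    [0, 0, 1, 1],   # seller 2 capacity
    [1, 0, 1, 0],   # buyer 1 demand
    [0, 1, 0, 1],   # buyer 2 demand
])
b_ub = np.concatenate([cap, dem])
res = linprog(-gain, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")

sellers_profit = -res.ineqlin.marginals[:2]   # duals of the supply constraints
prices = cost + sellers_profit                # candidate Walrasian prices
print(-res.fun, prices)   # surplus 110.0; both sellers priced at 20.0
```

At these prices the high-value buyer purchases all six units (five from the cheap seller, one from the dear one), while the low-value buyer, whose value lies below the price, buys nothing: exactly the Walrasian outcome the duality theorem promises.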
Now, suppose we replace the constant marginal values by increasing concave utility functions, and the constant marginal costs by increasing convex cost functions. The problem of finding the efficient allocation becomes:
This is an instance of a concave flow problem. The Karush-Kuhn-Tucker conditions yield the following:
- If is a solution to the primal and an optimal Lagrangean, then, the pair form a Walrasian equilibrium.
- The set of optimal Lagrange prices, i.e., Walrasian prices live in a lattice.
- Suppose the only bilateral contracts we allow between a buyer and a seller are when an edge links them. Furthermore, a contract can specify only a quantity to be shipped and a price to be paid. Then, we can interpret the set of optimal primal and dual solutions to be the set of contracts that cannot be blocked (suitably defined) by any buyer-seller pair.
Notice, we lose the extension to indivisibility. As the objective function in the primal is now concave, an optimal solution to the primal may occur in the interior of the feasible region rather than at an extreme point. To recover `integrality' we need to impose a stronger condition on the utility and cost functions, specifically, that they be M♮-concave and M♮-convex respectively. This is a condition tied closely to the gross substitutes condition. More on this in a subsequent post.
On many campuses one will find notices offering modest sums to undergraduates to participate in experiments. When the experimenter does not attract sufficiently many subjects at the posted rate, does she raise it? Do undergraduates make counteroffers? If not, why not? An interesting contrast is medical research, where there has arisen a class of professional human guinea pigs. They have a jobzine, and the anthropologist Roberto Abadie has a book on the subject. Prices paid to healthy subjects to participate in trials vary and increase with the potential hazards. The jobzine I mentioned earlier provides ratings of the various research organizations that carry out such studies. A number of questions come to mind immediately: how are prices determined, are subjects in a position to offer informed consent, should such contracts be forbidden, and does relying on such subjects induce a selection bias?
In the March 23rd edition of the NY Times, Mankiw proposes a `do no harm' test for policy makers:
…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.
There is a qualifier for negative externalities, and he goes on to say:
As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.
Minimum wage legislation is offered as an example of a policy that fails the do no harm test.
The association with the Hippocratic oath gives it an immediate appeal. I think the test to be more Panglossian (or should I say Leibnizian) than Hippocratic.
There is an immediate `heart strings' argument against the test: indentured servitude passes the `do no harm' test. However, indentured servitude contracts are illegal in many jurisdictions (repugnant contracts?). This argument only raises more questions, like why we would rule out such contracts. I want to focus instead on two other aspects of the `do no harm' principle contained in the words `voluntarily' and `benefit'. What is voluntary, and benefit compared to what?
To fix ideas, imagine two parties who, if they work together and expend equal effort, can jointly produce a good worth $1. How should they split the surplus produced? How will they split the surplus produced? An immediate answer to the `should' question is 50-50. A deeper answer would suggest that they each receive their marginal product (or added value) of $1, but this is impossible without an injection of money from the outside. There is no immediate answer to the `will' question, as it will depend on the outside options of each of the agents and their relative patience. Suppose, for example, the outside option of each party is $0, one agent is infinitely patient and the other has a high discount rate. It isn't hard to construct a model of bargaining where the lion's share of the gains from trade goes to the patient agent. Thus, what `will' happen will be very different from what `should' happen. What `will' happen depends on the relative patience and outside options of the agents at the time of bargaining. In my extreme example of a very impatient agent, one might ask why it is that one agent is so impatient. Is it coercion when the patient agent exploits the other's impatience?
When parties negotiate to their mutual benefit, it is to their benefit relative to the status quo. When the status quo presents one agent an outside option that is untenable, say starvation, is bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the `do no harm’ principle in policy matters is the assumption that the status quo does less harm than a change in it would. This is not clear to me at all. Let me illustrate this with two examples to be found in any standard microeconomic text book.
Assuming a perfectly competitive market, imposing a minimum wage constraint above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose it was a monopsony employer constrained to offer the same wage to everyone employed. Then, imposing a minimum wage above the monopsonist’s optimal wage would increase total welfare.
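The monopsony claim is easy to verify numerically. A sketch, with an invented linear labor supply curve w(L) = 10 + L and a constant marginal revenue product of 30:

```python
# Monopsony labor market: inverse supply w(L) = a + b*L, constant marginal
# revenue product v. All parameter values are invented for illustration.
a, b, v = 10.0, 1.0, 30.0

def total_surplus(L):
    # value of output minus the area under the labor supply curve
    return v * L - (a * L + b * L ** 2 / 2)

# The monopsonist equates v with the *marginal* cost of labor, a + 2bL.
L_monopsony = (v - a) / (2 * b)      # 10 workers, at wage a + b*L = 20
# The competitive benchmark equates v with the wage, a + bL.
L_competitive = (v - a) / b          # 20 workers, at wage 30

# A minimum wage between 20 and 30: the employer hires along the supply
# curve, since each worker still produces more (30) than the mandated wage.
w_min = 25.0
L_minwage = (w_min - a) / b          # 15 workers

print(L_monopsony, L_minwage, L_competitive)                  # 10.0 15.0 20.0
print(total_surplus(L_minwage) > total_surplus(L_monopsony))  # True
```

Both employment and total welfare rise, as claimed, so long as the minimum wage sits between the monopsony wage and the marginal revenue product.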
Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. The license is sold for a single upfront fee; no royalties or other form of contingent payment. As far as I can tell, the design is an open ascending auction.
What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:
Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.
I’m skeptical. To see why, replace electricity by air travel.
Let be the set of vertices and the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in . Let be the set of directed arcs that result. Hence, means that the edge is directed from to . Notice, if , then .
Associated with each is a number that we interpret as a flow of electricity. If we interpret this to be a flow from to . If we interpret this as a flow from to .
- Let be the resistance on link .
- unit cost of injecting current into node .
- marginal value of current consumed at node .
- amount of current consumed at node .
- amount of current injected at node .
- capacity of link .
Current must satisfy two conditions. The first is conservation of flow at each node:
The second is Ohm’s law. There exist node potentials such that
Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add resistances in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
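To illustrate, the node-potential system can be solved directly to recover the schoolboy rules. A sketch (the graph encoding is my own): build the weighted Laplacian with conductances 1/r, inject one unit of current, and read the effective resistance off the potential difference.

```python
import numpy as np

def effective_resistance(n, edges, s, t):
    """Effective resistance between nodes s and t of an n-node network.
    edges is a list of (u, v, r) triples, r being the resistance of the link."""
    L = np.zeros((n, n))                 # weighted Laplacian, weights = 1/r
    for u, v, r in edges:
        g = 1.0 / r
        L[u, u] += g; L[v, v] += g
        L[u, v] -= g; L[v, u] -= g
    b = np.zeros(n)
    b[s], b[t] = 1.0, -1.0               # inject 1 A at s, withdraw it at t
    # The Laplacian is singular (potentials are defined up to a constant);
    # least squares picks one solution, and potential differences are unique.
    phi = np.linalg.lstsq(L, b, rcond=None)[0]
    return phi[s] - phi[t]

# Series rule: 1 ohm and 2 ohm in series give 3 ohm.
print(effective_resistance(3, [(0, 1, 1.0), (1, 2, 2.0)], 0, 2))   # 3.0
# Parallel rule: two 2 ohm resistors in parallel give 1 ohm.
print(effective_resistance(2, [(0, 1, 2.0), (0, 1, 2.0)], 0, 1))   # 1.0
```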
At each node there is a power supplier with a constant marginal cost of production of up to units. At each there is a consumer with a constant marginal value of up to units. A natural optimization problem to consider is
This is the problem of finding a flow that maximizes surplus.
Let be the set of cycles in . Observe that each corresponds to a cycle in if we ignore the orientation of the edges. For each cycle , let denote the edges in that are traversed in accordance with their orientation. Let be the set of edges in that are traversed in the opposing orientation.
We can project out the variables and reformulate as
Recall the scenario we ended with in part 1. Let , and in addition suppose for all . Only has a capacity constraint of 600. Let and . Also and and each have unlimited capacity. At node 3, the marginal value is up to 1500 units and zero thereafter. The optimization problem is
Notice, for every unit of flow sent along , half a unit of flow must be sent along and as well to satisfy the cycle flow constraint.
The solution to this problem is , , , , and . What is remarkable about this is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along and 900 units along . This flow violates the cycle constraint.
In this example, when generator 1 injects electricity into the network to serve customer 3’s demand, a positive amount of that electricity must flow along every path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3’s demands. However, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently to supply power. The shared infrastructure requires that they co-ordinate what they inject into the system. This need for coordination is the argument for a clearing house not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.
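The example can be checked numerically. A sketch, assuming as in the setup above unit resistances on all three links, a 600-unit capacity on the link from 1 to 3, marginal costs of $20 and $40, and (my choice; any sufficiently large number gives the same flows) a marginal value of $100 per unit at node 3:

```python
from scipy.optimize import linprog

# Variables: x = (q1, q2, d3, f12, f13, f23), where q_i is generator i's
# injection, d3 is consumption at node 3, f_uv the signed flow on link (u,v).
c = [20.0, 40.0, -100.0, 0.0, 0.0, 0.0]   # minimise generation cost minus value
A_eq = [
    [1, 0, 0, -1, -1, 0],    # node 1: q1 = f12 + f13
    [0, 1, 0, 1, 0, -1],     # node 2: q2 + f12 = f23
    [0, 0, -1, 0, 1, 1],     # node 3: f13 + f23 = d3
    [0, 0, 0, 1, -1, 1],     # Ohm's law around cycle 1-2-3-1, unit resistances
]
b_eq = [0.0] * 4
bounds = [(0, None), (0, None), (0, 1500),          # injections, demand ceiling
          (None, None), (-600, 600), (None, None)]  # only link (1,3) is capacitated
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
q1, q2, d3, f12, f13, f23 = res.x
print(round(q1), round(q2), round(d3))   # 300 1200 1500
```

The cheap generator, despite its unlimited capacity, supplies only 300 units: pushing more through the network would overload the capacitated link, and the flow on the link between 1 and 2 runs backwards to accommodate generator 2.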
The externalities caused by electricity flows are not proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual of the surplus maximization problem.
Let be the dual variable associated with the flow balance constraint. Let be associated with the cycle constraints. Let and be associated with the link capacity constraints. Let and be associated with the remaining two constraints. These can be interpreted as the profit of the supplier and the surplus of the customer respectively. For completeness, the dual would be:
Now has a natural interpretation as a price to be paid for consumption at node for supply injected at node . and can be interpreted as the price of capacity. However, is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link, as well as ownership of cycles, in order to have a market generate these prices.
In this, the second lecture, I focus on electricity markets. I’ll divide the summary of that lecture into two parts.
Until the 1980s, electricity markets around the world operated as regulated monopolies. Generation (power plants) and distribution (the wires) were combined into a single entity. Beginning with Chile, a variety of Latin American countries started to privatize their electricity markets. So, imagine you were a bright young thing in the early 1980s, freshly baptised in the waters of Lake Michigan off Hyde Park. The General approaches you and says: I want a free market in electricity, make it so (Quiero un mercado libre de la electricidad, que así sea). What would you recommend?
Obviously, privatize the generators by selling them off, perhaps at auction (or, given one's pedigree, allocate them at random and allow the owners to trade among themselves). What about the wires that carry electricity from one place to another? Tricky. The owner of a wire will have monopoly power, unless there are multiple parallel wires. However, that would lead to inefficient duplication of resources. As a first pass, let's leave the wires in Government hands. Not obviously wrong; we do that with the road network. The Government owns and maintains it and for a fee grants access to all.
So, competition to supply power but central control of the wires. Assuming an indifferent and benign authority controlling the wires, what will the market for generation look like? To fix ideas, consider a simple case. Two generators and a customer .
Generator 1 has an unlimited supply and a constant marginal cost of production of $20 a unit. Generator 2 has an unlimited supply and a constant marginal cost of production of $40 a unit. Customer 3 has a constant marginal value of up to 1500 units and zero thereafter. Assume this marginal value to be sufficiently large to make all subsequent statements true. Initially there are only two wires, one from generator 1 to customer 3 and the other from generator 2 to customer 3. Suppose all three agents are price takers. Then, the Walrasian price for this economy will be $20. For customer 3 this is clearly a better outcome than unregulated monopoly, where the price would be . What if the price taking assumption is not valid? An alternative model would be Bertrand competition between 1 and 2, so the outcome would be a `hair's breadth' below $40. Worse than the Walrasian outcome, but still better than unregulated monopoly. It would seem that deregulation is a good idea and, as the analysis above suggests, there is no necessity for a market to be designed. There is a catch: is an unregulated monopolist the right benchmark? Surely a regulated monopolist would be better. It's not clear that one does better than the regulated monopolist.
Now let’s add a wrinkle. Suppose the wire between 1 and 3 has a capacity of 600 units. There are two ways to think of this capacity constraint. The first is as a capacity constraint on generator 1 that we have chosen to model as a constraint on the wire. The second is that it is indeed a constraint on the wire itself. The difference is not cosmetic, as we shall see in a moment.
Suppose it’s a constraint on generator 1’s capacity. Then, under the price-taking assumption, the Walrasian price in this economy will be $40. An alternative model of competition would be Bertrand-Edgeworth. In general, equilibria are mixed but, whatever the mixture, the expected price per unit that customer 3 pays cannot exceed $40. In both cases, the outcome is better for customer 3 than under an unregulated monopolist.
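A back-of-the-envelope check of the capped case, following the merit-order logic behind the Walrasian price (variable names are mine, chosen for illustration):

```python
# Merit-order dispatch when generator 1 is capped at 600 units:
# the cheap generator supplies up to its cap, generator 2 supplies
# the residual, and the marginal generator sets the Walrasian price.
demand = 1500
cap_1, mc_1 = 600, 20            # generator 1: capacity 600, cost $20
mc_2 = 40                        # generator 2: unlimited, cost $40

from_1 = min(cap_1, demand)      # 600 units from the cheap generator
from_2 = demand - from_1         # 900 residual units from generator 2
price = mc_2 if from_2 > 0 else mc_1

print(from_1, from_2, price)     # 600 900 40
```

Because 900 units must come from generator 2, the marginal unit costs $40, which is why the Walrasian price jumps from $20 to $40.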
Assume now the capacity constraint is on the wire instead. Under the price-taking assumption, at a price of $20 a unit, generator 1 is indifferent between supplying any non-negative amount, while generator 2 supplies nothing. However, there is no way for supply to meet demand: generator 1 may offer 1500 units, but the wire can carry only 600 of them. Why does the Walrasian analysis break down? In the usual Walrasian set-up each agent reports its supply and demand correspondence based on posted prices and its own information only. To obtain a sensible answer in this case, generator 1 must be aware of the capacity of the network into which its supply will be injected. As the next scenario shows, this is not easy when it comes to electricity.
Suppose there is now a link joining generators 1 and 2 with no capacity constraint. There is still a 600-unit capacity constraint on the link between 1 and 3. One might think that in this scenario customer 3 can receive all of its demand from generator 1, with the excess routed through generator 2’s node. It turns out that this is not possible because of the way electricity flows in networks.
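Why routing everything through the new link fails can be seen with a standard DC load-flow approximation. The sketch below assumes, purely for illustration, that all three lines have equal reactance (an assumption not in the text). Power injected at node 1 and withdrawn at node 3 cannot be steered: it splits across the two parallel paths in inverse proportion to their reactance, so 2/3 flows on the direct line 1-3 and 1/3 on the indirect path 1-2-3. The 600-unit limit on line 1-3 then caps generator 1's deliveries at 900 units:

```python
# DC load-flow sketch on the three-node loop with equal line
# reactances: path 1-3 has reactance x, path 1-2-3 has reactance 2x,
# so 2/3 of any injection at node 1 bound for node 3 flows on the
# direct line and 1/3 on each leg of the indirect path.
def loop_flows(P):
    """Line flows induced by sending P units from node 1 to node 3."""
    return {"1-3": 2 * P / 3, "1-2": P / 3, "2-3": P / 3}

def max_delivery(cap_13):
    """Largest P generator 1 can deliver, given the line 1-3 cap:
    the binding constraint is (2/3) * P <= cap_13."""
    return cap_13 * 3 / 2

print(loop_flows(1500))   # line 1-3 would carry 1000 > 600: infeasible
print(max_delivery(600))  # 900.0: generator 1 cannot serve all 1500 units
```

So even with an uncongested detour available, the physics of parallel flows keeps the direct-line constraint binding, and customer 3 must still buy 600 units from generator 2.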
I began the first class with a discussion of the definition of market design. Here, as best I can, I recall my remarks.
The flippant answer is that it’s whatever Alvin Roth says it is. A serious answer is the same; witness the following from Econometrica 2002:
…..design involves a responsibility for detail; this creates a need to deal with complications. Dealing with complications requires not only careful attention to the institutional details of a particular market, it also requires new tools, to supplement the traditional analytical toolbox of the theorist.
Two items to highlight: institutional details and new tools. So, it is not `generic’ theorizing, and one might wish to (should?) use experiments and data to support an analysis.
The NBER offers the following:
“Market design” examines the reasons why market institutions fail and considers the properties of alternative mechanisms, in terms of efficiency, fairness, incentives, and complexity. Research on market design is influenced by ideas from industrial organization and microeconomic theory; it brings together theoretical, empirical, and experimental methods, with an aim of studying policy-relevant tradeoffs with practical consequences.
Notice the concern with market failure, the clearly normative perspective and the intent to influence policy. Finally, there is the Journal of Economic Literature which now recognizes it as a subfield (JEL classification D47) and offers this definition:
Covers studies concerning the design and evolution of economic institutions, the design of mechanisms for economic transactions (such as price determination and dispute resolution), and the interplay between a market’s size and complexity and the information available to agents in such an environment.
In this definition no methodological stand is taken nor is there an explicit concern for practical consequences. Two additional lines, that I do not reproduce, exclude straight mechanism design and straight empirical work.
Where does that leave us? Clearly, decision theory, repeated games, and refinements are not market design. What about the study of tax policy to encourage investments in human capital? What about labor economics, in particular the thread related to search and matching? Regulating oligopoly and merger policy? By my reading they count. Are the labor economists all going to label their papers D47? Unlikely. We generally know whom we are writing for, so perhaps the flippant answer I gave first is the correct one!
Do I have a proposal? Yes, Richard Whately’s 1831 suggestion for what to call what we now dub economics.
Next, what principles, if any, are there to guide the market designer? Roth’s 2007 Hahn lecture suggests three:
1) Make the Market Thick
By which he means encouraging coordination on a single venue for trade. Why? Presumably because it reduces search costs and increases the speed of execution. One way (not the only one) of formalizing the notion of thickness is stability or the core.
2) Reduce Congestion
Roth argues that as a market thickens, it produces congestion. This runs counter to the benefits of thickness: increasing the speed of execution. What he has in mind are markets where transactions are heterogeneous and offers are personalized. During the time in which an offer is being evaluated, other potential opportunities may evaporate. I’m not yet convinced that thickness produces congestion in this sense. I cannot see why the time to evaluate an offer and conclude a transaction should depend on the thickness of the market. However, in order to benefit from the increased opportunities that thickness provides, it makes sense that one would want to increase the speed at which offers are screened. I think the correct way to phrase this might be in terms of bottlenecks. As the market thickens, steps in the trading process that were not bottlenecks might become so. Not removing them defeats the gains to be had from thickness.
3) Discourage Welfare Reducing Strategic Behavior
Equivalently, make participation simple.
While these 3 items are useful rules of thumb in thinking about design goals, they do not cover the details the designer should pay attention to in achieving them. I’ve tried to list what I think these are below.
a) Carefully decide on the asset to be traded.
I’m channeling Coase. To illustrate, recall the use of auctions to allocate spectrum rights, a frequently heralded example of successful market design. The use of auctions takes as given that the asset to be traded is rights to a range of frequencies. Those rights protect the holder from `harmful interference’. However, as Coase himself observed, an entirely different asset should be the object of analysis:
What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies or the ether.
As one can imagine, this might lead to a very different way of organizing the market for wireless communication. For an illustration of how different views on spectrum property rights affect market outcomes, see the spirited piece on the LightSquared debacle by Hazlett and Skorup. An important takeaway from this piece is the (constructive/destructive) role that government plays in markets.
b) Nature of contracts.
What kinds of contracts are feasible? Must they stipulate a uniform price? Can they be perpetual (indentured servitude is typically outlawed)? We know, for example, that Walrasian prices can implement certain outcomes but not others.
c) What is the medium of exchange?
In some settings we have money, in others it is ruled out. That money is ruled out does not eliminate other mediums of exchange. Prisoners have been known to use cigarettes. One can trade years of service for preferential postings. One can exchange kidneys for kidneys. What about livers for kidneys, or health insurance for livers?
d) What is the measure of performance and what is the status quo?
Typically one is concerned with proposing a set of changes to an existing institution. What is the measure by which we decide that a proposed change is a good one? I rope into this question the decision about which agents’ preferences matter in the design. In the literature on resident matching the focus is on stability (thickness). However, one might also focus on the effect on wages. Does the use of a stable mechanism depress wages for interns? If one focuses on wages, then one must specify what the status quo is. For example, is it one where wages are set by a perfectly competitive centralized market? Or is it an imperfectly competitive one where wages are set by bilateral contracting? How would one model this (see Bulow and Levin for an example)? Even if the status quo were a monopoly, is it regulated or unregulated?
In choosing a measure of performance one will bump up against the `universal interconnectedness of all things’. In ancient times this challenge was called `regulating the second best’. Imagine a polluting monopolist charging a uniform price. If we replace our monopolist by a perfectly competitive market we reduce the distortion caused by high prices but increase pollution (assuming it increases with output). To make headway one must be prepared to draw a boundary and ignore what happens outside of it.
e) Why does a market need to be designed?
Many markets are the product of evolution rather than intelligent design. So, why is it necessary to design a market? One answer is that there is an externality that is difficult to price without a high degree of coordination. Electricity markets are offered as an example. In the second lecture we will examine this in more detail.
This week saw the start of my second attempt at a graduate course on market design. Although it is a reading course (in the sense that students present papers), I began with a survey of what I think are the main ideas one should be familiar with. With over 20 people in the room, I was sweating bullets, having expected at most half that. I will post the substance of the first lecture at a later date. In the next session I will use electricity markets as a vehicle to show how one should think about the questions to be answered, as well as the kinds of issues one faces when designing a market.
The list of papers to be covered appears below, along with the instructions I gave the students on presentations.
Presentation Rules: I expect every presentation to accomplish the following (in addition to the usual of having well prepared slides, understanding and answering questions etc.):
- Provide background for the issue to be discussed.
- An unambiguous statement of the issue and an explanation for why its resolution is not obvious.
- A description of the resolution proposed by the author(s) of the paper. This is where one discusses the model in the paper.
- An exposition of the analysis of the model. A good exposition will focus on the main point and perhaps even identify a telling example that conveys it as well as the full model does. The point of such an exposition is to show the role of each assumption in arriving at the authors’ main conclusion.
- A critique. Did the authors address the right question? Is the way they chose to frame the issue sensible? Is the model persuasive? If not, why not? Simply saying it’s not general enough is insufficient. No model, if it is to be tractable, can be fully general.
- Suggestions for future research.
Topics & Papers
1. IPOs
Jagannathan and Sherman: Why Do IPO Auctions Fail?
Ritter: Equilibrium in the IPO market
Background Reading (to be read by all)
Ljungqvist: IPO Underpricing: A Survey
Ritter and Welch: A review of IPO activity, pricing and allocations
2. Health Care & Insurance Markets
Cochrane, J.: Time-Consistent Health Insurance, JPE 1995.
Cochrane, J.: After the ACA: Freeing the Market for Health Care
Fang & Gavazza: Dynamic Inefficiencies in an Employment-Based Health Insurance System: Theory and Evidence
Handel, Hendel and Whinston: Equilibria in Health Exchanges: Adverse Selection vs. Reclassification Risk
Levine, Kremer and Albright: Making Markets for Vaccines
Kremer: Creating Markets for New Vaccines Part 1: rationale
Kremer: Creating Markets for New Vaccines Part 2: design issues
Background Reading (to be read by all)
Rothschild, M. & J. Stiglitz: Equilibrium in Competitive Insurance Markets, QJE 1976.
Fang, H: Insurance Markets in China
3. Market for Cybersecurity Insurance
Kesan, Majuca & Yurcik: Cyberinsurance as a market based solution to the problem of cybersecurity
4. Affirmative Action
Hickman: Effort, Race Gaps and Affirmative Action: A Game Theoretic Analysis of College Admissions
Chung: Affirmative Action as an Implementation Problem
Fryer and Loury: Valuing Identity: The simple economics of Affirmative action policies
Background Reading (to be read by all)
Fang and Moro: Theories of Statistical Discrimination and Affirmative Action: a survey
5. Assigning Counsel
Friedman & Schulhofer: Rethinking Indigent Defense: Promoting Effective Representation Through Consumer Sovereignty and Freedom of Choice for All Criminal Defendants
6. The Role of Politics
Acemoglu: Why not a political Coase theorem?
Acemoglu: Modeling Inefficient Institutions