The other day, Andrew Postlewaite remarked that it is very hard to find a PhD economist whose academic ancestor, thrice removed, was not a mathematician. Put differently, which PhD economists can trace their lineage back to Marshall, Keynes and perhaps even the Scottish master himself? An obvious problem is that it is unclear what it means for so-and-so to be one's academic father. A strict definition might be thesis advisor. However, the PhD degree as we know it (some combination of study and research apprenticeship) is a relatively new thing. Arguably, the first modern PhD was granted by Yale in 1861. Doctoral degrees were available in Germany prior to that. However, that degree was awarded upon submission of a body of work; there was no formal apprenticeship requirement. The UK did not introduce a doctorate until the early 1900s, and that mimicked the German degree (and was introduced, apparently, to compete for US students who were flocking to Germany).
So, let's start at Yale with Irving Fisher. A celebrated economist, and justly so, at an institution that was the first to hand out PhD degrees. Fisher himself was a student of Josiah Willard Gibbs (mathematician and physicist, and, if you believe the mathematical genealogy project, descended from Poisson). What about Fisher's descendants? Not a single one of the laudatory pieces on Fisher here mentions his students. Some digging uncovered James Harvey Rogers, who went on to become Sterling Professor of Economics at Yale and a panjandrum in the Treasury. The university maintains an archive of his papers. Rogers also studied with Pareto. Rogers begat Walt Whitman Rostow. Rostow begat Everett Clyde Upshaw, and that is where the line ends.
Let's try one more: Richard T. Ely, after whom the AEA has named one of its lecture series, and who is credited as a founder of land economics. The Kirkus review of 1938 warmly endorses his autobiography, `The Ground Under Our Feet.' Ely begat John R. Commons, W. A. Scott and E. A. Ross. Commons begat Edwin Witte, the father of social security. Wikipedia credits Commons with influencing Gunnar Myrdal, Oliver Williamson and Herbert Simon, but `influencing' is not the same as thesis advisor. This line seems promising, but other duties intrude.
Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. The license is sold for a single upfront fee; there are no royalties or other forms of contingent payment. As far as I can tell, the design is an open ascending auction.
My former colleague Asher Wolinsky once remarked that development economists had better hurry up lest the regions they studied become developed. From the March 2nd, 2014 edition of the NY Times comes an announcement that the fateful day is upon us. The title of the piece is `The End of the Developing World'.
In the movie Elysium, the 1% set up a gated community in space to separate themselves from the proles. Why only one Elysium? On Earth, there is still a teeming mass of humanity that needs goods and services. Fertile ground for another 1% to arise to meet these needs and eventually build another Elysium. Perhaps there is no property rights regime on Earth that encourages investment. Not so in the movie, because there is a police force and a legal and parole system, apparently administered by robots. Furthermore, the robots are controlled by the 1% off site. Why do the 1% need to maintain control of the denizens of Earth? Elysium appears to be completely self-sustaining; no resources are apparently needed from the Earth. The only visible operation run by Elysium on Earth is a company that manufactures robots. The head man is an Elysium expatriate, but everyone else working at the factory is a denizen of Earth. Is Earth a banana republic to which Elysium outsources production? No, that contradicts the self-sustaining assumption made earlier. In short, the economics of the world envisioned in the movie makes no sense. It used to be that scientific plausibility was a constraint on science fiction (otherwise it's fantasy, or magical realism for snobs). I'd add another criterion: economic plausibility. Utopias (or dystopias) must be economically plausible. With these words, can I lay claim to having started a new branch of literary criticism: the economic analysis of utopian/dystopian fiction?
Back to the subject of this post. Pay attention to the robots in the movie. They have the agility and dexterity of humans. They are stronger. They can even detect sarcasm. Given this, it's unclear why humans are needed to work in the robot factory. Robots could be used to repair robots and produce new ones. What would such a world look like? Well, I need only one `universal' robot to begin with, to produce and maintain robots for other tasks: farming, medical care, construction and so on. Trade would become unnecessary for most goods and services. The only scarce resources would be the materials needed to produce and maintain robots (metals, rare earths and the like). Profits would accrue to the individuals who owned these resources. These individuals might trade among themselves, but would have no reason to trade with anyone outside this group. So, a small group of individuals would maintain large armies of robots to meet their needs and maintain their property rights over the inputs to robot production. Everyone else is surplus to requirements. That's a movie I would go to the cinema to see!
From the New York Times comes a straightforward example of third-degree price discrimination. Prices of certain luxury vehicles are much higher in China than in the U.S. The Porsche Cayenne, for example, has a base price of $150,000 in China but $50,000 in the U.S. Price discrimination invites arbitrage, and the invitation in this case is so generous that many people have accepted. Curiously, some of those who have accepted have been arrested, charged and fined for mail fraud and violations of customs laws.
Manufacturers place restrictions in their contracts with dealers to prohibit this sort of arbitrage, in part because cars produced for sale in one country will not comply with extant regulation in another. Interestingly, it is illegal for anyone other than the original equipment manufacturer to export NEW cars overseas. USED cars are an entirely different matter. US Customs believes that if I buy a new car and drive it straight to the port to ship to China, it remains a new car. On the other hand, if I drive it home, it is used. Subsequent cases will turn upon the question of what makes a car new rather than used.
As an aside, Ken Sparks, spokesman for BMW North America, defended the Government's vigorous pursuit of the arbitrageurs with these words:
Illegal exports deny legitimate customers here in the U.S. the popular vehicles, which are in high demand.
I can only imagine the pain of being denied a BMW. Perhaps BMW could better show its concern by giving the cars away for free.
In an earlier pair of posts I discussed a class of combinatorial auctions in which agents have binary quadratic valuations. To formulate the problem of finding a welfare maximizing allocation, let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by
$$u_i(S) = \sum_{j \in S} u_{ij} + \sum_{j, k \in S: j < k} w^i_{jk}.$$
The problem of maximizing total welfare is
$$\max \sum_{i}\sum_j u_{ij}x_{ij} + \sum_i \sum_{j<k} w^i_{jk}x_{ij}x_{ik} \quad \mbox{s.t.} \quad \sum_i x_{ij} \leq 1 \,\, \forall j, \quad x_{ij} \in \{0,1\} \,\, \forall i, j.$$
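Before turning to the LP relaxation, a quick sanity check: for a tiny instance the welfare maximization problem can be solved by brute force enumeration of assignments. A minimal sketch (the instance below, with two agents and three objects, is made up for illustration):

```python
from itertools import product

# Hypothetical instance: 2 agents, 3 objects.
# u[i][j]: value of object j alone to agent i.
u = [[3, 1, 2],
     [2, 4, 1]]
# w[i][(j, k)]: extra value to agent i from holding both j and k.
w = [{(0, 1): 2, (1, 2): -1},
     {(0, 1): 2, (1, 2): -3}]

def welfare(assignment):
    # assignment[j] in {0, 1, None}: the agent receiving object j.
    total = 0
    for i in range(2):
        bundle = [j for j, a in enumerate(assignment) if a == i]
        total += sum(u[i][j] for j in bundle)
        total += sum(v for (j, k), v in w[i].items()
                     if j in bundle and k in bundle)
    return total

# Enumerate all 3^3 assignments and keep the best one.
best = max(product([0, 1, None], repeat=3), key=welfare)
print(best, welfare(best))  # (1, 1, 0) with welfare 10
```

At this scale enumeration is instant; the point of the LP relaxation discussed next is that, under the Candogan-Ozdaglar-Parrilo conditions, one escapes the exponential blow-up.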
I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called sign consistency: for each pair of objects $j$ and $k$, the sign of $w^i_{jk}$ and $w^{i'}_{jk}$ is the same for any pair of agents $i$ and $i'$. Furthermore, this applies to all pairs $(j,k)$.
Let $G$ be a graph whose vertex set is the set of objects, and for any pair $(j,k)$ such that $w^i_{jk} \neq 0$ introduce an edge $(j,k)$. Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $w^i_{jk}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. The second condition is that $G$ be a tree.
The following is the relaxation that they consider:
$$\max \sum_i \sum_j u_{ij}x_{ij} + \sum_i \sum_{j<k} w^i_{jk} z^i_{jk}$$
subject to
$$\sum_i x_{ij} \leq 1 \,\, \forall j$$
$$z^i_{jk} \leq x_{ij}, \quad z^i_{jk} \leq x_{ik} \,\, \forall (j,k) \in E^+, \, \forall i$$
$$z^i_{jk} \geq x_{ij} + x_{ik} - 1 \,\, \forall (j,k) \in E^-, \, \forall i$$
$$x_{ij}, z^i_{jk} \geq 0 \,\, \forall i, j, k.$$
Denote by $P$ the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of $P$ are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: "why the bleep don't you just read the proof we have."
Let $G_1, \ldots, G_k$ be the maximal connected components of $G$ after deletion of the edges in $E^-$ (call this graph $G \setminus E^-$). The proof will be by induction on $k$. The case $k = 1$ follows from total unimodularity; I prove this later.
Suppose $k \geq 2$. Let $(x^*, z^*)$ be an optimal solution to our linear program. We can choose $(x^*, z^*)$ to be an extreme point of $P$. As $G$ is a tree, there must exist a component $G_r$ incident to exactly one negative edge, say $(p,q)$ with $p \in G_r$. Denote by $P_r$ the polyhedron $P$ restricted to just the vertices of $G_r$ and by $P_{-r}$ the polyhedron $P$ restricted to just the vertices in the complement of $G_r$. By the induction hypothesis, both $P_r$ and $P_{-r}$ are integral polyhedra. Each extreme point of $P_r$ ($P_{-r}$) assigns a vertex of $G_r$ (the complement of $G_r$) to a particular agent. Let $X_r$ be the set of extreme points of $P_r$. If in extreme point $v \in X_r$, vertex $j$ is assigned to agent $i$, we write $v_{ij} = 1$ and zero otherwise. Similarly with the set $Y_r$ of extreme points of $P_{-r}$: thus $y_{ij} = 1$ if $y \in Y_r$ assigns vertex $j$ to agent $i$. Let $c_v$ be the objective function value of the assignment $v$, and similarly $c_y$ for $y$.
Now $(x^*, z^*)$ restricted to $G_r$ can be expressed as $\sum_{v \in X_r} \lambda_v v$. Similarly, $(x^*, z^*)$ restricted to the complement of $G_r$ can be expressed as $\sum_{y \in Y_r} \mu_y y$. We can now reformulate our linear program as follows:
$$\max \sum_{v \in X_r} c_v \lambda_v + \sum_{y \in Y_r} c_y \mu_y + \sum_i w^i_{pq} z^i_{pq}$$
subject to
$$\sum_{v \in X_r} \lambda_v = 1$$
$$\sum_{y \in Y_r} \mu_y = 1$$
$$z^i_{pq} \geq \sum_{v: v_{ip} = 1} \lambda_v + \sum_{y: y_{iq} = 1} \mu_y - 1 \,\, \forall i$$
$$\lambda_v, \mu_y, z^i_{pq} \geq 0.$$
The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because $v_{ip}$ and $v_{i'p}$ cannot both be 1, and similarly with the $y$'s). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can be easily modified to cover the case where every cycle in $G$ contains a positive even number of negative edges.
Return to the case $k = 1$. Consider the polyhedron $P$ restricted to just one component $G_r$. It will have the form:
$$\sum_i x_{ij} \leq 1 \,\, \forall j \in G_r$$
$$z^i_{jk} \leq x_{ij}, \quad z^i_{jk} \leq x_{ik} \,\, \forall (j,k) \in E^+ \cap G_r, \, \forall i$$
$$x_{ij}, z^i_{jk} \geq 0.$$
Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset $R$ of rows/constraints. The goal is to partition them into two sets $R_1$ and $R_2$ so that, column by column, the sum of the entries in $R_1$ and the sum of the entries in $R_2$ differ by at most one.
Observe that the rows associated with the constraints $\sum_i x_{ij} \leq 1$ are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows to satisfy the GH theorem. If $z^i_{jk} \leq x_{ij}$ is present in $R$ but $z^i_{jk} \leq x_{ik}$ is absent (or vice versa), we are free to assign the row associated with $z^i_{jk} \leq x_{ij}$ in any way to satisfy the GH theorem. The difficulty arises when both $z^i_{jk} \leq x_{ij}$ and $z^i_{jk} \leq x_{ik}$ are present in $R$. To ensure that the GH theorem is satisfied, we may have to ensure that the rows associated with $z^i_{jk} \leq x_{ij}$ and $z^i_{jk} \leq x_{ik}$ are separated.
When $R$ is the set of all constraints, we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows to $R_1$ and $R_2$, making sure that after each assignment the conditions of the GH theorem are satisfied for the rows assigned so far. It will be clear that the procedure can also be applied when only a subset of constraints is present (indeed, satisfying the GH theorem is easier in this case).
Fix an agent $i$. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in $G_r$ (which is a tree) to be a root and direct all edges `away' from the root (when $R$ is a subset of the constraints, we delete from $G_r$ any edge $(j,k)$ for which at most one of the pair $z^i_{jk} \leq x_{ij}$ and $z^i_{jk} \leq x_{ik}$ appears in $R$). Label the root $+$. Label all its neighbors $-$, label the neighbors of the neighbors $+$, and so on. If vertex $j$ was labeled $+$, assign the row $\sum_i x_{ij} \leq 1$ to the set $R_1$, otherwise to the set $R_2$. This produces a partition of the constraints of the form $\sum_i x_{ij} \leq 1$ satisfying GH.
Initially, all leaves and edges of $G_r$ are unmarked. Trace out a path from the root to one of the leaves of $G_r$ and mark that leaf. Each unmarked directed edge $(j,k)$ on this path corresponds to the pair $z^i_{jk} \leq x_{ij}$ and $z^i_{jk} \leq x_{ik}$. Assign $z^i_{jk} \leq x_{ij}$ to the set that is the label of vertex $j$. Assign $z^i_{jk} \leq x_{ik}$ to the set that is the label of vertex $k$. Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge $(j,k)$. If we repeat this procedure with another path from the root to an unmarked leaf, we could violate the GH theorem. To see why, suppose the tree contains the edge $(j,k)$ as well as $(k,l)$. Suppose $k$ was labeled $-$ on the first iteration and $(j,k)$ was marked. This means $z^i_{jk} \leq x_{ik}$ was assigned to $R_2$. Subsequently $z^i_{kl} \leq x_{ik}$ would also be assigned to $R_2$, which would produce a partition that violates the GH theorem. We can avoid this problem by flipping the labels on all the vertices before repeating the path tracing procedure.
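As a sanity check on the unimodularity claims above, total unimodularity of a small instance can be verified directly by enumerating square submatrices. A sketch (the instance, two agents and two objects joined by a single positive edge, is my own toy example; the brute-force test is exponential and only sensible at this size):

```python
from itertools import combinations
import numpy as np

# Toy instance of the k = 1 polyhedron: 2 agents, 2 objects, one
# positive edge {0, 1}. Variable order:
# x_{agent0,obj0}, x_{agent0,obj1}, x_{agent1,obj0}, x_{agent1,obj1},
# z^0, z^1  (one z per agent for the single edge).
A = np.array([
    [1, 0, 1, 0, 0, 0],   # x_{00} + x_{10} <= 1
    [0, 1, 0, 1, 0, 0],   # x_{01} + x_{11} <= 1
    [-1, 0, 0, 0, 1, 0],  # z^0 <= x_{00}
    [0, -1, 0, 0, 1, 0],  # z^0 <= x_{01}
    [0, 0, -1, 0, 0, 1],  # z^1 <= x_{10}
    [0, 0, 0, -1, 0, 1],  # z^1 <= x_{11}
])

def is_totally_unimodular(M):
    # Check that every square submatrix has determinant in {-1, 0, 1}.
    m, n = M.shape
    for t in range(1, min(m, n) + 1):
        for rows in combinations(range(m), t):
            for cols in combinations(range(n), t):
                d = round(np.linalg.det(M[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(A))  # True
```

For contrast, the incidence matrix of an odd cycle (e.g. `[[1,1,0],[0,1,1],[1,0,1]]`, with determinant 2) fails the same test.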
What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:
Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.
I’m skeptical. To see why, replace electricity by air travel.
Let $N$ be the set of vertices and $E$ the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in $E$. Let $A$ be the set of directed arcs that result. Hence, $(i,j) \in A$ means that the edge $\{i,j\}$ is directed from $i$ to $j$. Notice, if $(i,j) \in A$, then $(j,i) \not\in A$.
Associated with each $(i,j) \in A$ is a number $x_{ij}$ that we interpret as a flow of electricity. If $x_{ij} > 0$ we interpret this to be a flow from $i$ to $j$. If $x_{ij} < 0$ we interpret this as a flow from $j$ to $i$.
- $r_{ij}$: the resistance on link $(i,j)$.
- $c_i$: unit cost of injecting current into node $i$.
- $V_i$: marginal value of current consumed at node $i$.
- $d_i$: amount of current consumed at node $i$.
- $s_i$: amount of current injected at node $i$.
- $K_{ij}$: capacity of link $(i,j)$.
Current must satisfy two conditions. The first is conservation of flow at each node:
$$s_i + \sum_{j: (j,i) \in A} x_{ji} = d_i + \sum_{j: (i,j) \in A} x_{ij} \,\, \forall i \in N.$$
The second is Ohm's law: there exist node potentials $\{\phi_i\}_{i \in N}$ such that
$$x_{ij} = \frac{\phi_i - \phi_j}{r_{ij}} \,\, \forall (i,j) \in A.$$
Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add resistances in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
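The schoolboy rules can be checked numerically: inject a unit of current at one node, withdraw it at another, solve the conservation and Ohm's law equations for the node potentials, and read off the effective resistance. A minimal sketch (the resistance values are made up for illustration):

```python
import numpy as np

def effective_resistance(n, edges, source, sink):
    """edges: list of (i, j, r) with resistance r on the link {i, j}.
    Inject 1 unit of current at `source`, withdraw it at `sink`,
    solve for the node potentials, and return phi_source - phi_sink."""
    L = np.zeros((n, n))  # weighted graph Laplacian (conductances)
    for i, j, r in edges:
        g = 1.0 / r
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    b = np.zeros(n)
    b[source], b[sink] = 1.0, -1.0
    # Potentials are defined up to a constant, so ground the sink node.
    phi = np.zeros(n)
    keep = [k for k in range(n) if k != sink]
    phi[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    return phi[source] - phi[sink]

# Two links of resistance 2 and 3 in series: 2 + 3 = 5.
print(effective_resistance(3, [(0, 1, 2.0), (1, 2, 3.0)], 0, 2))
# The same two links in parallel: 1 / (1/2 + 1/3) = 1.2.
print(effective_resistance(2, [(0, 1, 2.0), (0, 1, 3.0)], 0, 1))
```

The matrix `L` is just the flow conservation and Ohm's law system written in terms of the potentials.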
At each node $i \in N$ there is a power supplier with a constant marginal cost of production of $c_i$ for up to $S_i$ units. At each node $i \in N$ there is a consumer with a constant marginal value of $V_i$ for up to $D_i$ units. A natural optimization problem to consider is
$$\max \sum_{i \in N} V_i d_i - \sum_{i \in N} c_i s_i$$
subject to flow conservation, Ohm's law, $|x_{ij}| \leq K_{ij}$ for all $(i,j) \in A$, and $0 \leq d_i \leq D_i$, $0 \leq s_i \leq S_i$ for all $i \in N$.
This is the problem of finding a flow that maximizes surplus.
Let $C$ be the set of cycles in $(N, E)$. Observe that each $Z \in C$ corresponds to a cycle in $(N, A)$ if we ignore the orientation of the arcs. For each cycle $Z \in C$, let $Z^+$ denote the arcs in $A$ that are traversed in accordance with their orientation, and let $Z^-$ be the set of arcs in $A$ that are traversed in the opposing orientation.
We can project out the potential variables $\phi$ and reformulate the problem as
$$\max \sum_{i \in N} V_i d_i - \sum_{i \in N} c_i s_i$$
subject to flow conservation, the capacity and bound constraints above, and one constraint per cycle (Kirchhoff's voltage law):
$$\sum_{(i,j) \in Z^+} r_{ij} x_{ij} - \sum_{(i,j) \in Z^-} r_{ij} x_{ij} = 0 \,\, \forall Z \in C.$$
Recall the scenario we ended with in part 1. Let $c_1 = 20$ and $c_2 = 40$, and in addition suppose $r_{ij} = 1$ for all $(i,j)$. Only link $(1,3)$ has a capacity constraint, of 600. Links $(1,2)$ and $(2,3)$, as well as both generators, have unlimited capacity. At node 3, the marginal value is $V$ (sufficiently large) for up to 1500 units and zero thereafter. The optimization problem is
$$\max V d_3 - 20 s_1 - 40 s_2$$
subject to
$$s_1 = x_{12} + x_{13}, \quad s_2 + x_{12} = x_{23}, \quad d_3 = x_{13} + x_{23}$$
$$x_{12} + x_{23} - x_{13} = 0$$
$$-600 \leq x_{13} \leq 600, \quad 0 \leq d_3 \leq 1500, \quad s_1, s_2 \geq 0.$$
Notice, for every unit of flow sent along $(1,3)$, half a unit of flow must be sent along $(1,2)$ and $(2,3)$ as well to satisfy the cycle flow constraint.
The solution to this problem is $s_1 = 300$, $s_2 = 1200$, $x_{12} = -300$, $x_{13} = 600$, $x_{23} = 900$ and $d_3 = 1500$. What is remarkable is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along $(1,3)$ and 900 units along $(2,3)$. This flow violates the cycle constraint.
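This solution can be reproduced numerically. A sketch using scipy's LP solver, with the marginal value at node 3 set (arbitrarily) to $100; any value large enough for all 1500 units to be demanded yields the same injections and flows:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: s1, s2 (injections), d3 (consumption), x12, x13, x23 (flows).
V = 100.0  # marginal value at node 3 (any sufficiently large number)
c = np.array([20.0, 40.0, -V, 0.0, 0.0, 0.0])  # minimize -(surplus)

A_eq = np.array([
    [1, 0, 0, -1, -1, 0],   # node 1: s1 = x12 + x13
    [0, 1, 0, 1, 0, -1],    # node 2: s2 + x12 = x23
    [0, 0, -1, 0, 1, 1],    # node 3: x13 + x23 = d3
    [0, 0, 0, 1, -1, 1],    # cycle (equal resistances): x12 + x23 = x13
])
b_eq = np.zeros(4)
bounds = [(0, None), (0, None), (0, 1500),
          (None, None), (-600, 600), (None, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
s1, s2, d3, x12, x13, x23 = res.x
print(s1, s2, x12, x13, x23)  # 300, 1200, -300, 600, 900
```

Note the negative flow $x_{12} = -300$: to respect the cycle constraint, a third of generator 2's injection travels backwards along the link from 2 to 1.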
In this example, when generator 1 injects electricity into the network to serve customer 3's demand, a positive amount of that electricity must flow along every path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3's demand. Indeed, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently for the supply of power. The shared infrastructure requires that they coordinate what they inject into the system. This need for coordination is the argument for a clearing house, not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.
The externalities caused by electricity flows are not a proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual to the surplus maximization problem.
Let $p_i$ be the dual variable associated with the flow balance constraint at node $i$. Let $\mu_Z$ be associated with the cycle constraint for cycle $Z$. Let $\nu^+_{ij}$ and $\nu^-_{ij}$ be associated with the two link capacity constraints. Let $\lambda_i$ and $\theta_i$ be associated with the remaining two constraints, $s_i \leq S_i$ and $d_i \leq D_i$; these can be interpreted as the unit profit of supplier $i$ and the unit surplus of consumer $i$ respectively. For completeness, the dual is
$$\min \sum_{(i,j) \in A} K_{ij}(\nu^+_{ij} + \nu^-_{ij}) + \sum_{i \in N} S_i \lambda_i + \sum_{i \in N} D_i \theta_i$$
subject to
$$p_i + \lambda_i \geq -c_i \,\, \forall i \in N$$
$$-p_i + \theta_i \geq V_i \,\, \forall i \in N$$
$$p_j - p_i + \sum_{Z: (i,j) \in Z^+} r_{ij}\mu_Z - \sum_{Z: (i,j) \in Z^-} r_{ij}\mu_Z + \nu^+_{ij} - \nu^-_{ij} = 0 \,\, \forall (i,j) \in A$$
$$\nu^+, \nu^-, \lambda, \theta \geq 0.$$
Now $p_j - p_i$ has a natural interpretation as the price to be paid for consumption at node $j$ of supply injected at node $i$. $\nu^+_{ij}$ and $\nu^-_{ij}$ can be interpreted as the price of capacity on link $(i,j)$. However, $\mu_Z$ is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market generate these prices.
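To make the cycle price concrete, one can write down and solve the dual of the three-node example explicitly. The formulation below is my own transcription (again with the marginal value at node 3 set to $100); it recovers a cycle price of 20 (for this cycle orientation) and a dual objective equal to the primal surplus of 96,000:

```python
import numpy as np
from scipy.optimize import linprog

# Dual of the three-node surplus maximization problem (V = 100 at node 3).
# Variables: p1, p2, p3 (node prices), mu (cycle price),
#            theta (price of customer 3's 1500-unit demand cap),
#            nu_plus, nu_minus (prices of the 600-unit link capacity).
c = np.array([0, 0, 0, 0, 1500.0, 600.0, 600.0])

# Equality constraints: one per free primal variable (the flows).
A_eq = np.array([
    [-1, 1, 0, 1, 0, 0, 0],    # x12: -p1 + p2 + mu = 0
    [-1, 0, 1, -1, 0, 1, -1],  # x13: -p1 + p3 - mu + nu_plus - nu_minus = 0
    [0, -1, 1, 1, 0, 0, 0],    # x23: -p2 + p3 + mu = 0
])
b_eq = np.zeros(3)

# Inequalities: one per sign-constrained primal variable.
A_ub = np.array([
    [-1, 0, 0, 0, 0, 0, 0],   # s1: p1 >= -20
    [0, -1, 0, 0, 0, 0, 0],   # s2: p2 >= -40
    [0, 0, 1, 0, -1, 0, 0],   # d3: -p3 + theta >= 100
])
b_ub = np.array([20.0, 40.0, -100.0])

bounds = [(None, None)] * 4 + [(0, None)] * 3
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p1, p2, p3, mu, theta, nu_plus, nu_minus = res.x
print(res.fun, mu)  # 96000 (the primal surplus) and a cycle price of 20
```

At the optimum the congested link carries a capacity price of 60 and the demand cap a price of 40; by strong duality the dual objective matches the primal surplus.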
In this, the second lecture, I focus on electricity markets. I’ll divide the summary of that lecture into two parts.
Until the 1980s, electricity markets around the world operated as regulated monopolies. Generation (power plants) and distribution (the wires) were combined into a single entity. Beginning with Chile, a variety of Latin American countries started to privatize their electricity markets. So, imagine you were a bright young thing in the early 1980s, freshly baptized in the waters of Lake Michigan off Hyde Park. The General approaches you and says: I want a free market in electricity, make it so (quiero un mercado libre de la electricidad, que así sea). What would you recommend?
Obviously, privatize the generators by selling them off, perhaps at auction (or, given one's pedigree, allocate them at random and allow the owners to trade among themselves). What about the wires that carry electricity from one place to another? Tricky. The owner of a wire will have monopoly power, unless there are multiple parallel wires. However, that would lead to inefficient duplication of resources. As a first pass, let's leave the wires in Government hands. Not obviously wrong. We do that with the road network: the Government owns and maintains it and for a fee grants access to all.
So, competition to supply power but central control of the wires. Assuming an indifferent and benign authority controlling the wires, what will the market for generation look like? To fix ideas, consider a simple case: two generators (1 and 2) and a customer (3).
Generator 1 has unlimited supply and a constant marginal cost of production of $20 a unit. Generator 2 has unlimited supply and a constant marginal cost of production of $40 a unit. Customer 3 has a constant marginal value of $V$ for up to 1500 units and zero thereafter. Assume $V$ to be sufficiently large to make all subsequent statements true. Initially there are only two wires, one from generator 1 to customer 3 and the other from generator 2 to customer 3. Suppose all three agents are price takers. Then, the Walrasian price for this economy will be $20. For customer 3 this is clearly a better outcome than unregulated monopoly, where the price would be $V$. What if the price taking assumption is not valid? An alternative model would be Bertrand competition between 1 and 2, so the outcome would be a hair's breadth below $40: worse than the Walrasian outcome but still better than unregulated monopoly. It would seem that deregulation is a good idea and, as the analysis above suggests, there is no necessity for a market to be designed. There is a catch. Is unregulated monopoly the right benchmark? Surely, a regulated monopolist would be better. It's not clear that one does better than the regulated monopolist.
Now let's add a wrinkle. Suppose the wire between 1 and 3 has a capacity of 600 units. There are two ways to think of this capacity constraint. The first is as a capacity constraint on generator 1 that we have chosen to model as a constraint on the wire $(1,3)$. The second is that it is indeed a constraint on the wire $(1,3)$. The difference is not cosmetic, as we shall see in a moment.
Suppose it is a constraint on generator 1's capacity. Then, under the price taking assumption, the Walrasian price in this economy will be $40. An alternative model of competition would be Bertrand-Edgeworth. In general the equilibria are mixed but, whatever the mixture, the expected price per unit that customer 3 pays cannot exceed $40. In both cases, the outcome is better for customer 3 than under an unregulated monopolist.
Assume now the capacity constraint is on the wire $(1,3)$ instead. Under the price taking assumption, at a price of $20 a unit, generator 1 is indifferent between supplying any non-negative amount, while generator 2 supplies nothing. However, there is no way for supply to meet demand. Why is this? In the usual Walrasian set up, each agent reports their supply and demand correspondence based on posted prices and their own information only. To obtain a sensible answer in this case, generator 1 must be aware of the capacity of the network into which its supply will be injected. As the next scenario we consider shows, this is not easy when it comes to electricity.
Suppose there is now a link joining generators 1 and 2 with no capacity constraint. There is still a 600 unit capacity constraint on the link between 1 and 3. One might think that, in this scenario, customer 3 can receive all its demand from generator 1. It turns out that this is not possible because of the way electricity flows in networks.
From the Fens of East Anglia comes a curious tale of a gift to Cambridge to endow a chair in honor of Stephen Hawking. The donor, Dennis Avery, put forward $6 million, of which $2 million is to cover the costs of the Hawking chair. The balance is to be managed by a charity, the Avery-Tsai Foundation, to, in the words of J. K. M. Sanders (Pro-Vice-Chancellor for Institutional Affairs),
“……advance education and promote research in the science of cosmology at the University of Cambridge for the public benefit, and in particular to support the University in securing the best possible candidate as the Stephen W. Hawking Professor of Cosmology.”
There are some unusual terms attached to the gift, described below. Mr. Avery, however, passed away soon after the terms were negotiated. Renegotiation now being impossible, the University must decide whether to accept the gift as constructed or not at all.
What makes the gift unusual?
1) Monies held by the foundation can be used to `top up' the salary, paying the individual an amount in excess of what the University might pay its top chair holders (which I believe is around 130K pounds).
2) The chair is to be housed in the Department of Applied Mathematics and Theoretical Physics (DAMTP). The gift requires DAMTP to certify each year to the Trustees that the base salary of the Hawking Professor is at least the average of other Professors in the Department.
3) The holder is subject to annual review and the monies from the foundation are contingent upon the outcome of that review.
4) It limits the tenure of the Professor to seven years, renewable for five and exceptionally for a further five.
Let the fireworks begin. From the head of the DAMTP, P. H. Haynes, a supporter of the gift:
“The unusual detailed arrangements surrounding this Professorship have rightly triggered significant debate amongst my Departmental colleagues and they have required detailed and robust discussion between Department, School, and the University.”
Professor Goldstein of the DAMTP responds with an objection to the circumvention of University rules, with a delightful analogy to line integrals:
“In the field of thermodynamics there is the concept of a ‘state function’, a quantity that is independent of the path by which a system is brought to a given point. This is one of those. It does not matter whether the payment goes through the University payroll or not if the University itself is signing off on the agreement and the funds are in its endowment. The choice of path certainly does not matter in the court of public opinion. How can the University contemplate an arrangement whose purpose is to circumvent its own rules?”
He goes on to point out that it is inconceivable that a holder of the chair would accept a cut in pay upon expiration of the term. He is, by his own admission, rendered:
“….almost speechless at Paragraph 9 of the deed, which asserts that the Department must certify each year to the Trustees that the base salary of the Hawking Professor is at least the average of other Professors in the Department. First, the requirement itself indicates a profound level of distrust of the Department’s operations. But second, how can it possibly be fair to tie one Professor’s salary to that of others? All their hard work over their career to date is used to define a starting point for his salary, independent of his qualifications. Moreover, if the Department chose to pay that minimum (which it might in light of other financial burdens), then the Stephen Hawking Professor would automatically get a raise if any other Professor did. This cannot be fair. I thought we strove to have a meritocracy in this University.”
From an emeritus professor of medieval history, G. R. Evans, clearly a guardian of ancient rights:
“….opening the doors to allowing outside bodies or donors to fund Professorships has led to the opening of further doors and only those with long constitutional memories may remember how it all began. I speak today just to put a reminder into the record, for this proposal has a constitutional context and if it is accepted, it will undoubtedly have constitutional consequences.”
Dr. A. Pesci of the DAMTP opens with this gem:
“This Chair looks to me like that pair of shoes at the Christmas sale. They looked beautiful and were half price. They were also two sizes too small and buying the matching dress would lead to bankruptcy. Hence, if one buys them, they would have to be left vacant, for if one wears them, they would cause enormous irreversible long-term damage.”
The link to a full summary of the discussion can be found here. It is both interesting and amusing when compared with the earliest and best pamphlet on University politics that I know of: Microcosmographia Academica. I give one example from it:
The Principle of the Dangerous Precedent is that you should not now do an admittedly right action for fear you, or your equally timid successors, should not have the courage to do right in some future case, which, ex hypothesi, is essentially different, but superficially resembles the present one. Every public action which is not customary, either is wrong, or, if it is right, is a dangerous precedent. It follows that nothing should ever be done for the first time.
I began the first class with a discussion of the definition of market design. Here, as best I can, I recall my remarks.
The flippant answer is it's whatever Alvin Roth says it is. A serious answer is the same; witness the following from his Econometrica 2002 piece:
…..design involves a responsibility for detail; this creates a need to deal with complications. Dealing with complications requires not only careful attention to the institutional details of a particular market, it also requires new tools, to supplement the traditional analytical toolbox of the theorist.
Two items to highlight: institutional details and new tools. So, it is not `generic' theorizing, and one might wish to (should?) use experiments and data to support an analysis.
The NBER offers the following:
"Market design" examines the reasons why market institutions fail and considers the properties of alternative mechanisms, in terms of efficiency, fairness, incentives, and complexity. Research on market design is influenced by ideas from industrial organization and microeconomic theory; it brings together theoretical, empirical, and experimental methods, with an aim of studying policy-relevant tradeoffs with practical consequences.
Notice the concern with market failure, the clearly normative perspective and the intent to influence policy. Finally, there is the Journal of Economic Literature which now recognizes it as a subfield (JEL classification D47) and offers this definition:
Covers studies concerning the design and evolution of economic institutions, the design of mechanisms for economic transactions (such as price determination and dispute resolution), and the interplay between a market’s size and complexity and the information available to agents in such an environment.
In this definition no methodological stand is taken nor is there an explicit concern for practical consequences. Two additional lines, that I do not reproduce, exclude straight mechanism design and straight empirical work.
Where does that leave us? Clearly, decision theory, repeated games and refinements are not market design. What about the study of tax policy to encourage investment in human capital? What about labor economics, in particular the thread related to search and matching? Regulating oligopoly and merger policy? By my reading they count. Are the labor economists all going to label their papers D47? Unlikely. We generally know who we are writing for, so perhaps the flippant answer I gave first is the correct one!
Do I have a proposal? Yes: Richard Whately's 1831 suggestion for what to call what we now dub economics.
Next, what principles, if any, are there to guide the market designer? Roth’s 2007 Hahn lecture suggests three:
1) Establish Thickness

By which he means encouraging coordination on a single venue for trade. Why? Presumably because it reduces search costs and increases the speed of execution. One way (not the only one) of formalizing the notion of thickness is stability or the core.
2) Reduce Congestion
Roth argues that as a market thickens, it produces congestion. This runs counter to a benefit of thickness: increasing the speed of execution. What he has in mind are markets where transactions are heterogeneous and offers are personalized. During the time in which an offer is being evaluated, other potential opportunities may evaporate. I'm not yet convinced that thickness produces congestion in this sense. I cannot see why the time to evaluate an offer and conclude a transaction should depend on the thickness of the market. However, in order to benefit from the increased opportunities that thickness provides, it makes sense that one would want to increase the speed at which offers are screened. I think the correct way to phrase this might be in terms of bottlenecks. As the market thickens, steps in the trading process that were not bottlenecks might become so. Not removing them defeats the gains to be had from thickness.
3) Discourage Welfare Reducing Strategic Behavior
Equivalently, make participation simple.
While these three items are useful rules of thumb in thinking about design goals, they do not cover the details the designer should pay attention to in achieving them. I've tried to list what I think these are below.
a) Carefully decide on the asset to be traded.
I'm channeling Coase. To illustrate, recall the use of auctions to allocate spectrum rights, a frequently heralded example of successful market design. The use of auctions takes as given that the asset to be traded is the right to a range of frequencies. Those rights protect the holder from `harmful interference'. However, as Coase himself observed, an entirely different asset should be the object of analysis:
What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies of the ether.
As one can imagine, this might lead to a very different way of organizing the market for wireless communication. For an illustration of how different views on spectrum property rights affect market outcomes, see the spirited piece on the LightSquared debacle by Hazlett and Skorup. An important takeaway from this piece is the (constructive/destructive) role that government plays in markets.
b) Nature of contracts.
What kinds of contracts are feasible? Must they stipulate a uniform price? Can they be perpetual (indentured servitude is typically outlawed)? We know, for example, that Walrasian prices can implement certain outcomes but not others.
c) What is the medium of exchange?
In some settings we have money, in others it is ruled out. That money is ruled out does not eliminate other mediums of exchange. Prisoners have been known to use cigarettes. One can trade years of service for preferential postings. One can exchange kidneys for kidneys. What about livers for kidneys, or health insurance for livers?
d) What is the measure of performance and what is the status quo?
Typically one is concerned with proposing a set of changes to an existing institution. What is the measure by which we decide that a proposed change is a good one? I rope into this question the decision about which agents' preferences matter in the design. In the literature on resident matching, the focus is on stability (thickness). However, one might also focus on the effect on wages. Does the use of a stable mechanism depress wages for interns? If one focuses on wages, then one must specify what the status quo is. For example, is it one where wages are set by a perfectly competitive centralized market? Or is it an imperfectly competitive one where wages are set by bilateral contracting? How would one model this (see Bulow and Levin for an example)? Even if the status quo were a monopoly, is it regulated or unregulated?
In choosing a measure of performance one will bump up against the `universal interconnectedness of all things'. In ancient times this challenge was called `regulating the second best'. Imagine a polluting monopolist charging a uniform price. If we replace our monopolist with a perfectly competitive market, we reduce the distortion caused by high prices but increase pollution (assuming it increases with output). To make headway one must be prepared to draw a boundary and ignore what happens outside of it.
e) Why does a market need to be designed?
Many markets are the product of evolution rather than intelligent design. So, why is it necessary to design a market? One answer is that there is an externality that is difficult to price without a high degree of coordination. Electricity markets are offered as an example. In the second lecture we will examine this in more detail.