On many campuses one will find notices offering modest sums to undergraduates to participate in experiments. When the experimenter does not attract sufficiently many subjects at the posted rate, does she raise it? Do undergraduates make counteroffers? If not, why not? An interesting contrast is medical research, where there has arisen a class of professional human guinea pigs. They have a jobzine, and the anthropologist Roberto Abadie has a book on the subject. Prices paid to healthy subjects to participate in trials vary and increase with the potential hazards. The jobzine I mentioned earlier provides ratings of the various research organizations that carry out such studies. A number of questions come to mind immediately: how are prices determined, are subjects in a position to offer informed consent, should such contracts be forbidden and does relying on such subjects induce a selection bias?

In the March 23rd edition of the NY Times, Mankiw proposes a `do no harm' test for policy makers:

…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.

There is a qualifier for negative externalities, and he goes on to say:

As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.

Minimum wage legislation is offered as an example of a policy that fails the do no harm test.

The association with the Hippocratic oath gives it an immediate appeal. I think the test is more Panglossian (or should I say Leibnizian) than Hippocratic.

There is an immediate `heart strings' argument against the test: indentured servitude passes the `do no harm' test. However, indentured servitude contracts are illegal in many jurisdictions (repugnant contracts?). This argument only raises more questions, such as why we would rule out such contracts. I want to focus instead on two other aspects of the `do no harm' principle contained in the words `voluntarily' and `benefit'. What counts as voluntary, and benefit compared to what?

To fix ideas, imagine two parties who, if they work together and expend equal effort, can jointly produce a good worth $1. How should they split the surplus produced? How will they split the surplus produced? An immediate answer to the `should' question is 50-50. A deeper answer would suggest that they each receive their marginal product (or added value) of $1, but this is impossible without an injection of money from the outside. There is no immediate answer to the `will' question, as it will depend on the outside options of each of the agents and their relative patience. Suppose, for example, the outside option of each party is $0, one agent is infinitely patient and the other has a high discount rate. It isn't hard to construct a model of bargaining where the lion's share of the gains from trade goes to the patient agent. Thus, what `will' happen can be very different from what `should' happen. What `will' happen depends on the relative patience and outside options of the agents at the time of bargaining. In my extreme example, one might ask why it is that one agent is so impatient. Is the patient agent's exploitation of the other agent's impatience coercion?
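One such model of bargaining is Rubinstein's alternating-offers game; the closed-form split below is the standard textbook formula, not anything derived in this post, and the discount factors are illustrative:

```python
# Rubinstein alternating-offers division of a $1 surplus.
# delta1, delta2 are the per-period discount factors; agent 1 proposes first.

def rubinstein_share(delta1: float, delta2: float) -> float:
    """Share of the $1 surplus captured by the first proposer (agent 1)."""
    return (1 - delta2) / (1 - delta1 * delta2)

# A patient agent 1 facing a very impatient agent 2 takes almost everything.
print(rubinstein_share(0.99, 0.10))  # close to 1
# The symmetric case: the proposer gets 2/3.
print(rubinstein_share(0.50, 0.50))
```

As the second agent's discount factor falls to zero, the first agent's share tends to the whole dollar, which is the lion's share result in the text.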

When parties negotiate to their mutual benefit, it is to their benefit *relative* to the status quo. When the status quo presents one agent an outside option that is untenable, say starvation, is bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the `do no harm' principle in policy matters is the assumption that the status quo does less harm than a change in it would. This is not clear to me at all. Let me illustrate this with two examples to be found in any standard microeconomics textbook.

Assuming a perfectly competitive market, imposing a minimum wage constraint above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose there were a monopsonist employer constrained to offer the same wage to everyone employed. Then, imposing a minimum wage above the monopsonist's optimal wage would increase total welfare.
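A numerical sketch of the monopsony case (the parameters are my own illustrative choices, not from any source): suppose the marginal revenue product of labor is a constant v and the inverse labor supply curve is w(L) = a + bL.

```python
# Monopsony with a wage floor: a numerical sketch with made-up parameters.
v, a, b = 20.0, 2.0, 1.0  # hypothetical MRP and inverse supply w(L) = a + b*L

def employment_at_min_wage(wbar):
    """With a binding floor wbar < v, the marginal cost of labor is wbar itself,
    so the monopsonist hires everyone willing to work at wbar."""
    L_supply = (wbar - a) / b
    L_demand = float('inf') if v >= wbar else 0.0
    return min(L_supply, L_demand)

def welfare(L):
    """Total surplus: value of output minus workers' opportunity cost."""
    return (v - a) * L - b * L**2 / 2

L_monopsony = (v - a) / (2 * b)     # first-order condition: v = a + 2bL
w_monopsony = a + b * L_monopsony
L_floor = employment_at_min_wage(w_monopsony + 4)  # floor above the monopsony wage

print(welfare(L_monopsony) < welfare(L_floor))  # True: the floor raises welfare
```

Employment rises from 9 to 13 here, and total surplus rises with it, which is the textbook point: against a monopsony status quo, the minimum wage passes a welfare test.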

The abstract of a 2005 paper by Itti and Baldi begins with these words:

The concept of surprise is central to sensory processing, adaptation, learning, and attention. Yet, no widely-accepted mathematical theory currently exists to quantitatively characterize surprise elicited by a stimulus or event, for observers that range from single neurons to complex natural or engineered systems. We describe a formal Bayesian definition of surprise that is the only consistent formulation under minimal axiomatic assumptions.

They propose that surprise be measured by the Kullback-Leibler divergence between the prior and the posterior. As with many good ideas, Itti and Baldi are not the first to propose this. C. L. Martin and G. Meeden did so in 1984 in an unpublished paper entitled `The distance between the prior and the posterior distributions as a measure of surprise.' Itti and Baldi go further and provide experimental support that this notion of surprise comports with human notions of surprise. Recently, Ely, Frankel and Kamenica, in Economics, have also considered the issue of surprise, focusing instead on how best to release information so as to maximize interest.
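A minimal sketch of the proposal over a finite hypothesis space (the two-hypothesis example and its likelihoods are mine, purely for illustration):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions on the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def bayesian_surprise(prior, likelihood):
    """Surprise in the Itti-Baldi spirit: the KL divergence between the
    posterior (after Bayes updating on the data's likelihood under each
    hypothesis) and the prior."""
    unnorm = [pi * li for pi, li in zip(prior, likelihood)]
    z = sum(unnorm)
    posterior = [u / z for u in unnorm]
    return kl_divergence(posterior, prior)

prior = [0.5, 0.5]
print(bayesian_surprise(prior, [0.5, 0.5]))  # 0.0: uninformative data, no surprise
print(bayesian_surprise(prior, [0.9, 0.1]))  # positive: beliefs moved
```

Data that leave the posterior equal to the prior register zero surprise, no matter how improbable the data were ex ante; that is precisely where this measure parts company with the p-value discussed below.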

Surprise now being defined, one might go on to define novelty, interestingness, beauty and humor. Indeed, Jürgen Schmidhuber has done just that (and more). A paper on the optimal design of jokes cannot be far behind. Odd as this may seem, it is part of a venerable tradition. Kant defined humor as the sudden transformation of a strained expectation into nothing. Birkhoff himself wrote an entire treatise on Aesthetic Measure (see the review by Garabedian). But, I digress.

Returning to the subject of surprise, the Kullback-Leibler divergence is not the first measure of surprise or even the most widespread. I think that prize goes to the venerable p-value. Orthodox Bayesians, those who tremble in the sight of measure zero events, look in horror upon the p-value because it does not require one to articulate a model of the alternative. Even they would own, I think, to the convenience of not having to list all alternative models and carefully evaluate them. Indeed, I. J. Good, writing in 1981, notes the following:

The evolutionary value of surprise is that it causes us to check our assumptions. Hence if an experiment gives rise to a surprising result given some null hypothesis $H_0$ it might cause us to wonder whether $H_0$ is true even in the absence of a vague alternative to $H_0$.

Good, by the way, described himself as a cross between a Bayesian and a Frequentist, called a Doogian. One can tell from this label that he had an irrepressible sense of humor. Born Isadore Guldak Joseph to a Polish family in London, he changed his name to Ian Jack Good; close enough, one supposes. At Bletchley Park he and Turing came up with the scheme that eventually broke the German Navy's Enigma code. This led to the Good-Turing estimator. Imagine a sequence of symbols chosen from a finite alphabet. How would one estimate the probability of observing a letter from the alphabet that has not yet appeared in the sequence thus far? But, I digress.
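The Good-Turing answer to that question, in its simplest form, estimates the total probability of all unseen letters by the fraction of observations that are singletons. A sketch (the example string is mine):

```python
from collections import Counter

def unseen_mass_estimate(sequence):
    """Good-Turing estimate of the probability that the next symbol is one
    not yet observed: n1 / N, where n1 is the number of distinct symbols seen
    exactly once and N is the length of the sequence."""
    counts = Counter(sequence)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sequence)

print(unseen_mass_estimate("abracadabra"))  # 'c' and 'd' each appear once: 2/11
```

The full estimator goes on to smooth the counts of symbols seen r times using the count of symbols seen r+1 times; the n1/N term above is just the leading case.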

Warren Weaver was, I think, the first to propose a measure of surprise. Weaver is best known as a popularizer of science. Some may recall him as the Weaver of the slim volume by Shannon and Weaver on the Mathematical Theory of Communication. Well before that, Weaver played an important role at the Rockefeller Foundation, where he used its resources to provide fellowships to many promising scholars and jump-start molecular biology. The following is from page 238 of my edition of Jonas' book `The Circuit Riders':

Given the unreliability of such sources, the conscientious philanthropoid has no choice but to become a circuit rider. To do it right, a circuit rider must be more than a scientifically literate `tape recorder on legs.' In order to win the confidence of their informants, circuit riders for Weaver's Division of Natural Sciences were called upon to offer a high level of `intellectual companionship' – without becoming `too chummy' with people whose work they had, ultimately, to judge.

But, I digress.

To define Weaver's notion, suppose a discrete random variable $X$ that takes values in the set $\{1, 2, \ldots, n\}$. Let $p_i$ be the probability that $X = i$. The surprise index of outcome $i$ is $\frac{\sum_j p_j^2}{p_i}$, the expected probability divided by the probability of the outcome actually observed. Good himself jumped into the fray with some generalizations of Weaver's index. Here is one: $\big(\sum_j p_j^{t+1}\big)^{1/t}\big/p_i$. Others involve the use of logs, leading to measures that are related to notions of entropy as well as probability scoring rules. Good also proposed axioms that a good measure should satisfy, but I cannot recall if anyone followed up to derive axiomatic characterizations.
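Weaver's index is a one-liner; a sketch (the distributions are illustrative):

```python
def weaver_surprise_index(p, i):
    """Weaver's surprise index of outcome i: the expected probability
    E[p] = sum_j p_j^2 divided by the probability of the observed outcome."""
    return sum(pj * pj for pj in p) / p[i]

# A fair coin: every outcome has index 1, i.e. no surprise.
print(weaver_surprise_index([0.5, 0.5], 0))    # 1.0
# A rare outcome under a skewed distribution is very surprising.
print(weaver_surprise_index([0.99, 0.01], 1))  # about 98
```

An index near 1 means the outcome was about as probable as a typical outcome; an index much larger than 1 flags a surprise.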

G. L. S. Shackle, who would count as one of the earliest decision theorists, also got into the act. Shackle departed from subjective probability and proposed to order degrees of belief by their potential degrees of surprise. Shackle also proposed, I think, that an action be judged interesting by its best possible payoff and its potential for surprise. Shackle has long since passed beyond the ken of men. One can get a sense of his style and vigor from the following response to an invitation to write a piece on Rational Expectations:

`Rational expectations' remains for me a sort of monster living in a cave. I have never ventured into the cave to see what he is like, but I am always uneasily aware that he may come out and eat me. If you will allow me to stir the cauldron of mixed metaphors with a real flourish, I shall suggest that `rational expectations' is neo-classical theory clutching at the last straw. Observable circumstances offer us suggestions as to what may be the sequel of this act or that one. How can we know what invisible circumstances may take effect in time to come, of which no hint can now be gained? I take it that `rational expectations' assumes that we can work out what will happen as a consequence of this or that course of action. I should rather say that at most we can hope to set bounds to what can happen, at best and at worst, within a stated length of time from `the present', and can invent an endless diversity of possibilities lying between them. I fear that for your purpose I am a broken reed.

The other day, Andrew Postlewaite remarked that it is very hard to find a PhD economist whose academic ancestor, thrice removed, was not a mathematician. Put differently, which PhD economists can trace their lineage back to Marshall, Keynes and perhaps even the Scottish master himself? An obvious problem is that it is unclear what it means for so-and-so to be one's academic father. A strict definition might be thesis advisor. However, the PhD degree as we know it (some combination of study and research apprenticeship) is a relatively new thing. Arguably, the first modern PhD was granted by Yale in the early 1900s. Doctorate degrees were available in Germany prior to that. However, that degree was awarded upon submission of a body of work; there was no formal apprenticeship requirement. The UK did not introduce a doctorate degree until the early 1900s, and that mimicked the German degree (and was introduced, apparently, to compete for US students who were flocking to Germany).

So, let's start at Yale with Irving Fisher. A celebrated economist, and justly so, at an institution that was the first to hand out PhD degrees. Fisher himself was a student of Josiah Willard Gibbs (mathematician and physicist and, if you believe the Mathematics Genealogy Project, descended from Poisson). What about Fisher's descendants? Not a single one of the laudatory pieces on Fisher here mentions his students. Some digging uncovered James Harvey Rogers, who went on to become Sterling Professor of Economics at Yale and a panjandrum in the Treasury. The university maintains an archive of his papers. Rogers also studied with Pareto. Rogers begat Walt Whitman Rostow. Rostow begat Everett Clyde Upshaw, and that is where the line ends.

Let's try one more: Richard T. Ely, after whom the AEA has named one of its lecture series, and who is credited as a founder of land economics. The Kirkus review of 1938 warmly endorses his autobiography, `The Ground Under Our Feet.' Ely begat John R. Commons, W. A. Scott and E. A. Ross. Commons begat Edwin Witte, the father of social security. Wikipedia credits Commons with influencing Gunnar Myrdal, Oliver Williamson and Herbert Simon, but `influencing' is not the same as thesis advisor. This line seems promising, but other duties intrude.

Penn State runs auctions to license its intellectual property. For each license on the block there is a brief description of the relevant technology and an opening bid, which I interpret as a reserve price. It also notes whether the license is exclusive or not. Thus, the license is sold for a single upfront fee: no royalties or other form of contingent payment. As far as I can tell the design is an open ascending auction.

My former colleague Asher Wolinsky once remarked that development economists had better hurry up lest the regions they studied become developed. From the March 2nd, 2014 edition of the NY Times comes an announcement that the fateful day is upon us. The title of the piece is `The End of the Developing World'.

In the movie Elysium, the 1% set up a gated community in space to separate themselves from the proles. Why only one Elysium? On earth, there is still a teeming mass of humanity that needs goods and services. Fertile ground for another 1% to arise to meet these needs and eventually build another Elysium. Perhaps there is no property rights regime on Earth that encourages investment, etc. Not the case in the movie, because there is a police force and a legal and parole system, apparently administered by robots. Furthermore, the robots are controlled by the 1% off site. Why do the 1% need to maintain control of the denizens of Earth? Elysium appears to be completely self-sustaining; no resources are apparently needed by it from the Earth. The only visible operation run by Elysium on earth is a company that manufactures robots. The head man is an Elysium expatriate, but everyone else working at the factory is a denizen of Earth. Is Earth a banana republic to which Elysium outsources production? No, that contradicts the self-sustaining assumption earlier. In short, the economics of the world envisioned in the movie makes no sense. It used to be that scientific plausibility was a constraint on science fiction (otherwise it's fantasy, or magical realism for snobs). I'd add another criterion: economic plausibility. Utopias (or dystopias) must be economically plausible. With these words, can I lay claim to have started a new branch of literary criticism: the economic analysis of utopian/dystopian fiction?

Back to the subject of this post. Pay attention to the robots in the movie. They have the agility and dexterity of humans. They are stronger. They can even detect sarcasm. Given this, it's unclear why humans are needed to work in the robot factory. Robots could be used to repair robots and produce new ones. What would such a world look like? Well, I need only one `universal' robot to begin with, to produce and maintain robots for other tasks: farming, medical care, construction, etc. Trade would become unnecessary for most goods and services. The only scarce resources would be the materials needed to produce and maintain robots (metals, rare earths, etc.). Profits would accrue to the individuals who owned these resources. These individuals might trade among themselves, but would have no reason to trade with anyone outside this group. So, a small group of individuals would maintain large armies of robots to meet their needs and maintain their property rights over the inputs to robot production. Everyone else is surplus to needs. That's a movie I would go to the cinema to see!

From the New York Times comes a straightforward example of third-degree price discrimination. Prices of certain luxury vehicles are much higher in China than in the U.S. For example, the Porsche Cayenne has a base price of $150,000 in China but $50,000 in the U.S. Price discrimination invites arbitrage, and the invitation in this case is so generous that many people have accepted. Curiously, some of those who have accepted have been arrested, charged and fined for mail fraud and violations of customs laws.

Manufacturers include restrictions in their contracts with dealers to prohibit this sort of arbitrage, in part because cars produced for sale in one country will not comply with extant regulation in another country. Interestingly, it is illegal for anyone other than the original equipment manufacturer to export NEW cars overseas. USED cars are an entirely different matter. US Customs believes that if I buy a new car and then drive it straight to the port to ship to China, it remains a new car. On the other hand, if I drive it home, it is used. Subsequent cases will turn upon the question of what makes a car new vs. used.

As an aside, Ken Sparks, spokesman for BMW North America, defended the Government's vigorous pursuit of the arbitrageurs with these words:

Illegal exports deny legitimate customers here in the U.S. the popular vehicles, which are in high demand.

I can only imagine the pain of being denied a BMW. Perhaps, they could better show their concern by giving away the car for free.

In an earlier pair of posts I discussed a class of combinatorial auctions where agents have binary quadratic valuations. To formulate the problem of finding a welfare maximizing allocation, let $x_{ij} = 1$ if object $j$ is given to agent $i$ and zero otherwise. Denote the utility of agent $i$ from consuming bundle $S$ by
$$u_i(S) = \sum_{j \in S} a^i_j + \sum_{j, k \in S} a^i_{jk}.$$

The problem of maximizing total welfare is
$$\max \sum_i \sum_j a^i_j x_{ij} + \sum_i \sum_{j \neq k} a^i_{jk} x_{ij} x_{ik}$$
subject to
$$\sum_i x_{ij} \leq 1 \quad \forall j, \qquad x_{ij} \in \{0, 1\} \quad \forall i, j.$$
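For intuition, here is a tiny brute-force version of the welfare maximization problem. The numbers are mine (chosen to satisfy sign consistency on a path, i.e. a tree), not an instance from the paper:

```python
from itertools import product

agents, objects = range(2), range(3)
a = [[3, 1, 2], [2, 2, 1]]                              # a[i][j]: value of object j alone
q = [{(0, 1): 2, (1, 2): -1}, {(0, 1): 1, (1, 2): -2}]  # sign-consistent pairwise terms

def welfare(assign):
    """assign[j] is the agent receiving object j, or None if it goes unsold."""
    total = 0
    for i in agents:
        bundle = {j for j in objects if assign[j] == i}
        total += sum(a[i][j] for j in bundle)
        total += sum(v for (j, k), v in q[i].items() if j in bundle and k in bundle)
    return total

# Enumerate all assignments of the three objects to an agent or to nobody.
best = max(product(list(agents) + [None], repeat=len(objects)), key=welfare)
print(welfare(best))  # the optimum here gives welfare 7
```

Brute force is exponential in the number of objects, which is exactly why identifying polynomially solvable instances, as Candogan, Ozdaglar and Parrilo do, is of interest.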

I remarked that Candogan, Ozdaglar and Parrilo (2013) identified a solvable instance of the welfare maximization problem. They impose two conditions. The first is called **sign consistency**: for each pair of objects $j, k$, the sign of $a^i_{jk}$ is the same for every agent $i$. Furthermore, this applies to all pairs $j, k$.

Let $G$ be a graph whose vertex set is the set of objects; for any pair $j, k$ such that $a^i_{jk} \neq 0$, introduce an edge $(j, k)$. Because of the sign consistency condition we can label the edges of $G$ as being positive or negative depending on the sign of $a^i_{jk}$. Let $E^+$ be the set of positive edges and $E^-$ the set of negative edges. The second condition is that $G$ be a tree.

The following is the relaxation that they consider:

subject to

Denote by $P$ the polyhedron of feasible solutions to the last program. I give a new proof of the fact that the extreme points of $P$ are integral. My thanks to Ozan Candogan for (1) patiently going through a number of failed proofs and (2) being kind enough not to say: `why the bleep don't you just read the proof we have.'

Let $C_1, \ldots, C_m$ be the maximal connected components of $G$ after deletion of the edges in $E^-$ (call this graph $G'$). The proof will be by induction on $m$. The case $m = 1$ follows from total unimodularity. I prove this later.

Suppose $m \geq 2$. Let $x^*$ be an optimal solution to our linear program. We can choose $x^*$ to be an extreme point of $P$. As $G$ is a tree, there must exist a component, say $C_1$, incident to exactly one negative edge. Denote by $P_1$ the polyhedron $P$ restricted to just the vertices of $C_1$ and by $P_2$ the polyhedron $P$ restricted to just the vertices in the complement of $C_1$. By the induction hypothesis, both $P_1$ and $P_2$ are integral polyhedrons. Each extreme point of $P_1$ ($P_2$) assigns a vertex of $C_1$ (the complement of $C_1$) to a particular agent. Let $Z_1$ be the set of extreme points of $P_1$. If in extreme point $z \in Z_1$, vertex $j$ is assigned to agent $i$ we write $z_{ij} = 1$ and zero otherwise. Similarly with $Z_2$, the set of extreme points of $P_2$: $w_{ij} = 1$ if $w \in Z_2$ assigns vertex $j$ to agent $i$. Let $V(z)$ be the objective function value of the assignment $z$, and similarly $V(w)$.

Now $x^*$ restricted to $C_1$ can be expressed as a convex combination $\sum_{z \in Z_1} \lambda_z z$. Similarly, $x^*$ restricted to the complement of $C_1$ can be expressed as $\sum_{w \in Z_2} \mu_w w$. We can now reformulate our linear program as follows:

subject to

The constraint matrix of this last program is totally unimodular. This follows from the fact that each variable appears in at most two constraints with coefficients of opposite sign and absolute value 1 (this is because the corresponding $z$'s and $w$'s cannot both be 1). Total unimodularity implies that the last program has an integral optimal solution and we are done. In fact, I believe the argument can be easily modified to the case where every cycle in $G$ must contain a positive even number of negative edges.

Return to the case $m = 1$. Consider the polyhedron restricted to just one component $C_r$. It will have the form:

Notice the absence of negative edges. To establish total unimodularity we use the Ghouila-Houri (GH) theorem. Fix any subset, $R$, of rows/constraints. The goal is to partition them into two sets $R_1$ and $R_2$ so that, column by column, the sum of the non-zero entries in $R_1$ and the sum of the non-zero entries in $R_2$ differ by at most one.
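The GH condition can be checked by brute force on toy matrices, which is a handy sanity check when experimenting with constraint matrices like the ones in this post. A sketch (exponential time, so tiny matrices only; the two example matrices are standard textbook cases, not matrices from this program):

```python
from itertools import combinations, product

def ghouila_houri_holds(A):
    """Brute-force Ghouila-Houri check: every subset of rows must admit signs
    +1/-1 (i.e. a partition into R1 and R2) such that, in each column, the
    signed sum of the subset's entries is -1, 0 or 1."""
    m = len(A)
    for r in range(m + 1):
        for subset in combinations(range(m), r):
            ok = any(
                all(abs(sum(s * A[i][j] for i, s in zip(subset, signs))) <= 1
                    for j in range(len(A[0])))
                for signs in product([1, -1], repeat=len(subset))
            )
            if not ok:
                return False
    return True

# An interval matrix (consecutive ones in each row) is totally unimodular.
print(ghouila_houri_holds([[1, 1, 0], [0, 1, 1], [1, 1, 1]]))  # True
# The classic non-TU example: a 0-1 matrix with determinant 2.
print(ghouila_houri_holds([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # False
```

Since GH is an if-and-only-if characterization, a False here certifies that some square submatrix has determinant outside $\{-1, 0, 1\}$.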

Observe that the rows associated with the assignment constraints are disjoint, so we are free to partition them in any way we like. Fix a partition of these rows. We must show how to partition the remaining rows to satisfy the GH theorem. These remaining rows come in pairs. If one member of a pair is present in $R$ but the other is absent (or vice-versa), we are free to assign the row present in any way to satisfy the GH theorem. The difficulty arises when both members of a pair are present in $R$. To ensure that the GH theorem is satisfied we may have to ensure that the rows of such a pair be separated.

When $R$ is the set of all constraints we show how to find a partition that satisfies the GH theorem. We build this partition by sequentially assigning rows to $R_1$ and $R_2$, making sure that after each assignment the conditions of the GH theorem are satisfied for the rows that have been assigned. It will be clear that this procedure can also be applied when only a subset of the constraints is present (indeed, satisfying the GH theorem will be easier in this case).

Fix an agent $i$. The following procedure will be repeated for each agent in turn. Pick an arbitrary vertex in $C_r$ (which is a tree) to be a root and direct all edges `away' from the root (when $R$ is a subset of the constraints, we delete from $C_r$ any edge for which at most one member of the corresponding pair of rows appears in $R$). Label the root `even'. Label all its neighbors `odd', label the neighbors of the neighbors `even' and so on. If a vertex was labeled `even' assign the associated row to the set $R_1$, otherwise to the set $R_2$. This produces a partition of these constraints satisfying GH.

Initially, all leaves and edges of $C_r$ are unmarked. Trace out a path from the root to one of the leaves of $C_r$ and mark that leaf. Each unmarked directed edge $(u, v)$ on this path corresponds to a pair of rows. Assign one of them to the same set as the label of $u$ and the other to the same set as the label of vertex $v$. Notice that in making this assignment the conditions of the GH theorem continue to be satisfied. Mark the edge $(u, v)$. If we repeat this procedure again with another path from the root to an unmarked leaf, we will violate the GH theorem. To see why, suppose the tree contains edge $(u, v)$ as well as $(u, t)$. Suppose $v$ was labeled `odd' on the first iteration and was marked, meaning its row was assigned to $R_2$. Subsequently, the row for $(u, t)$ will also be assigned to $R_2$, which will produce a partition that violates the GH theorem. We can avoid this problem by flipping the labels on all the vertices before repeating the path tracing procedure.

What is the institutional detail that makes electricity special? It's in the physics, which I will summarize with a model of DC current in a resistive network. Note that other sources, like Wikipedia, give other reasons for why electricity is special:

Electricity is by its nature difficult to store and has to be available on demand. Consequently, unlike other products, it is not possible, under normal operating conditions, to keep it in stock, ration it or have customers queue for it. Furthermore, demand and supply vary continuously. There is therefore a physical requirement for a controlling agency, the transmission system operator, to coordinate the dispatch of generating units to meet the expected demand of the system across the transmission grid.

I’m skeptical. To see why, replace electricity by air travel.

Let $V$ be the set of vertices and $E$ the set of edges of the network. It will be convenient in what follows to assign (arbitrarily) an orientation to each edge in $E$. Let $A$ be the set of directed arcs that result. Hence, $(i, j) \in A$ means that the edge is directed from $i$ to $j$. Notice, if $(i, j) \in A$, then $(j, i) \not\in A$.

Associated with each $(i, j) \in A$ is a number $x_{ij}$ that we interpret as a flow of electricity. If $x_{ij} > 0$ we interpret this to be a flow from $i$ to $j$. If $x_{ij} < 0$ we interpret this as a flow from $j$ to $i$.

- Let $r_{ij}$ be the resistance on link $(i, j)$.
- $c_i$: unit cost of injecting current into node $i$.
- $v_i$: marginal value of current consumed at node $i$.
- $d_i$: amount of current consumed at node $i$.
- $s_i$: amount of current injected at node $i$.
- $K_{ij}$: capacity of link $(i, j)$.

Current must satisfy two conditions. The first is conservation of flow at each node:
$$\sum_{j: (i,j) \in A} x_{ij} - \sum_{j: (j,i) \in A} x_{ji} = s_i - d_i \quad \forall i \in V.$$

The second is Ohm's law. There exist node potentials $\phi$ such that
$$x_{ij} = \frac{\phi_i - \phi_j}{r_{ij}} \quad \forall (i, j) \in A.$$

Using this system of equations one can derive the schoolboy rules for computing the resistance of a network (add them in series, add the reciprocals in parallel). At the end of this post is a digression that shows how to formulate the problem of finding a flow that satisfies Ohm's law as an optimization problem. It's not relevant for the economics, but charming nonetheless.
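The schoolboy rules themselves fit in two lines; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

def series(*rs):
    """Resistances in series add."""
    return sum(rs)

def parallel(*rs):
    """Reciprocals of resistances in parallel add."""
    return 1 / sum(1 / Fraction(r) for r in rs)

# Two 2-ohm resistors in parallel give 1 ohm; adding a 1-ohm resistor
# in series gives a total of 2 ohms.
print(series(Fraction(1), parallel(2, 2)))  # 2
```

Both rules fall out of Ohm's law plus conservation: in series the same current sees the summed potential drops, in parallel the same potential drop drives currents that add.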

At each node $i \in V$ there is a power supplier with constant marginal cost of production of $c_i$ up to $S_i$ units. At each $i \in V$ there is a consumer with constant marginal value of $v_i$ up to $D_i$ units. A natural optimization problem to consider is
$$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$$
subject to
$$\sum_{j: (i,j) \in A} x_{ij} - \sum_{j: (j,i) \in A} x_{ji} = s_i - d_i \quad \forall i \in V$$
$$x_{ij} = \frac{\phi_i - \phi_j}{r_{ij}} \quad \forall (i, j) \in A$$
$$-K_{ij} \leq x_{ij} \leq K_{ij} \quad \forall (i, j) \in A$$
$$0 \leq s_i \leq S_i, \quad 0 \leq d_i \leq D_i \quad \forall i \in V$$

This is the problem of finding a flow that maximizes surplus.

Let $\mathcal{C}$ be the set of cycles in $(V, E)$. Observe that each $C \in \mathcal{C}$ corresponds to a cycle in $A$ if we ignore the orientation of the edges. For each cycle $C \in \mathcal{C}$, let $C^+$ denote the edges in $A$ that are traversed in accordance with their orientation. Let $C^-$ be the set of edges in $A$ that are traversed in the opposing orientation.

We can project out the variables $\phi$ and reformulate as
$$\max \sum_{i \in V} v_i d_i - \sum_{i \in V} c_i s_i$$
subject to
$$\sum_{j: (i,j) \in A} x_{ij} - \sum_{j: (j,i) \in A} x_{ji} = s_i - d_i \quad \forall i \in V$$
$$\sum_{(i,j) \in C^+} r_{ij} x_{ij} - \sum_{(i,j) \in C^-} r_{ij} x_{ij} = 0 \quad \forall C \in \mathcal{C}$$
$$-K_{ij} \leq x_{ij} \leq K_{ij}, \quad 0 \leq s_i \leq S_i, \quad 0 \leq d_i \leq D_i$$

Recall the scenario we ended with in part 1: nodes $\{1, 2, 3\}$ with links $(1,2)$, $(1,3)$ and $(2,3)$, and in addition suppose $r_{ij} = 1$ for all $(i, j)$. Only $(1,3)$ has a capacity constraint, of 600. There are suppliers at nodes 1 and 2, with $c_1 < c_2$, and each has unlimited capacity. At node 3, the marginal value is $v_3$ up to 1500 units and zero thereafter. The optimization problem is
$$\max v_3 d_3 - c_1 s_1 - c_2 s_2$$
subject to
$$x_{12} + x_{13} = s_1, \quad x_{23} - x_{12} = s_2, \quad x_{13} + x_{23} = d_3$$
$$x_{12} + x_{23} - x_{13} = 0$$
$$-600 \leq x_{13} \leq 600, \quad s_1, s_2 \geq 0, \quad 0 \leq d_3 \leq 1500$$

Notice, for every unit of flow sent along $(1,3)$, half a unit of flow must be sent along $(1,2)$ and $(2,3)$ as well to satisfy the cycle flow constraint.
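This 2:1 split can be verified directly from Ohm's law on a three-node network with unit resistance on every link (an illustrative reconstruction; I am assuming equal resistances, as the cycle constraint above requires):

```python
from fractions import Fraction

# Triangle network on nodes 1, 2, 3 with unit resistance on each link.
# Inject 1 unit at node 1, withdraw 1 unit at node 3. With unit resistances,
# the flow on link (i, j) equals the potential difference phi_i - phi_j.
# Normalize phi_3 = 0. Conservation gives:
#   node 1: (phi1 - phi2) + phi1 = 1
#   node 2: (phi1 - phi2) = phi2
# Solving: phi2 = phi1 / 2, hence phi1 = 2/3 and phi2 = 1/3.
phi1 = Fraction(2, 3)
phi2 = Fraction(1, 3)

x13 = phi1          # flow on the direct link (1,3)
x12 = phi1 - phi2   # first leg of the indirect path
x23 = phi2          # second leg of the indirect path

print(x13, x12, x23)  # 2/3 1/3 1/3
```

Per unit injected, two thirds travels on the direct link and one third on each leg of the indirect path: half a unit on $(1,2)$ and $(2,3)$ for every unit on $(1,3)$, exactly as the cycle constraint dictates.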

The solution to this problem is $s_1 = 300$, $s_2 = 1200$, $d_3 = 1500$, $x_{13} = 600$, $x_{23} = 900$ and $x_{12} = -300$. What is remarkable is that not all of customer 3's demand is met by the lowest cost producer even though that producer has unlimited capacity. Why is this? The intuitive solution would have been to send 600 units along $(1,3)$ and 900 units along the path $(1,2), (2,3)$. This flow violates the cycle constraint.

In this example, when generator 1 injects electricity into the network to serve customer 3's demand, a positive amount of that electricity must flow along *every* path from 1 to 3 in specific proportions. The same is true for generator 2. Thus, generator 1 is unable to supply all of customer 3's demand. Moreover, to accommodate generator 2, it must actually reduce its flow! Hence, customer 3 cannot contract with generators 1 and 2 independently to supply power. The shared infrastructure requires that they coordinate what they inject into the system. This need for coordination is the argument for a clearing house, not just to manage the network but to match supply with demand. This is the argument for why electricity markets must be designed.

The externalities caused by electricity flows are not proof that a clearing house is needed. After all, we know that if we price the externalities properly we should be able to implement the efficient outcome. Let us examine what prices might be needed by looking at the dual of the surplus maximization problem.

Let $\mu_i$ be the dual variable associated with the flow balance constraint at node $i$. Let $\nu_C$ be associated with the cycle constraint for cycle $C$. Let $\theta_{ij}$ and $\theta_{ji}$ be associated with the link capacity constraints. Let $\pi_i$ and $\sigma_i$ be associated with the remaining two constraints. These can be interpreted as the profit of supplier $i$ and the surplus of customer $i$ respectively. For completeness, the dual would be:

subject to

Now $\mu$ has a natural interpretation as a price: what is paid for consumption at one node depends on where the supply was injected. $\theta_{ij}$ and $\theta_{ji}$ can be interpreted as the price of capacity. However, $\nu_C$ is trickier: a price for flow around a cycle? It would seem that one would have to assign ownership of each link as well as ownership of cycles in order to have a market generate these prices.
