
On January 30th of this year, one of the arms of the BBC reported a row at Sheffield University about an economics exam question. The offending exam question is reproduced below. Is the question, as one student suggested, indistinguishable from Chinese?

Consider a country with many cities and assume there are N > 0 people in each city. Output per person is \sigma N^{0.5} and there is a coordination cost per person of \gamma N^2. Assume that \sigma > 0 and \gamma > 0.

a) What sort of things does the coordination cost term \gamma N^2 represent? Why does it make sense that the exponent on N is greater than 1?

b) Draw a graph of per-capita consumption as a function of N and derive the optimal city size N. How does it depend on the parameters \sigma and \gamma? Provide intuition for your answers.

c) Describe which combination of \sigma and \gamma generate a peasant economy, meaning an economy with no cities (or 1-person cities). Why might the values of the parameters \sigma and \gamma have changed over time? What do these changes imply in terms of optimal city size.

Without knowing what was covered in classes and homework, one cannot tell what kind of tacit knowledge/conventions the examiner was justified in assuming in posing the question. It’s easy, with experience at these things, to guess what the examiner had in mind. Nevertheless, the question is badly worded and allows a `sea lawyer’ of a student to get full marks.

First, the sentence does not assert a connection between output and coordination. Thus, the answer to (a) should be:

Without knowing the purpose of the coordination, it is impossible to answer this question.

A better first sentence would have been:

Consider a country with many cities and assume there are N > 0 people in each city. Output per person is \sigma N^{0.5} and to achieve it requires a coordination cost per person of \gamma N^2.

Second, readers are not told the units in which output is denominated. Thus, part (b) cannot be answered unless one assumes that output has a constant dollar value. One might reasonably suppose this is not the case. The sea lawyer would answer:

As output can be generated at no cost, and is monotone in city size, the optimal size of the city is infinity. Note this does not depend on the values of \sigma or \gamma.

The answer to part (c), consistent with the earlier answers:

From the answer to part (b) we see that no combination of parameters would generate a peasant economy.
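For completeness, the answer the examiner presumably intended for (b), treating output as directly consumable, can be checked numerically. A minimal sketch (the parameter values are invented for illustration):

```python
import math

def per_capita_consumption(N, sigma, gamma):
    # output per person minus coordination cost per person
    return sigma * math.sqrt(N) - gamma * N**2

def optimal_city_size(sigma, gamma):
    # First-order condition: 0.5*sigma*N**(-0.5) = 2*gamma*N,
    # so N* = (sigma / (4*gamma)) ** (2/3):
    # increasing in sigma, decreasing in gamma.
    return (sigma / (4 * gamma)) ** (2 / 3)

# Invented parameter values, purely for illustration.
sigma, gamma = 1.0, 0.01
N_star = optimal_city_size(sigma, gamma)

# A crude grid search agrees with the closed form.
grid = [0.1 + 0.001 * k for k in range(50000)]
N_grid = max(grid, key=lambda N: per_capita_consumption(N, sigma, gamma))
```

The closed form makes the comparative statics in (b) immediate, and part (c) follows: as sigma falls or gamma rises, the optimal city size shrinks toward one.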

Yanis Varoufakis, the Greek finance minister, writes in the Feb. 16 NY Times:

Game theorists analyze negotiations as if they were split-a-pie games involving selfish players. Because I spent many years during my previous life as an academic researching game theory, some commentators rushed to presume that as Greece’s new finance minister I was busily devising bluffs, stratagems and outside options, struggling to improve upon a weak hand.

Is this a case of a theorist mugged by reality or someone who misunderstands theory? The second. The first sentence quoted proves it, because it’s false. Patently so. Yes, there are split-a-pie models of negotiation but they are not the only models. What about models where the pie changes in size with investments made by the players (i.e., double marginalization)? Wait, this is precisely the situation that Varoufakis sees himself in:

… table our proposals for regrowing Greece, explain why these are in Europe’s interest …

He continues:

`If anything, my game-theory background convinced me that it would be pure folly to think of the current deliberations between Greece and our partners as a bargaining game to be won or lost via bluffs and tactical subterfuge.’

Bluff and subterfuge are not the only arrows in the Game Theorist’s quiver. Commitment is another. Wait! Here is Varoufakis trying to signal commitment:

Faithful to the principle that I have no right to bluff, my answer is: The lines that we have presented as red will not be crossed. Otherwise, they would not be truly red, but merely a bluff.

Talk is cheap but credible commitments are not. A `weak’ type sometimes has a strong incentive to claim they are committed to this much and no more. Thus, Varoufakis’ claim that he does not bluff rings hollow, because a liar would say as much. Perhaps Varoufakis should dust off his Schelling and bone up on his signaling as well as war of attrition games. Varoufakis may not bluff, but his negotiating partners think he does. Protestations to the contrary, appeals to justice, Kant and imperatives are simply insufficient.

He closes with this:

One may think that this retreat from game theory is motivated by some radical-left agenda. Not so. The major influence here is Immanuel Kant, the German philosopher who taught us that the rational and the free escape the empire of expediency by doing what is right.

Noble sentiments, but Kant also reminded us that
“Out of the crooked timber of humanity, no straight thing was ever made.”

My advice to Varoufakis: more Game Theory, less metaphysics.

Thom Tillis, Senator from the great state of North Carolina, was the subject of some barbs when he suggested that the health-code mandated sign that reads

“Employees must wash hands before returning to work.”

was an example of government over-regulation.

Quoting himself:

“I said that I don’t have any problem with Starbucks if they choose to opt out of this policy as long as they post a sign that says, ‘We don’t require our employees to wash their hands after leaving the restroom.’ The market will take care of that.”

Many found the sentiment ridiculous, but for the wrong reason. Tillis was not advocating the abolition of the hand-washing injunction but replacing it with another that would, in his view, have the same effect. More generally, he seems to suggest the following rule: one can opt out of a regulation as long as one discloses this. If the two forms of regulation (all must follow vs. opt out but disclose) are outcome equivalent, why should we prefer one to the other?

Monitoring costs are not lower; one still has to monitor those who opt out to verify they have disclosed. What constitutes disclosure? For example:

`We do not require our employees to wash their hands because they do so anyway.’

Would the following be acceptable?

“We operate a hostile work environment, but pay above-average wages to compensate for that.”

`Is X a science?’ is a question I thought as dead as a Dodo. When I came upon it in an undergraduate philosophy of science class, the drums had been muffled and the mourners called. Nevertheless, there are still those who persist in resuscitating the corpse (see here for a recent example) and those who, for noble reasons, indulge them by responding.

There were, and are, two good reasons why this question should be left to rot in peace. The first is that the comparisons made to arrive at a demarcation are problematic. If Science were a country, Physics might be its capital. If one were to ask whether History is a Science, the customary thing to do is to measure the proximity of History to Science’s capital city. Why proximity to the capital and not to one of its outlying settlements like Geology and Archaeology? The second, better reason, is that the question, `is X a science?’ is of interest only if we believe that scientific knowledge should be privileged in some way. Perhaps it alone is valid and useful while nonscientific knowledge is not. If that is the case, the correct question should not be whether X is a science, but whether X produces knowledge that is valid and useful. Now we have something interesting to discuss: what constitutes useful or valid knowledge?

One might point to accurate prediction, but this alone cannot be the touchstone. How would we feel about the laws of Newtonian motion if we came upon them via regression? I suspect many of us would find such a theory to be incomplete, not least because of the concern with out of sample prediction. By the way, if you think this outlandish, I first learnt Newton’s laws by sending little carts down inclines with bits of ticker tape attached to them to so that we might, by induction, learn a linear relationship between velocity and acceleration. Truth be told, the Physics was sometimes lost in the enormous fun of racing the carts when the master’s back was turned. What if prediction is probabilistic rather than deterministic? In earlier posts on this blog you will find lengthy discussions of the problems associated with evaluating the accuracy of such predictions. I mention all this to hint at how difficult it is to pin down precisely what constitutes useful, reliable or valid knowledge.
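The ticker-tape exercise is, of course, just a regression. A minimal sketch, with invented measurements, of recovering a constant acceleration from noisy velocity readings:

```python
# Velocity readings from a cart on an incline, v = a*t with true
# acceleration a = 2.5; the "measurement noise" is invented.
times = [i / 10 for i in range(1, 11)]
noise = [0.05 if i % 2 == 0 else -0.05 for i in range(1, 11)]
velocities = [2.5 * t + e for t, e in zip(times, noise)]

# Least squares through the origin: a_hat = sum(t*v) / sum(t*t).
a_hat = sum(t * v for t, v in zip(times, velocities)) / sum(t * t for t in times)
```

The fit recovers the acceleration, but nothing in the procedure explains *why* the relationship is linear, which is exactly the sense in which a theory induced this way feels incomplete.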

Introduced externalities. The usual examples: pollution, good manners and flatulence. However, I also emphasized an externality we had dealt with all semester: when I buy a particular Picasso it prevents you from doing so, exerting a negative externality on you. I did this to point out that the problem with externalities is not their existence, but whether they are `priced’ into the market or not. For many of the examples of goods and services that we discussed in class, the externality is priced in and we get the efficient allocation.

What happens when the externality is not `priced in’? The hoary example of two firms, one upstream from the other, with the upstream firm releasing a pollutant into the river (that lowers its costs but raises the costs of the downstream firm) was introduced, and we went through the possibilities: regulation, taxation, merger/nationalization and tradeable property rights.

Discussed pros and cons of each. Property rights (i.e., Coase) consumed a larger portion of the time: how would you define them, and how would one ensure a perfectly competitive market in the trade of such rights? Nudged them towards the question of whether one can construct a perfectly competitive market for any property right.

To fix ideas, asked them to consider how a competitive market for the right to emit carbon might work. Factories can, at some expense, lower carbon emissions. Each of us values a reduction in carbon (but not necessarily identically). Suppose we hand out permits to factories (recall, Coase says the initial allocation of property rights is irrelevant) and have people buy up the permits to reduce carbon. Assuming carbon reduction is a public good (non-excludable and non-rivalrous), we have a classic public goods problem. Strategic behavior kills the market.
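The free-rider problem here can be made concrete with a back-of-the-envelope calculation (all numbers invented): retiring a permit is socially worthwhile, yet no individual will pay for it.

```python
n = 100   # residents (invented)
v = 1.0   # each resident's value for one retired carbon permit (invented)
p = 5.0   # market price of a permit (invented)

# Retiring a permit benefits everyone at once (non-rivalrous, non-excludable),
# so the social benefit is n*v, but a lone buyer bears the whole price p.
social_net_benefit = n * v - p    # positive: efficient to retire permits
private_net_benefit = v - p       # negative: no individual will buy

# Buying zero permits and free-riding on others is each resident's
# dominant strategy, so the market for retiring permits unravels.
```

Any n*v > p > v produces the same wedge; the market fails precisely because the benefit of the purchase cannot be confined to the purchaser.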

Some discussion of whether reducing carbon is a public good. The air we breathe (there are oxygen tanks)? Fireworks? Education? National Defense? Wanted to highlight that nailing down an example that fit the definition perfectly was hard. There are `degrees’. Had thought that Education would generate more of a discussion given the media attention it receives; it did not.

Concluded with an in-depth discussion of electricity markets, as it provides a wonderful vehicle to discuss efficiency, externalities as well as entry and exit in one package. It also provides a backdoor way into a discussion of net neutrality that seemed to generate some interest. As an aside I asked them whether perfectly competitive markets paid agents what they were worth. How should one measure an agent’s economic worth? Nudged them towards marginal product. Gave an example where Walrasian prices did not give each agent his marginal product (where the core does not contain the Vickrey outcome). So, was Michael Jordan overpaid or underpaid?
With respect to entry and exit I showed that the zero profit condition many had seen in earlier econ classes did not produce efficient outcomes. The textbook treatment assumes all potential entrants have the same technologies. What if the entrants have different technologies? For example, solar vs coal. Do we get the efficient mix of technologies? Assuming a competitive market that sets the Walrasian price for power, I showed them examples where we do not get the efficient mix of technologies.

An unintentionally amusing missive by Marion Fourcade, Etienne Ollion and Yann Algan discovers that the Economics profession is a self-perpetuating oligarchy. This is as shocking as the discovery of gambling in Casablanca. Economists are human and respond to incentives just as others do (see the Zingales piece that makes this point). Are other disciplines free of such oligarchies? Or is the complaint that the Economist’s oligarchy is just orders of magnitude more efficient than those of other disciplines?

The abstract lists three points the authors wish to make.

1) We begin by documenting the relative insularity of economics, using bibliometric data.

A former colleague of mine once classified disciplines as sources (of ideas) and sinks (absorbers of them). One could just as well describe the bibliometric data as showing that Economics is a source of ideas while the other social sciences are sinks. If one really wanted to put the boot in, perhaps the sinks should be called black holes, ones from which no good idea ever escapes.
2) Next we analyze the tight management of the field from the top down, which gives economics its characteristic hierarchical structure.

Economists can be likened to the Borg, which are described by Wikipedia as follows:

“….. the Borg force other species into their collective and connect them to “the hive mind”; the act is called assimilation and entails violence, abductions, and injections of microscopic machines called nanoprobes.”

3) Economists also distinguish themselves from other social scientists through their much better material situation (many teach in business schools, have external consulting activities), their more individualist worldviews, and in the confidence they have in their discipline’s ability to fix the world’s problems.

If the authors had known of this recent paper in Science they could have explained all this by pointing out that Economists are wheat people and other social scientists are rice people.

General equilibrium! Crown jewel of micro-economic theory. Arrow and Hahn give the best justification:

“There is by now a long and fairly imposing line of economists from Adam Smith to the present who have sought to show that a decentralized economy motivated by self-interest and guided by price signals would be compatible with a coherent disposition of economic resources that could be regarded, in a well defined sense, as superior to a large class of possible alternative dispositions. Moreover the price signals would operate in a way to establish this degree of coherence. It is important to understand how surprising this claim must be to anyone not exposed to the tradition. The immediate `common sense’ answer to the question `What will an economy motivated by individual greed and controlled by a very large number of different agents look like?’ is probably: There will be chaos. That quite a different answer has long been claimed true and has permeated the economic thinking of a large number of people who are in no way economists is itself sufficient ground for investigating it seriously. The proposition having been put forward and very seriously entertained, it is important to know not only whether it is true, but whether it could be true.”

But how to make it come alive for my students? When first I came to this subject it was in furious debates over central planning vs. the market. Gosplan, the commanding heights, indicative planning were as familiar in our mouths as Harry the King, Bedford and Exeter, Warwick and Talbot, Salisbury and Gloucester….England, on the eve of a general election was poised to leave all this behind. The question, as posed by Arrow and Hahn, captured the essence of the matter.

Those times have passed, and I chose instead to motivate the simple exchange economy by posing the question of how a sharing economy might work. Starting with two agents endowed with a positive quantity of each of two goods, and given their utility functions, I asked for trades that would leave each of them better off. Not only did such trades exist, there was more than one. Which one to pick? What if there were many agents and many goods? Would bilateral trades suffice to find mutually beneficial trading opportunities? Tri-lateral? The point of this thought experiment was to show how, in the absence of prices, mutually improving trades might be very hard to find.

Next, introduced prices and computed demands. Observed that demands in this world could increase with prices and offered an explanation. Suggested that this put the existence of market clearing prices in doubt. Somehow, in the context of the example, this all works out. Hand waved about the intermediate value theorem before asserting existence in general.
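In the Cobb-Douglas special case everything can be computed explicitly, and the hand-waved intermediate value theorem argument becomes a one-line bisection. A sketch (utilities and endowments invented):

```python
# Two goods; the price of y is normalized to 1. Agent i has Cobb-Douglas
# utility x**a_i * y**(1 - a_i) and endowment (ex_i, ey_i).
# All parameter values below are invented for illustration.
prefs = [0.3, 0.6]
endows = [(1.0, 0.0), (0.0, 1.0)]

def excess_demand_x(p):
    z = 0.0
    for a, (ex, ey) in zip(prefs, endows):
        wealth = p * ex + ey
        z += a * wealth / p - ex   # Cobb-Douglas demand for x, net of endowment
    return z

# Excess demand is continuous, positive for small p and negative for large p,
# so the intermediate value theorem gives a market-clearing price; bisect.
lo, hi = 0.01, 100.0
for _ in range(100):
    mid = (lo + hi) / 2
    if excess_demand_x(mid) > 0:
        lo = mid
    else:
        hi = mid
p_star = (lo + hi) / 2
```

By Walras’ law, clearing the market for x clears the market for y as well, so the single bisection pins down the whole equilibrium.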

On to the so what. Why should one prefer the outcomes obtained under a Walrasian equilibrium to other outcomes? Notion of Pareto optimality and first welfare theorem. Highlighted weakness of Pareto notion, but emphasized how little information each agent needed other than price, own preferences and endowment to determine what they would sell and consume. Amazingly, prices coordinate everyone’s actions. Yes, but how do we arrive at them? Noted and swept under the rug, why spoil a good story just yet?

Gasp! Did not cover Edgeworth boxes.

Went on to introduce production. Spent some time explaining why the factories had to be owned by the consumers. Owners must eat as well. However, it also sets up an interesting circularity in that in small models, the employee of the factory is also the major consumer of its output! It’s not often that a firm’s employees are also a major fraction of its consumers.

Closed with how, in a Walrasian equilibrium, output is produced at minimum total cost. Snuck in the central planner, who solves the problem of finding the minimum-cost production levels to meet a specified demand. Pointed out that we can implement the same solution using prices that come from the Lagrange multiplier on the central planner’s demand constraint. Ended by coming back full circle: why bother with prices, why not just let the central planner have his way?
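The multiplier-as-price point fits in a toy example (cost functions and demand invented): two plants must jointly supply D units, and the multiplier on the demand constraint is exactly the price that decentralizes the planner’s solution.

```python
# Planner: min q1**2 + 2*q2**2  subject to  q1 + q2 = D, with D = 3 (invented).
# Lagrangian FOC: 2*q1 = 4*q2 = lam, so q1 = 2*q2.
D = 3.0
q2 = D / 3           # from q1 = 2*q2 and q1 + q2 = D
q1 = 2 * q2
lam = 2 * q1         # shadow price of the demand constraint

# Decentralization: facing price p = lam, each profit-maximizing plant
# produces where marginal cost equals the price.
p = lam
q1_market = p / 2    # maximizes p*q - q**2
q2_market = p / 4    # maximizes p*q - 2*q**2
```

The two plants, acting independently at price lam, reproduce the planner’s quantities exactly, which is the sense in which the multiplier "is" the Walrasian price.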

Starr’s ’69 paper considered Walrasian equilibria in exchange economies with non-convex preferences, i.e., the upper contour sets of utility functions are non-convex. Suppose {n} agents and {m} goods with {n \geq m}. Starr identified a price vector {p^*} and a feasible allocation with the property that at most {m} agents did not receive a utility maximizing bundle at the price vector {p^*}.

A poetic interlude. Arrow and Hahn’s book has a chapter that describes Starr’s work and closes with a couple of lines of Milton:

A gulf profound as that Serbonian Bog
Betwixt Damiata and Mount Casius old,
Where Armies whole have sunk.

Milton uses the word concave a couple of times in Paradise Lost to refer to the vault of heaven. Indeed the OED lists this as one of the poetic uses of concavity.

Now, back to brass tacks. Suppose {u_i} is agent {i}‘s utility function. Replace the upper contour sets associated with {u_i} for each {i} by its convex hull. Let {u^*_i} be the concave utility function associated with the convex hulls. Let {p^*} be the Walrasian equilibrium prices wrt {\{u^*_i\}_{i=1}^n}. Let {x^*_i} be the allocation to agent {i} in the associated Walrasian equilibrium.

For each agent {i} let

\displaystyle S^i = \arg \max \{u_i(x): p^* \cdot x \leq p^*\cdot e^i\}

where {e^i} is agent {i}‘s endowment. Denote by {w} the vector of total endowments and let {S^{n+1} = \{-w\}}.

Let {z^* = \sum_{i=1}^nx^*_i - w = 0} be the excess demand with respect to {p^*} and {\{u^*_i\}_{i=1}^n}. Notice that {z^*} is in the convex hull of the Minkowski sum of {\{S^1, \ldots, S^n, S^{n+1}\}}. By the Shapley-Folkman-Starr lemma we can find {x_i \in conv(S^i)} for {i = 1, \ldots, n}, such that {|\{i: x_i \in S^i\}| \geq n - m} and {0 = z^* = \sum_{i=1}^nx_i - w}.

When one recalls that Walrasian equilibria can also be determined by maximizing a suitable weighted (the Negishi weights) sum of utilities over the set of feasible allocations, Starr’s result can be interpreted as a statement about approximating an optimization problem. I believe this was first articulated by Aubin and Ekeland (see their ’76 paper in Math of OR). As an illustration, consider the following problem:

\displaystyle \max \sum_{j=1}^nf_j(y_j)

subject to

\displaystyle Ay = b

\displaystyle y \geq 0

Call this problem {P}. Here {A} is an {m \times n} matrix with {n > m}.

For each {j} let {f^*_j(\cdot)} be the smallest concave function such that {f^*_j(t) \geq f_j(t)} for all {t \geq 0} (probably quasi-concave will do). Instead of solving problem {P}, solve problem {P^*} instead:

\displaystyle \max \sum_{j=1}^nf^*_j(y_j)

subject to

\displaystyle Ay = b

\displaystyle y \geq 0

The obvious question to be answered is how good an approximation the solution to {P^*} is to problem {P}. To answer it, let {e_j = \sup_t [f_j^*(t) - f_j(t)]} (where I leave you, the reader, to fill in the blanks about the appropriate domain). Each {e_j} measures how close {f_j^*} is to {f_j}. Sort the {e_j}’s in decreasing order. If {y^*} is an optimal solution to {P^*}, then following the idea in Starr’s ’69 paper we get:

\displaystyle \sum_{j=1}^nf_j(y^*_j) \geq \sum_{j=1}^nf^*_j(y^*_j)- \sum_{j=1}^me_j

The Shapley-Folkman-Starr lemma states that the Minkowski sum of a large number of sets is approximately convex. The clearest statement, as well as the nicest proof, I am familiar with is due to J. W. S. Cassels. Cassels is a distinguished number theorist who for many years taught the mathematical economics course in the Tripos. The lecture notes are available in a slender book now published by Cambridge University Press.

This central-limit-like quality of the lemma is well beyond the capacity of a hewer of wood like myself. I prefer the more prosaic version.

Let {\{S^j\}_{j=1}^n} be a collection of sets in {\Re^m} with {n > m}. Denote by {S} the Minkowski sum of the collection {\{S^j\}_{j=1}^n}. Then, every {x \in conv(S)} can be expressed as {\sum_{j=1}^nx^j} where {x^j \in conv(S^j)} for all {j = 1,\ldots, n} and {|\{j: x^j \not \in S^j\}| \leq m}.

How might this be useful? Let {A} be an {m \times n} 0-1 matrix and {b \in \Re^m} with {n > m}. Consider the problem

\displaystyle \max \{cx: Ax = b, x_j \in \{0,1\}\ \forall \,\, j = 1, \ldots, n\}.

Let {x^*} be a solution to the linear relaxation of this problem. Then, the lemma yields the existence of a 0-1 vector {x} such that {cx \geq cx^* = z} and {||Ax - b||_{\infty} \leq m}. One can get a bound in terms of Euclidean distance as well.

How does one do this? Denote each column {j} of the {A} matrix by {a^j} and let {d^j = (c_j, a^j)}. Let {S^j = \{d^j, 0\}}. Because {z = cx^*} and {b = Ax^*} it follows that {(z,b) \in conv(S)}. Thus, by the Lemma,

\displaystyle (z, b) = \sum_{j=1}^n(c_j, a^j)y_j

where each {y_j \in [0,1]} and {|\{j: y_j \in (0,1)\}| \leq m}. In words, {y} has at most {m} fractional components. Now construct a 0-1 vector {y^*} from {y} as follows. If {y_j \in \{0,1\}}, set {y^*_j = y_j}. If {y_j} is fractional, round {y^*_j} up to 1 with probability {y_j} and down to zero otherwise. Observe that {||Ay^* - b||_{\infty} \leq m} and {E(cy^*) = cx^*}. Hence, there must exist a 0-1 vector {x} with the claimed properties.

The error bound of {m} is too large for many applications. This is a consequence of the generality of the lemma, which makes no use of any structure encoded in the {A} matrix. For example, suppose {x^*} were an extreme point and {A} a totally unimodular matrix. Then, the number of fractional components of {x^*} is zero. The rounding methods of Király, Lau and Singh as well as of Kumar, Marathe, Parthasarathy and Srinivasan exploit the structure of the matrix. In fact both use an idea that one can find in Cassels’s paper. I’ll follow the treatment in Kumar et al.

As before we start with {x^*}. For convenience suppose {0 < x^*_j < 1} for all {j = 1, \ldots, n}. As {A} has more columns than rows, there must be a non-zero vector {r} in the kernel of {A}, i.e., {Ar = 0}. Consider {x^* + \alpha r} and {x^* - \beta r}. For {\alpha > 0} and {\beta > 0} sufficiently small, {x^*_j + \alpha r_j, x^*_j - \beta r_j \in [0,1]} for all {j}. Increase {\alpha} and {\beta} until the first time at least one component of {x^* + \alpha r} and {x^* - \beta r} is in {\{0,1\}}. Next, select the vector {x^* + \alpha r} with probability {\frac{\beta}{\alpha + \beta}} or the vector {x^* - \beta r} with probability {\frac{\alpha}{\alpha + \beta}}. Call the selected vector {x^1}.

Now {Ax^1 = b}. Furthermore, {x^1} has at least one more integer component than {x^*}. Let {J = \{j: x^1_j \in (0,1)\}}. Let {A^J} be the matrix consisting only of the columns in {J} and let {x^1(J)} consist only of the components of {x^1} in {J}. Consider the system {A^Jx^1(J) = b - \sum_{j \not \in J}a^jx^1_j}. As long as {A^J} has more columns than rows we can repeat the same argument as above. This iterative procedure gives us the same rounding result as the Lemma. However, one can do better, because even when the number of columns of {A^J} does not exceed the number of rows, the system may be under-determined and therefore the null space non-trivial.
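A minimal sketch of this iterative scheme (my own implementation, not Kumar et al.’s code): repeatedly find a kernel vector of the columns on the fractional support and move in a randomly chosen direction until some coordinate hits 0 or 1.

```python
import numpy as np

def iterative_round(A, x, rng, tol=1e-9):
    """Dependent rounding: returns y in [0,1]^n with Ay = Ax exactly
    and at most m (= number of rows) fractional coordinates."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(x, dtype=float).copy()
    while True:
        J = [j for j in range(len(y)) if tol < y[j] < 1 - tol]
        if not J:
            break
        # Kernel vector of the fractional columns, via SVD.
        _, s, vh = np.linalg.svd(A[:, J])
        rank = int(np.sum(s > tol))
        if rank >= len(J):
            break  # no kernel direction left: at most m fractional entries remain
        r = np.zeros(len(y))
        r[J] = vh[rank]
        # Largest steps keeping y + alpha*r and y - beta*r inside [0,1]^n.
        alpha = min(min(((1 - y[j]) / r[j] for j in J if r[j] > tol), default=np.inf),
                    min((y[j] / -r[j] for j in J if r[j] < -tol), default=np.inf))
        beta = min(min((y[j] / r[j] for j in J if r[j] > tol), default=np.inf),
                   min(((1 - y[j]) / -r[j] for j in J if r[j] < -tol), default=np.inf))
        # Probabilities chosen so E[y] is unchanged; Ar = 0 keeps Ay = Ax exactly.
        if rng.random() < beta / (alpha + beta):
            y += alpha * r
        else:
            y -= beta * r
    return y

# Demo: three variables, one constraint (m = 1); the row sum is preserved.
A = [[1.0, 1.0, 1.0]]
x0 = [0.5, 0.5, 0.5]
y = iterative_round(A, x0, np.random.default_rng(0))
```

Each pass makes at least one more coordinate integral while holding {Ay = b} exactly, which is why the procedure matches, and can beat, the bound from the Lemma.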

In a sequel, I’ll describe an optimization version of the Lemma that was implicit in Starr’s 1969 Econometrica paper on equilibria in economies with non-convexities.

Economists, I told my class, are the most empathetic and tolerant of people. Empathetic, as they learnt from game theory, because they strive to see the world through the eyes of others. Tolerant, because they never question anyone’s preferences. If I had the talent I’d have broken into song with a version of `Why Can’t a Woman be More Like a Man’:

Psychologists are irrational, that’s all there is to that!
Their heads are full of cotton, hay, and rags!
They’re nothing but exasperating, irritating,
vacillating, calculating, agitating,
Maddening and infuriating lags!

Why can’t a psychologist be more like an economist?

Back to earth with preference orderings. Avoided the word rational to describe the restrictions placed on preference orderings; used `consistency’ instead. More neutral, and conveys the idea that inconsistency makes prediction hard rather than suggesting a Wooster-like IQ. Emphasized that utility functions were simply a succinct representation of consistent preferences and had no meaning beyond that.

In a bow to tradition went over the equi-marginal principle, a holdover from the days when economics students were ignorant of multivariable calculus. Won’t do that again. Should be banished from the textbooks.

Now for some meat: the income and substitution (I&S) effect. Had been warned this was tricky. `No shirt Sherlock,’ my students might say. One has to be careful about the set up.

Suppose price vector p and income I. Before I actually purchase anything, I contemplate what I might purchase to maximize my utility. Call that x.
Now, before I actually purchase x, the price of good 1 rises. Again, I contemplate what I might consume. Call it z. The textbook discussion of the income and substitution effect is about the difference between x and z.

As described, the agent has not purchased x or z. Why this pettifoggery? Suppose I actually purchase x before the price increase. If the price of good 1 goes up, I can resell it. This is both a change in price and income, something not covered by the I&S effect.

The issue is resale of good 1. Thus, an example of an I&S effect using housing should distinguish between owning vs. renting. To be safe one might want to stick to consumables. To observe the income effect, we would need a consumable that sucks up a `largish’ fraction of income. A possibility is a low-income consumer who spends a large fraction of income on food.
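For such a consumable with Cobb-Douglas preferences, the decomposition is a three-line computation. A sketch with invented numbers, using the Slutsky (rather than Hicks) compensation:

```python
# Cobb-Douglas budget share a on food: demand for food is x = a * I / p.
# Invented numbers: a = 0.5, income I = 100, food price rises from 1 to 2.
a, I = 0.5, 100.0
p_old, p_new = 1.0, 2.0

x_old = a * I / p_old                 # demand before the price rise
x_new = a * I / p_new                 # demand after the price rise

# Slutsky compensation: income adjusted so the old bundle is just affordable.
I_comp = I + x_old * (p_new - p_old)
x_comp = a * I_comp / p_new           # compensated (substitution-only) demand

substitution_effect = x_comp - x_old
income_effect = x_new - x_comp        # the two effects sum to the total change
```

With food at half the budget, the income effect is as large as the substitution effect, which is exactly why a good that sucks up a large share of income is needed to see it.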

