You are currently browsing the monthly archive for November 2014.
General equilibrium! Crown jewel of micro-economic theory. Arrow and Hahn give the best justification:
“There is by now a long and fairly imposing line of economists from Adam Smith to the present who have sought to show that a decentralized economy motivated by self-interest and guided by price signals would be compatible with a coherent disposition of economic resources that could be regarded, in a well defined sense, as superior to a large class of possible alternative dispositions. Moreover the price signals would operate in a way to establish this degree of coherence. It is important to understand how surprising this claim must be to anyone not exposed to the tradition. The immediate ‘common sense’ answer to the question ‘What will an economy motivated by individual greed and controlled by a very large number of different agents look like?’ is probably: There will be chaos. That quite a different answer has long been claimed true and has permeated the economic thinking of a large number of people who are in no way economists is itself sufficient ground for investigating it seriously. The proposition having been put forward and very seriously entertained, it is important to know not only whether it is true, but whether it could be true.”
But how to make it come alive for my students? When first I came to this subject it was in furious debates over central planning vs. the market. Gosplan, the commanding heights, indicative planning were as familiar in our mouths as Harry the King, Bedford and Exeter, Warwick and Talbot, Salisbury and Gloucester…. England, on the eve of a general election, was poised to leave all this behind. The question, as posed by Arrow and Hahn, captured the essence of the matter.
Those times have passed, and I chose instead to motivate the simple exchange economy by posing the question of how a sharing economy might work. Starting with two agents endowed with a positive quantity of each of two goods, and given their utility functions, I asked for trades that would leave each of them better off. Not only did such trades exist, there was more than one. Which one to pick? What if there were many agents and many goods? Would bilateral trades suffice to find mutually beneficial trading opportunities? Tri-lateral? The point of this thought experiment was to show how, in the absence of prices, mutually improving trades might be very hard to find.
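To make the difficulty concrete, one can hunt for mutually improving trades by blind search. A minimal sketch, assuming Cobb-Douglas utilities and endowments that I invented for illustration (none of these numbers are from the lecture):

```python
import random

# Two agents with assumed Cobb-Douglas utilities over two goods.
def u1(x):  # agent 1 cares mostly about good 1
    return x[0] ** 0.7 * x[1] ** 0.3

def u2(x):  # agent 2 cares mostly about good 2
    return x[0] ** 0.3 * x[1] ** 0.7

e1, e2 = (9.0, 1.0), (1.0, 9.0)  # endowments

# With no prices to guide us, search blindly for bilateral trades:
# agent 1 gives t1 units of good 1 for t2 units of good 2.
random.seed(0)
improving = []
for _ in range(10000):
    t1 = random.uniform(0, e1[0])
    t2 = random.uniform(0, e2[1])
    a1 = (e1[0] - t1, e1[1] + t2)
    a2 = (e2[0] + t1, e2[1] - t2)
    if u1(a1) > u1(e1) and u2(a2) > u2(e2):
        improving.append((t1, t2))

print(f"{len(improving)} of 10000 random trades improve both agents")
```

Many trades improve on the endowment and they are not all alike; picking among them, and doing so with many agents and goods, is exactly the work the price mechanism short-circuits.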
Next, introduce prices, and compute demands. Observed that demands in this world could increase with prices and offered an explanation. Suggested that this put the existence of market-clearing prices in doubt. Somehow, in the context of the example, this all works out. Hand-waved about the intermediate value theorem before asserting existence in general.
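The hand-wave can be backed numerically: with Cobb-Douglas agents (my assumption, chosen for closed-form demands) the excess demand for good 1 is continuous, positive at low prices and negative at high ones, so the intermediate value theorem yields a market-clearing price that bisection finds. A sketch:

```python
# Cobb-Douglas demand: an agent with utility x1**a * x2**(1-a) spends
# the fraction a of income on good 1, so x1 = a * income / p1.
# Normalize p2 = 1 and search for the p1 that clears the good-1 market
# (Walras' law then clears good 2 automatically).
def demand_good1(a, e, p1):
    income = p1 * e[0] + 1.0 * e[1]
    return a * income / p1

agents = [(0.7, (9.0, 1.0)), (0.3, (1.0, 9.0))]  # (share a_i, endowment)

def excess_demand(p1):
    return (sum(demand_good1(a, e, p1) for a, e in agents)
            - sum(e[0] for _, e in agents))

# The intermediate value theorem in action: excess demand is continuous
# and changes sign on [0.01, 100], so bisect.
lo, hi = 0.01, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excess_demand(mid) > 0:
        lo = mid     # good 1 over-demanded: its relative price must rise
    else:
        hi = mid
p_star = 0.5 * (lo + hi)
print(f"market-clearing relative price of good 1: {p_star:.4f}")
```

In this symmetric example the clearing relative price works out to 1; the point is only that continuity plus a sign change is all the existence argument needs.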
On to the so what. Why should one prefer the outcomes obtained under a Walrasian equilibrium to other outcomes? Notion of Pareto optimality and first welfare theorem. Highlighted weakness of Pareto notion, but emphasized how little information each agent needed other than price, own preferences and endowment to determine what they would sell and consume. Amazingly, prices coordinate everyone’s actions. Yes, but how do we arrive at them? Noted and swept under the rug, why spoil a good story just yet?
Gasp! Did not cover Edgeworth boxes.
Went on to introduce production. Spent some time explaining why the factories had to be owned by the consumers: owners must eat as well. However, it also sets up an interesting circularity in that, in small models, the employee of the factory is also the major consumer of its output! It's not often that a firm's employees are also a major fraction of its consumers.
Closed with how, in Walrasian equilibrium, output is produced at minimum total cost. Snuck in the central planner, who solves the problem of finding the minimum-cost production levels to meet a specified demand. Pointed out that we can implement the same solution using prices that come from the Lagrange multiplier on the central planner's demand constraint. Ended by coming back full circle: why bother with prices, why not just let the central planner have his way?
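The last point can be made concrete in a two-plant example with quadratic costs (my invented numbers, not from the lecture): minimizing total cost subject to meeting demand equalizes marginal costs, and the Lagrange multiplier on the demand constraint is exactly the price that decentralizes the planner's solution.

```python
# A planner meets demand D from two plants with assumed quadratic costs
# c_i(q) = q**2 / (2 * s_i).  Minimizing c_1(q1) + c_2(q2) subject to
# q1 + q2 = D gives first-order conditions q_i / s_i = lam: marginal
# costs are equalized at lam, the multiplier on the demand constraint.
s1, s2, D = 2.0, 3.0, 10.0

lam = D / (s1 + s2)           # multiplier: marginal cost of one more unit
q1, q2 = lam * s1, lam * s2   # cost-minimizing production levels

# Decentralization: post the price p = lam and let each plant maximize
# profit p*q - c_i(q); its first-order condition q / s_i = p reproduces
# the planner's q_i exactly.
p = lam
print(f"price = multiplier = {p}; plant outputs = ({q1}, {q2})")
```

Posting $p = \lambda$ and letting each plant maximize profit reproduces the planner's outputs, which is the planner-versus-prices punchline in miniature.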
Starr’s ’69 paper considered Walrasian equilibria in exchange economies with non-convex preferences, i.e., the upper contour sets of the utility functions are non-convex. Suppose $m$ agents and $n$ goods with $m > n$. Starr identified a price vector $p^*$ and a feasible allocation with the property that at most $n$ agents did not receive a utility-maximizing bundle at the price vector $p^*$.
A poetic interlude. Arrow and Hahn’s book has a chapter that describes Starr’s work and closes with a couple of lines of Milton:
A gulf profound as that Serbonian Bog
Betwixt Damiata and Mount Casius old,
Where Armies whole have sunk.
Milton uses the word concave a couple of times in Paradise Lost to refer to the vault of heaven. Indeed the OED lists this as one of the poetic uses of concavity.
Now, back to brass tacks. Suppose $u_i$ is agent $i$'s utility function. Replace the upper contour sets associated with $u_i$ for each $i$ by their convex hulls. Let $\bar{u}_i$ be the concave utility function associated with the convex hulls. Let $\bar{p}$ be the Walrasian equilibrium prices with respect to $\{\bar{u}_i\}$. Let $\bar{x}^i$ be the allocation to agent $i$ in the associated Walrasian equilibrium.

For each agent $i$ let

$$D_i = \arg\max \{u_i(x) : \bar{p} \cdot x \le \bar{p} \cdot \omega^i\},$$

where $\omega^i$ is agent $i$'s endowment. Denote by $\omega$ the vector of total endowments and let $D = \sum_i D_i$.

Let $z = \sum_i \bar{x}^i - \omega$ be the excess demand with respect to $\bar{p}$ and $\{\bar{x}^i\}$. At equilibrium $z = 0$, and each $\bar{x}^i$ lies in $\mathrm{conv}(D_i)$, so $\omega$ is in the convex hull of the Minkowski sum of $\{D_i\}$. By the Shapley-Folkman-Starr lemma we can find $x^i \in \mathrm{conv}(D_i)$ for $i = 1, \dots, m$, such that $\sum_i x^i = \omega$ and $|\{i : x^i \notin D_i\}| \le n$.
When one recalls that Walrasian equilibria can also be determined by maximizing a suitable weighted (the Negishi weights) sum of utilities over the set of feasible allocations, Starr's result can be interpreted as a statement about approximating an optimization problem. I believe this was first articulated by Aubin and Ekeland (see their '76 paper in Math of OR). As an illustration, consider the following problem:

$$\max \left\{\sum_{j=1}^n f_j(x_j) : Ax \le b,\ x \ge 0\right\}$$

Call this problem $P$. Here $A$ is an $m \times n$ matrix with $n > m$.

For each $j$ let $\bar{f}_j$ be the smallest concave function such that $\bar{f}_j(t) \ge f_j(t)$ for all $t$ (probably quasi-concave will do). Instead of solving problem $P$, solve problem $\bar{P}$ instead:

$$\max \left\{\sum_{j=1}^n \bar{f}_j(x_j) : Ax \le b,\ x \ge 0\right\}$$

The obvious question to be answered is how good an approximation the solution to $\bar{P}$ is to problem $P$. To answer it, let $e_j = \sup_t [\bar{f}_j(t) - f_j(t)]$ (where I leave you, the reader, to fill in the blanks about the appropriate domain). Each $e_j$ measures how close $f_j$ is to $\bar{f}_j$. Sort the $e_j$'s in decreasing order. If $\bar{x}$ is an optimal solution to $\bar{P}$, then following the idea in Starr's '69 paper we get:

$$\sum_{j=1}^n f_j(\bar{x}_j) \ge \mathrm{opt}(P) - \sum_{j=1}^{m+1} e_j.$$
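To see what these gaps are concretely: on a finite grid, the smallest concave function dominating $f_j$ is the upper hull of its graph, and the gap $e_j$ is the largest vertical distance between the two. A sketch (the quadratic $f$ is an invented test case, not from the paper):

```python
def concave_envelope(ts, fs):
    """Smallest concave function >= f on a grid: upper hull of the graph."""
    pts = sorted(zip(ts, fs))
    hull = []
    for p in pts:
        # pop the last hull point if it lies on or below the chord to p
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    # evaluate the piecewise-linear upper hull back on the grid
    env, k = [], 0
    for t, _ in pts:
        while hull[k + 1][0] < t:
            k += 1
        (x0, y0), (x1, y1) = hull[k], hull[k + 1]
        env.append(y0 + (y1 - y0) * (t - x0) / (x1 - x0))
    return env

ts = [i / 10 for i in range(11)]
fs = [t * t for t in ts]            # convex, hence badly non-concave
env = concave_envelope(ts, fs)
gap = max(e - f for e, f in zip(env, fs))
print(f"largest concavification gap e = {gap:.3f}")
```

For $f(t) = t^2$ on $[0,1]$ the envelope is the chord $t$, so the gap peaks at $t = 1/2$ with value $1/4$.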
Here is the question from Ross’ book that I posted last week
Question 1 We have two coins, a red one and a green one. When flipped, one lands heads with probability $p_1$ and the other with probability $p_2$. Assume that $p_1 > p_2$. We do not know which coin is the $p_1$ coin. We initially attach probability $\alpha$ to the red coin being the $p_1$ coin. We receive one dollar for each heads and our objective is to maximize the total expected discounted return with discount factor $\beta$. Find the optimal policy.
This is a dynamic programming problem where the state is the belief that the red coin is the $p_1$ coin. Every period we choose a coin to toss, get a reward and update our state given the outcome. Before I give my solution let me explain why we can't immediately invoke uncle Gittins.
In the classical bandit problem there are $n$ arms and each arm $i$ provides a reward from an unknown distribution $\theta_i$. Bandit problems are used to model tradeoffs between exploitation and exploration: every period we either exploit an arm about whose distribution we already have a good idea or explore another arm. The $\theta_i$ are randomized independently according to distributions $\mu_i$, and what we are interested in is the expected discounted reward. The optimization problem has a remarkable solution: choose in every period the arm with the largest Gittins index. Then update your belief about that arm using Bayes' rule. The Gittins index is a function which attaches a number (the index) to every belief about an arm. What is important is that the index of an arm depends only on $\mu_i$ — our current belief about the distribution of that arm — not on our beliefs about the distributions of the other arms.
The independence assumption means that we only learn about the distribution of the arm we are using. This assumption is not satisfied in the red coin green coin problem: if we toss the red coin and get heads then the probability that the green coin is the $p_1$ coin decreases. Googling ‘multi-armed bandit’ with ‘dependent arms’ I got some papers which I haven't looked at carefully, but my superficial impression is that they would not help here.
Here is my solution. Call the problem I started with ‘the difficult problem’ and consider a variant which I call ‘the easy problem’. Let $\bar{\alpha}$ solve $\bar{\alpha}^2 / \left(\bar{\alpha}^2 + (1-\bar{\alpha})^2\right) = \alpha$, so that $\bar{\alpha} > 1/2$ exactly when $\alpha > 1/2$. In the easy problem there are again two coins, but this time the red coin is $p_1$ with probability $\bar{\alpha}$ and $p_2$ with probability $1-\bar{\alpha}$ and, independently, the green coin is $p_1$ with probability $1-\bar{\alpha}$ and $p_2$ with probability $\bar{\alpha}$. The easy problem is easy because it is a bandit problem. We have to keep track of beliefs $r$ and $g$ about the red coin and the green coin ($r$ is the probability that the red coin is $p_1$), starting with $r = \bar{\alpha}$ and $g = 1-\bar{\alpha}$, and when we toss the red coin we update $r$ but keep $g$ fixed. It is easy to see that the Gittins index of an arm is a monotone function of the belief that the arm is $p_1$, so the optimal strategy is to play red when $r > g$ and green when $r < g$. In particular, the optimal action in the first period is red when $\bar{\alpha} > 1/2$ and green when $\bar{\alpha} < 1/2$.
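The monotonicity claim about the Gittins index can be checked numerically via the retirement ("calibration") characterization: the index at a belief is the per-period retirement rate at which one is indifferent between retiring immediately and continuing. The sketch below uses invented parameters ($p_1 = 0.8$, $p_2 = 0.4$, $\beta = 0.9$) and a crude belief grid with bisection:

```python
# Gittins index of a single "coin arm": the arm is the p1-coin with
# probability q and the p2-coin otherwise, and pays 1 per Heads.
P1, P2, BETA = 0.8, 0.4, 0.9
N = 60
GRID = [i / N for i in range(N + 1)]

def heads_prob(q):
    return q * P1 + (1 - q) * P2

def posterior(q, heads):
    ph = heads_prob(q)
    return q * P1 / ph if heads else q * (1 - P1) / (1 - ph)

def interp(V, q):
    # linear interpolation of grid values at belief q
    x = min(max(q, 0.0), 1.0) * N
    i = min(int(x), N - 1)
    w = x - i
    return (1 - w) * V[i] + w * V[i + 1]

def value_with_retirement(lam, sweeps=100):
    # each period: retire for lam per period forever, or toss once more
    retire = lam / (1 - BETA)
    V = [retire] * (N + 1)
    for _ in range(sweeps):
        V = [max(retire,
                 heads_prob(q) * (1 + BETA * interp(V, posterior(q, True)))
                 + (1 - heads_prob(q)) * BETA * interp(V, posterior(q, False)))
             for q in GRID]
    return V

def gittins_index(q0, iters=18):
    lo, hi = P2, P1  # the index must lie between the two coin rates
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if interp(value_with_retirement(lam), q0) > lam / (1 - BETA) + 1e-9:
            lo = lam  # tossing beats retiring, so the index exceeds lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

idx = [gittins_index(q) for q in (0.2, 0.5, 0.8)]
print("indices at beliefs 0.2, 0.5, 0.8:", [round(v, 3) for v in idx])
```

The computed indices increase in the belief that the arm is the $p_1$ coin, which is what makes "play the coin more likely to be $p_1$" optimal in the easy problem.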
Now here comes the trick. Consider a general strategy $\sigma$ that assigns to every finite sequence of past actions and outcomes an action (red or green). Denote by $V_d(\sigma)$ and $V_e(\sigma)$ the rewards that $\sigma$ gives in the difficult and easy problems respectively. I claim that

$$V_e(\sigma) = \bar{\alpha}(1-\bar{\alpha})\,\frac{p_1 + p_2}{1-\beta} + \left(\bar{\alpha}^2 + (1-\bar{\alpha})^2\right) V_d(\sigma).$$
Why is that? In the easy problem there is a probability $\bar{\alpha}(1-\bar{\alpha})$ that both coins are $p_1$. If this happens then every $\sigma$ gives payoff $p_1/(1-\beta)$. There is a probability $\bar{\alpha}(1-\bar{\alpha})$ that both coins are $p_2$. If this happens then every $\sigma$ gives payoff $p_2/(1-\beta)$. And there is a probability $\bar{\alpha}^2 + (1-\bar{\alpha})^2$ that the coins are different, and, because of the choice of $\bar{\alpha}$, conditionally on this event the probability of red being $p_1$ is $\alpha$. Therefore, in this case $\sigma$ gives whatever $\sigma$ gives in the difficult problem.
So, the payoff in the easy problem is an increasing linear function of the payoff in the difficult problem. Therefore the optimal strategy in the difficult problem is the same as the optimal strategy in the easy problem. In particular, we just proved that, for every discount factor, the optimal action in the first period is red when $\alpha > 1/2$ and green when $\alpha < 1/2$. Now back to the dynamic programming formulation: from standard arguments it follows that the optimal strategy is to keep doing this forever, i.e., at every period to toss the coin that is more likely to be the $p_1$ coin given the current information.
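The conclusion can also be double-checked by value iteration on a belief grid for the difficult problem itself (the parameter values are my choice, for illustration):

```python
# Value iteration for the difficult problem: the state is the belief
# a = P(red coin is the p1 coin); check the policy "toss red iff a > 1/2".
P1, P2, BETA = 0.8, 0.4, 0.9
N = 200
GRID = [i / N for i in range(N + 1)]

def interp(V, a):
    x = min(max(a, 0.0), 1.0) * N
    i = min(int(x), N - 1)
    w = x - i
    return (1 - w) * V[i] + w * V[i + 1]

def q_value(V, a, red):
    b = a if red else 1 - a              # belief that the CHOSEN coin is p1
    ph = b * P1 + (1 - b) * P2           # chance the chosen coin lands heads
    b_h = b * P1 / ph                    # posterior on the chosen coin
    b_t = b * (1 - P1) / (1 - ph)
    a_h = b_h if red else 1 - b_h        # translate back to belief about red
    a_t = b_t if red else 1 - b_t
    return ph * (1 + BETA * interp(V, a_h)) + (1 - ph) * BETA * interp(V, a_t)

V = [0.0] * (N + 1)
for _ in range(200):
    V = [max(q_value(V, a, True), q_value(V, a, False)) for a in GRID]

policy_red = [q_value(V, a, True) > q_value(V, a, False) for a in GRID]
print("red optimal on", sum(policy_red), "of", len(GRID), "grid beliefs")
```

On this grid the argmax flips at $a = 1/2$, up to grid resolution, matching the claim.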
See why I said my solution is tricky and specific? It relies on the fact that there are only two arms (the fact that the arms are coins is not important). Here is a problem whose solution I don't know:
Question 2 Let $0 < p_1 < p_2 < \dots < p_n < 1$. We are given $n$ coins, one of each parameter, all possibilities equally likely. Each period we have to toss a coin and we get payoff $1$ for Heads. What is the optimal strategy?
The Shapley-Folkman lemma states that the Minkowski sum of a large number of sets is approximately convex. The clearest statement as well as the nicest proof I am familiar with is due to J. W. S. Cassels. Cassels is a distinguished number theorist who for many years taught the mathematical economics course in the Tripos. The lecture notes are available in a slender book now published by Cambridge University Press.
This central-limit-like quality of the lemma is well beyond the capacity of a hewer of wood like myself. I prefer the more prosaic version.
Let $\{S_j\}_{j=1}^n$ be a collection of sets in $\mathbb{R}^m$ with $n > m$. Denote by $S$ the Minkowski sum of the collection $\{S_j\}$. Then, every $x \in \mathrm{conv}(S)$ can be expressed as $x = \sum_{j=1}^n x^j$ where $x^j \in \mathrm{conv}(S_j)$ for all $j$ and $|\{j : x^j \notin S_j\}| \le m$.
How might this be useful? Let $A$ be an $m \times n$ 0-1 matrix and $b$ an integer vector, with $n > m$. Consider the problem

$$\max \{cx : Ax = b,\ x_j \in \{0,1\}\ \forall j\}.$$

Let $x^*$ be a solution to the linear relaxation of this problem. Then, the lemma yields the existence of a 0-1 vector $x'$ such that $cx' \ge cx^*$ and $\|Ax' - b\|_\infty < m$. One can get a bound in terms of Euclidean distance as well.
How does one do this? Denote each column $j$ of the matrix $A$ by $a_j$ and let $S_j = \{0, a_j\}$. Let $S = \sum_j S_j$. Because $Ax^* = b$ and $0 \le x^*_j \le 1$ for all $j$, it follows that $b \in \mathrm{conv}(S)$. Thus, by the Lemma,

$$b = \sum_j y^j$$

where each $y^j \in \mathrm{conv}(S_j)$ and $|\{j : y^j \notin S_j\}| \le m$. Writing $y^j = x^*_j a_j$, this says we may take $x^*$ to have at most $m$ fractional components. Now construct a 0-1 vector $x'$ from $x^*$ as follows. If $x^*_j \in \{0,1\}$, set $x'_j = x^*_j$. If $x^*_j$ is fractional, round up to 1 with probability $x^*_j$ and down to zero otherwise. Observe that $E[cx'] = cx^*$ and $E[Ax'] = Ax^* = b$. Hence, there must exist a 0-1 vector with the claimed properties.
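On a toy instance (the matrix, objective, and fractional point below are invented for illustration) one can enumerate every rounding and watch the two halves of the argument: any rounding violates each constraint by less than $m$, and at least one rounding does not lose objective value relative to the fractional point:

```python
from itertools import product

A = [[1, 1, 0, 0],
     [0, 1, 1, 1]]
c = [3, 1, 2, 2]
x_star = [0.5, 0.5, 0.5, 0.5]          # a fractional solution
m = len(A)

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

b = matvec(A, x_star)
cx_star = sum(ci * xi for ci, xi in zip(c, x_star))

good = []
for bits in product([0, 1], repeat=len(x_star)):
    x = list(bits)
    dev = max(abs(r - t) for r, t in zip(matvec(A, x), b))
    assert dev < m                      # ANY rounding is off by < m per row
    if sum(ci * xi for ci, xi in zip(c, x)) >= cx_star:
        good.append(x)

print(f"{len(good)} of 16 roundings match or beat the fractional objective")
```

Here 9 of the 16 roundings weakly beat the fractional objective; the expectation argument guarantees that at least one always does.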
The error bound of $m$ is too large for many applications. This is a consequence of the generality of the lemma; it makes no use of any structure encoded in the matrix $A$. For example, suppose $x^*$ were an extreme point and $A$ a totally unimodular matrix. Then, the number of fractional components of $x^*$ is zero. The rounding methods of Király, Lau and Singh as well as of Kumar, Marathe, Parthasarathy and Srinivasan exploit the structure of the matrix. In fact both use an idea that one can find in Cassels' paper. I'll follow the treatment in Kumar et al.
As before we start with $Ax^* = b$. For convenience suppose $0 < x^*_j < 1$ for all $j$. As $A$ has more columns than rows, there must be a non-zero vector $y$ in the kernel of $A$, i.e., $Ay = 0$. Consider $x^* + \alpha y$ and $x^* - \beta y$. For $\alpha$ and $\beta$ sufficiently small, every component of $x^* + \alpha y$ and $x^* - \beta y$ stays in $[0,1]$. Increase $\alpha$ and $\beta$ until the first time at least one component of $x^* + \alpha y$ and of $x^* - \beta y$ is in $\{0,1\}$. Next select the vector $x^* + \alpha y$ with probability $\frac{\beta}{\alpha+\beta}$ or the vector $x^* - \beta y$ with probability $\frac{\alpha}{\alpha+\beta}$. Call the vector selected $x^1$.

Now $E[x^1] = x^*$ and $Ax^1 = b$. Furthermore, $x^1$ has at least one more integer component than $x^*$. Let $J$ be the set of fractional components of $x^1$. Let $A^1$ be the matrix consisting only of the columns in $J$ and let $x^1_J$ consist only of the components of $x^1$ in $J$. Consider the system $A^1 x_J = b - \sum_{j \notin J} a_j x^1_j$. As long as $A^1$ has more columns than rows we can repeat the same argument as above. This iterative procedure gives us the same rounding result as the Lemma. However, one can do better, because it may be that even when the number of columns of the matrix is less than the number of rows, the system may be under-determined and therefore the kernel is non-trivial.
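One step of this procedure on a toy system, as a sketch (the instance and the kernel vector $y$ are hardcoded by me; in general $y$ comes from Gaussian elimination on $A$):

```python
import random

A = [[1, 1, 0],
     [0, 1, 1]]
x_star = [0.5, 0.5, 0.5]
y = [1.0, -1.0, 1.0]                   # a kernel vector: A y = 0

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

assert matvec(A, y) == [0.0, 0.0]

# largest moves along +y and -y keeping every component inside [0, 1]
alpha = min((1 - x if v > 0 else x) / abs(v)
            for x, v in zip(x_star, y) if v != 0)
beta = min((x if v > 0 else 1 - x) / abs(v)
           for x, v in zip(x_star, y) if v != 0)

# randomize so that E[x1] = x_star, preserving the objective in expectation
random.seed(1)
if random.random() < beta / (alpha + beta):
    x1 = [x + alpha * v for x, v in zip(x_star, y)]
else:
    x1 = [x - beta * v for x, v in zip(x_star, y)]

print("x1 =", x1, "| integer components:", sum(v in (0.0, 1.0) for v in x1))
```

Either outcome here happens to be fully integral, and both preserve $Ax = b$ exactly; iterating on the surviving fractional coordinates is the scheme in Kumar et al.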
In a sequel, I’ll describe an optimization version of the Lemma that was implicit in Starr’s 1969 Econometrica paper on equilibria in economies with non-convexities.
Economists, I told my class, are the most empathetic and tolerant of people. Empathetic, as they learnt from game theory, because they strive to see the world through the eyes of others. Tolerant, because they never question anyone’s preferences. If I had the talent I’d have broken into song with a version of `Why Can’t a Woman be More Like a Man’ :
Psychologists are irrational, that’s all there is to that!
Their heads are full of cotton, hay, and rags!
They’re nothing but exasperating, irritating,
vacillating, calculating, agitating,
Maddening and infuriating lags!
Why can’t a psychologist be more like an economist?
Back to earth with preference orderings. Avoided the word rational to describe the restrictions placed on preference orderings, used `consistency' instead. More neutral and conveys the idea that inconsistency makes prediction hard rather than suggesting a Wooster-like IQ. Emphasized that utility functions were simply a succinct representation of consistent preferences and had no meaning beyond that.
In a bow to tradition went over the equi-marginal principle, a holdover from the days when economics students were ignorant of multivariable calculus. Won’t do that again. Should be banished from the textbooks.
Now for some meat: the income and substitution (I&S) effect. Had been warned this was tricky. `No shirt Sherlock,’ my students might say. One has to be careful about the set up.
Suppose price vector $p$ and income $I$. Before I actually purchase anything, I contemplate what I might purchase to maximize my utility. Call that $x$.
Again, before I purchase $x$, the price of good 1 rises. Again, I contemplate what I might consume. Call it $y$. The textbook discussion of the income and substitution effect is about the difference between $x$ and $y$.
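With a functional form in hand the decomposition is mechanical. A sketch with Cobb-Douglas utility and invented numbers (the Hicksian step holds utility at its old level while facing the new prices):

```python
# Decomposition of the demand change for good 1 into substitution and
# income effects, for u(x1, x2) = x1**A_SH * x2**(1 - A_SH).
A_SH = 0.3                            # budget share of good 1
I = 100.0
p, p_new = (1.0, 1.0), (2.0, 1.0)     # the price of good 1 doubles

def marshallian(prices, income):
    return (A_SH * income / prices[0], (1 - A_SH) * income / prices[1])

def utility(x):
    return x[0] ** A_SH * x[1] ** (1 - A_SH)

def hicksian(prices, u0):
    # expenditure minimization: x1/x2 = (A/(1-A)) * (p2/p1) at the optimum
    ratio = (A_SH / (1 - A_SH)) * (prices[1] / prices[0])
    x2 = u0 / ratio ** A_SH           # from u0 = (ratio*x2)**A * x2**(1-A)
    return (ratio * x2, x2)

x = marshallian(p, I)                 # contemplated bundle at old prices
y = marshallian(p_new, I)             # contemplated bundle at new prices
h = hicksian(p_new, utility(x))       # old utility, new prices

sub_effect = h[0] - x[0]              # slide along the old indifference curve
inc_effect = y[0] - h[0]              # drop to the new budget line
print(f"x1: {x[0]} -> {y[0]}; substitution {sub_effect:.2f}, income {inc_effect:.2f}")
```

The two effects sum to the total change in the demand for good 1, and the Hicksian bundle sits on the old indifference curve by construction.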
As described, the agent has not purchased $x$ or $y$. Why this pettifoggery? Suppose I actually purchase $x$ before the price increase. If the price of good 1 goes up, I can resell it. This is both a change in price and income, something not covered by the I&S effect.
The issue is resale of good 1. Thus, an example of an I&S effect using housing should distinguish between owning vs. renting. To be safe one might want to stick to consumables. To observe the income effect, we would need a consumable that sucks up a `largish' fraction of income. A possibility is a low-income consumer who spends a large fraction on food.