Day 2 was devoted to marginal this, that and the other. I began by asking whether a monopolist (with constant unit costs) who suffers an increase in its unit costs should pass along the full increase to its buyers. To make it more piquant, I asked them to assume a literal monopolist, i.e., a sole seller. Some said maybe, because it depends on the elasticity of demand. Others said yes, what choice do buyers have? Alert ones said no, because the monopolist must already be pricing on the elastic portion of the demand curve (thank you, markup formula). The monopolist will indeed raise the price, but the increase is tempered by the high elasticity at the current profit-maximizing price. Profit will go down. This example illustrates how the demand side and the cost side interact to influence profits. On day 1 we focused on how the demand side affects price; on day 2 we focus on the cost side.
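A worked version of the answer, as a minimal sketch: assume a linear demand curve q = a - bp and constant unit cost c (my notation, not anything used in class). Then

\[
\pi(p) = (p-c)(a-bp), \qquad \pi'(p^*) = 0 \;\Rightarrow\; p^* = \tfrac{1}{2}\Big(\tfrac{a}{b} + c\Big), \qquad \frac{dp^*}{dc} = \tfrac{1}{2}.
\]

With linear demand only half of a unit-cost increase is passed through; price rises, quantity falls, and profit falls with it. Other demand curves give different pass-through rates (constant-elasticity demand gives more than one-for-one), but in every case the envelope theorem says profit falls.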

To motivate the notion of marginal cost, I ask how they would define cost per unit, to convey the idea that this is an ambiguous concept. A possible candidate is average cost, but it is not helpful for making decisions about whether to increase or decrease output. For this, what we want is marginal cost. Define marginal cost, then on to constant, decreasing and increasing returns to scale and a discussion of technologies that would satisfy each of these. Solving quadratics is a good example. The time to solve each one is the marginal cost. If you have decreasing returns to scale in solving quadratics, a wit suggested, correctly, that one should give up mathematics.
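To make the ambiguity concrete, here is a minimal sketch with an invented cost function (not one used in class):

```python
# Hypothetical cost function with a fixed cost: C(q) = 100 + 2q + 0.5q^2.
# Average cost spreads the fixed 100 over every unit; marginal cost does not.

def cost(q):
    return 100 + 2 * q + 0.5 * q ** 2

def average_cost(q):
    return cost(q) / q

def marginal_cost(q, dq=1e-6):
    # numerical derivative of cost with respect to quantity
    return (cost(q + dq) - cost(q)) / dq

q = 10
print(average_cost(q))   # 17.0 -- what a unit costs "on average"
print(marginal_cost(q))  # ~12.0 -- what one more unit actually costs
```

If the eleventh unit can be sold for 15, average cost says decline and marginal cost correctly says produce it.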

Next, where do cost functions come from? An opportunity to introduce capital, labor and the production function. The cost function is the minimum-cost way of combining K and L to produce a target quantity. Numerical example with Cobb-Douglas. Without explicitly mentioning isoquants and level curves, solved the problem graphically (draw the feasible region, move the objective function hyperplane) as well as algebraically. Discussed the impact of a change in input prices on the mix used to produce the target volume. Marginal productivity of labor and capital, and the marginal rate of technical substitution. Eyes glazing over. Why am I wasting time with this stuff? This is reading aloud. Never again.
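For the record, a sketch of the kind of numerical exercise involved; the production function and input prices below are invented, not the ones used in class:

```python
# Cost minimization for a Cobb-Douglas technology Q = K^0.5 * L^0.5 with
# wage w and rental rate r.  The tangency condition (marginal rate of
# technical substitution = w/r) gives K/L = w/r, which pins down the
# cost-minimizing mix and hence the cost function C(Q) = 2*Q*sqrt(w*r).

def cost_minimizing_mix(Q, w, r):
    L = Q * (r / w) ** 0.5
    K = Q * (w / r) ** 0.5
    return K, L, w * L + r * K

# Target output 100, wage 4, rental rate 1.
K, L, C = cost_minimizing_mix(100, 4, 1)
print(K, L, C)                         # 200.0 50.0 400.0

# Doubling the wage tilts the mix toward capital and raises total cost.
print(cost_minimizing_mix(100, 8, 1))  # roughly (282.8, 35.4, 565.7)
```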

On to marginal revenue. By this time they should have realized the word marginal means derivative. Thankfully, they don’t ask why a new word is needed to describe something that already has a label: derivative. Marginal revenue should get their goat. It’s a derivative of revenue, but with respect to what? Price or quantity? The term gives no clue. Furthermore, marginal revenue sounds like price. The result? Some students set price equal to marginal cost to maximize profit, because that is what they take the slogan `marginal revenue = marginal cost’ to mean. To compound matters, we then say the area under the marginal revenue curve is revenue. If marginal revenue is the derivative with respect to quantity, then of course integrating it returns the revenue. Does this really deserve comment? Perhaps watching paint dry would be more exciting. Wish I had the courage to dispense with the word `marginal’ altogether. Perhaps next year. Imagine the shock of my colleagues when the phrase `marginal blank’ is greeted with puzzled looks.
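For reference, the definition I want them to carry around, in symbols rather than slogans (standard notation; inverse demand p(q)):

\[
MR(q) \;=\; \frac{d}{dq}\big[p(q)\,q\big] \;=\; p(q) + q\,p'(q), \qquad \int_0^{q} MR(x)\,dx \;=\; p(q)\,q \;=\; R(q).
\]

The second term is negative whenever inverse demand slopes down, which is exactly why marginal revenue is not price, and why `price = marginal cost’ is the wrong reading of the slogan.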

They’ve been very patient. Before class ends there should be a payoff. Show that marginal revenue = marginal cost is a necessary condition for profit maximization and is sufficient when we have decreasing returns to scale. This seems like small beer. What happens when we have increasing returns to scale? Why does this break down? Some pictures of why the slogan is no longer sufficient, and a discussion of how this relates to pricing for firms with increasing returns, like the producer of an app who must rent server space and gets a quantity discount.
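A minimal numerical sketch of the breakdown; the demand curve and the server-space quantity discount below are invented for illustration:

```python
# Inverse demand p(q) = 10 - q, so MR(q) = 10 - 2q.  Marginal cost declines
# (increasing returns, e.g. a quantity discount on server space):
# MC(q) = max(12 - 3q, 0).  MR = MC holds at q = 2, yet profit there is a
# local MINIMUM -- the slogan is necessary but no longer sufficient.

def revenue(q):
    return (10 - q) * q

def cost(q):
    # integral of the declining marginal cost, flat after MC hits zero at q = 4
    return 12 * q - 1.5 * q * q if q <= 4 else 24.0

def profit(q):
    return revenue(q) - cost(q)

print(profit(2))                                   # -2.0 at the first MR = MC point
best = max((q / 10 for q in range(0, 101)), key=profit)
print(best, profit(best))                          # 5.0 1.0 -- the actual maximum
```

The moral for class: with declining marginal cost you must compare profit across the candidate quantities (including shutting down) rather than stop at the first crossing.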

One more word about organ selling before I return to my comfort zone and talk about Brownian motion in Lie groups. Selling living human organs is repugnant, in part because the sellers damage their bodies out of desperation. But what about allowing your relatives to sell what’s left of you when you’re gone? I think this should be uncontroversial. And there are side advantages too, beyond increasing the number of transplants. For example, it will encourage you to quit smoking.

Over to you, Walter.

Something funny happened when I started watching Al Roth’s lecture and looked at the paper: I realized that what I always assumed was the meaning of `repugnant transactions’ is not exactly the phenomenon that Roth talks about. What I thought `repugnant transaction’ means is a situation of `two rights make a wrong': it’s totally awesome that Xanders is willing to donate his extra kidney to Zordiac, and it’s really nice of Zordiac to donate money to Xanders, but these two noble acts done together, in exchange for each other, are immoral and should be outlawed. Roth, however, defines `repugnant transaction’ more broadly, as any transaction that some people want to engage in and others don’t think they should. Consider the opening example of his paper: laws against selling horse meat in restaurants. Here what is repugnant is not the exchange but the good itself. It’s not two rights making a wrong. It’s just wrong. We outlaw the exchange simply for constitutional reasons, or because it’s impossible to enforce a ban on eating — people would simply order takeaway and commit the crime of eating in their homes.

In my salad days, school masters would assign boys returning from the summer hols an essay: `What I did during the summer’. Yes, masters and boys. I served a portion of my youth in a `misbegotten penal colony upon a wind blasted heath’. The only females present were masters’ wives, matrons and the French mistress. No, not that kind; the kind that offers instruction in French. As you can see, to the lascivious minds of boys there was no end to the double entendres. However, I digress.

Over the summer Thanh Nguyen and I completed a paper about stable matchings. The abstract is reproduced below.

The National Resident Matching Program strives for a stable matching of medical students to teaching hospitals. In the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a `nearby’ instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Specifically, given a reported capacity k_h for each hospital h, we find a redistribution of the slot capacities k'_h satisfying |k_h-k'_h|\le 4 for all hospitals h and \sum_h k_h\le \sum_h k'_h \le \sum_h k_h+9, such that a stable matching exists with respect to k'. Our approach is general and applies to other types of complementarities, as well as matchings with side constraints and contracts.

In other words, with at most 9 additional slots, one can guarantee the existence of a stable matching. This is independent of the size of the market or the doctors’ preferences (it does assume responsive preferences on the part of hospitals). The key tool is Scarf’s lemma, which is a wonderful device for converting results about cardinal matching problems into results about ordinal matching problems. For more on this, consult the paper by Kiraly and Pap, who should be credited with a formulation of Scarf’s lemma that makes its usefulness evident.

Here is Al Roth’s talk at the Lindau Meeting on Economic Sciences about repugnant transactions, which I guess is the technical term for the discomfort I feel at the idea of people donating their extra kidney to those who need it in return for, you know, money.

Before he was a Nobel Laureate, Roth was a Nancy L. Schwartz Memorial Lecturer. His talk was about kidney exchanges — exchanges between several donor-recipient pairs involving no money, only kidneys — and he started with a survey of the audience: who is in favor of allowing the buying and selling of kidneys on the free market? (I am glad I didn’t raise my hand. The next question was about the buying and selling of living hearts.) I remember noticing a correlation between raised hands and seniority: for whatever reason, seniors were more likely to favor the free market than juniors.

At the dinner after the talk I ended up at a table of juniors and spouses, and we got to discussing our objections to the idea of letting Bob sell his kidney to Alice, so that Bob can afford to send his daughter to college and, in doing so, save Alice’s small child from orphanhood. It turned out we agreed on the policy but for different reasons. I don’t remember which reason was mine. I still find both of them convincing, though less so simultaneously.

Reason I: The market price would be too low. Hungry people would compete to sell their organs for a bowl of red pottage out of desperation. The slippery slope leads to poor people being harvested for their body parts.

Reason II: The market price would be too high. Only the 0.01% would be able to afford it. The slippery slope leads to a small aristocracy who live forever by regenerating their bodies.

As I said, both (somewhat) convincing. And please don’t ask me what would be the fair price, that is neither too low nor too high.


200 students for a 9 am class in spite of a midterm on day 3; perhaps they’ve not read the syllabus.

Began with the ultimatum game, framed in terms of a seller making a take-it-or-leave-it offer to the buyer. The game allows one to make two points at the very beginning of class.

1) The price the seller chooses depends on their model of how the buyer will behave. One can draw this point out by asking sellers to explain how they came by their offers. The best offers to discuss are the really low ones (i.e., those that give most of the surplus to the buyer) and the offers that split the difference.

2) Under the assumption that `more money is better than less’, point out that the seller captures most of the gains from trade. Why? The ability to make a credible take-it-or-leave-it offer (see the note below).
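The note, in symbols (buyer value v, seller cost normalized to zero; my notation for the classroom story, nothing more):

\[
\text{seller offers } p = v - \varepsilon, \qquad \text{buyer accepts since } v - p = \varepsilon > 0, \qquad \text{seller's share} \to v.
\]

Anything less aggressive leaves money on the table, which is why the credibility of the take-it-or-leave-it offer does all the work.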

This makes for a smooth transition into the model of quasi-linear preferences. Some toy examples of how buyers make choices based on surplus. Emphasize that it captures the idea that buyers make trade-offs (pay more if you get more; if it’s priced low enough, it’s good enough). Someone will ask about budget constraints. A good question; ignore budgets for now and come back to them later in the semester.
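One such toy example, as a sketch (the reservation values and prices are invented):

```python
# Quasi-linear buyer: surplus = reservation value - price.  Buy the option
# with the largest nonnegative surplus; walk away if every surplus is negative.

options = {                      # hypothetical values and prices
    "basic":   {"value": 30, "price": 25},
    "premium": {"value": 50, "price": 42},
}

def best_choice(options):
    surpluses = {name: o["value"] - o["price"] for name, o in options.items()}
    name, surplus = max(surpluses.items(), key=lambda kv: kv[1])
    return (name, surplus) if surplus >= 0 else ("no purchase", 0)

print(best_choice(options))      # ('premium', 8): pay more because you get more
```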

Next, point out that buyers do not share the same reservation price (RP) for a good or service. Introduce the demand curve as a vehicle for summarizing the variation in RPs. Emphasize that the demand curve tells you demand as you change your price, holding other prices fixed.

On to monopoly with constant unit costs, limited to a uniform price. Emphasize that monopoly in our context does not mean the absence of competition, only that competitors’ prices stay fixed as we change ours. The reason for such an assumption is to understand first how buyers respond to one seller’s price changes.

How does the monopolist choose the profit-maximizing price? Trade-off between margin and volume. Simple monopoly pricing exercise. The answer by itself is uninteresting. We want to know what the profit-maximizing price depends upon.

Introduce elasticity of demand, its meaning and derivation. Then, a table of how profit and elasticity vary with price in the toy example introduced earlier. Point out how elasticity rises as price rises: demand starts to drop off faster than the margin rises. Explain why we don’t stop where elasticity is 1. A useful place to point out that there a small price increase is revenue neutral but total costs fall. So, the uniform price is doing two things: determining how much is captured from buyers and controlling total production costs. The table also illustrates that the elasticity of demand matters for choosing price.
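A reconstruction of that kind of table, with an invented linear demand D(p) = 100 - p and unit cost 20 (my numbers, not the ones used in class):

```python
# For each candidate price: quantity, profit, and the price elasticity of
# demand |D'(p)| * p / D(p).  Elasticity is 1 at the revenue-maximizing
# price (50), but profit keeps rising until p = 60, where elasticity is 1.5
# -- exactly what the markup formula (p - c)/p = 1/elasticity requires.

c = 20                              # constant unit cost

def demand(p):
    return 100 - p

print("price  quantity  profit  elasticity")
for p in [40, 50, 60, 70, 80]:
    q = demand(p)
    profit = (p - c) * q
    elasticity = p / q              # |D'(p)| = 1 for this demand curve
    print(f"{p:5}  {q:8}  {profit:6}  {elasticity:10.2f}")
```

At p = 50 a small price increase is roughly revenue neutral but output falls, so total cost falls and profit rises; that is why the profit maximizer does not stop where elasticity is 1.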

Segue into the markup formula. Explain why we should expect some kind of inverse relationship between markup and elasticity. Then derive the markup formula with constant unit costs.
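The derivation, for reference, with constant unit cost c, demand D(p) and elasticity \epsilon = -p D'(p)/D(p):

\[
\max_p\,(p-c)\,D(p) \;\Rightarrow\; D(p) + (p-c)\,D'(p) = 0 \;\Rightarrow\; \frac{p-c}{p} = -\frac{D(p)}{p\,D'(p)} = \frac{1}{\epsilon}.
\]

A higher elasticity at the optimum forces a smaller markup, which is the inverse relationship promised.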

Now to something interesting, to make the point that what has come before is very useful: author vs. publisher, who would prefer a higher price for the book? You’ll get all possible answers, which is perfect. Start with how revenue is different from profit (authors get a percentage of revenue). This difference means their interests are not aligned. So they should pick different prices. But which will be larger? Enter the markup formula. The author wants the price where elasticity is 1. The publisher wants a price where elasticity is bigger than 1. So the publisher wants the higher price. Wait, what about e-books? Then author and publisher want the same price, because unit costs are zero.
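In symbols, with a royalty rate \alpha and the same notation as the markup derivation above:

\[
\text{Author: } \max_p\,\alpha\,p\,D(p) \;\Rightarrow\; \epsilon(p_A) = 1; \qquad
\text{Publisher: } \max_p\,(p-c)\,D(p) \;\Rightarrow\; \epsilon(p_P) = \frac{p_P}{p_P - c} > 1.
\]

Since elasticity rises with price along the demand curve (as in the table above), the publisher’s price is the higher one; set c = 0 and the two problems coincide.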

This is the perfect opportunity to introduce the Amazon letter to authors telling them that the elasticity of demand for e-books at the current $14.99 price is about 2.4. Well above 1. Clearly, all parties should agree to lower the price of e-books. But what about traditional books? Surely a lower e-book price will cause some readers to switch from the traditional book to the e-book. Shouldn’t we count the loss in profit from that as well? Capital point, but make life simple: suppose we have only e-books. Notice that under the agency model, where Amazon gets a percentage of revenue, everyone’s incentives appear to be aligned.
Is Amazon correct in its argument that dropping the e-book price will benefit me, the author? As expressed in their letter, no. To say that the elasticity of demand for my book at the current price is 2.4 means that if I drop my price by 1%, demand for my book will rise by 2.4% HOLDING OTHER PRICES FIXED. However, Amazon is not talking about dropping the price of my book alone. They are urging a drop in the price of ALL books. It may well be that a drop in the price of all e-books will increase total revenue for the e-book category. This is good for Amazon. However, it is not at all clear that it is good for me. The rustling of papers and creaking of seats is a sign that time is up.
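The back-of-the-envelope arithmetic, for what it is worth; the 2.4 and the $14.99 are from the letter as quoted above, while the $9.99 comparison price and the linear extrapolation are my own simplifications:

```python
# Treat 2.4 as an arc elasticity and suppose only MY price falls.
old_p, new_p, elasticity = 14.99, 9.99, 2.4

pct_price_cut = (old_p - new_p) / old_p        # about a 33% cut
pct_more_copies = elasticity * pct_price_cut   # about 80% more copies
revenue_ratio = (new_p / old_p) * (1 + pct_more_copies)
print(round(revenue_ratio, 2))                 # ~1.20: a 20% revenue gain
```

That 20% gain is the unilateral-cut prediction. When every e-book’s price falls at once, the substitution away from rival titles that drives much of the 2.4 is no longer available, so my own gain could shrink or vanish, which is the point of the paragraph above.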

In the last few posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter’ and `learning to make predictions about the future as if you knew the parameter’. The former seems to imply the latter almost by definition, but this is not so.

Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin, then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some distribution {\mu} over {[0,1]}, then your prediction about the outcome tomorrow is the expectation of {\mu}. But in a more general stationary environment, your prediction about the outcome tomorrow depends on your current belief about the parameter and also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix, then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
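A minimal sketch of the contrast, using conjugate (Beta) updating; the priors and the toy histories are invented:

```python
# i.i.d. coin: with a Beta(a, b) prior on the parameter, the prediction for
# tomorrow is the posterior mean -- the history matters only through counts.
def iid_prediction(history, a=1.0, b=1.0):
    heads = sum(history)
    return (a + heads) / (a + b + len(history))

# Markov chain with unknown transition probabilities (a Beta prior on each
# row): the prediction for tomorrow uses the posterior AND today's outcome,
# so two histories with identical counts can yield different predictions.
def markov_prediction(history, a=1.0, b=1.0):
    today = history[-1]
    pairs = list(zip(history, history[1:]))
    n_from = sum(1 for x, _ in pairs if x == today)
    n_to_1 = sum(1 for x, y in pairs if x == today and y == 1)
    return (a + n_to_1) / (a + b + n_from)    # P(next = 1 | today)

h1 = [1, 1, 0, 0, 1]
h2 = [0, 0, 1, 1, 1]          # same number of 1s, different transition pattern
print(iid_prediction(h1), iid_prediction(h2))        # identical: 4/7, 4/7
print(markov_prediction(h1), markov_prediction(h2))  # different: 0.5, 0.75
```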

The formulation of the idea of `learning to make predictions’ goes through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins’ theorem to prove that rational players will end up playing an approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird’ things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn’t follow my previous posts, I hope the following example will be intriguing in its own right.

About a year ago, I chanced to remark upon the state of Intermediate Micro within the hearing of my colleagues. It was remarkable, I said, that the nature of the course had not changed in half a century. What is more, the order in which topics were presented was mistaken and the exercises on a par with Vogon poetry, which I reproduce below for comparison:

“Oh freddled gruntbuggly,
Thy micturations are to me
As plurdled gabbleblotchits on a lurgid bee.
Groop, I implore thee, my foonting turlingdromes,
And hooptiously drangle me with crinkly bindlewurdles,
Or I will rend thee in the gobberwarts
With my blurglecruncheon, see if I don’t!”

The mistake was not to think these things, or even say them. It was to utter them within earshot of one’s colleagues. For this carelessness, my chair very kindly gave me the chance to put the world to rights. Thus trapped, I obliged. I begin next week. By the way, according to Alvin Roth, when an ancient like myself chooses to teach intermediate micro-economics it is a sure sign of senility.

What do I intend to do differently? First, reorder the sequence of topics. Begin with monopoly, followed by imperfect competition, consumer theory, perfect competition, externalities, and close with Coase.

Why monopoly first? Two reasons. First, it involves single-variable calculus rather than multivariable calculus and the Lagrangean. Second, students enter the class thinking that firms `do things’, like set prices. The traditional sequence begins with a world where no one does anything. Undergraduates are not yet like the White Queen, willing to believe six impossible things before breakfast.

But doesn’t one need preferences to do monopoly? Yes, but quasi-linear preferences will suffice. Easy to communicate and easy to accept, up to a point. Someone will ask about budget constraints, and one may remark that this is an excellent question whose answer will be discussed later in the course, when we come to consumer theory. In this way consumer theory is set up as an answer to a challenge the students themselves have identified.

What about producer theory? Covered under monopoly, avoiding needless duplication.

Orwell’s review of Penguin Books is in the news today courtesy of Amazon vs. Hachette. You can read about that here. I wish, however, to draw your attention to an argument Orwell makes in his review:

It is, of course, a great mistake to imagine that cheap books are good for the book trade. Actually it is just the other way around. If you have, for instance, five shillings to spend and the normal price of a book is half-a-crown, you are quite likely to spend your whole five shillings on two books. But if books are sixpence each you are not going to buy ten of them, because you don’t want as many as ten; your saturation-point will have been reached long before that. Probably you will buy three sixpenny books and spend the rest of your five shillings on seats at the ‘movies’. Hence the cheaper the books become, the less money is spent on books.

Milton Friedman, in his textbook Price Theory, asks readers, as an exercise, to analyze the passage. He does not explicitly say what he is looking for, but I would guess this: what must be true of preferences for such a statement to hold? It’s a delightful question. A budget line is given, and a point that maximizes utility on the budget line is identified. Now the price of one of the goods falls, and another utility-maximizing point is identified. What kind of utility function would exhibit such behavior?
By the way, there are 12 pence to a shilling, and half a crown is two shillings and sixpence.
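A quick arithmetic pass at Orwell’s numbers, in pence (my reading of the passage, nothing from Friedman’s solution):

```python
# Budget: five shillings = 60d.  Books at half a crown (30d): buy 2, spend 60d.
# Books at sixpence (6d): buy 3, spend 18d, and the rest goes to the movies.
old_price, old_qty = 30, 2
new_price, new_qty = 6, 3

print(old_price * old_qty, new_price * new_qty)   # spending on books: 60d vs 18d
print(old_price / new_price, new_qty / old_qty)   # price falls 5x, quantity rises only 1.5x
```

Spending on books falls when their price falls, i.e., Orwell’s reader has a price-inelastic demand for books over this range; the exercise, presumably, is to say what preferences over books and `movies’ would generate exactly that.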

The news of Stanley Reiter’s passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age) and survived the Battle of the Bulge. On the wings of the GI Bill he went through City College, which in those days was the gate through which many outsiders passed on their way to the intellectual aristocracy.

But in the importance and noise of to-morrow
When the brokers are roaring like beasts on the floor of the Bourse

Perhaps a minute to recall what Stan left behind.

Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best-known example is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.

Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.

In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete optimizing”, SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce tours at least as good as those reported in earlier papers. The ideas were extended with Don Rice to a local-search heuristic for non-concave mixed integer programs, along with a computational study of its performance.

Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then on to Northwestern to build two groups, one in the Economics department and another (in collaboration with Mort Kamien) in the business school.
