I spent these two classes going over two-part tariffs. Were this just the algebra, it would be overkill. The novelty, if any, was to tie the whole business to how one should price in a razor & blade business (engines and spare parts, Kindle and ebooks, etc.). The basic 2-part model sets a high fixed fee (which one can associate with the durable) and sells each unit of the consumable at marginal cost. The analysis offers an opportunity to remind them of the problem of regulating the monopolist charging a uniform price.
The conclusion of the basic 2-part model suggests charging a high price for razors and a low price for blades. This seems to run counter to the prevailing wisdom. It's an opportunity to solicit reasons for why the conclusion of the model might be wrong-headed. We ran through a litany of possibilities: heterogeneous preferences (an opportunity to do a heavy vs light user calculation), hold up (one student observed that we can trust Amazon to keep the price of ebooks low, otherwise we would switch to pirated versions!), liquidity constraints, competition. Tied this to Gillette's history expounded in a paper by Randal Picker (see an earlier post) and then on to Amazon's pricing of the Kindle and ebooks (see this post). This allowed for a discussion of the wholesale model vs agency model of pricing, which the students had been asked to work out in the homework (a nice application of basic monopoly pricing exercises!).
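For anyone who wants the heavy vs light user calculation worked out, here is a small numerical sketch (my own illustrative demand curves and numbers, not the ones used in class). Two consumer types face a single two-part tariff; a grid search over the per-unit price shows that with identical consumers the textbook answer reappears (price at marginal cost, fee equal to consumer surplus), while with heterogeneous consumers the fee is capped by the light user's surplus and the per-unit price optimally rises above marginal cost.

```python
import numpy as np

# Two consumer types with linear demand q_i(p) = max(a_i - p, 0) and unit cost c.
# A single two-part tariff (F, p): pay the fee F to participate, then p per unit.
# Illustrative numbers only.
a_heavy, a_light, c = 12.0, 10.0, 2.0

def surplus(a, p):
    """Consumer surplus of a type with demand a - p when the per-unit price is p."""
    return 0.5 * max(a - p, 0.0) ** 2

def profit_serving_both(p):
    fee = surplus(a_light, p)            # any higher fee and the light user walks away
    q = max(a_heavy - p, 0.0) + max(a_light - p, 0.0)
    return 2 * fee + (p - c) * q

def profit_serving_heavy_only(p):
    fee = surplus(a_heavy, p)
    return fee + (p - c) * max(a_heavy - p, 0.0)

prices = np.linspace(c, a_heavy, 2001)
both = max((profit_serving_both(p), p) for p in prices)
heavy_only = max((profit_serving_heavy_only(p), p) for p in prices)

print("serve both types: profit %.1f at per-unit price %.2f" % both)
print("serve heavy only: profit %.1f at per-unit price %.2f" % heavy_only)
# With these numbers the seller serves both types, sets the per-unit price above
# marginal cost (3 rather than 2) and charges a fee equal to the light user's
# surplus. Setting a_light = a_heavy recovers the basic model: price at marginal
# cost, with the fee extracting the entire surplus.
```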
The `take-away’ I tried to emphasize was how models help us formulate questions (rather than simply provide prescriptions), which in turn gives us greater insight into what might be going on.
This post describes the main theorem in my new paper with Nabil. Scroll down for open questions following this theorem. The theorem asserts that a Bayesian agent in a stationary environment will learn to make predictions as if he knew the data generating process, so that, as time goes by, structural uncertainty dissipates. The standard example is when the sequence of outcomes is i.i.d. with an unknown parameter. As time goes by, the agent learns the parameter.
The formulation of `learning to make predictions' goes through merging, which traces back to Blackwell and Dubins. I will not give Blackwell and Dubins' definition in this post, but rather a weaker definition, suggested by Kalai and Lehrer.
A Bayesian agent observes an infinite sequence of outcomes from a finite set $A$. Let $\mu\in\Delta(A^{\mathbb{N}})$ represent the agent's belief about the future outcomes. Suppose that before observing every day's outcome the agent makes a probabilistic prediction about it. I denote by $\mu(\cdot\mid a_0,\dots,a_{n-1})$ the element in $\Delta(A)$ which represents the agent's prediction about the outcome of day $n$ just after he observed the outcomes $a_0,\dots,a_{n-1}$ of previous days. In the following definition it is instructive to think about $\tilde\mu$ as the true data generating process, i.e., the process that generates the sequence of outcomes, which may be different from the agent's belief $\mu$.
Definition 1 (Kalai and Lehrer) Let $\tilde\mu,\mu\in\Delta(A^{\mathbb{N}})$. Then $\mu$ merges with $\tilde\mu$ if for $\tilde\mu$-almost every realization $a_0,a_1,\dots$ it holds that

$$\lim_{n\rightarrow\infty}\left\|\mu\bigl(\cdot\mid a_0,\dots,a_{n-1}\bigr)-\tilde\mu\bigl(\cdot\mid a_0,\dots,a_{n-1}\bigr)\right\|=0.$$
Assume now that the agent's belief $\mu$ is stationary, and let $\mu=\int\mu_\theta\,\lambda(\mathrm{d}\theta)$ be its ergodic decomposition. Recall that in this decomposition $\theta$ ranges over ergodic beliefs $\mu_\theta$ and $\lambda$ represents structural uncertainty. Does the agent learn to make predictions? Using the definition of merging we can ask: does $\mu$ merge with $\mu_\theta$? The answer, perhaps surprisingly, is no. I gave an example in my previous post.
Let me now move to a weaker definition of merging, which was first suggested by Lehrer and Smorodinsky. This definition requires the agent to make correct predictions only in almost every period.
Definition 2 Let $\tilde\mu,\mu\in\Delta(A^{\mathbb{N}})$. Then $\mu$ weakly merges with $\tilde\mu$ if for $\tilde\mu$-almost every realization $a_0,a_1,\dots$ it holds that

$$\lim_{n\rightarrow\infty,\ n\in T}\left\|\mu\bigl(\cdot\mid a_0,\dots,a_{n-1}\bigr)-\tilde\mu\bigl(\cdot\mid a_0,\dots,a_{n-1}\bigr)\right\|=0$$

for a set $T$ of periods of density $1$.
The definition of weak merging is natural: patient agents whose beliefs weakly merge with the true data generating process will make almost optimal decisions. Kalai, Lehrer and Smorodinsky discuss these notions of merging and also their relationship with Dawid's idea of calibration.
I am now in a position to state the theorem I have been talking about for two months:
Theorem 3 Let $\mu\in\Delta(A^{\mathbb{N}})$ be stationary, and let $\mu=\int\mu_\theta\,\lambda(\mathrm{d}\theta)$ be its ergodic decomposition. Then $\mu$ weakly merges with $\mu_\theta$ for $\lambda$-almost every $\theta$.
In words: An agent who has some structural uncertainty about the data generating process will learn to make predictions in most periods as if he knew the data generating process.
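As a sanity check on the intuition, here is a toy simulation of the i.i.d. example from the top of the post (my own throwaway code, not anything from the paper): outcomes are i.i.d. coin flips with an unknown bias, the agent has a uniform prior over the bias, and the one-step-ahead predictions converge to the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

true_p = 0.7                        # the ergodic component actually generating the data
T = 5000
outcomes = rng.random(T) < true_p   # i.i.d. Bernoulli(true_p) outcomes

# Uniform (Beta(1,1)) prior over the bias; the one-step-ahead prediction after
# observing n outcomes with k successes is the posterior mean (k + 1) / (n + 2).
heads = np.cumsum(outcomes)
n = np.arange(1, T + 1)
prediction = np.concatenate(([0.5], (heads + 1) / (n + 2)))[:T]

error = np.abs(prediction - true_p)
for t in [1, 10, 100, 1000, T - 1]:
    print(f"period {t:>4}: prediction {prediction[t]:.3f}, error {error[t]:.3f}")
# The prediction error goes to 0 almost surely: the agent ends up predicting
# as if it knew the data generating process.
```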
Finally, here are the promised open questions. They deal with the two qualifications in the theorem. The first question is about the “$\lambda$-almost every $\theta$” in the theorem. As Larry Wasserman mentioned, this is unsatisfactory in some senses. So,

Question 1 Does there exist a stationary $\mu$ (equivalently, a belief $\lambda$ over ergodic beliefs) such that $\mu$ weakly merges with $\tilde\mu$ for every ergodic distribution $\tilde\mu$?
The second question is about strengthening weak merging to merging. We already know that this cannot be done for an arbitrary belief over ergodic processes, but what if $\lambda$ is concentrated on some natural family of processes, for example hidden Markov processes with a bounded number of hidden states? Here is the simplest setup for which I don't know the answer.
Question 2 The outcome of the stock market on every day is either U or D (up or down). An agent believes that this outcome is a stochastic function of an unobserved (hidden) state of the economy, which can be either G or B (good or bad): when the hidden state is B the outcome is U with probability $p_{\mathrm B}$ (and D with probability $1-p_{\mathrm B}$), and when the state is G the outcome is U with probability $p_{\mathrm G}$. The hidden state changes according to a Markov process with transition probabilities $\tau(\mathrm G\mid\mathrm B)$ and $\tau(\mathrm B\mid\mathrm G)$. The parameter is $\theta=\bigl(p_{\mathrm B},p_{\mathrm G},\tau(\mathrm G\mid\mathrm B),\tau(\mathrm B\mid\mathrm G)\bigr)$ and the agent has some prior $\lambda$ over the parameter. Does the agent's belief about outcomes merge with the truth for $\lambda$-almost every $\theta$?
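Question 2 is open, but it is easy to play with numerically. Here is a rough sketch (entirely my own code; the uniform prior below is an arbitrary stand-in for $\lambda$): approximate the agent's one-step-ahead prediction by drawing parameters from the prior, running a forward filter for each draw, and weighting the draws by the likelihood of the data observed so far; then compare with the forward filter run at the true parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(outcomes, p_G, p_B, tau_G_to_B, tau_B_to_G):
    """Two-state hidden Markov chain with states G, B. p_G, p_B are the
    probabilities of outcome U in each state; tau_G_to_B = P(next B | now G),
    tau_B_to_G = P(next G | now B). The chain starts from its stationary
    distribution. Returns, for each period t, the one-step-ahead P(U) and the
    log-likelihood of the first t outcomes."""
    pi_G = tau_B_to_G / (tau_G_to_B + tau_B_to_G)
    state = np.array([pi_G, 1.0 - pi_G])          # predictive P(G), P(B)
    emit_U = np.array([p_G, p_B])
    preds, cum_ll, ll = np.zeros(len(outcomes)), np.zeros(len(outcomes)), 0.0
    for t, y in enumerate(outcomes):              # y = 1 for U, 0 for D
        cum_ll[t] = ll                            # log-likelihood of the past
        preds[t] = state @ emit_U                 # prediction made before seeing y
        like = (emit_U if y == 1 else 1.0 - emit_U) * state
        ll += np.log(like.sum())
        state = like / like.sum()                 # filter, then propagate one step
        state = np.array([state[0] * (1 - tau_G_to_B) + state[1] * tau_B_to_G,
                          state[0] * tau_G_to_B + state[1] * (1 - tau_B_to_G)])
    return preds, cum_ll

# An arbitrary "true" parameter and a simulated path of outcomes.
theta_star = dict(p_G=0.8, p_B=0.3, tau_G_to_B=0.1, tau_B_to_G=0.2)
T = 400
in_G = True
outcomes = np.zeros(T, dtype=int)
for t in range(T):
    outcomes[t] = int(rng.random() < (theta_star["p_G"] if in_G else theta_star["p_B"]))
    flip = theta_star["tau_G_to_B"] if in_G else theta_star["tau_B_to_G"]
    in_G = (not in_G) if rng.random() < flip else in_G

# The Bayesian agent: a uniform prior over the four parameters, by Monte Carlo.
M = 2000
draws = rng.uniform(0.05, 0.95, size=(M, 4))
all_preds, all_ll = np.zeros((M, T)), np.zeros((M, T))
for i, th in enumerate(draws):
    all_preds[i], all_ll[i] = forward(outcomes, *th)
true_preds, _ = forward(outcomes, **theta_star)

w = np.exp(all_ll - all_ll.max(axis=0))           # posterior weights, period by period
agent_preds = (w * all_preds).sum(axis=0) / w.sum(axis=0)

gap = np.abs(agent_preds - true_preds)
print("mean |agent prediction - truth| over the first 50 periods: %.3f" % gap[:50].mean())
print("mean |agent prediction - truth| over the last 50 periods:  %.3f" % gap[-50:].mean())
```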
On day 6, went through the standard two-period durable goods problem, carefully working out the demand curve in each period. Did this to emphasize later how this problem is just like the problem of a multi-product monopolist with substitutes. Then, on to a discussion of JC Penney. In retrospect, not the best of examples. Doubt they shop at JC Penney, or follow the business section of the paper. One student gave a good summary of events as background for the rest of class. Textbooks would have been better.
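For anyone who wants the algebra behind those demand curves, here is one standard specification worked out numerically (my own parametrization, not necessarily the one used in class): a continuum of buyers with valuations uniform on [0, 1], zero cost, a common discount factor, and no commitment. The seller effectively chooses the period-1 cutoff valuation; the period-2 price is the monopoly price against the residual demand, and the period-1 price makes the cutoff buyer indifferent between buying now and waiting.

```python
import numpy as np

# Buyers with valuations v ~ Uniform[0, 1], mass 1, zero production cost,
# two periods, common discount factor delta, and a seller who cannot commit.
delta = 0.8

def outcome(v1):
    """Prices and profit when buyers with v >= v1 buy in period 1."""
    p2 = v1 / 2.0                      # monopoly price against residual demand U[0, v1]
    p1 = v1 * (1.0 - delta / 2.0)      # cutoff buyer indifferent: v1 - p1 = delta * (v1 - p2)
    profit = p1 * (1.0 - v1) + delta * p2 * (v1 - p2)
    return p1, p2, profit

v1 = max(np.linspace(0.0, 1.0, 100001), key=lambda v: outcome(v)[2])
p1, p2, profit = outcome(v1)
print(f"no commitment: p1 = {p1:.3f}, p2 = {p2:.3f}, profit = {profit:.3f}")
# Benchmark: with commitment the seller holds the static monopoly price 1/2 in
# both periods and earns 1/4. Anticipated price cutting forces the period-1
# price below 1/2 and lowers profit; that is the commitment problem.
print("commitment benchmark: price = 0.500, profit = 0.250")
```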
Subsequently, the multi-product monopolist; substitutes and complements. Emphasized that this meant each product could not be priced in isolation from the other. Now the puzzle. Why would a seller introduce a substitute for its own product? Recalling the discussion of the durable goods monopolist, this seems like lunacy. A bright spark suggested that the substitute product might appeal to a segment that one is not currently selling to. Yes, but wouldn't that cannibalize sales from the existing product? Time for a model! Before getting to the model, formally introduced price discrimination.
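A quick numerical illustration of the `cannot be priced in isolation' point, with linear demands of my own choosing: when the two products are substitutes, the jointly optimal prices exceed the prices that come out of pricing each product while ignoring its effect on the other.

```python
import numpy as np

# Two substitute products with linear demands (zero cost, illustrative numbers):
#   q1 = a - p1 + g*p2,   q2 = a - p2 + g*p1,   0 < g < 1.
a, g = 10.0, 0.5

def profit(p1, p2):
    return p1 * (a - p1 + g * p2) + p2 * (a - p2 + g * p1)

# Joint (multi-product) optimum by brute force over a price grid.
grid = np.linspace(0, 2 * a, 401)
P1, P2 = np.meshgrid(grid, grid)
i = np.unravel_index(np.argmax(profit(P1, P2)), P1.shape)
p_joint = (P1[i], P2[i])

# "Price each product in isolation": each price maximizes its own revenue
# taking the other as given; iterate to the fixed point.
p1 = p2 = a / 2
for _ in range(200):
    p1 = (a + g * p2) / 2
    p2 = (a + g * p1) / 2

print("joint pricing:    p1 = %.2f, p2 = %.2f, profit = %.1f" % (p_joint[0], p_joint[1], profit(*p_joint)))
print("isolated pricing: p1 = %.2f, p2 = %.2f, profit = %.1f" % (p1, p2, profit(p1, p2)))
# The joint optimum internalizes cannibalization between the substitutes, so
# both prices (and total profit) are higher than under product-by-product pricing.
```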
Day 7, talked briefly about the homework and the role of mathematics in economic analysis. Recalled the question of regulating the monopolist. Lowering price benefits consumers but harms the seller. Do the benefits to consumers exceed the harm done to the seller? Blah, blah cannot settle the issue. Need a model and have to analyze it to come to a conclusion. While we represent the world (or at least a part of it) mathematically, it does not follow that every mathematical object corresponds to something in reality. Made this point by directing them to the homework question with a demand curve having a constant elasticity of 1. The profit-maximizing price is infinity, which is clearly silly. Differentiating and setting to zero is not a substitute for thinking.
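The homework point in one line: if demand has constant elasticity one, say $q(p)=A/p$ with unit cost $c$, then

$$\pi(p)=p\,q(p)-c\,q(p)=A-\frac{cA}{p},$$

which is strictly increasing in $p$, so the first-order condition has no solution and the `optimal' price runs off to infinity.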
Went on to focus on versioning and bundling. Versioning provides a natural setting to talk about cannibalization and catering to a new segment. Went through a model to show how the competing forces play out.
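To see the competing forces in numbers, here is a toy versioning example (my own valuations, not the model from class): a full version and a `damaged' version, a high type with little use for the damaged version, and a low type who finds it almost as good as the full one. Enumerating menus shows when the cheap version profitably brings in the new segment and when cannibalization eats the gain.

```python
import itertools

def revenue(menu, consumers):
    """menu: dict version -> price. Each consumer buys the surplus-maximizing
    option (ties broken in the seller's favor), or nothing if every surplus < 0."""
    total = 0.0
    for values in consumers:
        best = max(menu, key=lambda v: (values[v] - menu[v], menu[v]))
        if values[best] - menu[best] >= 0:
            total += menu[best]
    return total

def best_menus(v_high, v_low):
    consumers = [v_high, v_low]
    prices = [x / 2 for x in range(0, 25)]          # 0, 0.5, ..., 12
    full_only = max((revenue({"full": p}, consumers), p) for p in prices)
    both = max((revenue({"full": pf, "damaged": pd}, consumers), pf, pd)
               for pf, pd in itertools.product(prices, prices))
    print("  full version only : revenue %.1f at price %.1f" % full_only)
    print("  with damaged copy : revenue %.1f at prices (full %.1f, damaged %.1f)" % both)

print("damaged version unattractive to the high type (little cannibalization):")
best_menus({"full": 10, "damaged": 3}, {"full": 6, "damaged": 5})

print("damaged version attractive to the high type (cannibalization bites):")
best_menus({"full": 10, "damaged": 8}, {"full": 6, "damaged": 5})
# In the first case versioning lifts revenue from 12 to 15; in the second the
# high type's incentive constraint forces the full-version price down and the
# gain from the new segment disappears.
```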
Then on to bundling. Discussion of reasons to bundle that do not involve price discrimination. Then a model and its analysis. Motivated it by asking whether they would prefer to have à la carte programming from cable providers. In the model, unbundling results in higher prices, which surprised them and was a good note to end on.
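The surprise is easy to reproduce with two channels and two viewers whose tastes are negatively correlated (my own toy numbers, not the model we used): sold separately, each channel commands a high stand-alone price and each viewer buys only one channel; the bundle sells everything to everyone at a lower effective per-channel price.

```python
viewers = [{"sports": 8, "news": 2}, {"sports": 2, "news": 8}]   # valuations, zero cost
prices = range(0, 21)

def best_price(values):
    """Revenue-maximizing single price, given each viewer's value for the item."""
    revenue = lambda p: p * sum(v >= p for v in values)
    p = max(prices, key=revenue)
    return p, revenue(p)

# A la carte: each channel priced on its own.
alacarte = 0
for ch in ("sports", "news"):
    p, r = best_price([v[ch] for v in viewers])
    alacarte += r
    print(f"a la carte, {ch}: price {p}, revenue {r}")

# Pure bundle: one price for the pair of channels.
p, r = best_price([v["sports"] + v["news"] for v in viewers])
print(f"bundle: price {p} ({p / 2:.0f} per channel), revenue {r}")
print(f"total revenue: {alacarte} a la carte vs {r} bundled")
# Unbundling pushes the per-channel price from 5 to 8, and each viewer ends up
# with only one channel: higher prices, fewer channels watched, less revenue.
```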
On day 5, unhappy with the way I had covered regulation of the monopolist earlier, went over it again. To put some flesh on the bone, I asked at the conclusion of the analysis if they would favor regulating the price of a drug on which the seller had a patent. Some discomfort with the idea. A number suggested the need to provide incentives to invest in R&D. In response I asked why not compensate them for their R&D? Ask for the R&D costs and pay them that, plus something extra if we want to cover the opportunity cost. Some discussion of how one would monitor and verify these costs. At which point someone piped up that if R&D costs were difficult to monitor, why not have the Government just do the R&D? Now we really are on the road to socialized medicine. Some appeals to the efficiency of competitive markets, which I put on hold with the promise that we would return to this issue later in the semester.
Thus far class had been limited to a uniform-price monopolist. Pivoted to discussing a multi-product monopolist by way of a small example of a durable goods monopolist selling over two periods. Had the class act out the role of buyers, with me as the seller cutting price over time. It provided an opportunity to discuss the role of commitment and tie it back to the ultimatum game played on Day 1. On day 6 will revisit this with a discussion of JC Penney, which will allow one to get to the next item on the agenda: price discrimination.
Day 3 was a `midterm' testing them on calculus prerequisites. Day 4 began with double marginalization. Analyzed the case when the upstream firm dictates the wholesale price to the downstream firm. Subsequently, asked the class to consider the possibility that the downstream firm dictates the price to the upstream firm. In this case `double marginalization' disappears. Connected this back to the power of take-it-or-leave-it offers discussed on day 1 and related this to Amazon vs Hachette. Concluded this portion with a discussion of two-part tariffs as an alternative to merger to `solve' double marginalization.
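For reference, the whole chain with linear demand and made-up numbers (inverse demand $p = 12 - q$, upstream marginal cost 4):

```python
# Double marginalization with linear demand (illustrative numbers):
# inverse demand p = a - q, upstream marginal cost c, downstream resells at p.
a, c = 12.0, 4.0

# Integrated firm (or two-part tariff with wholesale price = c): max (a - q - c) * q.
q_int = (a - c) / 2
p_int = a - q_int

# Double marginalization: upstream picks wholesale price w, then downstream
# picks q to max (a - q - w) * q, i.e. q = (a - w) / 2. Upstream maximizes (w - c)(a - w)/2.
w = (a + c) / 2
q_dm = (a - w) / 2
p_dm = a - q_dm

print(f"integrated:    p = {p_int:.1f}, q = {q_int:.1f}, total profit = {(p_int - c) * q_int:.1f}")
print(f"double markup: w = {w:.1f}, p = {p_dm:.1f}, q = {q_dm:.1f}, "
      f"total profit = {(p_dm - w) * q_dm + (w - c) * q_dm:.1f}")
# The chain sells less (2 vs 4) at a higher price (10 vs 8) and earns less in
# total (12 vs 16). A two-part tariff (w = c plus a fixed fee) restores the
# integrated outcome; if instead the downstream firm dictates w = c, the double
# markup disappears as well.
```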
Double marginalization was followed by computing total consumer surplus by integrating the inverse demand function. Ended on optimal regulation of the monopolist, showing that pricing at marginal cost maximizes producer plus consumer surplus. Brief discussion of the incentives to be a monopolist if such regulation were in place. Then asked the class to consider regulating a monopsonist and whether a minimum wage would be a good idea.
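The welfare claim in one line: with inverse demand $p(q)$ and constant marginal cost $c$, total (consumer plus producer) surplus at output $q$ is

$$W(q)=\int_0^{q}\bigl(p(x)-c\bigr)\,dx,$$

so $W'(q)=p(q)-c$: welfare rises as long as price exceeds marginal cost and is maximized exactly where $p(q)=c$, i.e., under marginal-cost pricing.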
Day 2 was devoted to marginal this, that and the other. I began by asking if a monopolist (with constant unit costs) who suffers an increase in its unit costs should pass along the full increase to its buyers. To make it more piquant, I asked them to assume a literal monopolist, i.e., a sole seller. Some said maybe, because it depends on the elasticity of demand. Others said yes, what choice do buyers have? Alert ones said no, because you must be at an elastic portion of the demand curve (thank you, markup formula). The monopolist will indeed increase the price, but the increase is tempered by the high elasticity at the current profit-maximizing price. Profit will go down. This example illustrates how the demand side and the cost side interact to influence profits. On day 1 we focused on how the demand side affected price; on day 2 we focused on the cost side.
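The tempering is easy to quantify in the linear case (a standard textbook calculation, not the exact numbers from class): with inverse demand $p(q)=a-bq$ and unit cost $c$, the profit-maximizing price is

$$p^{*}=\frac{a+c}{2},\qquad \frac{\partial p^{*}}{\partial c}=\frac{1}{2},$$

so a one-dollar cost increase raises the price by only fifty cents, and maximal profit $(a-c)^{2}/4b$ falls. (How much of the increase is passed along depends on the curvature of demand; linear demand is just the cleanest illustration.)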
To motivate the notion of marginal cost, I asked how they would define cost per unit, to convey the idea that this is an ambiguous concept. A possible candidate is average cost, but it is not helpful for making decisions about whether to increase or decrease output. For this, what we want is marginal cost. Defined marginal cost, and then on to constant, decreasing and increasing returns to scale and a discussion of technologies that would satisfy each of these. Solving quadratics is a good example. The time to solve each one is the marginal cost. If you have decreasing returns to scale in solving quadratics, a wit suggested, correctly, that one should give up mathematics.
Next, where do cost functions come from? An opportunity to introduce capital, labor and the production function. The cost function is the minimum-cost way of combining K and L to produce a target quantity. Numerical example with Cobb-Douglas. Without explicitly mentioning isoquants and level curves, solved the problem graphically (draw the feasible region, move the objective function hyperplane) as well as algebraically. Discussed the impact of a change in input prices on the mix used to produce the target volume. Marginal productivity of labor and capital, and the marginal rate of technical substitution. Eyes glazing over. Why am I wasting time with this stuff? This is reading aloud. Never again.
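For the record, the numerical exercise fits in a few lines of code (a generic Cobb-Douglas example with made-up numbers, not the one from class): pick the capital/labor mix that produces a target quantity at minimum cost, and see how the mix shifts when the wage changes.

```python
# Cobb-Douglas technology Q = K**alpha * L**(1 - alpha); input prices r (capital)
# and w (labor). Illustrative numbers.
alpha, Q = 0.4, 100.0

def cheapest_mix(w, r):
    """Cost-minimizing (K, L) for target output Q, from the usual tangency
    condition MP_L / MP_K = w / r, i.e. K / L = alpha * w / ((1 - alpha) * r)."""
    L = Q * ((1 - alpha) * r / (alpha * w)) ** alpha
    K = Q * (alpha * w / ((1 - alpha) * r)) ** (1 - alpha)
    return K, L, w * L + r * K

for w, r in [(10.0, 10.0), (20.0, 10.0)]:
    K, L, cost = cheapest_mix(w, r)
    print(f"w = {w:.0f}, r = {r:.0f}: K = {K:.1f}, L = {L:.1f}, cost = {cost:.1f}, "
          f"check Q = {K**alpha * L**(1 - alpha):.1f}")
# Doubling the wage tilts the mix toward capital (K rises, L falls) while the
# target output is unchanged; the minimized cost rises, but by less than it
# would if the firm kept the old input mix.
```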
On to marginal revenue. By this time they should have realized the word marginal means derivative. Thankfully, they don't ask why a new word is needed to describe something that already has a label: derivative. Marginal revenue should get their goat. It's a derivative of revenue, but with respect to what? Price or quantity? The term gives no clue. Furthermore, marginal revenue sounds like price. The result? Some students set price equal to marginal cost to maximize profit, because that's what the slogan marginal revenue = marginal cost means to them. To compound matters, we then say the area under the marginal revenue curve is revenue. If marginal revenue is the derivative with respect to quantity, then integrating it should return the revenue. Does this really deserve comment? Perhaps watching paint dry would be more exciting. Wish I had the courage to dispense with the word `marginal' altogether. Perhaps next year. Imagine the shock of my colleagues when the phrase `marginal blank' is greeted with puzzled looks.
They've been very patient. Before class ends there should be a payoff. Show that marginal revenue = marginal cost is a necessary condition for profit maximization, and is sufficient when we have decreasing returns to scale. This seems like small beer. What happens when we have increasing returns to scale? Why does this break down? Some pictures of why the slogan is no longer sufficient, and a discussion of how this relates to pricing for firms with increasing returns, like a producer of an app who must rent server space and gets a quantity discount.
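Here is a small numerical version of the pictures (my own example, loosely in the spirit of the app producer with a quantity discount on server space): linear demand and a concave cost $C(q)=8\sqrt{q}$, so marginal cost falls with output. Setting MR = MC produces two candidates, and only one of them is a maximum.

```python
import numpy as np

# Inverse demand p(q) = 10 - q and concave cost C(q) = 8*sqrt(q), so marginal
# cost 4/sqrt(q) falls with output: increasing returns. Illustrative numbers.
profit = lambda q: (10 - q) * q - 8 * np.sqrt(q)
mr = lambda q: 10 - 2 * q
mc = lambda q: 4 / np.sqrt(q)

q = np.linspace(0.01, 6, 60001)
gap = mr(q) - mc(q)
foc = q[np.where(np.diff(np.sign(gap)) != 0)[0]]      # outputs where MR crosses MC

print("MR = MC near q =", np.round(foc, 2), "with profits", np.round(profit(foc), 2))
print("best output on the grid: q = %.2f, profit = %.2f" % (q[np.argmax(profit(q))], profit(q).max()))
# MR = MC holds near q = 0.17 (a local profit minimum, profit < 0) and at q = 4
# (the true maximum, profit = 8). With increasing returns the slogan is only a
# necessary condition; you still have to check which candidate is the maximum.
```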
One more word about organ selling before I return to my comfort zone and talk about Brownian motion in Lie groups. Selling living human organs is repugnant, in part because the sellers cause damage to their bodies out of desperation. But what about allowing your relatives to sell what's left of you when you're gone? I think this should be uncontroversial. And there are side advantages too, in addition to increasing the number of transplantations. For example, it will encourage you to quit smoking.
Over to you, Walter.
Something funny happened when I started watching Al Roth's lecture and looked at the paper: I realized that what I always assumed to be the meaning of `repugnant transactions' is not exactly the phenomenon that Roth talks about. What I thought `repugnant transaction' means is a situation of `two rights make a wrong': it's totally awesome that Xanders is willing to donate his extra kidney to Zordiac, and it's really nice of Zordiac to donate money to Xanders, but these two noble acts done together, in exchange for each other, are immoral and should be outlawed. Roth, however, defines `repugnant transaction' more broadly, as any transaction that some people want to engage in and others don't think they should.
Consider the opening example of his paper: laws against selling horse meat in restaurants. Here what is repugnant is not the exchange but the good itself. It's not two rights making a wrong. It's just wrong. We outlaw the exchange simply for constitutional reasons, or because it's impossible to enforce a ban on eating — people will simply order takeaway and perform the crime of eating at their homes.
So let me offer a distinction between `repugnant exchanges', where the good itself is fine but buying and selling it is repugnant, and `repugnant goods/services', where the good or service itself is what is repugnant, even if for whatever reason what we actually outlaw is the transaction. Most of the examples that Roth gives fall into the `repugnant good/service' category rather than the `repugnant exchange' one. Such is the case with buying and selling recreational drugs, endangered species, and imported cultural property.
Are there any examples of `repugnant exchanges' in addition to selling human organs? Well, there is `renting' organs, as in surrogate motherhood. Anything else? An interesting example is lending money with interest, which used to be repugnant in the West (we got over it already): the very idea of lending money was never considered repugnant. What was repugnant was doing it for payment in the form of interest.
Finally, there is prostitution, which is illegal in the US. Repugnant service or repugnant exchange? It depends on your reasoning. Anti-prostitution laws have an unlikely coalition of supporters. There are the religious moralists, for whom the service (extramarital sexual intercourse) is what makes the transaction repugnant. They go after prostitution just because that's what they can outlaw in the US. (They go further in Iran.) But there are also feminists and liberals who view prostitution as exploitation, as I view selling human organs. They find the exchange repugnant even if they have no problem with the service itself.
Note that the example of prostitution shows the difficulty in the distinction I make between `repugnant good/service’ and `repugnant exchange’: It relies on unobservable reasoning. Just by knowing the laws and customs of a society you don’t know to which category a forbidden transaction belongs. Moreover, since different people may have different reasoning, the category is sometimes not uniquely defined. But I still think it’s a useful distinction.
In my salad days, school masters would assign boys returning from the summer hols an essay: `What I did during the summer'. Yes, masters and boys. I served a portion of my youth in a `misbegotten penal colony upon a wind blasted heath'. The only females present were masters' wives, matrons and the French mistress. No, not that kind, the kind that offers instruction in French. As you can see, to the lascivious minds of boys, there was no end to the double entendres. However, I digress.
Over the summer Thanh Nguyen and I completed a paper about stable matchings. The abstract is reproduced below.
The National Resident Matching Program strives for a stable matching of medical students to teaching hospitals. With the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a `nearby' instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Specifically, given a reported capacity $k_h$ for each hospital $h$, we find a redistribution of the slot capacities $k'_h$, with $|k_h-k'_h|$ small for every hospital $h$ and $\sum_h k'_h\le\sum_h k_h+9$, such that a stable matching exists with respect to the capacities $k'$. Our approach is general and applies to other types of complementarities, as well as matchings with side constraints and contracts.
In other words, with the addition of at most 9 additional slots, one can guarantee the existence of a stable matching. This is independent of the size of the market or the doctors' preferences (it does assume responsive preferences on the part of hospitals). The key tool is Scarf's lemma, which is a wonderful device for converting results about cardinal matching problems into results about ordinal matching problems. For more on this, consult the paper by Király and Pap, who should be credited with a formulation of Scarf's lemma that makes its usefulness evident.
Here is Al Roth's talk at the Lindau Meeting on Economic Sciences about repugnant transactions, which I guess is the technical term for the discomfort I feel at the idea of people donating their extra kidney to those who need it in return for, you know, money.
Before he was a Nobel Laureate, Roth was a Nancy L. Schwartz Memorial Lecturer. His talk was about kidney exchanges — these are exchanges between several donor+recipient pairs involving no money, only kidneys — and he started with a survey of the audience: who is in favor of allowing the selling and buying of kidneys in the free market? (I am glad I didn't raise my hand. The next question was about the selling and buying of living hearts.) I remember noticing that there was a correlation between raised hands and seniority: for whatever reason, seniors were more likely to be in favor of the free market than juniors.
At the dinner after the talk I ended up at a table of juniors & spouses, and we got to discussing our objections to the idea of letting Bob sell his kidney to Alice, so that Bob can afford to send his daughter to college, and in doing so save Alice's small child from orphanhood. It turned out we agreed on the policy but for different reasons. I don't remember which was my reason. I still find both of them convincing, though less so simultaneously.
Reason I: The market price would be too low. Hungry people will compete to sell their organs for a bowl of red pottage out of desperation. The slippery slope leads to poor people being harvested for their body parts.
Reason II: The market price would be too high. Only the 0.01% will be able to afford it. The slippery slope leads to a small aristocracy who live forever by regenerating their bodies.
As I said, both (somewhat) convincing. And please don’t ask me what would be the fair price, that is neither too low nor too high.