
Introduced externalities. The usual examples: pollution, good manners and flatulence. However, I also emphasized an externality we had dealt with all semester: when I buy a particular Picasso, it prevents you from doing so, exerting a negative externality on you. I did this to point out that the problem with externalities is not their existence, but whether they are `priced' into the market or not. For many of the examples of goods and services that we discussed in class, the externality is priced in and we get the efficient allocation.

What happens when the externality is not `priced in'? The hoary example of two firms, one upstream from the other, with the upstream firm releasing a pollutant into the river (which lowers its costs but raises the costs of the downstream firm) was introduced, and we went through the possibilities: regulation, taxation, merger/nationalization and tradeable property rights.

Discussed the pros and cons of each. Property rights (i.e., Coase) consumed the largest portion of the time: how would one define them? How would one ensure a perfectly competitive market in the trade of such rights? Nudged them towards the question of whether one can construct a perfectly competitive market for any property right.

To fix ideas, asked them to consider how a competitive market for the right to emit carbon might work. Factories can, at some expense, lower carbon emissions. Each of us values a reduction in carbon (but not necessarily identically). Suppose we hand out permits to factories (recall, Coase says the initial allocation of property rights is irrelevant) and have people buy up the permits to reduce carbon. Assuming carbon reduction is a public good (non-excludable and non-rivalrous), we have a classic public goods problem. Strategic behavior kills the market.
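
To see the free-rider problem in the starkest terms, here is a back-of-envelope calculation (my own toy numbers, not from class): each citizen values every permit retired, but a lone buyer captures only a sliver of the social value.

```python
# Toy numbers: n citizens each value every retired permit at v;
# permits trade at price p. Retirement is non-rivalrous, so the social
# value of retiring one permit is n*v, but a lone buyer captures only v.
n, v, p = 100, 1.0, 5.0

social_value = n * v    # 100.0 > 5.0: retiring permits is efficient
private_value = v       # 1.0 < 5.0: no individual buys one

print(social_value > p)   # True
print(private_value > p)  # False: everyone waits for others to buy; no trade
```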

Some discussion of whether reducing carbon is a public good. The air we breathe (there are oxygen tanks)? Fireworks? Education? National defense? Wanted to highlight that nailing down an example that fit the definition perfectly was hard. There are `degrees'. Had thought that education would generate more of a discussion given the media attention it receives; it did not.

Concluded with an in-depth discussion of electricity markets, as they provide a wonderful vehicle for discussing efficiency and externalities as well as entry and exit in one package. They also provide a backdoor into a discussion of net neutrality that seemed to generate some interest. As an aside, I asked them whether perfectly competitive markets pay agents what they are worth. How should one measure an agent's economic worth? Nudged them towards marginal product. Gave an example where Walrasian prices did not give each agent his marginal product (where the core does not contain the Vickrey outcome). So, was Michael Jordan overpaid or underpaid?
With respect to entry and exit, I showed that the zero-profit condition many had seen in earlier econ classes does not produce efficient outcomes. The textbook treatment assumes all potential entrants have the same technology. What if the entrants have different technologies, for example solar vs. coal? Do we get the efficient mix of technologies? Assuming a competitive market that sets the Walrasian price for power, I showed them examples where we do not.

Two governments did not survive this week: the Swedish and the Israeli. Here in Israel, people are interested in the effect of the coming elections on the financial market. The Marker, the most important national daily economics newspaper, published an article on this issue. The chief economist of the second-largest investment house, which handles about 30 billion USD, is quoted as saying (my own translation):

“Past experience shows that most of the time, six months after elections the stock market is at a higher level than before the elections,” emphasized Zbezinsky (the chief economist, ES). The Meitav-Dash investment house checked the performance of the TA-25 Index (the index of the largest 25 companies in the Israeli stock exchange, ES) over the last six elections. They compared the index from six months before each election up to six months after it, and found that the average return is positive and equals 6%.

To support this claim, a nice graph is added:

[Graph: the TA-25 index in the six months before and after each of the last six elections]

Even without understanding Hebrew, you can see the number 25 in the title, which refers to the TA-25 index, the six colored lines in the graph, the x-axis measuring the time difference from the election (in months), and the year in which each election took place. Does this graph support the claim of the chief economist? Is his claim relevant or interesting? Some points that occurred to a non-economist like me:

  1. Six data points: that is all the guy has. And from this he concludes that “most of the time” the market increased. Well, he is right; the index increased four times and decreased only twice. (A quick significance check appears after this list.)
  2. The election is due on 17 March 2015, which is three and a half months away. In particular, taking six months before the election as a baseline is useless; that baseline is already well in the past.
  3. Some of the colored lines seem to fluctuate, suggesting that external events unrelated to elections may have had an impact on the stock market, like the Intifada in 2001 or the aftermath of the Lebanon war before the 2009 elections. It might be a good idea to check whether any such events are expected in the coming nine and a half months.
  4. It would also be nice to compare the performance around elections to the performance between elections. Maybe 6% is the usual return of the TA-25, maybe it is usually higher, and maybe it is usually lower.
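
On point 1, here is the significance check promised above (my own back-of-envelope sketch; it assumes SciPy is available):

```python
from scipy.stats import binomtest

# Null hypothesis: after each election the market is equally likely to be
# higher or lower six months later. How surprising are 4 rises out of 6?
result = binomtest(4, n=6, p=0.5)
print(result.pvalue)  # ~0.69: four out of six is entirely unremarkable
```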

I am sure that readers will be able to find additional points that make the chief economist's statement irrelevant, while others may find points that support it. I shudder at the thought that this guy is in charge of some of my retirement funds.

Economists, I told my class, are the most empathetic and tolerant of people. Empathetic because, as they learnt from game theory, they strive to see the world through the eyes of others. Tolerant because they never question anyone's preferences. If I had the talent I'd have broken into song with a version of `Why Can't a Woman Be More Like a Man':

Psychologists are irrational, that’s all there is to that!
Their heads are full of cotton, hay, and rags!
They’re nothing but exasperating, irritating,
vacillating, calculating, agitating,
Maddening and infuriating lags!

Why can’t a psychologist be more like an economist?

Back to earth with preference orderings. Avoided the word rational to describe the restrictions placed on preference orderings; used `consistency' instead. It is more neutral and conveys the idea that inconsistency makes prediction hard, rather than suggesting a Wooster-like IQ. Emphasized that utility functions are simply a succinct representation of consistent preferences and have no meaning beyond that.

In a bow to tradition went over the equi-marginal principle, a holdover from the days when economics students were ignorant of multivariable calculus. Won’t do that again. Should be banished from the textbooks.

Now for some meat: the income and substitution (I&S) effect. Had been warned this was tricky. `No shirt Sherlock,' my students might say. One has to be careful about the setup.

Suppose a price vector p and income I. Before I actually purchase anything, I contemplate what I might purchase to maximize my utility. Call that x.
Again, before I purchase x, the price of good 1 rises. Again, I contemplate what I might consume. Call it z. The textbook discussion of the income and substitution effect is about the difference between x and z.

As described, the agent has not purchased x or z. Why this pettifoggery? Suppose I actually purchase x before the price increase. If the price of good 1 goes up, I can resell it. This is both a change in price and a change in income, something not covered by the I&S effect.

The issue is resale of good 1. Thus, an example of the I&S effect using housing should distinguish between owning and renting. To be safe, one might want to stick to consumables. To observe the income effect, we would need a consumable that sucks up a `largish' fraction of income. A possibility is a low-income consumer who spends a large fraction of income on food.
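
For concreteness, a worked decomposition (my own numbers; Cobb-Douglas utility and the Slutsky, rather than Hicks, compensation):

```python
# u(x1, x2) = x1**a * x2**(1-a), so demand for good 1 is x1 = a * income / p1.
a, income = 0.5, 100.0
p1_old, p1_new = 1.0, 2.0

def demand1(p1, m):
    return a * m / p1

x1 = demand1(p1_old, income)             # x: buy 50 units at the old price
# Slutsky compensation: income that just affords the old bundle at new prices
m_comp = income + x1 * (p1_new - p1_old)   # 150

substitution = demand1(p1_new, m_comp) - x1                        # -12.5
income_effect = demand1(p1_new, income) - demand1(p1_new, m_comp)  # -12.5

print(substitution, income_effect, substitution + income_effect)  # total -25
```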

Nicolaus Copernicus's De Revolutionibus, in which he advocated his theory that Earth revolved around the sun, was first printed just before Copernicus's death in 1543. It therefore fell to one Andreas Osiander to write the introduction. Here is a passage from the introduction:

[An astronomer] will adopt whatever suppositions enable [celestial] motions to be computed correctly from the principles of geometry for the future as well as for the past…. These hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough.

In other words, the purpose of the astronomer's study is to capture the observed phenomena — to provide an analytic framework by which we can explain and predict what we see when we look at the sky. It turns out that it is more convenient to capture the phenomena by assuming that Earth revolves around the sun than by assuming, as the Greek astronomers did, geocentric epicyclical planetary motion. Therefore let's calculate the right time for Easter by making this assumption. As astronomers, we shouldn't care whether this is actually true.

Whether or not Copernicus would have endorsed this approach is disputable. What is certain is that his book was, at least initially, accepted by the Catholic Church, whose astronomers used Copernicus's model to develop the Gregorian calendar. (Notice, by the way, that I said the word model, which is probably anachronistic but, I think, appropriately captures Osiander's view.) The person who caused the scandal was Galileo Galilei, who famously declared that if Earth behaves as if it moves around the sun then, well, it moves around the sun. And yet it moves. It's not a model, it's reality. Physicists' subject matter is nature, not models about nature.

What about economists? Econ theorists, at least, don't usually claim that the components of their models of economic agents (think utilities, beliefs, discount factors, ambiguity aversions) correspond to any real elements of the physical world or of the cognitive process that the agent performs. When we say that Adam's utility from consuming c apples is log(c), we don't mean that Adam knows anything about logs. We mean — wait for it — that he behaves as if this were his utility, or, as Osiander would have put it, this utility provides a calculus consistent with the observations, and that alone is enough.

The contrast between theoretical economists' `as if' approach and physicists' `and yet it moves' approach is not as sharp as I would like it to be. First, from the physics side, some modern interpretations of quantum physics view it, and by extension the entire physics enterprise, as nothing more than a computational tool for producing predictions. On the other hand, from the economics side, while I think it is still customary to pay lip service to the `as if' orthodoxy, at least in decision theory classes, I don't often hear it in seminars. And when neuro-economists claim to localize the decision-making process in the brain, they seem to view the components of the model as more than just mathematical constructions.

Yep, I am advertising another paper. Stay tuned :)

I spent these two classes going over two-part tariffs. Were this just the algebra, it would be overkill. The novelty, if any, was to tie the whole business to how one should price in a razor & blade business (engines and spare parts, Kindle and ebooks, etc.). The basic two-part model sets a high fixed fee (which one can associate with the durable) and sells each unit of the consumable at marginal cost. The analysis offers an opportunity to remind them of the problem of regulating a monopolist charging a uniform price.
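
For anyone who wants the algebra spelled out, a minimal sketch with a single consumer type (my own numbers; linear demand and constant marginal cost):

```python
# Demand q(p) = A - B*p, marginal cost c. The basic two-part tariff sets
# the per-unit price at marginal cost and the fee equal to consumer surplus.
A, B, c = 10.0, 1.0, 2.0

p_star = c                           # price each unit at marginal cost
q_star = A - B * p_star              # 8 units sold
fee = (A / B - p_star) * q_star / 2  # consumer surplus triangle = 32.0

# Compare with the best uniform price for the same demand:
p_u = (A / B + c) / 2                       # 6.0
profit_uniform = (p_u - c) * (A - B * p_u)  # 16.0

print(fee, profit_uniform)  # the two-part tariff captures all the surplus
```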

The conclusion of the basic two-part model suggests charging a high price for razors and a low price for blades. This seems to run counter to the prevailing wisdom. It's an opportunity to solicit reasons why the conclusion of the model might be wrongheaded. We ran through a litany of possibilities: heterogeneous preferences (an opportunity to do a heavy vs. light user calculation), hold-up (one student observed that we can trust Amazon to keep the price of ebooks low, otherwise we would switch to pirated versions!), liquidity constraints, competition. Tied this to Gillette's history as expounded in a paper by Randal Picker (see an earlier post) and then onto Amazon's pricing of the Kindle and ebooks (see this post). This allowed for a discussion of the wholesale vs. agency model of pricing, which the students had been asked to work out in the homework (a nice application of basic monopoly pricing exercises!).

The `take-away’ I tried to emphasize was how models help us formulate questions (rather than simply provide prescriptions), which in turn gives us greater insight into what might be going on.

One more word about organ selling before I return to my comfort zone and talk about Brownian motion in Lie groups. Selling living human organs is repugnant, in part because the sellers cause damage to their bodies out of desperation. But what about allowing your relatives to sell what's left of you when you're gone? I think this should be uncontroversial. And there are side advantages too, in addition to increasing the number of transplantations. For example, it will encourage you to quit smoking.

Over to you, Walter.

200 students for a 9 am class in spite of a midterm on day 3; perhaps they’ve not read the syllabus.

Began with the ultimatum game, framed in terms of a seller making a take-it-or-leave-it offer to a buyer. The game allows one to make two points at the very beginning of class.

1) The price the seller chooses depends on their model of how the buyer will behave. One can draw this point out by asking sellers to explain how they arrived at their offers. The best offers to discuss are the really low ones (i.e., those that give most of the surplus to the buyer) and the offers that split the difference.

2) Under the assumption that `more money is better than less', point out that the seller captures most of the gains from trade. Why? The ability to make a credible take-it-or-leave-it offer.

This makes for a smooth transition into the model of quasi-linear preferences. Some toy examples of how buyers make choices based on surplus. Emphasize that it captures the idea that buyers make trade-offs (pay more if you get more; if it's priced low enough, it's good enough). Someone will ask about budget constraints. A good question; ignore budgets for now and come back to them later in the semester.

Next, point out that buyers do not share the same reservation price (RP) for a good or service. Introduce demand curve as vehicle for summarizing variation in RPs. Emphasize that demand curve tells you demand as you change your price holding other prices fixed.

Onto monopoly with constant unit costs, limited to a uniform price. Emphasize that monopoly in our context does not mean the absence of competition, only that competition keeps other prices fixed as we change ours. The reason for such an assumption is to understand first how buyers respond to one seller's price changes.

How does the monopolist choose the profit-maximizing price? Trade-off between margin and volume. Simple monopoly pricing exercise. The answer by itself is uninteresting. Want to know what the profit-maximizing price depends upon.

Introduce elasticity of demand, its meaning and derivation. Then, a table of how profit and elasticity vary with price in the toy example introduced earlier. Point out how elasticity rises as price rises: demand starts to drop off faster than the margin rises. Explain why we don't stop where elasticity is 1. Useful place to point out that at that point a small price increase is revenue neutral but total costs fall. So, the uniform price is doing two things: determining how much is captured from buyers and controlling total production costs. The table also illustrates that the elasticity of demand matters for choosing price.
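
A table of the sort I have in mind is easy to generate (toy numbers of my own, not the example from class):

```python
# Linear demand q(p) = 20 - p with unit cost c = 4. For this demand the
# elasticity at price p is p/q. Profit peaks at p = 12, where elasticity
# is 1.5: past the revenue-maximizing price (elasticity 1, at p = 10), a
# small price increase is revenue neutral but total production costs fall.
c = 4.0
print(f"{'price':>6} {'profit':>8} {'elasticity':>11}")
for p in range(5, 19):
    q = 20 - p
    profit = (p - c) * q
    elasticity = p / q
    print(f"{p:>6} {profit:>8.0f} {elasticity:>11.2f}")
```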

Segue into the markup formula. Explain why we should expect some kind of inverse relationship between markup and elasticity. Do derivation of markup formula with constant unit costs.
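
For the record, the derivation takes two lines: profit is {\pi(p)=(p-c)q(p)}, so the first-order condition is {q(p)+(p-c)q'(p)=0}; dividing through by {q(p)} and writing {\epsilon=-pq'(p)/q(p)} for the elasticity gives the markup formula {(p-c)/p = 1/\epsilon}.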

Now to something interesting, to make the point that what has come before is very useful: author vs. publisher, who would prefer a higher price for the book? You'll get all possible answers, which is perfect. Start with how revenue is different from profit (authors get a percentage of revenue). This difference means their interests are not aligned. So, they should pick different prices. But which will be larger? Enter the markup formula. The author wants the price where elasticity is 1. The publisher wants to price where elasticity is bigger than 1. So, the publisher wants the higher price. Wait, what about e-books? Then author and publisher want the same price, because unit costs are zero.
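
A sketch that makes the conflict concrete (my own numbers: a 15% royalty and the same toy linear demand as above):

```python
# Author maximizes royalty r * revenue; publisher maximizes the rest of
# revenue minus production cost. Demand q(p) = 20 - p, unit cost c = 4.
r, c = 0.15, 4.0
def q(p): return 20.0 - p

prices = [k / 100 for k in range(1, 2000)]
p_author = max(prices, key=lambda p: r * p * q(p))                 # 10.0
p_publisher = max(prices, key=lambda p: ((1 - r) * p - c) * q(p))  # ~12.35

print(p_author, p_publisher)  # the publisher indeed wants the higher price
# With an e-book (c = 0) the publisher also maximizes revenue, so both
# pick the elasticity-1 price of 10.0 and the conflict disappears.
```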

This is the perfect opportunity to introduce the Amazon letter to authors, telling them that the elasticity of demand for e-books at the current $14.99 price is about 2.4. Well above 1. Clearly, all parties should agree to lower the price of e-books. But what about traditional books? Surely a lower e-book price will cause some readers to switch from the traditional to the e-book. Shouldn't we look at the loss in profit from that as well? A capital point, but make life simple: suppose we have only e-books. Notice that under the agency model, where Amazon gets a percentage of revenue, everyone's incentives appear to be aligned.
Is Amazon correct in its argument that dropping the e-book price will benefit me, the author? As expressed in their letter, no. To say that the elasticity of demand for my book at the current price is 2.4 means that if I drop my price 1%, demand will rise 2.4% HOLDING OTHER PRICES FIXED. However, Amazon is not talking about dropping the price of my book alone. They are urging a drop in the price of ALL books. It may well be that a drop in price for all e-books will result in an increase in total revenue for the e-book category. This is good for Amazon. However, it is not at all clear that it is good for me. Rustling of papers and creaking of seats is a sign that time is up.
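
The arithmetic behind the suspicion, as I read the letter:

```python
# Take Amazon's elasticity of 2.4 at face value. Cutting MY price by 1%,
# all other prices fixed, moves my revenue (profit, at zero unit cost) by:
p_factor = 0.99             # price falls 1%
q_factor = 1 + 2.4 * 0.01   # quantity rises 2.4%
print(p_factor * q_factor - 1)  # ~ +1.4%: a unilateral cut looks good
# But Amazon proposes cutting ALL e-book prices at once, and a ceteris
# paribus elasticity says nothing about what happens to my sales then.
```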

In the last few posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter' and `learning to make predictions about the future as if you knew the parameter'. The former seems to imply the latter almost by definition, but this is not so.

Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin, then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some {\mu\in [0,1]}, then your prediction about the outcome tomorrow will be the expectation of {\mu}. But in a more general stationary environment, your prediction about the outcome tomorrow depends on your current belief about the parameter and also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix, then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
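
A tiny sketch of the Markov point (my own illustration: two states, a Beta(1,1) prior on each row of the transition matrix):

```python
from collections import Counter

history = "GGBGBBBG"  # a hypothetical observed path
counts = Counter(zip(history, history[1:]))  # transition counts

def predict(today, target):
    # Posterior mean of P(today -> target) under a Beta(1,1) prior.
    other = "G" if target == "B" else "B"
    a = counts[(today, target)] + 1
    b = counts[(today, other)] + 1
    return a / (a + b)

# The same posterior belief yields different predictions depending on
# today's outcome, unlike the i.i.d. case.
print(predict("G", "B"))  # 0.6
print(predict("B", "B"))  # 0.5
```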

The formulation of the idea of `learning to make predictions' goes through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins' theorem to prove that rational players will end up playing approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird' things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn't follow my previous posts, I hope the following example will be intriguing in its own right.

Every day there is probability {1/2} of the eruption of war (W). If no war erupts, then the outcome is either bad economy (B) or good economy (G), and it is a function of the number of peaceful days since the last war. The function from the number of peaceful days to the outcome is an unknown parameter of the process. Thus, a parameter is a function {\theta:\{1,2,\dots\}\rightarrow\{\text{B},\text{G}\}}. I am going to compare the predictions about the future made by two agents: Roxana, who knows {\theta}, and Ursula, who faces some uncertainty about {\theta}, represented by a uniform belief over the set of all parameters. Neither Roxana nor Ursula knows the future outcomes, and since both of them are rational decision makers, both use Bayes' rule to form beliefs about the unknown future given what they have seen in the past.

Consider first Roxana. In the terminology I introduced in previous posts, she faces no structural uncertainty. After a period of {k} consecutive peaceful days Roxana believes that with probability {1/2} the outcome tomorrow will be W and with probability {1/2} the outcome tomorrow will be {\theta(k)}.

Now consider Ursula. While she does not initially know {\theta}, as time goes by she learns it. What do I mean here by learning? Well, suppose Ursula starts observing the outcomes and she sees G,B,W,B,G,…. From this information Ursula deduces that {\theta(1)=\text{B}}, so that if a peaceful day follows a war then it has a bad economy. The next time a war pops up, Ursula will know how to make a prediction about the outcome tomorrow which is as accurate as Roxana's prediction. Similarly, Ursula can deduce that {\theta(2)=\text{G}}. In this way Ursula gradually deduces the values of {\theta(k)} while she observes the process. However, and this is the punch line, for every {k\in\{1,2,3,\dots\}} there will be a time when Ursula observes {k} consecutive peaceful days for the first time, and on that day her prediction about the next outcome will be {1/2} for war, {1/4} for good economy and {1/4} for bad economy. Thus there will always be infinitely many occasions on which Ursula's prediction differs from Roxana's.

So, Ursula does learn the parameter, in the sense that she gradually deduces more and more values of {\theta(k)}. However, because at every point in time she may require a different value of {\theta(k)} (this is the difference between the general stationary environment and the i.i.d. environment!), there will be infinitely many occasions on which she has not yet been able to deduce the value of the parameter that she needs in order to make a prediction about the outcome tomorrow.
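
A quick simulation conveys both the punch line and its flip side (my own code; I index the outcome on the k-th consecutive peaceful day as {\theta(k)}, a harmless reindexing):

```python
import random

random.seed(1)
theta = {}     # the true parameter, drawn uniformly as the process unfolds
known = set()  # streak lengths k for which Ursula has deduced theta(k)
streak, hard_days = 0, []

for t in range(100_000):
    if random.random() < 0.5:   # war erupts; the peaceful streak resets
        streak = 0
        continue
    streak += 1                 # a peaceful day: its outcome is theta(streak)
    if streak not in theta:
        theta[streak] = random.choice("BG")
    if streak not in known:     # first visit to this streak length: Ursula
        hard_days.append(t)     # forecast (1/2, 1/4, 1/4) while Roxana put
        known.add(streak)       # probability 1/2 on the correct outcome

print(len(hard_days), hard_days[-3:])
```

Run it and you will see only a handful of `hard' days in a hundred thousand periods, with the later ones spaced roughly exponentially far apart, which anticipates the next paragraph.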

You may notice that Ursula does succeed in making predictions most of the time. In fact, the situations in which she fails become rarer and rarer, occurring only after longer and longer blocks of peaceful days. Indeed, Nabil and I formalize this idea and show that it holds in every stationary environment with structural uncertainty: the observer makes predictions approximately as if he knew the parameter on almost every day. For that, we use a weak notion of merging which was suggested by Lehrer and Smorodinsky. If you are interested, then this is a good time to look at our paper.

Finally, the example given above is our adaptation of an example that first appeared in a paper by Boris Yakovlevich Ryabko. Ryabko's paper is part of a relatively large literature about non-Bayesian prediction in stationary environments. I will explain the relationship between that literature and our paper in another post.

The news of Stanley Reiter's passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age) and survived the Battle of the Bulge. On the wings of the GI Bill he went through City College, which in those days was the gate through which many outsiders passed on their way to the intellectual aristocracy.

But in the importance and noise of to-morrow
When the brokers are roaring like beasts on the floor of the Bourse

Perhaps a minute to recall what Stan left behind.

Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best-known example of this is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.

Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.

In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete Optimizing,” SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce tours at least as good as those reported in earlier papers. The ideas were extended, with Don Rice, into a local-search heuristic for non-concave mixed integer programs, along with a computational study of its performance.

Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then, on to Northwestern to build two groups: one in the Economics department and another (in collaboration with Mort Kamien) in the business school.

Abraham Neyman and Sergiu Hart are two of the most prominent mathematical game theorists of our time. Neyman has contributed immensely to the study of the Shapley value, stochastic games, and repeated games and complexity. Hart has contributed significantly to the study of correlated equilibrium and the adaptive processes leading to it, value theory, and the formation of coalitions.
Both Abraham and Sergiu will turn 66 next year. To celebrate this rare occasion, the Center for the Study of Rationality at the Hebrew University of Jerusalem is organizing two conferences, one in honor of each of them. The conference in honor of Abraham will be held on June 16–19, 2015, and the conference in honor of Sergiu will follow on June 21–24, 2015.
Mark the dates and reserve tickets.
