
Nicolaus Copernicus's De Revolutionibus, in which he advocated his theory that Earth revolves around the sun, was first printed just before Copernicus's death in 1543. It therefore fell to one Andreas Osiander to write the introduction. Here is a passage from the introduction:

[An astronomer] will adopt whatever suppositions enable [celestial] motions to be computed correctly from the principles of geometry for the future as well as for the past…. These hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough.

In other words, the purpose of the astronomer’s study is to capture the observed phenomena — to provide an analytic framework by which we can explain and predict what we see when we look at the sky. It turns out that it is more convenient to capture the phenomena by assuming that Earth revolves around the sun than by assuming, as the Greek astronomers did, geocentric epicyclical planetary motion. Therefore let’s calculate the right time for Easter by making this assumption. As astronomers, we shouldn’t care whether it is actually true.

Whether or not Copernicus would have endorsed this approach is disputable. What is certain is that his book was at least initially accepted by the Catholic Church, whose astronomers used Copernicus's model to develop the Gregorian calendar. (Notice I said the word model, by the way, which is probably anachronistic but, I think, appropriately captures Osiander’s view.) The person who caused the scandal was Galileo Galilei, who famously declared that if Earth behaves as if it moves around the sun then, well, it moves around the sun. And yet it moves. It’s not a model, it’s reality. Physicists’ subject matter is nature, not models of nature.

What about economists? Econ theorists at least don’t usually claim that the components of their modeling of economic agents (think utilities, beliefs, discount factors, ambiguity aversions) correspond to any real elements of the physical world or of the cognitive process that the agent performs. When we say that Adam’s utility from apples is log(c) we don’t mean that Adam knows anything about logs. We mean — wait for it — that he behaves as if this is his utility, or, as Osiander would have put it, this utility provides a calculus consistent with the observations, and that alone is enough.

The contrast between theoretical economists’ `as if’ approach and physicists’ `and yet it moves’ approach is not as sharp as I would like it to be. First, from the physics side, some modern interpretations of quantum physics view it, and by extension the entire physics enterprise, as nothing more than a computational tool for producing predictions. On the other hand, from the economics side, while I think it is still customary to pay lip service to the `as if’ orthodoxy, at least in decision theory classes, I don’t often hear it in seminars. And when neuro-economists claim to localize the decision-making process in the brain, they seem to view the components of the model as more than just mathematical constructions.

Yep, I am advertising another paper. Stay tuned :)

I spent these two classes going over two-part tariffs. Were it just the algebra, it would be overkill. The novelty, if any, was to tie the whole business to how one should price in a razor & blade business (engines and spare parts, Kindle and ebooks, etc.). The basic 2-part model sets a high fixed fee (which one can associate with the durable) and sells each unit of the consumable at marginal cost. The analysis offers an opportunity to remind them of the problem of regulating the monopolist charging a uniform price.
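
A minimal sketch of the basic model, with my own toy numbers rather than the ones from class: one consumer with linear demand, constant marginal cost, and a seller choosing a per-unit price plus a fixed fee equal to the consumer's surplus at that price. Searching over per-unit prices lands you at marginal cost, with all the profit collected through the fee.

```python
# Toy two-part tariff: one consumer with linear demand q(p) = a - b*p and a
# seller with constant marginal cost c. The seller picks a per-unit price p
# and a fixed fee F equal to the consumer's surplus at that price.
a, b, c = 20.0, 1.0, 4.0           # hypothetical demand intercept, slope, unit cost

def consumer_surplus(p):
    q = max(a - b * p, 0.0)
    return 0.5 * q * (a / b - p)   # triangle under the demand curve above price p

def profit(p):
    q = max(a - b * p, 0.0)
    F = consumer_surplus(p)        # the largest fee the consumer will accept
    return F + (p - c) * q

prices = [c - 1.0 + 0.5 * k for k in range(15)]
best_p = max(prices, key=profit)
print("profit-maximizing per-unit price:", best_p, "(marginal cost is", c, ")")
print("fixed fee at that price:", consumer_surplus(best_p))
```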

The conclusion of the basic 2-part model suggests charging a high price for razors and a low price for blades. This seems to run counter to the prevailing wisdom. It's an opportunity to solicit reasons for why the conclusion of the model might be wrongheaded. We ran through a litany of possibilities: heterogeneous preferences (an opportunity to do a heavy vs. light user calculation; see the sketch below), hold-up (one student observed that we can trust Amazon to keep the price of ebooks low, otherwise we would switch to pirated versions!), liquidity constraints, competition. Tied this to Gillette's history expounded in a paper by Randal Picker (see an earlier post) and then onto Amazon's pricing of the Kindle and ebooks (see this post). This allowed for a discussion of the wholesale vs. agency model of pricing, which the students had been asked to work out in the homework (a nice application of basic monopoly pricing exercises!).
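
Here is a rough sketch of that heavy vs. light user calculation, again with made-up numbers rather than anything from class: once buyers differ, a fee that extracts the heavy user's whole surplus drives the light user away, so the seller may prefer a lower fee together with a per-unit price above marginal cost.

```python
# Heavy vs. light user sketch (hypothetical numbers): two consumers with linear
# demands q_i(p) = a_i - p and marginal cost c. A single two-part tariff (F, p)
# is offered to both; each consumer accepts it only if her surplus covers the fee.
c = 2.0
intercepts = {"light": 11.0, "heavy": 14.0}    # hypothetical demand intercepts a_i

def surplus(a, p):
    q = max(a - p, 0.0)
    return 0.5 * q * q                          # consumer surplus under q = a - p

def profit(F, p):
    total = 0.0
    for a in intercepts.values():
        if surplus(a, p) >= F:                  # the consumer accepts the tariff
            total += F + (p - c) * max(a - p, 0.0)
    return total

# Extracting the heavy user's whole surplus prices the light user out of the market.
print("fee = heavy user's surplus, p = c:", profit(surplus(14.0, c), c))
# Serving both with a lower fee does better, and a per-unit markup helps further.
print("fee = light user's surplus, p = c:", profit(surplus(11.0, c), c))
best = max((profit(surplus(11.0, p), p), p) for p in [c + 0.25 * k for k in range(17)])
print("best (profit, p) with fee = light user's surplus:", best)
```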

The `take-away’ I tried to emphasize was how models help us formulate questions (rather than simply provide prescriptions), which in turn gives us greater insight into what might be going on.

One more word about organ selling before I return to my comfort zone and talk about Brownian motion in Lie groups. Selling living human organs is repugnant, in part because the sellers cause damage to their bodies out of desperation. But what about allowing your relatives to sell what’s left of you when you’re gone? I think this should be uncontroversial. And there are side advantages too, in addition to increasing the number of transplantations. For example, it will encourage you to quit smoking.

Over to you, Walter.

200 students for a 9 am class in spite of a midterm on day 3; perhaps they’ve not read the syllabus.

Began with the ultimatum game framed in terms of a seller making a take-it-or-leave-it offer to the buyer. The game allows one to make two points at the very beginning of class.

1) The price the seller chooses depends on their model of how the buyer will behave. One can draw this point out by asking sellers to explain how they came by their offers. The best offers to discuss are the really low ones (i.e., those that give most of the surplus to the buyer) and the offers that split the difference.

2) Under the assumption that `more money is better than less’, point out that the seller captures most of the gains from trade. Why? The ability to make a credible take-it-or-leave-it offer.

This makes for a smooth transition into the model of quasi-linear preferences. Some toy examples of how buyers make choices based on surplus. Emphasize that it captures the idea that buyers make trade-offs (pay more if you get more; if it’s priced low enough, it’s good enough). Someone will ask about budget constraints. A good question; ignore budgets for now and come back to them later in the semester.
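
A minimal sketch of the surplus logic, with my own toy numbers: with quasi-linear preferences the buyer's payoff from an option is value minus price, she picks the option with the largest surplus, and she buys nothing if every surplus is negative.

```python
# Quasi-linear choice: the buyer values each option at v and pays p; surplus = v - p.
# She buys the option with the highest surplus, or nothing if all are negative.
options = {"basic": (10.0, 7.0), "premium": (18.0, 14.0)}   # name: (value, price)

def choose(options):
    name, best_surplus = None, 0.0          # None = buy nothing
    for n, (v, p) in options.items():
        if v - p > best_surplus:
            name, best_surplus = n, v - p
    return name, best_surplus

print(choose(options))            # premium wins: 18 - 14 = 4 > 10 - 7 = 3
# "Pay more if you get more": raising the premium price to 16 flips the choice.
options["premium"] = (18.0, 16.0)
print(choose(options))            # basic wins: 3 > 2
```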

Next, point out that buyers do not share the same reservation price (RP) for a good or service. Introduce demand curve as vehicle for summarizing variation in RPs. Emphasize that demand curve tells you demand as you change your price holding other prices fixed.

Onto monopoly with constant unit costs and limited to a uniform price. Emphasize that monopoly in our context does not mean absence of competition, only that competitors’ prices stay fixed as we change ours. The reason for such an assumption is to understand first how buyers respond to one seller’s price changes.

How does the monopoly choose the profit-maximizing price? Trade-off between margin and volume. Simple monopoly pricing exercise. The answer by itself is uninteresting. Want to know what the profit-maximizing price depends upon.

Introduce the elasticity of demand, its meaning and derivation. Then, a table of how profit and elasticity vary with price in the toy example introduced earlier. Point out how elasticity rises as price rises. Demand starts to drop off faster than the margin rises. Explain why we don’t stop where elasticity is 1. A useful place to point out that here a small price increase is revenue neutral but total costs fall. So, the uniform price is doing two things: determining how much is captured from buyers and controlling total production costs. The table also illustrates that the elasticity of demand matters for choosing the price.
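
Here is a sketch of the kind of table I have in mind, using a made-up linear demand rather than the toy example from class: quantity, revenue, profit and elasticity at each price, with elasticity rising in price and the profit-maximizing price sitting where elasticity is already above 1.

```python
# Toy uniform-price monopoly: linear demand q = 100 - 10p, unit cost c = 2.
# Tabulate quantity, revenue, profit and the price elasticity of demand.
c = 2.0

def q(p):
    return max(100.0 - 10.0 * p, 0.0)

def elasticity(p):
    return 10.0 * p / q(p)            # |dq/dp| * p / q for this demand curve

print("  p     q   revenue  profit  elasticity")
for p in [3.0, 4.0, 5.0, 6.0, 7.0]:
    rev = p * q(p)
    prof = (p - c) * q(p)
    print(f"{p:4.1f} {q(p):5.0f} {rev:8.0f} {prof:7.0f} {elasticity(p):10.2f}")
# Revenue peaks where elasticity = 1 (p = 5); profit peaks higher, at p = 6,
# where elasticity is 1.5: margin and total production costs both matter.
```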

Segue into the markup formula. Explain why we should expect some kind of inverse relationship between markup and elasticity. Do derivation of markup formula with constant unit costs.
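
For the record, a sketch of the derivation with constant unit cost {c}: profit is {(p-c)D(p)}, the first-order condition is {D(p)+(p-c)D'(p)=0}, and dividing through by {D(p)} and rearranging gives the markup (Lerner) formula {(p-c)/p=1/\epsilon}, where {\epsilon=-pD'(p)/D(p)} is the elasticity of demand at the chosen price.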

Now to something interesting to make the point that what has come before is very useful: author vs. publisher, who would prefer a higher price for the book? You’ll get all possible answers, which is perfect. Start with how revenue is different from profit (authors get a percentage of revenue). This difference means their interests are not aligned. So, they should pick different prices. But which will be larger? Enter the markup formula. The author wants the price where elasticity is 1. The publisher wants the price where elasticity is bigger than 1. So, the publisher wants the higher price. Wait, what about e-books? Then author and publisher want the same price, because unit costs are zero.
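
A quick numerical check of the point, with a hypothetical linear demand rather than anyone's actual book data: the author maximizes revenue and lands where elasticity is 1, the publisher maximizes profit and lands at a higher price where elasticity exceeds 1, and with zero unit cost the two prices coincide.

```python
# Author vs. publisher: demand q = 100 - 10p. The author earns a share of
# revenue p*q; the publisher earns (p - c)*q. Compare their preferred prices.
def q(p):
    return max(100.0 - 10.0 * p, 0.0)

def argmax(objective):
    grid = [0.1 * k for k in range(1, 100)]
    return max(grid, key=objective)

for c in (2.0, 0.0):                                  # paper book vs. e-book
    author_p = argmax(lambda p: p * q(p))             # revenue maximizer
    publisher_p = argmax(lambda p: (p - c) * q(p))    # profit maximizer
    print(f"unit cost {c}: author prefers {author_p:.1f}, publisher prefers {publisher_p:.1f}")
# With c = 2 the publisher wants the higher price (6 vs. 5); with c = 0 they agree.
```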

This is the perfect opportunity to introduce the Amazon letter to authors telling them that the elasticity of demand for e-books at the current $14.99 price is about 2.4. Well above 1. Clearly, all parties should agree to lower the price of e-books. But what about traditional books? Surely a lower e-book price will cause some readers to switch from the traditional book to the e-book. Shouldn’t we look at the loss in profit from that as well? Capital point, but let’s make life simple. Suppose we have only e-books. Notice that under the agency model, where Amazon gets a percentage of revenue, everyone’s incentives appear to be aligned.
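
A back-of-the-envelope version of the arithmetic behind the letter's claim: holding other prices fixed, a 1% price cut raises quantity by about 2.4%, so revenue changes by a factor of roughly {0.99\times 1.024\approx 1.014}, a gain of about 1.4%, and with zero unit cost that is also the change in profit. That is the sense in which a price above the revenue-maximizing level leaves money on the table.
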
Is Amazon correct in its argument that dropping the e-book price will benefit me, the author? As expressed in their letter, no. To say that the elasticity of demand for my book at the current price is 2.4 means that if I drop my price 1%, demand will rise 2.4% HOLDING OTHER PRICES FIXED. However, Amazon is not talking about dropping the price of my book alone. They are urging a drop in the price of ALL books. It may well be that a drop in the price of all e-books will result in an increase in total revenues for the e-book category. This is good for Amazon. However, it is not at all clear that it is good for me. Rustling of papers and creaking of seats is a sign that time is up.

In the last posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter’ and `learning to make predictions about the future as if you knew the parameter’. The former seems to imply the latter almost by definition, but this is not so.

Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some {\mu\in [0,1]} then your prediction about the outcome tomorrow will be the expectation of {\mu}. But in a more general stationary environment, your prediction about the outcome tomorrow depends on your current belief about the parameter and also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
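
A small sketch of the Markov case (my own construction, not taken from the papers discussed below): the agent keeps a posterior over the persistence parameter of a two-state chain and predicts tomorrow by averaging the relevant transition probability against that posterior, so the prediction depends on the posterior and on today's outcome.

```python
import random

# Two-state Markov chain on {0, 1}: with probability theta tomorrow equals today.
# The agent does not know theta; she starts from a uniform prior over a grid,
# updates by Bayes' rule, and predicts tomorrow given her posterior AND today's state.
grid = [k / 100 for k in range(1, 100)]               # discretized values of theta
posterior = {theta: 1.0 / len(grid) for theta in grid}

def update(posterior, today, yesterday):
    # Likelihood of the observed transition under each candidate theta.
    post = {t: w * (t if today == yesterday else 1 - t) for t, w in posterior.items()}
    z = sum(post.values())
    return {t: w / z for t, w in post.items()}

def predict_same_as_today(posterior):
    # Probability that tomorrow equals today, averaged over the posterior.
    return sum(t * w for t, w in posterior.items())

random.seed(0)
true_theta, state = 0.8, 1                            # hypothetical true persistence
for _ in range(200):
    new_state = state if random.random() < true_theta else 1 - state
    posterior = update(posterior, new_state, state)
    state = new_state
print("posterior mean persistence:", round(predict_same_as_today(posterior), 3))  # close to 0.8
```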

The formulation of the idea of `learning to make predictions’ goes through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins’ theorem to prove that rational players will end up playing an approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird’ things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn’t follow my previous posts, I hope the following example will be intriguing in its own right.

Every day there is probability {1/2} of an eruption of war (W). If no war erupts then the outcome is either bad economy (B) or good economy (G), and it is a function of the number of peaceful days since the last war. The function from the number of peaceful days to the outcome is an unknown parameter of the process. Thus, a parameter is a function {\theta:\{1,2,\dots\}\rightarrow\{\text{B},\text{G}\}}. I am going to compare the predictions about the future made by two agents: Roxana, who knows {\theta}, and Ursula, who faces some uncertainty about {\theta}, represented by a uniform belief over the set of all parameters. Neither Roxana nor Ursula knows the future outcomes, and since both of them are rational decision makers, they both use Bayes’ rule to form beliefs about the unknown future given what they have seen in the past.

Consider first Roxana. In the terminology I introduced in previous posts, she faces no structural uncertainty. After a period of {k} consecutive peaceful days Roxana believes that with probability {1/2} the outcome tomorrow will be W and with probability {1/2} the outcome tomorrow will be {\theta(k+1)}.

Now consider Ursula. While she does not initially know {\theta}, as time goes by she learns it. What do I mean here by learning? Well, suppose Ursula starts observing the outcomes and she sees G,B,W,B,G,…. From this information Ursula deduces that {\theta(1)=\text{B}}, so that if a peaceful day follows a war then it has a bad economy. The next time a war pops up, Ursula will know to make a prediction about the outcome tomorrow which is as accurate as Roxana’s prediction. Similarly Ursula can deduce that {\theta(2)=\text{G}}. This way Ursula gradually deduces the values of {\theta(k)} while she observes the process. However, and this is the punch line, for every {k\in\{1,2,3,\dots\}} there will be a time when Ursula observes {k} consecutive peaceful days for the first time, and on this day her prediction about the next outcome will be {1/2} for war, {1/4} for good economy and {1/4} for bad economy. Thus there will always be infinitely many occasions on which Ursula’s prediction differs from Roxana’s.

So, Ursula does learn the parameter in the sense that she gradually deduces more and more values of {\theta(k)}. However, because at every point in time she may require a different value of {\theta(k)} — this is the difference between the stationary environment and the i.i.d. environment! — there may be infinitely many times at which she has not yet been able to deduce the value of the parameter that she needs in order to make a prediction about the outcome tomorrow.
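
Here is a simulation sketch of the example (my own code; the convention I use is that {\theta(j)} is the outcome on the j-th peaceful day after a war): whenever a run of peaceful days is longer than any run seen before, Ursula's prediction is 1/2, 1/4, 1/4 while Roxana's puts 1/2 on the true value of {\theta}; on every other day the two predictions coincide.

```python
import random

# War/economy example: each day war (W) erupts with probability 1/2; otherwise the
# outcome is theta(j), where j is the number of peaceful days since the last war
# (counting today). Roxana knows theta; Ursula starts from a uniform belief over theta.
random.seed(1)
theta = {}                                    # the true parameter, drawn lazily

def true_theta(j):
    if j not in theta:
        theta[j] = random.choice("BG")
    return theta[j]

learned = {}                                  # values of theta Ursula has deduced so far
k, disagreements, days = 0, [], 2000          # k = current run of observed peaceful days
for day in range(days):
    roxana = {"W": 0.5, true_theta(k + 1): 0.5}
    if k + 1 in learned:
        ursula = {"W": 0.5, learned[k + 1]: 0.5}
    else:                                     # theta(k+1) still unknown to Ursula
        ursula = {"W": 0.5, "B": 0.25, "G": 0.25}
    if ursula != roxana:
        disagreements.append(day)
    outcome = "W" if random.random() < 0.5 else true_theta(k + 1)
    if outcome == "W":
        k = 0
    else:
        learned[k + 1] = outcome              # Ursula deduces one more value of theta
        k += 1
print("first days with a disagreement:", disagreements[:10])
print("disagreements in", days, "days:", len(disagreements))
```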

You may notice that Ursula does succeed in making predictions most of the time. In fact, the situations in which she fails become more and more rare, since they occur only after longer and longer blocks of peaceful days. Indeed, Nabil and I formalize this idea and show that this is the case in every stationary environment with structural uncertainty: the observer makes predictions approximately as if he knew the parameter on almost every day. For that, we use a weak notion of merging which was suggested by Lehrer and Smorodinsky. If you are interested then this is a good time to look at our paper.

Finally, the example given above is our adaptation of an example that first appeared in a paper by Boris Yakovlevich Ryabko. Ryabko’s paper is part of a relatively large literature about non-Bayesian prediction in stationary environments. I will explain the relationship between that literature and our paper in another post.

The news of Stanley Reiter’s passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age) and survived the Battle of the Bulge. On the wings of the GI Bill he went through City College, which, in those days, was the gate through which many outsiders passed on their way to the intellectual aristocracy.

But in the importance and noise of to-morrow
When the brokers are roaring like beasts on the floor of the Bourse

Perhaps a minute to recall what Stan left behind.

Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best-known example of this is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.

Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.

In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete optimizing”, SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce a tour at least as good as the tours reported in earlier papers. The ideas were extended with Don Rice into a local-search heuristic for non-concave mixed integer programs, along with a computational study of its performance.

Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then, on to Northwestern to build two groups: one in the Economics department and another (in collaboration with Mort Kamien) in the business school.

Four agents are observing infinite streams of outcomes in {\{S,F\}}. None of them knows the future outcomes, and as good Bayesians they represent their beliefs about unknowns as probability distributions:

  • Agent 1 believes that outcomes are i.i.d. with probability {1/2} of success.
  • Agent 2 believes that outcomes are i.i.d. with probability {\theta} of success. She does not know {\theta}; she believes that {\theta} is either {2/3} or {1/3}, and attaches probability {1/2} to each possibility.
  • Agent 3 believes that outcomes follow a Markov process: every day’s outcome equals yesterday’s outcome with probability {3/4}.
  • Agent 4 believes that outcomes follow a Markov process: every day’s outcome equals yesterday’s outcome with probability {\theta}. She does not know {\theta}; her belief about {\theta} is the uniform distribution over {[0,1]}.

I denote by {\mu_1,\dots,\mu_4\in\Delta\left(\{S,F\}^\mathbb{N}\right)} the agents’ beliefs about future outcomes.

We have an intuition that Agents 2 and 4 are in a different situation from Agents 1 and 3, in the sense that they are uncertain about some fundamental properties of the stochastic process they are facing. I will say that they have `structural uncertainty’. The purpose of this post is to formalize this intuition. More explicitly, I am looking for a property of a belief {\mu} over the space of outcome sequences {\Omega=\{S,F\}^\mathbb{N}} that will distinguish between beliefs that reflect some structural uncertainty and beliefs that don’t. This property is ergodicity.
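
A small sketch of how the structural uncertainty shows up in updating (my own illustration, with hypothetical sample counts): Agent 1's predictions never move, while Agent 2's prediction about the next outcome drifts toward 2/3 or 1/3 as the data tell her which value of {\theta} she is facing.

```python
# Agent 1 vs. Agent 2 from the list above. Agent 1 predicts success with
# probability 1/2 forever. Agent 2 puts prior 1/2 on theta = 2/3 and 1/2 on
# theta = 1/3, updates by Bayes' rule, and predicts with the posterior mean.
def agent2_prediction(successes, failures):
    like_high = (2 / 3) ** successes * (1 / 3) ** failures   # likelihood if theta = 2/3
    like_low = (1 / 3) ** successes * (2 / 3) ** failures    # likelihood if theta = 1/3
    p_high = like_high / (like_high + like_low)              # posterior on theta = 2/3
    return p_high * (2 / 3) + (1 - p_high) * (1 / 3)

print("agent 1 always predicts:", 0.5)
for s, f in [(0, 0), (3, 1), (12, 4), (40, 10)]:             # hypothetical sample counts
    print(f"agent 2 after {s} successes, {f} failures:", round(agent2_prediction(s, f), 3))
# As the sample grows, agent 2's prediction converges to 2/3 (or 1/3): her belief
# is a mixture of two ergodic (i.i.d.) measures, and the data reveal which one.
```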



Abraham Neyman and Sergiu Hart are two of the most prominent mathematical game theorists to date. Neyman has contributed immensely to the study of the Shapley value, stochastic games, and repeated games and complexity. Hart has contributed significantly to the study of correlated equilibrium and the adaptive processes leading to it, value theory, and the formation of coalitions.
Both Abraham and Sergiu will turn 66 next year. To celebrate this rare occasion, the Center for the Study of Rationality at the Hebrew University of Jerusalem is organizing two conferences, one in honor of each of them. The conference in honor of Abraham will be held on June 16–19, 2015, and the conference in honor of Sergiu will follow on June 21–24, 2015.
Mark the dates and reserve tickets.

You may have heard about ResearchGate, the so-called Facebook of scientists. Yes, another social network. Its structure is actually more similar to Twitter’s: each user is a node and you can create directed edges from yourself to other users. Since I finally got rid of my Facebook account (I am a bellwether; in five years all the cool guys will not be on Facebook), I decided to try ResearchGate. I wanted a stable platform on which to upload my preferred versions of my papers so that they will be the first to pop up on Google. Also, I figured that if I am returning to blogging then I need stuff to bitch about. ResearchGate only partially fulfills the first goal, but it does pretty well with the second.


Last week I wrote a post about two issues with Elsevier’s e-system, the system that all journals run by Elsevier, including Games and Economic Behavior and the Journal of Mathematical Economics, use for handling submissions: the fact that sometimes reviewers can see the blinded comments that other reviewers wrote to the editor, and the user agreement that allows Elsevier to change its terms without notifying the users.

After I corresponded with the editors of Games and Economic Behavior and the Journal of Mathematical Economics and with the Economics Editor of Elsevier, the reason for the privacy breach became clear: the e-system allows each editor to choose whether the blinded comments of a referee to the author and the blinded comments of a referee to the editor will be seen by the other reviewers. For each type of blinded comment, the editor can decide whether or not to show it to all reviewers. Each editor makes his or her own choice. I guess that editors are often not aware of this option, and they do not know what choice the previous editor, or the one before him, made.

Apparently, the configuration of Games and Economic Behavior was to allow reviewers to see only the blinded comments to the author, while the configuration of the Journal of Mathematical Economics was to allow reviewers to see both types of blinded comments. Once the source of the problem became clear, Atsushi Kajii, the editor of the Journal of Mathematical Economics, decided to change the configuration so that the blinded comments of reviewers to the editor will not be seen by other reviewers. I guess that in a few days this change will become effective. Elsevier also promised to notify all of its journals in which the configuration was like that of JME about this privacy issue, and to let the editors decide whether they want to keep this configuration or change it. In case the configuration remains, they will add a warning to referees that their blinded comments can be read by other reviewers.

I am happy that the privacy breach came to a good end, and that in the future the e-system will protect the privacy of the referees.

Regarding the second issue, Elsevier is not willing to change its user agreement. Reading the user agreements of other publishers, like Springer and INFORMS, shows that user agreements can be reasonable, and not all publishers reserve the right to change the user agreement without notifying the users. The Economics Editor of Elsevier wrote: “This clause is not unreasonable as the user can choose to discontinue the services at any time.” As I already wrote in the previous post, I choose to discontinue the service.
