You are currently browsing the monthly archive for October 2010.

Sorry to disappoint: I don’t mean inappropriate in the Jeff Ely sense, but in the overmathematical sense. This post is intended for people who have had an intro econ PhD sequence, at least as far as expected utility theory.

Nabil was showing me a question he asks the MBAs about their preference between two gambles with the same mean and variance. There is no right answer; the idea is just to show them that distinct distributions can have the same mean and variance. With some normalization, the gambles boil down to: A: (1, .5;-1, .5) vs. B:(2, .125; 0, .75; -2, .125). You could think of the units as thousands or ten-thousands to make it more interesting. When Nabil showed me the problem, I said, only half-joking, “I don’t know, I’d have to think about it; I’ve never decided whether I’m kurtosis-averse.” Indeed (as my discussion will confirm), neither gamble second-order stochastically dominates the other, i.e. being risk-averse (having a concave utility function) doesn’t tell you which to choose. I decided to see what would be preferred for various CARA or CRRA utilities, and discovered the following:
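As a quick sanity check, here is a short computation of the first four raw moments of the two normalized gambles (a sketch in Python; the payoffs and probabilities are exactly as above):

```python
# The two normalized gambles, as (payoff, probability) pairs.
A = [(1, 0.5), (-1, 0.5)]
B = [(2, 0.125), (0, 0.75), (-2, 0.125)]

def moment(gamble, k):
    """k-th raw moment E[X^k] of a discrete gamble."""
    return sum(p * x**k for x, p in gamble)

for k in (1, 2, 3, 4):
    print(k, moment(A, k), moment(B, k))
# Moments 1-3 agree (0, 1, 0); the 4th moments differ: 1 for A, 4 for B.
```

So the two gambles match in mean and variance (and, by symmetry, in all odd moments) but differ in kurtosis, which is the whole point of the exercise.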

Theorem: Let A and B be two bounded gambles which are symmetric about the same mean. If each moment of B is at least as big as each moment of A, A is weakly preferred to B by every CARA or CRRA utility function. If at least one inequality is strict, the preference is strict.

Proof: Note that symmetry means all odd central moments are 0. Expanding any CARA or CRRA utility as a power series centered at the mean, we find that all even coefficients (beyond the constant term) are negative. These series converge absolutely and uniformly on the (bounded) range of the gambles, so we may apply linearity of expectation term by term. Q.E.D.

That is, people with such utility functions are kurtosis-averse (and 6th-moment-averse, and 8th-moment-averse…) So any CARA or CRRA person prefers A to B above. Apparently most MBAs also pick A; my sense is that a small probability of a large risk tends to loom large in one’s mind. I admit to a similar psychological bias; I would force myself to overcome it if there were a good reason, but if I can support the decision with any CARA or CRRA function, that sounds all right to me.
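To make this concrete, here is a small numerical check that A is preferred to B for a range of CARA and CRRA utilities (a sketch; the risk-aversion coefficients and the CRRA wealth level w0 are arbitrary choices of mine, not part of the problem):

```python
import math

A = [(1, 0.5), (-1, 0.5)]
B = [(2, 0.125), (0, 0.75), (-2, 0.125)]

def eu(gamble, u):
    """Expected utility of a discrete gamble."""
    return sum(p * u(x) for x, p in gamble)

# CARA: u(x) = -exp(-a*x), for several risk-aversion coefficients a.
for a in (0.1, 0.5, 1.0, 2.0):
    u = lambda x, a=a: -math.exp(-a * x)
    assert eu(A, u) > eu(B, u)

# CRRA over final wealth w0 + x (w0 = 10 keeps wealth positive).
w0 = 10.0
for r in (0.5, 2.0, 5.0):
    u = lambda x, r=r: math.log(w0 + x) if r == 1 else (w0 + x)**(1 - r) / (1 - r)
    assert eu(A, u) > eu(B, u)

print("A preferred in every case tested")
```

The margins for CRRA are tiny at w0 = 10 (the gambles are small relative to wealth), but the sign is always as the theorem predicts.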

So what kind of function chooses B? By my claim above there is a concave function that does, and indeed:

Claim: Let A and B be gambles with the same mean. Normalize this mean to 0, and suppose ${E[|A|]>E[|B|]}$. Then any concave, piecewise linear utility function with a unique kink at 0 will prefer B to A.

Proof: Simple calculation is left to the reader.
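For the two gambles above, the calculation takes three lines (the slopes 1 and 2 are my own illustrative choice; any concave kinked pair works):

```python
def u(x):
    """Concave, piecewise linear, unique kink at 0: slope 1 on gains, 2 on losses."""
    return x if x >= 0 else 2.0 * x

A = [(1, 0.5), (-1, 0.5)]
B = [(2, 0.125), (0, 0.75), (-2, 0.125)]

def eu(gamble):
    return sum(p * u(x) for x, p in gamble)

print(eu(A), eu(B))  # -0.5 vs -0.25: B is preferred
```

Intuitively, with mean 0 the expected utility is just (gain slope minus loss slope) times ${E[|X|]}/2$, so the gamble with the smaller mean absolute deviation wins; here ${E[|A|]=1 > E[|B|]=0.5}$.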

Such piecewise linear functions are often used in a simplified version of prospect theory. So, that part of prospect theory tends to select B, but the overweighting of small probabilities tends to select A. Florian Herold has a paper about disentangling these aspects of prospect theory. I’m getting too tired to think about how it applies; perhaps he would like to comment.

Bottom line: I’ve always thought that CRRA sounded pretty reasonable. Today I learned that if I want to stick with this, I’m kurtosis-averse, and also…what should we call the 6th moment? Sextosis-averse? (nod to Jeff.)

P.S.: An interesting not-so-technical question about reference points: On the actual homework, the mean was 5 and not 0. I know when I looked at this, my psychological reference point instantly became 5, and my feelings about gains and losses went accordingly. Would this also be true of MBAs? Or would they not be so quick to recognize symmetry and “translate” their expectations?

I’ve been sitting in on our introductory Decision Science course for MBAs, which I’m planning to teach for the first time next year. One recent topic was the “flaw of averages” (a nice catchphrase due to Sam Savage, introduced into our course by Nabil Al-Najjar.) In mathematical terms, letting X be an exogenous random variable, a be a decision parameter, and f be any function not linear in X, this says

${{\rm argmax}_a E[f(a,X)] \neq {\rm argmax}_a f(a,E[X])}$

In plain English: when you make a decision, do not assume that uncertainty will always take on its average, or “expected,” value. This leads me to a related point, and to my own catchphrase (see title), which I hope the students will find useful. “Expected value” is an awful piece of terminology, as judged by its (very weak) relationship to the English word “expected.” I’m certainly not the first to point this out: it is possible, of course, for X to never come close to its “expected” value. The real question is why the term persists when we have the perfectly clear term “average value” available. No better reason than our QWERTY keyboard, I suppose; once everyone is used to writing E for expectation, it’s hard to shake. Anyway, I think I’ll be showing a slide with the motto in the title next year; hopefully the students will find it memorable. It will be tempting, I suppose, to reinforce it with a brief clip from Monty Python: “No one expects the Spanish Inquisition!”
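A toy numerical illustration of the inequality (the numbers are entirely my own: a newsvendor orders a units at cost 3, sells at price 10, and demand X is uniform on {0, …, 10}, so E[X] = 5):

```python
price, cost = 10, 3
demands = range(11)  # demand X uniform on {0,...,10}, so E[X] = 5

def profit(a, x):
    """f(a, x): profit from ordering a units when demand turns out to be x."""
    return price * min(a, x) - cost * a

def expected_profit(a):
    return sum(profit(a, x) for x in demands) / len(demands)

best_for_mean = max(range(11), key=lambda a: profit(a, 5))  # argmax_a f(a, E[X])
best_overall = max(range(11), key=expected_profit)          # argmax_a E[f(a, X)]
print(best_for_mean, best_overall)  # 5 vs 7: planning for "the" average misorders
```

Ordering for average demand gives 5 units, but the true expected-profit maximizer is 7, because f is not linear in X: unsold units cost only 3 while a lost sale costs 7 in margin.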

In Israel, to win the weekly Lotto one has to guess 6 numbers out of 37 correctly, and in addition to guess one number out of 8. On September 21 the numbers that came up were 13, 14, 26, 32, 33, 36, and the seventh number was 2. On October 16, very surprisingly, the very same 6 numbers were drawn, and the seventh number was 8. The country was left open-mouthed: how can such a thing happen? Within one month, the same six numbers were chosen. A statistics consultant to the lottery organizer said that this event is statistically very rare, and nobody thought to doubt this statement. But I was not convinced.
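Here is the kind of back-of-the-envelope calculation that made me skeptical (a sketch; the number of draws is my own assumption for illustration, not a figure from the lottery). With C(37,6) ≈ 2.3 million equally likely six-number sets, this is a birthday problem, and the chance that *some* pair of draws matches grows quickly:

```python
import math

combos = math.comb(37, 6)  # 2,324,784 equally likely six-number sets

def p_some_repeat(n):
    """Probability that at least two of n independent draws share the same six numbers."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (combos - i) / combos
    return 1 - p_distinct

# e.g. two draws a week for ten years is about 1,040 draws (my assumption)
print(round(p_some_repeat(1040), 3))  # roughly 0.2: hardly a miracle
```

Of course the probability that two *specific* consecutive-month draws match is tiny; it is the probability of a repeat *somewhere* in the lottery's history that is surprisingly large.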

One of the pleasures of quaerere verum in the groves of academe is what one learns from one’s students. Three weeks ago, in a discussion about farmers and futures, Thomas Blank, a student in my class, related that in the US there was no futures market in onions. Neither neglect nor laziness was the cause, but legislation. I was, of course, gobsmacked. What offence could the doughty and flavorful onion have caused to merit such a fate? One can eliminate an explanation based on special features of the onion; otherwise no legislation would be needed. Perhaps it is a `Murikan thang’. Apparently not. As I learnt from Thomas, in January 2008 the Indian National Commodity and Derivative Exchange decided not to pursue onion futures trading. The exchange’s director observed that the onion is “difficult to store…”

The prohibition on an onion futures market has, as these things often do, an amusing history. It begins in the 1950s, when a futures market in onions existed. Legend has it that a New York onion grower and a Chicago distributor of the same conspired to rig the market to drive onion prices down. So successful were they that farmers paid dealers to dispose of their stock (negative prices!). The dealers dumped the onions and sold the burlap sacks in which they arrived. Onion farmers in Michigan agitated, Congressman Gerald Ford took up the cause, and in 1958 Public Law 85-839 was born.

No contract for the sale of onions for future delivery shall be made on or subject to the rules of any board of trade in the United States.

The prohibition on onion futures brings to mind Alvin Roth‘s paper on repugnance. However, it does not easily fit amongst the variety of examples that he lists. First, trade in onions itself is not repugnant because it is allowed. Bilateral futures contracts with companies like ConAgra are not unusual. So, the type of contract under consideration is not repugnant. The law prohibits a centralized exchange. Why? Once on the books, it is easy to see why there may be no political will to remove it. But, why is it on the books in the first place?

David Jacks suggests that it is an aversion to speculators and middlemen that explains the prohibition. Speculators are seen as information monopolists and thus despised. Jacks notes that amongst those first against the wall come the revolution, speculators would have been high on Lenin’s list. Lincoln, as well, thought that speculators deserved to have their heads shot off (it follows by Beck’s theorem, of course, that Obama is a Leninist).

Tonight at the department party we were continuing the debate on the usefulness of game theory. Part of the argument was about whether the math has any utility, or if you could get just as much from verbal arguments about strategy as in Schelling’s work. Rakesh had some good positive examples in pricing and auctions which perhaps he’ll write up here if he has time. Here’s one that came to my mind just now. You may not consider poker “real world” but bear with me; I think it illustrates a general point.

Consider the simplified model of poker in von Neumann-Morgenstern. In one of the versions, there is a result that the unique equilibrium strategy is to bet with the best 30% and worst 10% of hands. Now, nothing nearly so simple is optimal in real poker, and furthermore all poker players, even the least mathematical, know that they should bluff sometimes. So what was gained by this exercise? Well, we have a qualitative recommendation to bluff only with your very worst hands. This is far from obvious, and I never thought of it before doing the mathematical exercise. True, I can translate it into a verbal argument, as follows: bad hands and medium hands become equal if you bet with them, since they both lose whenever (or almost whenever) you get called. But medium hands are significantly better than bad hands when you check, since they may win the pot in a showdown. So you bluff with bad hands, not of course because they are better to bluff with, but because they are worse to check with. Note that this logic only applies fully when you are last to speak in the last round of betting. In earlier rounds, “semi-bluffs” where you hope for a fold but have chances if called are a common part of good strategy, and are more common than pure bluffs.
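The verbal argument can be put into a two-line computation. This is a stylized last-round spot of my own devising (not von Neumann's actual model): the pot is 1, a bluff bet is 1, the opponent folds with probability q, and any hand that calls beats both our bad hand and our medium hand.

```python
pot, bet, q = 1.0, 1.0, 0.5
p_showdown_win = {"bad": 0.0, "medium": 0.3}  # chance of winning if we check to showdown

def ev_bet(hand):
    # Identical for bad and medium hands: both lose whenever the bluff is called.
    return q * pot - (1 - q) * bet

def ev_check(hand):
    return p_showdown_win[hand] * pot

for hand in ("bad", "medium"):
    gain = ev_bet(hand) - ev_check(hand)
    print(hand, gain)  # the gain from bluffing is strictly larger for the bad hand
```

Betting has the same value for both hands, while checking is strictly better for the medium hand, so whenever bluffing is worthwhile at all, it should be done with the bad hands.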

Enough about poker; here is the broader point of the example. Solving the problem mathematically imposes a discipline on our reasoning process which forces us to discover an important qualitative insight we could easily have missed otherwise. True, I am sure some strong poker players came to this insight intuitively over the years without formal study of Bayesian games, but many players surely missed it. The examples Rakesh was discussing seem similar to me. Yes, once you hear certain insights described in words, you may decide we never needed the math. But this is much too facile, akin to thinking that every problem is easy once you’ve seen the answer. Any illuminating chain of reasoning can be missed as easily as found, and formal models can channel our reasoning in the right direction.

Jeff and Eran drew our attention to Ariel’s afterword to the new printing of von Neumann and Morgenstern’s book, where he wonders about the usefulness of game theory for the “prediction of behavior in strategic situations” and to “improve performance in real-life strategic situations”. I must say that I disagree with Ariel and Eran: I believe that game theory does improve the world (when properly applied), and it can improve performance in real-life strategic situations.

Some interactions in life are complex, some are pretty trivial. Game theory is not advanced enough to handle complex situations, but it can manage simple ones. This is similar to analyzing, e.g., water flow in pipes: one can analyze the way water flows in a pipe, but the theory cannot handle flows in dented pipes. Physics has advanced enough to allow running simulations to analyze flows in dented pipes; economics and psychology have not made comparable progress, so we will have to wait until we can run simulations of human behavior.

As I wrote in a previous post, game theory teaches us insights, like “think strategically”, or “the belief of the other player about the states of nature may differ from your belief”. These insights are the pearls of the theory, and they can help us when facing strategic interactions.

Story 1: I used to give popular talks on game theory. My father, who has 12 years of formal education and runs a printing press, attended one of them. In that talk I told the audience that one should think strategically in a strategic interaction, and put oneself in the shoes of the other player. A few days later my father had to print a newspaper for a new client whom he did not know. My father, being a careful manager, asked the client to pay for the whole job before the printing machine started running. The client agreed. A few minutes before the job was scheduled to go on the printing machine, my father got a phone call from the printing press: the client had paid only 80% of the amount, saying that he would pay the rest after the job was done. My father’s first reaction was to cancel the job: the client had not kept the payment arrangement. Then he thought about his game theorist son, and about what his son had told him: put yourself in the shoes of the other player. He did. And he realized that if he were the client, he too would be reluctant to pay the whole sum up front: this was the first time the client was working with this printing press, and he did not know whether they did a good job, or a job on time. My father decided to give game theory a chance and told his workers to print the job. The ending was happy, and the rest of the money was paid after the job was done.

Story 2: In the last several years of her life, my grandmother spent most of her time on the couch, watching TV, reading, and solving crosswords. One day she asked me to buy her a few crossword booklets. I did. Then she asked me how much they cost, because she wanted to pay for them. I told her it was nothing, a present from me to her: the booklets cost about $20, nothing compared to what one spends on kids, and anyway my income was higher than hers. She insisted. I thought about what she would do if I did not tell her, and I realized that she would never again ask for these booklets or for other things she needed, and then she would suffer for it. I told her it was $20 and everyone was happy.

One can dismiss these stories; after all, they involve very simple interactions. One may say that the reasoning is more psychological than game theoretic. Maybe, but I reached these insights knowing game theory and being ignorant of psychology. My conclusion from these and other similar stories is that game theoretic thinking does improve the world.

Browsing through Keynes’ “A Treatise on Probability,” I came across a pretty nugget which Keynes credits to Laplace. Suppose you want to make a fair decision via coin flip, but are afraid the coin is slightly biased. Flip two coins (or the same coin twice), and call it “heads” if the flips match, tails otherwise. This procedure is practically guaranteed to have very small bias. In fact, if we call ${b_i = P_i(H)-P_i(T)}$ the bias of flip ${i}$, a quick calculation shows that the bias of the double flip is ${b_1b_2}$, so that a 1% bias would become a near-negligible .01%.

I noticed that we can extend this; consider using ${n}$ flips, and calling the outcome “heads” if the number of tails is even, “tails” if it is odd. An easy induction shows that the bias of this procedure is ${b_1b_2...b_n}$, which of course goes to 0 very quickly even if each coin is quite biased. Here also is a nice direct calculation: consider expanding the product

${b_1b_2 \ldots b_n = (P_1(H) - P_1(T)) \cdots (P_n(H) - P_n(T))}$

The magnitude of each of the ${2^n}$ terms is the probability of a certain sequence of flips; the sign is positive or negative according to whether the number of tails is even or odd. Done.
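The induction is easy to confirm by brute-force enumeration over all ${2^n}$ flip sequences (a quick sketch; the bias values are arbitrary):

```python
import itertools
import math

def parity_bias(biases):
    """P(even number of tails) - P(odd number of tails), where flip i has bias
    b_i = P_i(H) - P_i(T), i.e. P_i(H) = (1 + b_i)/2 and P_i(T) = (1 - b_i)/2."""
    total = 0.0
    for flips in itertools.product((0, 1), repeat=len(biases)):  # 1 = tails
        p = 1.0
        for b, t in zip(biases, flips):
            p *= (1 - b) / 2 if t else (1 + b) / 2
        total += p if sum(flips) % 2 == 0 else -p
    return total

biases = [0.3, -0.1, 0.25, 0.5]
print(parity_bias(biases), math.prod(biases))  # the two agree
```

Even with flips this badly biased, the parity procedure's bias is ${0.3 \cdot 0.1 \cdot 0.25 \cdot 0.5}$ in magnitude, under half a percent.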

I can hardly believe it of such a simple observation, but this actually feels novel to me personally (not to intellectual history, obviously.) Not surprising exactly, but novel. I suppose examples such as the following are very intuitive: The last (binary) digit of a large integer such as “number of voters for candidate X in a national election” is uniformly random, even if we know nothing about the underlying processes determining each vote, other than independence (or just independence of a large subset of the votes.)

Via Jeff, from Ariel Rubinstein’s afterword (pdf) for a reprinting of von Neumann and Morgenstern’s book, in which Ariel states his opinion about the usefulness of game theory for real-life strategic interactions (emphasis mine):

According to this opinion, Game theory does not have normative implications and its empirical significance is very limited. Game theory is viewed as a cousin of logic. Logic does not allow us to screen out true statements from false ones and does not help us distinguish right from wrong. Game theory does not tell us which action is preferable or predict what other people will do. If game theory is nevertheless useful or practical, it is only indirectly so. In any case, the burden of proof is on those who use game theory to make policy recommendations, not on those who doubt the practical value of game theory in the first place.

And, by the way, I sometimes wonder why people are so obsessed in looking for “usefulness” in economics generally and game theory in particular. Should academic research be judged by its usefulness?

Readers of this blog can hopefully guess my view on this issue. I don’t view game theory as a mathematical or logical exercise (I am not sure that’s Ariel’s view either), but I have never found it useful in my own interactions with fellow human beings. As Rubinstein says, the burden of proof is on those who use game theory to make policy recommendations, and I have never seen such a proof: I have never come across an example in which a theorem or a definition or an insight from game theory turned out to be useful in making a policy recommendation or in predicting human behavior in strategic situations. But that doesn’t say much, since I have little patience to look into such proclaimed proofs and I usually just shrug them off without studying them carefully. The reason is that even if there were situations in which game theory turned out to be useful in this sense, it wouldn’t make game theory more exciting for me.

Which brings me to Rubinstein’s question about judging academic research. To be sure, some academic enterprises have practical usefulness, sometimes in ways that were not originally foreseen. The applicability of number theory to encryption protocols is a wonderful example. But that’s not the reason prime numbers are so fascinating, nor is building bridges the reason we are curious about the laws of the universe. Similarly, while I can see several reasons to be driven to study game theory, I doubt that any of us has done so to improve our performance in strategic situations. So why do so many game theorists feel the need to justify their interest in game theory by appealing to real-life applicability?

Btw, my feeling is that most of our seniors don’t agree with Ariel here. You can see this in the round-table discussions at conferences. While not everyone actually claims that game theory is useful for policy making right now, the premise is always that this is our ultimate goal. But I believe Ariel’s position is relatively popular among juniors. Read into this what you will.

Noam Nisan has a very nice post on the Braess Paradox and the different issues that it raises for economists and for computer scientists. Recommended reading.

The game theory group at Tel Aviv University is organizing an international conference on game theory. The conference will include only a few talks; the plan is to have about 6 talks a day (a total of about 18), so that people hear the best of game theory without being overwhelmed by too many talks. We will also have one day dedicated to an excursion to Jerusalem. For more details, go to the following link.