At dinner with a visiting speaker a few years ago, I happened to mention that beginners rarely take enough risks in the opening moves of a backgammon game. I think a normal person, even one who hasn’t played much backgammon, would have an idea what I meant by that. But academics have very particular ideas about terms that have taken on specific technical meanings in their discipline, and so the visitor said “Risky? That doesn’t make any sense: it’s a game with only two outcomes.” For purposes of this discussion let’s pretend he was right about the two outcomes, although this isn’t quite true. My purpose, after acknowledging that his statement makes sense under a certain formal view of risk, is to argue that we should be flexible enough to talk about a different notion of “risky move” which under some circumstances is a much better match for common parlance.
When economists talk about risk, we talk about uncertain monetary outcomes and an individual’s “risk attitude” as represented by a utility function. The shape of the function determines how willing the individual is to accept risk. For instance, we ask students questions such as “How much would Bob pay to avoid a 10% chance of losing $10,000?” and this depends on Bob’s utility function. If a game has only two outcomes, though, risk attitude becomes irrelevant. You should simply make whatever move gives a higher probability of winning. Note that your utility function would certainly enter into a decision of how much to bet on a game, or whether to play at all, but once the game starts you are simply “in it to win it.”
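The question about Bob can be made concrete. Here is a small sketch, assuming Bob has log utility and a hypothetical initial wealth of $50,000 (both the utility function and the wealth level are my assumptions, not part of the original question):

```python
import math

def max_payment_to_avoid(wealth, loss, prob, u=math.log):
    """Largest sure payment that leaves Bob no worse off, in expected
    utility, than facing the gamble: solve
    u(wealth - x) = prob * u(wealth - loss) + (1 - prob) * u(wealth)
    by bisection (u is increasing, so the left side decreases in x)."""
    target = prob * u(wealth - loss) + (1 - prob) * u(wealth)
    lo, hi = 0.0, loss
    for _ in range(80):
        mid = (lo + hi) / 2
        if u(wealth - mid) > target:
            lo = mid   # paying mid still beats the gamble; Bob can pay more
        else:
            hi = mid
    return lo

x = max_payment_to_avoid(50_000, 10_000, 0.10)
# the expected loss is $1,000; a log-utility Bob pays somewhat more
```

The gap between x and the $1,000 expected loss is Bob's risk premium, which is exactly what the shape of the utility function determines.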
Mallesh Pai offers the following explanation. My responses in quotes.
I think there’s an additional reason why e-books are priced at a loss on the Kindle. It’s not the one you argue, but it’s worth mentioning. For additional entertainment, I’ll make the claim that it’s the primary reason the pricing is as you describe.
The razor-blade analogy is a bad one: even though I really need only one blade at a time, my “lock in” (if we must be formal, lock in is how much my reservation price for a competing product decreases because of my owning/using this one) is only to the tune of my not wanting to walk over to the store and buy a new blade. Since any inventory of blades depletes over time, even if I buy, say, an 8-pack of blades, I’ll be free to buy the latest in razor technology (Gillette ProFusion Glide) in under three months’ time. With Kindles (or iPods), this is not true. Every time I buy a ‘blade’ equivalent (e-book, MP3), I’m increasing my own lock in to the platform. A hypothetical colleague, let’s call him N, having bought 30-40 e-books on the Kindle, is less willing to switch to an iPad, simply because switching would require him to recreate his book ecosystem on the iPad (not just buying the books, which is an expense, but also recreating things like his annotations on those books, which is almost impossible). In effect, therefore, subsidizing book sales is simply buying future profits: once you have a large user base, you can extract from it as a walled garden.
Interesting point… However, it would be in the interest of the book seller to have device independence for the e-book, and I think this would be technologically feasible. The annotations are a different matter. I would guess that this lock-in effect would apply to a small fraction of book buyers… OK, perhaps they buy in the largest volume…
Book sellers object to this because they’re wary of Amazon becoming the next iTunes, i.e. the dominant conduit to access end-users, which gives (gave) Amazon (iTunes) pricing power over them. As Amazon grows more dominant in this space, it has been flexing its muscle in places: for example, it dropped a (small) publisher from its physical store earlier this year for not joining its e-book store, and it offers authors who own the copyrights to their books a higher fraction of revenues if they make the e-book ‘fully’ available to Amazon. Most of the bone of contention hasn’t been about the prices; it’s been about Amazon retaining the right to price the books at the rates it likes (something it already has for all paper books it sells).
But publishers could deal with this via device independence…
The price fall following the entry of the Nook is predatory pricing. The reason they’re so focused on this, I think, is that they realize that in the end, profits will require having a locked-in user base like this. If I’m just buying a book, I can comparison shop over multiple websites. By offering things like Amazon Prime (unlimited free 2-day shipping for a low annual fee), Amazon makes me more likely to buy from it than from, say, B&N, but not necessarily by much. Once I have a Kindle and 30 books on it (à la the hypothetical N), I don’t comparison shop any more, and Amazon is now a monopolist seller to me: prices can drift upward over time, making Amazon more money, especially since it doesn’t have to maintain a physical infrastructure like its warehouses to sell e-books.
But why not do this at the beginning, before the Nook, iPad, etc.? If getting a large installed base of users is the real goal, shouldn’t they have been more aggressive when they faced no competition?
I think all the pricing strategies in the current tech industry should be understood as the majors attempting to funnel users into their walled garden:
1. Apple: the most blatant example, and the one people are most familiar with. Sell gadgets that perpetuate the monopoly of the iTunes conduit (now serving apps and books). They’re the furthest along in extracting profits from their walled garden (restrictive terms with developers and music labels) and even have pricing power on their iXxx gadget line, since the walled garden is so attractive to users.
2. Google: release everything else for free, give or take, to keep people using Google search, and keep collecting data for better search and targeting (for better ads). Their foray into Android is partly to prevent Apple from gaining control of the mobile space, to the detriment of GOOG’s ability to collect data or serve ads.
3. MSFT: mostly lost on how to get in on the consumer space (waiting for Windows Phone 7). They’ve locked in the corporate space by pulling the same trick: corporations’ legacy apps are mostly on Microsoft platforms, and hence corporations are unwilling to switch to anything else (say Linux or Google Docs) because that would require them to recreate the legacy apps built on top.
4. Facebook: trying to use the Facebook Like button that you see on a lot of websites to recreate a sort of private web, which would give them access to targeting info for ads that no one else has.
Lakshman Krishnamurthi and I have been wondering about the pricing of e-books and e-readers. Before Apple’s entry into the e-book market, Amazon was selling almost all its e-books for $9.99. In many cases, this was below the price Amazon paid to the publisher (according to the NYT of May 31st, 2009, about $13 to $14). Why? Second, why did publishers object to a model where Amazon sets the price of the book, particularly when Amazon was prepared to incur a loss to make the sale?
Let’s begin with Amazon’s pricing pre-iPad. The Kindle and e-books are a razor and blades business. So, shouldn’t Amazon be making its profits on books and not the Kindle? Perhaps Amazon sought to increase the installed base of Kindle readers. But one could have achieved that by dropping the price of the Kindle. Lakshman reasons that Amazon used the lack of competition in e-readers to set a reference price for the device. Admittedly, the Kindle was not the first e-reader, but it did create the greatest awareness of the product and pushed for content as well.
He points to how Apple launched the iPod in 2001. It started with a high price and sequentially skimmed the market with a range of products. Apple made the hardware expensive and the software (songs) cheap: the opposite of the razor/blades strategy. To quote Lakshman directly:
Apple got away with this because of the lack of competition and the significant differentiation of the iPod. Also, they did not lose any money on songs sold through iTunes. If Apple had started with a lower price on the iPod, it would have sold more iPods initially, but the prices of subsequent iPods would have come down more than they did, and it most likely would have made less money because in total it would not have sold a lot more songs.
So, Lakshman is arguing that if there is variation in willingness to pay for the device (iPod, e-reader), then the seller may have an incentive to deviate from the razor and blades model to price discriminate on the device. But this does not explain why Amazon would choose to lose money on the books. Lakshman speculates this might have been an attempt to force publishers to lower the price of e-books to Amazon. I don’t see why.
Turning to the publishers: if they received the same revenue from an e-book that they did from a traditional book, why should it matter if Amazon chooses to price at $9.99? This could possibly harm retailers of hardback books, but why should publishers care about that? Or perhaps $9.99 e-books reduce a publisher’s ability to price discriminate (hardback vs. paperback)?
Now consider Apple’s entry. It has resulted in a dramatic drop in the price of e-readers. In addition, Apple offered publishers an agency model: publishers set the price and Apple keeps a percentage of the selling price. Amazon has been forced to switch to the same model. The result is higher prices for e-books. Are publishers made better off?
Some arithmetic will be useful. Currently, under the non-agency model the publisher makes, say, $15 per book. Assuming a 30% fee to Apple or Amazon, the publisher would have to price the book at about $21.50 to recover the same revenue after paying the fee. Thus, the price of the book goes up by at least $6. So, a $120 reduction in the price of an e-reader would be wiped out after purchasing 20 books! Alternatively, perhaps the publisher would like to sell e-books at a price lower than hardback but above paperback. Say the publisher sets a price of $13 for the e-book. Under the agency model, the publisher pays Amazon or Apple $3.90. This leaves the publisher with $13 − $3.90 = $9.10, less than what they make now per e-book. The publisher comes out ahead only if e-book unit costs are smaller or there is a compensating increase in volume.
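The arithmetic is easy to check in a few lines. A sketch (the function names are mine; the 30% fee and the dollar figures are the ones used above):

```python
def agency_price_for_net(target_net, fee_rate=0.30):
    """Price the publisher must set under the agency model to keep
    target_net per book after the platform's percentage fee."""
    return target_net / (1 - fee_rate)

def publisher_net(price, fee_rate=0.30):
    """What the publisher keeps per book at a given agency price."""
    return price * (1 - fee_rate)

breakeven = agency_price_for_net(15.00)   # about $21.43: the ~$21.50 above
net_at_13 = publisher_net(13.00)          # 13 - 3.90 = the $9.10 above
```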
Now, let us step back from the details to ask the question we always tell our students to ask: who values what? To avoid the confusion caused by competition, it is useful to imagine a world consisting of a single publisher, a single e-reader producer, and a single reader. Suppose, to begin with, this reader is a ‘heavy reader’, meaning that before the advent of the e-book, they read copiously, purchasing books at about $20 a pop. In short, the e-book will not change their book-buying habits. The value they attach to the e-reader stems from the convenience and portability it provides. The producer of e-readers can capture no more value than what this heavy reader assigns to these features. The publisher can capture no more than the value the heavy reader assigns to the books the publisher puts out. In this world, there is no tension between publisher and producer of e-readers. Now suppose there is a second type of reader, one whose book consumption habits would change with an e-book. This is a reader who buys a book that they otherwise wouldn’t, because of the ease with which a purchase is possible on an e-reader. Think of read-once books, like Blink. You’ve heard people talking about it. It seems interesting (it isn’t, but that is another story), but not so interesting that one would make a special trip to the bookstore for it. For this second type of reader, value comes from the combination of e-reader and publisher. This is the source of the tension between e-reader producer and publisher. How to capture this value and divide it between them is the issue.
The oldest University-based Business schools are now well over a century old. Yet they are still viewed as less central to the University’s mission than, say, the study of the Law. Furthermore, every discussion of the purpose and significance of B-schools is tied to the MBA degree, suggesting that the MBA is the sole raison d’être of B-schools.
Unsurprisingly, I believe that B-schools have a significance independent of the MBA. The reason is simple. Trade is the wellspring of civilization. The struggle for commercial supremacy is as much a spur to advancement as the lust for political power and dominion over nature. Why a University should privilege the study of the last of these over the first makes not a whit of sense.
To those who lament the disciplinary silos that Universities have become, B-schools represent a remarkably successful model of interdisciplinary activity. In many ways, B-schools are to be emulated. In fact, they are. Look at Law Schools and Engineering schools.
There are two roads that lead from the local high school to Adam’s home; both pass through the forest. Every day Adam has to choose which road to take. But Adam should be careful: Bill, the school bully, loves to ambush Adam in the forest and bully him. If Adam chooses the road where Bill lurks, well, poor Adam. If he chooses the other road, he gets home safely.
Up to now, all Adam can do is toss a coin when he leaves school to decide which road to choose. Indeed, if Adam uses one road more often, Bill could learn that, and wait on that road, thereby having more opportunities to bully Adam.
But not all is dark. Eve finishes school one hour before Adam, and goes home directly after school. If Bill meets her on his way, he forgets Adam and follows her. If Adam knew which way Eve took, he could take the same road and be safe. But he does not.
If Eve chooses her way randomly, all Adam could do is again toss a coin: if he happens to choose the same road as Eve, or if Bill happens to wait on the other road – he is safe. But if he happened to choose the road where Bill lurks, and Eve chose the other road, well, then unlucky Adam. So if Eve’s choices are independent, the probability that Adam will meet Bill decreases to 1/4.
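The 1/4 is easy to confirm by simulation. A quick Monte Carlo sketch (the function name and parameters are mine):

```python
import random

def bullying_frequency(trials=100_000, seed=0):
    """Adam, Bill and Eve each pick one of two roads uniformly at
    random. Adam is bullied only when he picks Bill's road and Eve
    picked the other one (if Eve is on Bill's road, Bill follows her)."""
    rng = random.Random(seed)
    bullied = 0
    for _ in range(trials):
        adam, bill, eve = (rng.randint(0, 1) for _ in range(3))
        if adam == bill and eve != bill:
            bullied += 1
    return bullied / trials

freq = bullying_frequency()   # should be close to 1/4
```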
We could describe the situation as the following three player game, where Bill chooses a row, Adam chooses a column, and Eve chooses a matrix.
Indeed, if Eve chooses the left road, and Bill chooses the left road, then Bill will follow Eve, and Adam will be safe (payoff 0) whichever road he chooses. But if Eve chooses the left road, and Bill chooses the right road, then Adam will be bullied if he chooses the right road.
The situation would have been simple if Eve chose her road by a toss of a fair coin. But she does not. With probability p Eve chooses the same road she used yesterday, and with probability 1-p she chooses the other road (with p>1/2). Does this change the strategic analysis?
Bill knows which way Eve took yesterday: either he met her, and then he knows that she took the same road he did, or he did not meet her, and then he knows that she took the other road. This means that he has probabilistic information about which road she will take today.
Adam, on the other hand, knows which road Eve took yesterday only if he met Bill: in that case he knows that Eve took the other road. If he did not meet Bill yesterday, then either Bill waited yesterday along the other road, or Bill waited along the same road that he took but Eve fortunately chose that road as well.
If Bill wants to minimize Adam’s payoff, his optimal strategy seems simple: each day he should choose the road that Eve did not choose on the previous day. Indeed, if he matches Eve he will not be able to bully Adam, so it is better to mismatch her. Though this strategy seems optimal, it is not clear that it indeed is: it reveals to Adam which road Eve took today, and so it increases the probability that tomorrow Adam will choose the same road as Eve. For some values of p it may be better for Bill to randomize.
Let’s change the assumptions on the information of Bill and Adam. Suppose that (a) Bill knows which road Eve is going to choose, and (b) when he gets home, Adam knows which road Bill chose, and he remembers his own choice, but he does not remember whether he was bullied or not. The calculation of the optimal strategy for both Adam and Bill (and the calculation of the value) is much more difficult. Why? If Bill always mismatches Eve, then because Adam knows which road Bill chose, Adam can deduce which road Eve chose, and therefore he knows the road she is likely to choose tomorrow. Because Bill mismatches Eve, Adam is better off taking the same road Eve chose yesterday, so Adam’s payoff is -(1-p), again assuming p>1/2. If Bill ignores his information and plays randomly, then Adam’s payoff is -1/4, which is lower than -(1-p) for a range of p’s. The question, then, is what is the optimal way for Bill to use his information.
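The comparison between -(1-p) and -1/4 can be sanity-checked by simulation. A sketch, with Adam playing the replies assumed above (all names and parameters are mine):

```python
import random

def adam_payoff(p, bill_mismatches, days=200_000, seed=0):
    """Average payoff to Adam (-1 when bullied, 0 otherwise) when Bill
    observes Eve's choice each day. If bill_mismatches, Bill waits on
    the road Eve did NOT take; otherwise he flips a coin. Adam best
    responds: against the mismatching Bill he takes the road Eve took
    yesterday (which he can infer from Bill's last choice); against a
    coin-flipping Bill he just flips a coin himself."""
    rng = random.Random(seed)
    eve_prev = rng.randint(0, 1)
    total = 0
    for _ in range(days):
        # Eve's Markov walk: stay with probability p, switch otherwise
        eve = eve_prev if rng.random() < p else 1 - eve_prev
        bill = 1 - eve if bill_mismatches else rng.randint(0, 1)
        adam = eve_prev if bill_mismatches else rng.randint(0, 1)
        if adam == bill and eve != bill:   # bullied only if Eve is elsewhere
            total -= 1
        eve_prev = eve
    return total / days

mismatch = adam_payoff(0.9, bill_mismatches=True)    # about -(1-p) = -0.1
ignore = adam_payoff(0.9, bill_mismatches=False)     # about -1/4
```

For p = 0.9 the mismatching Bill does worse than the coin-flipping Bill, which is exactly the tension described above.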
This question turns out to be nontrivial, and for p close to 1 it is still open. I like games with incomplete information. I like games where the state variable changes (here the state variable is the road that Eve chooses). But they are so difficult to analyze. Can anyone lend a hand?
Since this is a game theory blog, I’ll stoop to a brief plug: I’m playing today in the round of 16 at the US bridge championship which started on Friday. It determines the US rep for worlds. The hands are broadcast live on the website Bridge Base Online, in case any casual bridge players are interested. It sometimes draws as many as 10,000 spectators; not quite World Cup, but exciting anyway :-). At this moment my team has a small lead on some former world champs in a match which started yesterday and concludes today; we are still underdogs, I would say. Play starts at 10 but I reenter the fray at 3:45 — a team has 6 members of which 4 play at a time. More information is at usbf.org.
To watch, go to the Bridge Base website, click “play bridge now,” and create a free account. Once logged in, click on “live vugraph.”
Final update: my team was ultimately eliminated in the semifinals after scoring two significant upsets to advance that far. In fact, the 4 semifinalists were originally ranked 1,2,3 and 20. One of our successful hands was covered in the Times.
Yisrael Aumann is 80. To celebrate this occasion, the Center for Rationality at the Hebrew University of Jerusalem organized a two-day feast, where most of Aumann’s students presented papers. I would like to write about the work that Itai Arieli, the 14th and youngest student of Aumann, presented, which is joint work with Yakov Babichenko, a Ph.D. student of Sergiu Hart.
Consider a multiplayer repeated game. Ask yourself: does there exist a simple decentralized algorithm that, if employed by the players, ensures that the long-run average payoff converges to the Pareto frontier of the set of feasible and individually rational payoffs?
Plainly, any feasible payoff is a convex combination of the payoffs in the matrix, and therefore for any desired target payoff one can construct an algorithm that repeats actions in a proper order and achieves the desired payoff as a long-run average payoff. But this algorithm must keep track of where it is in the cycle, and is therefore not considered simple. We seek a simpler algorithm, with a small memory.
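The bookkeeping such a cycling algorithm needs is easy to make concrete. A sketch for rational target weights (the function is hypothetical, not from the paper):

```python
from fractions import Fraction
from math import lcm

def cycle_for_target(profiles, weights):
    """Given action profiles and rational convex weights (Fractions
    summing to 1), return a finite cycle of profiles whose long-run
    average payoff is the corresponding convex combination."""
    period = lcm(*(w.denominator for w in weights))
    cycle = []
    for profile, w in zip(profiles, weights):
        cycle.extend([profile] * int(w * period))  # w * period is an integer
    return cycle

# target: 2/3 weight on profile (0, 0) and 1/3 weight on profile (1, 1)
cyc = cycle_for_target([(0, 0), (1, 1)], [Fraction(2, 3), Fraction(1, 3)])
```

Playing such a cycle requires each player to track his position in it, and that counter is exactly the memory the simpler algorithm is meant to avoid.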
This is the algorithm proposed by Arieli and Babichenko. Suppose that every player has an aspiration level that is stage dependent: at stage n player i would like to receive a payoff at least a(i,n). If his stage payoff at stage n is above his aspiration level at that stage – he is satisfied. Otherwise he is not satisfied.
Fix a natural number k. The algorithm proceeds in stages; in effect, the players play k copies of the game simultaneously.
In stage 1, each player randomly selects k actions, one for each copy of the game. The same action can be chosen several times. Calculate the average stage payoff of each player in all the k games. Set the aspiration level of each player to 0 (assuming all payoffs are non-negative).
If the average stage-payoff of player i is above his aspiration level for stage 1 (which is 0), his aspiration level for the second stage is epsilon plus his aspiration level for the first stage, and he will play in the second stage in each copy of the game the same action that he played in that copy at the first stage.
If the average stage-payoff of player i is below his aspiration level for stage 1, in stage 2 he will randomly choose k new actions for the k copies of the game, and his aspiration level will be his average payoff over all k copies of the game in all stages up to the current stage.
This procedure is repeated ad infinitum.
Arieli and Babichenko proved that, if k is large, the long-run average payoff will be epsilon-close to the Pareto frontier of the set of feasible and individually rational payoffs.
The argument, modulo technical issues, is as follows. Because k is large, the set of payoffs that can be supported as the average stage-payoff of k action profiles is epsilon-dense in the set of feasible payoffs. As long as the current average stage-payoff is not high for all players, at least one player randomly chooses a new set of k actions. Because the set of actions is finite, there are finitely many choices of k action profiles, and therefore in a bounded time the players will choose, by luck, k action profiles whose corresponding average stage-payoff is above the current aspiration level of every player. The players will then play these action profiles repeatedly, while the aspiration levels increase by epsilon at each stage. When the aspiration level exceeds the current average stage-payoff of at least one player, the players enter a new phase in which they randomly choose k action profiles, until a new average stage-payoff that exceeds the aspiration levels is found. This way the average stage-payoff is bound to increase in the long run, and stay around the Pareto frontier.
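The dynamic is simple enough to simulate. Here is a sketch of my reading of the algorithm (the game and the parameter choices are mine; since the dynamic is essentially exhaustive search, with a modest k and few stages it only crawls toward the frontier rather than reaching it):

```python
import random

def aspiration_play(payoff, n_players, n_actions, k=20, eps=0.01,
                    stages=20_000, seed=0):
    """Sketch of the aspiration dynamic: payoff(profile) returns a
    tuple of non-negative stage payoffs. Each player holds k actions
    (one per copy) and an aspiration level; a satisfied player keeps
    his actions and raises his aspiration by eps, a dissatisfied
    player redraws all k actions and resets his aspiration to his
    historical average payoff. Returns the long-run average payoffs."""
    rng = random.Random(seed)
    actions = [[rng.randrange(n_actions) for _ in range(k)]
               for _ in range(n_players)]
    aspiration = [0.0] * n_players
    cum = [0.0] * n_players
    for stage in range(1, stages + 1):
        avg = [0.0] * n_players        # average payoff over the k copies
        for copy in range(k):
            u = payoff(tuple(a[copy] for a in actions))
            for i in range(n_players):
                avg[i] += u[i] / k
        for i in range(n_players):
            cum[i] += avg[i]
            if avg[i] >= aspiration[i]:
                aspiration[i] += eps   # satisfied: raise the bar
            else:                      # dissatisfied: redraw everything
                actions[i] = [rng.randrange(n_actions) for _ in range(k)]
                aspiration[i] = cum[i] / stage
    return [c / stages for c in cum]

# a 2x2 game whose Pareto-best feasible payoff is (1, 1) at profile (0, 0)
game = {(0, 0): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (0.5, 0.5)}
longrun = aspiration_play(lambda a: game[a], n_players=2, n_actions=2)
```

In this example the long-run average ratchets well above the 0.375 that uniformly random play would give, but getting epsilon-close to (1, 1) requires a lucky draw of nearly all-(0, 0) copies, which is exactly the inefficiency of exhaustive search.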
In a sense, what is done is an exhaustive search, where the aspiration levels keep track of the best payoff vector found so far. This algorithm is simple and decentralized, because each player cares only about his average payoff and his aspiration level, but it is not efficient: it would take the players an enormous amount of time to converge to the Pareto frontier.
As is well known, there are simple, efficient, and decentralized algorithms that ensure that the play converges to the set of correlated equilibria (for example, the no-regret mechanisms developed by Hart and Mas-Colell). Hart and Mansour proved that the only decentralized algorithm that ensures convergence to the set of Nash equilibria is exhaustive search (more precisely, all decentralized algorithms that converge to the set of Nash equilibria do it in exponential time (or slower), as exhaustive search does). A natural question is whether there is an efficient decentralized algorithm that ensures convergence to the Pareto frontier of the set of feasible and individually rational payoffs. I hope that by Aumann’s 85th birthday we will have an answer to this question.
Imagine a collection of N matrices numbered from 1 to N; all the matrices have the same size. In each entry of every matrix are written two elements: a payoff that Bill pays to Anne, and the number of the payoff matrix that Anne and Bill will play the following day. The game is played as follows: each day Anne and Bill face one of the matrices; Anne chooses a row, Bill chooses a column, and this way an entry of the matrix is chosen. This entry determines the amount that Bill pays to Anne and the matrix that the two will face tomorrow; the amount is deducted from Bill’s bank account and added to Anne’s bank account.
The game that I just described is a stochastic game. Suppose that Anne would like to maximize the limit of her average payoff, and Bill wants to minimize this quantity.
If Anne and Bill observe the matrix that they face, then we are in the model of stochastic games studied by Mertens and Neyman, who proved that the value exists, it is equal to the limit of the discounted values, and they actually provided explicit epsilon-optimal strategies (which, unfortunately, cannot be computed efficiently).
Now suppose that Anne and Bill play in the dark: they do not observe the matrix that they face. They also do not observe each other’s choices or the amounts in their bank accounts. Nothing. Complete darkness. All that each of them knows is the structure of the game and his/her own past choices.
When payoffs are discounted, the value exists. Indeed, the discounted payoff is a continuous bilinear function over the space of mixed strategies, and a standard fixed point argument delivers the existence of the value. But the payoffs in our game are undiscounted. Does the value still exist?
Not surprisingly, the answer in general is negative: the value need not exist. To see this, we will present the game “choose the largest integer” as a stochastic game played in the dark. Suppose that there are three matrices, O, A and B, each with two rows and two columns (in fact, there will be three additional matrices that are absorbing: once they are reached, the play never leaves them). The matrix O is the initial matrix, the matrix A corresponds to “Anne chose a number smaller than Bill’s” (Bill may have chosen infinity), and the matrix B corresponds to “Bill chose a number smaller than Anne’s” (Anne may have chosen infinity).
In the matrix O the payoff is 0 whatever Anne and Bill choose. An entry with an asterisk, for example, the entry (B,R) in matrix O, means that if this entry is chosen, the payoff is 0, and the payoff in all subsequent stages is 0 as well: the game moves to an absorbing matrix with payoff 0.
Let us verify that this game corresponds to the game “choose the largest integer”. As long as the players choose (T,L), the play remains in matrix O. If Anne chooses B before Bill chooses R, the play moves to matrix A, where Anne’s choices do not affect the payoff or the transitions. Hence a strategy of Anne reduces to the determination of the first time she chooses B. Similarly, a strategy of Bill reduces to the determination of the first time he chooses R. If Anne chooses B before Bill chooses R, the long-run average payoff is -1: Bill wins. If Anne chooses B after Bill chooses R, the long-run average payoff is 1: Anne wins. If they choose B and R at the same time, the long-run average payoff is 0: the outcome is a draw. If Anne chooses B in finite time and Bill never chooses R, the long-run average payoff is 1: Anne wins. If Bill chooses R in finite time and Anne never chooses B, the long-run average payoff is -1: Bill wins. Thus, this game is indeed the game of “choosing the largest integer”, which does not have a value.
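The leapfrogging that kills the value can be seen in a few lines. A sketch with pure stopping-time strategies (the functions are mine; None stands for “never”):

```python
def payoff(anne_stop, bill_stop):
    """Anne's long-run average payoff in 'choose the largest integer':
    each player picks the stage at which to switch (None = never).
    The later switcher wins; switching simultaneously, or neither
    player ever switching, is a draw."""
    if anne_stop is None and bill_stop is None:
        return 0           # play stays in matrix O forever: a draw
    if anne_stop is None:
        return -1          # Bill named an integer, Anne never did
    if bill_stop is None:
        return 1           # Anne named an integer, Bill never did
    if anne_stop == bill_stop:
        return 0
    return 1 if anne_stop > bill_stop else -1

def best_reply_vs(anne_support):
    """Against any finite set of stopping times for Anne, stopping
    one stage later than all of them guarantees Bill the payoff -1."""
    return max(anne_support) + 1

support = [3, 17, 100]       # any finite menu of stopping times for Anne
b = best_reply_vs(support)   # Bill stops at 101 and beats every one of them
```

Since the same leapfrogging argument works with the roles reversed, neither player can guarantee anything better than losing against a best reply, and no value exists.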
But sometimes the value does exist. Two examples of classes of games where the value exists were given here. Roughly, suppose that the action sets of Anne and Bill coincide: the matrices are square. Suppose also that the transitions are as follows: states are divided into two groups, matching states and non-matching states; the initial state is a matching state. In matching states, as long as Anne and Bill choose the same action, the play remains in the set of matching states; once Anne and Bill choose different actions, the play moves to a non-matching state. From non-matching states, the play moves back to the initial state (which is a matching state). In such games, a player knows that either the other player matched him, in which case he knows the other player’s action and therefore the identity of the current state, or the other player did not match him, in which case in the following stage the play will return to the initial state.
This is interesting, but there are very simple games that do not fall under the description in the previous paragraph, and therefore we do not know whether their value exists. For example, suppose that there are two matrices, and transitions between the two are general. Does the value exist? Does anyone have a clue?