During last night’s NBA Finals game, the announcer stated that Manu Ginobili has a better career winning percentage than the other members of the Spurs’ “Big Three” (Duncan, Parker, and Ginobili). This means, I presume, that the Spurs have won a higher percentage of the games in which Ginobili plays than of the games in which Duncan plays, or in which Parker plays. If so, this is a Really Stupid Statistic. I will leave the reason “as an exercise” and assume a commenter can explain why; if not, I’ll post tomorrow.
My colleague James Schummer was kind enough to point me to the following news item, which quotes George Lucas as predicting that one day a cinema ticket will cost $150. Mr. Lucas (and Mr. Spielberg) believe that the movie industry will, over time, concentrate on big budget productions, and that these will cost more to see in the cinema. Indeed, one senior equity analyst at Morningstar is quoted as saying:
differentiated pricing according to the movie’s budget makes economic sense.
I think this is the sentence that caught my colleague’s eye.
He and I teach a class on pricing. In it we emphasize the simple point that fixed costs are irrelevant for deciding the price. If I could repeat this FACT in large friendly letters, like the warning on the Hitchhiker’s Guide to the Galaxy, I would. So, the idea that a movie’s budget should determine its ticket price is not merely wrong but risible.
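To see why, consider the textbook monopoly problem: demand $D(p)$, marginal cost $c$ per ticket, and a fixed (already sunk) production budget $F$. Nothing here is specific to movies; it is the standard argument we make in class:

$$\max_p \;(p - c)\,D(p) - F \qquad\Longrightarrow\qquad D(p) + (p - c)\,D'(p) = 0.$$

The first-order condition pins down the profit-maximizing price, and $F$ appears nowhere in it. The budget determines whether the movie is worth making, not what the ticket should cost.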
It is based on a proposal by Merrifield and Saari. This is not the only proposal for revamping peer review based on mechanism design ideas. See Robinson. Solomon provides a brief summary of other proposals to upend peer review and how they fared.
Merrifield & Saari assert that an ideal peer review process must solve the following problems (quoted from the paper):
a) Some incentive should be put in place to reduce the pressure toward ever more applications.
b) The workload of assessment should be shared equitably around the community.
c) The burden on each individual should be maintained at a reasonable level so that it is physically possible to do a fair and thorough job.
d) There should be some positive incentive to do the task well.
e) The ultimate ranked list of applications should be as objective a representation of the community’s perception of their relative scientific merits as possible.
Their proposed mechanism (with practical details omitted that matter in implementation but are extraneous here) is as follows:
1) There are N agents each of whom submits a proposal.
2) Each agent receives m < N proposals to review (not their own).
3) Each agent compiles a ranked list of the m proposals she receives, placing them in the order in which she thinks the community would rank them, not her personal preferences.
4) The N ranked lists are combined to produce an optimized global list ranking all N applications.
5) Failure to carry out this refereeing duty by a set deadline leads to automatic rejection of one's proposal.
6) Individual rankings are compared to the positions of the same m applications in the globally-optimized list. If both lists appear in approximately the same order, then the proposer is rewarded by having his proposal moved a modest number of places up the final list, relative to those of proposers who have not matched the community’s view as well.
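To make steps (4) and (6) concrete, here is a minimal sketch in code. Merrifield & Saari do not commit to a particular aggregation rule or agreement measure; the Borda-style averaging, the Kendall-tau-style agreement score, and the `bonus` and `threshold` parameters below are my own illustrative choices:

```python
import itertools
from statistics import mean

def aggregate(reviews):
    """Borda-style aggregation: a proposal's global score is the mean of
    the ranks it receives (rank 0 = best). Lower mean rank = better.
    `reviews` maps reviewer -> list of proposal ids, best first."""
    scores = {}
    for ranking in reviews.values():
        for rank, proposal in enumerate(ranking):
            scores.setdefault(proposal, []).append(rank)
    return sorted(scores, key=lambda p: mean(scores[p]))

def agreement(ranking, global_list):
    """Kendall-tau-style agreement: fraction of pairs that the reviewer
    ordered the same way as the global list."""
    pos = {p: i for i, p in enumerate(global_list)}
    pairs = list(itertools.combinations(ranking, 2))
    concordant = sum(1 for a, b in pairs if pos[a] < pos[b])
    return concordant / len(pairs)

def final_list(reviews, proposals_of, bonus=2, threshold=0.75):
    """Step 6: a proposer whose ranking agrees well enough with the
    aggregate has her own proposal moved up `bonus` places.
    `proposals_of` maps reviewer -> her own proposal id."""
    glob = aggregate(reviews)
    scores = {r: agreement(rk, glob) for r, rk in reviews.items()}
    final = list(glob)
    for reviewer, s in scores.items():
        if s >= threshold:                       # ad-hoc threshold
            i = final.index(proposals_of[reviewer])
            final.insert(max(0, i - bonus), final.pop(i))
    return final
```

Any rank aggregation rule could be slotted in for `aggregate`; the point of the sketch is only the feedback loop in step (6).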
What is missing from the paper, and from the summary on the NSF site, is a clear specification of an environment in which such a mechanism is in any sense the best mechanism satisfying (a-e). In this sense it is not a `mechanism design’ approach to peer review. One could dismiss this as pettifogging, but that would be mistaken. To illustrate, let’s posit an environment. Suppose N=3 and m=2, so each agent reviews the proposals of the other two. Suppose also that each proposal is either good or bad, and, conditional on its state, anyone reviewing it receives a signal of its quality: good proposals generate a high quality signal with probability 1 and bad proposals generate a low quality signal with probability 1. The signals are independent across reviewers. Finally, the cost of effort to review a proposal is zero.
Proposers read the proposals assigned to them and report their signals. If two proposers disagree on the same proposal, both are shot (an extreme version of item (6) above). Thus, truthfully reporting one’s signal is an equilibrium. However, it is not the only equilibrium. Randomizing one’s report would also be an equilibrium, and it is the one that may obtain if there is a non-negligible cost of effort. Now, Merrifield & Saari might argue that the environment I’ve set up presumes there are objectively good proposals, which is ruled out by them. They write: `…it should be borne in mind that there is no objective right answer in this kind of peer review process.’ I would argue this is semantic. The `true’ quality is simply the commonly known community perception of quality.
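For readers who prefer to see the multiplicity claim checked numerically, here is a small Monte Carlo sketch of this N=3, m=2 environment. The uniform prior on quality and the penalty of 1 per `shooting’ are my own normalizations:

```python
import random

def expected_penalty(strategies, trials=100_000):
    """Agents 0, 1, 2; agent i reviews the proposals of the other two.
    A strategy is 'truthful' (report the perfect signal) or 'random'
    (report a fair coin). If the two reviewers of a proposal disagree,
    each incurs a penalty of 1 ('both are shot').
    Returns each agent's expected penalty per round."""
    penalty = [0.0, 0.0, 0.0]
    for _ in range(trials):
        quality = [random.random() < 0.5 for _ in range(3)]   # good/bad
        for p in range(3):                       # proposal of agent p
            reviewers = [i for i in range(3) if i != p]
            reports = [
                quality[p] if strategies[i] == 'truthful'
                else random.random() < 0.5
                for i in reviewers
            ]
            if reports[0] != reports[1]:         # disagreement: both shot
                for i in reviewers:
                    penalty[i] += 1
    return [round(x / trials, 2) for x in penalty]

# Truthful profile: no one is ever shot; a unilateral deviation hurts.
print(expected_penalty(['truthful'] * 3))                    # ~[0, 0, 0]
print(expected_penalty(['random', 'truthful', 'truthful']))  # deviator ~1
# All-random profile: deviating to truthful does not reduce one's own
# penalty (the co-reviewer is still a coin), so it is also an equilibrium.
print(expected_penalty(['random'] * 3))                      # ~[1, 1, 1]
print(expected_penalty(['truthful', 'random', 'random']))    # ~[1, 1, 1]
```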
There may be a way around the multiplicity-of-equilibria problem here, using an idea explicated by David Rahman in his aptly titled paper, which I render in Latin: Quis custodiet ipsos custodes? What is it? Insert proposals of known quality into the review, i.e., test the agents.
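A hedged sketch of what such a test might look like in code. Rahman’s point is about monitoring the monitors; the planted-proposal audit below, with its invented penalty parameter, is just one way to render the idea:

```python
import random

def audited_review(reviewers, real_proposals, planted, penalty=5):
    """Mix proposals of known quality ('planted') into each reviewer's
    pile. A reviewer who misreports a planted proposal is penalized,
    which makes babbling strategies costly even though honest reviewers
    cannot be second-guessed on the real proposals.
    Each reviewer is a function: proposal id -> reported quality (bool).
    `planted` maps proposal id -> true quality. Returns penalties."""
    penalties = {}
    for name, review in reviewers.items():
        pile = real_proposals + list(planted)
        random.shuffle(pile)          # reviewer cannot tell which are planted
        misses = sum(1 for p in pile
                     if p in planted and review(p) != planted[p])
        penalties[name] = penalty * misses
    return penalties

# Example: a babbling reviewer is caught on the planted proposals.
# audited_review({'babbler': lambda p: True}, ['p1', 'p2'],
#                planted={'g1': True, 'g2': False})
# -> {'babbler': 5}   (misreports g2)
```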
I have also assumed that the cost of effort is zero, while one goal of the Merrifield & Saari proposal is to encourage effort precisely because it is costly. This suggests a trade-off between costly effort and the information revealed. Might Merrifield and Saari’s proposal encourage too much effort?
By the by, why exclude agents from reviewing their own proposals? Presumably item (6) will discourage grossly inflated rankings of one’s own proposal. It does bring to mind David Lodge’s parlour game `Humiliation’ in Changing Places. Players name classics of literature that they have not read, the winner being the one who confesses the most embarrassing omission. (In the book, an untenured professor desperate to impress his colleagues admits to skipping Hamlet, and for this lacuna is subsequently denied tenure.)
Step away, now, from the proposal and focus on the desiderata (a-e) that Merrifield and Saari identify. Some of them have to do with the moral hazard problem: effort is costly, so how can I ensure that you exerted effort? The interesting twist in the present context is that there is no obvious signal of effort that can be relied upon. Thus, any mechanism that meets criteria (a-e) must simultaneously elicit private information and induce the right level of effort, without the injection of outside resources. I think this has to be impossible. So, there is an impossibility result lurking here, waiting to be formulated and proved.
Yisrael Aumann wrote: “This is the book for which the world has been waiting for decades.”
Eric Maskin added “There are quite a few good textbooks on game theory now, but for rigor and breadth this one stands out.”
Ehud Kalai thinks that “Without any sacrifice on the depth or the clarity of the exposition, this book is amazing in its breadth of coverage of the important ideas of game theory.”
Peyton Young goes further and writes “This textbook provides an exceptionally clear and comprehensive introduction to both cooperative and noncooperative game theory.”
Covering both noncooperative and cooperative games, this comprehensive introduction to game theory also includes advanced chapters on auctions, games with incomplete information, games with vector payoffs, stable matchings and the bargaining set. Mathematically oriented, the book presents every theorem alongside a proof. The material is presented clearly, and every concept is illustrated with concrete examples from a broad range of disciplines. With numerous exercises, the book is a thorough and extensive guide to game theory, suitable for undergraduate and graduate courses in economics, mathematics, computer science, engineering and the life sciences, as well as an authoritative reference for researchers.
This book is the outcome of 8 years of hard work (and its almost 1000 pages attest to that). It was born in Paris, in February 2004, around Sylvain Sorin’s dinner table, where Shmuel Zamir and your humble servant had a great dinner with Sylvain and his wife. Michael Maschler joined the team several months later, and each month the book grew thicker and thicker. I use the book to teach 4 different courses (two undergrad, two grad), and since it contains so many exercises, there is no reason to spend much time writing exams.
The book is only one click away. Don’t miss it!
There is a debate underway about how effective Israel’s Iron Dome system is in protecting populated areas from missile attacks. The pro side argues that somewhere between 85% and 90% of incoming missiles are destroyed. The con side argues that the proportion is much smaller, 40% or less. A large part of the difference comes from how one defines `destroy’. Perhaps a better term would be intercept. It is possible that about 90% of incoming missiles are intercepted. However, an intercepted missile may not have its warhead disabled, making at least one of the fragments that falls to the ground (in a populated area) dangerous.
While nailing down the actual numbers may be interesting, it strikes me as irrelevant. Suppose that any incoming missile has a 90% chance of being intercepted and destroyed (which is the claim of the builder of the Iron Dome technology). If the attacker launches N missiles and Iron Dome is deployed, the probability (assuming independence) that not a single one makes it through is (0.9)^N. Thus, the probability of at least one missile making it through the `dome’ is 1 – (0.9)^N. If N is large, this is large. For example, for N = 10, the probability that at least one missile makes its way through is about 65% (thanks to anonymous below for correction). Thus, as long as the attacker has access to large quantities of missiles, it can be sure to get missiles through the dome.
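The arithmetic, for any interception rate and salvo size (0.9 and the sample values of N are simply the figures from the paragraph above):

```python
def prob_at_least_one_through(n, intercept_rate=0.9):
    """Probability that at least one of n independent missiles evades
    an interceptor that succeeds with probability intercept_rate."""
    return 1 - intercept_rate ** n

for n in (1, 5, 10, 20, 50):
    print(n, round(prob_at_least_one_through(n), 3))
# 1 -> 0.1, 5 -> 0.41, 10 -> 0.651, 20 -> 0.878, 50 -> 0.995
```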
Israel is a parliamentary democracy; our president has but a ceremonial role, and the prime minister (and his government) is the one who actually makes all important decisions. After elections, each party recommends to the president a candidate for prime minister, and the person who got the most recommendations is asked to form a government. To this end, he/she should form a coalition with at least 61 parliament members out of the total of 120.
In the last elections, which took place on 22 January 2013, the results were as follows:
Likud (secular right), the party of the last prime minister Benyamin Netanyahu, got 31 out of 120 parliament members.
Two ultra orthodox parties, which were part of the last government, got together 18 seats.
An orthodox right party got 12 seats.
Three secular centrist parties got 19 + 6 + 2 = 27 seats.
Five secular left parties got together 32 seats.
The last prime minister, Benyamin Netanyahu, was recommended by most of the parties to be the new prime minister as well, and so was asked to form a coalition. The five left parties cannot be part of a coalition because their views are opposed to Netanyahu’s. Still, Netanyahu had several possible coalitions, and his most preferred coalition was with the two ultra orthodox parties and the largest secular centrist party. As coalitional game theory (and past experience) tells us, he should then retain most of the power. Unfortunately for him, the largest secular centrist party and the orthodox right party realized this, and they formed an alliance: either both are part of the coalition, or both are out of it (and they want the ultra orthodox parties out of the government). Together they have 19 + 12 = 31 seats, and the five left parties that will in any case be out of the coalition have 32 seats; the remaining parties therefore control only 120 – 32 – 31 = 57 seats, short of the 61 needed, which means the alliance became a veto player. Thus, even though Netanyahu is supposed to be the prime minister, these two parties will determine the shape of the coalition.
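A quick check of the veto claim, using the seat counts listed above; the bloc labels are mine, and the left bloc (32 seats) is excluded from consideration, as the post assumes:

```python
from itertools import combinations

# Seat counts from the post; the left bloc (32) is excluded by assumption.
blocs = {'Likud': 31, 'ultra_orthodox': 18,
         'centrist_6': 6, 'centrist_2': 2, 'alliance': 31}  # 19 + 12

def winning_coalitions(blocs, quota=61):
    """Yield every subset of blocs that reaches the quota."""
    names = list(blocs)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if sum(blocs[n] for n in combo) >= quota:
                yield combo

# Every coalition reaching 61 seats must contain the alliance:
print(all('alliance' in c for c in winning_coalitions(blocs)))  # True
```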
The coalition is yet to be formed (it took Netanyahu 28 days to realize that the alliance between the two parties is real and unbreakable), and of course it remains to be seen how the next government will function. Yet the power of coalitions in coalitional games, and the motivation for the various amalgamation axioms, is nicely demonstrated by the current negotiations.
In the course of a pleasant dinner, conversation turned to dictatorship and the organization of markets. At this point Roger Myerson remarked upon the absence of inter-species trade. He was not, of course, referring to trade with alien beings from another planet (who would have discovered correlated equilibrium before Nash equilibrium). Rather, the absence of trade with, say, monkeys. Adam Smith went further and denied the possibility of trade between animals:
Nobody ever saw a dog make a fair and deliberate exchange of one bone for another with another dog. Nobody ever saw one animal by its gestures and natural cries signify to another, this is mine, that yours…
There is a long history of interactions between men and monkeys of various kinds. Monkeys have been marauders of crops, domestic companions, religious symbols and commodities (meat). The interactions between the species seem to fall into one of three categories: pure conflict (keeping out marauders), long run relationships (pets) or exploitation (use in labs and as entertainment). However, there are no examples of what one might call trade in the sense of voluntary arm’s length transactions involving barter. For example, why don’t villagers `pay’ bands of roving monkeys not to pillage?
There is evidence indicating they would be capable of understanding such transactions. Gomes and Boesch, for example, suggest that monkeys trade meat for sex. Then there is Keith Chen’s monkey study, which suggests that one could teach (some) monkeys about money. From the Jesuit traveller Jose de Acosta in the 1500s we have the following charming account:
I sawe one [monkey] in Carthagene [Cartagena] in the Governour’s house, so taught, as the things he did seemed incredible: they sent him to the Taverne for wine, putting the pot in one hand, and the money in the other; and they could not possibly gette the money out of his hand, before he had his pot full of wine.
A little known fact about Canada is that it is the world’s largest producer of famous Americans. Recall, for example, John Kenneth Galbraith, Wayne Gretzky, William Shatner, Michael J. Fox, Malcolm Gladwell, Shirley Tilghman and Keanu Reeves. Some have suggested that Obama is a Canadian, leading to a split in the birther movement between the original birthers and the neo-birthers. Originalists believe that Obama was sired in a Kenyan village, imbued with an anti-colonial mindset, leavened by Saul Alinsky radicalism and smuggled into the US with the intent of turning the US into an Islamic caliphate. The neo-birthers believe that this beggars belief. It is simpler, they say, to believe that Obama is Canadian.
Nate Silver needs no introduction. While I should have read his book by now, I have not. From my student Kane Sweeney, I learn that I should have. Kane, if I may ape Alvin Roth, is a student on the job market this year with a nice paper on the design of healthcare exchanges. Given the imminent roll out of these things I would have expected a deluge of market design papers on the subject. Kane’s is the only one I’m aware of. But, I digress (in a good cause).
Returning to Silver, he writes in his book:
One of the most important tests of a forecast — I would argue that it is the single most important one — is called calibration. Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If, over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated.
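As a concrete rendering of the test Silver describes, here is a minimal sketch that bins probability forecasts and compares each bin’s average stated probability with the empirical frequency of the event; the ten-bin granularity is my own choice:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=10):
    """Group forecasts into bins and compare each bin's average stated
    probability with the empirical frequency of the event in that bin.
    A well-calibrated forecaster has the two columns roughly equal."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    table = {}
    for b, pairs in sorted(bins.items()):
        avg_forecast = sum(p for p, _ in pairs) / len(pairs)
        empirical = sum(y for _, y in pairs) / len(pairs)
        table[b] = (round(avg_forecast, 2), round(empirical, 2), len(pairs))
    return table

# e.g. all the days you said "40% chance of rain":
# calibration_table([0.4, 0.4, 0.4, 0.4, 0.4], [1, 0, 0, 1, 0])
# -> {4: (0.4, 0.4, 5)}   two rainy days out of five: well calibrated
```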
Many years ago, Dean Foster and I wrote a paper called Asymptotic Calibration. In another plug for a student, see this post. An aside to Kevin: the `algebraically tedious’ bit will come back to haunt you! I digress again. Returning to the point I want to make: one interpretation of our paper is that calibration is perhaps not such a good test. This is because, as we show, given sufficient time, anyone can generate probability forecasts that are close to calibrated. We do mean anyone, including those who know nothing about the weather. See Eran Shmaya’s earlier posts on the literature around this.
One of the differences, often commented upon, between economists and computer scientists is the publication culture. Economists publish far fewer, longer papers in journals. Computer scientists publish many smaller papers in conference proceedings. The journals (the top ones anyway) are heavily refereed, while the top conference proceedings are less so. Economics papers have long introductions that justify the importance of what is to come, as well as (usually) carefully laying out the differences between the current paper and what has come before. It is not unusual for some readers to cry: don’t bore us, get to the chorus. Computer science papers have short introductions with modest attempts at justifying what is to come. It is not unusual to hear that an economics paper is well written. Rarely have I heard that said of a computer science paper. Economists sometimes sneer at the lack of heft in CS papers, while computer scientists refer caustically to the bloat of ECON papers. CS papers are sometimes just wrong, etc. etc.
If one accepts these differences as more than caricature, do they matter? We have two different ways of organizing the incentives for knowledge production. One rewards large contributions written up for journals with exacting (some would say idiosyncratic) standards and tastes. The other rewards the accumulation of many smaller contributions that appear in competitive proceedings that are, perhaps, more `democratic’ in their tastes. Is there any reason to suppose that one produces fewer important advances than the other? In the CS model, for example, ideas, even small ones, are disseminated quickly and publicly, and evaluated by the community. Good ideas, even ones that appear in papers with mistakes, are identified and developed rapidly by the `collective’. An example is Broder’s paper on approximating the permanent. On the ECON side, much of this effort is borne by a smaller set of individuals, and some of it takes place in private, in the sense of folk results and intuitions. Is there a model out there that would shed light on this?