Two-player zero-sum games are easy to solve: one can write down a linear program whose solution gives the value of the game and the optimal strategies of the players. This program is polynomial in the size of the game, and therefore the whole procedure of solving a two-player zero-sum game is polynomial. Similarly, computing a correlated equilibrium in a multi-player game can be done using a linear program, and is therefore as easy as taking a nap (when the kids do not have a fever).
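The linear program in question is easy to write down explicitly. Here is a minimal sketch (using scipy.optimize.linprog; the function name solve_zero_sum and the matching-pennies example are my own illustration, not from the post): the row player chooses a probability vector x maximizing the value v subject to the constraint that every column of the payoff matrix yields the row player at least v.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy and value for the row player of the matrix game A."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (row strategy) and v (game value); linprog minimizes, so use -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j: v - sum_i x_i A[i, j] <= 0, i.e. each column pays at least v.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The probabilities sum to one.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x_i >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the value is 0 and the optimal strategy is (1/2, 1/2).
strategy, value = solve_zero_sum([[1, -1], [-1, 1]])
```

Since the number of variables and constraints equals the number of strategies, the whole computation is indeed polynomial in the size of the game.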
Solving a two-player non-zero-sum game, on the other hand, is a much more difficult task. Gilboa and Zemel proved that questions such as “does there exist a Nash equilibrium in which every player receives a payoff of at least r” and “is the Nash equilibrium unique” are computationally difficult (NP-hard, in computer-science terms). The standard proof of the existence of a Nash equilibrium uses Brouwer’s fixed point theorem, which is non-constructive, and various procedures can be used to approximate a Nash equilibrium.
One reason to attend the conference at Stony Brook was to hear the talk of Noah Stein from MIT. In his talk, Stein presented a recent paper he wrote with Pablo Parrilo and Asuman Ozdaglar. In this paper the trio shows how to prove the existence of a Nash equilibrium using Hart and Schmeidler’s proof of the existence of a correlated equilibrium. This proof of the existence of a Nash equilibrium does not use Brouwer’s fixed point theorem or any advanced topological tool, and is therefore interesting and worth reading. The main idea of the proof, if I got it right, is to show that a Nash equilibrium of the game can be approximated by a special kind of correlated equilibrium of an auxiliary game; the auxiliary game is an ingenious composition of k copies of the original game, and the approximation of the Nash equilibrium improves as k increases. A variation on the proof of Hart and Schmeidler ensures that this special type of correlated equilibrium exists.
It was evident that the audience at the talk was interested: the talk took almost twice the time allotted to it, and many people were surprised by the proof. When you have time, read this paper. This is what I am going to do on my way to Erice for the workshop on Stochastic Methods in Game Theory.
Following Jeff, a couple of links to posts about Blackwell’s life and research: Jake Abernethy, Rajiv Sethi, Anand Sarwate, Kevin Bryan, Jesús Fernández-Villaverde (in Spanish), Andrew Gelman
Rajiv:
It takes a particular kind of strength to manage such a productive research career while tolerating the stresses and strains of personal insult, and carrying the aspirations of so many on one’s shoulders. Blackwell was more than a brilliant mathematician, he was also a human being of extraordinary personal fortitude.
Anand:
I’ll always remember what he told me when I handed him a draft of my thesis. “The best thing about Bayesians is that they’re always right.”
I will have more to say about the Stony Brook conference, but first a word about David Blackwell, who passed away last week. We game theorists know Blackwell for several seminal contributions. Blackwell’s approachability theorem is at the heart of Aumann and Maschler’s result about repeated games with incomplete information, which Eilon mentions below, and also of the calibration results which I mentioned in my presentation in Stony Brook (alas, I was too nervous and forgot to mention Blackwell as I intended to). Blackwell’s theory of comparison of experiments has been influential in the game-theoretic study of the value of information, and Olivier presented a two-person game analogue of Blackwell’s theorem in his talk. Another seminal contribution of Blackwell, together with Lester Dubins, is the theorem about merging of opinions, which is the major tool in the Ehuds’ theory of Bayesian learning in repeated games. And then there are his contributions to the theory of infinite games with Borel payoffs (now known as Blackwell games) and Blackwell and Ferguson’s solution to the Big Match game.
At the British Open golf tournament today, conditions are much tougher in the afternoon than they were in the morning, as a heavy wind has kicked up. Players switch starting times between the Thursday and Friday rounds, so if the pattern had been the same each day things would even out, but it happens that the conditions were roughly the same all day Thursday. So the players on the course right now are at a distinct disadvantage. Would we say that this is “unfair”?
I wouldn’t be surprised to hear someone use that word, but I think as the words are commonly used we would call this “unlucky” rather than “unfair.” Now, naturally, substantial luck factors are considered undesirable in a competition, but not nearly as undesirable as rules that favor some competitors a priori, for instance if higher-ranked golfers were allowed to choose their tee times. Luck vs. unfairness is analogous to the error/bias distinction in statistics, where bias is considered more serious (but excessive error can also render a model useless).
Within the realm of luck, I think competitors are most willing to accept luck that is clearly an “act of God,” as here. That is, when the tee times were posted last week, thanks to the Thursday/Friday flip everyone had equal expectation. So the luck can’t be traced to lots drawn by the organizers, but solely to random weather patterns. The form of “luck” in sports that draws the most ire, if luck it is, is poor officiating. Now, I happen to think that most officials are reasonably unbiased and make mostly random errors, but just as in statistics one can never prove a lack of bias (it’s the null hypothesis!). So competitors and fans are left with no conclusive evidence as to whether they have suffered from error or bias. In such cases, I think there is a cognitive bias towards, well, perceiving bias. This is part of a general tendency to overexplain and overinfer from random events.
Shmuel Zamir is 70, and today’s session of the international conference at Stony Brook was dedicated to this event. Shmuel’s past and present co-authors presented papers that are related to his work. Bob Aumann described some of Shmuel’s contributions to game theory, starting from his Ph.D. thesis on the rate of convergence of the value of n-stage repeated games with incomplete information on one side to the value of the corresponding infinitely repeated game. That is, we know from the work of Aumann and Maschler (1967, 1995) that the value v_n of the n-stage repeated game with incomplete information on one side converges to a limit v_*. We also know that v_* is the value of the same game when it is infinitely repeated. The question that was open back then, when Shmuel was young, is the rate of convergence: what is the order of magnitude of the difference v_* – v_n? It was known that this difference is O(1/sqrt(n)), but it was not known whether this is the best bound. The first part of Shmuel’s thesis solves this problem: it proves that O(1/sqrt(n)) is indeed the best bound in general, and that for some classes of games better bounds are possible. Shmuel told me that back then he solved 8-stage games, just to get a feeling for the problem. As someone who studied 3-stage games, I am truly impressed by this confession.
We also heard about the work that Shmuel did with Jean-Francois Mertens: the existence of the limit of the value of n-stage repeated games with incomplete information on both sides, and the existence of the universal belief space. I will dedicate another post to the universal belief space and to recent results relating to it.
Shmuel’s contributions to auction theory and to experimental game theory were not neglected either.
Undoubtedly, Shmuel Zamir’s contributions are fundamental, and the research of many among us, me included, was affected by his work. Mazal Tov, Shmuel. May the next 70 years be as fruitful as the last ones!
I am always wary about applying game theory to real-life situations. Plainly we game theorists have many insights about strategic interactions, and our thinking is tuned to strategic analysis, but life is complex, and it is difficult to properly model a given interaction; we often forget to take into account important aspects that change the whole picture.
An example came up in a recently published interview in an Israeli daily newspaper. Among other things, the interview described a suggestion by a game theorist to solve the problem of missiles shot into Israel from Lebanon and Gaza. According to the interview, the game theorist suggested that Israel build an automatic machine that, whenever a missile is shot into Israel, waits half an hour and then shoots back a missile; the half hour is enough time to let the civilians who live at the site from which the first missile was launched leave their homes. As we all know from games in extensive form and threat strategies, it is important that the machine be automatic. Such an automatic machine implements a threat strategy, and therefore it should deter deviations.
Let us leave aside the moral aspects of this solution, and concentrate on the modeling itself. Is this indeed a two-player game? Will the threat strategy work? Well, if Israel and the Palestinians were the only sides in the business, then the answer to these questions might have been positive. But this is not the case. There are other players in this game: Hizbollah, Iran, the US, and many other Arab and European countries. Leaving them outside the model misses an important aspect of the interaction. For example, Hizbollah and Iran may react to Israel’s threat strategy. That is, they may themselves threaten Israel that if it implements its threat strategy, they will implement a threat strategy of their own, so that in fact Israel does not choose between “do nothing” and “react with a missile”, but between “do nothing” and “start a full-fledged war”.
Suppose that one week after Israel sets up its automatic machine, Hizbollah sets up its own automatic machine: if Israel shoots a missile into Gaza, Hizbollah’s machine shoots ten missiles into Israel. And then Israel will build another machine to threaten Hizbollah. And then Iran will build a machine. World War I started because of such automatic machines. If we let countries build automatic machines that react to the other side’s aggressiveness, we will end up with World War III, started because some small Palestinian organization shot a missile into Israel (in the hope of starting a full-fledged war).
I may have missed something in the game theorist’s suggestion, and the journalist may have missed something in his report. But what the newspaper readers get from the interview is that game theory says that the simple solution of an automatic machine will work. I fear that game theory does not say that. And I fear that, contrary to what the game theorist perceived, it will not work.
On this subject I have nothing to say. I do predict that within a year there will be a paper on precisely this topic if James does move. Eventually it will find its way into a book with a one word title whose suffix will be `onomics.’ Since the advent of Freakonomics I have seen:
Superfreakonomics (a sequel, which makes one wonder what the sequel to it will be called)
Emotinomics
Soccernomics
Hook-onomics
Invento-nomics
Gooleplex-Onomics
Part One: Least Unique-Bid Auctions
In recent years a new auction method has become widely used on the internet. This method is called a Least Unique-Bid Auction (LUBA), and it goes as follows. An object is offered for sale, say an iPhone or a car. Bids are made in cents. Each participant can make as many bids as he or she wishes, paying 50 cents per bid. So if I bid on the numbers 1, 2, 5 and 12 (all in units of cents), I pay 2 dollars for the participation. Once time runs out and all bidders have placed their bids, one counts the number of bidders who bid on each number. The winning bid is the minimal number that was bid by exactly one bidder. That bidder is the winner, and he pays his bid (in cents). So, if Anne bid on the numbers {1, 2, 3, 6, 12}, Bill bid on the numbers {1, 2, 3, 4, 5, 6, 7}, and Catherine bid on the numbers {3, 4, 5, 7, 13, 14, 15, 16}, then the number 12 wins, and Anne gets the object. The auctioneer gets 2.5 dollars from Anne for her 5 bids plus 12 cents for her winning bid; he gets 3.5 dollars from Bill, and 4 dollars from Catherine.
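The winner-determination rule just described is simple enough to state in a few lines of code. A minimal sketch (the function name luba_winner is my own; the bidders and numbers are the ones from the example above):

```python
from collections import Counter

def luba_winner(bids):
    """bids: dict mapping bidder name -> set of bid amounts (in cents).
    Returns (winner, winning_bid), or (None, None) if no amount is unique."""
    # Count how many bidders chose each amount.
    counts = Counter(b for amounts in bids.values() for b in amounts)
    # Amounts bid by exactly one bidder, lowest first.
    unique = sorted(b for b, c in counts.items() if c == 1)
    if not unique:
        return None, None
    win = unique[0]
    winner = next(name for name, amounts in bids.items() if win in amounts)
    return winner, win

bids = {
    "Anne": {1, 2, 3, 6, 12},
    "Bill": {1, 2, 3, 4, 5, 6, 7},
    "Catherine": {3, 4, 5, 7, 13, 14, 15, 16},
}
# Anne wins with 12: it is the lowest amount bid by exactly one bidder.
```

Note that 1, 2 and 3 do not win even though they are lower: each of them was bid by more than one bidder, so they are all “killed.”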
In practice the auction is dynamic rather than static: for each bid that you place, you know at each time period whether (a) it is currently the winner (no one else bid the same number, and no lower number was bid by a single bidder), (b) it is a potential winner (no one else bid the same number, but at least one lower number was bid by a single bidder), or (c) it is not a potential winner (somebody else also bid on that number).
One must admit that this type of auction is ingenious. The selling price is extremely low, usually less than 2% of the object’s market value, so people are drawn to participate. The names of the winners are listed on the site, and some names recur, so people realize that there are profitable strategies. One can actually think of such strategies: for example, randomly choose numbers, and when you find a potential winner, bid on all numbers below it, trying to kill all lower potential winning numbers. So why not participate and make money?
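The “kill all lower potential winners” idea can be sketched as follows, under the unrealistic assumption that a bidder knows how many other bidders chose each number (in the real auction one only sees the winner / potential-winner / not-a-potential-winner flags); the function blocking_bids is hypothetical:

```python
def blocking_bids(my_bid, others_counts):
    """Amounts to bid on so that no number below my_bid remains a unique bid.

    my_bid: the amount I currently hold as a potential winner.
    others_counts: dict mapping amount -> number of *other* bidders who chose it.
    A lower amount can beat my_bid only while exactly one bidder holds it;
    bidding on it myself raises its count to two and kills it.
    """
    return sorted(b for b, c in others_counts.items() if b < my_bid and c == 1)

# I hold 12; amounts 5 and 7 are each held by a single other bidder, so I
# must bid on both of them (1 is already dead, 9 is unclaimed).
to_block = blocking_bids(12, {1: 2, 5: 1, 7: 1, 9: 0})
```

Each blocking bid costs another 50 cents, which is exactly why recurring strategic bidders are profitable for the auctioneer too.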
Bottom line: the auctioneer makes a lot of money.
Part Two: LUBA and the court
A couple of months ago a lawyer called me. He wants to file a class action against a company that operates a certain type of LUBA in Israel, on the grounds that this is a lottery, a gambling game, and not a game of skill. By law, in Israel only the state can run lotteries. He asked me to provide an expert opinion that LUBA is a lottery. I apologized. I cannot do that. I believe that strategic bidders can make money in LUBA, and therefore, just like blackjack, LUBA is not a lottery. In fact, I am writing a paper with Marco Scarsini and Nicolas Vieille arguing that LUBA is not a lottery. Having said that, I believe that LUBA is worse than a lottery: in a lottery, all participants have the same probability of winning. This is not the case with LUBA: the presence of strategic bidders essentially kills the chance of non-strategic bidders, or partially strategic bidders, to win the auction.
Part Three: Moral
So LUBA may not be illegal, but it seems that there is something wrong with it. I discussed this issue with Zvika Neeman yesterday. Just like a pyramid scheme or day trading by non-professionals, LUBA is a method of getting money out of people who may not realize the strategic complexity of the situation that they face.
Part Four: Complexity Fraud
Merriam-Webster defines fraud as:
“a : Intentional perversion of truth in order to induce another to part with something of value or to surrender a legal right. b : An act of deceiving or misrepresenting.”
Dictionary.com defines it as:
“Deceit, trickery, sharp practice, or breach of confidence, perpetrated for profit or to gain some unfair or dishonest advantage.”
There are many types of fraud. I argue that there should be one additional type: complexity fraud. Sometimes people are asked to participate in complex interactions, paying some amount for participation in the hope of getting a reward at the end. The rules are all set in advance, so nobody can later argue that he or she did not know the rules. But most people are not well versed in game theory, and we all have bounded computational capacity. Therefore, when the interaction is complex, people cannot analyze it and rationally decide whether they want to participate or not. People tend to be optimistic; they overestimate their own ability and cleverness. If the strategic analysis of the method were explained to them, and if they were shown the statistics, they would turn away from the method. If this is the case, then hiding the strategic analysis and the complexity of the situation is, in my view, as deceptive as any other fraud.
I am not a lawyer, and I do not know what the court will think of my arguments. I hope that congressmen and parliament members worldwide will look into them, and change the law accordingly.