
In the last decade, journals have adopted e-systems to manage the review process of papers. Authors use them to submit papers, associate editors to select referees, referees to download the paper and to upload their reports, authors once again to view the reports, and editors to keep track of what goes on in their journal.

Unfortunately, most e-systems are a pain in the XXX. Registering to the e-system requires one to provide, for security reasons, one's mother's maiden name, so that the administrator of the journal can call one's bank and get through; submitting a paper takes ages, instead of simply e-mailing it to the editor as things used to be in the good old days; submitting a referee report requires one to choose between accept/minor revision/major revision/reject, but what if one's recommendation is "I do not know what your journal looks for; I would reject from Econometrica and accept to JET"?

For years I used to send my referee reports through e-mail and ignore the e-systems. This was easy, since as a referee the editors need me. But I am also an associate editor at some journals, and there I tried to get along with the e-systems.

I survived in the world of e-systems until Elsevier decided to consolidate all the accounts that we, authors, referees, and editors, have on its various journals. I simply could not get through the consolidation process. I did not understand why I needed to provide the password of every single account that I have, and why providing only one was not enough. I did not understand why I needed to provide a security question, as if I were entering the US and the immigration officer at JFK were looking at me suspiciously. Frustrated, I informed Felix Kubler, the previous editor-in-chief of JME, where I was an associate editor, that I was out of the game, and that if he wanted me to handle a paper, I would gladly receive it through the good old e-mail.

But sometimes I am an author, and then nobody needs my help. And since I cannot use Elsevier's e-system, I cannot submit papers to any of its journals. One option is to ask one of my dear coauthors to do it. Though this solution is tempting, I find it (and I suspect that they do as well) a little unfair. Therefore, when I wanted to submit a paper to GEB, I asked Jennifer Byrd, the managing editor of the journal, to upload it for me, and lo and behold, she was glad to do it.

I call on all of you who cannot cope with the e-systems to abandon them as well. I do not see why we need to use horrible software that makes our lives difficult. If editors see that enough of us revert to e-mail, they will move back to this simple method.

Yisrael Aumann wrote “This is the book for which the world has been waiting for decades.”
Eric Maskin added “There are quite a few good textbooks on game theory now, but for rigor and breadth this one stands out.”
Ehud Kalai thinks that “Without any sacrifice on the depth or the clarity of the exposition, this book is amazing in its breadth of coverage of the important ideas of game theory.”
Peyton Young goes further and writes “This textbook provides an exceptionally clear and comprehensive introduction to both cooperative and noncooperative game theory.”

Jacket

Covering both noncooperative and cooperative games, this comprehensive introduction to game theory also includes advanced chapters on auctions, games with incomplete information, games with vector payoffs, stable matchings, and the bargaining set. Mathematically oriented, the book presents every theorem alongside a proof. The material is presented clearly, and every concept is illustrated with concrete examples from a broad range of disciplines. With numerous exercises, the book is a thorough and extensive guide to game theory, from undergraduate through graduate courses in economics, mathematics, computer science, engineering, and the life sciences, to an authoritative reference for researchers.

This book is the outcome of 8 years of hard work (and its almost 1000 pages attest to that). It was born in Paris, in February 2004, around Sylvain Sorin’s dinner table, where Shmuel Zamir and your humble servant had a great dinner with Sylvain and his wife. Michael Maschler joined the team several months later, and each month the book grew thicker and thicker. I use the book to teach four different courses (two undergraduate, two graduate), and since it contains so many exercises, there is no reason to spend much time writing exams.

The book is only one click away. Don’t miss it!

Israel is a parliamentary democracy; our president has but a ceremonial role, and the prime minister (and his government) is the one who actually makes all important decisions. After elections, each party recommends to the president a candidate for prime minister, and the person who received the most recommendations is asked to form a government. To this end, he or she must form a coalition with at least 61 members of parliament out of the total of 120.
In the last elections, which took place on 22 January 2013, the results were as follows:

  • Likud (secular right), the party of the outgoing prime minister Benyamin Netanyahu, won 31 of the 120 parliament seats.
  • Two ultra orthodox parties, which were part of the last government, together won 18 seats.
  • An orthodox right party won 12 seats.
  • Three secular centrist parties won 19 + 7 + 2 = 28 seats.
  • Five secular left parties together won 32 seats.
The last prime minister, Benyamin Netanyahu, was recommended by most of the parties to be the new prime minister as well, and so he was asked to form a coalition. The five left parties cannot be part of the coalition because their point of view is opposed to Netanyahu’s. Still, Netanyahu has several possible coalitions, and his most preferred one is with the two ultra orthodox parties and the largest secular centrist party. As coalitional game theory (and past experience) tells us, in such a coalition he should retain most of the power. Unfortunately for him, the largest secular centrist party and the orthodox right party realized this, and they formed an alliance: either both are part of the coalition, or both are out of it (and they want the ultra orthodox parties out of the government). Since together they have 31 seats, and the five left parties that will in any case be out of the coalition have 32 seats, they became a veto player. Thus, even though Netanyahu is supposed to be the prime minister, these two parties will determine the shape of the coalition.
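
Out of curiosity, here is a minimal Python sketch of the seat arithmetic (the seat numbers are taken from the post; the bloc labels are mine, and the 18 ultra orthodox seats are treated as a single bloc). It enumerates the coalitions available to Netanyahu, given that the left bloc stays out and that the alliance acts as one unit, and confirms that the alliance is indeed a veto player:

```python
from itertools import combinations

# Seat counts as described above; the bloc names are my own labels.
likud = 31                      # Netanyahu's party, always in the coalition he forms
blocs = {
    "ultra_orthodox": 18,
    "orthodox_right": 12,
    "centrist_large": 19,
    "centrist_mid": 7,
    "centrist_small": 2,
}
majority = 61                   # out of 120 seats; the 32 left-bloc seats stay out

# The alliance: the largest centrist party and the orthodox right party
# enter or stay out of the coalition together.
alliance = {"centrist_large", "orthodox_right"}

def feasible(coalition):
    """A set of blocs joining Likud is feasible if it respects the alliance
    constraint and reaches the 61-seat majority."""
    inside = alliance & set(coalition)
    if inside and inside != alliance:   # alliance only partially included: not allowed
        return False
    return likud + sum(blocs[b] for b in coalition) >= majority

winning = [set(c)
           for r in range(len(blocs) + 1)
           for c in combinations(blocs, r)
           if feasible(c)]

print("feasible majority coalitions:", winning)
# Without the alliance, Netanyahu can muster at most 31 + 18 + 7 + 2 = 58 < 61 seats,
# so every feasible majority coalition must contain the alliance.
print("the alliance is a veto player:", all(alliance <= w for w in winning))
```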

The coalition is yet to be formed (it took Netanyahu 28 days to realize that the alliance between the two parties was for real and unbreakable), and of course it remains to be seen how the next government will function. Yet the power of coalitions in coalitional games, and the motivation for the various amalgamation axioms, are demonstrated nicely by the current negotiations.

A mixed strategy is a probability distribution over the set of pure strategies. When a player implements a mixed strategy, she chooses a pure strategy at the outset of the game, according to that distribution, and follows that pure strategy all through the game.

A behavior strategy is a function that assigns a mixed action to each of the player’s information sets. When a player implements a behavior strategy, whenever the play reaches an information set, the player chooses an action according to the mixed action that corresponds to that information set.

Thus, if the play crosses the same information set twice, a mixed strategy will choose the same action in both visits, while a behavior strategy will choose the action at each visit independently of past play.
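
A tiny simulation makes the distinction concrete. In the sketch below (my own toy example, not taken from the post), a single player crosses the same information set twice and randomizes 50-50 between actions A and B: the mixed strategy draws one pure strategy at the outset and repeats it, while the behavior strategy randomizes afresh at each visit.

```python
import random
from collections import Counter

ACTIONS = ["A", "B"]

def mixed_strategy_play():
    # Draw a pure strategy once, at the outset, and follow it at both visits.
    pure = random.choice(ACTIONS)
    return (pure, pure)

def behavior_strategy_play():
    # Randomize independently each time the information set is reached.
    return (random.choice(ACTIONS), random.choice(ACTIONS))

n = 100_000
print("mixed:   ", Counter(mixed_strategy_play() for _ in range(n)))
print("behavior:", Counter(behavior_strategy_play() for _ in range(n)))
# The mixed strategy puts all its mass on ('A', 'A') and ('B', 'B');
# the behavior strategy spreads it over all four action pairs.
```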

The well known Kuhn’s Theorem states that in extensive-form games with perfect recall, mixed strategies and behavior strategies are equivalent: it is irrelevant whether the player makes her choices when the play visits the information set, or whether she makes all these choices at the outset of the game, because the condition of perfect recall implies, in particular, that the play can cross each information set only once.

Kuhn’s Theorem holds for finite games: the depth of the game is finite, and there are finitely many actions at each decision node. The theorem can be extended to the case of infinite trees (countably many nodes), provided the number of actions at each decision node is finite.

In stopping problems the number of nodes has the cardinality of the continuum, and therefore Kuhn’s Theorem does not apply. Stopping problems are a model used in mathematical finance: at every stage an agent receives payoff-relevant information, and she has to choose when to stop (when to sell stocks, or when to exercise options). The payoff is given by some stochastic process. Yuri Kifer asked me a question that boiled down to the following: in the model of stopping problems, are mixed strategies and behavior strategies equivalent? My first response was “sure”. Yuri wanted to see a proof. When I tried to write down the proof, a technical issue emerged. It took some time and the energy of three people to provide a definite answer: “sure”.


A rather amusing post I stumbled upon describes pricing algorithms used by book retailers at Amazon.com, and what happens when two retailers use such algorithms. I am pretty sure that economists can say much about the use of these algorithms, about optimal pricing algorithms, about whether the customer gains or loses when retailers use them, and about other similar questions. Alas, I am a mathematician, so all I can do is read the post and smile.

Symbolab is a new search engine that allows one to search for equations. A great idea for those of us who, for example, forgot what the model of “stochastic games” is called, but remember that in stochastic games one is interested in the discounted sum of payoffs. The search engine presumably searches through articles for the equation you are looking for.

Excited about the prospect that forgetting words is no longer devastating, I immediately searched for the formula of the discounted sum of payoffs:

$\sum_{t=1}^{\infty} \lambda (1-\lambda)^{t-1} r(s_t, a_t)$

Unfortunately, 0 results were found. I did not give up, and changed the payoff function from “r” to “c” and then to “u”; maybe the engine prefers costs or utilities. When I searched for the equation with costs, I got one result: a sum related to convex sets that has nothing to do with discounted sums. When I searched for the equation with utilities, the engine gave up. I got 174 results! But they were various sums, and none of them (at least, none of the top 30, which I looked at) involved discounted sums. There were references to statistics, ergodic theory, and various other topics, but no economics or game theory.

Maybe stochastic games are not that common on the net. Let’s look for something simpler:

$u_i(\sigma) \ge u_i(\sigma_{-i}, \sigma'_i)$

which is the condition that defines a Nash equilibrium. Zero results again. I gave up.

Unfortunately, it seems that we will have to continue remembering names of concepts, and will have to consult with colleagues when we need results in topics we are not familiar with. Magic has not been found yet.

Le Monde tells us (in French) that researchers found 47-million-year-old fossils of nine mating pairs of turtles of the species Allaeochelys crassesculpta in a lake in Germany. These fossils, which, according to the researchers, are the only fossils of mating pairs of animals ever found, taught the researchers a lot about this extinct species. But what caught my eye is a probabilistic statement made by one of the researchers:

“Millions of animals live and die every year, and many of them happen to fossilize by chance, but there is really no reason for that to happen while you are mating. It is highly improbable that both partners die at the same time, and the chances that both are fossilized together are even slimmer,” explained Walter Joyce, of the German university of Tübingen. With several couples, the probabilities shrink even further.

Since the name Walter Joyce sounds English-speaking, I searched for a version in English. I found this:

“No other vertebrates have ever been found like these, so these are truly exceptional fossils,” Joyce said. “The chances of both partners dying while mating are extremely low, and the chances of both partners being preserved as fossils afterward even lower. These fossils show that the fossil record has the potential to document even the most unlikely event if the conditions are right.”

The differences between the English version and the French version are slight yet important. The French reader learns that it is improbable that a pair of animals will die together while mating, that it is even less probable that they will be fossilized together, and that the chances of this happening to several mating pairs are smaller still. This would be true if the death-while-mating-and-fossilization events of the different couples were independent. However, the article itself explains that here the events are dependent: during the mating process the turtles freeze, and sometimes sink to the bottom of the lake, where the water is poisonous. Thus, in this lake, finding a fossil of a single turtle is less probable than finding a fossil of a mating pair of turtles, and the probability of finding several fossils of mating pairs is not necessarily lower than the probability of finding a single fossil of a mating pair… The English version does not correct this inaccuracy, but it does end with a note that can be interpreted as a correction of the earlier mistakes: “These fossils show that the fossil record has the potential to document even the most unlikely event if the conditions are right.”

Next week the semester starts, and I will get a new group of first-year undergraduate students who are eager to study Introduction to Probability. I will ask them to read this article and spot the mistakes. I wonder how many will find what is wrong.

In a previous post I described an infinite-horizon perfect-information game without a subgame-perfect equilibrium. Does there exist another refinement of Nash equilibrium that keeps the spirit of subgame perfection and is nonempty in every infinite-horizon perfect-information game?

One refinement of the concept of Nash equilibrium is that of trembling-hand equilibrium for extensive-form games, also called extensive-form perfect equilibrium. Call a strategy profile σ an extensive-form perfect equilibrium if there exists a sequence (σn) that satisfies the following properties:

  1. The sequence (σn) converges to σ.
  2. Under σn, each player in each of his decision nodes plays each action with probability at least 1/n.
  3. σn is a Nash equilibrium of the game Γn in which every player must play, at each of his decision nodes, each action with probability at least 1/n. (Note that the third requirement implies the second.)
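
In symbols (the notation is mine, not the paper’s): writing $\sigma^n_i(x)(a)$ for the probability that $\sigma^n$ assigns to action $a$ at a decision node $x$ of player $i$, the three requirements read

$\sigma^n \to \sigma, \qquad \sigma^n_i(x)(a) \ge \tfrac{1}{n} \ \text{for every } i,\, x,\, a, \qquad \sigma^n \in \mathrm{NE}(\Gamma^n),$

where $\Gamma^n$ is the constrained game in which every action must be played with probability at least $1/n$.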

Does every game admit such an extensive-form perfect equilibrium? The following example, to appear in a new paper by János Flesch, Jeroen Kuipers, Ayala Mashiah-Yaakovi, Gijs Schoenmakers, Eran Shmaya, your humble servant, and Koos Vrieze, is a variant of the example I provided here, and it shows that this is not the case. The conclusion is that at present we do not have any refinement of the concept of Nash equilibrium for infinite-horizon games that keeps the spirit of subgame perfection and is always nonempty.


I first met Jean-Francois Mertens while I was a graduate student at the Hebrew University of Jerusalem. He often visited the Center for Rationality at the Hebrew University, and on one of those visits Abraham Neyman, my Ph.D. adviser, asked Mertens to sit with me and listen to my research.

Mertens has contributed to several areas in game theory.

His paper with Elon Kohlberg on the stability of equilibria opened a new research area.

His work with Shmuel Zamir on the universal belief space is fundamental and complements Harsanyi‘s work on belief hierarchies.

His study of the min-max value and the max-min value in general repeated games had a huge influence on the research in this area. The Mertens conjecture on repeated games with signals states that in zero-sum repeated games with signals, whenever player 1 knows everything that player 2 knows (that is, the signal of player 1 contains the signal of player 2), the value of the n-stage game converges to the max-min value. This conjecture has been the starting point of many works on the topic and is still open, though recently some advances have been made toward solving it.

His contribution to the Shapley value in games with a continuum of players includes the extension of the diagonal formula, which historically applied to smooth nonatomic games, to a larger class of games that includes nondifferentiable and also discontinuous games.

For me, his most significant contribution is to the area of stochastic games, where he proved, together with Abraham Neyman, that every two-player zero-sum stochastic game has a uniform value. When Mertens and Neyman proved this result I was still in elementary school. The story I heard about the inception of this result is that in a workshop on repeated games at CORE, in the winter of 1978–1979, Sylvain Sorin was giving a talk on the Big Match, the first nontrivial two-player zero-sum undiscounted stochastic game to be solved. For Abraham Neyman, who was sitting in the audience, this was his first encounter with repeated games and stochastic games, and so he asked many questions, ranging from basic ones to ideas for extending the result to general stochastic games. Jean-Francois Mertens, who was sitting in the audience as well, patiently answered Neyman’s questions and explained why each route that Neyman suggested for extending the result was bound to fail. Like two swordsmen, Neyman would make another suggestion and Mertens would fend it off. The result of this match was one of the most beautiful papers in game theory.

I met Jean-Francois Mertens in Toulouse in September last year, during the conference that Jerôme Renault organized. We had dinner together. During the conference Mertens was very tired, and when he was back in Belgium he was diagnosed with lung cancer at an advanced stage. He passed away on the night of July 17, 2012. We will all miss him. יהי זכרו ברוך (may his memory be blessed).

Every finite-horizon game in extensive form in which the number of actions at every node is finite has a subgame-perfect equilibrium. Indeed, consider a smallest subgame, that is, a subgame that contains no proper subgames. By Nash’s theorem this subgame has a Nash equilibrium in mixed strategies. Since this subgame contains no proper subgames, this Nash equilibrium is a subgame-perfect equilibrium of the subgame. We can then replace the whole subgame by a single node whose terminal payoff coincides with the equilibrium payoff of the subgame, and continue inductively to construct a subgame-perfect equilibrium (in mixed strategies) in the whole game.
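
The induction is easiest to see in the special case of perfect information, where the Nash equilibrium of a smallest subgame is simply a best action of the player who moves there. The following Python sketch (my own illustration; the toy tree and its payoffs are made up) carries out this backward induction:

```python
# Backward induction on a finite perfect-information game tree.

class Node:
    def __init__(self, player=None, children=None, payoffs=None):
        self.player = player        # index of the player who moves at this node
        self.children = children or []
        self.payoffs = payoffs      # tuple of payoffs if this is a leaf

def backward_induction(node, choice):
    """Fill `choice` with an equilibrium action at every decision node and
    return the subgame-perfect equilibrium payoff vector of the subgame."""
    if node.payoffs is not None:    # leaf: the "subgame" is just a terminal payoff
        return node.payoffs
    continuation = [backward_induction(child, choice) for child in node.children]
    # The mover picks a child whose continuation payoff is best for him;
    # the solved subgame is then replaced by that payoff, as in the text.
    best = max(range(len(continuation)), key=lambda i: continuation[i][node.player])
    choice[id(node)] = best
    return continuation[best]

# Toy example: player 0 moves first, then player 1.
tree = Node(player=0, children=[
    Node(player=1, children=[Node(payoffs=(3, 1)), Node(payoffs=(0, 0))]),
    Node(player=1, children=[Node(payoffs=(2, 2)), Node(payoffs=(1, 3))]),
])
choice = {}
print(backward_induction(tree, choice))   # (3, 1): player 1 prefers (3, 1) in the left subgame
```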

When uncountably many actions are available at some nodes, a subgame-perfect equilibrium need not exist (Harris, Reny, and Robson, 1995).

What happens in infinite-horizon games? When the horizon is infinite, a Nash equilibrium need not exist. Indeed, consider a one-player infinite-horizon game in which the player, Alice, has two possible actions, A and B, at every stage. Alice’s payoff is 1-(1/n), where n is the first stage in which she chooses the action B, if such a stage exists. If Alice never chooses the action B, her payoff is 0. In this game Alice would like to choose the action B for the first time as late as possible, and, whenever she plans to choose B, she is better off waiting one more stage. Therefore Alice has no optimal strategy, and there is no equilibrium.

Plainly this toy game does have an ε-equilibrium: choose B for the first time at some stage n ≥ 1/ε, which yields a payoff of 1-(1/n) ≥ 1-ε.

Does there exist a subgame-perfect ε-equilibrium in every infinite-horizon perfect-information extensive-form game? It turns out that the answer is negative. The following example, to appear in a new paper by János Flesch, Jeroen Kuipers, Ayala Mashiah-Yaakovi, Gijs Schoenmakers, Eran Shmaya, your humble servant, and Koos Vrieze, proves this point.

Alice and Bob play an infinite-stage game. In each stage one of them is the active player, who chooses an action (while the other does nothing). The active player has two actions, A and B, which correspond to the identity of the next active player: if the current active player chooses action A, then Alice is the active player in the next stage, while if the current active player chooses action B, then Bob is the active player in the next stage.

What are the payoffs?

  • If there is a stage t such that, from stage t and on, Alice is the active player, then the payoff to Alice is -1 and Bob’s payoff is 2. In such a case we say that the game absorbs with Alice.
  • If there is a stage t such that, from stage t and on, Bob is the active player, then the payoff to Alice is -2 and Bob’s payoff is 1. In such a case we say that the game absorbs with Bob.
  • Otherwise the payoff to both is 0.

Let us argue that in this game there is no subgame-perfect ε-equilibrium, provided ε is sufficiently small. So fix ε>0 sufficiently small and suppose by contradiction that there is a subgame-perfect ε-equilibrium.

In any subgame in which Bob is the first active player, he can guarantee 1 by always choosing B. In particular, Bob’s payoff in this subgame must be at least 1-ε.

Fix now a subgame in which Alice is the first active player. Can it be that the play absorbs with Alice with probability less than ε? If this happens then, since Bob’s payoff is at least 1-ε, the probability that the play absorbs with Bob is at least 1-3ε. But then Alice’s payoff is well below -1, whereas she can guarantee -1 by always playing A. Thus, in any subgame in which Alice is the first active player, the play absorbs with Alice with probability at least ε.
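
To spell out the arithmetic behind “well below -1”: write $p_A$ and $p_B$ for the probabilities that the play absorbs with Alice and with Bob, respectively, in this subgame. If $p_A < \varepsilon$, then Bob’s payoff $2p_A + p_B \ge 1-\varepsilon$ forces $p_B \ge 1-3\varepsilon$, so Alice’s payoff satisfies

$-p_A - 2p_B \le -2(1-3\varepsilon) = -2 + 6\varepsilon < -1-\varepsilon \quad \text{whenever } \varepsilon < \tfrac{1}{7},$

more than ε short of the -1 she can guarantee by always playing A, which is impossible in a subgame-perfect ε-equilibrium.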

By Lévy’s 0-1 law, this conclusion implies that with probability 1 the play absorbs with Alice. This in turn implies that the play will not absorb with Bob: he will never start an infinite sequence of B’s. But then Alice has no incentive to play A: by always playing B she would get 0 instead of the -1 she gets when the play absorbs with her, contradicting the assumption that the profile is a subgame-perfect ε-equilibrium.

This example shows that, apart from the well-beloved concept of ε-equilibrium, we do not have a good concept of subgame-perfect ε-equilibrium in infinite-horizon games. In another post I will comment on the concept of trembling-hand equilibrium in infinite-horizon perfect-information games.
