
When discussing the allocation of indivisible objects, I point to randomization as a method. To emphasize that it is not merely a theoretical nicety, but is used to allocate objects of high value, I give the example of suicide lotteries. I first came across them in Marcus Clarke’s `For the Term of His Natural Life’, which counts as the first Australian novel. Its hero, an Englishman, Rufus Dawes, is transported to Australia for a crime he did not commit. In the course of his tribulations, Dawes is shipped to a penal settlement on Norfolk Island, 800 miles east of Australia; even a prison needs a further prison. Robert Hughes, in `The Fatal Shore’, his heart-rending and eloquent account of the founding of Australia, describes Norfolk Island in these words:

`….a place of breathtaking barbarity……. On Norfolk Island an Irishman named William Riley received 100 lashes for ”Singing a Song” (no doubt a rebel one) and 50 for asking a warder for a chew of tobacco. Deranged by cruelty and misery, some men would opt for a lifetime at the bottom of the carceral heap by blinding themselves; thus, they reasoned, they would be left alone.’

It is in this portion of his book that Hughes recalls an eyewitness account of a suicide lottery of the type mentioned in Clarke’s novel. Here is Clarke’s succinct description of it:

The scheme of escape hit upon by the convict intellect was simply this. Three men being together, lots were drawn to determine whom should be murdered. The drawer of the longest straw was the “lucky” man. He was killed. The drawer of the next longest straw was the murderer. He was hanged. The unlucky one was the witness. He had, of course, an excellent chance of being hung also, but his doom was not so certain, and he therefore looked upon himself as unfortunate.

Clarke and Hughes deviate slightly on the precise incentives that would drive participation in the scheme. As many of the convicts on Norfolk Island were Irish, the scheme was concocted as a way to circumvent the Catholic prohibition on suicide. Hughes suggests that, after the murder, witness and culprit would be shipped back to the mainland for trial. Conditions there were better, so for both there was a brief respite and a greater opportunity for escape.

It’s an arresting story that one is loath to give up. But one is compelled to ask: is it true? If yes, was it common? Tim Causer of King’s College London went back to look at the records and says the answers are `maybe’ and `no’. Here is his summing up:

`Capital offences committed with apparent suicidal intent are an important part of Norfolk Island’s history, but they need to be understood more fully. It should be recognised just how rare they were, that ‘suicide lotteries’ are embellishments upon actual cases of state-assisted suicide and repeating the myth only reinforces the sensationalised interpretation of Norfolk Island’s history.’

You can find the full account here.

I am not the right person to write about Lloyd Shapley. I think I only saw him once, at the first Stony Brook conference I attended. He reminded me of Doc Brown from Back to the Future, though I am not really sure why. Here are links to posts in The Economist and the NYT following his death.

Shapley got the Nobel in 2012 and, according to Robert Aumann, deserved to get it together with Nash. Shapley himself, however, was not completely on board: “I consider myself a mathematician and the award is for economics. I never, never in my life took a course in economics.” If you are wondering what he means by “a mathematician”, read the following quote from the last paragraph of his stable matching paper with David Gale:

The argument is carried out not in mathematical symbols but in ordinary English; there are no obscure or technical terms. Knowledge of calculus is not presupposed. In fact, one hardly needs to know how to count. Yet any mathematician will immediately recognize the argument as mathematical…

What, then, to raise the old question once more, is mathematics? The answer, it appears, is that any argument which is carried out with sufficient precision is mathematical.

In the paper, Gale and Shapley considered a problem of matching (or assignment, as they called it) of applicants to colleges, where each applicant has his own preference over colleges and each college has its preference over applicants. Moreover, each college has a quota. Here is the definition of stability, taken from the original paper:

Definition: An assignment of applicants to colleges will be called unstable if there are two applicants α and β who are assigned to colleges A and B, respectively, although β prefers A to B and A prefers β to α.

According to the Gale-Shapley algorithm, applicants apply to colleges sequentially, following their preferences. A college with quota q maintains a `waiting list’ of size q with the top q applicants who have applied to it so far, and rejects all other applicants. When an applicant is rejected by a college he applies to his next favorite college. Gale and Shapley proved that the algorithm terminates with a stable assignment.
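To make the procedure concrete, here is a minimal Python sketch of the applicant-proposing version. The data layout and names are my own illustration, not from the paper, and strict preferences (no ties) are assumed, as in Gale and Shapley.

```python
def deferred_acceptance(applicant_prefs, college_prefs, quotas):
    """applicant_prefs[a]: list of colleges, most preferred first.
    college_prefs[c]: list of applicants, most preferred first.
    quotas[c]: number of seats at college c."""
    rank = {c: {a: i for i, a in enumerate(prefs)}
            for c, prefs in college_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # next college on a's list
    waitlist = {c: [] for c in college_prefs}      # tentative admits
    unmatched = list(applicant_prefs)
    while unmatched:
        a = unmatched.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                               # a has exhausted his list
        c = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        # c keeps the top quotas[c] applicants seen so far, rejects the rest
        waitlist[c].append(a)
        waitlist[c].sort(key=lambda b: rank[c][b])
        if len(waitlist[c]) > quotas[c]:
            unmatched.append(waitlist[c].pop())    # reject the worst
    return {c: sorted(ws) for c, ws in waitlist.items()}

# a tiny instance: college A has one seat, college B has two
match = deferred_acceptance(
    {'alpha': ['A', 'B'], 'beta': ['A', 'B'], 'gamma': ['B', 'A']},
    {'A': ['beta', 'alpha', 'gamma'], 'B': ['alpha', 'gamma', 'beta']},
    {'A': 1, 'B': 2})
print(match)  # {'A': ['beta'], 'B': ['alpha', 'gamma']}
```

Each applicant proposes at most once to each college, so the loop terminates after finitely many proposals, and the resulting assignment is stable.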

One reason the paper was so successful is that the Gale-Shapley method is actually used in practice. (A famous example is the National Resident Matching Program, which assigns budding physicians to hospitals.) From a theoretical perspective, my favorite follow-up is a paper of Dubins and Freedman, “Machiavelli and the Gale-Shapley Algorithm” (1981): suppose that some applicant, Machiavelli, decides to `cheat’ and apply to colleges in a different order than his true ranking. Can Machiavelli improve his position in the assignment produced by the algorithm? Dubins and Freedman prove that the answer is no.

Shapley’s contribution to game theory is too vast to cover in a single post. Since I mainly want to say something about his mathematics, let me mention the Shapley-Folkman-Starr Lemma, a kind of discrete analogue of Lyapunov’s theorem on the range of non-atomic vector measures, and the KKMS Lemma, whose meaning I still don’t understand, but it has something to do with fixed points, and Yaron and I used it in our paper about rental harmony.

I am going to talk in more detail about stochastic games, introduced by Shapley in 1953, since this area has been flourishing recently with some really big developments. A (two-player, zero-sum) stochastic game is given by a finite set S of states, finite sets of actions for the players, a period payoff function u(s,a,b), a distribution q(·|s,a,b) over S for every state s and pair of actions a,b, and a discount factor β. At every period the system is at some state s, and the players choose actions a and b simultaneously and independently. Then the column player pays u(s,a,b) to the row player. The game then moves to a new state in the next period, randomized according to q(·|s,a,b). Players evaluate their infinite stream of payoffs via the discount factor β. The model is a generalization of the single-player dynamic programming model which was studied by Blackwell and Bellman. Shapley proved that every zero-sum stochastic game admits a value, by imitating the familiar single-player argument, which has been the joy and pride of macroeconomists ever since Lucas’ asset pricing model (think Bellman equation and contraction operators). Fink later proved, using similar ideas, that non-zero-sum discounted stochastic games admit Markov perfect equilibria.
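Shapley’s contraction argument lends itself to computation. Below is a rough sketch of mine (not Shapley’s own notation or code): value iteration in which each step solves the one-shot matrix game by linear programming. The function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M (row player maximizes)."""
    m, n = M.shape
    # variables: row mixed strategy x_1..x_m and the value v; minimize -v
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # for every column j:  v - sum_i x_i * M[i, j] <= 0
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                        # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[-1]

def shapley_value_iteration(u, q, beta, n_iter=100):
    """u[s]: stage payoff matrix at state s; q[s][a, b]: distribution over
    next states after actions (a, b) at state s; beta: discount factor."""
    V = np.zeros(len(u))
    for _ in range(n_iter):                  # a beta-contraction, so it converges
        V = np.array([matrix_game_value(
            u[s] + beta * np.tensordot(q[s], V, axes=([2], [0])))
            for s in range(len(u))])
    return V
```

For a one-state game the recursion reduces to V = val(M) + βV; discounted matching pennies, for instance, has value 0.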

A major question, following a similar question in the single-player setup, is the limit behavior of the value and the optimal strategies as the players become more patient (i.e., as β goes to 1). Mertens and Neyman proved that the limit exists, and moreover that for every ε>0 there are strategies which are ε-optimal for every sufficiently large discount factor. Whether a similar result holds for Nash equilibria in n-player stochastic games is probably the most important open question in game theory. Another important question is whether the limit of the value exists for zero-sum games in which the state is not observed by both players. Bruno Ziliotto has recently answered this question in the negative by providing a counterexample. I should probably warn that you need to know how to count, and also some calculus, to follow this literature. Bruno Ziliotto will give the Shapley Lecture at Games2016 in Maastricht. Congrats, Bruno! And thanks to Shapley for leaving us with so much stuff to play with!

One of the highlights of last year’s Stony Brook conference was John Milnor’s talk about John Nash. The video is now available online. John Milnor, also an Abel Prize laureate, is familiar to many economists from his book “Topology from the Differentiable Viewpoint”. He knew John Nash from the beginning of their careers, in the common room of the mathematics department at Princeton University. Watch him talk with clarity and humor about young Nash’s ambitions, creativity, courage, and contributions.

Here is the handwritten letter which Nash wrote to the NSA in 1955 (pdf), fifteen years before Cook formalized the P/NP problem. In the letter, Nash conjectures that for most encryption mechanisms, recovering the key from the cipher requires an exponential amount of time. And here is what Nash had to say about proving this conjecture:

Credit for the game that bears his name is due to Borel. It appears in a 1921 paper in French. An English translation (by Leonard Savage) may be found in a 1953 issue of Econometrica.

The first appearance in print of a version of the game with Colonel Blotto’s name attached is, I believe, in The Weekend Puzzle Book by Caliban (June 1924). Caliban was the pen name of Hubert Phillips, one-time head of Economics at the University of Bristol and a puzzle contributor to The New Statesman.

Blotto itself is a slang word for inebriation. It does not, apparently, derive from the word `blot’, meaning to absorb liquid. One account credits a French manufacturer of delivery tricycles (Blotto Freres, see the picture) that were infamous for their instability. This inspired Laurel and Hardy to title one of their movies Blotto. In it they get blotto on cold tea, thinking it whiskey.

Over time, the Colonel has been promoted: in 2006 to General, and in 2011 to Field Marshal.

Duppe and Weintraub date the birth of Economic Theory to June 1949, when Koopmans organized the Cowles Commission Activity Analysis Conference. It is also counted as conference zero of the Mathematical Programming Symposium. I mention this because the connections between Economic Theory and Mathematical Programming and Operations Research had, at one time, been very strong. The conference, for example, was conceived of by Tjalling Koopmans, Harold Kuhn, George Dantzig, Albert Tucker, Oskar Morgenstern, and Wassily Leontief with the support of the Rand Corporation.

One of the last remaining links to this period who straddled, like a Colossus, both Economic Theory and Operations Research, Herbert Eli Scarf, passed away on November 15th, 2015.

Scarf came to Economics and Operations Research by way of Princeton’s mathematics department. Among his classmates were Gomory of cutting plane fame, Milnor of topology fame, and Shapley. Subsequently, he went on to Rand (Dantzig, Bellman, Ford & Fulkerson). While there he met Samuel Karlin and Kenneth Arrow, who introduced him to inventory theory. It was in this subject that Scarf made the first of many important contributions: the optimality of (S, s) policies. He would go on to establish the equivalence of the core and competitive equilibrium (jointly with Debreu), identify a sufficient condition for non-emptiness of the core of an NTU game (now known as Scarf’s Lemma), anticipate the application of Groebner bases in integer programming (neighborhood systems), and, of course, write his magnificent `Computation of Economic Equilibria’.

Exegi monumentum aere perennius regalique situ pyramidum altius, quod non imber edax, non Aquilo impotens possit diruere aut innumerabilis annorum series et fuga temporum. Non omnis moriar…….

I have finished a monument more lasting than bronze and higher than the royal structure of the pyramids, which neither the destructive rain, nor wild North wind is able to destroy, nor the countless series of years and flight of ages. I will not wholly die………….

Three items on copyright and revenue all on the same day.

First, is Taylor Swift’s open letter to Apple upbraiding them for not paying royalties to artists for their music during the trial period of its new streaming music service. It caused the weenies at Apple to change their tune.

Second, a high court ruling in the UK which erased an earlier UK decision that made it lawful for users to copy purchased content for personal use. Related is freedom of panorama which permits the photographing of copyrighted buildings and sculptures in public places. Up for vote this summer before the European parliament is legislation that would restrict such rights.

The Swift letter echoes the points she made earlier when she pulled her wares from Spotify:

“In my opinion, the value of an album is, and will continue to be, based on the amount of heart and soul an artist has bled into a body of work, and the financial value that artists (and their labels) place on their music when it goes out into the marketplace.”

One of the more poetic renditions of the labor theory of value I’ve read. Here is another line from the same missive:

“Valuable things should be paid for.”

No. It’s the added value of a good or service that commands a premium. Pearsall-Smith got this right when writing of the novelists of his age.

“The diction, the run of phrase of each of them seems quite undistinguishable from that of the others, each of whose pages might have been written by any one of his fellows.”

Thus, the question is whether the heart and soul each artist bleeds into their work serve to differentiate it in a way that matters from others. The effectiveness of music recommender systems suggests not.

Enough of `Swiftian’ logic; let’s turn to the UK high court ruling. The Electronic Frontier Foundation complained that it contained more economic theory than common sense. An irritating remark, as the level of theory barely exceeded what you would find in an intermediate microeconomics course. It makes me wonder whether the pundits at the EFF ever went to college.

The ruling is a perfect example of how consistency can become a procrustean bed. The UK government had earlier made the duplication of copyrighted material for personal use legal. It claimed that its reasons for doing so were consistent with an EU copyright directive that requires the copyright holder to be compensated for revenues lost to copying. The judge concluded that the government’s ruling was, in fact, inconsistent with the EU directive and overturned it, making copying for personal use illegal.

The law, as Dickens said, is an ass (the quadruped, not the posterior). So let’s focus on the economics. The ruling, by the way, quotes Varian’s 2005 piece in the Journal of Economic Perspectives as well as Boldrin and Levine.

Suppose I sell you a song in a medium which is costly to reproduce and transport. If you want to listen to the song both at home and in your office you must purchase two copies. Now, a sea change. The medium on which the song is transmitted changes. The cost of duplication and transport is now zero. Am I worse off? If I am, then under the EU directive I should be compensated for this loss.

With this sea change, you would buy one fewer copy. However, I, recognizing that the sea change gives you the same benefit as buying two copies, can simply raise my price to account for this. The High Court ruling called this pricing-in, and the case turned upon whether the music seller, me in this example, could perfectly price-in. If not, then under the EU directive I am entitled (bizarre, I know) to compensation for lost profits.

If the sea change allows you to consume music in ways you previously could not (in the bathroom, in your car at night, etc.), then it seems obvious that I could anticipate this and price-in. If the sea change allows you to copy and distribute my music costlessly, then I may be forced to sell my music at a discount or withhold it (see the Varian paper for intermediate cases). Whether I am harmed or not depends on whether you intend to use the sea change for personal use or to compete with me.

Interestingly, the discussion in the ruling, as well as Varian’s paper, ignores those who own the devices for transmitting, duplicating, storing and playing the music. Let’s use the example in Varian, pg. 11. You are willing to pay $20 for home use of a CD and $10 for office use (actually 10 to break ties). The cost of copying is initially infinite.

The revenue-maximizing price is clearly $20 for a CD, unless I could use a 2-part tariff. Now a third party develops a technology for copying CDs that is simple and convenient. Copying is now legal. Under the pricing-in story I should just charge $30 (assuming you have the technology). I’m better off and you are no worse off. However, we have ignored the owner of the copying technology. You, the music consumer, have $30 to shell out. I can certainly capture $20 of it, but to capture the remaining $10 I need the owner of the copying technology. Any split of the $10 is a Nash equilibrium. The point is that the technology that allows one to copy, format-shift, etc. complements the music itself. That $10 is a joint gain to the owner of the song as well as the owner of the copying technology. One might argue that the owner of the copying technology is entitled to the full $10, as it is her innovation that allowed one to capture it. Hence, the copyright holder, me in this example, suffers no loss from the fact that you can now copy my music.
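For what it is worth, the arithmetic of the example can be spelled out in a few lines. This is a toy sketch; the variable names are mine, and the numbers are the $20/$10 figures from the text.

```python
home_wtp, office_wtp = 20, 10  # willingness to pay for home and office use

# Before the copying technology: each use requires its own CD, and the
# revenue-maximizing single price is $20 (one sale, to the home user).
revenue_before = home_wtp

# After copying becomes easy and legal: one CD serves both uses, so the
# seller can "price in" and charge up to the combined willingness to pay.
price_after = home_wtp + office_wtp      # $30

# The extra $10 is created jointly by the song and the copying technology;
# any split of it between the two owners is a Nash equilibrium.
joint_gain = price_after - revenue_before
print(joint_gain)  # 10
```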

See the notice on the Game Theory Society website.

Yanis Varoufakis, the Greek finance minister, writes in the Feb 16 NY Times:

Game theorists analyze negotiations as if they were split-a-pie games involving selfish players. Because I spent many years during my previous life as an academic researching game theory, some commentators rushed to presume that as Greece’s new finance minister I was busily devising bluffs, stratagems and outside options, struggling to improve upon a weak hand.

Is this a case of a theorist mugged by reality, or of someone who misunderstands theory? The second. The first sentence quoted proves it, because it’s false. Patently so. Yes, there are split-a-pie models of negotiation, but they are not the only models. What about models where the pie changes in size with the investments made by the players (i.e., double marginalization)? Wait, this is precisely the situation that Varoufakis sees himself in:

…….table our proposals for regrowing Greece, explain why these are in Europe’s interest…….

He continues:

`If anything, my game-theory background convinced me that it would be pure folly to think of the current deliberations between Greece and our partners as a bargaining game to be won or lost via bluffs and tactical subterfuge.’

Bluff and subterfuge are not the only arrows in the Game Theorist’s quiver. Commitment is another. Wait! Here is Varoufakis trying to signal commitment:

Faithful to the principle that I have no right to bluff, my answer is: The lines that we have presented as red will not be crossed. Otherwise, they would not be truly red, but merely a bluff.

Talk is cheap but credible commitments are not. A `weak’ type sometimes has a strong incentive to claim they are committed to this much and no more. Thus, Varoufakis’ claim that he does not bluff rings hollow, because a liar would say as much. Perhaps Varoufakis should dust off his Schelling and bone up on his signaling as well as war of attrition games. Varoufakis may not bluff, but his negotiating partners think he does. Protestations to the contrary, appeals to justice, Kant and imperatives are simply insufficient.

He closes with this:

One may think that this retreat from game theory is motivated by some radical-left agenda. Not so. The major influence here is Immanuel Kant, the German philosopher who taught us that the rational and the free escape the empire of expediency by doing what is right.

Noble sentiments, but Kant also reminded us that

“Out of the crooked timber of humanity, no straight thing was ever made.”

My advice to Varoufakis: more Game Theory, less metaphysics.

Finally, if too late. It was announced October 3rd. For more on Blackwell see this post or the special GEB issue.

In Harsanyi games with incomplete information, also known as Bayesian games, each player has a type. The type of the player describes all that he knows and believes about the situation he faces: who are the players, what are his and their available actions, what are his and their utility functions, and what are the beliefs of the other players about the situation.

Since the player’s type describes his knowledge and beliefs, a player always knows his own type. But a player need not know the other players’ types. Indeed, a chess player knows his own abilities, but he may not know the level of his opponent: he may ascribe probability 1/3 to the event that his opponent is familiar with the Benko gambit, and probability 2/3 to the event that the opponent is not familiar with this opening.

In a Bayesian game, a chance move selects a vector of types, one type for each player, according to a known probability distribution *p* at the outset of the game. Each player learns his own type, but he does not know the types chosen for the other players. He does have a belief about the other players’ types, which is the conditional distribution of *p* given his own type.

The Bayesian game is an auxiliary construction. In reality there is no chance move that selects the players’ types: the knowledge and beliefs each player is equipped with determine his or her type. Bayesian games are merely a way to model the incomplete information each player has about the other players’ types. Thus, the true situation the players face is the situation after the vector of types was selected, which is called the *interim stage*. The situation before the vector of types is chosen, which is called the *ex ante stage*, is the mathematical way that Harsanyi found to model the game.

Consider now the following Bayesian game, which depends on a real number *a* in the unit interval (below, all additions and subtractions are modulo 1). There are two players; the type space of each player is the unit interval [0,1]. The types of the players are correlated: if player 1 has type x, then he believes that player 2’s type is either x or x+a (each with probability 1/2); if player 2 has type x, then he believes that player 1’s type is either x or x-a (each with probability 1/2). This belief structure can be described by a common prior distribution: the types of the two players are chosen according to the uniform distribution over the following set T (this is a variation of an example of Ehud Lehrer and Dov Samet):

If player 1’s type is *x*, then he believes that player 2 may be of type *x* or *x+a*. It follows that player 2 may believe that player 1’s type is *x-a*, *x*, or *x+a*. So player 2 may believe that player 1 believes that player 2’s type is *x-a*, *x*, *x+a* or *x+2a*. When the situation is a game, to decide how to play, player 1 needs to take into account all types of player 2 (and of himself) of the form {*x+na*, *n* is an integer}. This set is finite if *a* is a rational number, and countable if *a* is an irrational number. Denote by Zx the set of all pairs of types {(*x+na, x+na*), *n* is an integer} union with {(*x+na, x+(n+1)a*), *n* is an integer}. The set Zx is called the *minimal belief subspace* of player 1. In the interim stage, after his type was selected and told to him, player 1 knows that the type vector is in Zx, that only type vectors in Zx appear in the belief hierarchy of player 2, and therefore he can think about the situation as if the Bayesian game is restricted to Zx: a type vector in Zx was chosen according to the conditional distribution over Zx. To determine how to play, player 1 should find an equilibrium in the game restricted to Zx.
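The finite/countable dichotomy for the set of relevant types {x+na mod 1} is easy to check numerically. Here is a small sketch; the step cutoff, the tolerance, and the function name are illustrative choices of mine.

```python
import math

def orbit(x, a, max_steps=1000, tol=1e-9):
    """Types reachable from x by repeatedly adding a, modulo 1."""
    points = []
    t = x % 1
    for _ in range(max_steps):
        if any(abs(t - p) < tol for p in points):
            break              # the orbit has closed: finitely many types
        points.append(t)
        t = (t + a) % 1
    return points

# rational a: a type-x player need only reason about finitely many types
print(len(orbit(0.1, 1/3)))                  # 3
# irrational a: the orbit never closes, so the belief subspace is countable
print(len(orbit(0.1, math.sqrt(2) - 1)))     # 1000 (hits the step cutoff)
```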

The uniform distribution over the set T that appears in the figure above induces a probability distribution over Zx. When Zx is finite (i.e., when *a* is rational), this is the uniform distribution over a finite set. Alas, when Zx is countable (*a* irrational) there is no uniform distribution over Zx. In particular, the interim stage is not well defined! Thus, even though the interim stage is the actual situation the players face, and even though they can describe their beliefs using a Harsanyi game with a larger type space, the situation they face cannot be described as a Harsanyi game if they take into account only the types that are possible according to their information.

It is interesting to note that one can find a Bayesian equilibrium in the game restricted to Zx, for every x. However, when one tries to “glue” these equilibria together, one might find out that the resulting pair of strategies over [0,1] is not measurable, and in particular an equilibrium in the Harsanyi game (over T) need not exist. This finding was first noted by Bob Simon.

Since indeed the true situation the players face is the interim stage, and the ex ante stage is merely an auxiliary construction, how come the ex ante stage does not define a proper game in the interim stage? If this is the case, is the auxiliary construction of Harsanyi game over T the correct one?
