You are currently browsing the monthly archive for April 2010.

[Update 5/17:  Thank you for the many interesting comments!  Please see my follow-up posted today.]

In a professional basketball game, a player is disqualified (“fouls out”) if he is charged with 6 personal fouls.  Observers of the NBA know that the direct effect of fouling out actually matters less than the indirect effect of “foul trouble.”  That is, if a player has a dangerous number of fouls, the coach will voluntarily bench him for part of the game, to lessen the chance of fouling out.  Coaches seem to roughly follow the rule of thumb that a player with n fouls should sit until n/6 of the game has passed.  Allowing a player to play with 3 fouls in the first half is a particular taboo.  On the rare occasions when this taboo is broken, the announcers will invariably say something like, “They’re taking a big risk here; you really don’t want him to get his 4th.”

Is the rule of thumb reasonable? No!  First let’s consider a simple baseline model:  Suppose I simply want to maximize the number of minutes my star player is in the game.  When should I risk putting him back in the game after his nth foul?  It’s a trick question: I shouldn’t bench him at all!  Those of you who haven’t been brainwashed by the conventional wisdom on “foul trouble” probably find this obvious.  The proof is simple: if he sits, the only thing that has changed when he gets back in is that there is less time left in the game, so his expected minutes have clearly gone down.  In fact, the new distribution on minutes is first-order stochastically dominated, being just a truncation of the alternative.  This assumes only that his likelihood of getting a foul is time-invariant, which seems reasonable.
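
To see the dominance argument in numbers, here is a quick Monte Carlo sketch. The foul rate (0.12 fouls per minute on the floor, about 5.8 expected fouls over 48 minutes) is an assumption chosen for illustration, not an estimate:

```python
import random

def minutes_played(bench_rule, foul_rate=0.12, game_len=48.0):
    """Simulate one game for one player. Fouls arrive as a Poisson process
    (per minute on the floor); he fouls out at 6. bench_rule(n) gives the
    earliest game time at which a player with n fouls is allowed back in."""
    t, fouls, played = 0.0, 0, 0.0
    while fouls < 6:
        t = max(t, bench_rule(fouls))        # sit until the rule allows
        if t >= game_len:
            break
        gap = random.expovariate(foul_rate)  # time until his next foul
        if t + gap >= game_len:
            played += game_len - t           # no more fouls this game
            break
        played += gap
        t += gap
        fouls += 1
    return played

no_bench = lambda n: 0.0                 # never bench voluntarily
rule_of_thumb = lambda n: n / 6 * 48.0   # sit until n/6 of the game has passed

random.seed(1)
trials = 20000
avg_no_bench = sum(minutes_played(no_bench) for _ in range(trials)) / trials
avg_rule = sum(minutes_played(rule_of_thumb) for _ in range(trials)) / trials
# avg_no_bench exceeds avg_rule: benching can only truncate the
# distribution of minutes, exactly as the dominance argument says.
```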

OK, while I believe the above argument is very relevant, it oversimplified the objective function, which in practice is not simply to maximize minutes.  I’ll discuss caveats now, but please note, there is tremendous value in understanding the baseline case.  It teaches that we should pay attention to foul trouble only insofar as our objective is not to maximize minutes.  I am very comfortable asserting that coaches don’t understand this!

First caveat: players are more effective when rested.  In fact, top stars normally play about 40 of 48 minutes.  If it becomes likely that a player will be limited to 30-35 minutes by fouling out, we may be better off loading those minutes further towards the end of the game to maximize his efficiency.  Notice, though, that this doesn’t lead to anything resembling the n/6 rule of thumb.  It says we should put him back in, at the very latest, when he is fully rested, and this isn’t close to what is done in practice.  In fact players often sit so long the rest may have a negative impact, putting them “out of the flow of the game.”

Second caveat: maybe not all minutes are created equal.  It may be particularly important to have star players available at the end of the game.  On a practical level, the final minute certainly has more possessions than a typical minute, but it also has more fouls, so maybe those effects cancel out.  I think the primary issue is more psychological: there is a strong perception that you need to lean more on your superstars at the end of the game.  I think this issue is drastically overrated, partly because it’s easy to remember losing in the last minute when a key player has fouled out, while it is a more silent poison to lose because you were down going into that minute, having rested him too long.  By the way, my subjective sense is that the last possession is more similar to any other possession than conventional wisdom suggests: a wide-open John Paxson or Steve Kerr is a better bet than a double-teamed Michael Jordan any time in the game.  On a couple of major occasions, Jordan agreed.  This isn’t to understate the star’s importance in scoring and getting other players good shots, just to say that this is not necessarily more important in the final minutes.  You do often hear that a team will rise to the occasion when a star is injured or suspended, so even conventional wisdom wavers here.  Finally, note that the foul-trouble rule of thumb is applied also to players who aren’t the primary scorer, so this argument wouldn’t seem to apply to them.  I will give coaches a little credit: they do sometimes seem to realize that they shouldn’t worry about foul trouble for bench players who often don’t play at the end anyway.

One more psychological caveat: a player who just picked up a foul he thinks is unfair may be distracted and not have his head in the game immediately afterward.  This may warrant a brief rest.

Final note: Conventional wisdom seems to regard foul management as a risk vs. safety decision.  You will constantly hear something like, “a big decision here, whether to risk putting Duncan back in with 4 fouls.”  This is completely the wrong lens for the problem, since the “risky”* strategy is, with the caveats mentioned, all upside!  Coaches dramatically underrate the “risk” of falling behind, or losing a lead, by sitting a star for too long.  To make it as stark as possible, observe that the coach is voluntarily imposing the penalty that he is trying to avoid, namely his player being taken out of the game!  The most egregious cases are when a player sits even though his team is significantly behind.  I almost feel as though the coach prefers the certainty of losing to the “risk” of the player fouling out.  There may be a “control fallacy” here: it just feels worse for the coach to have a player disqualified than to voluntarily bench him, even if the result is the same.  Also, there is a bit of an agency/perception problem: the coach is trying to maximize keeping his job as well as winning, which makes him lean towards orthodoxy.

There are well-documented cases in the last decade of sports moving towards a more quantitative approach, so maybe there is hope for basketball strategy to change.  The foul-trouble orthodoxy is deeply ingrained, and it would be a satisfying blow for rationality to see it overturned.

*Final outcomes are binary, so the classical sense of risk aversion, involving a concave utility function in money, doesn’t apply at all.  But there is also a sense of what I call “tactical risk”: a decision may affect the variance of some variable on which your probability of final success depends in a convex (or concave) way.  I might write an essay sometime on the different meanings of “risk.”  Anyway, here you presumably should be risk-averse in your star’s minutes if ahead, risk-loving if behind.  But this is rendered utterly moot by first-order stochastic dominance!

This happens a lot: I am reading a paper, as usual going directly to the results and skipping the introduction, related literature, discussion, preliminaries, formal model, etc. And then there is some \alpha and I have no idea what it stands for. I would like to search for `\alpha’ in the pdf document, but if there is a way to do that, I have never heard of it.

So, imagine my delight when I heard of Springer’s LaTeX Search tool, which does something that I never even dared to wish for — search their database for an equation that contains a given piece of LaTeX code. Pretty awesome, isn’t it?

I tried some arbitrary code

i\hbar\frac{\partial}{\partial t}\Psi=\hat H\Psi

(the time-dependent Schrödinger equation), but apparently nobody has used this equation before.

So I tried something else: E=mc^2. Again no exact matches, but this time there are a couple of similar results.

Well, as Jeffrey Shallit said, it is, at least, a start.

Why do we study game theory? As a mathematician, my answer is that game theory is mathematically interesting; I am content as long as I can study interesting models, develop interesting techniques to solve problems, and prove difficult results.

But some of us are closer to the real world than I am, and they claim that game theory is related to real problems. However, as we all know, hardly any interactive situation that we encounter in real life fits any model of game theory. The prisoner’s dilemma, which anyone who talks about game theory mentions, was already discussed in this blog, and people noted that the matrix description of the game does not match the actual interaction: Are there no consequences to the prisoners’ decisions? Does the matrix correctly identify the prisoners’ utilities? Are the utilities common knowledge? I am sure that anyone who reads this post will be able to raise more issues with the game representation of the prisoner’s dilemma.

Auctions are another widely cited application of game theory, where a solid theory has been developed. But take the simplest auction: a sealed-bid first-price auction with two bidders; the bidders fight over a contract to supply specific goods to some firm. You may actually encounter such auctions in practice. Now, suppose that you are bidder 1 (or that you are a consultant who was hired to assist bidder 1). Your private value is $1M: this is the minimum amount that you would ask for the contract. You believe that your private value is below the other bidder’s private value by 10%. Well, about 10%. Because the marketing people of the other bidder may have convinced the board of directors that their actual private value is lower than $1.1M. Maybe $1.05M, or even $1M. What is the probability that the private value of the opponent is $1.1M, or $1.05M, or $1M? Give me a break, nobody can say. And what does the other bidder think about your private value? And is the distribution of the private values common knowledge? As you see, the great theory that the books tell us about is not much help.
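
For contrast, here is what the textbook theory prescribes under exactly the kind of clean assumption the paragraph above calls unrealistic: with two bidders whose costs are independently uniform on [0, 1], the symmetric equilibrium bid in the reverse (procurement) first-price auction is b(c) = (c + 1)/2. A small numeric check of the best-response property, with an illustrative cost of 0.4:

```python
def expected_profit(bid, cost):
    """Profit from bidding `bid` with cost `cost`, when the rival's cost
    C ~ U[0,1] and the rival bids (C + 1)/2; the lowest bid wins.
    P(win) = P((C + 1)/2 > bid) = P(C > 2*bid - 1)."""
    win_prob = min(1.0, max(0.0, 2.0 - 2.0 * bid))
    return (bid - cost) * win_prob

cost = 0.4
grid = [b / 1000.0 for b in range(1001)]          # candidate bids in [0, 1]
best = max(grid, key=lambda b: expected_profit(b, cost))
# best comes out at 0.7 = (0.4 + 1)/2: bidding the equilibrium formula
# is indeed a best response to a rival who bids it.
```

Of course, the whole point of the post is that in practice you do not know that the rival's cost is uniform on [0, 1], or that this is common knowledge.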

Having said this, I do think that game theory is useful. In fact, very useful. And personally, I use it daily. As I see it, in any given interaction, game theory identifies aspects that each participant should consider before choosing an action. The basic model of game theory tells us that we should identify the game: who are the players, what are their actions, what are their goals. The area of Bayesian games tells us that we should think about the beliefs of the other players, their beliefs about other players’ beliefs, and so on. The area of repeated games tells us that cooperation can be sustained by means of threats. Sequential games draw our attention to commitments, subgame perfection, and reputation. Games played by automata help us understand the role of computational power. In short, as I see it, people who do “real-life” game theory develop models that provide insights concerning how to better understand various types of interactive situations. Personally, I like the insights that we as a group provide.

From the hub of the universe comes an initiative to identify `the world’s hardest unsolved problems in economics, psychology, government, sociology, and other social sciences.’ A starting list was generated by the usual roster of `big thinkers’. It is a sign of how late in the day it is that pygmies cast giant shadows.

It brought to mind the moon-ghetto metaphor from the late sixties: if we can put a man on the moon, why can’t we solve the problems of the ghetto? (It sounds better with Elvis playing in the background.) Putting a man on the moon is a well-defined problem in that we all agree on what constitutes a solution. Admittedly there are some details to be filled in. For example, does the man have to be alive, and does he need to be returned as such to Earth? The `problem of the ghetto’ is a different kettle of fish. What exactly is the problem? Are there many or one? What constitutes a solution? As I sometimes say to students and colleagues interested in economics: there are no open problems in economics, only issues.

Nevertheless, one can’t resist idle speculation. Designing the financial system to resist crashes? Nah. Voltaire’s observation on the occasion of Admiral Byng’s execution applies:

Dans ce pays-ci, il est bon de tuer de temps en temps un amiral pour encourager les autres. (In this country, it is good to kill an admiral from time to time, to encourage the others.)

A little blood-letting now and then is a good thing. Perhaps, rather than designing the system to resist crashes, the goal should be to limit the damage they do, or to make the system more resilient.

Alvaro Sandroni suggested the following:

How can genocide be prevented? What are the best ways to minimize the odds of ecological disasters? How can chronic malnutrition be eradicated?

Worthy goals. Condign measures and letting nature take its course?

I like the suggestion of my colleague Tim Feddersen best, since it gets, I think, to the heart of the matter.

How should values conflicts be resolved? Economics can suggest good mechanisms when a value is specified, and some mechanisms work well for a wide variety of values, but when values conflict, economics is silent. Now, perhaps, one can argue that the problem is unsolvable. But then we must confront the fact that all of us, as individuals and collectively, “solve” such problems regularly.

I wrote some time ago about Michael Rabin’s example of how a backward induction argument is killed when you require players’ plans to be computable. Since Rabin’s paper is not available online, I intended to post the proof at some point, but never actually got to do it. Richard Lipton’s post about Emil Post is a good inspiration to pay that debt since the concept of simple set, which was introduced by Post, has a role in Rabin’s example.

A set {S\subseteq\mathbb{N}} is simple if {S} is recursively enumerable and its complement {S^c} is infinite and contains no infinite recursively enumerable subset.

For Rabin’s purpose, what matters is that Post proved the existence of such a set. So Post provides us with the code of a computer program that enumerates the elements of {S}, and such that no computer program can produce infinitely many elements that are not in {S}. By the way, Post introduced the concept of a simple set while working on his eponymous problem, which Richard mentions. In fact, if {S} is a simple set then {S} is not many-one equivalent to a universal recursively enumerable set.

Now let {S} be a simple set and consider the following three-round game with perfect information. At each round a player announces a natural number (an action): First Alice announces {n}. Then Bob announces {m} and then Alice announces {t}. After the game is played, a winner is determined as follows: Bob wins if {m>n} and {m} is not the {t}-th element in the enumeration of {S}. Otherwise, Alice wins. Therefore, determining the winner is a task that can be effectively done using the code provided by Post.
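
As a sketch of why the winner can be determined effectively: the referee needs nothing beyond Post’s enumeration program. The `demo_enum` below is a purely hypothetical stand-in (the even numbers are certainly not a simple set, since their complement is recursively enumerable); only the decision rule matters:

```python
def winner(n, m, t, enum_S):
    """Decide Rabin's game. enum_S(t) returns the t-th element of a fixed
    recursive enumeration of the simple set S (Post's program).
    Bob wins iff m > n and m is not the t-th enumerated element."""
    if m > n and enum_S(t) != m:
        return "Bob"
    return "Alice"

# Stand-in enumeration for illustration only -- NOT a simple set.
demo_enum = lambda t: 2 * t

winner(3, 5, 4, demo_enum)   # "Bob": 5 > 3 and demo_enum(4) = 8 != 5
winner(3, 8, 4, demo_enum)   # "Alice": demo_enum(4) = 8 equals m
```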

Does either player have a strategy that guarantees winning regardless of how the opponent plays? The answer to this question depends on which mathematical creature you use to model the intuitive notion of a `strategy’.

If by a strategy you mean just a function from histories to current action, then Bob has a winning strategy: for every {n} that Alice might choose in the first round there exists some {m>n} such that {m\notin S}, since the complement of {S} is infinite; with such an {m}, Bob wins no matter which {t} Alice announces.

But if you think of a strategy as a complete set of instructions, that is, a plan or an algorithm that will tell you how to produce your next action given the past, then you want to model a strategy as a computable function. Now, for every computable function {f:\mathbb{N}\rightarrow\mathbb{N}} that Bob can choose, there exists some {m} such that either {f(m)\leq m} or {f(m)\in S}, since otherwise {f(0),f(f(0)),f(f(f(0))),\dots} would be a recursive enumeration of infinitely many elements of {S^c}. In particular, there exists some {t} such that if Alice plays {m} in the first stage and {t} in the third stage she wins against {f}. So Bob has no winning strategy.

Every time I visit the holy land I am surprised by how few Mac users there are. I suppose one explanation is that, as much as it pains me to admit it, Microsoft does a better job with Hebrew than Mac OS. Another explanation is that there is only one seller of Apple products, and prices are the same in all stores. Israelis are not necessarily reluctant to spend money, but they like to compare prices and to bargain. In the PC market there are sufficiently many manufacturers, stores and “special offers” to give all the buyers the utility from believing they got a better deal than their brothers. (I am an exception: I don’t care how much I pay, and I hate bargaining. I only need to know that everybody pays the same price. But then, I am a Mac user.)

These explanations notwithstanding, I still think the weak presence of MacBooks in Israel is an anomaly. There are simply too many tech-savvy people who do their aimless surfing and write their first novels on a PC. In fact, I am sometimes the only person sitting around in Arcaffe with a MacBook!

So suppose you are Apple and you want to claim ownership of these customers, who by right and tradition belong to you. There is an easy and cheap way to get the attention of this lot, which brings me to the decision of the Israeli Ministry of Communications (yes, there is such a thing; we have ministers for everything and also ministers for nothing) to confiscate iPads from travelers entering Israel. I have already read several commenters on Israeli websites suggesting the most obvious theory about this decision: it’s a marketing trick. Somebody at Apple has `accidentally’ forgotten to provide the necessary documents, or found some other way to convince the government that the iPad does not comply with some standard, so that in a couple of weeks, when the ban is lifted just in time for the iPad’s official release in Israel, it will be labeled as the cool gadget that frightened the dumb bureaucrats of the Ministry of Communications. Already on my next trip to the homeland in June, everybody in Tel Aviv will be using an iPad.

Every function f defined over the real numbers is the sum of a symmetric function g and an anti-symmetric function h. Simply define g(x) = (f(x) + f(-x))/2 and h(x) = (f(x) - f(-x))/2. It is easy to verify that f(x) = g(x) + h(x) for every x, that g(x) = g(-x), and that h(x) = -h(-x).
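
The verification really is two lines; the particular f below is just an arbitrary example:

```python
def decompose(f):
    """Split f into its symmetric and anti-symmetric parts."""
    g = lambda x: (f(x) + f(-x)) / 2   # symmetric:      g(x) = g(-x)
    h = lambda x: (f(x) - f(-x)) / 2   # anti-symmetric: h(x) = -h(-x)
    return g, h

f = lambda x: x**3 + 2 * x**2 - 5
g, h = decompose(f)
# g recovers the even part 2x^2 - 5, h recovers the odd part x^3,
# and g(x) + h(x) = f(x) everywhere.
```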

Similarly, every two player game G is the sum of a game with common interests G_c and a zero-sum game G_z. If u^1 and u^2 are the payoff functions of the two players in the game G, u^1_c and u^2_c are the payoff functions in the game with common interests G_c, and u^1_z and u^2_z are the payoff functions in the zero-sum game G_z, simply define:
u^1_c(a^1,a^2) = (u^1(a^1,a^2) + u^2(a^1,a^2))/2
u^2_c(a^1,a^2) = (u^1(a^1,a^2) + u^2(a^1,a^2))/2
u^1_z(a^1,a^2) = (u^1(a^1,a^2) - u^2(a^1,a^2))/2
u^2_z(a^1,a^2) = (-u^1(a^1,a^2) + u^2(a^1,a^2))/2
As for the functions f, g, and h of the first paragraph, one can easily verify that u^1 = u^1_c + u^1_z, that u^2 = u^2_c + u^2_z, that u^1_c = u^2_c, and that u^1_z = -u^2_z.
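
The four formulas amount to a cell-by-cell computation on the payoff matrices; a minimal sketch, using the standard prisoner’s dilemma payoffs as an illustrative example:

```python
def decompose_game(u1, u2):
    """Split a two-player matrix game into a common-interest part and a
    zero-sum part, cell by cell."""
    cells = lambda op: [[op(a, b) for a, b in zip(r1, r2)]
                        for r1, r2 in zip(u1, u2)]
    uc  = cells(lambda a, b: (a + b) / 2)   # u^1_c = u^2_c
    uz1 = cells(lambda a, b: (a - b) / 2)   # player 1's zero-sum payoff
    uz2 = cells(lambda a, b: (b - a) / 2)   # u^2_z = -u^1_z
    return uc, uz1, uz2

# Prisoner's dilemma payoffs (u1 for the row player, u2 for the column player)
u1 = [[3, 0], [5, 1]]
u2 = [[3, 5], [0, 1]]
uc, uz1, uz2 = decompose_game(u1, u2)
# uc  = [[3.0, 2.5], [2.5, 1.0]]
# uz1 = [[0.0, -2.5], [2.5, 0.0]], and uz2 = -uz1
```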

Adam Kalai and Ehud Kalai present this decomposition of games and study its implications here. In this paper they propose a new one-point solution concept for two-player games with transferable utility, which they call the COCO-value (for cooperative-competitive value). The coco-value is defined as follows: the players play the pair of actions that maximizes the sum of their utilities: that is, the pair of actions that is optimal for both in the game G_c with common interests. But the game G that the players actually play is not a game of common interests, so that some transfer of utility has to be made. The game G_z measures the difference in utility between the players; if its value is positive, it means in some sense that player 1 receives in G more than player 2. The value of G_z is then the amount that player 2 pays to player 1. Thus, the coco-value of the game G is a pair of payments, one for each player: each gets the maximal payoff in G_c, and in addition player 2 pays to player 1 the value of G_z.
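
A sketch of the computation, with one caveat: the value of G_z in general requires solving a linear program over mixed strategies, so the helper below handles only the case of a pure saddle point, which happens to suffice for the prisoner’s dilemma example:

```python
def pure_value(z):
    """Value of the zero-sum game with row-player payoff matrix z,
    assuming a pure saddle point exists (the general case needs an LP
    over mixed strategies)."""
    maximin = max(min(row) for row in z)
    minimax = min(max(col) for col in zip(*z))
    assert maximin == minimax, "no pure saddle point"
    return maximin

def coco_value(u1, u2):
    # Best common-interest payoff: maximize (u1 + u2)/2 over action pairs.
    team = max((a + b) / 2 for r1, r2 in zip(u1, u2) for a, b in zip(r1, r2))
    # Zero-sum advantage game; its value is the transfer from player 2 to 1.
    z = [[(a - b) / 2 for a, b in zip(r1, r2)] for r1, r2 in zip(u1, u2)]
    v = pure_value(z)
    return (team + v, team - v)

# For the prisoner's dilemma the zero-sum part has a pure saddle point with
# value 0, so there is no transfer and each player gets the cooperative payoff.
coco_value([[3, 0], [5, 1]], [[3, 5], [0, 1]])   # (3.0, 3.0)
```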

The two Kalais go on to provide an axiomatization of their solution concept, as well as an implementation of it as an equilibrium of a properly defined game.

I like the paper and the intuition that underlies the concept. As a person who likes cooperation, I think that it presents an interesting solution concept for people like me, who prefer cooperation to competition, yet take into account the fact that the interests of the players are not the same.

Happily, a few follow-up questions bother me: what about games with more than two players? Do we need to take into account two-player games when defining the coco-value of a three-player game? And what about stochastic games? What is the natural extension of the solution concept when states change? Will it involve strategy-proofness?

Blackwell and Girshick on the concept of strategy:

Imagine that you are to play the White pieces in a single game of chess, and that you discover you are unable to be present for the occasion. There is available a deputy, who will represent you on the occasion, and who will carry out your instructions exactly, but who is absolutely unable to make any decisions of his own volition. Thus, in order to guarantee that your deputy will be able to conduct the White pieces throughout the game, your instructions to him must envisage every possible circumstance in which he may be required to move, and must specify, for each such circumstance, what his choice is to be. Any such complete set of instructions constitutes what we shall call a strategy.

Now think about an infinite game, like the repeated prisoner’s dilemma. If we take seriously the idea of a strategy as a set of instructions, then not every function from past histories to an action is something we would like to call a strategy, because not every function can be described by a set of instructions! This should be clear even before we formalize what instructions mean, simply because the set of possible `instructions’ is countable: every such set of instructions is just a sentence in English.

So what should be the formal definition of a strategy in these games, a definition that captures the intuition of a complete set of instructions specifying what your choice is to be for each possible circumstance? You know what I think.
