You are currently browsing the monthly archive for May 2010.

In chess, the 50-move rule, introduced in the 1500s, is designed to prevent a player from senselessly prolonging a game where he has a small advantage but no real hope of winning.  It says that if 50 moves pass without a capture or pawn move, either player may insist on a draw.  For some time it was thought that all winnable positions could be won or simplified within 50 moves, so that the 50-move rule would not change the value of any position given proper play.  Then in the 20th century some positions were discovered that can eventually be won, but not in 50 moves.  The rule was briefly modified to allow more moves when such a position arose, but eventually this was considered too complicated and the traditional rule returned in 1992 (thanks to Wikipedia for the history).

I propose a modification to the rule:  If Bob wants a draw under the 50-move rule but Ann thinks she can win, she may insist on an extension of 50 moves. There is a cost, however:  If Ann fails to win the game, she scores a loss rather than a draw.

The idea is to avoid an outcome I think most chess players would find contrary to the spirit of the game: Ann knows she can win in a few more moves, but the rule kicks in and it’s a draw.  When the extension is invoked (which would surely be quite rare), we know it’s not a waste of time — someone will win the game.  I don’t know what the wisest length is for the extension, but 50 seems reasonable.  It’s doubtful any human being would have the confidence to gamble on the extension if victory isn’t coming within 50 moves, anyway.  Of course, you could let the extension length depend on the position, but this would again result in an unwieldy rulebook, which was the objection to the variable-move rule that was briefly in effect.

By the way, the world record, established by computer search in 2008, for moves necessary to win a position with optimal play is an amazing 517!

This blog is ordinarily read by a small network in the economic theory community, so it was a jolt to see a thousand hits in a day for my “Foul Trouble” post shortly after we were linked by Tyler Cowen’s “Marginal Revolution” blog.  The post was also featured by columnists at NBC Sports and ESPN.  I even got a kind note from an executive VP of the Houston Rockets.  The Rockets were featured in a NYT magazine article by Michael Lewis last year as the “Moneyball” team of the NBA – that is, in recent years they have focused heavily on analytics.  The Rockets executive said that he enjoyed my article, but no word on to what extent they agree with me — trade secrets?  I’ll continue my thoughts here, but possibly they are way ahead of me.

As economists know, the point of a simple model is not to be “correct” but to serve as a framework for analysis.  As many comments noted, I only discussed some of the possible adjustments to the basic theory, and for those I did mention I only offered informal estimates as to whether the adjustment is small.  Does the baseline recommendation to ignore foul trouble have “so many caveats, it’s useless” as one comment complained?  I don’t think so, and, still not claiming exhaustiveness, will address two major classes of objections in more detail here.  One can be rebutted on theoretical grounds; the other has theoretical merit and we have to make some estimates to quantify it.

One class of objections has to do with tactical adjustments that either team makes when a player with foul trouble is in the game.  The endangered player may be more careful than usual, and the opponents may go out of their way to create contact with him.  While true, these observations don’t hold up as counterarguments.  Why?  I’ll explain this with game-theory jargon and then without.  The baseline argument says that if you bench the player your payoff gets worse for any fixed strategy-pair.  In a zero-sum game, this guarantees that the value of the game goes down, which is clear if you look at the definition of the minmax value.  That is, even after tactical adjustments you’ll still be worse off than originally.

The more intuitive explanation of this goes as follows: the baseline argument implies that if the player just ignores his foul trouble, the team is better off letting him play.  If he *correctly* takes more care to avoid fouling, the team must be *even better* off.  Now, we’re dealing with human beings, so we must acknowledge the possibility that he *incorrectly*, or perhaps selfishly, plays too contact-shy.  Well, then he needs a kick in the pants: “Look, I want to trust you and leave you in with foul trouble, but if you’re giving up layups I have to put your butt on the bench.”  If this doesn’t work, well, sure, you might have to sit him.  There is a human element.  But I like Jeff’s comment on this: If he knows he’s going to sit with 3, won’t he be timid when he has 2?  Also, maybe some players normally foul more than optimal and play *better* D with foul trouble.  (Suggested by Ken Nelson.)  The human element can cut both ways.  In the Michael Lewis article, the Rockets’ GM refers to a foul as the worst outcome of a defensive play, percentagewise.  A slight exaggeration (dunks are worse), but trying to avoid fouls can’t be all that bad.  A similar chain of reasoning applies to the other team going after your man; I’ll leave that argument to the reader.

Now, a stronger argument, which I dealt with imperfectly in my initial post, has to do with not all minutes being created equal.  The best reason behind this has to do with clock management.  Some stars are much better than ordinary players at getting off a decent shot quickly, which can be important not only for the trailing team, but for the leading team if they want to burn most of the shot clock and then get something off.  (Thanks to the comments, beginning with Jeff,  for pointing this out.)  Now, while I think all this is true, I want to make some caveats to this caveat:

1.  Clock management should apply almost exclusively to the last 2-3 minutes.  Here, my sentiments seem to be backed up by many coaches and ex-coach color commentators: when a team goes into a shell, even with a significant lead such as 10 points with 5 minutes to play, it is making a huge mistake.  Burning an extra 5-10 seconds doesn’t come close to compensating for getting a bad shot.  Similar comments apply to the trailing team: of course you should avoid sheer wasting of time (so, maybe bring the ball up the floor quicker), but the prime focus should be on getting the best possible shot, until you’re in a real crunch.

2. Given point 1 (even if you want to extend my window by a couple of minutes), the crunch-time argument only supports benching a player with 5 fouls, not with fewer.  Third-quarter and second-quarter minutes presumably are still created equal.

3. Ok, when should a player come back in with 5?  This depends on how much more value you give to the last few minutes and on his hazard rate for fouling out.  Assume that he has a hazard rate of r fouls per minute and that his value per unit time with t minutes remaining is given by a function a(t).  Note that I am thinking of a as the contribution to winning percentage, not to points.  The average value you get by putting him in with T minutes to go is
${ V(T) = \int_0^T a(t) e^{-r(T-t)} dt}$
and the derivative of this is given by
${V'(T) = e^{-rT}\left [\int_0^T a'(t)e^{rt} dt + a(0) \right ] }$
where the first term represents the cost of shifting his minutes further from the end (we assume a’ is negative) and the a(0) term is the possible benefit from extra total time (once we account for the shift cost, we can think of the extra time as coming at the end).
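As a sanity check, the closed-form derivative can be compared against a finite-difference derivative of the integral defining V.  (This sketch is my addition; the exponential shape chosen for a below is just an arbitrary smooth guesstimate for illustration, not a claim about real games.)

```python
import math

def V(T, a, r, n=20000):
    # V(T) = integral_0^T a(t) * exp(-r*(T-t)) dt, by the trapezoid rule
    h = T / n
    total = 0.5 * (a(0.0) * math.exp(-r * T) + a(T))
    for i in range(1, n):
        t = i * h
        total += a(t) * math.exp(-r * (T - t))
    return total * h

def V_prime(T, a, a_prime, r, n=20000):
    # Closed form: V'(T) = exp(-rT) * [ integral_0^T a'(t) e^{rt} dt + a(0) ]
    h = T / n
    total = 0.5 * (a_prime(0.0) + a_prime(T) * math.exp(r * T))
    for i in range(1, n):
        t = i * h
        total += a_prime(t) * math.exp(r * t)
    return math.exp(-r * T) * (total * h + a(0.0))

# Illustrative smooth a: value 4 at the buzzer, decaying toward 1
a = lambda t: 1.0 + 3.0 * math.exp(-t)
da = lambda t: -3.0 * math.exp(-t)
r, T, h = 0.125, 10.0, 1e-3

# Central finite difference of V should match the closed form
fd = (V(T + h, a, r) - V(T - h, a, r)) / (2 * h)
```

The two numbers agree to within numerical error, which is a useful check that the integration-by-parts step above was done correctly.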

A reasonable value for r is .125, representing 6 fouls in 48 minutes; naturally this would be higher for some players than others.  As for the shape of the function a, it is extremely hard to specify, and it makes sense to try various guesstimates.  The point of having a formula is to see how various assumptions about endgame importance correspond to policy, not to provide illusory precision.

To get ballpark figures, it’s useful and pretty harmless to let a be piecewise linear.  We can normalize it so that a(0)=A, a(t)=1 for t>=M, and a decreases linearly from A at t=0 to 1 at t=M.  That is, “ordinary minutes” are scaled to have value 1 and the very end has value A, with M minutes of “crunch time” whose value increases gradually until the end.  Then, for any T>=M, we get

${V'(T) = e^{-rT}\left [A - (A-1) \frac{e^{rM}-1}{rM} \right ] }$

One very interesting feature is that this derivative has the same sign for all T>=M.  This means that we should either save the player for crunch time or ignore the 5 fouls, but a compromise where we bring him in with more than M minutes left cannot be right.  Now, let’s look at what values of A are needed to support saving the player.  If M=3, and he’s unusually foul-prone so r=.15, a quick calculation gives A=4.8 as the value for which V’ becomes negative.  This seems unreasonably high to me.  Remember, there is a chance the game is not even close in the last minute, and then everyone’s value is much less.  I agree that it’s still more on average, but 5 times more is much too rich for me.  Anyway, you can plug your favorite values into the formula if you want to explore.
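If you want to plug in your own values without redoing the algebra, here is a small sketch (my own, using only the formula above): setting the bracketed term to zero gives the critical A as c/(c-1), where c = (e^{rM}-1)/(rM).

```python
import math

def critical_A(r, M):
    # V'(T) has the sign of A - (A-1)*c, where c = (e^{rM}-1)/(rM),
    # so V' turns negative exactly when A exceeds c/(c-1)
    c = (math.exp(r * M) - 1) / (r * M)
    return c / (c - 1)

# M = 3 minutes of crunch time, r = .15 for an unusually foul-prone player
print(round(critical_A(0.15, 3), 1))  # about 4.8, matching the text
```

Note that the critical A is even higher for a less foul-prone player (lower r), which makes sense: the cheaper fouling out is, the more extreme your view of crunch time must be to justify benching him.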

By the way, my result that you either save the 5-foul player for crunch time or ignore foul-trouble surprised me, especially as it is quite robust to different specifications for a.  When this happens I always stop and describe the math to myself verbally to see if it makes sense.  I have a profound distrust of formulas I can’t describe verbally; maybe this is what makes me an economist rather than another kind of mathematician.  Anyway, what the math is saying is that, by looking at the playing time until foul-out as out of your control, you can decompose the effect of putting him in an instant earlier as 1) shifting his minutes earlier (bad) and 2) possibly getting an extra instant of time at the end (good).   For instants prior to M, both of these marginal effects decay *at the same rate*.  That is, shifting his minutes earlier matters less because he may not even reach crunch time, and reaching the last minute is also less likely, but the overall sign of the derivative cannot change.  This actually made sense once I thought about it.    I think theorists live for the cases where the math teaches us something that we can understand by verbal logic, but would have likely missed without the formulas.

This entry has gone on long enough that I will not (at least right now) try to address many more of the many intelligent comments.  I will take a moment to agree with one point: after a foul, especially if it is a *stupid* foul or your guy is upset by the ref’s call, you might bench him briefly to try to get him back in the proper flow.  This is a far cry from sitting him for the whole second quarter with 3 fouls, though.

Though this post may look like a political one, it is not! The reader is invited to choose his favorite good, bad and ugly.

The state of Israel has a long history with organizations that try to hurt it.  One of the organizations that has fought Israel most fiercely in the last decade is Hezbollah, based in Lebanon.  Hezbollah used to regularly fire missiles at Israeli villages, and Israel used to regularly bombard military bases in Lebanon.  The usual warfare you would expect.

In the summer of 2006, retaliating for the capture of two Israeli soldiers by Hezbollah, the Israeli army was sent into Lebanon to hurt Hezbollah as much as it could.  After 33 days Israel had proved its ability to smash Hezbollah’s military capability.  Hezbollah stopped firing missiles at Israel.

This is history.  Where does game theory come into play?

After Hezbollah realized that its ability to fight the Israeli army is close to nothing, it renewed its missile stock and acquired much better missiles than it used to have.  Currently it holds accurate long-range missiles that can carry a lot of TNT.  The next time Israel attacks Lebanon, the whole of Israel will be bombarded; there will be no escape.  A problem for Israel.  But what about Hezbollah?  Suppose that Hezbollah renews firing missiles at Israeli civilian targets.  Israel, realizing that Hezbollah is actually going to use its missiles against the Israeli civilian population, may react before Hezbollah uses everything it has.  Because Hezbollah can fire missiles from all parts of Lebanon, the only way to react would be to conquer all of Lebanon, as fast as possible.  A problem for Hezbollah.

So now we are in a balance of terror.  Before the 2006 war, each side could shoot missiles at, or bombard, the other from time to time; now they cannot: any small skirmish may develop into an all-out war in which both sides will be severely damaged.  Did the Israeli government envision this future before starting the war?  Did Hezbollah?  Did either side do a strategic analysis of the situation?

‘At every day ${n\in\mathbb{N}}$ a player takes an action.’  This is the starting point of many models of repeated interaction.  We let time run to infinity to reflect the fact that players don’t have in mind a fixed termination point for the game.  We do, however, fix the starting point ${n=0}$, which I think is unnatural in many cases: by the time I realize that I know the bartender in my local Starbucks and maybe I should start tipping, I have already lost count of the number of times I have been there.  This is why I would like to model it as a game with an infinite past.  Also, it would be cool to have a paper that starts with ‘At every day ${n\in\mathbb{Z}}$’ for a change.  But, as I am sure many game theorists have independently discovered, it is not clear how to proceed.

Greece is almost bankrupt.  On May 19, 8.5 billion Euro of Greek bonds mature, and Greece does not have the money to pay its debt.  Will Europe give Greece a shoulder to lean on?  The bonds have lost 12% in the past year, meaning that the public was aware of a non-negligible probability of government default.

And I sit in my cozy home and wonder why the investors panic.  If Europe does not thank Greece for its monetary carelessness with a 120 billion Euro present, then Greece is dead.  And then nobody will buy the bonds of any other weak European country, and Portugal and Spain will follow suit.  And the European Union will die as well.  Simple backward induction tells you that the 120 billion will end up in Greece.  In fact, as the statements of the Greek Finance Minister show, in Greece they did this backward induction as well.

But Aumann tells us that we cannot agree to disagree. So if the bonds lost 12%, and since I have no additional information that the public does not possess, then indeed there is (or at least, was) a non-negligible probability of default. Was there indeed a chance that Europe would not pay for Greek’s irresponsibility? Was the drop in the bonds’ value necessary to convince Merkel and Sarkozy that the market believes that the existence of the EU is in danger? Anyone has a clue?