
In Harsanyi games with incomplete information, also known as Bayesian games, each player has a type. The type of the player describes all that he knows and believes about the situation he faces: who are the players, what are his and their available actions, what are his and their utility functions, and what are the beliefs of the other players about the situation.

Since the player’s type describes his knowledge and beliefs, a player always knows his own type. But a player need not know the other players’ types. Indeed, a chess player knows his own abilities, but he may not know the level of his opponent: he may ascribe probability 1/3 to the event that his opponent is familiar with the Benko Gambit, and probability 2/3 to the event that the opponent is not familiar with this opening.

In a Bayesian game, a chance move at the outset of the game selects a vector of types, one type for each player, according to a known probability distribution p. Each player learns his own type, but he does not know the types chosen for the other players. He does have a belief about the other players’ types, namely the conditional distribution of p given his own type.

The Bayesian game is an auxiliary construction. In reality there is no chance move that selects the players’ types: the knowledge and beliefs each player is equipped with determine his or her type. Bayesian games are merely a way to model the incomplete information each player has about the other players’ types. Thus, the true situation the players face is the situation after the vector of types was selected, which is called the interim stage. The situation before the vector of types is chosen, which is called the ex ante stage, is the mathematical way that Harsanyi found to model the game.

Consider now the following Bayesian game, which depends on a real number a in the unit interval (below, all additions and subtractions are modulo 1). There are two players; the type space of each player is the unit interval [0,1]. The types of the players are correlated: if player 1 has type x, then he believes that player 2’s type is either x or x+a (each with probability 1/2); if player 2 has type x, then he believes that player 1’s type is either x or x-a (each with probability 1/2). This belief structure can be described by a common prior distribution: the types of the two players are chosen according to the uniform distribution over the following set T (this is a variation of an example of Ehud Lehrer and Dov Samet):

[Figure: The type space of the Bayesian game]

If player 1’s type is x, then he believes that player 2 may be of type x or x+a. It follows that player 2 may believe that player 1’s type is x-a, x, or x+a. So player 2 may believe that player 1 believes that player 2’s type is x-a, x, x+a or x+2a. When the situation is a game, to decide how to play, player 1 needs to take into account all types of player 2 (and of himself) of the form {x+na : n is an integer}. This set is finite if a is a rational number, and countable if a is an irrational number. Denote by Zx the set of all pairs of types {(x+na, x+na) : n is an integer} union with {(x+na, x+(n+1)a) : n is an integer}. The set Zx is called the minimal belief subspace of player 1. In the interim stage, after his type was selected and told to him, player 1 knows that the type vector is in Zx, and that only type vectors in Zx appear in the belief hierarchy of player 2; therefore he can think about the situation as if the Bayesian game is restricted to Zx: a type vector in Zx was chosen according to the conditional distribution over Zx. To determine how to play, player 1 should find an equilibrium in the game restricted to Zx.
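To see the construction concretely, here is a minimal Python sketch (my own illustration, not part of the original post) that builds Zx for a rational a; the orbit of x under repeated addition of a (mod 1) then closes after finitely many steps, so the loop terminates:

    from fractions import Fraction

    def minimal_belief_subspace(x, a, max_steps=10_000):
        # Z_x = {(x+na, x+na)} union {(x+na, x+(n+1)a)}, all mod 1.
        # For rational a the orbit of x under t -> t+a (mod 1) is finite.
        pairs, t, seen = set(), x % 1, set()
        for _ in range(max_steps):
            if t in seen:
                break
            seen.add(t)
            pairs.add((t, t))            # the players have the same type
            pairs.add((t, (t + a) % 1))  # player 2's type is shifted by a
            t = (t + a) % 1
        return pairs

    # a = 1/3: the orbit has period 3, so Z_x contains 6 type pairs.
    Z = minimal_belief_subspace(Fraction(1, 7), Fraction(1, 3))
    print(len(Z))  # 6

For an irrational a the orbit never closes, which is exactly the countable case discussed below.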

The uniform distribution over the set T that appears in the figure above induces a probability distribution over Zx. When Zx is finite (that is, when a is a rational number), this is the uniform distribution over a finite set. Alas, when Zx is countable (a is irrational) there is no uniform distribution over Zx. In particular, the interim stage is not well defined! Thus, even though the interim stage is the actual situation the players face, and even though they can describe their beliefs using a Harsanyi game with a larger type space, the situation they face cannot be described as a Harsanyi game if they take into account only the types that are possible according to their information.

It is interesting to note that one can find a Bayesian equilibrium in the game restricted to Zx, for every x. However, when one tries to “glue” these equilibria together, one might find out that the resulting pair of strategies over [0,1] is not measurable, and in particular an equilibrium in the Harsanyi game (over T) need not exist. This finding was first noted by Bob Simon.

Since the true situation the players face is indeed the interim stage, and the ex ante stage is merely an auxiliary construction, how come the ex ante stage does not define a proper game in the interim stage? And if this is the case, is the auxiliary construction of a Harsanyi game over T the correct one?

Stopping games are simple multi-player sequential games; each player has two actions: to continue or to stop. The game terminates once at least one player decides to stop. The terminal payoff depends on the moment at which the game stopped, on the subset of players who decided to stop at the terminal time, and on a state variable whose evolution is controlled by nature. In other words, the terminal payoff is some stochastic process. If no player ever stops, the payoff is 0 (this is without loss of generality).

Stopping games arise in various contexts: wars of attrition, duels, exiting from a shrinking market to name but a few. Does an equilibrium exist in such games? If the game is played in discrete time and payoffs are discounted, then payoffs are continuous over the strategy space, and an equilibrium exists. What happens if the game is played in continuous time?

Surprisingly, if there are at least three players, an equilibrium may fail to exist, even if the payoffs are constant (that is, they depend only on the subset of players who decide to stop, and not on the moment at which the game is stopped) and there is no state variable. Consider the game in the following figure:

[Figure: A three-player stopping game]

In this game, in every time instant t, player 1 chooses a row, player 2 chooses a column, and player 3 chooses a matrix. Each entry except the (continue,continue,continue) entry corresponds to a situation in which at least one player stops, so that the three-dimensional vector in the entry is the terminal payoff in that situation. The sum of payoffs in every entry is 0, and therefore whatever the players play the sum of their payoffs is 0. Each player who stops alone receives 1, and therefore each one would like to be the first to stop. It is an exercise to verify that the game does not terminate at time t=0: termination at time 0 can happen only if there is a player who stops with probability 1 at time 0, but if, say, player 1 stops at time 0 then it is dominant for player 2 to continue at time t=0, and then it is dominant for player 3 to stop at time t=0, but then it is dominant for player 1 to continue at time t=0.
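Since the payoff matrices in the figure did not survive, here is a Python sketch with placeholder payoffs of my own, chosen only to satisfy the two stated properties (every entry sums to 0, and a player who stops alone gets 1); it mechanically reproduces the cycle above by checking that every pure stationary profile admits a profitable deviation at t=0:

    from itertools import product

    # Placeholder payoff vectors, one per nonempty set of stoppers.
    # These are NOT the numbers from the post's missing figure.
    U = {
        frozenset({1}): (1, 0, -1),
        frozenset({2}): (-1, 1, 0),
        frozenset({3}): (0, -1, 1),
        frozenset({1, 2}): (1, -2, 1),
        frozenset({2, 3}): (1, 1, -2),
        frozenset({1, 3}): (-2, 1, 1),
        frozenset({1, 2, 3}): (0, 0, 0),
    }

    def payoff(profile):
        # profile[i] is True if player i+1 stops; if nobody ever stops, the payoff is 0.
        stoppers = frozenset(i + 1 for i, s in enumerate(profile) if s)
        return U[stoppers] if stoppers else (0, 0, 0)

    # Every pure stationary profile leaves some player with a profitable deviation.
    for profile in product([False, True], repeat=3):
        for i in range(3):
            dev = list(profile)
            dev[i] = not dev[i]
            if payoff(tuple(dev))[i] > payoff(profile)[i]:
                print(profile, "-> player", i + 1, "gains by deviating")
                break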

Thus, in this game, the players stop with some positive probability (smaller than 1) at time t=0, and, if the game has not terminated at time 0, each one tries to stop before the others. If the game is played in continuous time, there can be no equilibrium. Don’t worry that the payoff is not discounted; adding a discount factor will not affect the conclusion.

It is interesting to note that in discrete time this game has an equilibrium: in discrete time the players cannot fight about who stops first after time t=0, because the first time at which they can stop after time t=0 is time t=1. So they all stop with some positive probability at time t=1, and, in fact, they stop with positive probability at every discrete time t.

So stopping games are one example in which a game in continuous time does not have an equilibrium, while the corresponding game in discrete time does have an equilibrium.

I have two kids, ages 14 and 11; you may have met them at Stony Brook. We have three PCs at home so that we can work at the same time and play together using our home network. The PCs stand one next to the other in the same room. This way we are together even when each one works on his own things, and I have control over the sites they visit on the net. In general, I do not allow telephones/TV/PCs in the bedrooms: all devices that can cause a family member to be locked in his room must be located in public areas.

A couple of days ago my older son asked whether he could purchase his own laptop with his own money. Why do you need a laptop? I would like to write stories, and it is difficult to concentrate in the room-with-computers. Correct answer, for which I have no counterarguments. Yet so far he has not written many stories, and he does not have much time to write stories anyway. Anything else? I may also read my e-mail. Wrong answer; this is exactly what your father is afraid of. You can use my laptop. Your laptop is not always available, and I cannot use it when I am at mom’s house. That’s correct. I am afraid that you will use the laptop for other activities, be locked in your room, and we will never see you again. I can delete Internet Explorer.

So here is where we stand. The kid wants a laptop. I know that writing stories on the laptop is just the first step, and that even if for now he uses it only for that purpose, in a few years he will use it for games, surfing, chatting, and all the things that kids his age do, and we will see him only at meals. I may be fighting windmills, like Don Quixote, and in a few years he will be locked in his room anyway, but I have hopes that I can keep some social activity even when the kids are 17 years old. I also do not exclude the possibility that writing stories is nothing but an excuse: a reason he came up with so that I allow him to purchase a laptop, when in fact he wants the laptop for other reasons.

What should I do? Any suggestions?

And, Eran, this is one example of the use of strategic thinking in raising kids. Once you think of the consequences of your decisions, and try to figure out the reasons for your kids’ requests, you enter the zone of game theory. Others may use different terms, but, after all, each one uses the terms that he knows.

Unfortunately (or maybe fortunately) players tend to be boundedly rational. Our computational power is bounded, our memory is bounded, the number of people that an organization can hire is bounded. So it makes sense to study games in which players can use only a restricted set of “simple” strategies. This topic was extensively studied in the late 1980s and 1990s in the context of repeated games. Eran advocated in previous posts the set of computable strategies. This set of strategies indeed rules out complex strategies, but it still allows unbounded memory.

Two families of strategies in repeated games have been studied in the past to model players with bounded computational power: strategies with bounded recall and strategies implementable by finite automata. A strategy with recall k can recall only the last k action profiles chosen by the players; whatever happened in the far past is forgotten. An automaton is a finite state machine: it has a finite number of states, and at each stage one of the states is designated the “current” state. At every stage, the machine produces an output, which is a function of the current state, and it moves to a new state (which becomes the new current state) as a function of its current state and of its input. If the set of inputs is the set of action profiles, and the set of outputs is the set of actions of player i, then an automaton can implement a strategy for player i. Unlike strategies with recall k, an automaton can remember events from the far past, but because the number of its states is bounded, the number of events that it can remember is bounded.
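As a concrete illustration (my encoding, not from the post), tit-for-tat in the repeated Prisoner’s Dilemma is implementable by an automaton with just two states, whose output equals the state and whose transition simply copies the opponent’s last action:

    # A finite automaton whose inputs are action profiles and whose
    # outputs are the actions of player i.
    class Automaton:
        def __init__(self, initial, output, transition):
            self.state = initial
            self.output = output          # state -> action of player i
            self.transition = transition  # (state, action profile) -> next state

        def act(self):
            return self.output[self.state]

        def observe(self, profile):
            self.state = self.transition[(self.state, profile)]

    # Two states suffice for tit-for-tat: remember the opponent's last action.
    tit_for_tat = Automaton(
        initial='c',
        output={'c': 'C', 'd': 'D'},
        transition={(s, (me, opp)): opp.lower()
                    for s in 'cd' for me in 'CD' for opp in 'CD'},
    )

    first = tit_for_tat.act()        # 'C': starts by cooperating
    tit_for_tat.observe(('C', 'D'))  # the opponent defected
    print(first, tit_for_tat.act())  # C D: retaliates next stage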

Prominent game theorists, like Abreu, Aumann, Ben Porath, Kalai, Lehrer, Neyman, Papadimitriou, Rubinstein, Sabourian, Sorin, studied repeated games played by finite automata, and repeated games played by players with bounded recall. Here I will restrict myself to finite automata.

In a nutshell, the theoretical literature is divided into two strands:

1) Two-player zero-sum T-stage repeated games, where each player i is restricted to strategies that can be implemented by automata with n_i states. Here the question is how the value depends on T, n_1 and n_2. This strand of the literature allows one to answer questions like: how does the relative memory size of the two players affect the value of the game? Or, how much should a player invest in increasing his memory so that his payoff significantly increases?

2) Two-player non-zero-sum infinitely repeated games, where the players have lexicographic preferences: each player tries to maximize his long-run average payoff, but, subject to that, he tries to minimize the size of the automaton that he uses. Abreu and Rubinstein proved that the set of equilibrium payoffs significantly shrinks, and one does not obtain a folk theorem. Rather, the set of equilibrium payoffs consists only of those payoffs that are (a) feasible, (b) individually rational (w.r.t. the min-max value in pure strategies), and (c) supportable by coordinated play: e.g., whenever player 1 plays T, player 2 plays L, and whenever player 1 plays B, player 2 plays R.

In practice, the memory size of the players is not fixed: players can increase their memory at a given cost, and sometimes they can decrease their memory size, thereby reducing their expenses. This raises the following question: suppose that memory is costly, say, each memory cell costs x cents, and the utility is linear: the payoff of a player is the difference between, say, his long-run average payoff in the game and the cost of his memory (x times his memory size). Say that a vector y=(y1,y2) is a Bounded Computational Capacity equilibrium payoff if (a) it is the limit of equilibrium payoffs, as the memory cost x goes to 0, and (b) the cost of the corresponding automata (that implement the sequence of equilibria) goes to 0 as x goes to 0. What is now the set of Bounded Computational Capacity equilibrium payoffs?

It is interesting to note that a Bounded Computational Capacity equilibrium payoff need not be a Nash equilibrium payoff, and a Nash equilibrium payoff need not be a Bounded Computational Capacity equilibrium payoff. Do I have an example where this actually happens? Unfortunately not.

It turns out that the set of mixed-strategy Bounded Computational Capacity equilibrium payoffs includes once again the set of feasible and individually rational payoffs (w.r.t. the min-max value in pure strategies). In this context, a mixed strategy is a probability distribution over pure automata (so a mixed strategy is a mixed automaton, and NOT a behavior automaton; the output of a behavior automaton is a mixed output). A mixed automaton is equivalent to having a decision maker randomly choose an agent to play the game, where each agent implements a pure automaton. Mixed automata naturally appear when one player does not know the automaton that the other player is going to use, and therefore he is in fact facing a distribution over automata, which is a mixed automaton.

So, to those of us who know Abreu and Rubinstein (1988), it turns out that their result depends on two assumptions: (a) memory is free, and (b) players are restricted to use pure strategies (that is, pure automata). These two requirements imply that the players must use simple strategies, and they will both use automata of the same size. Once memory is costly, and players can randomly choose their pure automaton, they can restore their ability to choose complex strategies, thereby restoring the folk theorem.

Well, a small step for research, a tiny step for humanity.

Part One: Least Unique-Bid Auctions

In recent years a new auction method has become widely used on the internet. This method is called the Least Unique-Bid Auction (LUBA), and it goes as follows. An object is offered for sale, say an iPhone or a car. The bids are made in cents. Each participant can make as many bids as he or she wishes, paying 50 cents for each bid. So if I bid on the numbers 1, 2, 5 and 12 (all in units of cents), I pay 2 dollars for the participation. Once the time runs out and all bidders have placed their bids, one counts the number of bidders who bid on each number. The winning bid is the minimal number that was bid by a single bidder. This bidder is the winner, and he pays his bid (in cents). So, if Anne bid on the numbers {1,2,3,6,12}, Bill bid on the numbers {1,2,3,4,5,6,7}, and Catherine bid on the numbers {3,4,5,7,13,14,15,16}, then the number 12 wins, and Anne gets the object. The auctioneer gets 2.5 dollars from Anne for her 5 bids plus 12 cents which is her winning bid, he gets 3.5 dollars from Bill, and 4 dollars from Catherine.
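Here is a small Python sketch (my own, following the rules just described) that determines the winner and the participation fees; the names and data structures are of course illustrative:

    from collections import Counter

    def luba(bids, fee_cents=50):
        # bids maps each bidder to the set of amounts (in cents) he or she bid on.
        counts = Counter(b for amounts in bids.values() for b in amounts)
        unique = [b for b, c in sorted(counts.items()) if c == 1]
        winner, win_bid = None, None
        if unique:
            win_bid = unique[0]  # the lowest amount bid by exactly one bidder
            winner = next(name for name, amounts in bids.items() if win_bid in amounts)
        fees = {name: fee_cents * len(amounts) for name, amounts in bids.items()}
        return winner, win_bid, fees

    # The example from the post: Anne wins with a bid of 12 cents.
    print(luba({'Anne': {1, 2, 3, 6, 12},
                'Bill': {1, 2, 3, 4, 5, 6, 7},
                'Catherine': {3, 4, 5, 7, 13, 14, 15, 16}}))
    # ('Anne', 12, {'Anne': 250, 'Bill': 350, 'Catherine': 400})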

In practice the auction is dynamic rather than static; for each bid that you place, you know at each time period whether (a) it is currently the winner (no one else bid the same number, and there is no lower number that was bid by a single bidder), (b) it is a potential winner (no one else bid the same number, but there is at least one lower number that was bid by a single bidder), or (c) it is not a potential winner (somebody else also bid on that number).
One must admit that this type of auction is ingenious. The selling price is extremely low, usually less than 2% of the object’s market value, so people are drawn to participate. The names of the winners are listed on the site; there are recurring names. So people realize that there are profitable strategies. One can actually think of such strategies: for example, randomly choose numbers, and when you find a potential winner, bid on all numbers below it, trying to kill all lower potential winning numbers. So why not participate and make money?
Bottom line: the auctioneer makes a lot of money.

Part Two: LUBA and the court

A couple of months ago a lawyer called me. He wants to sue a company that operates a certain type of LUBA in Israel in a class action, on the grounds that this is a lottery, a gambling game, and not a game of skill. By law, in Israel only the state can run lotteries. He asked me to provide an expert opinion that LUBA is a lottery. I apologized. I cannot do that. I believe that strategic bidders can make money in LUBA, and therefore, just like blackjack, LUBA is not a lottery. In fact, I am writing a paper with Marco Scarsini and Nicolas Vieille arguing that LUBA is not a lottery. Having said that, I believe that LUBA is worse than a lottery: in a lottery, all participants have the same probability of winning. This is not the case with LUBA: the presence of strategic bidders essentially kills the chances of non-strategic bidders, or partially strategic bidders, of winning the auction.

Part Three: Moral

So LUBA may not be illegal, but it seems that there is something wrong with it. I discussed this issue with Zvika Neeman yesterday. Just like a pyramid scheme or day trading by non-professionals, LUBA is a method of getting money out of people who may not realize the strategic complexity of the situation that they face.

Part Four: Complexity Fraud

Merriam-Webster defines fraud as:

“a : Intentional perversion of truth in order to induce another to part with something of value or to surrender a legal right. b : An act of deceiving or misrepresenting.”

Dictionary.com defines it as:
“Deceit, trickery, sharp practice, or breach of confidence, perpetrated for profit or to gain some unfair or dishonest advantage.”

There are many types of fraud. I argue that there should be one additional type: complexity fraud. Sometimes people are asked to participate in complex interactions, paying some amount for participation in the hope of getting a reward at the end. The rules are all set in advance, so nobody can later argue that he or she did not know the rules. But most people are not well versed in game theory, and we all have bounded computational capacity. Therefore, when the interaction is complex, people cannot analyze it and rationally decide whether they want to participate or not. People tend to be optimistic; they over-estimate their ability and smartness. If the strategic analysis of the method were explained to them, and if they were faced with the statistics, they would turn away from the method. If this is the case, then hiding the strategic analysis and the complexity of the situation is, in my view, as deceptive as any other fraud.

I am not a lawyer, and I do not know what the court will think of my arguments. I hope that congressmen and parliament members worldwide will look into them, and change the law accordingly.

This blog is ordinarily read by a small network in the economic theory community, so it was a jolt to see a thousand hits in a day for my “Foul Trouble” post shortly after we were linked by Tyler Cowen’s “Marginal Revolution” blog.  The post was also featured by columnists at NBC Sports and ESPN.  I even got a kind note from an executive VP of the Houston Rockets.  The Rockets were featured in a NYT magazine article by Michael Lewis last year as the “Moneyball” team of the NBA – that is, in recent years they have focused heavily on analytics.  The Rockets executive said that he enjoyed my article, but no word on to what extent they agree with me — trade secrets?  I’ll continue my thoughts here, but possibly they are way ahead of me.

As economists know, the point of a simple model is not to be “correct” but to serve as a framework for analysis.  As many comments noted, I only discussed some of the possible adjustments to the basic theory, and for those I did mention I only offered informal estimates as to whether the adjustment is small.  Does the baseline recommendation to ignore foul trouble have “so many caveats, it’s useless” as one comment complained?  I don’t think so, and, still not claiming exhaustiveness, will address two major classes of objections in more detail here.  One can be rebutted on theoretical grounds; the other has theoretical merit and we have to make some estimates to quantify it.

One class of objections has to do with tactical adjustments that either team makes when a player with foul trouble is in the game.   The endangered player may be more careful than usual, and the opponents may go out of their way to create contact with him.  While true, these observations don’t hold up as counterarguments.  Why?  I’ll explain this with game-theory jargon and then without.  The baseline argument says that if you bench the player your payoff gets worse for any fixed strategy-pair.  In a zero-sum game, this guarantees that the value of the game goes down, which is clear if you look at the definition of minmax.  That is, after tactical adjustments you’ll still be worse off than originally.

The more intuitive explanation of this goes as follows: the baseline argument implies that if the player just ignores his foul trouble, the team is better off letting him play.  If he *correctly* takes more care to avoid fouling, the team must be *even better* off.  Now, we’re dealing with human beings, so we must acknowledge the possibility that he *incorrectly*, or perhaps selfishly, plays too contact-shy.  Well, then he needs a kick in the pants: “Look, I want to trust you and leave you in with foul trouble, but if you’re giving up layups I have to put your butt on the bench.”  If this doesn’t work, well, sure, you might have to sit him.  There is a human element.  But I like Jeff’s comment on this: If he knows he’s going to sit with 3, won’t he be timid when he has 2?  Also, maybe some players normally foul more than optimal and play *better* D with foul trouble.  (Suggested by Ken Nelson.)  The human element can cut both ways.  In the Michael Lewis article, the Rockets’ GM refers to a foul as the worst outcome of a defensive play, percentagewise.  A slight exaggeration (dunks are worse), but trying to avoid fouls can’t be all that bad.  A similar chain of reasoning applies to the other team going after your man; I’ll leave that argument to the reader.

Now, a stronger argument, which I dealt with imperfectly in my initial post, has to do with not all minutes being created equal.  The best reason behind this has to do with clock management.  Some stars are much better than ordinary players at getting off a decent shot quickly, which can be important not only for the trailing team, but for the leading team if they want to burn most of the shot clock and then get something off.  (Thanks to the comments, beginning with Jeff,  for pointing this out.)  Now, while I think all this is true, I want to make some caveats to this caveat:

1.  Clock management should apply almost exclusively to the last 2-3 minutes.  Here, my sentiments seem to be backed up by many coaches and ex-coach color commentators: when a team goes into a shell, even with a significant lead such as 10 with 5 to play, they are making a huge mistake.  Burning an extra 5-10 seconds just doesn’t nearly compensate for getting a bad shot.  Similar comments apply to the trailing team: of course you should avoid sheer wasting of time (so, maybe bring it up the floor quicker), but the prime focus should be on getting the best possible shot, until you’re in a real crunch.

2. Given point 1 (even if you want to extend my window by a couple of minutes), the crunch-time argument only supports benching a player with 5 fouls, not with fewer.  Third-quarter and second-quarter minutes presumably are still created equal.

3. Ok, when should a player come back in with 5?  This depends on how much more value you give to the last few minutes and on his hazard rate for fouling out.  Assume that he has a hazard rate of r fouls per minute and that his value per unit time with t minutes remaining is given by a function a(t).  Note that I am thinking of a as the contribution to winning percentage, not to points.  The average value you get by putting him in with T minutes to go is
V(T) = \int_0^T a(t)\, e^{-r(T-t)}\, dt
and the derivative of this is given by
V'(T) = e^{-rT}\left[\int_0^T a'(t)\, e^{rt}\, dt + a(0)\right]
where the first term represents the cost of shifting his minutes further from the end (we assume a' is negative) and the a(0) term is the possible benefit from extra total time (once we always put in the shift cost, we can think of the extra time as coming at the end).
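As a sanity check (my own, not from the post), one can compare this closed form against a numerical derivative of V, using for concreteness the piecewise-linear shape for a introduced just below:

    import math

    r, A, M = 0.125, 3.0, 4.0   # sample parameters, my choice
    a  = lambda t: A - (A - 1) * min(t, M) / M     # a(0)=A, a(t)=1 for t>=M
    ap = lambda t: -(A - 1) / M if t < M else 0.0  # a'(t)

    def integrate(f, lo, hi, n=100_000):           # midpoint rule
        h = (hi - lo) / n
        return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

    V  = lambda T: integrate(lambda t: a(t) * math.exp(-r * (T - t)), 0.0, T)
    Vp = lambda T: math.exp(-r * T) * (
        integrate(lambda t: ap(t) * math.exp(r * t), 0.0, T) + a(0))

    T, eps = 10.0, 1e-3
    print((V(T + eps) - V(T - eps)) / (2 * eps))   # numerical derivative
    print(Vp(T))                                   # the closed form; they agree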

A reasonable value for r is .125, representing 6 fouls in 48 minutes; naturally this would be higher for some players than others.  As for the shape of the function a, this is extremely hard to specify, and it makes sense to try various guesstimates. The point of having a formula is to see how various assumptions about endgame importance correspond to policy, not to provide illusory precision.

To get ballpark figures, it’s useful and pretty harmless to let a be piecewise linear.  We can normalize it so that a(0)=A, a(t)=1 for t>=M, and a decreases linearly from time 0 to M.  That is, “ordinary minutes” are scaled to have value 1 and the very end has value A, with M minutes of “crunch time” whose value increases gradually until the end.  Then, for any T>=M, we get

V'(T) = e^{-rT}\left[A - (A-1)\,\frac{e^{rM}-1}{rM}\right]

One very interesting feature is that this derivative has the same sign for all T>=M.  This means that we should either save the player for crunch time or ignore the 5 fouls, but a compromise where we bring him in with more than M minutes left cannot be right.  Now, let’s look at what values of A are needed to support saving the player.  If M=3, and he’s unusually foul-prone so r=.15, a quick calculation gives A=4.8 as the value for which V’ becomes negative.  This seems unreasonably high to me.  Remember, there is a chance the game is not even close in the last minute, and then everyone’s value is much less.  I agree that it’s still more on average, but 5 times more is much too rich for me.  Anyway, you can plug your favorite values into the formula if you want to explore.
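Here is a quick sketch of that calculation (my code, not the post’s): for T>=M the sign of V'(T) is the sign of the bracketed term, so the critical A solves A = (A-1)(e^{rM}-1)/(rM):

    import math

    def critical_A(r, M):
        # The bracket A - (A-1)*(e^{rM}-1)/(rM) changes sign at A* = k/(k-1),
        # where k = (e^{rM}-1)/(rM).
        k = math.expm1(r * M) / (r * M)
        return k / (k - 1)

    print(round(critical_A(0.15, 3), 1))   # 4.8, the value quoted above
    print(round(critical_A(0.125, 3), 1))  # ~5.7 with the typical rate r = .125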

By the way, my result that you either save the 5-foul player for crunch time or ignore foul-trouble surprised me, especially as it is quite robust to different specifications for a.  When this happens I always stop and describe the math to myself verbally to see if it makes sense.  I have a profound distrust of formulas I can’t describe verbally; maybe this is what makes me an economist rather than another kind of mathematician.  Anyway, what the math is saying is that, by looking at the playing time until foul-out as out of your control, you can decompose the effect of putting him in an instant earlier as 1) shifting his minutes earlier (bad) and 2) possibly getting an extra instant of time at the end (good).   For instants prior to M, both of these marginal effects decay *at the same rate*.  That is, shifting his minutes earlier matters less because he may not even reach crunch time, and reaching the last minute is also less likely, but the overall sign of the derivative cannot change.  This actually made sense once I thought about it.    I think theorists live for the cases where the math teaches us something that we can understand by verbal logic, but would have likely missed without the formulas.

This entry has gone on long enough that I will not (at least right now) try to address many more of the many intelligent comments.  I will take a moment to agree with one point: after a foul, especially if it is a *stupid* foul or your guy is upset by the ref’s call, you might bench him briefly to try to get him back in the proper flow.  This is a far cry from sitting him for the whole second quarter with 3 fouls, though.

[Update 5/17:  Thank you for the many interesting comments!  Please see my follow-up posted today.]

In a professional basketball game, a player is disqualified (“fouls out”) if he is charged with 6 personal fouls.  Observers of the NBA know that the direct effect of fouling out actually has less impact than the indirect effect of “foul trouble.”  That is, if a player has a dangerous number of fouls, the coach will voluntarily bench him for part of the game, to lessen the chance of fouling out.  Coaches seem to roughly use the rule of thumb that a player with n fouls should sit until n/6 of the game has passed.  Allowing a player to play with 3 fouls in the first half is a particular taboo.  On rare occasions when this taboo is broken, the announcers will invariably say something like, “They’re taking a big risk here; you really don’t want him to get his 4th.”

Is the rule of thumb reasonable? No!  First let’s consider a simple baseline model:  Suppose I simply want to maximize the number of minutes my star player is in the game.  When should I risk putting him back in the game after his nth foul?  It’s a trick question: I shouldn’t bench him at all!  Those of you who haven’t been brainwashed by the conventional wisdom on “foul trouble” probably find this obvious.  The proof is simple: if he sits, the only thing that has changed when he gets back in is that there is less time left in the game, so his expected minutes have clearly gone down.  In fact, the new distribution on minutes is first-order stochastically dominated, being just a truncation of the alternative.  This assumes only that his likelihood of getting a foul is time-invariant, which seems reasonable.
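A quick Monte Carlo sketch (my own illustration, under the same time-invariant foul-rate assumption) makes the truncation argument concrete: benching now can only reduce expected minutes, whatever the foul rate:

    import random

    def expected_minutes(bench=0.0, total=48.0, fouls_left=4, rate=0.125,
                         trials=200_000):
        # Minutes on the floor if the player sits `bench` minutes now, then
        # plays until he picks up `fouls_left` fouls or the game ends.
        acc = 0.0
        for _ in range(trials):
            t, played, fouls = total - bench, 0.0, fouls_left
            while t > 0 and fouls > 0:
                gap = random.expovariate(rate)  # time until his next foul
                played += min(gap, t)
                t -= gap
                fouls -= 1
            acc += played
        return acc / trials

    print(expected_minutes(bench=0.0))  # put him back in immediately
    print(expected_minutes(bench=6.0))  # sit him 6 minutes: strictly fewer expected minutes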

OK, while I believe the above argument is very relevant, it oversimplified the objective function, which in practice is not simply to maximize minutes.  I’ll discuss caveats now, but please note, there is tremendous value in understanding the baseline case.  It teaches that we should pay attention to foul trouble only insofar as our objective is not to maximize minutes.  I am very comfortable asserting that coaches don’t understand this!

First caveat: players are more effective when rested.  In fact, top stars normally play about 40 of 48 minutes.  If it becomes likely that a player will be limited to 30-35 minutes by fouling out, we may be better off loading those minutes further towards the end of the game to maximize his efficiency.  Notice, though, that this doesn’t lead to anything resembling the n/6 rule of thumb.  It says we should put him back in, at the very latest, when he is fully rested, and this isn’t close to what is done in practice.  In fact players often sit so long the rest may have a negative impact, putting them “out of the flow of the game.”

Second caveat: maybe not all minutes are created equal.  It may be particularly important to have star players available at the end of the game.  On a practical level, the final minute certainly has more possessions than a typical minute, but it also has more fouls, so maybe those effects cancel out.  I think the primary issue is more psychological: there is a strong perception that you need to lean more on your superstars at the end of the game.  I think this issue is drastically overrated, partly because it’s easy to remember losing in the last minute when a key player has fouled out, but a more silent poison when you lose because you were down going into that minute having rested him too long.  By the way, my subjective sense is that the last possession is more similar to any other than conventional wisdom suggests: a wide-open John Paxson or Steve Kerr is a better bet than a double-teamed Michael Jordan any time in the game.  On a couple of major occasions, Jordan agreed.  This isn’t to underestimate the star’s importance in scoring and getting other players good shots, just to say that this is not necessarily more important in the final minutes.  You do often hear that a team will rise to the occasion when a star is injured or suspended, so even conventional wisdom wavers here.  Finally, note that the foul-trouble rule of thumb is applied also to players who aren’t the primary scorer, so that this argument wouldn’t seem to apply.  I will give coaches a little credit: they do sometimes seem to realize that they shouldn’t worry about foul trouble for bench players who often don’t play at the end anyway.

One more psychological caveat: a player who just picked up a foul he thinks is unfair may be distracted and not have his head in the game immediately afterward.  This may warrant a brief rest.

Final note: Conventional wisdom seems to regard foul management as a risk vs. safety decision.  You will constantly hear something like, “a big decision here, whether to risk putting Duncan back in with 4 fouls.”  This is completely the wrong lens for the problem, since the “risky”* strategy is, with the caveats mentioned, all upside!  Coaches dramatically underrate the “risk” of falling behind, or losing a lead, by sitting a star for too long.  To make it as stark as possible, observe that the coach is voluntarily imposing the penalty that he is trying to avoid, namely his player being taken out of the game!  The most egregious cases are when a player sits even though his team is significantly behind.  I almost feel as though the coach prefers the certainty of losing to the “risk” of the player fouling out.  There may be a “control fallacy” here: it just feels worse for the coach to have a player disqualified than to voluntarily bench him, even if the result is the same.  Also, there is a bit of an agency/perception problem: the coach is trying to maximize keeping his job as well as winning, which makes him lean towards orthodoxy.

There are well-documented cases in the last decade of sports moving towards a more quantitative approach, so maybe there is hope for basketball strategy to change.  The foul-trouble orthodoxy is deeply ingrained, and it would be a satisfying blow for rationality to see it overturned.

*Final outcomes are binary, so the classical sense of risk aversion, involving a concave utility function in money, doesn’t apply at all.  But there is also a sense of what I call “tactical risk”: a decision may affect the variance of some variable on which your probability of final success depends in a convex (or concave) way.  I might write an essay sometime on the different meanings of “risk.”  Anyway, here you presumably should be risk-averse in your star’s minutes if ahead, risk-loving if behind.  But this is rendered utterly moot by first-order stochastic dominance!
