Tonight at the department party we were continuing the debate on the usefulness of game theory. Part of the argument was about whether the math has any utility, or whether you could get just as much from verbal arguments about strategy, as in Schelling’s work. Rakesh had some good positive examples in pricing and auctions, which perhaps he’ll write up here if he has time. Here’s one that came to my mind just now. You may not consider poker “real world,” but bear with me; I think it illustrates a general point.

Consider the simplified model of poker in von Neumann and Morgenstern. In one of the versions, there is a result that the unique equilibrium strategy is to bet with the best 30% and worst 10% of hands. Now, nothing nearly so simple is optimal in real poker, and furthermore all poker players, even the least mathematical, know that they should bluff sometimes. So what was gained by this exercise? Well, we gained a qualitative recommendation: bluff only with your very worst hands. This is far from obvious, and I never thought of it before doing the mathematical exercise. True, I can translate it into a verbal argument, as follows: bad hands and medium hands become equal if you bet with them, since they both lose whenever (or almost whenever) you get called. But medium hands are significantly better than bad hands when you check, since they may win the pot in a showdown. So you bluff with bad hands, not of course because they are better to bluff with, but because they are worse to check with. Note that this logic applies fully only when you are last to speak in the last round of betting. In earlier rounds, “semi-bluffs,” where you hope for a fold but still have chances if called, are a common part of good strategy, and are more common than pure bluffs.
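For the curious, here is a minimal sketch of where numbers like 30% and 10% can come from, under one common parameterization that I chose for illustration (I am not claiming it is exactly the version in the book): both players ante 1, hands are independent and uniform on [0,1], the first player either checks to a showdown or makes a single bet equal to the ante, and the second player then folds or calls. The equilibrium thresholds then solve three linear indifference conditions, and the little script below (the function name von_neumann_poker is just mine) solves them and recovers the bottom-10%/top-30% betting region.

```python
import numpy as np

def von_neumann_poker(b=1.0):
    """Solve a simplified (von Neumann-style) poker model.

    Sketch assumptions: both players ante 1, hands are i.i.d. uniform
    on [0,1], player 1 may check (showdown for the antes) or bet b,
    and player 2 may then fold or call.  In equilibrium player 1 bets
    with hands above x2 (value bets) and below x1 (bluffs), and
    player 2 calls with hands above c.  For a fixed bet size b the
    three indifference conditions are linear in (x1, x2, c):

      player 2 indifferent at c:   (b+2)*x1 + b*x2            = b
      player 1 indifferent at x1:   2*x1          - (b+2)*c   = -b
      player 1 indifferent at x2:             2*x2 -       c  = 1
    """
    A = np.array([[b + 2.0, b,   0.0],
                  [2.0,     0.0, -(b + 2.0)],
                  [0.0,     2.0, -1.0]])
    rhs = np.array([b, -b, 1.0])
    x1, x2, c = np.linalg.solve(A, rhs)
    return x1, x2, c

if __name__ == "__main__":
    # A bet equal to the ante reproduces the 10% / 30% thresholds.
    x1, x2, c = von_neumann_poker(b=1.0)
    print(f"bluff with hands below {x1:.2f}")          # 0.10
    print(f"value-bet with hands above {x2:.2f}")      # 0.70, i.e. top 30%
    print(f"opponent calls with hands above {c:.2f}")  # 0.40
```

Varying b traces out how the bluffing and value-betting thresholds move with the size of the bet, which is another thing the verbal argument alone won’t tell you.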

Enough about poker; here is the broader point of the example. Solving the problem mathematically imposes a discipline on our reasoning process that forces us to discover an important qualitative insight we could easily have missed otherwise. True, I am sure some strong poker players came to this insight intuitively over the years without formal study of Bayesian games, but many players surely missed it. The examples Rakesh was discussing seem similar to me. Yes, once you hear certain insights described in words, you may decide the math was never needed. But this is much too facile, akin to thinking that every problem is easy once you’ve seen the answer. Any illuminating chain of reasoning can be missed as easily as found, and formal models can channel our reasoning in the right direction.