
This post will only make sense if you read last week's post on the Gittins Index.

So, why does Gittins' theorem hold for arms and not for projects?

First, an example. Consider the following two projects, A and B. In the first period you work on A, you have to choose an action: Up or Down. Future rewards from working on project A depend on your first action. The (deterministic) reward sequence is

A: 8, 0, 0, 0, … if you played Up; and 5, 5, 5, 5, … if you played Down.

Project B gives the deterministic reward sequence

B: 7, 0, 0, …

If your discount factor is high (meaning you are sufficiently patient), then your best strategy is to start with project B, get 7, and then switch to project A, play Down, and get 5 every period.

Suppose now that there is an additional project, project C, which gives the deterministic payoff sequence

C: 6, 6, 6, …

Your optimal strategy now is to start with A and play Up, then collect the 7 from B, and then switch to C.

This is a violation of Independence of Irrelevant Alternatives: when only A and B are available, you start with B; when C becomes available, you suddenly start with A. The example shows that Gittins' index theorem does not hold for projects.
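To see the numbers concretely, here is a quick check (a sketch; the candidate strategies and the discount factor {\beta=0.9} are my choices):

```python
# Discounted value of a reward stream: a finite head followed by a constant tail.
def value(head, tail, beta):
    v = sum(r * beta**t for t, r in enumerate(head))
    return v + beta**len(head) * tail / (1 - beta)

beta = 0.9  # sufficiently patient

# Only A and B available:
print(value([7], 5, beta))     # B, then A-Down forever:  52.0  <- best
print(value([8, 7], 0, beta))  # A-Up, then B:            14.3
print(value([], 5, beta))      # A-Down forever:          50.0

# A, B and C available:
print(value([8, 7], 6, beta))  # A-Up, B, then C forever: 62.9  <- best
print(value([7], 6, beta))     # B, then C forever:       61.0
print(value([7], 5, beta))     # B, then A-Down forever:  52.0
```

The availability of C flips the first move from B to A: every strategy that begins with B is now dominated.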

What goes wrong in the proof? We can still define the “fair charges” and the “prevailing charges” as in the multi-armed proof. For example, the fair charge for project A is 8: if the casino charges any smaller amount, you can pay once, play Up, and go home, and the casino will lose money.

The point where the proof breaks down is step 5: it is no longer true that the Gittins strategy maximizes the payments to the casino. The key difference is that in the multi-project case the sequence of prevailing charges of each project depends on your strategy. In contrast, in the multi-armed case, your strategy only determines which charge you pay when; the charges themselves are independent of your strategy. Since the discount sequence 1,\beta,\beta^2,\dots is decreasing, the total discounted payment in the multi-armed problem is maximized by paying the charges in decreasing order, as you do under the Gittins strategy.

After presenting Richard Weber’s remarkable proof of Gittins’ index theorem in my dynamic optimization class, I claimed that the best way to make sure that you understand a proof is to identify where the assumptions of the theorem are used. Here is the proof again, slightly modified from Weber’s paper, followed by the question I gave in class.

First, an arm or a bandit process is given by a countable state space S, a transition function q(s'|s) and a payoff function r:S\rightarrow [0,1]. The interpretation is that at every period, when the arm is at state s, playing it gives a reward r(s) and the arm’s state changes according to q(\cdot|s).

In the multi-armed bandit problem, at every period you choose an arm to play. The states of the arms you didn't choose remain fixed. Your goal is to maximize expected total discounted rewards. Gittins' theorem says that for each arm there exists a function \gamma:S\rightarrow [0,1], called the Gittins Index (GI from now on), such that, in a multi-armed problem, the optimal strategy is to play at each period the arm whose current state has the largest GI. In fancy words, the theorem establishes that the choice of which arm to play at each period satisfies Independence of Irrelevant Alternatives: suppose there are three arms A, B, C whose current states are a, b, c. If you were going to start by playing A when only A and B were available, then you should not start with B when A, B, C are available.

The proof proceeds in several steps:

  1. Define the Gittins Index at state s to be the amount \gamma such that, if the casino charges \gamma every time you play the arm, then both playing and not playing are optimal actions at the state s. We need to prove that there exists a unique such \gamma. This is not completely obvious, but can be shown by appealing to standard dynamic programming arguments.
  2. Assume that you enter a casino with a single arm at some state s with GI \gamma. Assume also that the casino charges \gamma every time you play the arm. At every period, you can play, quit playing, or take a break. From step 1 it follows that, regardless of your strategy, the casino always gets a nonnegative expected net payoff, and if you play optimally then the expected net payoff to the casino (and therefore also to you) is zero. For this reason this \gamma (the GI of the initial state) is called the fair charge. Here, playing optimally means that you either do not play at all, or you start playing and continue to play every period until the arm reaches a state with GI strictly smaller than \gamma, in which case you must quit. It is important that as long as the arm is at a state with GI strictly greater than \gamma you continue playing. If you need to take a restroom break you must wait until the arm reaches a state with GI \le \gamma.
  3. Continuing with a single arm, assume now that the casino announces a new policy: at every period, if the arm reaches a state with GI strictly smaller than the GI of all previous states, then the charge for playing the arm drops to the new GI. We call these new (random) charges the prevailing charges. Again, the casino always gets a nonnegative expected net payoff, and if you play optimally then the expected net payoff is zero. Here, playing optimally means that you either do not play at all, or you start playing and continue to play forever. You can quit or take a bathroom break only at periods in which the prevailing charge equals the GI of the current state.
  4. Consider now the multi-armed problem, and assume again that in order to play an arm you have to pay its current prevailing charge as defined in step 3. Then again, regardless of how you play, the casino gets a nonnegative expected net payoff (since, by step 3, this is the case for every arm separately), and you can still get an expected net payoff of 0 if you play optimally. Playing optimally means that you either do not play at all or you start playing; if you start playing, you can quit, take a break, or switch to another arm only in periods in which the prevailing charge of the arm you are currently playing equals the GI of its current state.
  5. Forget for a moment about your profits and assume that what you care about is maximizing payments to the casino (I don't mean net payoff, I mean just the charges that the casino receives from your playing). Since the sequence of prevailing charges of every arm is decreasing, and since the discount factor makes the casino prefer higher payments early, the Gittins strategy — the one in which you play at each period the arm with the highest current GI, which by definition of the prevailing charge is also the arm with the highest current prevailing charge — is the one that maximizes the casino's payments. In fact, this would be the case even if you knew the realization of the charges sequence in advance.
  6. The Gittins strategy is one of the optimal strategies from step 4. Therefore, its net expected payoff is 0.
  7. Therefore, for every strategy \sigma,
    Rewards from \sigma \le Charges from \sigma \le Charges from Gittins strategy
    (the first inequality is step 4 and the second is step 5), and
    Charges from Gittins strategy = Rewards from Gittins strategy (step 6).
  8. Therefore, the Gittins strategy achieves the best possible total rewards.

That’s it.

Now, here is the question. Suppose that instead of arms we had dynamic optimization problems, each given by a state space, an action space, a transition function, and a payoff function. Let's call them projects. The difference between a project and an arm is that when you decide to work on a project you also decide which action to take, and the current reward and next state depend on the current state and on your action. Now read the proof again with projects in mind. Every time I said “play arm i”, what I meant is: work on project i and choose the optimal action. We can still define an “index”, as in the first step: the unique charge \gamma such that, if you need to pay \gamma every period you work on the project (using one of the actions), then both not working and working with some action are optimal. The conclusion is not true for the projects problem though. At which step does the argument break down?

Volume 42 of the AER, published in 1952, contains an article by Paul Samuelson entitled `Spatial Price Equilibrium and Linear Programming’. In it, Samuelson uses a model of Enke (1951) as a vehicle to introduce the usefulness of linear programming techniques to Economists. The second paragraph of the paper is as follows:

In recent years economists have begun to hear about a new type of theory called linear programming. Developed by such mathematicians as G. B. Dantzig, J. v. Neumann, A. W. Tucker, and G. W. Brown, and by such economists as R. Dorfman, T. C. Koopmans, W. Leontief, and others, this field admirably illustrates the failure of marginal equalization as a rule for defining equilibrium. A number of books and articles on this subject are beginning to appear. It is the modest purpose of the following discussion to present a classical economics problem which illustrates many of the characteristics of linear programming. However, the problem is of economic interest for its own sake and because of its ancient heritage.

Of interest are the 5 reasons that Samuelson gives for why readers of the AER should care.

  1. This viewpoint might aid in the choice of convergent numerical iterations to a solution.

  2. From the extensive theory of maxima, it enables us immediately to evaluate the sign of various comparative-statics changes. (E.g., an increase in net supply at any point can never in a stable system decrease the region’s exports.)

  3. By establishing an equivalence between the Enke problem and a maximum problem, we may be able to use the known electric devices for solving the former to solve still other maximum problems, and perhaps some of the linear programming type.

  4. The maximum problem under consideration is of interest because of its unusual type: it involves in an essential way such non-analytic functions as absolute value of X, which has a discontinuous derivative and a corner; this makes it different from the conventionally studied types and somewhat similar to the inequality problems met with in linear programming.

  5. Finally, there is general methodological and mathematical interest in the question of the conditions under which a given equilibrium problem can be significantly related to a maximum or minimum problem.


Duppe and Weintraub date the birth of Economic Theory to June 1949, when Koopmans organized the Cowles Commission Activity Analysis Conference. It is also counted as conference zero of the Mathematical Programming Symposium. I mention this because the connections between Economic Theory, Mathematical Programming, and Operations Research had, at one time, been very strong. The conference, for example, was conceived of by Tjalling Koopmans, Harold Kuhn, George Dantzig, Albert Tucker, Oskar Morgenstern, and Wassily Leontief with the support of the Rand Corporation.

One of the last remaining links to this period, a man who straddled, like a Colossus, both Economic Theory and Operations Research, Herbert Eli Scarf, passed away on November 15th, 2015.

Scarf came to Economics and Operations Research by way of Princeton's mathematics department. Among his classmates were Gomory of cutting-plane fame, Milnor of topology fame, and Shapley. Subsequently, he went on to Rand (Dantzig, Bellman, Ford & Fulkerson). While there he met Samuel Karlin and Kenneth Arrow, who introduced him to inventory theory. It was in this subject that Scarf made the first of many important contributions: the optimality of (S, s) policies. He would go on to establish the equivalence of the core and competitive equilibrium (jointly with Debreu), identify a sufficient condition for non-emptiness of the core of an NTU game (now known as Scarf's Lemma), anticipate the application of Groebner bases in integer programming (neighborhood systems), and of course write his magnificent `Computation of Economic Equilibria'.

Exegi monumentum aere perennius regalique situ pyramidum altius, quod non imber edax, non Aquilo impotens possit diruere aut innumerabilis annorum series et fuga temporum. Non omnis moriar…

I have finished a monument more lasting than bronze and higher than the royal structure of the pyramids, which neither the destructive rain, nor wild North wind is able to destroy, nor the countless series of years and flight of ages. I will not wholly die………….

Starr’s ’69 paper considered Walrasian equilibria in exchange economies with non-convex preferences i.e., upper contour sets of utility functions are non-convex. Suppose {n} agents and {m} goods with {n \geq m}. Starr identified a price vector {p^*} and a feasible allocation with the property that at most {m} agents did not receiving a utility maximizing bundle at the price vector {p^*}.

A poetic interlude. Arrow and Hahn’s book has a chapter that describes Starr’s work and closes with a couple of lines of Milton:

A gulf profound as that Serbonian Bog
Betwixt Damiata and Mount Casius old,
Where Armies whole have sunk.

Milton uses the word concave a couple of times in Paradise Lost to refer to the vault of heaven. Indeed the OED lists this as one of the poetic uses of concavity.

Now, back to brass tacks. Suppose {u_i} is agent {i}'s utility function. Replace the upper contour sets associated with {u_i}, for each {i}, by their convex hulls. Let {u^*_i} be the concave utility function associated with the convex hulls. Let {p^*} be the Walrasian equilibrium prices with respect to {\{u^*_i\}_{i=1}^n}, and let {x^*_i} be the allocation to agent {i} in the associated Walrasian equilibrium.

For each agent {i} let

\displaystyle S^i = \arg \max \{u_i(x): p^* \cdot x \leq p^*\cdot e^i\}

where {e^i} is agent {i}‘s endowment. Denote by {w} the vector of total endowments and let {S^{n+1} = \{-w\}}.

Let {z^* = \sum_{i=1}^nx^*_i - w = 0} be the excess demand with respect to {p^*} and {\{u^*_i\}_{i=1}^n}. Notice that {z^*} is in the convex hull of the Minkowski sum of {\{S^1, \ldots, S^n, S^{n+1}\}}. By the Shapley-Folkman-Starr lemma we can find {x_i \in conv(S^i)} for {i = 1, \ldots, n}, such that {|\{i: x_i \in S^i\}| \geq n - m} and {0 = z^* = \sum_{i=1}^nx_i - w}.

When one recalls that Walrasian equilibria can also be determined by maximizing a suitable weighted (the Negishi weights) sum of utilities over the set of feasible allocations, Starr's result can be interpreted as a statement about approximating an optimization problem. I believe this was first articulated by Aubin and Ekeland (see their '76 paper in Math of OR). As an illustration, consider the following problem:

\displaystyle \max \sum_{j=1}^nf_j(y_j)

subject to

\displaystyle Ay = b

\displaystyle y \geq 0

Call this problem {P}. Here {A} is an {m \times n} matrix with {n > m}.

For each {j} let {f^*_j(\cdot)} be the smallest concave function such that {f^*_j(t) \geq f_j(t)} for all {t \geq 0} (probably quasi-concave will do). Instead of solving problem {P}, solve problem {P^*}:

\displaystyle \max \sum_{j=1}^nf^*_j(y_j)

subject to

\displaystyle Ay = b

\displaystyle y \geq 0

The obvious question to be answered is how good an approximation the solution to {P^*} is to problem {P}. To answer it, let {e_j = \sup_t [f_j^*(t) - f_j(t)]} (where I leave you, the reader, to fill in the blanks about the appropriate domain). Each {e_j} measures how close {f_j^*} is to {f_j}. Sort the {e_j}'s in decreasing order. If {y^*} is an optimal solution to {P^*}, then following the idea in Starr's '69 paper we get:

\displaystyle \sum_{j=1}^nf_j(y^*_j) \geq \sum_{j=1}^nf^*_j(y^*_j)- \sum_{j=1}^me_j
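Here is a toy instance, entirely my construction, on which the implied comparison of optima is tight. Take {f_j(t) = t^2} on {[0,1]}, whose smallest concave majorant is the chord {f^*_j(t) = t}, so {e_j = \sup_t\, (t - t^2) = 1/4}, and impose the single constraint {y_1 + y_2 + y_3 = 3/2} (so {m = 1}):

```python
import itertools

f = lambda t: t * t          # convex, so far from its concave envelope
f_star = lambda t: t         # concave envelope of t^2 on [0, 1]
e, m = 0.25, 1               # envelope gap, attained at t = 1/2

K = 20                       # brute force over the grid y_j = k / K
best_P = best_Pstar = float("-inf")
for ks in itertools.product(range(K + 1), repeat=3):
    if sum(ks) == 30:        # y_1 + y_2 + y_3 = 3/2
        y = [k / K for k in ks]
        best_P = max(best_P, sum(map(f, y)))
        best_Pstar = max(best_Pstar, sum(map(f_star, y)))

print(best_P, best_Pstar, best_Pstar - m * e)   # 1.25  1.5  1.25
```

The optimum of {P} is 1.25, the optimum of {P^*} is 1.5, and since the solution to {P^*} is feasible for {P}, the guarantee {1.5 - m \cdot e = 1.25} holds with equality.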

Here is the question from Ross' book that I posted last week.

Question 1 We have two coins, a red one and a green one. When flipped, one lands heads with probability {P_1} and the other with probability {P_2}. Assume that {P_1>P_2}. We do not know which coin is the {P_1} coin. We initially attach probability {p} to the red coin being the {P_1} coin. We receive one dollar for each heads and our objective is to maximize the total expected discounted return with discount factor {\beta}. Find the optimal policy.

This is a dynamic programming problem where the state is the belief that the red coin is {P_1}. Every period we choose a coin to toss, get a reward, and update our state given the outcome. Before I give my solution, let me explain why we can't immediately invoke uncle Gittins.

In the classical bandit problem there are {n} arms and each arm {i} provides a reward from an unknown distribution {\theta_i\in\Delta([0,1])}. Bandit problems are used to model tradeoffs between exploitation and exploration: Every period we either exploit an arm about whose distribution we already have a good idea or explore another arm. The {\theta_i} are randomized independently according to distributions {\mu_i\in \Delta(\Delta([0,1]))}, and what we are interested in is the expected discounted reward. The optimization problem has a remarkable solution: choose in every period the arm with the largest Gittins index. Then update your belief about that arm using Bayes’ rule. The Gittins index is a function which attaches a number {G(\mu)} (the index) to every belief {\mu} about an arm. What is important is that the index of an arm {i} depends only on {\mu_i} — our current belief about the distribution of the arm — not on our beliefs about the distribution of the other arms.

The independence assumption means that we only learn about the distribution of the arm we are using. This assumption is not satisfied in the red coin green coin problem: if we toss the red coin and get heads, then the probability that the green coin is {P_1} decreases. Googling `multi-armed bandit' with `dependent arms' turned up some papers which I haven't looked at carefully, but my superficial impression is that they would not help here.

Here is my solution. Call the problem I started with `the difficult problem' and consider a variant which I call `the easy problem'. Let {r=p/(p+\sqrt{p(1-p)})} so that {r^2/(1-r)^2=p/(1-p)}. In the easy problem there are again two coins, but this time the red coin is {P_1} with probability {r} and {P_2} with probability {1-r} and, independently, the green coin is {P_1} with probability {1-r} and {P_2} with probability {r}. The easy problem is easy because it is a bandit problem. We have to keep track of beliefs {p_r} and {p_g} about the red coin and the green coin ({p_r} is the probability that the red coin is {P_1}), starting with {p_r=r} and {p_g=1-r}, and when we toss the red coin we update {p_r} but keep {p_g} fixed. It is easy to see that the Gittins index of an arm is a monotone function of the belief that the arm is {P_1}, so the optimal strategy is to play red when {p_r\ge p_g} and green when {p_g\ge p_r}. In particular, the optimal action in the first period is red when {p\ge 1/2} and green when {p\le 1/2}.

Now here comes the trick. Consider a general strategy {g} that assigns to every finite sequence of past actions and outcomes an action (red or green). Denote by {V_d(g)} and {V_e(g)} the rewards that {g} gives in the difficult and easy problems respectively. I claim that

\displaystyle V_e(g)=r(1-r)\cdot P_1/(1-\beta)+ r(1-r)\cdot P_2/(1-\beta) + (r^2+(1-r)^2)\, V_d(g).

Why is that? In the easy problem there is a probability {r(1-r)} that both coins are {P_1}. If this happens then every {g} gives payoff {P_1/(1-\beta)}. There is a probability {r(1-r)} that both coins are {P_2}. If this happens then every {g} gives payoff {P_2/(1-\beta)}. And there is a probability {r^2+(1-r)^2} that the coins are different and, because of the choice of {r}, conditional on this event the probability that the red coin is {P_1} is {p}. Therefore, in this case {g} gives whatever {g} gives in the difficult problem.

So, the payoff in the easy problem is a linear (increasing) function of the payoff in the difficult problem. Therefore the optimal strategy in the difficult problem is the same as the optimal strategy in the easy problem. In particular, we just proved that, for every {p}, the optimal action in the first period is red when {p\ge 1/2} and green when {p\le 1/2}. Going back to the dynamic programming formulation, it follows from standard arguments that the optimal strategy is to keep doing this forever, i.e., at every period to toss the coin that is more likely to be the {P_1} coin given the current information.
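As a numerical sanity check (my sketch, independent of the proof): value iteration on a discretized belief state, with sample parameters {P_1=0.7}, {P_2=0.3}, {\beta=0.9}, recovers the cutoff at {p=1/2}:

```python
import numpy as np

P1, P2, beta = 0.7, 0.3, 0.9     # sample parameters (my choice), P1 > P2
grid = np.linspace(0, 1, 2001)   # belief p = P(red coin is the P1 coin)

V = np.zeros_like(grid)
for _ in range(500):             # value iteration on the belief-state MDP
    p = grid
    # toss red: heads w.p. q_r; Bayes-update the belief after either outcome
    q_r = p * P1 + (1 - p) * P2
    V_red = (q_r * (1 + beta * np.interp(p * P1 / q_r, grid, V))
             + (1 - q_r) * beta * np.interp(p * (1 - P1) / (1 - q_r), grid, V))
    # toss green: a green heads is evidence that red is the P2 coin
    q_g = p * P2 + (1 - p) * P1
    V_green = (q_g * (1 + beta * np.interp(p * P2 / q_g, grid, V))
               + (1 - q_g) * beta * np.interp(p * (1 - P2) / (1 - q_g), grid, V))
    V = np.maximum(V_red, V_green)

# smallest belief at which tossing red is (weakly) optimal: approximately 1/2
print(grid[np.argmax(V_red >= V_green)])
```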

See why I said my solution is tricky and specific? It relies on the fact that there are only two arms (the fact that the arms are coins is not important). Here is a problem whose solution I don't know:

Question 2 Let {0 \le P_1 < P_2 < ... < P_n \le 1}. We are given {n} coins, one of each parameter, with all {n!} assignments equally likely. Each period we have to toss a coin and we get payoff {1} for Heads. What is the optimal strategy?

The Shapley-Folkman-Starr lemma states that the Minkowski sum of a large number of sets is approximately convex. The clearest statement as well as the nicest proof I am familiar with is due to J. W. S. Cassels. Cassels is a distinguished number theorist who for many years taught the mathematical economics course in the Tripos. The lecture notes are available in a slender book now published by Cambridge University Press.

This central-limit-like quality of the lemma is well beyond the capacity of a hewer of wood like myself. I prefer the more prosaic version.

Let {\{S^j\}_{j=1}^n} be a collection of sets in {\Re^m} with {n > m}. Denote by {S} the Minkowski sum of the collection {\{S^j\}_{j=1}^n}. Then, every {x \in conv(S)} can be expressed as {\sum_{j=1}^nx^j} where {x^j \in conv(S^j)} for all {j = 1,\ldots, n} and {|\{j: x^j \not \in S^j\}| \leq m}.

How might this be useful? Let {A} be an {m \times n} 0-1 matrix and {b \in \Re^m} with {n > m}. Consider the problem

\displaystyle \max \{cx: Ax = b, x_j \in \{0,1\}\ \forall \,\, j = 1, \ldots, n\}.

Let {x^*} be a solution to the linear relaxation of this problem. Then, the lemma yields the existence of a 0-1 vector {x} such that {cx \geq cx^* = z} and {||Ax - b||_{\infty} \leq m}. One can get a bound in terms of Euclidean distance as well.

How does one do this? Denote each column {j} of the {A} matrix by {a^j} and let {d^j = (c_j, a^j)}. Let {S^j = \{d^j, 0\}}. Because {z = cx^*} and {b = Ax^*} it follows that {(z,b) \in conv(S)}. Thus, by the Lemma,

\displaystyle (z, b) = \sum_{j=1}^n(c_j, a^j)y_j

where each {y_j \in [0,1]} and {|\{j: y_j \in (0,1) \}| \leq m}. In words, {y} has at most {m} fractional components. Now construct a 0-1 vector {y^*} from {y} as follows. If {y_j \in \{0,1\}}, set {y^*_j = y_j}. If {y_j} is fractional, round {y^*_j} up to 1 with probability {y_j} and down to zero otherwise. Observe that {||Ay^* - b||_{\infty} \leq m} and {E(cy^*) = cx^*}. Hence, there must exist a 0-1 vector {x} with the claimed properties.
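To see the whole argument run end to end, here is a sketch on a random small instance. The LP solver and the instance are my choices; the dual simplex method is requested so that the relaxation's solution is basic and hence has at most {m} fractional components.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 12
A = rng.integers(0, 2, size=(m, n)).astype(float)   # a 0-1 matrix
c = rng.random(n)
b = A @ rng.integers(0, 2, size=n)                   # Ax = b is 0-1 feasible

# linear relaxation: max cx s.t. Ax = b, 0 <= x <= 1 (linprog minimizes)
res = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, 1)] * n, method="highs-ds")
x_star = res.x

frac = (x_star > 1e-9) & (x_star < 1 - 1e-9)
print("fractional components:", frac.sum(), "<=", m)

# round fractional entries up with probability x*_j, down otherwise
y = np.where(frac, (rng.random(n) < x_star).astype(float), np.round(x_star))
print("cy:", c @ y, " cx*:", c @ x_star)             # equal in expectation
print("||Ay - b||_inf:", np.abs(A @ y - b).max(), "<=", m)
```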

The error bound of {m} is too large for many applications. This is a consequence of the generality of the lemma: it makes no use of any structure encoded in the {A} matrix. For example, suppose {x^*} were an extreme point and {A} a totally unimodular matrix. Then the number of fractional components of {x^*} is zero. The rounding methods of Király, Lau and Singh, as well as of Kumar, Marathe, Parthasarathy and Srinivasan, exploit the structure of the matrix. In fact, both use an idea that one can find in Cassels's paper. I'll follow the treatment in Kumar et al.

As before we start with {x^*}. For convenience suppose {0 < x^*_j < 1} for all {j = 1, \ldots, n}. As {A} has more columns than rows, there must be a non-zero vector {r} in the kernel of {A}, i.e., {Ar = 0}. Consider {x^* + \alpha r} and {x^* - \beta r}. For {\alpha > 0} and {\beta > 0} sufficiently small, {x^*_j + \alpha r_j, x^*_j - \beta r_j \in [0,1]} for all {j}. Increase {\alpha} and {\beta} until the first time at least one component of {x^* + \alpha r} and {x^* - \beta r} is in {\{0,1\}}. Next, select the vector {x^* + \alpha r} with probability {\frac{\beta}{\alpha + \beta}} or the vector {x^* - \beta r} with probability {\frac{\alpha}{\alpha + \beta}}. Call the selected vector {x^1}.

Now {Ax^1 = b}. Furthermore, {x^1} has at least one more integer component than {x^*}. Let {J = \{j: x^1_j \in (0,1)\}}. Let {A^J} be the matrix consisting only of the columns in {J} and let {x^1(J)} consist only of the components of {x^1} in {J}. Consider the system {A^Jx^1(J) = b - \sum_{j \not \in J}a^jx^1_j}. As long as {A^J} has more columns than rows we can repeat the same argument as above. This iterative procedure gives us the same rounding result as the lemma. However, one can do better, because it may be that even when the number of columns of the matrix is less than the number of rows, the system is under-determined and therefore the null space is non-trivial.
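And a sketch of the iterative version (my implementation, on a random instance; {scipy.linalg.null\_space} supplies the kernel direction {r}, and the randomization between {x^* + \alpha r} and {x^* - \beta r} keeps the expectation of the vector, hence of {cx}, unchanged):

```python
import numpy as np
from scipy.linalg import null_space

def max_step(xJ, d, tol=1e-12):
    """Largest t >= 0 with xJ + t * d staying inside [0, 1] coordinate-wise."""
    return min((1 - xi) / di if di > 0 else -xi / di
               for xi, di in zip(xJ, d) if abs(di) > tol)

def iterative_round(A, x, rng, tol=1e-9):
    """Repeatedly move along a kernel direction of the fractional columns
    until a coordinate hits 0 or 1; stop when no direction is left.
    Ax is preserved exactly at every step."""
    x = x.astype(float).copy()
    while True:
        J = np.where((x > tol) & (x < 1 - tol))[0]    # fractional coordinates
        if len(J) == 0:
            return x                                   # fully 0-1
        kern = null_space(A[:, J])
        if kern.shape[1] == 0:
            return x            # surviving columns linearly independent; stop
        r = kern[:, 0]
        alpha, beta = max_step(x[J], r), max_step(x[J], -r)
        if rng.random() < beta / (alpha + beta):       # keeps E[x] fixed
            x[J] += alpha * r
        else:
            x[J] -= beta * r

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(3, 8)).astype(float)
x_star = rng.random(8)                  # a pretend fractional LP solution
x1 = iterative_round(A, x_star, rng)
print("fractional left:", ((x1 > 1e-9) & (x1 < 1 - 1e-9)).sum())   # <= 3
print("||A(x1 - x*)||_inf:", np.abs(A @ (x1 - x_star)).max())      # ~ 0
```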

In a sequel, I’ll describe an optimization version of the Lemma that was implicit in Starr’s 1969 Econometrica paper on equilibria in economies with non-convexities.

Here is the bonus question from the final exam in my dynamic optimization class of last semester. It is based on problem 8 in Chapter II of Ross' book Introduction to Stochastic Dynamic Programming. It appears there as `guess the optimal policy', without asking for a proof. The question seems very natural, but I couldn't find any information about it (nor, apparently, could the students). I have a solution, but it is tricky and too specific to this problem. I will describe my solution next week, but perhaps somebody can show me a better solution or a reference?


We have two coins, a red one and a green one. When flipped, one lands heads with probability {P_1} and the other with probability {P_2}. We do not know which coin is the {P_1} coin. We initially attach probability {p} to the red coin being the {P_1} coin. We receive one dollar for each heads and our objective is to maximize the total expected discounted return with discount factor {\beta}. Describe the optimal policy, including a proof of optimality.

The last of the trio, Harold Kuhn, passed away on July 2nd, 2014. Upon hearing the news, I was moved to dig up some old lecture notes of Kuhn's in which KTK is stated and proved. I've been carrying them around with me since 1981. From the condition they are in, that must have been the last time I looked at them. With good reason, for as I re-read them, it dawned upon me how much of them I had absorbed and taken to be my own thoughts. Kuhn motivates the KTK theorem by replacing the non-linear functions with their first order Taylor approximations. This turns the exercise into a linear program. The LP duality theorem suggests the theorem to be proved, and the separating hyperplane theorem does the rest. For details see the relevant chapter of my book. The notes go on to describe Kuhn and Tucker's excitement, and subsequent despair, as they uncover a counterexample and the need for a constraint qualification.
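To see the shape of the argument (my reconstruction, not a transcription of the notes): to test whether {x^*} maximizes {f} subject to {g_i(x) \le 0}, replace {f} and the binding constraints by their first order Taylor approximations at {x^*}. Optimality of {x^*} says no feasible direction improves the objective, i.e., the linear program

\displaystyle \max \{\nabla f(x^*) \cdot d : \nabla g_i(x^*) \cdot d \le 0 \mbox{ for all binding } i\}

has value {0}. By the LP duality theorem (equivalently, Farkas' lemma, which is where the separating hyperplane comes in), this happens exactly when there exist multipliers {\lambda_i \ge 0} with

\displaystyle \nabla f(x^*) = \sum_{i \,\, binding} \lambda_i \nabla g_i(x^*),

which is the KTK stationarity condition. The constraint qualification is what licenses replacing the original constraints by their linearizations.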

William Karush, who passed away in 1997, had arrived at the same theorem many years earlier in his 1939 University of Chicago Master's thesis (Kuhn-Tucker is 1951). Kuhn learned of Karush's contribution through a reading of Takayama's book on Mathematical Economics. Upon doing so, he wrote Karush:

In March I am talking at an AMS Symposium on “Nonlinear Programming – A Historical View.” Last summer I learned through reading Takayama’s Mathematical Economics of your 1939 Master’s Thesis and have obtained a copy. First, let me say that you have clear priority on the results known as the Kuhn–Tucker conditions (including the constraint qualification). I intend to set the record as straight as I can in my talk.

The missive closes with this paragraph:

Dick Cottle, who organized the session, has been told of my plans to rewrite history and says `you must be a saint’ not to complain about the absence of recognition. Al Tucker remembers you from RAND, wonders why you never called this to his attention and sends his best regards.

Karush’s reply, 6 days later, equally gracious:

Thank you for your most gracious letter. I appreciate your thoughtfulness in wanting to draw attention to my early work. If you ask why I did not bring up the matter of priority before, perhaps the answer lies in what is now happening – I am not only going to get credit for my work, but I am going to be crowned a “saint”.


In 1937, representatives of the Plywood Trust called upon Comrade Professor Leonid Vitalievich Kantorovich with a problem. The trust produced 5 varieties of plywood using 8 different machines. How, they asked, should they allocate their limited supply of raw materials to the various machines so as to produce the maximum output of plywood in the required proportions? As problems go it was, from this remove, unremarkable. Remarkable is that the Comrade Professor agreed to take it on. The so-called representatives might have been NKVD. Why? Uncle Joe's first act upon taking power in 1929 was to purge the economists, or more precisely the Jewish ones. This was well before the purge of the communist party in 1936. Why the economists? They complained about waste in a planned economy `dizzy with success.' Yet here were the apparatchiks of the Trust asking the Comrade Professor to reduce waste.

Kantorovich writes that at the time he was burnt out on pure mathematics. Combined with concern at the rise of Hitler, this made him feel compelled to do something practical. And so he turned his mind to the problem of the Plywood Trust. Francis Spufford, in his delightful work of `faction' called Red Plenty, imagines what Kantorovich might have been thinking.

He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made.

If he was right — and he was sure he was, in essentials — then anyone applying the new method to any production situation in the huge family of situations resembling the one at the Plywood Trust, should be able to count on a measureable percentage improvement in the quantity of product they got from a given amount of raw materials. Or you could put that the other way around: they would make a measureable percentage saving on the raw materials they needed to make a given amount of product.

He didn’t know yet what sort of percentage he was talking about, but just suppose it was 3%. It might not sound like much, only a marginal gain, an abstemious eking out of a little bit more from the production process, at a time when all the newspapers showed miners ripping into fat mountains of solid metal, and the output of plants booming 50%, 75%, 150%. But it was predictable. You could count on the extra 3% year after year. Above all it was free. It would come merely by organising a little differently the tasks people were doing already. It was 3% of extra order snatched out of the grasp of entropy. In the face of the patched and mended cosmos, always crumbling of its own accord, always trying to fall down, it built; it gained 3% more of what humanity wanted, free and clear, just as a reward for thought. Moreover, he thought, its applications did not stop with individual factories, with getting 3% more plywood, or 3% more gun barrels, or 3% more wardrobes. If you could maximise, minimise, optimise the collection of machines at the Plywood Trust, why couldn’t you optimise a collection of factories, treating each of them, one level further up, as an equation? You could tune a factory, then tune a group of factories, till they hummed, till they purred. And that meant —

An English description of Kantorovich's method appeared in the July 1960 issue of Management Science. The opening line of the paper is:

The immense tasks laid down in the plan for the third Five Year Plan period require that we achieve the highest possible production on the basis of the optimum utilization of the existing reserves of industry: materials, labor and equipment.

The paper contains a formulation of the Plywood Trust's problem as a linear program, a recognition of the existence of an optimal solution at an extreme point, as well as of the hopelessness of enumerating extreme points as a solution method. Kantorovich then goes on to propose his method, which he calls the method of resolving multipliers. Essentially, Kantorovich proposes that one solve the dual and then use complementary slackness to recover the primal (a stylized version of the problem appears after the quote below). One might wonder how Kantorovich's contribution differs from the contributions of Koopmans and Dantzig. That is another story, and as fair a description of the issues as I know can be found in Roy Gardner's 1990 piece in the Journal of Economic Literature. I reproduce one choice remark:

Thus, the situation of Kantorovich is rather like that of the discoverer Columbus. He really never touched the American mainland, and he didn’t give its name, but he was the first one in the area.
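Returning to the method: here is a stylized version of the Plywood Trust problem (my reconstruction, not Kantorovich's own formulation). Machine {k} devotes a fraction {x_{ik}} of its time to plywood variety {i}, producing {a_{ik}} units per unit time; output must come in the plan proportions {p_i}, and the objective is the common scale {z}:

\displaystyle \max z \mbox{ subject to } \sum_k a_{ik}x_{ik} \ge p_i z \,\,\forall i, \quad \sum_i x_{ik} \le 1 \,\,\forall k, \quad x_{ik} \ge 0.

The dual variables {u_i} on the output constraints are the resolving multipliers. Complementary slackness says a machine {k} is assigned to variety {i} only if {a_{ik}u_i = \max_{i'} a_{i'k}u_{i'}}, so once the multipliers are known the primal plan can be read off, which is Kantorovich's procedure.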

As an aside, David Gale is the one often forgotten in this discussion. If the Nobel committee had awarded the prize for linear programming, Dantzig and Gale would have been included. Had Gale lived long enough, he might have won it again for matching, making him the third to have won the prize twice in the same subject. The others are John Bardeen and Frederick Sanger.

Continuing with Spufford’s imaginings:

— and that meant that you could surely apply the method to the entire Soviet economy, he thought. He could see that this would not be possible under capitalism, where all the factories had separate owners, locked in wasteful competition with one another. There, nobody was in a position to think systematically. The capitalists would not be willing to share information about their operations; what would be in it for them? That was why capitalism was blind, why it groped and blundered. It was like an organism without a brain. But here it was possible to plan for the whole system at once. The economy was a clean sheet of paper on which reason was writing. So why not optimise it? All he would have to do was to persuade the appropriate authorities to listen.

Implementation of Kantorovich’s solution at the Plywood trust led to success. Inspired, Kantorovich sent a letter to Gosplan urging adoption of his methods. Here the fact that Kantorovich solved the dual first rather than the primal is important. Kantorovich interpreted his resolving multipliers (shadow prices today) as objectively determined prices. Kantorovich’s letter to Gosplan urged a replacement of the price system in place by his resolving multipliers. Kantorovich intended to implement optimal production plans through appropriate pieces. Gosplan, responded that reform was unecessary. Kantorovich narrowly missed a trip to the Gulag and stopped practicing Economics, for a while. Readers wanting a fuller sense of what mathematical life was like in this period should consult this piece by G. G. Lorentz.

After the war, Kantorovich took up linear programming again. At Leningrad, he headed a team working to reduce the scrap metal produced at the Egorov railroad-car plant. The resulting reduction in waste reduced the supply of scrap iron to steel mills, disrupting their production! Kantorovich escaped punishment by the Leningrad regional party because of his work on atomic reactors.

Kantorovich’s interpretation of resolving multipliers which he renamed as objectively determined valuations put him at odds with the prevailing labor theory of value. In the post Stalin era, he was criticized for being under the sway of Bohm-Bawerk, author of the notion of subjective utility. Aron Katsenelinboigen, relates a joke played by one of these critics on Kantorovich. A production problem was presented to Kantorovich where the labor supply constraint would be slack at optimality. Its `objectively determined valuation’ was therefore zero, contradicting the labor theory of value.

Nevertheless, Kantorovich survived. This last verse from the Ballad of L. V. Kantorovich, authored by Joseph Lakhman, explains why:

Then came a big scholar with a solution.
Alas, too clever a solution.
`Objectively determined valuations’-
That’s the panacea for each and every doubt!
Truth be told, the scholar got his knuckles rapped
Slightly rapped
For such an unusual advice
That threatened to overturn the existing order.
After some thought, however, the conclusion was reached
That the valuations had been undervalued

