You are currently browsing the category archive for the ‘linear programming’ category.
Volume 42 of the AER, published in 1952, contains an article by Paul Samuelson entitled `Spatial Price Equilibrium and Linear Programming’. In it, Samuelson uses a model of Enke (1951) as a vehicle to introduce the usefulness of linear programming techniques to Economists. The second paragraph of the paper is as follows:
In recent years economists have begun to hear about a new type of theory called linear programming. Developed by such mathematicians as G. B. Dantzig, J. v. Neumann, A. W. Tucker, and G. W. Brown, and by such economists as R. Dorfman, T. C. Koopmans, W. Leontief, and others, this field admirably illustrates the failure of marginal equalization as a rule for defining equilibrium. A number of books and articles on this subject are beginning to appear. It is the modest purpose of the following discussion to present a classical economics problem which illustrates many of the characteristics of linear programming. However, the problem is of economic interest for its own sake and because of its ancient heritage.
Of interest are the 5 reasons that Samuelson gives for why readers of the AER should care.

This viewpoint might aid in the choice of convergent numerical iterations to a solution.

From the extensive theory of maxima, it enables us immediately to evaluate the sign of various comparative-statics changes. (E.g., an increase in net supply at any point can never in a stable system decrease the region’s exports.)

By establishing an equivalence between the Enke problem and a maximum problem, we may be able to use the known electric devices for solving the former to solve still other maximum problems, and perhaps some of the linear programming type.

The maximum problem under consideration is of interest because of its unusual type: it involves in an essential way such nonanalytic functions as absolute value of X, which has a discontinuous derivative and a corner; this makes it different from the conventionally studied types and somewhat similar to the inequality problems met with in linear programming.

Finally, there is general methodological and mathematical interest in the question of the conditions under which a given equilibrium problem can be significantly related to a maximum or minimum problem.
Düppe and Weintraub date the birth of Economic Theory to June 1949, the year in which Koopmans organized the Cowles Commission Activity Analysis Conference. It is also counted as conference zero of the Mathematical Programming Symposium. I mention this because the connections between Economic Theory, Mathematical Programming, and Operations Research had, at one time, been very strong. The conference, for example, was conceived of by Tjalling Koopmans, Harold Kuhn, George Dantzig, Albert Tucker, Oskar Morgenstern, and Wassily Leontief with the support of the RAND Corporation.
One of the last remaining links to this period who straddled, like a Colossus, both Economic Theory and Operations Research, Herbert Eli Scarf, passed away on November 15th, 2015.
Scarf came to Economics and Operations Research by way of Princeton’s mathematics department. Among his classmates were Gomory of cutting plane fame, Milnor of topology fame, and Shapley. Subsequently, he went on to RAND (home to Dantzig, Bellman, and Ford & Fulkerson). While there he met Samuel Karlin and Kenneth Arrow, who introduced him to inventory theory. It was in this subject that Scarf made the first of many important contributions: the optimality of (S, s) policies. He would go on to establish the equivalence of the core and competitive equilibrium (jointly with Debreu), identify a sufficient condition for nonemptiness of the core of an NTU game (now known as Scarf’s Lemma), anticipate the application of Groebner bases in integer programming (neighborhood systems), and, of course, write his magnificent `Computation of Economic Equilibria’.
Exegi monumentum aere perennius regalique situ pyramidum altius, quod non imber edax, non Aquilo impotens possit diruere aut innumerabilis annorum series et fuga temporum. Non omnis moriar…
I have finished a monument more lasting than bronze and higher than the royal structure of the pyramids, which neither the destructive rain, nor wild North wind is able to destroy, nor the countless series of years and flight of ages. I will not wholly die…
Starr’s ’69 paper considered Walrasian equilibria in exchange economies with nonconvex preferences, i.e., the upper contour sets of utility functions are nonconvex. Suppose agents and goods with . Starr identified a price vector and a feasible allocation with the property that at most agents did not receive a utility maximizing bundle at the price vector .
A poetic interlude. Arrow and Hahn’s book has a chapter that describes Starr’s work and closes with a couple of lines of Milton:
A gulf profound as that Serbonian Bog
Betwixt Damiata and Mount Casius old,
Where Armies whole have sunk.
Milton uses the word concave a couple of times in Paradise Lost to refer to the vault of heaven. Indeed the OED lists this as one of the poetic uses of concavity.
Now, back to brass tacks. Suppose is agent ’s utility function. Replace the upper contour sets associated with for each by its convex hull. Let be the concave utility function associated with the convex hulls. Let be the Walrasian equilibrium prices wrt . Let be the allocation to agent in the associated Walrasian equilibrium.
For each agent let
where is agent ’s endowment. Denote by the vector of total endowments and let .
Let be the excess demand with respect to and . Notice that is in the convex hull of the Minkowski sum of . By the Shapley-Folkman-Starr lemma we can find for , such that and .
When one recalls that Walrasian equilibria can also be determined by maximizing a suitable weighted (the Negishi weights) sum of utilities over the set of feasible allocations, Starr’s result can be interpreted as a statement about approximating an optimization problem. I believe this was first articulated by Aubin and Ekeland (see their ’76 paper in Math of OR). As an illustration, consider the following problem:
subject to
Call this problem . Here is an matrix with .
For each let be the smallest concave function such that for all (probably quasiconcave will do). Instead of solving problem , solve problem instead:
subject to
The obvious question to be answered is how good an approximation is the solution to to problem . To answer it, let (where I leave you, the reader, to fill in the blanks about the appropriate domain). Each measures how close is to . Sort the ‘s in decreasing order. If is an optimal solution to , then following the idea in Starr’s ’69 paper we get:
The Shapley-Folkman lemma states that the Minkowski sum of a large number of sets is approximately convex. The clearest statement as well as the nicest proof I am familiar with is due to J. W. S. Cassels. Cassels is a distinguished number theorist who for many years taught the mathematical economics course in the Tripos. The lecture notes are available in a slender book now published by Cambridge University Press.
This central-limit-like quality of the lemma is well beyond the capacity of a hewer of wood like myself. I prefer the more prosaic version.
Let be a collection of sets in with . Denote by the Minkowski sum of the collection . Then, every can be expressed as where for all and .
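As a toy illustration of the prosaic version, take m copies of the one-dimensional set {0, 3}; any point of the Minkowski sum of their convex hulls can be written as a sum of m terms with at most n = 1 of them fractional. The sketch below is an invented example, not from the original post:

```python
# Toy illustration of the prosaic Shapley-Folkman statement in dimension
# n = 1: each S_j = {0, 3}, so conv(S_j) = [0, 3].  Any point of the
# Minkowski sum of the convex hulls is a sum of m terms, at most one of
# which fails to lie in its own S_j.  (Invented example.)

def represent(target, m, a=3.0):
    """Write target in [0, a*m] as a sum of m terms in [0, a], with at
    most one term outside {0, a}."""
    full = int(target // a)              # sets assigned the extreme point a
    remainder = target - full * a        # the (at most one) fractional term
    parts = [a] * full + ([remainder] if remainder > 0 else [])
    parts += [0.0] * (m - len(parts))
    return parts

parts = represent(7.5, 10)
fractional = [p for p in parts if p not in (0.0, 3.0)]
print(sum(parts), len(fractional))       # 7.5 1
```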
How might this be useful? Let be a 0-1 matrix and with . Consider the problem
Let be a solution to the linear relaxation of this problem. Then, the lemma yields the existence of a 0-1 vector such that and . One can get a bound in terms of Euclidean distance as well.
How does one do this? Denote each column of the matrix by and let . Let . Because and it follows that . Thus, by the Lemma,
where each and . In words, has at most fractional components. Now construct a 0-1 vector from as follows. If , set . If is fractional, round up to 1 with probability and down to zero otherwise. Observe that and the . Hence, there must exist a 0-1 vector with the claimed properties.
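The rounding step just described can be sketched in a few lines of Python. The 0-1 matrix and fractional vector below are invented for illustration; the only claim is the one in the text: rounding each fractional component up with probability equal to its value leaves the product with the matrix unchanged in expectation, and since the matrix is 0-1, no coordinate of the product moves by more than the number of fractional components.

```python
import numpy as np

# Randomized rounding of a fractional solution x_star: each fractional
# component is rounded up with probability equal to its value, so
# E[x_hat] = x_star and E[A @ x_hat] = A @ x_star.  (Illustrative data.)

rng = np.random.default_rng(0)

A = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])              # a 0-1 matrix with m = 3 rows
x_star = np.array([1.0, 0.0, 0.4, 0.7])  # at most m fractional entries

frac = (x_star > 0) & (x_star < 1)
x_hat = x_star.copy()
x_hat[frac] = (rng.random(frac.sum()) < x_star[frac]).astype(float)

deviation = np.abs(A @ (x_hat - x_star)).max()
print(x_hat, deviation)  # deviation is at most the number of fractional entries
```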
The error bound of is too large for many applications. This is a consequence of the generality of the lemma. It makes no use of any structure encoded in the matrix. For example, suppose were an extreme point and a totally unimodular matrix. Then, the number of fractional components of $x^*$ is zero. The rounding methods of Király, Lau and Singh as well as of Kumar, Marathe, Parthasarathy and Srinivasan exploit the structure of the matrix. In fact both use an idea that one can find in Cassels’s paper. I’ll follow the treatment in Kumar et al.
As before we start with . For convenience suppose for all . As has more columns than rows, there must be a nonzero vector in the kernel of , i.e., . Consider and . For and sufficiently small, for all . Increase and until the first time at least one component of and is in . Next select the vector with probability or the vector with probability . Call the selected vector .
Now . Furthermore, has at least one more integer component than . Let . Let be the matrix consisting only of the columns in and consist only of the components of in . Consider the system . As long as has more columns than rows we can repeat the same argument as above. This iterative procedure gives us the same rounding result as the Lemma. However, one can do better, because it may be that even when the number of columns of the matrix is less than the number of rows, the system may be underdetermined and therefore the null space is nontrivial.
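One step of this kernel-based rounding can be sketched as follows (toy data; the helper names are mine). A nonzero vector in the kernel of the columns at fractional components is found via the SVD, and the solution is pushed along it in a random direction chosen so that its expectation is unchanged, until at least one more component becomes integral; the product with the matrix is preserved exactly.

```python
import numpy as np

def max_step(xf, d, eps=1e-12):
    """Largest t >= 0 such that xf + t*d stays within [0, 1]."""
    ts = [(1 - xi) / di if di > 0 else xi / -di
          for xi, di in zip(xf, d) if abs(di) > eps]
    return min(ts)

def round_step(A, x, rng, eps=1e-9):
    """One iteration: perturb x along a kernel vector of the columns of A
    at fractional components, randomizing the direction so E[x] is
    unchanged; A @ x is preserved exactly."""
    frac = np.where((x > eps) & (x < 1 - eps))[0]
    B = A[:, frac].astype(float)
    y = np.linalg.svd(B)[2][-1]          # a kernel vector of B, if one exists
    assert np.allclose(B @ y, 0)
    alpha, beta = max_step(x[frac], y), max_step(x[frac], -y)
    x = x.copy()
    if rng.random() < beta / (alpha + beta):   # keeps E[x] unchanged
        x[frac] += alpha * y
    else:
        x[frac] -= beta * y
    return x

rng = np.random.default_rng(1)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])          # 2 rows, 3 fractional columns
x0 = np.array([0.5, 0.5, 0.5])
x1 = round_step(A, x0, rng)
print(np.allclose(A @ x1, A @ x0))       # True: A @ x is preserved
```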
In a sequel, I’ll describe an optimization version of the Lemma that was implicit in Starr’s 1969 Econometrica paper on equilibria in economies with nonconvexities.
Sydney Afriat arrived in Purdue in the late 60’s with a Bentley in tow. Mort Kamien described him as having walked out of the pages of an Ian Fleming novel. Why he brought the Bentley was a puzzle, as there were no qualified mechanics as far as the eye could see. In Indiana, that is a long way. Afriat would take his Bentley on long drives only to be interrupted by mechanical difficulties that necessitated the Bentley being towed to wait for parts or specialized help.
I came upon Afriat when I learnt about the problem of rationalizability. One has a model of choice and a collection of observations about what an agent selected. Can one rationalize the observed choices by the given model of choice? In Afriat’s seminal paper on the subject, the observations consisted of price-quantity pairs for a vector of goods and a budget. The goal was to determine if the observed choices were consistent with an agent maximizing a concave utility function subject to the budget constraint. Afriat’s paper has prompted many other papers asking the same question for different models of choice. There is an aspect of these papers, including Afriat’s, that I find puzzling.
To illustrate, consider rationalizing expected utility (Eran Shmaya suggested that `expected consumption’ might be more accurate). Let be the set of possible states. We are given a sequence of observations and a single budget . Here represents consumption in state and is the unit price of consumption in state in observation . We want to know if there is a probability distribution over states, , such that each maximizes expected utility. In other words, solves
subject to
The solution to the above program is obvious. Identify the variable with the largest objective coefficient to constraint ratio and make it as large as possible. It is immediate that a collection of observations can be rationalized by a suitable set of nonzero and nonnegative ‘s if the following system has a feasible solution:
This completes the task as formulated by Afriat. A system of inequalities has been identified that, if feasible, means the given observations can be rationalized. How hard is this to do in other cases? As long as the model of choice involves optimization and the optimization problem is well behaved in that first order conditions, say, suffice to characterize optimality, it’s a homework exercise. One can do this all day, thanks to Afriat; concave, additively separable concave, etc. etc.
Interestingly, no rationalizability paper stops at the point of identifying the inequalities. Even Afriat’s paper goes a step farther and proceeds to `characterize’ when the observations can be rationalized. But, feasibility of the inequalities themselves is just such a characterization. What more is needed?
Perhaps the characterization involving inequalities lacks `interpretation’. Or, if the given system for a set of observations was infeasible, we may be interested in the obstacle to feasibility. Afriat’s paper gave a characterization in terms of the strong axiom of revealed preference, i.e., an absence of cycles of certain kinds. But that is precisely the Farkas alternative to the system of inequalities identified in Afriat. The absence of cycles condition follows from the fact that the initial set of inequalities is associated with the problem of finding a shortest path (see the chapter on rationalizability in my mechanism design book). Let me illustrate with the example above. It is equivalent to finding a nonnegative and nontrivial solution to
Take logs:
This is exactly the dual to the problem of finding a shortest path in a suitable network (I believe that Afriat has a paper, that I’ve not found, which focuses on systems of the form ). The cycle characterization would involve products of terms like being less than 1 (or greater than 1, depending on convention). So, what would this add?
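The shortest-path connection can be made concrete with a Bellman-Ford negative-cycle test. The sketch below assumes the standard difference-constraint form of the logged inequalities: observation t with prices p[t] and chosen state k[t] requires log pi[s] - log pi[k[t]] <= log p[t][s] - log p[t][k[t]] for every state s, and such a system is feasible exactly when the graph with those edge weights has no negative cycle. The function name and toy data are my own.

```python
import math

def rationalizable(prices, chosen):
    """Feasibility of the difference-constraint system via Bellman-Ford:
    feasible iff the constraint graph has no negative cycle."""
    n = len(prices[0])
    # edge (k, s) with weight log p[t][s] - log p[t][k] per observation t
    edges = [(k, s, math.log(p[s]) - math.log(p[k]))
             for p, k in zip(prices, chosen) for s in range(n) if s != k]
    dist = [0.0] * n                     # virtual source reaching every node
    for _ in range(n):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return all(dist[u] + w >= dist[v] - 1e-12 for u, v, w in edges)

# choosing the cheaper state in each observation is rationalizable...
print(rationalizable([[1.0, 2.0], [2.0, 1.0]], [0, 1]))   # True
# ...choosing the pricier state in both observations is not
print(rationalizable([[1.0, 2.0], [2.0, 1.0]], [1, 0]))   # False
```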
In my salad days, school masters would assign boys returning from the summer hols an essay: `What I did during the summer’. Yes, masters and boys. I served a portion of my youth in a `misbegotten penal colony upon a wind blasted heath’. The only females present were masters’ wives, matrons, and the French mistress. No, not that kind, the kind that offers instruction in French. As you can see, to the lascivious minds of boys, there was no end to the double entendres. However, I digress.
Over the summer Thanh Nguyen and I completed a paper about stable matchings. The abstract is reproduced below.
The National Resident Matching program strives for a stable matching of medical students to teaching hospitals. With the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a `nearby’ instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Specifically, given a reported capacity for each hospital , we find a redistribution of the slot capacities satisfying for all hospitals and , such that a stable matching exists with respect to . Our approach is general and applies to other types of complementarities, as well as matchings with side constraints and contracts.
In other words, with the addition of at most 9 additional slots, one can guarantee the existence of a stable matching. This is independent of the size of the market or doctors’ preferences (it does assume responsive preferences on the part of hospitals). The key tool is Scarf’s lemma, which is a wonderful device for converting results about cardinal matching problems into results about ordinal matching problems. For more on this, consult the paper by Király and Pap, who should be credited with a formulation of Scarf’s lemma that makes its usefulness evident.
The last of the trio, Harold Kuhn, passed away on July 2nd, 2014. Upon hearing the news, I was moved to dig up some old lecture notes of Kuhn’s in which KTK is stated and proved. I’ve been carrying them around with me since 1981. From the condition they are in, this must have been the last time I looked at them. With good reason, for as I reread them, it dawned upon me how much of them I had absorbed and taken to be my own thoughts. Kuhn motivates the KTK theorem by replacing the nonlinear functions by their first order Taylor approximations. This turns the exercise into a linear program. The LP duality theorem suggests the theorem to be proved, and the separating hyperplane theorem does the rest. For details see the relevant chapter of my book. The notes go on to describe Kuhn and Tucker’s excitement and subsequent despair as they uncover a counterexample and the need for a constraint qualification.
William Karush, who passed away in 1997, had arrived at the same theorem many years earlier in his 1939 University of Chicago Master’s Thesis (Kuhn-Tucker is 1951). Kuhn learned of Karush’s contribution through a reading of Takayama’s book on Mathematical Economics. Upon doing so, he wrote Karush:
In March I am talking at an AMS Symposium on “Nonlinear Programming – A Historical View.” Last summer I learned through reading Takayama’s Mathematical Economics of your 1939 Master’s Thesis and have obtained a copy. First, let me say that you have clear priority on the results known as the Kuhn–Tucker conditions (including the constraint qualification). I intend to set the record as straight as I can in my talk.
The missive closes with this paragraph:
Dick Cottle, who organized the session, has been told of my plans to rewrite history and says `you must be a saint’ not to complain about the absence of recognition. Al Tucker remembers you from RAND, wonders why you never called this to his attention and sends his best regards.
Karush’s reply, 6 days later, was equally gracious:
Thank you for your most gracious letter. I appreciate your thoughtfulness in wanting to draw attention to my early work. If you ask why I did not bring up the matter of priority before, perhaps the answer lies in what is now happening – I am not only going to get credit for my work, but I am going to be crowned a “saint”.
This is the second post about stability and equilibrium in trading networks. The first post may be found here. In it I assumed a single homogeneous divisible good being traded by agents with quasilinear preferences. Each agent was associated with a vertex of a graph and the edges represented pairs of agents who were permitted to trade with each other. Furthermore, each agent was designated the role of buyer or seller. Buyers could only trade with sellers and vice versa. This meant the underlying graph was bipartite. In this post I will drop the assumption that they trade a homogeneous good and that it is divisible.
As noted in the earlier post, when the utility of buyers and the costs of sellers are linear in quantity, total unimodularity of the underlying constraint matrix is sufficient to ensure the existence of Walrasian prices that support an efficient allocation that is integral. If buyer utility functions become concave or sellers’ cost functions become convex, this is no longer true.
Suppose instead that the buyers’ utility functions are M-concave and the sellers’ cost functions are M-convex (objects to be defined soon); then the efficient allocation (the precise formulation will appear below) is indeed integer valued and supporting Walrasian prices exist. The corresponding allocations and prices can be interpreted as stable outcomes which cannot be blocked. This result was established by Murota and Tamura (2001).
I’ll define M-concavity. For any integer valued vector z in Z^n, let supp^+(z) = {i : z_i > 0} and supp^-(z) = {i : z_i < 0}. A real valued function f defined on the integer vectors of Z^n (with nonempty domain) is M-concave if for all x, y in its domain and any index i in supp^+(x - y) there exists an index j in supp^-(x - y) ∪ {0} such that

f(x) + f(y) ≤ f(x - χ_i + χ_j) + f(y + χ_i - χ_j).
Here χ_j is the 0-1 vector with a zero in every component except component j (and χ_0 is the zero vector, so the index j = 0 corresponds to making no compensating change). I describe three different ways one might interpret this notion.
First, one can think of M-concavity as trying to capture an essential feature of the basis exchange property of matroids. If you don’t know what a matroid is, skip to the next paragraph. Think of the vectors x and y as being 0-1 and representing subsets of columns of some given matrix, A, say. Define f(x) to be the rank of the columns in the subset associated with x. What the definition of M-concavity captures is this: for any column i I remove from x, that is not in y, and add it to y, I can find (possibly) a column from y (not in x) to add to x and not diminish the sum of the ranks of the two sets. The argument is straightforward.
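This exchange property can be checked by brute force on a small example. The matrix and function names below are invented for illustration; f(X) is the rank of the set of columns X, and the check runs over every pair of column subsets:

```python
import numpy as np
from itertools import combinations

# Brute-force verification of the matroid-rank exchange property on a
# small illustrative matrix (not from the post).

M = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])

def f(X):
    """Rank of the set of columns X of M."""
    return int(np.linalg.matrix_rank(M[:, sorted(X)])) if X else 0

def exchange_holds(X, Y):
    # for every column i in X \ Y, moving i from X to Y can be repaired,
    # possibly by moving some j in Y \ X back, without lowering f(X) + f(Y)
    for i in X - Y:
        candidates = [f(X - {i}) + f(Y | {i})]            # move i only
        candidates += [f((X - {i}) | {j}) + f((Y | {i}) - {j})
                       for j in Y - X]                     # swap i and j
        if max(candidates) < f(X) + f(Y):
            return False
    return True

cols = range(M.shape[1])
subsets = [set(c) for r in range(5) for c in combinations(cols, r)]
print(all(exchange_holds(X, Y) for X in subsets for Y in subsets))  # True
```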
A second interpretation is to be had by comparing the definition of M-concavity to the following way of stating concavity of a function:
If x and y were numbers with x > y, then the first term on the right represents a step from x closer to y and the second term is a step from y closer to x. M-concavity can be interpreted as a discrete analogue of this. Think of x as being above and to the left of y. A step closer to y would mean one step down in the i direction and a step to the right in the j direction, i.e., x - χ_i + χ_j. The vector y + χ_i - χ_j has a similar interpretation.
The third interpretation comes from an equivalent definition of M-concavity, one that shows it is identical to the notion of gross substitutes proposed by Kelso and Crawford (1982). Let p be a nonnegative price vector and let x be a nonnegative integer vector that maximizes f(x) - p·x, i.e., a utility maximizing bundle. Consider a new price vector p’ ≥ p. Then there exists a new utility maximizing bundle y with respect to p’ such that y_i ≥ x_i for all i such that p’_i = p_i. In words, the demand for goods whose price did not change cannot go down.
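The gross substitutes formulation lends itself to a direct numerical check. The sketch below assumes a separable concave valuation (a standard example of an M-concave function); the weights, prices, and bundle ranges are all invented for illustration:

```python
import itertools

# Gross substitutes check under an assumed separable concave valuation:
# raising one good's price cannot reduce the demand for the goods whose
# prices are unchanged.  (Toy numbers throughout.)

weights = (3.0, 2.5, 1.8)

def value(x):
    # concave in each good: first unit worth w, second unit worth 0.45*w
    return sum(w * min(q, 1) + 0.45 * w * max(q - 1, 0)
               for w, q in zip(weights, x))

bundles = list(itertools.product(range(3), repeat=3))

def demand(p):
    # a quasilinear-utility-maximizing bundle at prices p
    return max(bundles,
               key=lambda x: value(x) - sum(pi * q for pi, q in zip(p, x)))

p_old = (1.0, 1.0, 1.0)
p_new = (2.6, 1.0, 1.0)          # only the price of good 0 rises
x_old, x_new = demand(p_old), demand(p_new)
print(x_old, x_new)              # (2, 2, 1) then (1, 2, 1)
assert all(x_new[i] >= x_old[i] for i in range(3) if p_new[i] == p_old[i])
```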
The key result of Murota and Tamura is that the following linear program has an integer optimal solution when the ‘s are M-concave and the ‘s are M-convex.
subject to
Notice that buyers’ utilities do not necessarily depend on the total amount consumed. They can depend on the vector of inflows from the various sellers (similarly for the sellers). We can interpret this to mean that the sellers sell different goods which are substitutes for each other and there is an upper limit on the amount of seller ‘s good that can go to buyer . Such a bound is necessary to make sure the problem is well defined. Consider the case when utilities and costs are linear in quantity; the problem could then be unbounded. A second interpretation is that sellers sell the same good but under different contractual terms (payment plan, delivery date, etc.) which make them, in the buyer’s eyes, imperfect substitutes. The only element of the contract not fixed is the price, and that is what is determined in equilibrium.
In 1937, representatives of the Plywood Trust called upon Comrade Professor Leonid Vitalievich Kantorovich with a problem. The trust produced 5 varieties of plywood using 8 different machines. How, they asked, should they allocate their limited supply of raw materials to the various machines so as to produce the maximum output of plywood in the required proportions? As problems go it was, from this remove, unremarkable. Remarkable is that the Comrade Professor agreed to take it on. The so-called representatives might have been NKVD. Why? Uncle Joe’s first act upon taking power in 1929 was to purge the economists, or more precisely the Jewish ones. This was well before the purge of the communist party in 1936. Why the economists? They complained about waste in a planned economy `dizzy with success.’ Yet, here were the apparatchiks of the Trust asking the Comrade Professor to reduce waste.
Kantorovich writes that, at the time, he was burnt out by pure mathematics. Combined with concern at the rise of Hitler, he felt compelled to do something practical. And so he turned his mind to the problem of the Plywood Trust. Francis Spufford, in his delightful work of `faction’ called Red Plenty, imagines what Kantorovich might have been thinking.
He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made.
If he was right — and he was sure he was, in essentials — then anyone applying the new method to any production situation in the huge family of situations resembling the one at the Plywood Trust, should be able to count on a measureable percentage improvement in the quantity of product they got from a given amount of raw materials. Or you could put that the other way around: they would make a measureable percentage saving on the raw materials they needed to make a given amount of product.
He didn’t know yet what sort of percentage he was talking about, but just suppose it was 3%. It might not sound like much, only a marginal gain, an abstemious eking out of a little bit more from the production process, at a time when all the newspapers showed miners ripping into fat mountains of solid metal, and the output of plants booming 50%, 75%, 150%. But it was predictable. You could count on the extra 3% year after year. Above all it was free. It would come merely by organising a little differently the tasks people were doing already. It was 3% of extra order snatched out of the grasp of entropy. In the face of the patched and mended cosmos, always crumbling of its own accord, always trying to fall down, it built; it gained 3% more of what humanity wanted, free and clear, just as a reward for thought. Moreover, he thought, its applications did not stop with individual factories, with getting 3% more plywood, or 3% more gun barrels, or 3% more wardrobes. If you could maximise, minimise, optimise the collection of machines at the Plywood Trust, why couldn’t you optimise a collection of factories, treating each of them, one level further up, as an equation? You could tune a factory, then tune a group of factories, till they hummed, till they purred. And that meant —
An English description of Kantorovich’s paper appeared in the July 1960 issue of Management Science. The opening line of the paper is:
The immense tasks laid down in the plan for the third Five Year Plan period require that we achieve the highest possible production on the basis of the optimum utilization of the existing reserves of industry: materials, labor and equipment.
The paper contains a formulation of the Plywood Trust’s problem as a linear program, a recognition of the existence of an optimal solution at an extreme point, as well as the hopelessness of enumerating extreme points as a solution method. Kantorovich then goes on to propose his method, which he calls the method of resolving multipliers. Essentially, Kantorovich proposes that one solve the dual and then use complementary slackness to recover the primal. One might wonder how Kantorovich’s contribution differs from the contributions of Koopmans and Dantzig. That is another story, and as fair a description of the issues as I know can be found in Roy Gardner’s 1990 piece in the Journal of Economic Literature. I reproduce one choice remark:
Thus, the situation of Kantorovich is rather like that of the discoverer Columbus. He really never touched the American mainland, and he didn’t give its name, but he was the first one in the area.
As an aside, David Gale is the one often forgotten in this discussion. If the Nobel committee had awarded the prize for Linear Programming, Dantzig and Gale would have been included. Had Gale lived long enough, he might have won it again for matching, making him the third to have won the prize twice in the same subject. The others are John Bardeen and Frederick Sanger.
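In miniature, the dual-first route described above reads as follows. The sketch is an invented two-variable example, not Kantorovich's actual computations: it finds the optimal dual by (hopeless-in-general) enumeration of vertices, then uses complementary slackness, positive dual values forcing tight primal constraints, to recover the primal solution.

```python
import numpy as np
from itertools import combinations

# Primal (toy data): max c.x  s.t. A x <= b, x >= 0.
# Dual:              min b.y  s.t. A^T y >= c, y >= 0.

A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])

# Enumerate vertices of the dual feasible region (intersections of pairs
# of constraints) -- workable only for tiny examples, as the post notes.
G = np.vstack([A.T, np.eye(2)])          # rows: g.y >= h
h = np.concatenate([c, np.zeros(2)])
best = None
for i, j in combinations(range(4), 2):
    try:
        v = np.linalg.solve(G[[i, j]], h[[i, j]])
    except np.linalg.LinAlgError:
        continue                          # parallel constraints, no vertex
    if (G @ v >= h - 1e-9).all() and (best is None or b @ v < b @ best):
        best = v
y = best                                  # optimal dual solution

# Complementary slackness: y_i > 0 forces primal constraint i to be tight.
tight = y > 1e-9
x = np.linalg.solve(A[tight], b[tight])
print(x, c @ x, b @ y)                    # x = [2. 2.], both objectives 10.0
```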
Continuing with Spufford’s imaginings:
— and that meant that you could surely apply the method to the entire Soviet economy, he thought. He could see that this would not be possible under capitalism, where all the factories had separate owners, locked in wasteful competition with one another. There, nobody was in a position to think systematically. The capitalists would not be willing to share information about their operations; what would be in it for them? That was why capitalism was blind, why it groped and blundered. It was like an organism without a brain. But here it was possible to plan for the whole system at once. The economy was a clean sheet of paper on which reason was writing. So why not optimise it? All he would have to do was to persuade the appropriate authorities to listen.
Implementation of Kantorovich’s solution at the Plywood Trust led to success. Inspired, Kantorovich sent a letter to Gosplan urging adoption of his methods. Here the fact that Kantorovich solved the dual first rather than the primal is important. Kantorovich interpreted his resolving multipliers (shadow prices today) as objectively determined prices. Kantorovich’s letter to Gosplan urged a replacement of the price system in place by his resolving multipliers. Kantorovich intended to implement optimal production plans through appropriate prices. Gosplan responded that reform was unnecessary. Kantorovich narrowly missed a trip to the Gulag and stopped practicing Economics, for a while. Readers wanting a fuller sense of what mathematical life was like in this period should consult this piece by G. G. Lorentz.
After the war, Kantorovich took up linear programming again. At Leningrad, he headed a team to reduce the scrap metal produced at the Egorov railroad-car plant. The resulting reduction in waste reduced the supply of scrap iron to steel mills, disrupting their production! Kantorovich escaped punishment by the Leningrad regional party because of his work on atomic reactors.
Kantorovich’s interpretation of resolving multipliers, which he renamed objectively determined valuations, put him at odds with the prevailing labor theory of value. In the post-Stalin era, he was criticized for being under the sway of Böhm-Bawerk, author of the notion of subjective utility. Aron Katsenelinboigen relates a joke played by one of these critics on Kantorovich. A production problem was presented to Kantorovich in which the labor supply constraint would be slack at optimality. Its `objectively determined valuation’ was therefore zero, contradicting the labor theory of value.
Nevertheless, Kantorovich survived. This last verse from the Ballad of L. V. Kantorovich, authored by Joseph Lakhman, explains why:
Then came a big scholar with a solution.
Alas, too clever a solution.
`Objectively determined valuations’
That’s the panacea for each and every doubt!
Truth be told, the scholar got his knuckles rapped
Slightly rapped
For such unusual advice
That threatened to overturn the existing order.
After some thought, however, the conclusion was reached
That the valuations had been undervalued