It’s not: why haven’t I won one? I have. A sixth form science prize. In my salad days I would daydream about winning the big one (to the sound of Freddie Mercury crooning `We Are The Champions’). But one ages and comes to terms with one’s mediocrity.

In the good old days, when men were men and sheep were nervous, prizes were awarded for accomplishing particular tasks. The French Academy of Sciences, for example, established (in 1781, I think) a system of prizes or contests. A committee would set a goal (in 1766 it was to solve the 6-body problem, in 1818 to explain the properties of light); submissions were judged after a deadline and a prize, if merited, awarded. One sees this today with the X Prize and the Clay prize. The puzzle with these prizes is whether the challenges they highlight would be undertaken in their absence. For example, the P = NP question was around well before the Clay prize, and many a bright young thing had already given it serious thought.

Many prizes are `achievement’ awards, given out in recognition of a great accomplishment after the fact. Some are awarded by learned societies and named in honor of an ancient worthy (Leibniz, Lagrange, Laplace, etc.). Others are funded by private individuals (Nobel, Nemmers, Simons, etc.).

Some learned societies have a surfeit of prizes (Mathematics) that are concentrated in the hands of a few. Indeed, one might be able to construct a partial order on the prizes and conclude that some prize X can only be awarded once prize Y has already been secured. Once again, there is the incentive question. It is hard to imagine that the prize winner strives, and continues to strive, in anticipation of winning further prizes. If the purpose of the prize is to honor the work (rather than the individual), why give the $$’s to the individual? Perhaps better to take the $$’s, divide them up and hand them to junior researchers in the same area, telling them they have received it in honor of X, a pioneer of the field.

Other learned societies have very few prizes (American Economic Association). There is the Clark medal (famous), the Walker medal (discontinued after the Nobel), the Ely Lecture and the Distinguished Fellow (who?). No doubt this is a great comfort for the members’ status anxiety. Although I have heard it said that a paucity of awards can adversely affect a discipline, in that it lessens its members’ chances of securing grants. No doubt this is why some learned societies have prizes for every age group and speciality one can imagine: best under 40 in applied nobble nozing theory.

Why do private individuals fund prizes? Nobel is the archetype. Is it a way to purchase reflected glory? The founders of Facebook and Google are famous in their own right, so it is hard to see how a prize would burnish their images. Perhaps they genuinely wish to support research into topic X. One can easily imagine more effective ways to do this via grants, fellowships and conferences. Indeed, both Kavli and Simons do just this (in addition to handing out prizes). Perhaps it’s advertising. If one wishes to publicize the importance of some field, does awarding a generous prize buy more publicity than a simple advertisement or cultivating journalists? Unclear. How many have heard of the recent `breakthrough’ prizes?

I had the opportunity to participate in a delightful workshop on mechanism design and the informed principal organized by Thomas Troeger and Tymofiy Mylovanov. The setting was a charming `schloss’ (manse rather than castle) an hour and a half outside of Mannheim. They had gathered together a murderers’ row of speakers and auditors. Suffice it to say I was the infimum of the group and lucky to be there.

One of many remarkable talks was given by Roger Myerson on his 1983 paper entitled `Mechanism Design by an Informed Principal’. Kudos to Thomas and Tymofiy for coming up with the idea of doing this. It brought to mind some couplets from Locksley Hall:

When the centuries behind me like a fruitful land reposed;
When I clung to all the present for the promise that it closed:

When I dipt into the future far as human eye could see;
Saw the Vision of the world and all the wonder that would be.—

By the way, the last pair of lines appears on the dedication plaque that graces the USS Voyager (of the Star Trek franchise).

What did Roger do? He tried as best as possible, given the gulf of time, to explain why he had chosen the tack that he did in the paper (axiomatic) and his hope for how it would influence research on the subject.

A principal with private information must propose a mechanism to an agent. However, the choice of mechanism will reveal something of the principal’s private information to the agent. Thus, the problem of mechanism design in this setting is not a straight optimization problem. It is, at a high level, a signaling game. The signals are the set of mechanisms that the principal can propose. Thus, one seeks an equilibrium of this game. But which equilibrium?

In section 7 of the paper, Roger approaches the question axiomatically in the spirit of Nash bargaining. Indeed, Roger made just such an analogy in his talk. Nash did not have in mind any particular bargaining protocol, but a conviction that any reasonable protocol must satisfy some natural invariance conditions. Some decades later Rubinstein arrived with a bargaining protocol to justify Nash’s conviction. So Roger sought the same here, and expressed the wish to one day see a vindication of his hopes.

Lest you think the audience accepted Roger’s axioms uncritically, Thomas Troeger pointed out that Roger’s axiom 1 ruled out some possibly natural settings like Rothschild & Stiglitz. Roger argued that it was right and proper to rule this out, and battle was joined!

Yesterday the Israeli Parliament elected the state’s new president. The position is mostly ceremonial and carries no significant duties. Nevertheless, quite a few people, mostly current and past Parliament members, wanted the job. The race was eventful: a couple of candidates were forced to withdraw after weird stories of sex and bribery were published in the press.

Three main candidates survived to the final stage: Rivlin, Itzik, and Shitrit (in addition to two candidates who did not stand much of a chance). The election is conducted by a two-round system: each of the 120 Parliament members secretly votes for a candidate. If no candidate gets more than 60 votes, then all candidates except the leading two leave the arena, and the Parliament members secretly vote for one of the two leaders. The candidate who gets more than 60 votes is the new proud president of the state.

The politics were ugly. Rivlin, who is a member of the largest party, was the leading candidate, but the Prime Minister, who comes from that very same party, was against him. Many members of the opposition voted for Rivlin. In the first round, Rivlin and Shitrit got the highest numbers of votes, but neither had a majority. In the second round, Rivlin won.

What I found interesting in this process is what one of the Parliament members of the second largest party said. He said that in the first round some members of his party voted for Shitrit, but then, in the second round, they voted for Rivlin. Why? “It was a tactical vote.” This way they ensured that the final election would be between Rivlin and Shitrit and not between Rivlin and Itzik. Politics is ugly, but at least, as long as the election process does not satisfy Independence of Irrelevant Alternatives, we do not have a dictatorship.
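The tactical vote is easy to reproduce in a toy simulation of the two-round rule. The blocs, their sizes and their preference orders below are invented for illustration only; they are not the actual Knesset tallies.

```python
from collections import Counter

# Each bloc: (number of voters, preference order from best to worst).
# Hypothetical numbers chosen only to reproduce the logic of the story.
blocs = [
    (50, ["Rivlin", "Shitrit", "Itzik"]),
    (38, ["Itzik", "Rivlin", "Shitrit"]),
    (32, ["Shitrit", "Itzik", "Rivlin"]),
]

def runoff(blocs, first_round_votes):
    """first_round_votes: list of (count, candidate) pairs for round 1."""
    tally = Counter()
    for count, cand in first_round_votes:
        tally[cand] += count
    leader, votes = tally.most_common(1)[0]
    if votes > 60:                      # outright majority of the 120
        return leader
    top_two = [c for c, _ in tally.most_common(2)]
    # Round 2: each voter picks whichever of the top two they rank higher.
    tally2 = Counter()
    for count, prefs in blocs:
        tally2[min(top_two, key=prefs.index)] += count
    return tally2.most_common(1)[0][0]

# Sincere voting: everyone votes their first choice in round 1.
sincere = [(n, prefs[0]) for n, prefs in blocs]
print(runoff(blocs, sincere))           # Itzik reaches the runoff and wins

# Tactical voting: 10 Rivlin supporters vote Shitrit in round 1, so the
# runoff is Rivlin vs. Shitrit rather than Rivlin vs. Itzik.
tactical = [(40, "Rivlin"), (10, "Shitrit"), (38, "Itzik"), (32, "Shitrit")]
print(runoff(blocs, tactical))          # now Rivlin wins
```

Dropping a losing candidate (Shitrit) from the ballot would change the winner, which is exactly a failure of Independence of Irrelevant Alternatives.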


Abraham Neyman has made numerous contributions to game theory. He extended the analysis of the Shapley value to coalitional games whose player set is a measurable space, he proved the existence of the uniform value in stochastic games, and he developed the study of repeated games with boundedly rational players, among others. Abraham Neyman was also one of the founding fathers of the Center for Game Theory at the State University of New York at Stony Brook, which has hosted the annual conference of the community for the past 25 years.

The International Journal of Game Theory will honor Abraham Neyman on his 66th birthday, which takes place in 2015, with a special issue; see the announcement here. Everyone is encouraged to submit a paper.

The Nan-Shan, a Siamese steamer under the command of Captain James MacWhirr, sails, on his orders, into a typhoon in the South China Sea. Conrad described the captain as

“Having just enough imagination to carry him through each successive day”.

On board, an all-white crew and 200 Chinese laborers, returning home with seven years’ wages stowed in

“a wooden chest with a ringing lock and brass on the corners, containing the savings of his labours: some clothes of ceremony, sticks of incense, a little opium maybe, bits of nameless rubbish of conventional value, and a small hoard of silver dollars, toiled for in coal lighters, won in gambling-houses or in petty trading, grubbed out of earth, sweated out in mines, on railway lines, in deadly jungle, under heavy burdens—amassed patiently, guarded with care, cherished fiercely.”

Ship and souls, driven by MacWhirr’s will, survive the typhoon. The wooden chest does not. Its contents strewn below deck, the silver dollars are mixed together. It falls to MacWhirr to determine how the dollars are to be apportioned among the Chinese laborers so as to forestall an uprising.

“It seems that after he had done his thinking he made that Bun Hin fellow go down and explain to them the only way they could get their money back. He told me afterwards that, all the coolies having worked in the same place and for the same length of time, he reckoned he would be doing the fair thing by them as near as possible if he shared all the cash we had picked up equally among the lot. You couldn’t tell one man’s dollars from another’s, he said, and if you asked each man how much money he brought on board he was afraid they would lie, and he would find himself a long way short. I think he was right there. As to giving up the money to any Chinese official he could scare up in Fuchau, he said he might just as well put the lot in his own pocket at once for all the good it would be to them. I suppose they thought so, too.”

My former colleague Gene Mumy, writing in the JPE, argued that MacWhirr’s solution was arbitrary. We know what MacWhirr’s response would have been:

” The old chief says that this was plainly the only thing that could be done. The skipper remarked to me the other day, ‘There are things you find nothing about in books.’ I think that he got out of it very well for such a stupid man.”

Mumy, undeterred, proposed instead a pivotal mechanism (Clarke, Groves, Tideman, Tullock, etc.). For each agent, compute the difference between the total amount of money and the sum of all the other agents’ claims. If an agent claims at most this amount, they receive their claim. If their claim exceeds this amount, they are penalized. Mumy showed that truth-telling is a full-information Nash equilibrium of the mechanism.
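A minimal sketch of a mechanism of this kind, in Python. The threshold rule follows the description above; the penalty (a fixed fine off the threshold) is an illustrative choice of mine, not the exact transfer in Mumy’s paper.

```python
def pivotal_division(total, claims, penalty=1.0):
    """Sketch of the threshold mechanism described above (the penalty
    is a made-up illustration, not Mumy's exact transfer).

    Each laborer i faces the threshold total - (sum of the others' claims).
    A claim at or below the threshold is paid in full; a claim above it
    is penalized: the agent gets the threshold minus a fixed fine."""
    payouts = []
    for i in range(len(claims)):
        others = sum(claims) - claims[i]
        threshold = total - others
        if claims[i] <= threshold:
            payouts.append(claims[i])
        else:
            payouts.append(threshold - penalty)
    return payouts

# If everyone tells the truth, the claims sum to the total, each threshold
# equals the agent's true amount, and each agent is paid exactly that:
print(pivotal_division(250.0, [120.0, 80.0, 50.0]))   # [120.0, 80.0, 50.0]

# A unilateral over-claim triggers the penalty: the deviator ends up with
# less than the 120 that truth-telling would have paid, which is why
# truth-telling is a (full-information) Nash equilibrium.
print(pivotal_division(250.0, [150.0, 80.0, 50.0]))
```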

Saryadar, in a comment in the JPE, criticized Mumy’s solution on the grounds that it rules out pre-play communication among the agents. Such communication could allow agents to transmit threats (I’m claiming everything) that, if credible, change the equilibrium outcome. He also hints that the assumption of common knowledge of the contributions is hard to swallow.

Schweinzer and Shimoji revisit the problem with the observation that truth-telling is not the only Nash equilibrium of the mechanism proposed by Mumy. Instead, they treat it as a problem of implementation under incomplete information. The captain is assumed to know the total amount of money to be divided, but the agents are not. They propose a mechanism and identify a sufficient condition on beliefs under which truth-telling is the unique rationalizable strategy for each agent. The mechanism is in the spirit of a scoring rule, and relies on randomization. I think MacWhirr might have objected on the grounds that the money represented the entire savings of the laborers.

Conrad describes the aftermath.

“We finished the distribution before dark. It was rather a sight: the sea running high, the ship a wreck to look at, these Chinamen staggering up on the bridge one by one for their share, and the old man still booted, and in his shirt-sleeves, busy paying out at the chartroom door, perspiring like anything, and now and then coming down sharp on myself or Father Rout about one thing or another not quite to his mind. He took the share of those who were disabled to them on the No. 2 hatch. There were three dollars left over, and these went to the three most damaged coolies, one to each. We turned-to afterwards, and shovelled out on deck heaps of wet rags, all sorts of fragments of things without shape, and that you couldn’t give a name to, and let them settle the ownership themselves.”

This post is dedicated to a new and important result in game theory – the refutation of Mertens’ conjecture by Bruno Ziliotto. Stochastic games were defined by Shapley (1953). Such a game is given by

  • a set Z of states,
  • a set N = {1,2,…,n} of players,
  • for every state z and every player i a set A_i(z) of actions available to player i at state z. Denote by Λ = { (z,(a_i)_{i ∈ N}) : a_i ∈ A_i(z) for every i} the set of all pairs (state, action profile at that state).
  • for every player i, a stage payoff function u_i : Λ → R, and
  • a transition function q : Λ → Δ(Z), where Δ(Z) is the set of probability distributions over Z.

The game starts at an initial state z^1 ∈ Z and is played as follows. At every stage t, each player i chooses an action a_i^t ∈ A_i(z^t), receives a stage payoff u_i(a_1^t,…,a_n^t), and the play moves to a new state, z^{t+1}, that is chosen according to q(z^t;a_1^t,…,a_n^t).
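As a concrete illustration of the model, here is a toy two-state stochastic game in Python, with invented payoffs and transitions, played under stationary policies (the action depends only on the current state):

```python
import random

# A toy two-state stochastic game in the notation above; the payoffs and
# the transition rule are made up for illustration. Both players have the
# action set {0, 1} in each state.

def u(z, a1, a2):
    """Player 1's stage payoff (read it as zero-sum: player 2 gets -u)."""
    return [[1, 0], [0, 2]][a1][a2] if z == 0 else [[0, 1], [1, 0]][a1][a2]

def q(z, a1, a2):
    """Transition rule: the probability that the next state is state 1."""
    return 0.8 if (z, a1, a2) == (0, 1, 1) else 0.3

def simulate(T, policy1, policy2, z1=0, seed=0):
    """Play T stages from initial state z1; return player 1's stage payoffs."""
    rng = random.Random(seed)
    z, payoffs = z1, []
    for _ in range(T):
        a1, a2 = policy1(z), policy2(z)
        payoffs.append(u(z, a1, a2))
        z = 1 if rng.random() < q(z, a1, a2) else 0
    return payoffs

# Stationary policies, and the normalized 0.9-discounted payoff of the play.
payoffs = simulate(100, policy1=lambda z: z, policy2=lambda z: 1 - z)
discounted = (1 - 0.9) * sum((0.9 ** t) * r for t, r in enumerate(payoffs))
print(round(discounted, 3))
```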

In this post I assume that all sets are finite. The N-stage game is then a finite game, and therefore it has an equilibrium by backward induction. As I mentioned in my previous post, the discounted game has an equilibrium (even a stationary equilibrium) by continuity-compactness arguments.



When studying dynamic interactions, economists like discounted games. Existence of equilibrium is assured because the payoff is a continuous function of the players’ strategies, and the construction of equilibrium strategies often requires ingenious tricks, so they are fun to think about and fun to read about. Unfortunately, in practice the discounted evaluation is often not relevant. In many cases players, like countries, firms, or even humans, do not know their discount factor. Since the discounted equilibrium strategies depend strongly on the discount factor, this is a problem. In other cases, the discount factor changes over time in an unknown way. This happens, for example, when the discount factor is derived from the interest rate or from the players’ monetary or familial situation. Do the predictions and insights that we get from a model with a fixed and known discount factor still hold in models with a changing and unknown discount factor?

To handle such situations, the concept of a uniform equilibrium was defined. A strategy vector is a uniform ε-equilibrium if it is an ε-equilibrium in every discounted game with discount factor sufficiently close to 1 (that is, whenever the players are sufficiently patient). Thus, if a uniform ε-equilibrium exists, then the players can play an approximate equilibrium as soon as they are sufficiently patient. In our modern world, in which one can make zillions of actions each second, the discount factor is sufficiently close to 1 for all practical purposes. A payoff vector x is a uniform equilibrium payoff if for every ε>0 there is a uniform ε-equilibrium that yields a payoff that is ε-close to x.

In repeated games, the folk theorem holds for the concept of uniform equilibrium (or for the concept of uniform subgame perfect equilibrium). Indeed, given a feasible and strictly individually rational payoff x, take a sequence of actions such that the average payoff along the sequence is close to x. Let the players repeatedly play this sequence of actions while monitoring the others, and punish any deviation by the minmax value. When the discount factor is close to 1, the discounted payoff of the sequence of actions is close to the average payoff, and therefore the discounted payoff that this strategy vector yields is close to x. If one insists on subgame perfection, then punishment is achieved by a short period of minmaxing followed by the implementation of an equilibrium payoff that yields the deviator a low payoff.
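The step in this argument that the discounted payoff of a periodic play path approaches its average payoff as the discount factor tends to 1 is easy to check numerically; the payoff cycle below is made up:

```python
def discounted_avg(cycle, delta, horizon=200000):
    """Normalized discounted payoff (1-delta) * sum_t delta^t u_t of a
    periodic payoff path, truncated at a long finite horizon."""
    return (1 - delta) * sum(
        (delta ** t) * cycle[t % len(cycle)] for t in range(horizon)
    )

cycle = [3, 0, 1]                      # average payoff = 4/3
for delta in (0.5, 0.9, 0.99, 0.999):
    # As delta -> 1 the discounted payoff approaches the average 4/3.
    print(delta, round(discounted_avg(cycle, delta), 4))
```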

For two-player zero-sum stochastic games, Mertens and Neyman (1981) proved that the uniform value exists. Vieille (2000) showed that in two-player non-zero-sum stochastic games uniform ε-equilibria exist. Whether or not this result extends to any number of players is still an open problem.

Why do I tell you all this? It is preparation for the next post, which will present a new and striking result by a young French Ph.D. student.

This is the second post about stability and equilibrium in trading networks. The first post may be found here. In it I assumed a single homogeneous divisible good traded by agents with quasi-linear preferences. Each agent was associated with a vertex of a graph, and the edges represented pairs of agents permitted to trade with each other. Furthermore, each agent was designated either a buyer or a seller. Buyers could only trade with sellers and vice versa, so the underlying graph was bipartite. In this post I drop the assumptions that the good is homogeneous and that it is divisible.

As noted in the earlier post, when the utilities of buyers and the costs of sellers are linear in quantity, total unimodularity of the underlying constraint matrix is sufficient to ensure the existence of Walrasian prices that support an efficient allocation that is integral. If buyers’ utility functions become concave or sellers’ cost functions become convex, this is no longer true.

Suppose instead that the buyers’ utility functions are M{^{\#}}-concave and the sellers’ cost functions are M{^{\#}}-convex (notions to be defined shortly). Then the efficient allocation (the precise formulation appears below) is indeed integer valued, and supporting Walrasian prices exist. The corresponding allocations and prices can be interpreted as stable outcomes that cannot be blocked. This result was established by Murota and Tamura (2001).

I’ll define M{^{\#}}-concavity. For any integer-valued vector {x} in {\Re^n}, let {supp^+(x) = \{i: x_i > 0\}} and {supp^-(x) = \{i : x_i < 0\}}. A real-valued function {f} defined on the integer vectors of {\Re^n} (with non-empty domain) is M{^{\#}}-concave if for all {x, y} in its domain and any index {u \in supp^+(x-y)} there exists an index {v} in {\{0\} \cup supp^{-}(x-y)} such that

\displaystyle f(x) + f(y) \leq f(x - \chi_u + \chi_v) + f(y + \chi_u - \chi_v).

Here {\chi_i} is the 0-1 vector with a 1 in component {i} and zeros elsewhere, and {\chi_0} is interpreted as the zero vector. I describe three different ways one might interpret this notion.

First, one can think of M{^{\#}}-concavity as capturing an essential feature of the basis-exchange property of matroids. If you don’t know what a matroid is, skip to the next paragraph. Think of the vectors {x} and {y} as 0-1 vectors representing subsets of columns of some given matrix, {A} say. Define {f(x)} to be the rank of the subset of columns associated with {x}. What the definition of M{^{\#}}-concavity captures is this: for any column {u} in {x} but not in {y} that I remove from {x} and add to {y}, I can (possibly) find a column from {y} (not in {x}) to add to {x} so as not to diminish the sum of the ranks of the two sets. The argument is straightforward.
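For readers who like to experiment, the exchange property can be checked by brute force over 0-1 vectors. The two test functions below are standard examples of my own choosing, not from the Murota-Tamura paper: the rank function of a uniform matroid (which is M{^{\#}}-concave) and a Leontief-style function with complementarities (which is not).

```python
from itertools import product

def is_M_sharp_concave(f, n):
    """Brute-force check of the M#-exchange property over all 0-1 vectors
    of length n. (The definition covers general integer vectors; 0-1
    vectors suffice for the matroid-rank interpretation sketched above.)"""
    def exchange(x, y, u, v):
        # Return (x - chi_u + chi_v, y + chi_u - chi_v).
        xs, ys = list(x), list(y)
        xs[u] -= 1; ys[u] += 1
        if v is not None:            # v = None encodes the "v = 0" option
            xs[v] += 1; ys[v] -= 1
        return tuple(xs), tuple(ys)

    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            for u in range(n):
                if x[u] <= y[u]:
                    continue         # u must lie in supp+(x - y)
                vs = [None] + [v for v in range(n) if x[v] < y[v]]
                if not any(f(x) + f(y) <= f(x2) + f(y2)
                           for x2, y2 in (exchange(x, y, u, v) for v in vs)):
                    return False
    return True

# Rank function of the uniform matroid of rank 2 on 4 elements: M#-concave.
print(is_M_sharp_concave(lambda x: min(sum(x), 2), 4))     # True

# A function with complementarities fails the exchange property.
print(is_M_sharp_concave(lambda x: min(x[0], x[1]), 4))    # False
```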

A second interpretation is to be had by comparing the definition of M{^{\#}}-concavity to the following way of stating concavity of a function:

\displaystyle f(x) + f(y) \leq f(x - \frac{x-y}{2}) + f(y + \frac{x-y}{2})

If {x} and {y} were numbers with {x > y}, then the first term on the right represents a step from {x} closer to {y} and the second term a step from {y} closer to {x}. M{^{\#}}-concavity can be interpreted as a discrete analogue of this. Think of {x} as being above and to the left of {y}. A step closer to {y} would mean one step down in the {u} direction and a step to the right in the {v} direction, i.e., {x - \chi_u + \chi_v}. The vector {y + \chi_u - \chi_v} has a similar interpretation.

The third interpretation comes from an equivalent definition of M{^{\#}}-concavity, one that shows it is identical to the notion of gross substitutes proposed by Kelso and Crawford (1982). Let {p \in \Re^n} be a non-negative price vector. Let {x} be a non-negative integer vector that maximizes {f(x) - p \cdot x}, i.e., a utility-maximizing bundle. Consider a new price vector {p' \geq p}. Then there exists a new utility-maximizing bundle {y} such that {y_i \geq x_i} for all {i} such that {p'_i = p_i}. In words, the demand for goods whose prices did not change cannot go down.
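The gross-substitutes property can be seen directly on a small example. The valuation below (a scaled uniform-matroid rank function) and the prices are my own choices for illustration:

```python
from itertools import product

def demand(f, p, n):
    """A utility-maximizing 0-1 bundle: argmax f(x) - p.x over {0,1}^n
    (ties broken toward larger bundles, so monotonicity is clean)."""
    return max(product((0, 1), repeat=n),
               key=lambda x: (f(x) - sum(pi * xi for pi, xi in zip(p, x)),
                              sum(x)))

f = lambda x: min(sum(x), 2) * 3.0     # worth 3 per unit, up to 2 units
p = [1.0, 2.0, 2.5]
x = demand(f, p, 3)                    # buys goods 0 and 1: (1, 1, 0)

# Raise only the price of good 0. The demand for goods 1 and 2, whose
# prices did not change, must not fall -- and indeed it does not:
p2 = [4.0, 2.0, 2.5]
y = demand(f, p2, 3)                   # drops good 0, adds good 2: (0, 1, 1)
print(x, y)
```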

The key result of Murota and Tamura is that the following program has an integer optimal solution when the {U_i}‘s are M{^{\#}}-concave and the {C_j}‘s are M{^{\#}}-convex.

\displaystyle \max \sum_{i \in B}U_i(x_{ij}:\{j: (i,j) \in E\}) - \sum_{j \in S}C_j(x_{ij}: \{i:(i,j) \in E\})

subject to

\displaystyle 0 \leq x_{ij} \leq d_{ij}\,\, \forall (i,j) \in E

Notice that buyers’ utilities do not necessarily depend on the total amount consumed; they can depend on the vector of inflows from the various sellers (similarly for the sellers). We can interpret this to mean that the sellers sell different goods which are substitutes for each other, and there is an upper limit, {d_{ij}}, on the amount of seller {j}‘s good that can go to buyer {i}. Such a bound is necessary to make sure the problem is well defined: were utilities and costs linear in quantity, the problem could be unbounded. A second interpretation is that sellers sell the same good but under different contractual terms (payment plan, delivery date, etc.) which make them, in the buyer’s eyes, imperfect substitutes. The only element of the contract not fixed is the price, and that is what is determined in equilibrium.

In 1937, representatives of the Plywood Trust called upon Comrade Professor Leonid Vitalievich Kantorovich with a problem. The trust produced 5 varieties of plywood using 8 different machines. How, they asked, should they allocate their limited supply of raw materials to the various machines so as to produce the maximum output of plywood in the required proportions? As problems go it was, from this remove, unremarkable. Remarkable is that the Comrade Professor agreed to take it on. The so-called representatives might have been NKVD. Why? Uncle Joe’s first act upon taking power in 1929 was to purge the economists, or more precisely the Jewish ones. This was well before the purge of the communist party in 1936. Why the economists? They complained about waste in a planned economy `dizzy with success.’ Yet here were the apparatchiks of the Trust asking the Comrade Professor to reduce waste.

Kantorovich writes that at the time he was burnt out by pure mathematics. Combined with concern at the rise of Hitler, he felt compelled to do something practical. And so he turned his mind to the problem of the Plywood Trust. Francis Spufford, in his delightful work of `faction’ called Red Plenty, imagines what Kantorovich might have been thinking.

He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made.

If he was right — and he was sure he was, in essentials — then anyone applying the new method to any production situation in the huge family of situations resembling the one at the Plywood Trust, should be able to count on a measureable percentage improvement in the quantity of product they got from a given amount of raw materials. Or you could put that the other way around: they would make a measureable percentage saving on the raw materials they needed to make a given amount of product.

He didn’t know yet what sort of percentage he was talking about, but just suppose it was 3%. It might not sound like much, only a marginal gain, an abstemious eking out of a little bit more from the production process, at a time when all the newspapers showed miners ripping into fat mountains of solid metal, and the output of plants booming 50%, 75%, 150%. But it was predictable. You could count on the extra 3% year after year. Above all it was free. It would come merely by organising a little differently the tasks people were doing already. It was 3% of extra order snatched out of the grasp of entropy. In the face of the patched and mended cosmos, always crumbling of its own accord, always trying to fall down, it built; it gained 3% more of what humanity wanted, free and clear, just as a reward for thought. Moreover, he thought, its applications did not stop with individual factories, with getting 3% more plywood, or 3% more gun barrels, or 3% more wardrobes. If you could maximise, minimise, optimise the collection of machines at the Plywood Trust, why couldn’t you optimise a collection of factories, treating each of them, one level further up, as an equation? You could tune a factory, then tune a group of factories, till they hummed, till they purred. And that meant –

An English description of Kantorovich’s paper appeared in the July 1960 issue of Management Science. Its opening line is:

The immense tasks laid down in the plan for the third Five Year Plan period require that we achieve the highest possible production on the basis of the optimum utilization of the existing reserves of industry: materials, labor and equipment.

The paper contains a formulation of the Plywood Trust’s problem as a linear program, a recognition of the existence of an optimal solution at an extreme point, as well as the hopelessness of enumerating extreme points as a solution method. Kantorovich then goes on to propose his method, which he calls the method of resolving multipliers. Essentially, Kantorovich proposes that one solve the dual and then use complementary slackness to recover the primal. One might wonder how Kantorovich’s contribution differs from those of Koopmans and Dantzig. That is another story, and as fair a description of the issues as I know can be found in Roy Gardner’s 1990 piece in the Journal of Economic Literature. I reproduce one choice remark:

Thus, the situation of Kantorovich is rather like that of the discoverer Columbus. He really never touched the American mainland, and he didn’t give its name, but he was the first one in the area.

As an aside, David Gale is the one often forgotten in this discussion. Had the Nobel committee awarded the prize for linear programming, Dantzig and Gale would have been included. Had Gale lived long enough, he might have won it again for matching, making him the third to have won the prize twice in the same subject. The others are John Bardeen and Frederick Sanger.

Continuing with Spufford’s imaginings:

– and that meant that you could surely apply the method to the entire Soviet economy, he thought. He could see that this would not be possible under capitalism, where all the factories had separate owners, locked in wasteful competition with one another. There, nobody was in a position to think systematically. The capitalists would not be willing to share information about their operations; what would be in it for them? That was why capitalism was blind, why it groped and blundered. It was like an organism without a brain. But here it was possible to plan for the whole system at once. The economy was a clean sheet of paper on which reason was writing. So why not optimise it? All he would have to do was to persuade the appropriate authorities to listen.

Implementation of Kantorovich’s solution at the Plywood Trust led to success. Inspired, Kantorovich sent a letter to Gosplan urging the adoption of his methods. Here the fact that Kantorovich solved the dual first, rather than the primal, is important. Kantorovich interpreted his resolving multipliers (shadow prices today) as objectively determined prices, and his letter to Gosplan urged a replacement of the price system in place by his resolving multipliers. Kantorovich intended to implement optimal production plans through appropriate prices. Gosplan responded that reform was unnecessary. Kantorovich narrowly missed a trip to the Gulag and stopped practicing economics, for a while. Readers wanting a fuller sense of what mathematical life was like in this period should consult this piece by G. G. Lorentz.

After the war, Kantorovich took up linear programming again. At Leningrad, he headed a team working to reduce the scrap metal produced at the Egorov railroad-car plant. The resulting reduction in waste reduced the supply of scrap iron to steel mills, disrupting their production! Kantorovich escaped punishment by the Leningrad regional party because of his work on atomic reactors.

Kantorovich’s interpretation of resolving multipliers, which he renamed objectively determined valuations, put him at odds with the prevailing labor theory of value. In the post-Stalin era, he was criticized for being under the sway of Böhm-Bawerk, author of the notion of subjective utility. Aron Katsenelinboigen relates a joke played by one of these critics on Kantorovich. A production problem was presented to Kantorovich in which the labor supply constraint would be slack at optimality. Its `objectively determined valuation’ was therefore zero, contradicting the labor theory of value.
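The joke turns on complementary slackness: a constraint that is slack at the optimum carries a zero shadow price. A made-up two-product production problem, solved by brute force over integer plans, makes the point:

```python
# maximize 3a + 2b  subject to  a + b <= 10 (machine time)
#                               2a + b <= labor_supply (labor)
# With labor_supply = 30, the optimum a=10, b=0 uses only 20 units of
# labor, so the labor constraint is slack: extra labor is worth nothing.

def best(labor_supply):
    """Brute-force optimum of the toy plan over integer production levels."""
    return max(3 * a + 2 * b
               for a in range(50) for b in range(50)
               if a + b <= 10 and 2 * a + b <= labor_supply)

print(best(30))   # 30
print(best(31))   # still 30: the shadow price of labor is zero
```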

Nevertheless, Kantorovich survived. This last verse from the Ballad of L. V. Kantorovich, authored by Joseph Lakhman, explains why:

Then came a big scholar with a solution.
Alas, too clever a solution.
`Objectively determined valuations’-
That’s the panacea for each and every doubt!
Truth be told, the scholar got his knuckles rapped
Slightly rapped
For such unusual advice
That threatened to overturn the existing order.
After some thought, however, the conclusion was reached
That the valuations had been undervalued


This is the first of a series of posts about stability and equilibrium in trading networks. I will review established results from network flows and point out how they immediately yield results about equilibria, stability and the core of matching markets with quasi-linear utility. It presumes familiarity with optimization and the recent spate of papers on matching with contracts.

The simplest trading network one might imagine involves buyers ({B}) and sellers ({S}) of a homogeneous good and a set of edges {E} between them: no edges between sellers and no edges between buyers. The absence of an edge in {E} linking {i \in B} and {j \in S} means that {i} and {j} cannot trade directly. Suppose buyer {i \in B} has a constant marginal value of {v_i} up to some amount {d_i} and zero thereafter. Seller {j \in S} has a constant marginal cost of {c_j} up to some capacity {s_j} and infinity thereafter.

Under the quasi-linear assumption, the problem of finding the efficient set of trades to execute can be formulated as a linear program. Let {x_{ij}} for {(i,j) \in E} denote the amount of the good purchased by buyer {i \in B} from seller {j \in S}. Then, the following program identifies the efficient allocation:

\displaystyle \max \sum_{(i,j) \in E} (v_i - c_j)x_{ij}

subject to

\displaystyle \sum_{j \in S: (i,j) \in E}x_{ij} \leq d_i\,\, \forall i \in B

\displaystyle \sum_{i \in B:(i,j) \in E}x_{ij} \leq s_j\,\, \forall j \in S

\displaystyle x_{ij} \geq 0\,\, (i,j) \in E
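To make the primal concrete, here is a small numerical sketch that solves it with `scipy.optimize.linprog`. The instance (two buyers, two sellers, all four edges, and all the values and capacities) is made up purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 2 buyers, 2 sellers, all four edges present.
v = np.array([10, 8])   # buyers' marginal values v_i
c = np.array([3, 5])    # sellers' marginal costs c_j
d = np.array([4, 3])    # demand caps d_i
s = np.array([5, 6])    # supply caps s_j

# Variables x_{ij}, flattened row-major: (i, j) -> 2*i + j.
w = np.array([v[i] - c[j] for i in range(2) for j in range(2)])

# Demand constraints sum_j x_ij <= d_i, then supply constraints sum_i x_ij <= s_j.
A = np.array([
    [1, 1, 0, 0],   # buyer 0
    [0, 0, 1, 1],   # buyer 1
    [1, 0, 1, 0],   # seller 0
    [0, 1, 0, 1],   # seller 1
])
b = np.concatenate([d, s])

# linprog minimizes, so negate the objective to maximize.
res = linprog(-w, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
gains = -res.fun
print(gains)   # total gains from trade at the efficient allocation: 39.0 here
```

Buyer 0 buys its full demand of 4 from the cheap seller 0, and buyer 1 takes seller 0’s remaining unit plus 2 units from seller 1, for total gains {7 \cdot 4 + 5 \cdot 1 + 3 \cdot 2 = 39}.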

This is, of course, an instance of the (discrete) transportation problem. The general version of the problem is obtained by replacing each coefficient {[v_i - c_j]} of the objective function with an arbitrary number {w_{ij}}. This version of the transportation problem is credited to the mathematician F. L. Hitchcock, who published it in 1941. Hitchcock’s most famous student is Claude Shannon.

The `continuous’ version of the transportation problem was formulated by Gaspard Monge in his 1781 paper on the subject. His problem was to split two equally large volumes (representing the initial and final locations of the earth to be shipped) into infinitely many small particles and then `match them with each other so that the sum of the products of the lengths of the paths used by the particles and the volume of the particles is minimized’. The {w_{ij}}‘s in Monge’s problem have a property, since called the Monge property, that is the same as submodularity/supermodularity. This paper describes the property and some of its algorithmic implications. Monge’s formulation was subsequently picked up by Kantorovich, and its study blossomed into the specialty now called optimal transport, with applications to PDEs and the concentration of measure. That is not the thread I will follow here.

Returning to the Hitchcock, or rather discrete, formulation of the transportation problem, let {\lambda_i} be the dual variables associated with the first, or demand, set of constraints and {p_j} the dual variables associated with the second, or supply, set of constraints. The dual is

\displaystyle \min \sum_{j \in S} s_jp_j + \sum_{i \in B}d_i\lambda_i

subject to

\displaystyle p_j + \lambda_i \geq [v_i-c_j]\,\, \forall (i,j) \in E

\displaystyle p_j, \lambda_i \geq 0\,\, \forall j \in S, i \in B

We can interpret {p_j} as the unit price of the good sourced from seller {j} and {\lambda_i} as the surplus that buyer {i} will enjoy at prices {\{p_j\}_{j \in S}}. The following observations are immediate from the duality theorem, complementary slackness and dual feasibility.

  1. If {x^*} is an optimal solution to the primal and {(p^*, \lambda^*)} an optimal solution to the dual, then the pair {(x^*, p^*)} forms a Walrasian equilibrium.
  2. The set of optimal dual prices, i.e., Walrasian prices, forms a lattice.
  3. The dual is a (compact) representation of the TU (transferable utility) core of the co-operative game associated with this economy.
  4. Suppose the only bilateral contracts we allow between buyer {i} and seller {j} are those with {(i,j) \in E}, and a contract can specify only a quantity to be shipped and a price to be paid. Then we can interpret the set of optimal primal and dual solutions as the set of contracts that cannot be blocked (suitably defined) by any buyer-seller pair {(i,j) \in E}.
  5. Because the constraint matrix of the transportation problem is totally unimodular, the previous statements hold even when the goods are indivisible.

As these are standard, I will not reprove them here. Note also that none of these conclusions depend upon the particular form of the coefficients in the objective function of the primal. We could replace {[v_i - c_j]} by {w_{ij}}, where we interpret {w_{ij}} as the joint gains from trade (per unit) to be shared by buyer {i} and seller {j}.
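These claims are easy to check numerically. The sketch below (same kind of made-up two-buyer, two-seller instance as before) solves both the primal and the dual with `scipy.optimize.linprog` and verifies strong duality and complementary slackness; with integer capacities and demands, the optimal allocation also comes out integral, as total unimodularity promises:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance: 2 buyers, 2 sellers, all four edges present.
v, c = np.array([10, 8]), np.array([3, 5])   # values v_i and costs c_j
d, s = np.array([4, 3]), np.array([5, 6])    # demand and supply caps
w = np.array([v[i] - c[j] for i in range(2) for j in range(2)])

A = np.array([[1, 1, 0, 0],    # buyer 0:  x_00 + x_01 <= d_0
              [0, 0, 1, 1],    # buyer 1
              [1, 0, 1, 0],    # seller 0: x_00 + x_10 <= s_0
              [0, 1, 0, 1]])   # seller 1
b = np.r_[d, s]

# Primal: max w.x s.t. Ax <= b, x >= 0 (negate for linprog's min).
primal = linprog(-w, A_ub=A, b_ub=b, bounds=(0, None), method="highs")

# Dual: min b.y s.t. A^T y >= w, y >= 0, with y = (lambda_0, lambda_1, p_0, p_1);
# rewrite the constraints as -A^T y <= -w for linprog.
dual = linprog(b, A_ub=-A.T, b_ub=-w, bounds=(0, None), method="highs")

x, y = primal.x, dual.x
# Strong duality: optimal primal and dual objective values coincide.
assert abs(-primal.fun - dual.fun) < 1e-6
# Complementary slackness: slack constraints carry zero price, and
# trades occur only on edges where the dual constraint is tight.
assert abs((b - A @ x) @ y) < 1e-6
assert abs((A.T @ y - w) @ x) < 1e-6
```

For this instance the optimal allocation is {x = (4, 0, 1, 2)}, an integral vertex of the feasible polytope.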

Now, suppose we replace the constant marginal values by increasing concave utility functions {\{U_i(\cdot)\}_{i \in B}} and the constant marginal costs by increasing convex cost functions {\{C_j(\cdot)\}_{j \in S}}. The problem of finding the efficient allocation becomes:

\displaystyle \max \sum_{i \in B}U_i(\sum_{j: (i,j) \in E}x_{ij}) - \sum_{j \in S}C_j(\sum_{i: (i,j) \in E}x_{ij})

subject to

\displaystyle \sum_{j \in S: (i,j) \in E}x_{ij} \leq d_i\,\, \forall i \in B

\displaystyle \sum_{i \in B:(i,j) \in E}x_{ij} \leq s_j\,\, \forall j \in S

\displaystyle x_{ij} \geq 0\,\, (i,j) \in E

This is an instance of a concave flow problem. The Karush-Kuhn-Tucker conditions yield the following:

  1. If {x^*} is an optimal solution to the primal and {(p^*, \lambda^*)} an optimal vector of Lagrange multipliers, then the pair {(x^*, p^*)} forms a Walrasian equilibrium.
  2. The set of optimal Lagrange multipliers, i.e., Walrasian prices, forms a lattice.
  3. Suppose the only bilateral contracts we allow between buyer {i} and seller {j} are those with {(i,j) \in E}, and a contract can specify only a quantity to be shipped and a price to be paid. Then we can interpret the set of optimal primal and dual solutions as the set of contracts that cannot be blocked (suitably defined) by any buyer-seller pair {(i,j) \in E}.

Notice that we lose the extension to indivisibility. As the objective function of the primal is now concave, an optimal solution may occur in the interior of the feasible region rather than at an extreme point. To recover `integrality’ we need to impose a stronger condition on {\{U_i\}_{i \in B}} and {\{C_j\}_{j \in S}}: specifically, that they be {M}-concave and {M}-convex respectively. This condition is closely tied to the gross substitutes condition. More on this in a subsequent post.
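The equality of marginal cost and marginal utility at an interior optimum, which pins down the Walrasian price, can be seen in a tiny concave instance. The sketch below (one buyer, two sellers, with an entirely made-up logarithmic utility and quadratic costs) solves the concave program with `scipy.optimize.minimize`:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up instance: one buyer with U(q) = 10*log(1+q) (increasing, concave)
# and two sellers with convex costs C_1(q) = q^2/2 and C_2(q) = q^2.
def neg_welfare(x):
    return -(10 * np.log1p(x.sum()) - x[0] ** 2 / 2 - x[1] ** 2)

res = minimize(neg_welfare, x0=[1.0, 1.0],
               bounds=[(0, 3), (0, 3)],                        # capacities s_j
               constraints=[{"type": "ineq",
                             "fun": lambda x: 5 - x.sum()}],   # demand cap d_i
               method="SLSQP")
x1, x2 = res.x
# At an interior optimum, every active seller's marginal cost equals the
# buyer's marginal utility; that common value is the Walrasian price:
price = 10 / (1 + x1 + x2)   # ~ C_1'(x1) = x1 ~ C_2'(x2) = 2*x2
```

Because the optimum here is interior, the first-order conditions force {C_1'(x_1) = C_2'(x_2) = U'(x_1 + x_2)}, and the resulting allocation is fractional, illustrating why integrality is lost without further structure.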
