Abraham Neyman has made numerous contributions to game theory. He extended the analysis of the Shapley value to coalitional games in which the player set is a measurable space, he proved the existence of the uniform value in stochastic games, and he developed the study of repeated games with boundedly rational players, among other contributions. Neyman was also one of the founding fathers of the Center for Game Theory at the State University of New York at Stony Brook, which has hosted the annual conference of the community for the past 25 years.

The International Journal of Game Theory will honor Abraham Neyman with a special issue marking his 66th birthday, which falls in 2015; see the announcement here. Everyone is encouraged to submit a paper.

The Nan-Shan, a steamer under the Siamese flag commanded by Captain James MacWhirr, sails on his orders into a typhoon in the South China Sea. Conrad described the captain as

“Having just enough imagination to carry him through each successive day”.

On board are an all-white crew and 200 Chinese laborers, each returning home with seven years’ wages stowed in

“a wooden chest with a ringing lock and brass on the corners, containing the savings of his labours: some clothes of ceremony, sticks of incense, a little opium maybe, bits of nameless rubbish of conventional value, and a small hoard of silver dollars, toiled for in coal lighters, won in gambling-houses or in petty trading, grubbed out of earth, sweated out in mines, on railway lines, in deadly jungle, under heavy burdens—amassed patiently, guarded with care, cherished fiercely.”

Ship and souls, driven by MacWhirr’s will, survive the typhoon. The wooden chest does not. Its contents strewn below deck, the silver dollars are mixed together. It falls to MacWhirr to determine how the dollars are to be apportioned among the Chinese laborers to forestall an uprising.

“It seems that after he had done his thinking he made that Bun Hin fellow go down and explain to them the only way they could get their money back. He told me afterwards that, all the coolies having worked in the same place and for the same length of time, he reckoned he would be doing the fair thing by them as near as possible if he shared all the cash we had picked up equally among the lot. You couldn’t tell one man’s dollars from another’s, he said, and if you asked each man how much money he brought on board he was afraid they would lie, and he would find himself a long way short. I think he was right there. As to giving up the money to any Chinese official he could scare up in Fuchau, he said he might just as well put the lot in his own pocket at once for all the good it would be to them. I suppose they thought so, too.”

My former colleague Gene Mumy, writing in the JPE, argued that MacWhirr’s solution was arbitrary. We know what MacWhirr’s response would have been:

“The old chief says that this was plainly the only thing that could be done. The skipper remarked to me the other day, ‘There are things you find nothing about in books.’ I think that he got out of it very well for such a stupid man.”

Mumy, undeterred, proposed instead a pivotal mechanism (Clarke, Groves, Tideman, Tullock, etc.). For each agent, compute the difference between the total amount of money and the sum of all the other agents’ claims. If an agent’s claim is at most this amount, he receives his claim. If his claim exceeds this amount, he is penalized. Mumy showed that truth-telling is a Nash equilibrium of the mechanism under full information.
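A minimal sketch of the arithmetic behind such a pivotal mechanism, assuming a flat deduction for overclaiming (the exact penalty rule in Mumy's paper differs; the function name and numbers are invented):

```python
def pivotal_allocation(total, claims, penalty=1.0):
    """Pay each agent whose claim is at most the residual
    total - (sum of the other agents' claims); penalize larger
    claims. The flat `penalty` deduction is a placeholder
    assumption, not Mumy's exact formula."""
    payouts = []
    for i, c in enumerate(claims):
        residual = total - (sum(claims) - c)
        if c <= residual:
            payouts.append(c)  # claim honored in full
        else:
            payouts.append(max(residual - penalty, 0.0))
    return payouts
```

When claims are truthful and so sum to the pot, each agent's residual equals his own claim and everyone is paid exactly what he claimed. In this simplified sketch, one agent's overclaim shrinks every other agent's residual, which is one way to see the force of the threat criticism discussed below.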

Saryadar, in a comment in the JPE, criticized Mumy’s solution on the grounds that it rules out pre-play communication among the agents. Such communication could allow agents to transmit threats (“I’m claiming everything”) that, if credible, change the equilibrium outcome. He also hints that the assumption of common knowledge of the contributions is hard to swallow.

Schweinzer and Shimoji revisit the problem with the observation that truth-telling is not the only Nash equilibrium of the mechanism proposed by Mumy. Instead, they treat it as a problem of implementation under incomplete information. The captain is assumed to know the total amount of money to be divided; the agents are not. They propose a mechanism and identify a sufficient condition on beliefs under which truth-telling is the unique rationalizable strategy for each agent. The mechanism is in the spirit of a scoring rule and relies on randomization. I think MacWhirr might have objected on the grounds that the money represented the entire savings of the laborers.

“We finished the distribution before dark. It was rather a sight: the sea running high, the ship a wreck to look at, these Chinamen staggering up on the bridge one by one for their share, and the old man still booted, and in his shirt-sleeves, busy paying out at the chartroom door, perspiring like anything, and now and then coming down sharp on myself or Father Rout about one thing or another not quite to his mind. He took the share of those who were disabled to them on the No. 2 hatch. There were three dollars left over, and these went to the three most damaged coolies, one to each. We turned-to afterwards, and shovelled out on deck heaps of wet rags, all sorts of fragments of things without shape, and that you couldn’t give a name to, and let them settle the ownership themselves.”

This post is dedicated to a new and important result in game theory: the refutation of Mertens’ conjecture by Bruno Ziliotto. Stochastic games were defined by Shapley (1953). Such a game is given by

• a set Z of states,
• a set N = {1,2,…,n} of players,
• for every state z and every player i a set A_i(z) of actions available to player i at state z. Denote by Λ = { (z,(a_i)_{i ∈ N}) : a_i ∈ A_i(z) for every i} the set of all pairs (state, action profile at that state).
• for every player i, a stage payoff function u_i : Λ → R, and
• a transition function q : Λ → Δ(Z), where Δ(Z) is the set of probability distributions over Z.

The game starts at an initial state z^1 ∈ Z and is played as follows. At every stage t, each player i chooses an action a_i^t ∈ A_i(z^t), receives a stage payoff u_i(z^t; a_1^t,…,a_n^t), and the play moves to a new state, z^{t+1}, that is chosen according to q(z^t; a_1^t,…,a_n^t).
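The play described above is easy to mechanize. A minimal sketch (function and variable names are invented; uniformly random action choices stand in for actual strategies):

```python
import random

def play_stochastic_game(z1, A, u, q, T, rng):
    """Simulate T stages of a stochastic game under random play.

    A[z] : one action list per player at state z
    u    : list of stage-payoff functions u_i(z, action_profile)
    q    : transition rule q(z, action_profile) -> list of (state, prob)
    Returns each player's total (undiscounted) payoff over T stages.
    """
    z, payoffs = z1, [0.0] * len(u)
    for _ in range(T):
        profile = tuple(rng.choice(acts) for acts in A[z])
        for i, ui in enumerate(u):
            payoffs[i] += ui(z, profile)
        states, probs = zip(*q(z, profile))
        z = rng.choices(states, probs)[0]  # sample z^{t+1} from q
    return payoffs
```

For instance, a zero-sum game with a single absorbing state that pays player 1 one unit per stage yields total payoffs (T, −T) regardless of the actions chosen.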

In this post I assume that all sets are finite. The N-stage game is a finite game, and therefore by backwards induction it has an equilibrium. As I mentioned in my previous post, the discounted game has an equilibrium (even a stationary equilibrium) because of continuity-compactness arguments.

When studying dynamic interactions, economists like discounted games. Existence of equilibrium is assured because the payoff is a continuous function of the strategies of the players, and the construction of equilibrium strategies often requires ingenious tricks, and so is fun to think about and fun to read. Unfortunately, in practice the discounted evaluation is often not relevant. In many cases players, like countries, firms, or even humans, do not know their discount factor. Since the discounted equilibrium strategies depend heavily on the discount factor, this is a problem. In other cases, the discount factor changes over time in an unknown way. This happens, for example, when the discount factor is derived from the interest rate or from the players’ monetary or familial situation. Do the predictions and insights that we get from a model with a fixed and known discount factor still hold in models with a changing and unknown discount factor?

To handle such situations, the concept of a uniform equilibrium was defined. A strategy vector is a uniform ε-equilibrium if it is an ε-equilibrium in all discounted games for all discount factors sufficiently close to 1 (that is, whenever the players are sufficiently patient). Thus, if a uniform ε-equilibrium exists, then the players can play an approximate equilibrium as soon as they are sufficiently patient. In our modern world, in which one can make zillions of actions each second, the discount factor is sufficiently close to 1 for all practical purposes. A payoff vector x is a uniform equilibrium payoff if for every ε>0 there is a uniform ε-equilibrium that yields a payoff that is ε-close to x.

In repeated games, the folk theorem holds for the concept of uniform equilibrium (and for the concept of uniform subgame perfect equilibrium). Indeed, given a feasible and strictly individually rational payoff x, take a sequence of actions such that the average payoff along the sequence is close to x. Let the players play this sequence of actions repeatedly while monitoring the others, and punish each deviation by the minmax value. When the discount factor is close to 1, the discounted payoff of the sequence of actions is close to the average payoff, and therefore the discounted payoff that this strategy vector yields is close to x. If one insists on subgame perfection, then punishment is achieved by a short period of minmaxing followed by the implementation of an equilibrium payoff that yields the deviator a low payoff.
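The limiting step in this argument, that the normalized discounted value of a cyclic payoff stream approaches its arithmetic average as the discount factor tends to 1, is easy to check numerically (a small illustration, not tied to any particular game):

```python
def discounted_average(stream, delta, T=100000):
    """Normalized discounted payoff (1-delta) * sum_t delta^t * u_t
    of the periodic stream repeated forever, truncated at T stages."""
    total, w = 0.0, 1.0
    for t in range(T):
        total += w * stream[t % len(stream)]
        w *= delta
    return (1 - delta) * total

cycle = [3.0, 0.0, 0.0]  # arithmetic average 1.0
for delta in (0.9, 0.99, 0.999):
    # the values approach the average 1.0 as delta -> 1
    print(delta, discounted_average(cycle, delta))
```

For a cycle of length 3 paying 3, 0, 0, the closed form is 3/(1 + δ + δ²), which tends to the average payoff 1 as δ → 1.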

For two-player zero-sum stochastic games, Mertens and Neyman (1981) proved that the uniform value exists. Vieille (2000) showed that in two-player non-zero-sum stochastic games uniform ε-equilibria exist. Whether or not this result extends to any number of players is still an open problem.

Why do I tell you all this? It is preparation for the next post, which will present a new and striking result by a young French Ph.D. student.

This is the second post about stability and equilibrium in trading networks. The first post may be found here. In it I assumed a single homogeneous divisible good traded by agents with quasi-linear preferences. Each agent was associated with a vertex of a graph, and the edges represented pairs of agents who were permitted to trade with each other. Furthermore, each agent was designated either a buyer or a seller. Buyers could only trade with sellers and vice versa. This meant the underlying graph was bipartite. In this post I drop the assumptions that the good traded is homogeneous and that it is divisible.

As noted in the earlier post, when the utility of buyers and the costs of sellers are linear in quantity, total unimodularity of the underlying constraint matrix is sufficient to ensure the existence of Walrasian prices that support an efficient allocation that is integral. If buyers’ utility functions become concave or sellers’ cost functions become convex, this is no longer true.

Suppose instead that buyers’ utility functions are M${^{\#}}$-concave and sellers’ cost functions are M${^{\#}}$-convex (objects to be defined soon). Then the efficient allocation (the precise formulation will appear below) is indeed integer valued, and supporting Walrasian prices exist. The corresponding allocations and prices can be interpreted as stable outcomes which cannot be blocked. This result was established by Murota and Tamura (2001).

I’ll define M${^{\#}}$-concavity. For any integer valued vector ${x}$ in ${\Re^n}$ let ${supp^+(x) = \{i: x_i > 0\}}$ and ${supp^-(x) = \{i : x_i < 0\}}$. A real valued function ${f}$ defined on the integer vectors of ${\Re^n}$ (with non-empty domain) is M${^{\#}}$-concave if for all ${x, y}$ in its domain and any index ${u \in supp^+(x-y)}$ there exists an index ${v}$ in ${\{0\} \cup supp^{-}(x-y)}$ such that

$\displaystyle f(x) + f(y) \leq f(x - \chi_u + \chi_v) + f(y + \chi_u - \chi_v).$

Here ${\chi_i}$ is the 0-1 vector with a one in component ${i}$ and zeros elsewhere; by convention ${\chi_0}$ is the zero vector. I describe three different ways one might interpret this notion.

First, one can think of M${^{\#}}$-concavity as trying to capture an essential feature of the basis exchange property of matroids. If you don’t know what a matroid is, skip to the next paragraph. Think of the vectors ${x}$ and ${y}$ as being 0-1 and representing subsets of columns of some given matrix, ${A}$, say. Define ${f(x)}$ to be the rank of the columns in the subset associated with ${x}$. What the definition of M${^{\#}}$-concavity captures is this: for any column ${u}$ in ${x}$ but not in ${y}$ that I move from ${x}$ to ${y}$, I can find either a column in ${y}$ (not in ${x}$) to move back to ${x}$, or no column at all (the ${v = 0}$ case), without diminishing the sum of the ranks of the two sets. The argument is straightforward.
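The exchange inequality can be verified by brute force on small examples, including the rank function just described. A sketch, assuming the function is given as a dict on a finite domain of integer tuples (the helper name is invented):

```python
def is_m_sharp_concave(f):
    """Brute-force check of the M#-exchange inequality for a function
    given as a dict mapping integer tuples to values; domain = keys."""
    domain = set(f)
    n = len(next(iter(domain)))
    for x in domain:
        for y in domain:
            supp_plus = [i for i in range(n) if x[i] > y[i]]
            supp_minus = [i for i in range(n) if x[i] < y[i]]
            for u in supp_plus:
                ok = False
                # v = None encodes the chi_0 (no second exchange) case
                for v in [None] + supp_minus:
                    x2, y2 = list(x), list(y)
                    x2[u] -= 1; y2[u] += 1
                    if v is not None:
                        x2[v] += 1; y2[v] -= 1
                    x2, y2 = tuple(x2), tuple(y2)
                    if x2 in domain and y2 in domain and \
                       f[x] + f[y] <= f[x2] + f[y2] + 1e-9:
                        ok = True
                        break
                if not ok:
                    return False
    return True
```

The rank function of the rank-1 matroid on two elements passes the check, while a valuation with complementarities (positive only when both goods are held) fails it.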

A second interpretation is to be had by comparing the definition of M${^{\#}}$-concavity to the following way of stating concavity of a function:

$\displaystyle f(x) + f(y) \leq f(x - \frac{x-y}{2}) + f(y + \frac{x-y}{2})$

If ${x}$ and ${y}$ were numbers with ${x > y}$, then the first term on the right represents a step from ${x}$ closer to ${y}$ and the second term is a step from ${y}$ closer to ${x}$. M${^{\#}}$-concavity can be interpreted as a discrete analogue of this. Think of ${x}$ being above and to the left of ${y}$. A step closer to ${y}$ would mean one step down in the ${u}$ direction and a step to the right in the ${v}$ direction, i.e., ${x - \chi_u + \chi_v}$. The vector ${y + \chi_u - \chi_v}$ has a similar interpretation.

The third interpretation comes from an equivalent definition of M${^{\#}}$-concavity, one that shows it is identical to the notion of gross substitutes proposed by Kelso and Crawford (1982). Let ${p \in \Re^n}$ be a non-negative price vector. Let ${x}$ be a non-negative integer vector that maximizes ${f(x) - p \cdot x}$, i.e., a utility-maximizing bundle. Consider a new price vector ${p' \geq p}$. Then there exists a new utility-maximizing bundle ${y}$ such that ${y_i \geq x_i}$ for all ${i}$ such that ${p'_i = p_i}$. In words, the demand for goods whose price did not change cannot go down.
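The gross substitutes property is easy to see on a toy example, say the unit-demand rank valuation from the matroid discussion above (a sketch; the prices are chosen so the maximizer is unique, sidestepping tie-breaking):

```python
def demand(f, p):
    """A utility-maximizing bundle: argmax_x f(x) - p.x over the keys
    of the dict f (ties broken by tuple order; harmless here)."""
    return max(sorted(f),
               key=lambda x: f[x] - sum(pi * xi for pi, xi in zip(p, x)))

# Unit demand for two substitutable goods: value 1 for holding either.
rank = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 1}

x = demand(rank, (0.3, 0.4))   # good 1 is cheaper, so it is demanded
y = demand(rank, (0.8, 0.4))   # raise only the price of good 1
# Gross substitutes: demand for good 2, whose price is unchanged,
# does not fall -- here it rises from 0 to 1.
assert y[1] >= x[1]
```

Raising the price of one good pushes demand onto its substitute, never off the goods whose prices stayed put.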

The key result of Murota and Tamura is that the following linear program has an integer optimal solution when the ${U_i}$‘s are M${^{\#}}$-concave and the ${C_j}$‘s are M${^{\#}}$-convex.

$\displaystyle \max \sum_{i \in B}U_i(x_{ij}:\{j: (i,j) \in E\}) - \sum_{j \in S}C_j(x_{ij}: \{i:(i,j) \in E\})$

subject to

$\displaystyle 0 \leq x_{ij} \leq d_{ij}\,\, \forall (i,j) \in E$

Notice that buyers’ utilities do not necessarily depend on the total amount consumed. They can depend on the vector of inflows from the various sellers (similarly for the sellers). We can interpret this to mean that the sellers sell different goods which are substitutes for each other and there is an upper limit, ${d_{ij}}$, on the amount of seller ${j}$’s good that can go to buyer ${i}$. Such a bound is necessary to make sure the problem is well defined; when utilities and costs are linear in quantity, the problem would otherwise be unbounded. A second interpretation is that sellers sell the same good but under different contractual terms (payment plan, delivery date, etc.) which make them, in the buyers’ eyes, imperfect substitutes. The only element of the contract not fixed is the price, and that is what is determined in equilibrium.

In 1937, representatives of the Plywood Trust called upon Comrade Professor Leonid Vitalievich Kantorovich with a problem. The trust produced 5 varieties of plywood using 8 different machines. How, they asked, should they allocate their limited supply of raw materials to the various machines so as to produce the maximum output of plywood in the required proportions? As problems go it was, at this remove, unremarkable. Remarkable is that the Comrade Professor agreed to take it on. The so-called representatives might have been NKVD. Why? Uncle Joe’s first act upon taking power in 1929 was to purge the economists, or more precisely the Jewish ones. This was well before the purge of the communist party in 1936. Why the economists? They complained about waste in a planned economy ‘dizzy with success.’ Yet here were the apparatchiks of the Trust asking the Comrade Professor to reduce waste.

Kantorovich writes that at the time he was burnt out by pure mathematics. Combined with concern at the rise of Hitler, he felt compelled to do something practical. And so he turned his mind to the problem of the Plywood Trust. Francis Spufford, in his delightful work of ‘faction’ called Red Plenty, imagines what Kantorovich might have been thinking.

He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made.

If he was right — and he was sure he was, in essentials — then anyone applying the new method to any production situation in the huge family of situations resembling the one at the Plywood Trust, should be able to count on a measureable percentage improvement in the quantity of product they got from a given amount of raw materials. Or you could put that the other way around: they would make a measureable percentage saving on the raw materials they needed to make a given amount of product.

He didn’t know yet what sort of percentage he was talking about, but just suppose it was 3%. It might not sound like much, only a marginal gain, an abstemious eking out of a little bit more from the production process, at a time when all the newspapers showed miners ripping into fat mountains of solid metal, and the output of plants booming 50%, 75%, 150%. But it was predictable. You could count on the extra 3% year after year. Above all it was free. It would come merely by organising a little differently the tasks people were doing already. It was 3% of extra order snatched out of the grasp of entropy. In the face of the patched and mended cosmos, always crumbling of its own accord, always trying to fall down, it built; it gained 3% more of what humanity wanted, free and clear, just as a reward for thought. Moreover, he thought, its applications did not stop with individual factories, with getting 3% more plywood, or 3% more gun barrels, or 3% more wardrobes. If you could maximise, minimise, optimise the collection of machines at the Plywood Trust, why couldn’t you optimise a collection of factories, treating each of them, one level further up, as an equation? You could tune a factory, then tune a group of factories, till they hummed, till they purred. And that meant –

An English description of Kantorovich’s work appeared in the July 1960 issue of Management Science. The opening line of the paper is:

The immense tasks laid down in the plan for the third Five Year Plan period require that we achieve the highest possible production on the basis of the optimum utilization of the existing reserves of industry: materials, labor and equipment.

The paper contains a formulation of the Plywood Trust’s problem as a linear program, a recognition that an optimal solution exists at an extreme point, and an acknowledgment of the hopelessness of enumerating extreme points as a solution method. Kantorovich then goes on to propose his method, which he calls the method of resolving multipliers. Essentially, Kantorovich proposes that one solve the dual and then use complementary slackness to recover the primal. One might wonder how Kantorovich’s contribution differs from the contributions of Koopmans and Dantzig. That is another story, and as fair a description of the issues as I know can be found in Roy Gardner’s 1990 piece in the Journal of Economic Literature. I reproduce one choice remark:

Thus, the situation of Kantorovich is rather like that of the discoverer Columbus. He really never touched the American mainland, and he didn’t give its name, but he was the first one in the area.

As an aside, David Gale is the one often forgotten in this discussion. Had the Nobel committee awarded the prize for linear programming, Dantzig and Gale would have been included. Had Gale lived long enough, he might have won it again for matching, making him the third to have won the prize twice in the same subject. The others are John Bardeen and Frederick Sanger.

Continuing with Spufford’s imaginings:

– and that meant that you could surely apply the method to the entire Soviet economy, he thought. He could see that this would not be possible under capitalism, where all the factories had separate owners, locked in wasteful competition with one another. There, nobody was in a position to think systematically. The capitalists would not be willing to share information about their operations; what would be in it for them? That was why capitalism was blind, why it groped and blundered. It was like an organism without a brain. But here it was possible to plan for the whole system at once. The economy was a clean sheet of paper on which reason was writing. So why not optimise it? All he would have to do was to persuade the appropriate authorities to listen.

Implementation of Kantorovich’s solution at the Plywood Trust led to success. Inspired, Kantorovich sent a letter to Gosplan urging adoption of his methods. Here the fact that Kantorovich solved the dual first rather than the primal is important. Kantorovich interpreted his resolving multipliers (shadow prices today) as objectively determined prices. Kantorovich’s letter to Gosplan urged a replacement of the price system in place by his resolving multipliers; he intended to implement optimal production plans through appropriate prices. Gosplan responded that reform was unnecessary. Kantorovich narrowly missed a trip to the Gulag and stopped practicing economics, for a while. Readers wanting a fuller sense of what mathematical life was like in this period should consult this piece by G. G. Lorentz.

After the war, Kantorovich took up linear programming again. In Leningrad, he headed a team to reduce the scrap metal produced at the Egorov railroad-car plant. The resulting reduction in waste reduced the supply of scrap iron for steel mills, disrupting their production! Kantorovich escaped punishment by the Leningrad regional party because of his work on atomic reactors.

Kantorovich’s interpretation of resolving multipliers, which he renamed objectively determined valuations, put him at odds with the prevailing labor theory of value. In the post-Stalin era, he was criticized for being under the sway of Böhm-Bawerk, author of the notion of subjective utility. Aron Katsenelinboigen relates a joke played by one of these critics on Kantorovich. A production problem was presented to Kantorovich in which the labor supply constraint would be slack at optimality. Its ‘objectively determined valuation’ was therefore zero, contradicting the labor theory of value.

Nevertheless, Kantorovich survived. This last verse from the Ballad of L. V. Kantorovich, authored by Joseph Lakhman, explains why:

Then came a big scholar with a solution.
Alas, too clever a solution.
‘Objectively determined valuations’ –
That’s the panacea for each and every doubt!
Truth be told, the scholar got his knuckles rapped
Slightly rapped
That threatened to overturn the existing order.
After some thought, however, the conclusion was reached
That the valuations had been undervalued

This is the first of a series of posts about stability and equilibrium in trading networks. I will review and recall established results from network flows and point out how they immediately yield results about equilibria, stability and the core of matching markets with quasi-linear utility. It presumes familiarity with optimization and the recent spate of papers on matchings with contracts.

The simplest trading network one might imagine would involve buyers (${B}$) and sellers (${S}$) of a homogeneous good and a set of edges ${E}$ between them. No edges between sellers and no edges between buyers. The absence of an edge in ${E}$ linking ${i \in B}$ and ${j \in S}$ means that ${i}$ and ${j}$ cannot trade directly. Suppose buyer ${i \in B}$ has a constant marginal value of ${v_i}$ up to some amount ${d_i}$ and zero thereafter. Seller ${j \in S}$ has a constant marginal cost of ${c_j}$ up to some capacity ${s_j}$ and infinity thereafter.

Under the quasi-linear assumption, the problem of finding the efficient set of trades to execute can be formulated as a linear program. Let ${x_{ij}}$ for ${(i,j) \in E}$ denote the amount of the good purchased by buyer ${i \in B}$ from seller ${j \in S}$. Then, the following program identifies the efficient allocation:

$\displaystyle \max \sum_{(i,j) \in E} (v_i - c_j)x_{ij}$

subject to

$\displaystyle \sum_{j \in S: (i,j) \in E}x_{ij} \leq d_i\,\, \forall i \in B$

$\displaystyle \sum_{i \in B:(i,j) \in E}x_{ij} \leq s_j\,\, \forall j \in S$

$\displaystyle x_{ij} \geq 0\,\, (i,j) \in E$

This is, of course, an instance of the (discrete) transportation problem. The general version of the transportation problem is obtained by replacing each coefficient ${[v_i - c_j]}$ of the objective function by an arbitrary number ${w_{ij}}$. This version of the transportation problem is credited to the mathematician F. L. Hitchcock, who published it in 1941. Hitchcock’s most famous student is Claude Shannon.
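On a tiny instance, the transportation program above can even be solved by brute force over integer flows; total unimodularity guarantees that such an integer point attains the LP optimum. A sketch (the instance, the function name and all the numbers are invented):

```python
from itertools import product

def best_integer_flow(E, v, c, d, s, cap):
    """Brute-force the transportation LP over integer flows
    0 <= x_ij <= cap on the edge set E, maximizing sum (v_i - c_j) x_ij
    subject to the demand bounds d_i and supply bounds s_j."""
    edges = sorted(E)
    best, best_val = None, float('-inf')
    for xs in product(range(cap + 1), repeat=len(edges)):
        flow = dict(zip(edges, xs))
        # demand-side feasibility: sum over sellers linked to buyer b
        if any(sum(q for (i, j), q in flow.items() if i == b) > d[b]
               for b in d):
            continue
        # supply-side feasibility: sum over buyers linked to seller t
        if any(sum(q for (i, j), q in flow.items() if j == t) > s[t]
               for t in s):
            continue
        val = sum((v[i] - c[j]) * q for (i, j), q in flow.items())
        if val > best_val:
            best, best_val = flow, val
    return best, best_val
```

With buyers 1, 2 (values 5, 4; demands 2, 1), sellers 'a', 'b' (costs 1, 2; capacities 1, 2) and edges (1,a), (1,b), (2,b), the optimum ships one unit on each edge for a total gain of 9.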

The ‘continuous’ version of the transportation problem was formulated by Gaspard Monge and described in his 1781 paper on the subject. His problem was to split two equally large volumes (representing the initial and final locations of the earth to be shipped) into infinitely many small particles and then match them with each other so that the sum of the products of the lengths of the paths used by the particles and the volumes of the particles is minimized. The ${w_{ij}}$’s in Monge’s problem have a property, since called the Monge property, that is the same as submodularity/supermodularity. This paper describes the property and some of its algorithmic implications. Monge’s formulation was subsequently picked up by Kantorovich, and the study of it blossomed into the specialty now called optimal transport, with applications to PDEs and concentration of measure. That is not the thread I will follow here.

Returning to the Hitchcock, or rather discrete, formulation of the transportation problem, let ${\lambda_i}$ be the dual variables associated with the first (demand) set of constraints and ${p_j}$ the dual variables associated with the second (supply) set of constraints. The dual is

$\displaystyle \min \sum_{j \in S} s_jp_j + \sum_{i \in B}d_i\lambda_i$

subject to

$\displaystyle p_j + \lambda_i \geq [v_i-c_j]\,\, \forall (i,j) \in E$

$\displaystyle p_j, \lambda_i \geq 0\,\, \forall j \in S, i \in B$

We can interpret ${p_j}$ as the unit price of the good sourced from seller ${j}$ and ${\lambda_i}$ as the surplus that buyer ${i}$ will enjoy at prices ${\{p_j\}_{j \in S}}$. The following are immediate from the duality theorem, complementary slackness and dual feasibility.

1. If ${x^*}$ is a solution to the primal and ${(p^*, \lambda^*)}$ an optimal solution to the dual, then, the pair ${(x^*, p^*)}$ form a Walrasian equilibrium.
2. The set of optimal dual prices, i.e., Walrasian prices live in a lattice.
3. The dual is a (compact) representation of the TU (transferable utility) core of the co-operative game associated with this economy.
4. Suppose the only bilateral contracts we allow between buyer ${i}$ and seller ${j}$ are when ${(i,j) \in E}$. Furthermore, a contract can specify only a quantity to be shipped and price to be paid. Then, we can interpret the set of optimal primal and dual solutions to be the set of contracts that cannot be blocked (suitably defined) by any buyer seller pair ${(i,j) \in E}$.
5. Because the constraint matrix of the transportation problem is totally unimodular, the previous statements hold even if the goods are indivisible.

As these are standard, I will not reprove them here. Note also that none of these conclusions depends on the particular form of the coefficients in the objective function of the primal. We could replace ${[v_i - c_j]}$ by ${w_{ij}}$, where we interpret ${w_{ij}}$ as the joint gains from trade (per unit) to be shared by buyer ${i}$ and seller ${j}$.

Now suppose we replace the constant marginal values by increasing concave utility functions ${\{U_i(\cdot)\}_{i \in B}}$ and the constant marginal costs by increasing convex cost functions ${\{C_j (\cdot)\}_{j \in S}}$. The problem of finding the efficient allocation becomes:

$\displaystyle \max \sum_{i \in B}U_i(\sum_{j: (i,j) \in E}x_{ij}) - \sum_{j \in S}C_j(\sum_{i: (i,j) \in E}x_{ij})$

subject to

$\displaystyle \sum_{j \in S: (i,j) \in E}x_{ij} \leq d_i\,\, \forall i \in B$

$\displaystyle \sum_{i \in B:(i,j) \in E}x_{ij} \leq s_j\,\, \forall j \in S$

$\displaystyle x_{ij} \geq 0\,\, (i,j) \in E$

This is an instance of a concave flow problem. The Karush-Kuhn-Tucker conditions yield the following:

1. If ${x^*}$ is a solution to the primal and ${(p^*, \lambda^*)}$ an optimal Lagrangean, then, the pair ${(x^*, p^*)}$ form a Walrasian equilibrium.
2. The set of optimal Lagrange prices, i.e., Walrasian prices live in a lattice.
3. Suppose the only bilateral contracts we allow between buyer ${i}$ and seller ${j}$ are when ${(i,j) \in E}$. Furthermore, a contract can specify only a quantity to be shipped and price to be paid. Then, we can interpret the set of optimal primal and dual solutions to be the set of contracts that cannot be blocked (suitably defined) by any buyer seller pair ${(i,j) \in E}$.

Notice that we lose the extension to indivisibility. As the objective function in the primal is now concave, an optimal solution to the primal may occur in the interior of the feasible region rather than at an extreme point. To recover ‘integrality’ we need to impose a stronger condition on ${\{U_i\}_{i \in B}}$ and ${\{C_j\}_{j \in S}}$, specifically, that they be ${M}$-concave and ${M}$-convex respectively. This is a condition tied closely to the gross substitutes condition. More on this in a subsequent post.

Ezra Klein, one among the chattering classes, recently posted a summary of a graduation speech given by Thomas Sargent to Berkeley economics undergraduates in 2007. Sargent prefaced his remarks with these words:

I will economize on words.

Let’s see how well he succeeded:

Economics is organized common sense. Here is a short list of valuable lessons that our beautiful subject teaches.

1. Many things that are desirable are not feasible.

2. Individuals and communities face trade-offs.

3. Other people have more information about their abilities, their efforts, and their preferences than you do.

4. Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended.

Everything after the comma is superfluous. Why emphasize the bit about people you want to help? Why choose to emphasize the business of safety nets? He could just as well have said: That is why markets don’t always end up working as intended.

5. There are tradeoffs between equality and efficiency.

Redundant given items 1 and 2 above. It also presumes a commonly accepted definition of equality and of efficiency.

6. In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well meaning outsiders to change things for better or worse.

Wrong. Within the constraints of the game they may be satisfied with the outcome. It does not follow they are satisfied with the game they were obliged to play.

7. In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change.
This is how you earn a reputation.

He garbled this one by confusing the desire to make promises and whether they are credible. Here is an edit with fewer words.

Promises about tomorrow are easy to make, but because you will respond to incentives in the future they are hard to keep. It’s even harder to convince others that you will keep them. Before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation.

8. Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made.

Redundant given item 4 above.

9. It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore).

The item in brackets is clearly redundant, but what is a bully pulpit for unless one gets one's licks in!

10. When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation.

11. Most people want other people to pay for public goods and government transfers (especially transfers to themselves).

12. Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates.

This rests on a supposition that some might find arguable, though the conclusion might still follow. Perhaps a simple 'you can't beat the market without an unfair advantage' would suffice.

Now compare with Yoram Bauman's translation of Mankiw's ten principles of economics:

#3. People are stupid.
#4. People aren’t that stupid.
#5. Trade can make everyone worse off.
#6. Governments are stupid.
#7. Governments aren’t that stupid.
#8. Blah blah blah.
#9. Blah blah blah.
#10. Blah blah blah.

On many campuses one will find notices offering modest sums to undergraduates to participate in experiments. When the experimenter does not attract sufficiently many subjects at the posted rate, does she raise it? Do undergraduates make counteroffers? If not, why not? An interesting contrast is medical research, where there has arisen a class of professional human guinea pigs. They have a jobzine, and the anthropologist Roberto Abadie has a book on the subject. Prices paid to healthy subjects to participate in trials vary and increase with the potential hazards. The jobzine mentioned earlier provides ratings of the various research organizations that carry out such studies. A number of questions come to mind immediately: how are prices determined, are subjects in a position to offer informed consent, should such contracts be forbidden, and does relying on such subjects induce a selection bias?

In the March 23rd edition of the NY Times, Mankiw proposes a 'do no harm' test for policy makers:

…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.

There is a qualifier for negative externalities, and he goes on to say:

As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.

Minimum wage legislation is offered as an example of a policy that fails the 'do no harm' test.

The association with the Hippocratic oath gives it an immediate appeal. I believe the test to be more Panglossian (or should I say Leibnizian) than Hippocratic.

There is an immediate 'heart strings' argument against the test: indentured servitude passes the 'do no harm' test, yet indentured servitude contracts are illegal in many jurisdictions (repugnant contracts?). This argument only raises more questions, such as why we would rule out such contracts. I want to focus instead on two other aspects of the 'do no harm' principle, contained in the words 'voluntarily' and 'benefit'. What is voluntary, and benefit compared to what?

To fix ideas, imagine two parties who, if they work together and expend equal effort, can jointly produce a good worth $1. How should they split the surplus produced? How will they split it? An immediate answer to the 'should' question is 50-50. A deeper answer would suggest that each receive their marginal product (or added value) of $1, but this is impossible without an injection of money from the outside. There is no immediate answer to the 'will' question, as it depends on the outside options of each agent and their relative patience. Suppose, for example, the outside option of each party is $0, one agent is infinitely patient and the other has a high discount rate. It isn't hard to construct a model of bargaining in which the lion's share of the gains from trade goes to the patient agent. Thus, what 'will' happen can be very different from what 'should' happen, and it depends on the relative patience and outside options of the agents at the time of bargaining. In my extreme example, one might ask why one agent is so impatient. If the patient agent exploits the impatience of the other, is that coercion?
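The claim about patience can be made concrete with Rubinstein's alternating-offers model, in which two players take turns proposing how to split the $1 and discount delay by per-period factors. The closed-form split below is the standard subgame-perfect outcome; the particular discount factors are illustrative, not taken from the text:

```python
def rubinstein_shares(d1, d2):
    """Subgame-perfect split of a $1 surplus in Rubinstein's
    alternating-offers bargaining model; player 1 proposes first.
    d1, d2 are the players' per-period discount factors."""
    share1 = (1 - d2) / (1 - d1 * d2)
    return share1, 1 - share1

# A nearly infinitely patient proposer facing an impatient responder:
patient_share, impatient_share = rubinstein_shares(d1=0.99, d2=0.50)
```

With d1 = 0.99 and d2 = 0.50 the patient player captures roughly 99 cents of the dollar, while equal patience (d1 = d2) pushes the split back toward 50-50, which is the sense in which 'will' diverges from 'should'.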

When parties negotiate to their mutual benefit, it is to their benefit relative to the status quo. When the status quo presents one agent with an outside option that is untenable, say starvation, is the bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the 'do no harm' principle in policy matters is the assumption that the status quo does less harm than a change to it would. This is not clear to me at all. Let me illustrate with two examples to be found in any standard microeconomics textbook.

Assuming a perfectly competitive market, imposing a minimum wage above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose it were a monopsony employer constrained to offer the same wage to everyone employed. Then imposing a minimum wage above the monopsonist's optimal wage would increase total welfare, because it raises employment as well as wages.
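A back-of-the-envelope version of the monopsony case, with an assumed linear labor demand MRP(L) = 1 - L and inverse labor supply w(L) = L (both hypothetical, chosen only to make the arithmetic easy):

```python
def mrp(L):
    """Marginal revenue product of labor (assumed linear demand: 1 - L)."""
    return 1.0 - L

def wage(L):
    """Inverse labor supply (assumed: w = L)."""
    return L

def total_surplus(L):
    """Total surplus: area between the MRP and supply curves up to
    employment L, via midpoint-rule numeric integration."""
    n = 100_000
    h = L / n
    return sum((mrp((i + 0.5) * h) - wage((i + 0.5) * h)) * h for i in range(n))

# The monopsonist equates MRP with the marginal cost of labor,
# d(w(L) * L)/dL = 2L, so 1 - L = 2L:
L_monopsony = 1.0 / 3.0
# A minimum wage fixed at the competitive level flattens the marginal
# cost of labor, so the monopsonist hires until MRP equals that wage,
# 1 - L = L:
L_min_wage = 0.5
```

In this toy economy employment rises from 1/3 to 1/2 and total surplus from about 0.22 to 0.25, so the minimum wage increases both employment and welfare, exactly the textbook monopsony result referred to above.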