
The majorization relationship arises frequently in Economic applications, most recently in the context of information design (see Kleiner, Moldovanu and Strack (2021) for instance). This post highlights the connection between optimization problems involving the majorization relationship and polymatroid optimization problems.

The majorization relationship is a partial order on vectors. If $x$ is an $n$-vector, denote its $j^{th}$ largest component by $x_{[j]}$. Let $x$ and $b$ be two vectors in $n$ dimensions. The vector $x$ is majorized by the vector $b$ if

$\sum_{j \leq i}x_{[j]} \leq \sum_{j \leq i}b_{[j]},\, \forall i \leq n-1$

$\sum_{j=1}^nx_{[j]} = \sum_{j=1}^nb_{[j]}$
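In code, the definition translates directly: sort both vectors in non-increasing order, compare partial sums, and check that the totals agree. A minimal sketch (the function name is my own):

```python
def is_majorized(x, b, tol=1e-9):
    """Check whether x is majorized by b: every partial sum of the sorted
    components of x is dominated by that of b, and the totals are equal."""
    xs = sorted(x, reverse=True)
    bs = sorted(b, reverse=True)
    px = pb = 0.0
    for i in range(len(xs) - 1):
        px += xs[i]
        pb += bs[i]
        if px > pb + tol:
            return False
    return abs(sum(xs) - sum(bs)) <= tol

# (2, 2, 2) is majorized by (3, 2, 1), but (4, 1, 1) is not.
```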

In the typical application one is given a non-negative vector $b$ and the goal is to identify a vector $x$ that is majorized by $b$ to optimize a linear function of $x$. The problem is easily formulated as a linear program. Suppose  $b_1 \geq b_2 \geq \ldots \geq b_n \geq 0$. Then, our goal is to maximize $cx$ subject to

$\sum_{j \leq i}x_j \leq \sum_{j \leq i}b_j\,\, \forall i \leq n-1$

$\sum_{j=1}^nx_j = \sum_{j=1}^nb_j$

$x_1 \geq x_2 \geq \ldots \geq x_n \geq 0$

A classic result of Hardy, Littlewood and Polya states that $x$ is majorized by $b$ iff there is a doubly-stochastic matrix $\Pi$ such that $x=\Pi b$. By the Birkhoff-von Neumann theorem, this means that the set of vectors majorized by $b$ is the convex hull of all permutations of $b$, i.e., a permutahedron. Permutahedra are polymatroids (see, for example, Kuipers, Vermeulen and Voorneveld (2010)). Therefore, we can reformulate the linear program above as a polymatroid optimization problem, which can be solved via a greedy algorithm. Dahl (2010) gives the underlying submodular function that describes the polymatroid. For each $S \subseteq \{1, \ldots, n\}=N$ let $f(S) = \sum_{j=1}^{|S|}b_j$. Note that $f(S)$ depends only on the cardinality of $S$ and not on $S$ itself. Given this, the optimization problem above can be expressed as

$\max cx$

subject to

$\sum_{j=1}^nx_j = f(N)$

$\sum_{j \in S}x_j \leq f(S)\,\, \forall S \subset N$

$x_j \geq 0\,\, \forall j$
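Because $f$ depends only on cardinality, the polymatroid greedy algorithm takes a particularly simple form here: visit the indices in order of decreasing objective coefficient, and give the $k$-th index visited the marginal value $f(k)-f(k-1)=b_k$. A sketch, assuming $b$ is already sorted in non-increasing order (the function name is my own):

```python
def greedy_polymatroid_max(c, b):
    """Maximize c.x over the permutahedron of b (the base polytope of
    f(S) = b_1 + ... + b_{|S|}) by the polymatroid greedy rule.

    Assumes b is sorted non-increasingly: b[0] >= b[1] >= ... >= 0.
    """
    # visit indices in order of decreasing objective coefficient
    order = sorted(range(len(c)), key=lambda j: -c[j])
    x = [0.0] * len(c)
    for k, j in enumerate(order):
        x[j] = b[k]  # marginal value f(k+1) - f(k) = b[k]
    return x

# Example: c = (3, 1, 2), b = (5, 3, 1) gives x = (5, 1, 3),
# pairing the largest entries of b with the largest coefficients of c.
```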

There is also a related ‘minorization’ problem that is of interest (see Akhil and Sundaresan (2016) for a survey):

$\min cx$

subject to

$\sum_{j \leq i}x_j \geq \sum_{j \leq i}b_j\,\, \forall i \leq n-1$

$\sum_{j=1}^nx_j = \sum_{j=1}^nb_j$

$x_1 \geq \ldots \geq x_n \geq 0$

Dahl (2001) characterizes the extreme points of the feasible region of this program. Morton, von Randow and Ringwald (1985) show that its feasible region can be described as a polymatroid with monotonicity constraints. Set $g(S) = \max \{\sum_{i=1}^jb_i: \{1, 2, \ldots, j\} \subseteq S\}\,\, \forall S \subseteq N.$ In words, $g(S) = \sum_{i=1}^jb_i$ if $j$ is the largest index such that the set $\{1, 2, \ldots, j\}$ is contained in $S$; otherwise $g(S)=0$. Set $f(S) = g(N) - g(N \setminus S)$. The function $f(S)$ is non-decreasing and submodular on $N$. The feasible region of the minorization problem can be expressed as

$\sum_{j \in S}x_j \leq f(S)\,\, \forall S \subset N$

$\sum_{j \in N}x_j = f(N)$

$x_1 \geq \ldots \geq x_n \geq 0$
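The functions $g$ and $f$ are easy to implement, and the monotonicity and submodularity claims can be spot-checked by brute force on a small instance. A sketch, with $b=(5,3,1)$ as an arbitrary example and sets given as 1-based indices:

```python
from itertools import combinations

def g(S, b):
    """g(S) = b_1 + ... + b_j for the largest j with {1,...,j} contained
    in S (and 0 if 1 is not in S). S is a set of 1-based indices; b is
    sorted non-increasingly."""
    j = 0
    while (j + 1) in S:
        j += 1
    return sum(b[:j])

def f(S, b, N):
    """The submodular function describing the minorization region."""
    return g(N, b) - g(N - S, b)

# Brute-force check of monotonicity and submodularity on a small example.
b = [5, 3, 1]
N = {1, 2, 3}
subsets = [set(c) for r in range(len(b) + 1) for c in combinations(N, r)]
for S in subsets:
    for T in subsets:
        assert f(S, b, N) + f(T, b, N) >= f(S | T, b, N) + f(S & T, b, N)
        if S <= T:
            assert f(S, b, N) <= f(T, b, N)
```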

Characterizing the structure of the optimal solution is straightforward. It involves the ‘pooling’ of some variables. Variables are partitioned into sets consisting of consecutive indices; within each of these sets the variables are equal. Variables from sets with ‘lower’ indices will of course have higher values than variables from sets with larger indices.

For the more general problem of optimizing a linear function over the intersection of a polymatroid and an affine space, see Hartvigsen (1998).

One of the delights of pursuing a relatively young discipline is that one meets its pioneers. As one grows old in the discipline, so do the pioneers, who eventually pass into ‘the undiscovered country from whose bourn no traveler returns’. Overlooked, at least by me, was that one also meets, in the chrysalis stage, those who will eventually lead the discipline into the next century. It was the untimely passing of William Sandholm on July 6th of this year that brought this to mind.

I first met Bill in 1998. I had just moved to MEDS and he was on his way out as a newly minted PhD. He a shiny new penny, and I on the way to becoming so much loose change.

Within a decade, Bill rose to prominence as an authority on Evolutionary Game Theory. His book, “Population Games and Evolutionary Dynamics” became the standard reference for population games. The concept of evolutionary implementation can be credited to him.

Bill was also a provider of public goods. He wrote and made freely available software for working with evolutionary dynamics, and served on panels and editorial boards.

As I recall Bill, I am reminded of a line from Mary Chase’s play Harvey, uttered by the main character, Elwood Dowd:

Years ago my mother used to say to me, she’d say, ‘In this world, Elwood, you must be’ – she always called me Elwood – ‘In this world, Elwood, you must be oh so smart or oh so pleasant.’ Well, for years I was smart. I recommend pleasant. You may quote me.

Bill was both.

Time for some game theory about the recent development in Israeli politics. (You can read more about it here)

In the aftermath of the last election, no side could form a governing coalition. So the options were either a unity government, composed of the two major parties (Bibi Netanyahu’s Likud and Benny Gantz’ Kahol-Lavan) or yet another, fourth in a row, election. But with a year of election campaigns sowing hatred throughout the country, the ongoing coronavirus lockdown and an impending recession, nobody wants another election. So the two factions have to bargain on the terms of the unity government, with the possibility of another election being what game theorists call a disagreement point: this is what happens if negotiations break down.

Game theory teaches us that the disagreement point has a significant impact on the outcome, even when the parties reach an agreement, and that the better your position in the disagreement point, the better the bargaining outcome will be for you.

The disagreement point in the negotiation following the last election is impacted by two factors that did not exist in the previous rounds:

• The coronavirus came as a gift from the gods for Bibi. Frightened citizens usually stick with incumbent leaders in times of crisis, and in Bibi’s case, this tendency is magnified by the fact that he has been around for so many years. Even before the virus popped up, the greatest difficulty of the opposition was to convince the public that he is not the only person who is qualified for the prime-minister job. So an election will be advantageous to Bibi. Moreover, since the election will be postponed until after the lockdown, Bibi will likely stay in power for a long period even under the disagreement point.
• While he could not form a government, this time Gantz did manage to form a functioning majority coalition in the parliament. He was on the verge of electing a parliamentary speaker from his party, which would have allowed him to pass a law — the so-called “Bibi law” — forbidding Bibi from being appointed prime minister because he is under indictment for bribery.

So, in terms of the disagreement point, Bibi has something good going for him and something bad going for him. So he made a threat: if you continue with your current plan to take over the parliament, I will abandon the negotiations and we will jump straight ahead to the disagreement point. Gantz, believing the threat, gave up on the plan. Instead, he promised to give the speakership to somebody from Bibi’s party and to join a government led by Bibi, and he split his own party for that purpose, bringing with him only half of it to the new coalition under Bibi.

So now Gantz has made the disagreement point much better for Bibi and much worse for himself:
— If there is an election again, Gantz will be viewed as a fool who tore down his party for nothing, and he will likely not be elected.
— Gantz has made public that he too thinks Bibi should be prime minister. This statement substantiates Bibi’s image as the only person qualified for the job. It also diminishes the moral argument against choosing an indicted person as prime minister, which was the justification for the Bibi law and a central theme in the campaign against him.
— Gantz has for now taken the speakership himself. Joining Bibi’s government will force him to resign it, and he has already agreed to hand it to Bibi’s party. Once Bibi’s party holds the speakership, even before the government is formed, the Bibi law can no longer be passed.

The alternative was to take over the parliament, uncover information about the government’s lack of preparation and incompetent handling of the corona crisis, and meanwhile negotiate secretly over the unity government, without publicly supporting Bibi as prime minister until the deal was reached, all the while keeping the option of passing the Bibi law if the negotiation failed. It’s hard to predict a bargaining outcome, but I think it is possible that under this plan, even if Bibi had continued to be the prime minister for some time, we would see a meaningful division of power and a date for the termination of Bibi’s reign. Now there is no such hope.

Why did Gantz give up all his cards? I think the reason is that, because he believed in Bibi’s threat to abandon the negotiations, Gantz thought he had to choose between another election and a unity government. Indeed, Gantz conducted an election poll before he made this move, which suggests he was thinking in these terms.

Bibi’s threat, however, was empty or, to use game-theoretic jargon, not credible. The fact that the disagreement point would have become worse for Bibi precisely implies that, whatever he says now, he would have been willing to negotiate in order to escape it. The only difference is that the negotiation would be conducted in an environment that is less convenient for Bibi.
It is also not at all clear that the probability of reaching a deal has increased following Gantz’s move. In fact, an exultant Bibi might insist on terms that even Gantz will not be able to accept.

Many of Gantz’s voters are furious because they preferred another election over a Bibi government. But Gantz’s voters who were willing to stomach a Bibi government should also be furious, because Gantz created a bargaining environment in which he is desperate to achieve it.

This morning I sent the following message to the heads of the NSF and the ERC.

Quixotic as this manifesto may be, it is in fact simple, sets incentives right, and requires little coordination between the two largest research funding agencies.

Dear Professors Cordova and Ferrari,

We are all aware of the escalation in journal publication costs for researchers, reaching thousands of dollars per article in, e.g., leading life sciences journals, and, at the other extreme, of the phenomenon of completely fake journals that publish un-refereed papers for huge fees. This is public money going to private gate-keepers who provide little or even negative added value.

A relatively simple joint policy by the NSF and the ERC may overturn this situation:

1) Announce that as of 2.2.22, NSF and ERC grants will be awarded only to researchers all of whose journal publications as of that date appear in journals which are (i) open access, and (ii) charge at most \$100 for submission+publication; in grant applications, researchers’ CVs will be required to mention explicitly, next to each publication, the publication cost.

2) In view of this new policy, encourage entire editorial boards of all scientific journals to migrate to one of the open and free journal systems  (e.g. OJS), and to re-establish there their journal under the name “The New Journal/Review of…”, respectively.  The NSF and the ERC may finance a small team to facilitate the technical transition of editorial boards.

Sincerely,

Israel is holding an election Tuesday. It’s a multi-party system in Israel, based on proportional representation, and the main battle is between two blocs of parties. We call them the left-wing bloc and the right-wing bloc, and civics textbooks will tell you that this division is about policy: the Arab-Israeli conflict, economics and the role of religion. But let’s face it, nobody cares about public policy anymore. It’s all about Bibi. Parties that intend to form a coalition under Bibi belong to the right-wing bloc, and parties that intend to rid the country of him belong to the left-wing bloc.

In addition to the battle between the blocs, there is also a contest between the parties within each bloc, especially between one big party and several satellites. Bibi, in particular, is expected to use the final days of the campaign to siphon votes from the satellites to his own Likud party, as he did in previous elections. Indeed, in recent days, Bibi and his social media fans have argued that the number of seats of the Likud party relative to its major opponent (called BlueWhite), and not the size of the blocs, is what will determine whether Bibi forms the government again. BlueWhite makes the same argument to left-wing voters.

But if Bibi is serious about cannibalizing his bloc, he could use a more fatal argument by appealing to the electoral threshold, that is, the minimum share of the total votes that a party must achieve to be entitled to seats in the parliament. The current threshold is 3.25%, which translates to roughly four seats in the 120-seat parliament. This sounds like a low threshold, but there are eleven parties in the current parliament, some of which have split in two, there are new parties that are serious contenders, and Likud and BlueWhite together are expected to win about half of the total votes. So many small parties are close to the threshold in the opinion polls. Since the two blocs are almost tied (with a small advantage to the right-wing bloc), the electoral threshold might end up playing a significant role in determining the outcome if many votes are cast for a party that doesn’t cross it.

Enter game theory

It is difficult to do a game-theoretic analysis of any election, because we don’t have a good explanation for what drives voters to stand in line at the voting booth rather than go to the beach. Let me bypass this issue and just assume that you gain some utility from voting for your favorite party. So assume for simplicity that right-wing voters have two options, Likud or New Right (NR), which is one of the new parties in the right-wing bloc, whose leaders Bibi detests. According to the polls, 4% of the voters are New Right supporters. These guys get utility 2 if they vote New Right and utility 1 if they vote Likud. And here is the twist: you only get this utility if the party you vote for passes the electoral threshold. If it doesn’t, you feel like your vote has been wasted and you get utility 0.

Whether or not an NR supporter will vote her preference depends on what she thinks the other supporters will do. In fact, NR supporters are involved in a population stag hunt game (sometimes called an investment game). There are two Nash equilibria in the game: either everyone votes NR or everyone votes Likud. The first equilibrium requires players to take the risky action of voting NR, and if they all go along, they all get the high utility of 2. I have played variants of the investment game in class several times. Typically, players go for the safe choice, which in this case is Likud. If you do this, you guarantee yourself a utility of 1 regardless of what the other players do.
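The best-response calculation behind this is one line: if $p$ is a supporter's belief that NR will clear the threshold, voting NR yields $2p$ against a safe $1$ for Likud, so NR is a best response exactly when $p > 1/2$. A toy sketch with the payoffs assumed above (Likud is taken to clear the threshold for sure):

```python
def expected_utility(action, p):
    """Expected utility of an NR supporter, where p is her belief that
    NR clears the electoral threshold if she votes for it."""
    if action == "NR":
        return 2 * p  # utility 2, but only if NR wins seats
    return 1.0        # Likud is the safe choice

# Voting NR beats Likud only when p > 1/2; Bibi's rumor works by
# pushing beliefs below that cutoff, not by changing the payoffs.
```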

How could the potential voters of NR coordinate on the risky equilibrium which gives a higher utility? I think the public opinion polls have a big role here. In the recent polls, NR has been consistently hovering above the threshold. The polls make it commonly known that there are sufficiently many potential NR voters, and also that they plan to go along with voting NR.

Suppose however that on Monday Bibi hints that his internal polls show that NR does not cross the threshold. What would our NR supporter do? She will likely know that there are no such polls, since most Bibi supporters know that he is a liar. But perhaps other players will believe Bibi and switch to voting Likud, which will drive NR below the threshold. Or perhaps other players will worry that some other players will believe him. You see where this is going: to play the risky equilibrium, you need confidence that the other players are playing it too. Bibi can shake this confidence even if he can’t change the fact that NR has enough supporters to cross the threshold if they all collaborate.

By the way, I believe that established parties that are already represented in the parliament are less vulnerable to such an attack, since the confidence of their supporters in the risky equilibrium is stronger, as they played it once already.

Go ahead Bibi, make my day.

Apparently, it is quite the rage to festoon one’s slides with company logos, particularly of the frightful five. At present this is done for free. It suggests a new business: a platform that matches advertisers to faculty. Faculty can offer up their slides and advertisers can bid for the right to place their logos on the slides.

After presenting Richard Weber’s remarkable proof of Gittins’ index theorem in my dynamic optimization class, I claimed that the best way to make sure that you understand a proof is to identify where the assumptions of the theorem are used. Here is the proof again, slightly modified from Weber’s paper, followed by the question I gave in class.

First, an arm or a bandit process is given by a countable state space $S$, a transition function $q(s'|s)$ and a payoff function $r:S\rightarrow [0,1]$. The interpretation is that at every period, when the arm is at state $s$, playing it gives a reward $r(s)$ and the arm’s state changes according to $q(\cdot|s)$.

In the multi-armed bandit problem, at every period you choose an arm to play. The states of the arms you didn’t choose remain fixed. Your goal is to maximize expected total discounted rewards. Gittins’ theorem says that for each arm there exists a function $\gamma:S\rightarrow [0,1]$ called the Gittins Index (GI from now on) such that, in a multi-armed problem, the optimal strategy is to play at each period the arm whose current state has the largest GI. In fancy words, the theorem establishes that the choice of which arm to play at each period satisfies Independence of Irrelevant Alternatives: Suppose there are three arms $A,B,C$ whose current states are $a,b,c$. If you would start by playing $A$ when only $A$ and $B$ were available, then you should not start with $B$ when $A,B,C$ are available.

The proof proceeds in several steps:

1. Define the Gittins Index at state $s$ to be the amount $\gamma$ such that, if the casino charges $\gamma$ every time you play the arm, then both playing and not playing are optimal actions at the state $s$. We need to prove that there exists a unique such $\gamma$. This is not completely obvious, but can be shown by appealing to standard dynamic programming arguments.
2. Assume that you enter a casino with a single arm at some state $s$ with GI $\gamma$. Assume also that the casino charges $\gamma$ every time you play the arm. At every period, you can play, or quit playing, or take a break. From step 1, it follows that regardless of your strategy, the casino will always get a nonnegative expected net payoff, and if you play optimally then the net expected payoff to the casino (and therefore also to you) is zero. For this reason, this $\gamma$ (the GI of the initial state) is called the fair charge. Here, playing optimally means that you either do not play at all or start playing and continue to play every period until the arm reaches a state with GI strictly smaller than $\gamma$, in which case you must quit. It is important that as long as the arm is at a state with GI strictly greater than $\gamma$ you continue playing. If you need to take a restroom break you must wait until the arm reaches a state with GI $\le \gamma$.
3. Continuing with a single arm, assume now that the casino announces a new policy: at every period, if the arm reaches a state with a GI that is strictly smaller than the GI of all previous states, then the charge for playing the arm drops to the new GI. We call these new (random) charges the prevailing charges. Again, the casino will always get a nonnegative expected net payoff, and if you play optimally then the net expected payoff is zero. Here, playing optimally means that you either do not play at all or start playing and continue to play forever. You can quit or take a bathroom break only at periods in which the prevailing charge equals the GI of the current state.
4. Consider now the multi-arm problem, and assume again that in order to play an arm you have to pay its current prevailing charge as defined in step 3. Then again, regardless of how you play, the casino will get a nonnegative net payoff (since by step 3 this is the case for every arm separately), and you can still get an expected net payoff of $0$ if you play optimally. Playing optimally means that you either do not play or start playing. If you start playing, you can quit, take a break, or switch to another arm only in periods in which the prevailing charge of the arm you are currently playing equals the GI of its current state.
5. Forget for a moment about your profits and assume that what you care about is maximizing payments to the casino (I don’t mean net payoff, I mean just the charges that the casino receives from your playing). Since the sequence of prevailing charges of every arm is decreasing, and since the discount factor makes the casino like higher payments early, the Gittins strategy — the one in which you play at each period the arm with highest current GI, which by definition of the prevailing charge is also the arm with highest current prevailing charge — is the one that maximizes the Casino’s payments. In fact, this would be the case even if you knew the realization of the charges sequence in advance.
6. The Gittins strategy is one of the optimal strategies from step 4. Therefore, its net expected payoff is $0$.
7. Therefore, for every strategy $\sigma$,
Rewards from $\sigma$ $\leq$ Charges from $\sigma$ $\leq$ Charges from the Gittins strategy
(the first inequality is step 4, the second is step 5), and
Charges from the Gittins strategy $=$ Rewards from the Gittins strategy
(step 6).
8. Therefore, the Gittins strategy gives the optimal possible total rewards.

That’s it.
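The fair charge of step 1 can also be computed numerically for a toy finite-state arm: the value of the play-or-quit problem is decreasing in the charge $\gamma$, so bisection finds the charge at which it hits zero. A sketch (value iteration plus bisection, my own construction rather than part of Weber's argument):

```python
def gittins_index(r, q, s0, beta=0.9, iters=500, tol=1e-6):
    """Gittins index of state s0: the per-play charge gamma at which
    playing and quitting are both optimal (the 'fair charge' of step 1).

    r: reward of each state, q: transition matrix q[s][t], beta: discount.
    A toy for small finite state spaces, not the general countable case.
    """
    n = len(r)

    def value(gamma):
        # value iteration for V(s) = max(0, r(s) - gamma + beta * E[V(s')])
        V = [0.0] * n
        for _ in range(iters):
            V = [max(0.0, r[s] - gamma + beta * sum(q[s][t] * V[t] for t in range(n)))
                 for s in range(n)]
        return V[s0]

    lo, hi = 0.0, max(r)  # the fair charge lies between 0 and the max reward
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if value(mid) > 0:
            lo = mid  # playing is still profitable: the index is higher
        else:
            hi = mid  # playing no longer pays: the index is lower
    return (lo + hi) / 2

# A constant arm with reward 0.5 has index 0.5; an arm paying 1 once and
# then 0 forever has index 1 at its initial state.
```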

Now, here is the question. Suppose that instead of arms we had dynamic optimization problems, each given by a state space, an action space, a transition function, and a payoff function. Let’s call them projects. The difference between a project and an arm is that when you decide to work on a project you also decide which action to take, and the current reward and next state depend on the current state and on your action. Now read the proof again with projects in mind. Every time I said “play arm $i$”, what I meant was: work on project $i$ and choose the optimal action. We can still define an “index”, as in the first step: the unique charge $\gamma$ such that, if you need to pay $\gamma$ every period you work on the project (using one of the actions), then both not working and working with some action are optimal. The conclusion is not true for the projects problem though. At which step does the argument break down?

Volume 42 of the AER, published in 1952, contains an article by Paul Samuelson entitled ‘Spatial Price Equilibrium and Linear Programming’. In it, Samuelson uses a model of Enke (1951) as a vehicle to introduce the usefulness of linear programming techniques to Economists. The second paragraph of the paper is as follows:

In recent years economists have begun to hear about a new type of theory called linear programming. Developed by such mathematicians as G. B. Dantzig, J. v. Neumann, A. W. Tucker, and G. W. Brown, and by such economists as R. Dorfman, T. C. Koopmans, W. Leontief, and others, this field admirably illustrates the failure of marginal equalization as a rule for defining equilibrium. A number of books and articles on this subject are beginning to appear. It is the modest purpose of the following discussion to present a classical economics problem which illustrates many of the characteristics of linear programming. However, the problem is of economic interest for its own sake and because of its ancient heritage.

Of interest are the five reasons that Samuelson gives for why readers of the AER should care.

1. This viewpoint might aid in the choice of convergent numerical iterations to a solution.

2. From the extensive theory of maxima, it enables us immediately to evaluate the sign of various comparative-statics changes. (E.g., an increase in net supply at any point can never in a stable system decrease the region’s exports.)

3. By establishing an equivalence between the Enke problem and a maximum problem, we may be able to use the known electric devices for solving the former to solve still other maximum problems, and perhaps some of the linear programming type.

4. The maximum problem under consideration is of interest because of its unusual type: it involves in an essential way such non-analytic functions as absolute value of X, which has a discontinuous derivative and a corner; this makes it different from the conventionally studied types and somewhat similar to the inequality problems met with in linear programming.

5. Finally, there is general methodological and mathematical interest in the question of the conditions under which a given equilibrium problem can be significantly related to a maximum or minimum problem.

My students often use me as a sounding board for their new ventures. A sign that the modern University could pass for a hedge fund with classrooms. The request brings a chuckle as it always reminds me of my first exposure to entrepreneurial activity.

It happened in the most unlikely of places as well as times. A public (i.e. private) school in pre-Thatcherite England. England was then the sick man of Europe and its decline was blamed upon the public schools. Martin Wiener’s English Culture and the Decline of the Industrial Spirit, for example, argued that the schools had turned a nation of shopkeepers into one of lotus eaters.

Among the boys was a fellow I’ll call Hodge. He was a well-established source of contraband like cigarettes and pornographic magazines. He operated out of a warren of toilets in the middle of the school grounds called the White City. That the school needed a small building devoted entirely to toilets was a product of the English distrust of indoor plumbing and central heating.

One lesson I learnt from Hodge: never buy a pornographic magazine sight unseen. The Romans called it caveat emptor, but I think this more vivid.

Hodge was always on the lookout for new goods and services that he could offer at a profit to the other boys. One day, he hit upon the idea of buying a rubber woman (it was plastic and inflatable) and renting it out. The customer base consisted of 400 teenage boys confined to a penal colony upon a wind-blasted heath.

Consider the challenges. How was he to procure one (no internet)? Where would he hide the plastic inamorata to prevent theft or confiscation by the authorities? How would he find customers (no smart phones)? What should he charge? What was to prevent competition? And, of course, what happened? All, I think, best left to the imagination.

There is a test for `smarts’ that Sir Peter Medawar was fond of. I often think of it when teaching equilibrium.

If you have ever seen an El Greco, you will notice that the figures and faces are excessively elongated. The eye surgeon Patrick Trevor-Roper, brother to the historian Hugh, offered an explanation. Readers of a certain vintage will recall the long-running feud between Hugh Trevor-Roper and Evelyn Waugh. Waugh said that the best thing Hugh Trevor-Roper could do would be to change his name and leave Oxford for Cambridge. Hugh Trevor-Roper eventually became Lord Dacre and left Oxford for Cambridge. But, I digress.

Returning to Patrick, he suggested that El Greco had a form of astigmatism, which distorted his vision and led to elongated images forming on his retina. Medawar’s question was simple: was Patrick Trevor-Roper correct?