A mixed strategy is a probability distribution over the set of pure strategies. When a player implements a mixed strategy, she chooses a pure strategy at the outset of the game, and follows that pure strategy all through the game.

A behavior strategy is a function that assigns a mixed action to each of the player’s information sets. When a player implements a behavior strategy, whenever the play reaches an information set the player chooses an action according to the mixed action that corresponds to that information set.

Thus, if the play crosses the same information set twice, a mixed strategy will choose the same action at both visits, while a behavior strategy will choose the action at each visit anew, independently of past play.
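In games with imperfect recall the two notions genuinely differ. A classic toy case (not from the post, added for illustration) is the absent-minded driver: a single information set that the play visits twice, with actions "exit" and "continue" at each visit.

```python
# Absent-minded driver: one information set visited twice (imperfect recall).
# Pure strategies at the information set: "exit" or "continue".

def mixed_outcome_dist(q):
    # A mixed strategy picks a pure strategy once and follows it throughout:
    # with probability q the driver exits at the first intersection,
    # with probability 1-q she continues at both intersections.
    return {"exit_first": q, "exit_second": 0.0, "continue_both": 1 - q}

def behavior_outcome_dist(p):
    # A behavior strategy randomizes afresh at each visit:
    # exit with probability p each time the information set is reached.
    return {"exit_first": p,
            "exit_second": (1 - p) * p,
            "continue_both": (1 - p) ** 2}
```

No mixed strategy puts positive probability on exiting at the second intersection, while any behavior strategy with 0 < p < 1 does; with perfect recall, as in Kuhn's Theorem, this gap disappears.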

The well-known Kuhn's Theorem states that in extensive-form games with perfect recall, mixed strategies and behavior strategies are equivalent: it is irrelevant whether the player makes her choices when the play visits the information set, or whether she makes these choices at the outset of the game, because the condition of perfect recall implies in particular that the play can cross each information set at most once.

Kuhn’s Theorem holds for finite games: the depth of the game is finite, and there are finitely many actions at each decision node. The theorem can be extended to the case of infinite trees (countably many nodes), provided the number of actions at each decision node is finite.

In stopping problems the number of nodes has the cardinality of the continuum, and therefore Kuhn’s Theorem does not apply. Stopping problems are a model used in mathematical finance: at every stage an agent receives payoff-relevant information, and she has to choose when to stop (when to sell stocks, or when to exercise options). The payoff is given by some stochastic process. Yuri Kifer asked me a question that boiled down to the following: in the model of stopping problems, are mixed strategies and behavior strategies equivalent? My first response was “sure”. Yuri wanted to see a proof. When I tried to write down the proof, a technical issue emerged. It took some time and the energy of three people to provide a definite answer: “sure”.

A quite amusing post I stumbled upon describes pricing algorithms used by book retailers at Amazon.com, and what happens when two retailers use such algorithms. I am pretty sure that economists can say much about the use of these algorithms, optimal pricing algorithms, whether the customer gains or loses when retailers use them, and other similar questions. Alas, I am a mathematician, so all I can do is read the post and smile.

Symbolab is a new search engine that allows one to search for equations. A great idea for those of us who, for example, forgot what the model of “stochastic games” is called, but remember that in stochastic games one is interested in the discounted sum of payoffs. The search engine presumably searches through articles to find the equation you are looking for.

Excited about the prospect that forgetting words is no longer devastating, I immediately searched for the formula of the discounted sum of payoffs:

$\sum _{t=1}^{\infty }\left(\lambda \left(1-\lambda \right)^{t-1}r\left(s_t,a_t\right)\right)$

Unfortunately, 0 results were found. I did not give up, and changed the payoff function from “r” to “c” and then to “u”; maybe the engine prefers costs or utilities. When I searched for the equation with costs, I got one result: a sum related to convex sets that has nothing whatsoever to do with discounted sums. When I searched for the equation with utilities, the engine did better: I got 174 results! But they were various sums, and none of them (at least, none of the top 30, which I looked at) involved discounted sums. There were references to statistics, ergodic theory, and various other topics, but no economics or game theory.

Maybe stochastic games are not that common on the net. Let’s look for something simpler:

$u_i\left(\sigma\right)\ge u_i\left(\sigma_{-i},\sigma'_i\right)$

which is the condition that defines Nash equilibrium. 0 results again. I gave up.

Unfortunately, it seems that we will have to continue remembering names of concepts, and will have to consult with colleagues when we need results in topics we are not familiar with. Magic has not been found yet.

It was a pleasure to awaken to the news that this year’s Nobel (memorial) prize in Economics went to Shapley and Roth (the Schwartz lecture series hits the bullseye again). Even more so had it gone to me… but I have yet to see pigs in the sky.

Pleasure turned to amusement as I heard and read the attempts of journalists to summarize the contributions honored. NPR suggested that it was about applying statistics. Forbes had a piece that, among Indians, would be described as putting shit in milk. This always makes me wonder about the other things they get wrong in subjects I have no knowledge of. Nevertheless, I will plug one outlet for reasons that will become obvious upon reading it.

In this post, I set myself the task of seeing if I can do a better job than the fourth estate of conveying the nature of the contribution that was honored, Oct 15th, 2012. Here goes.

The fictional decentralized markets of the textbook, like the frictionless plane in a vacuum used in physics, are a useful device for establishing a benchmark. Real markets, however, must deal with frictions and the imperfections of their participants. One such market, College Admissions in the US, is largely decentralized. This decentralization increases uncertainty and raises costs. Students, for example, must forecast which colleges are likely to accept them. The greater the uncertainty in these forecasts, the more ‘insurance’ is purchased, either by applying to a large set of colleges or by ‘aiming low’. On the college side, this insurance makes yields difficult to forecast. Increasing acceptance rates to increase yields drives future application numbers down, so waiting lists grow. These problems could be eliminated were one to switch to a centralized admissions market. How would one design such a centralized process? This is the question that the work of Shapley (and the late David Gale) and Alvin Roth addresses.

A major hurdle that a centralized market for college admissions must overcome is that it must match students with colleges in a way that respects the preferences and incentives of both parties. A centralized market cannot force a student to attend a college she does not want to attend, or require a college to accept a particular student. There is always the threat that the participants pick up their marbles and walk away. If enough do, the incentives for the students and colleges that remain in the centralized market decline. Would a student participate in a centralized market if certain brand-name colleges opted out? The work of (Gale &) Shapley was the first to formalize this concern by designing centralized markets that would be immune to defections on the part of participants, i.e., stable. Their seminal paper articulated a model and a mechanism for matching students to colleges that would be stable in this sense. Alvin Roth’s own work builds on this in a number of ways. The first is to use the notion of stability to explain why some centralized markets fail. The second is to highlight the importance of other sources of instability associated with, say, timing: participants may wish to ‘jump the clock’. Colleges may, for example, offer admission to high school students in their junior year, when there might be less competition for those students. The third is to apply the ideas developed here in other contexts to allocate scarce resources where money as a medium of exchange is ruled out, most notably kidneys.

The work honored this year has its roots in a specialty of game theory long considered unfashionable, but one to which Shapley made deep contributions: co-operative game theory. One can trace an unbroken line between the concern for stability in the design of markets and the abstract notions of stability discussed, for example, in the first book on game theory, by von Neumann and Morgenstern. It proves, once again, Keynes’ dictum:

“The power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas.”

Le Monde tells us (in French) that researchers found 47-million-year-old fossils of nine mating pairs of turtles of the species Allaeochelys crassesculpta in a lake in Germany. These fossils, which, according to the researchers, are the only fossils of mating pairs of animals ever found, taught the researchers a lot about this extinct species. But what caught my eye is a probabilistic statement made by one of the researchers:

“Millions of animals live and die every year, and many of them fossilize by chance, but there is really no reason for that to happen while you are mating. It is highly improbable that both partners die at the same time, and the chances that both are fossilized together are even slimmer,” explained Walter Joyce, of the German University of Tübingen. With several couples, the probabilities shrink even further.

Since the name Walter Joyce sounds English-speaking, I searched for a version in English. I found this:

“No other vertebrates have ever been found like these, so these are truly exceptional fossils,” Joyce said. “The chances of both partners dying while mating are extremely low, and the chances of both partners being preserved as fossils afterward even lower. These fossils show that the fossil record has the potential to document even the most unlikely event if the conditions are right.”

The differences between the English version and the French version are slight yet important. The French reader learns that it is improbable that a pair of animals will die together while mating, and that it is even less probable that they will be fossilized together. The chance that this will happen to several mating pairs is smaller still. This would be true if the death-while-mating-and-being-fossilized events of the different couples were independent. However, the article itself explains to the readers that here the events are dependent: during the mating process the turtles freeze, and sometimes sink to the bottom of the lake, where the water is poisonous. Thus, in this lake finding a fossil of a single turtle is less probable than finding the fossil of a mating pair of turtles, and the probability of finding several fossils of mating pairs is not necessarily lower than the probability of finding a single fossil of a mating pair… The English version does not correct this inaccuracy, but it does end with a note that can be interpreted as a correction of previous mistakes: “These fossils show that the fossil record has the potential to document even the most unlikely event if the conditions are right.”
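The dependence argument can be made concrete with a toy calculation (all numbers below are invented for illustration, not taken from the article):

```python
# Toy model of why dependence reverses the "compound improbability" intuition:
# mating pairs freeze, sink into the poisonous bottom layer, and are therefore
# far more likely to be preserved than a lone turtle that dies elsewhere.
p_single_fossil = 0.0001    # a lone dead turtle ends up fossilized (invented)
p_die_mating = 0.001        # a pair dies together while mating (invented)
p_pair_preserved = 0.5      # a sunk pair is preserved: anoxic water (invented)

p_pair_fossil = p_die_mating * p_pair_preserved
# Despite pair fossilization requiring two "unlikely" events, with these
# numbers a mating-pair fossil is more likely than a single-turtle fossil.
```

The point is not the particular numbers but the structure: once dying-while-mating strongly raises the chance of preservation, multiplying small "independent" probabilities gives the wrong answer.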
Next week the semester starts, and I will get a new bunch of first-year undergrad students who are eager to study Introduction to Probability. I will ask them to read this article and spot the mistakes. I wonder how many will find them.

One reason given for the value of an MBA degree is the relationships that one develops with other students, as well as the connection to the larger alumni network. Such relationships can eventually be used to open doors, secure a ‘place at the table’, and be traded with others. While I’ve long since replaced the belief in ‘res ipsa loquitur’ with ‘who you know matters’, I’m still not convinced by the relationship story.

Suppose introductions to gatekeepers and decision makers are scarce resources. When handing them out, why should I favor someone just because we attended the same institution? Presumably what matters is what good turn the other might do for me. Surely this will depend on the position held rather than the school attended. Furthermore, why should the other’s academic pedigree suggest anything about the likelihood of the other returning the favor in the future? I know of no B-school that claims to be selecting a class of future Catos.

True, favors are not requested or granted until a bond is established between the parties. Having something in common assists the formation of such bonds. But, why should having attended the same school be any more useful in this regard than a common interest in wine, golf or stamps?

Finally, if the alumni network is valuable, then merging two small networks should increase value for members of either network. Thus, in much the same way that some airlines share their frequent flyer programs (e.g., Star Alliance), we should see certain schools merging their networks. Haas and Anderson? Johnson School and Olin? One sees something like this at the executive masters level but not at the full-time MBA level.

A paper by Azevedo, Weyl and White in a recent issue of Theoretical Economics caught my eye. It establishes the existence of Walrasian prices in an economy with indivisible goods, a continuum of agents and quasilinear utility. The proof uses Kakutani’s theorem. Here is an argument based on an observation about extreme points of linear programs. It shows that there is a way to scale up the number of agents and goods so that in the scaled-up economy a Walrasian equilibrium exists.
First, the observation. Consider ${\max \{cx: Ax = b, x \geq 0\}}$. The matrix ${A}$ and the RHS vector ${b}$ are all rational. Let ${x^*}$ be an optimal extreme point solution and ${\Delta}$ the absolute value of the determinant of the optimal basis. Then, ${\Delta x^*}$ must be an integral vector. Equivalently, if in our original linear program we scale the constraints by ${\Delta}$, the new linear program has an optimal solution that is integral.
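A quick numerical check of this observation, on a made-up 2x2 basis (nothing here is from the paper): by Cramer's rule, each component of the basic solution is a ratio of integer determinants, so multiplying by the determinant of the basis clears all denominators.

```python
from fractions import Fraction

# Solve Bx = b for an integral 2x2 basis B and integral RHS b via Cramer's
# rule, returning the exact rational solution and Delta = |det(B)|.
def solve_2x2(B, b):
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    x0 = Fraction(b[0] * B[1][1] - B[0][1] * b[1], det)
    x1 = Fraction(B[0][0] * b[1] - b[0] * B[1][0], det)
    return (x0, x1), abs(det)

x, delta = solve_2x2([[2, 1], [1, 3]], [1, 1])
# x = (2/5, 1/5), delta = 5, and delta * x = (2, 1) is integral
```

The same cancellation is what makes ${\Delta x^*}$ integral for any optimal extreme point of the LP.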

Now, apply this to the existence question. Let ${N}$ be a set of agents, ${G}$ a set of distinct goods and ${u_i(S)}$ the utility that agent ${i}$ enjoys from consuming the bundle ${S \subseteq G}$. Note, no restrictions on ${u}$ beyond non-negativity and quasi-linearity.

As utilities are quasi-linear we can formulate the problem of finding the efficient allocation of goods to agents as an integer program. Let ${x_i(S) = 1}$ if the bundle ${S}$ is assigned to agent ${i}$ and 0 otherwise. The program is

$\displaystyle \max \sum_{i \in N}\sum_{S \subseteq G}u_i(S)x_i(S)$

subject to
$\displaystyle \sum_{S \subseteq G}x_i(S) \leq 1\,\, \forall i \in N$

$\displaystyle \sum_{i \in N}\sum_{S \ni g} x_i(S) \leq 1\,\, \forall g \in G$

$\displaystyle x_i(S) \in \{0,1\}\,\, \forall i \in N, S \subseteq G$

If we drop the integer constraints we have an LP. Let ${x^*}$ be an optimal solution to that LP. Complementary slackness allows us to interpret the dual variables associated with the second constraint as Walrasian prices for the goods. Also, any bundle ${S}$ such that ${x_i^*(S) > 0}$ must be in agent ${i}$'s demand correspondence.
Let ${\Delta}$ be the absolute value of the determinant of the optimal basis. We can write ${x_i^*(S) = \frac{z_i^*(S)}{\Delta}}$ for all ${i \in N}$ and ${S \subseteq G}$ where ${z_i^*(S)}$ is an integer. Now construct an enlarged economy as follows.

Scale up the supply of each ${g \in G}$ by a factor of ${\Delta}$. Replace each agent ${i \in N}$ by ${N_i = \sum_{S \subseteq G}z_i^*(S)}$ clones. It should be clear now where this is going, but let’s dot the i’s. To formulate the problem of finding an efficient allocation in this enlarged economy let ${y_{ij}(S) = 1}$ if bundle ${S}$ is allocated to the ${j^{th}}$ clone of agent ${i}$ and zero otherwise. Let ${u_{ij}(S)}$ be the utility function of the ${j^{th}}$ clone of agent ${i}$. Here is the corresponding integer program.

$\displaystyle \max \sum_{i \in N}\sum_{j \in N_i}\sum_{S \subseteq G}u_{ij}(S)y_{ij}(S)$

subject to
$\displaystyle \sum_{S \subseteq G}y_{ij}(S) \leq 1\,\, \forall i \in N, j \in N_i$

$\displaystyle \sum_{i \in N}\sum_{j \in N_i}\sum_{S \ni g} y_{ij}(S) \leq \Delta\,\, \forall g \in G$

$\displaystyle y_{ij}(S) \in \{0,1\}\,\, \forall i \in N, j \in N_i, S \subseteq G$

It’s easy to see that a feasible solution is obtained by giving, for each ${i}$ and ${S}$ such that ${z_i^*(S) > 0}$, the bundle ${S}$ to ${z_i^*(S)}$ of the clones in ${N_i}$. The optimal dual variables from the relaxation of the first program complement this solution, which verifies optimality. Thus, Walrasian prices that support the efficient allocation in the augmented economy exist.
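The cloning step can be sketched in code on made-up data (the fractional assignment ${z}$ and the determinant ${\Delta = 2}$ below are assumptions for illustration):

```python
from fractions import Fraction

# Hypothetical fractional LP solution: z[i][S] = weight on bundle S for agent i.
z = {1: {("g1",): Fraction(1, 2), ("g2",): Fraction(1, 2)},
     2: {("g1", "g2"): Fraction(1, 2), (): Fraction(1, 2)}}
delta = 2  # |det| of the optimal basis (assumed for this toy instance)

# Scaled economy: delta copies of each good; agent i is replaced by
# delta * z[i][S] clones receiving bundle S, for each S.
clones = {}
for i, bundles in z.items():
    assignment = []
    for S, frac in bundles.items():
        count = delta * frac
        assert count.denominator == 1  # delta * z* is integral
        assignment.extend([S] * int(count))
    clones[i] = assignment

# Feasibility check: each good is consumed at most delta times.
usage = {}
for assignment in clones.values():
    for S in assignment:
        for g in S:
            usage[g] = usage.get(g, 0) + 1
```

Every clone receives a whole bundle, and the scaled supply constraint holds exactly because ${\Delta z^*}$ is integral.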

In a previous post I described an infinite-horizon perfect-information game without a subgame-perfect equilibrium. Does there exist another refinement of Nash equilibrium that keeps the spirit of subgame perfection and is nonempty in every infinite-horizon perfect-information game?

One refinement of the concept of Nash equilibrium is that of trembling-hand equilibrium for extensive-form games, also called extensive-form perfect equilibrium. Call a strategy profile σ an extensive-form perfect equilibrium if there exists a sequence (σn) that satisfies the following properties:

1. The sequence (σn) converges to σ.
2. Under σn, each player in each of his decision nodes plays each action with probability at least 1/n.
3. σn is a Nash equilibrium in the game Γn in which every player must play in each of his decision nodes each action with probability at least 1/n. (Note that the third requirement implies the second requirement).

Does every game admit such an extensive-form perfect equilibrium? The following example, to appear in a new paper of János Flesch, Jeroen Kuipers, Ayala Mashiah-Yaakovi, Gijs Schoenmakers, Eran Shmaya, your humble servant, and Koos Vrieze, is a variant of the example I provided here, and it shows that this is not the case. The conclusion is that at present we do not have any refinement of the concept of Nash equilibrium for infinite-horizon games that keeps the spirit of subgame perfection and is always nonempty.

I first met Jean-Francois Mertens while I was a graduate student at the Hebrew University of Jerusalem. He often visited the Center for Rationality at the Hebrew University, and in one of those visits Abraham Neyman, my Ph.D. adviser, asked Mertens to sit with me and listen to my research.

Mertens has contributed to several areas in game theory.

His paper with Elon Kohlberg on stability of equilibria has opened a new research area.

His work with Shmuel Zamir on the universal belief space is fundamental and complements Harsanyi's work on belief hierarchies.

His study of the min-max value and max-min value in general repeated games had a huge influence on the research in this area. The Mertens conjecture on repeated games with signals states that in zero-sum repeated games with signals, whenever player 1 knows everything that player 2 knows (that is, the signal of player 1 contains the signal of player 2), the value of the n-stage game converges to the max-min value. It has been the starting point of many works on this topic, and it is still open, though recently some advances have been made toward solving it.

His contribution to the Shapley value in games with a continuum of players includes the extension of the diagonal formula, which historically applied to smooth nonatomic games, to a larger class of games that includes nondifferentiable and even discontinuous games.

For me, his most significant contribution is to the area of stochastic games, where he proved, together with Abraham Neyman, that every two-player zero-sum stochastic game has a uniform value. When Mertens and Neyman proved this result I was still in elementary school. The story I heard about the inception of the result is that in a workshop on repeated games at CORE in the winter of 1978–1979, Sylvain Sorin was giving a talk on the Big Match, the first nontrivial two-player zero-sum undiscounted stochastic game to be solved. For Abraham Neyman, who was sitting in the audience, this was his first encounter with repeated games and stochastic games, and so he asked many questions, ranging from basic ones to ideas for extending the result to general stochastic games. Jean-Francois Mertens, who was also in the audience, patiently answered Neyman’s questions and explained why each route that Neyman suggested for extending the result was bound to fail. Like two swordsmen, Neyman made another suggestion and Mertens fended it off. The result of this match was one of the most beautiful papers in game theory.

I met Jean-Francois Mertens in Toulouse in September last year, during the conference that Jérôme Renault organized. We had dinner together. During the conference Mertens was very tired, and when he was back in Belgium he was diagnosed with advanced lung cancer. He passed away on the night of July 17, 2012. We will all miss him. May his memory be blessed.

In this post I describe an alternative proof of a nifty result that appears in a forthcoming paper by Goeree, Kushnir, Moldovanu and Shi. They show (under private values) that given any Bayesian incentive compatible mechanism M, there is a dominant strategy mechanism that gives each agent the same expected surplus as M provides.

For economy of exposition only, suppose there are 2 agents and a finite set of possible outcomes, ${A}$. Suppose also the same type space, ${\{1, \ldots, m\}}$, for both. Let ${f(\cdot)}$ be the density function over types. To avoid clutter, assume the uniform distribution, i.e., ${f(\cdot) = \frac{1}{m}}$. Nothing in the subsequent analysis relies on this.

When agent 1 reports type ${s}$ and agent 2 reports type ${t}$, denote by ${z_r(s,t) }$ the probability that outcome ${r \in A}$ is selected. The ${z}$'s must be non-negative and satisfy ${\sum_{r \in A}z_r(s,t) = 1.}$

Associated with each agent ${i}$ is a vector ${\{a_{ir}\}_{r \in A}}$ that determines, along with her type, the utility she enjoys from a given allocation. In particular, given the allocation rule ${z}$, the utility that agent ${1}$ of type ${t}$ enjoys when the other agent reports type ${s}$ is ${\sum_{r \in A}ta_{1r}z_r(t,s).}$ A similar expression holds for agent 2.

Let ${q_i(s,t) = \sum_{r \in A}a_{ir}z_r(s,t).}$ Interpret the ${q}$'s as the ‘quantity’ of goods that each agent receives. Dominant strategy incentive compatibility implies that ${q_1(s,t)}$ should be monotone increasing in ${s}$ for each fixed ${t}$, and that ${q_2(s,t)}$ should be monotone increasing in ${t}$ for each fixed ${s}$. The interim quantities will be ${mQ_i(s) = \sum_t\sum_{r \in A}a_{ir}z_r(s,t).}$ Bayesian incentive compatibility (BIC) implies that the ${Q_i}$'s should be monotone. To determine whether given BIC interim ‘quantities’ ${Q_i}$ can be implemented via dominant strategies, we need to know whether the following system is feasible:

$\displaystyle \sum_{r \in A}a_{1r}[z_r(i,j) - z_r(i-1,j)] \geq 0\,\, \forall \,\, i = 2, \ldots, m \ \ \ \ \ (1)$

$\displaystyle \sum_{r \in A}a_{2r}[z_r(i,j) - z_r(i,j-1)] \geq 0\,\, \forall \,\, j = 2, \ldots, m \ \ \ \ \ (2)$

$\displaystyle \sum_t\sum_{r \in A}a_{1r}z_r(s,t) = mQ_1(s) \ \ \ \ \ (3)$

$\displaystyle \sum_s\sum_{r \in A}a_{2r}z_r(s,t) = mQ_2(t) \ \ \ \ \ (4)$

$\displaystyle \sum_{r \in A}z_r(s,t) = 1\,\, \forall s,t \ \ \ \ \ (5)$

$\displaystyle z_r(i,j) \geq 0\,\, \forall i,j,r \ \ \ \ \ (6)$
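As a concrete illustration of the quantities appearing in this system, here is a toy instance (m = 2 types, two outcomes; the weights ${a_{1r}}$ and the allocation rule are made up for this sketch):

```python
# Toy instance: compute per-profile 'quantities' q_1(s,t) and interim
# quantities Q_1(s), then check the monotonicity that DIC/BIC require.
m = 2
a1 = {"x": 1.0, "y": 0.0}   # assumed valuation weights a_{1r} for agent 1
# z[(s, t)][r] = probability outcome r is selected at report profile (s, t).
z = {(1, 1): {"x": 0.50, "y": 0.50}, (1, 2): {"x": 0.25, "y": 0.75},
     (2, 1): {"x": 0.75, "y": 0.25}, (2, 2): {"x": 0.50, "y": 0.50}}

def q1(s, t):
    # q_1(s, t) = sum_r a_{1r} z_r(s, t)
    return sum(a1[r] * z[(s, t)][r] for r in a1)

# Q_1(s) = (1/m) * sum_t q_1(s, t): the interim quantity for type s.
Q1 = {s: sum(q1(s, t) for t in (1, 2)) / m for s in (1, 2)}
# This z is monotone profile-by-profile (dominant strategy) and hence
# also in the interim sense (Bayesian).
```

The interesting cases, which the proof below addresses, are allocation rules where the interim ${Q_i}$ are monotone but the profile-by-profile ${q_i}$ are not.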

System (1-6) is feasible iff the optimal objective function value of the following program is zero:

$\displaystyle \min \sum_{i,j}w_{ij} + \sum_{i,j}h_{ij}$

subject to
$\displaystyle \sum_{r \in A}a_{1r}[z_r(i,j) - z_r(i-1,j)] + w_{ij} - u_{ij} = 0\,\, \forall \,\, i = 2, \ldots, m\,\, (\mu_{ij}) \ \ \ \ \ (7)$

$\displaystyle \sum_{r \in A}a_{2r}[z_r(i,j) - z_r(i,j-1)] + h_{ij} - v_{ij} = 0\,\, \forall \,\, j = 2, \ldots, m\,\, (\nu_{ij}) \ \ \ \ \ (8)$

$\displaystyle \sum_j\sum_{r \in A}a_{1r}z_r(i,j) = mQ_1(i)\,\, (\alpha_i) \ \ \ \ \ (9)$

$\displaystyle \sum_i\sum_{r \in A}a_{2r}z_r(i,j) = mQ_2(j)\,\, (\beta_j) \ \ \ \ \ (10)$

$\displaystyle \sum_{r \in A}z_r(i,j) = 1\,\, \forall i,j\,\, (\lambda(i,j)) \ \ \ \ \ (11)$

$\displaystyle w_{ij},\,\, h_{ij},\,\, u_{ij},\,\, v_{ij},\,\, z_r(i,j) \geq 0\,\, \forall i,j,r \ \ \ \ \ (12)$

Let ${(z^*, w^*, h^*, u^*, v^*)}$ be an optimal solution to this program.
Suppose, for a contradiction, that there is a pair ${(p,q)}$ such that ${w^*_{pq} > 0}$. I will argue that there must then exist an ${s}$ such that ${u^*_{ps} > 0}$. Suppose not; then for each ${j \neq q}$, either ${w^*_{pj} > 0}$, or ${w^*_{pj} = 0}$ and ${u^*_{pj} = 0}$ (at optimality, it cannot be that ${w^*_{pq}}$ and ${u^*_{pq}}$ are both non-zero). In this case

$\displaystyle m[Q_1(p)-Q_1(p-1)] = \sum_j\sum_{r \in A}a_{1r}z^*_r(p,j) - \sum_j\sum_{r \in A}a_{1r}z^*_r(p-1,j)$

$\displaystyle = \sum_j\sum_{r \in A}a_{1r}[z^*_r(p,j) - z^*_r(p-1,j)] = -\sum_j w^*_{pj}.$

This last term is negative, contradicting the monotonicity of ${Q_1}$ that BIC requires. Therefore, there is an ${s}$ such that ${w^*_{ps} = 0}$ but ${u^*_{ps} > 0}$.

Let ${Z_1(i,j) = \sum_{r \in A}a_{1r}z^*_r(i,j)}$, ${Z_2(i,j) = \sum_{r \in A}a_{2r}z^*_r(i,j)}$ and denote by ${Z(i,j)}$ the point ${(Z_1(i,j), Z_2(i,j))}$. Observe that ${Z(i,j)}$ is in the convex hull ${K}$ of ${\{(a_{1r}, a_{2r})\}_{r \in A}}$ for all ${i,j}$. Thus choosing ${\{z_r(i,j)\}_{r \in A}}$'s amounts to choosing a point ${Z(i,j) \in K}$. Equivalently, choosing a point ${Z(i,j) \in K}$ gives rise to a set of ${\{z_r(i,j)\}_{r \in A}}$'s. For convenience assume that ${Z(i,j)}$ is in the strict interior of ${K}$ for all ${(i,j)}$ and that $K$ is full dimensional. This avoids having to deal with secondary arguments that obscure the main idea.

Recall, ${w^*_{pq} > 0}$ implies ${Z_1(p,q) < Z_1(p-1,q)}$, while ${u^*_{ps} > 0}$ implies that ${Z_1(p,s) > Z_1(p-1,s)}$. Take all points ${\{Z(i,q)\}_{i \geq p}}$ and shift them horizontally to the right by ${\delta}$. Call these new points ${\{Z'(i,q)\}_{i \geq p}}$. Observe that ${Z'(i,q) \in K}$ for all ${i \geq p}$. Next, take all points ${\{Z(i,s)\}_{i \geq p}}$ and shift them horizontally to the left by ${\delta}$ to form new points ${\{Z'(i,s)\}_{i \geq p}}$. These points are also in ${K}$. Leave all other points ${\{Z(i,j)\}_{j \neq q,s}}$ unchanged.

Because the vertical coordinates of all points were left unchanged, (8) and (10) are satisfied by this choice of points. Because ${\{Z(i,q)\}_{i \geq p}}$ and ${\{Z(i,s)\}_{i \geq p}}$ were shifted in opposite directions along the horizontal, (9) is still true. Finally, because all points in ${\{Z(i,q)\}_{i \geq p}}$ and ${\{Z(i,s)\}_{i \geq p}}$ were shifted by the same amount, (7) continues to hold.
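These invariance claims can be sanity-checked numerically on made-up data (a 3x3 grid of ${Z_1}$ values; the rows, columns and shift size below are assumptions for illustration):

```python
# Z1[i][j] plays the role of Z_1(i, j); shift rows i >= p in column qcol
# right by d and in column scol left by d, leaving other entries alone.
Z1 = [[0.2, 0.3, 0.5],
      [0.1, 0.4, 0.5],
      [0.3, 0.3, 0.4]]
p, qcol, scol, d = 1, 0, 2, 0.1   # 0-indexed rows/columns

shifted = [row[:] for row in Z1]
for i in range(p, len(Z1)):
    shifted[i][qcol] += d   # column q moves right
    shifted[i][scol] -= d   # column s moves left

# Row sums (the interim quantities in (9)) are unchanged, and for i > p the
# increments Z_1(i, j) - Z_1(i-1, j) in (7) are also unchanged; only the
# increment at i = p moves, in the direction that shrinks w and u.
row_sums_before = [sum(r) for r in Z1]
row_sums_after = [sum(r) for r in shifted]
```

The vertical coordinates ${Z_2}$ are untouched, which is why (8) and (10) hold automatically.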

The shift leftwards of ${Z(p,s)}$ reduces ${u_{ps}}$ while the rightward shift of ${Z(p,q)}$ reduces ${w_{pq}}$. Thus, we get a new feasible solution with a lower objective function value, a contradiction.

If ${Z(p,q)}$ and ${Z(p,s)}$ are not in the interior of ${K}$ but on the boundary, then horizontal shifts alone may place them outside of ${K}$. In the case of ${Z(p,q)}$ this can only happen if ${Z_2(p,q) > Z_2(p-1,q)}$. In this case, shift ${Z(p,q)}$ to the right by ${\delta}$ as well and then downwards by the same amount. This would have to be matched by a corresponding upward shift of some point ${Z(h,q)}$. Similarly with ${Z(p,s)}$.