
200 students for a 9 am class in spite of a midterm on day 3; perhaps they’ve not read the syllabus.

Began with the ultimatum game framed in terms of a seller making a take-it-or-leave-it offer to the buyer. The game allows one to make two points at the very beginning of class.

1) The price the seller chooses depends on her model of how the buyer will behave. One can draw this point out by asking sellers to explain how they came by their offers. The best offers to discuss are the really low ones (i.e. those that give most of the surplus to the buyer) and the offers that split the difference.

2) Under the assumption that `more money is better than less’, point out that the seller captures most of the gains from trade. Why? The ability to make a credible take-it-or-leave-it offer.

This makes for a smooth transition into the model of quasi-linear preferences. Some toy examples of how buyers make choices based on surplus. Emphasize that it captures the idea that buyers make trade-offs (pay more if you get more; if it’s priced low enough, it’s good enough). Someone will ask about budget constraints. A good question; ignore budgets for now and come back to them later in the semester.
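A minimal way to write the toy examples on the board, in my own notation (not necessarily the one used in class): a buyer with reservation price $v$ facing price $p$ gets surplus

$u = v - p$ if she buys, and $u = 0$ if she does not,

so she buys iff $v \ge p$, and when choosing among several options she picks the one with the largest surplus $v_i - p_i$.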

Next, point out that buyers do not share the same reservation price (RP) for a good or service. Introduce the demand curve as a vehicle for summarizing the variation in RPs. Emphasize that the demand curve tells you demand as you change your own price, holding other prices fixed.
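In the same (hypothetical) notation, the demand curve is just a tally of reservation prices:

$D(p) = \#\{\text{buyers } i : v_i \ge p\},$

computed holding all other prices fixed; as $p$ falls, more RPs clear the bar and demand rises.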

On to monopoly with constant unit costs, limited to a uniform price. Emphasize that monopoly in our context does not mean the absence of competition, only that competition keeps other sellers’ prices fixed as we change ours. The reason for such an assumption is to understand first how buyers respond to one seller’s price changes.

How does the monopolist choose the profit-maximizing price? Trade-off between margin and volume. A simple monopoly pricing exercise. The answer by itself is uninteresting; what we want to know is what the profit-maximizing price depends upon.

Introduce the elasticity of demand, its meaning and derivation. Then, a table of how profit and elasticity vary with price in the toy example introduced earlier. Point out how elasticity rises as price rises: demand starts to drop off faster than the margin rises. Explain why we don’t stop where elasticity is 1. This is a useful place to point out that at that price a small price increase is revenue neutral but total costs fall. So, the uniform price is doing two things: determining how much is captured from buyers and controlling total production costs. The table also illustrates that the elasticity of demand matters for choosing price.
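For anyone who wants to reproduce such a table, here is a small sketch. The linear demand D(p) = 100 - p and the unit cost c = 20 are placeholders of my own, not the numbers used in class.

    # Toy monopoly pricing table: price, quantity, elasticity, revenue, profit.
    # Demand D(p) = 100 - p and unit cost c = 20 are illustrative placeholders.

    def demand(p):
        return max(100 - p, 0)

    def elasticity(p):
        # Point elasticity |dD/dp| * p / D for the linear demand D(p) = 100 - p.
        q = demand(p)
        return p / q if q > 0 else float("inf")

    c = 20  # constant unit cost

    print(f"{'price':>6} {'qty':>5} {'elast':>7} {'revenue':>9} {'profit':>8}")
    for p in range(30, 91, 10):
        q = demand(p)
        rev = p * q
        profit = (p - c) * q
        print(f"{p:>6} {q:>5} {elasticity(p):>7.2f} {rev:>9} {profit:>8}")

    # Elasticity rises with price; profit peaks at p = 60, where elasticity is 1.5,
    # not at p = 50, where elasticity is 1 and revenue alone is maximized.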

Segue into the markup formula. Explain why we should expect some kind of inverse relationship between markup and elasticity. Do the derivation of the markup formula with constant unit costs.
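The derivation I have in mind is the standard one with constant unit cost $c$ (my notation): the monopolist maximizes $\pi(p) = (p - c)D(p)$, so the first-order condition is

$D(p) + (p - c)D'(p) = 0.$

Dividing by $D(p)$ gives $1 - \frac{p-c}{p}\,\epsilon = 0$, where $\epsilon = -pD'(p)/D(p)$ is the elasticity of demand, and hence the markup (Lerner) formula

$\frac{p - c}{p} = \frac{1}{\epsilon},$

which makes the inverse relationship between markup and elasticity explicit.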

Now to something interesting to make the point that what has come before is very useful: author vs. publisher, who would prefer a higher price for the book? You’ll get all possible answers, which is perfect. Start with how revenue differs from profit (authors get a percentage of revenue). This difference means their interests are not aligned, so they should pick different prices. But which will be larger? Enter the markup formula. The author wants the price where elasticity is 1. The publisher wants to price where elasticity is bigger than 1. So the publisher wants a higher price. Wait, what about e-books? Then author and publisher want the same price, because unit costs are zero.
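A worked number helps here, using the same placeholder demand $D(p) = 100 - p$ and unit cost $c = 20$ as above (again, mine, not the class’s): the author maximizes revenue $p(100-p)$ and picks $p = 50$, where $\epsilon = 1$; the publisher maximizes $(p-20)(100-p)$ and picks $p = 60$, where $\epsilon = 1.5 = p/(p-c)$. With e-books, $c = 0$ and both problems collapse to revenue maximization, so both pick $p = 50$.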

This is the perfect opportunity to introduce the Amazon letter to authors telling them that the elasticity of demand for e-books at the current $14.99 price is about 2.4, well above 1. Clearly, all parties should agree to lower the price of e-books. But what about traditional books? Surely a lower e-book price will cause some readers to switch from the traditional book to the e-book. Shouldn’t we look at the loss in profit from that as well? A capital point, but make life simple: suppose we have only e-books. Notice that under the agency model, where Amazon gets a percentage of revenue, everyone’s incentives appear to be aligned.
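To see why the letter’s number sounds persuasive: with a revenue share and (near) zero unit cost, everyone on the e-book wants to maximize revenue, and at an own-price elasticity of 2.4 a 1% price cut raises quantity by about 2.4%, so revenue changes by a factor of roughly 0.99 × 1.024 ≈ 1.014, i.e. rises about 1.4%, holding other prices fixed. That last clause is where the argument below comes apart.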
Is Amazon correct in its argument that dropping the e-book price will benefit me, the author? As expressed in their letter, no. To say that the elasticity of demand for my book at the current price is 2.4 means that if I drop my price 1%, demand will rise 2.4% HOLDING OTHER PRICES FIXED. However, Amazon is not talking about dropping the price of my book alone. They are urging a drop in the price of ALL books. It may well be that a drop in price for all e-books will result in an increase in total revenues for the e-book category. This is good for Amazon. However, it is not at all clear that it is good for me. Rustling of papers and creaking of seats is a sign that time is up.

In the last posts I talked about a Bayesian agent in a stationary environment. The flagship example was tossing a coin with uncertainty about the parameter. As time goes by, the agent learns the parameter. I hinted at the distinction between `learning the parameter’ and `learning to make predictions about the future as if you knew the parameter’. The former seems to imply the latter almost by definition, but this is not so.

Because of its simplicity, the i.i.d. example is in fact somewhat misleading for my purposes in this post. If you toss a coin, then your belief about the parameter of the coin determines your belief about the outcome tomorrow: if at some point your belief about the parameter is given by some {\mu\in \Delta([0,1])}, then your prediction about the outcome tomorrow will be the expectation of {\mu}. But in a more general stationary environment, your prediction about the outcome tomorrow depends on your current belief about the parameter and also on what you have seen in the past. For example, if the process is Markov with an unknown transition matrix, then to make a probabilistic prediction about the outcome tomorrow you first form a belief about the transition matrix and then use it to predict the outcome tomorrow given the outcome today. The hidden Markov case is even more complicated, and it gives rise to the distinction between the two notions of learning.
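To make the Markov case concrete, here is a small sketch of the two-step prediction (form a belief about the transition matrix, then condition on today’s outcome). The conjugate Beta priors and the particular data are my own illustrative choices, not anything from the post.

    # Bayesian one-step prediction for a two-state (0/1) Markov chain with an
    # unknown transition matrix. Each row's transition probability gets an
    # independent uniform Beta(1,1) prior. All priors and data are placeholders.

    history = [0, 0, 1, 1, 1, 0, 1, 1]

    # Count the transitions observed so far.
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for prev, nxt in zip(history, history[1:]):
        counts[(prev, nxt)] += 1

    def predict_next(today):
        """Posterior-mean probability that tomorrow's outcome is 1, given today.

        With a uniform Beta(1,1) prior on P(next = 1 | today), the posterior
        mean is (successes + 1) / (trials + 2).
        """
        ones = counts[(today, 1)]
        total = counts[(today, 0)] + counts[(today, 1)]
        return (ones + 1) / (total + 2)

    today = history[-1]
    print("P(next = 1 | today = %d) = %.3f" % (today, predict_next(today)))
    print("P(next = 1 | today = %d) = %.3f" % (1 - today, predict_next(1 - today)))
    # The prediction depends both on the posterior (the counts) and on today's
    # outcome -- unlike the i.i.d. case, where the posterior alone suffices.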

The formulation of the idea of `learning to make predictions’ goes through merging. The definition traces back at least to Blackwell and Dubins. It was popularized in game theory by the Ehuds, who used Blackwell and Dubins’ theorem to prove that rational players will end up playing approximate Nash equilibrium. In this post I will not explicitly define merging. My goal is to give an example of the `weird’ things that can happen when one moves from the i.i.d. case to an arbitrary stationary environment. Even if you didn’t follow my previous posts, I hope the following example will be intriguing in its own right.


The news of Stanley Reiter’s passing arrived over the weekend. Born in a turbulent age long since passed, he lived a life few of us could replicate. He saw service in WW2 (having lied about his age) and survived the Battle of the Bulge. On the wings of the GI Bill he went through City College, which, in those days, was the gate through which many outsiders passed on their way to the intellectual aristocracy.

But in the importance and noise of to-morrow
When the brokers are roaring like beasts on the floor of the Bourse

Perhaps a minute to recall what Stan left behind.

Stan is well known for his important contributions to mechanism design in collaboration with Hurwicz and Mount. The best-known example of this is the notion of the size of the message space of a mechanism. Nisan and Segal pointed out the connection between this and the notion of communication complexity. Stan would have been delighted to learn about the connection between this and extension complexity.

Stan was in fact half a century ahead of the curve in his interest in the intersection of algorithms and economics. He was one of the first scholars to tackle the job shop problem. He proposed a simple index policy that was subsequently implemented and reported on in Business Week: “Computer Planning Unsnarls the Job Shop,” April 2, 1966, pp. 60-61.

In 1965, with G. Sherman, he proposed a local-search algorithm for the TSP (“Discrete optimizing”, SIAM Journal on Applied Mathematics 13, 864-889, 1965). Their algorithm was able to produce a tour at least as good as the tours reported in earlier papers. The ideas were extended with Don Rice to a local-search heuristic for non-concave mixed integer programs, along with a computational study of its performance.

Stan was also remarkable as a builder. At Purdue, he developed a lively school of economic theory, attracting the likes of Afriat, Kamien, Sonnenschein, Ledyard and Vernon Smith. He convinced them all to come by telling them Purdue was just like New York! Then, on to Northwestern to build two groups, one in the Economics department and another (in collaboration with Mort Kamien) in the business school.

Four agents are observing infinite streams of outcomes in {\{S,F\}}. None of them knows the future outcomes, and as good Bayesianists they represent their beliefs about unknowns as probability distributions:

  • Agent 1 believes that outcomes are i.i.d. with probability {1/2} of success.
  • Agent 2 believes that outcomes are i.i.d. with probability {\theta} of success. She does not know {\theta}; she believes that {\theta} is either {2/3} or {1/3}, and attaches probability {1/2} to each possibility.
  • Agent 3 believes that outcomes follow a Markov process: every day’s outcome equals yesterday’s outcome with probability {3/4}.
  • Agent 4 believes that outcomes follow a Markov process: every day’s outcome equals yesterday’s outcome with probability {\theta}. She does not know {\theta}; her belief about {\theta} is the uniform distribution over {[0,1]}.

I denote by {\mu_1,\dots,\mu_4\in\Delta\left(\{S,F\}^\mathbb{N}\right)} the agents’ beliefs about future outcomes.
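A simulation sketch of the four beliefs (coding S as 1 and F as 0). The initial condition for the Markov agents is my own choice, taken to be the uniform distribution so that their processes are stationary; nothing else is added.

    import random

    # Sample a finite stretch of outcomes (S = 1, F = 0) from each agent's belief.
    # For the Markov agents (3 and 4) the chain starts from the uniform
    # distribution over {S, F} -- an illustrative choice, not from the post.

    def sample_iid(p, n):
        return [int(random.random() < p) for _ in range(n)]

    def sample_markov(stay, n):
        x = [int(random.random() < 0.5)]          # day 0: uniform start
        for _ in range(n - 1):
            keep = random.random() < stay          # repeat yesterday w.p. `stay`
            x.append(x[-1] if keep else 1 - x[-1])
        return x

    def agent1(n): return sample_iid(0.5, n)
    def agent2(n): return sample_iid(random.choice([2/3, 1/3]), n)   # draw theta first
    def agent3(n): return sample_markov(0.75, n)
    def agent4(n): return sample_markov(random.random(), n)          # theta ~ U[0,1]

    n = 20
    for name, agent in [("1", agent1), ("2", agent2), ("3", agent3), ("4", agent4)]:
        print("Agent", name, agent(n))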

We have an intuition that Agents 2 and 4 are in a different situation from Agents 1 and 3, in the sense that they are uncertain about some fundamental properties of the stochastic process they are facing. I will say that they have `structural uncertainty’. The purpose of this post is to formalize this intuition. More explicitly, I am looking for a property of a belief {\mu} over {\Omega=\{S,F\}^\mathbb{N}} that will distinguish between beliefs that reflect some structural uncertainty and beliefs that don’t. This property is ergodicity.

 


Abraham Neyman and Sergiu Hart are two of the most prominent mathematical game theorists to date. Neyman has contributed immensely to the study of the Shapley value, stochastic games, and repeated games and complexity. Hart has contributed significantly to the study of correlated equilibrium and the adaptive processes leading to it, value theory, and the formation of coalitions.
Both Abraham and Sergiu will turn 66 next year. To celebrate this rare occasion, the Center for the Study of Rationality at the Hebrew University of Jerusalem is organizing two conferences, one in honor of each of them. The conference in honor of Abraham will be held on June 16–19, 2015, and the conference in honor of Sergiu will follow on June 21–24, 2015.
Mark the dates and reserve tickets.

You may have heard about ResearchGate, the so-called Facebook of scientists. Yes, another social network. Its structure is actually more similar to Twitter’s: each user is a node, and you can create directed edges from yourself to other users. Since I finally got rid of my Facebook account (I am a bellwether; in five years all the cool guys will not be on Facebook), I decided to try ResearchGate. I wanted a stable platform on which to upload my preferred versions of my papers, so that they will be the first to pop up on Google. Also, I figured that if I am returning to blogging then I need stuff to bitch about. ResearchGate only partially fulfills the first goal, but it does pretty well with the second.


Last week I wrote a post about two issues with Elsevier’s e-system, the system that all journals run by Elsevier, including Games and Economic Behavior and the Journal of Mathematical Economics, use for handling submissions: the fact that sometimes reviewers can see the blinded comments that other reviewers wrote to the editor, and the user agreement that allows Elsevier to change its terms without notifying the users.

After I corresponded with the editors of Games and Economic Behavior and the Journal of Mathematical Economics and with the Economics Editor of Elsevier, the reason for the privacy breach became clear: the e-system allows each editor to choose whether the blinded comments of one referee to the author, and the blinded comments of one referee to the editor, will be seen by the other reviewers. For each type of blinded comment the editor can decide whether or not to show it to all reviewers. Each editor makes his or her own choice. I guess that editors are often not aware of this option, and they do not know what choice the previous editor, or the one before him, made.

Apparently, the configuration of Games and Economic Behavior was to allow reviewers to see only the blinded comments to the author, while the configuration of the Journal of Mathematical Economics was to allow reviewers to see both types of blinded comments. Once the source of the problem became clear, Atsushi Kajii, the editor of the Journal of Mathematical Economics, decided to change the configuration so that the blinded comments of reviewers to the editor will not be seen by other reviewers. I guess that in a few days this change will become effective. Elsevier also promised to notify all of its journals in which the configuration was like that of JME about this privacy issue, and to let the editors decide whether they want to keep this configuration or change it. If the configuration remains, a notice will be added warning referees that their blinded comments can be read by other reviewers.

I am happy that the privacy breach came to a good end, and that in the future the e-system will keep the privacy of the referees.

Regarding the second issue, Elsevier is not willing to change its user agreement. Reading the user agreements of other publishers, like Springer and INFORMS, shows that user agreements can be reasonable, and that not all publishers reserve the right to change the user agreement without notifying the users. The Economics Editor of Elsevier wrote: “This clause is not unreasonable as the user can choose to discontinue the services at any time.” As I already wrote in the previous post, I choose to discontinue the service.

 

 

 

When I give a presentation about expert testing there is often a moment in which it dawns for the first time on somebody in the audience that I am not assuming that the processes are stationary or i.i.d. This is understandable. In most modeling sciences and in statistics, stationarity is a natural assumption about a stochastic process and is often made without being stated. In fact, most processes one comes across are stationary or some derivative of a stationary process (think of white noise, i.i.d. sampling, or Markov chains in their steady state). On the other hand, most game theorists and micro-economists who work with uncertainty don’t know what a stationary process is, even if they have heard the term (this is a good time to pause and ask yourself whether you know what a stationary process is). So a couple of introductory words about stationary processes is a good starting point from which to promote my paper with Nabil.

First, a definition: A stationary process is a sequence {\zeta_0,\zeta_1,\dots} of random variables such that the joint distribution of {(\zeta_n,\zeta_{n+1},\dots)} is the same for all {n}. More explicitly, suppose that the variables assume values in some finite set {A} of outcomes. Stationarity means that for every {a_0,\dots,a_k\in A}, the probability {\mathop{\mathbb P}(\zeta_n=a_0,\dots,\zeta_{n+k}=a_k)} does not depend on {n}. As usual, one can talk in the language of random variables or in the language of distributions, which we Bayesianists also call beliefs. A belief {\mu\in\Delta(A^\mathbb{N})} about the infinite future is stationary if it is the distribution of a stationary process.

Stationarity means that Bob, who starts observing the process at day {n=0}, does not view this specific day as having any cosmic significance. When Alice arrives two weeks later, at day {n=14}, and starts observing the process, she has the same belief about her future as Bob had when he first arrived. (Note that Bob’s view at day {n=14} about what lies ahead might differ from Alice’s, since he has learned something in the meantime; more on that later.) In other words, each agent can denote by {0} the first day on which they start observing the process, but there is nothing in the process itself that singles out day {0}. In fact, when talking about stationary processes it will clear our thinking to think of them as having an infinite past and an infinite future {\dots,\zeta_{-2},\zeta_{-1},\zeta_0,\zeta_1,\zeta_2,\dots}. We just happen to pop up at day {0}.
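A quick numerical illustration of the Bob/Alice point. The chain and its parameters are mine, chosen only because a two-state Markov chain started from its stationary distribution is stationary.

    import random

    # A two-state Markov chain with stay-probability 3/4, started from its
    # stationary distribution (uniform), is stationary: the law of what Bob sees
    # from day 0 equals the law of what Alice sees from day 14.

    def sample_path(length, stay=0.75):
        x = [int(random.random() < 0.5)]           # stationary start: uniform
        for _ in range(length - 1):
            x.append(x[-1] if random.random() < stay else 1 - x[-1])
        return x

    def freq_of_pattern(start, pattern, runs=20000):
        hits = 0
        for _ in range(runs):
            path = sample_path(start + len(pattern))
            hits += path[start:start + len(pattern)] == pattern
        return hits / runs

    pattern = [1, 1, 0]
    print("P(pattern at day 0)  ~", freq_of_pattern(0, pattern))
    print("P(pattern at day 14) ~", freq_of_pattern(14, pattern))
    # The two estimates agree up to Monte Carlo noise -- the hallmark of stationarity.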

 


Games and Economic Behavior, Journal of Economic Theory, Journal of Mathematical Economics, and Economics Letters are four journals that publish game-theoretic papers and are published by Elsevier. They all use Elsevier’s e-system to handle submissions. I have already talked in the past about the difficulty of operating these e-systems. Rakesh discussed the boycott against Elsevier. Recently I had an experience that made me stop using Elsevier’s system altogether, even though I serve on the editorial board of Games and Economic Behavior. I will not use Émile Zola’s everlasting words for such an earthly matter; I will simply relate my experience.

1) The e-system sometimes seems to be insecure.

I was surprised when a referee with whom I consulted on the evaluation of a paper (for GEB) told me that the system showed him the private message that the other referee wrote to me, and that the same thing had happened to him with JME. To prove his point, he sent me screenshots of the other referee’s private letter for JME.

2) The user agreement of Elsevier is a contract that one should never agree to sign.

I guess no one bothered to read the user agreement of Elsevier. I did. The first paragraph binds us to the agreement:

This Registered User Agreement (“Agreement”) sets forth the terms and conditions governing the use of the Elsevier websites, online services and interactive applications (each, a “Service”) by registered users. By becoming a registered user, completing the online registration process and checking the box “I have read and understand the Registered User Agreement and agree to be bound by all of its terms” on the registration page, and using the Service, you agree to be bound by all of the terms and conditions of this Agreement.

The fourth paragraph, titled “Changes,” says that any change made to the contract is effective immediately, and so it binds you. If you want to make sure they did not add some paragraph with which you disagree, you must read the whole user agreement every time you use the system.

Elsevier reserves the right to update, revise, supplement and otherwise modify this Agreement from time to time. Any such changes will be effective immediately and incorporated into this Agreement. Registered users are encouraged to review the most current version of the Agreement on a periodic basis for changes. Your continued use of a Service following the posting of any changes constitutes your acceptance of those changes.

I contacted Elsevier about the user agreement and got the following response:

The Elsevier website terms and conditions (see http://www.elsevier.com/legal/elsevier-website-terms-and-conditions) cannot be customized upon request; however, these terms and conditions do not often change and notification would be provided via the “Last revised” date at the bottom of this page.  The current terms and conditions were Last revised: 26 August 2010.

Well, it is comforting that they have not made any change in the past four years, but would Elsevier’s CEO agree to open an account in a bank whose contract contains the “changes” paragraph?

I stopped using the e-system of Elsevier, both as a referee and as an editor.

Roscoff is a village at the north-west corner of France, located on a small piece of land that protrudes into the English Channel. Right here, in 1548, the six-year-old Mary, Queen of Scots, having been betrothed to the Dauphin François, disembarked.

As far as I understood, the most common sights in the area are tourists and seafood. As far as I can tell, the main advantage of Roscoff is the Laboratoire Biologique, which is used to host conferences. Every now and then the French game theory group makes use of this facility and organizes a conference in this secluded place. The first week of July was one of these nows and thens. This was my third time attending the Roscoff conference, and I enjoyed meeting colleagues, the talks, and the vegetarian food that all the non-seafood eaters got.

Here I will tell you about one of the talks, given by Roberto Cominetti.

Brouwer’s fixed point theorem states that every continuous function $f$ defined on a compact and convex subset $X$ of a Euclidean space has a fixed point. When the function $f$ is a contraction, that is, when there is $\rho \in [0,1)$ such that $d(f(x),f(y)) \le \rho\, d(x,y)$ for every $x,y \in X$, then Banach’s fixed point theorem tells us that there is a unique fixed point $x^*$ and there is an algorithm to approximate it: choose an arbitrary point $x_0 \in X$ and define inductively $x_{k+1} = f(x_k)$. The sequence $(x_k)$ converges to $x^*$ at an exponential rate.
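A sketch of the Banach iteration for a concrete contraction, $f(x) = \cos x$ on $[0,1]$ (my example; the contraction constant is $\rho = \sin 1 \approx 0.84$):

    import math

    # Banach fixed point iteration for the contraction f(x) = cos(x) on [0, 1].
    # |f'(x)| = |sin(x)| <= sin(1) < 1 on [0, 1], so the iteration converges
    # exponentially fast to the unique fixed point (approximately 0.739).

    def f(x):
        return math.cos(x)

    x = 0.0
    for k in range(1, 21):
        x = f(x)
        if k % 5 == 0:
            print(f"k = {k:2d}, x_k = {x:.10f}, |x_k - f(x_k)| = {abs(x - f(x)):.2e}")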

When the function $f$ is merely non-expansive, that is, $d(f(x),f(y)) \le d(x,y)$ for every $x,y \in X$, there may be more than a single fixed point (e.g., when $f$ is the identity), and the sequence defined above need not converge to a fixed point (e.g., a rotation of the unit circle).

In his talk, Roberto presented a procedure that does converge to a fixed point when $f$ is non-expansive. Let $(\alpha_k)$ be a sequence of numbers in $(0,1)$. Choose $x_0 \in X$ arbitrarily and define inductively $x_{k+1} = \alpha_{k+1} f(x_k) + (1-\alpha_{k+1}) x_k$. Surprisingly enough, under this definition the distance $d(x_k,f(x_k))$ is bounded by

$d(x_k,f(x_k)) \le \frac{C\,\mathrm{diam}(X)}{\sqrt{\alpha_1(1-\alpha_1) + \alpha_2(1-\alpha_2) + \cdots + \alpha_k(1-\alpha_k)}},$

where $C = 1/\sqrt{\pi}$.

In particular, if the denominator goes to infinity, which happens, for example, when the sequence $(\alpha_k)$ is constant, then the sequence $(x_k)$ converges to a fixed point. Since the function that assigns to each two-player zero-sum strategic-form game its value is non-expansive, this result can come in handy in various situations.
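A sketch of the averaged iteration for a non-expansive map where plain iteration fails: a rotation of the unit disk, with constant $\alpha_k = 1/2$ (the example, the constants, and the grid of checkpoints are mine), checking the stated bound with $C = 1/\sqrt{\pi}$ numerically.

    import math

    # Averaged iteration x_{k+1} = a*f(x_k) + (1-a)*x_k for a non-expansive map f:
    # rotation of the unit disk by 90 degrees. Plain iteration from a point on the
    # circle just cycles; the averaged iteration drives d(x_k, f(x_k)) to zero,
    # within the bound diam(X) / sqrt(pi * sum of a_i(1-a_i)).

    def f(p):
        x, y = p
        return (-y, x)                      # rotation by 90 degrees, fixed point (0, 0)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    a = 0.5                                 # constant alpha_k
    diam = 2.0                              # diameter of the unit disk
    x = (1.0, 0.0)
    for k in range(1, 201):
        fx = f(x)
        x = (a * fx[0] + (1 - a) * x[0], a * fx[1] + (1 - a) * x[1])
        if k in (10, 50, 200):
            bound = diam / math.sqrt(math.pi * k * a * (1 - a))
            print(f"k = {k:3d}: d(x_k, f(x_k)) = {dist(x, f(x)):.2e} <= bound {bound:.4f}")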

This is a good opportunity to thank the organizers of the conference, mainly Marc Quincampoix and Catherine Rainer, who did a great job organizing the week.

