Platooning, driverless cars and ride hailing services have all been suggested as ways to reduce congestion. In this post I want to examine the use of coordination via ride hailing services as a way to reduce congestion. Assume that large numbers of riders decide to rely on ride hailing services. Because the services use Google Maps or Waze for route selection, they could coordinate riders' route choices to reduce congestion.

To think through the implications of this, it's useful to revisit an example of Arthur Pigou's. There is a measure 1 of travelers, all of whom wish to travel from the same origin ({s}) to the same destination ({t}). There are two possible paths from {s} to {t}. The `top' one has a travel time of 1 unit independent of the measure of travelers who use it. The `bottom' one has a travel time that grows linearly with the measure of travelers who employ it. Thus, if fraction {x} of travelers take the bottom path, each incurs a travel time of {x} units.

A central planner, say, Uber, interested in minimizing total travel time will route half of all travelers through the top and the remainder through the bottom. Total travel time will be {0.5 \times 1 + 0.5 \times 0.5 = 0.75}. The only Nash equilibrium of the path selection game is for all travelers to choose the bottom path, yielding a total travel time of {1}. Thus, if the only choice is to delegate my route selection to Uber or make it myself, there is no equilibrium where all travelers delegate to Uber: any traveler Uber routes through the top path could defect, drive the bottom path, and roughly halve her travel time.
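
A quick way to see the gap between the planner's solution and the equilibrium is to compute both directly. A minimal sketch in Python (the function name is mine, not part of the example):

```python
# Pigou's example: fraction x of the unit mass takes the bottom path
# (travel time x); the remaining 1 - x takes the top path (travel time 1).

def total_travel_time(x: float) -> float:
    """Total (equivalently, average) travel time when fraction x uses the bottom path."""
    return (1 - x) * 1 + x * x

# Planner's optimum: the first-order condition for minimizing (1 - x) + x^2 gives x = 1/2.
print(total_travel_time(0.5))  # 0.75

# Nash equilibrium: the bottom path's time x never exceeds the top's time of 1,
# so no traveler gains by switching to the top; everyone takes the bottom path.
print(total_travel_time(1.0))  # 1.0
```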

Now suppose there are two competing ride hailing services. Assume fraction {\alpha} of travelers are signed up with Uber and fraction {1-\alpha} are signed up with Lyft. To avoid annoying corner cases, {\alpha \in [1/3, 2/3]}. Each firm routes its users so as to minimize the total travel time its users incur. Uber will choose fraction {\lambda_1} of its subscribers to use the top path, the remaining fraction using the bottom path. Lyft will choose fraction {\lambda_2} of its subscribers to use the top path, the remaining fraction using the bottom path.

A straightforward calculation reveals that the only Nash equilibrium of the Uber vs. Lyft game is {\lambda_1 = 1 - \frac{1}{3 \alpha}} and {\lambda_2 = 1 - \frac{1}{3(1-\alpha)}}. An interesting case is when {\alpha = 2/3}, i.e., Uber has a dominant market share. In this case {\lambda_2 = 0}, i.e., Lyft sends none of its users through the top path. Uber, on the other hand, will send half its users via the top path and the remainder by the bottom path. Assuming Uber randomly assigns its users to top and bottom with equal probability, the average travel time for an Uber user will be

\displaystyle 0.5 \times 1 + 0.5 \times [0.5 \times (2/3) + 1/3] = 5/6.

The travel time for a Lyft user will be

\displaystyle [0.5 \times (2/3) + 1/3] = 2/3.

Total travel time will be {7/9}, less than in the Nash equilibrium outcome. However, Lyft offers travelers a lower travel time than Uber. This is because Uber, which has the bulk of travelers, must use the top path to reduce total travel time. Given this difference, travelers would switch from Uber to Lyft. This conclusion ignores prices, which at present are not part of the model.
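
These numbers are easy to verify numerically. A minimal Python sketch (the helper name is mine, not part of the model):

```python
def travel_times(alpha: float, lam1: float, lam2: float):
    """Average travel times for Uber users, Lyft users, and overall,
    given the fractions lam1, lam2 each service sends via the top path."""
    bottom = alpha * (1 - lam1) + (1 - alpha) * (1 - lam2)  # mass on the bottom path
    uber = lam1 * 1 + (1 - lam1) * bottom
    lyft = lam2 * 1 + (1 - lam2) * bottom
    return uber, lyft, alpha * uber + (1 - alpha) * lyft

alpha = 2 / 3
lam1 = 1 - 1 / (3 * alpha)                   # = 1/2
lam2 = max(0.0, 1 - 1 / (3 * (1 - alpha)))   # = 0
print(travel_times(alpha, lam1, lam2))       # (5/6, 2/3, 7/9)
```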

Suppose we include prices and assume that travelers now evaluate a ride hailing service based on delivered price, that is, price plus travel time. Thus, we are assuming that all travelers value time at $1 a unit of time. The volume of customers served by Uber and Lyft is no longer fixed and they will focus on minimizing average travel time per customer. A plausible guess is that there will be an equal price equilibrium where travelers divide evenly between the two services, i.e., {\alpha = 0.5}. Each service will route {1/3} of its customers through the top and the remainder through the bottom. Average travel time per customer will be {7/9}. However, travel time on the bottom path will be {2/3}, giving every customer an incentive to opt out and drive their own car on the bottom path.
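
Reusing travel_times from the sketch above, the symmetric case and the opt-out incentive can be checked directly:

```python
alpha = 1 / 2
lam = 1 - 1 / (3 * alpha)          # = 1/3: each service's top-path share
uber, lyft, avg = travel_times(alpha, lam, lam)
print(avg)                         # 7/9: average travel time per customer

bottom = 2 * alpha * (1 - lam)     # = 2/3: mass, hence travel time, on the bottom path
print(bottom)                      # an opt-out driver on the bottom beats 7/9
```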

What this simple minded analysis highlights is that the benefits of coordination may be hard to achieve if travelers can opt out and drive themselves. To minimize congestion, the ride hailing services must limit traffic on the bottom path, the one that is congestible. However, doing so makes it attractive in terms of travel time, encouraging travelers to opt out.

Colleagues outside of Economics often marvel at the coordinated nature of the Economics job market. The job market is so efficient that the profession no longer wastes resources by having everyone read each candidate's job market paper. That task is assigned to one person (Tyler Cowen) who reports back to the rest of us. In case you missed the report, here it is.

Economics is not alone in having a coordinated job market. Philosophy has one, but it has begun to show signs of unraveling. The ability to interview via Skype, for example, has reduced, in the eyes of many, the value of a preliminary interview at the annual meeting. In response, the American Philosophical Association posted the following statement regarding the job market calendar:

For tenure-track/continuing positions advertised in the second half of the calendar year, we recommend an application deadline of November 1 or later. It is further recommended that positions be advertised at least 30 days prior to the application deadline to ensure that candidates have ample time to apply.

In normal circumstances a prospective employee should have at least two weeks for consideration of a written offer from the hiring institution, and responses to offers of a position whose duties begin in the succeeding fall should not be required before February 1.

When advertising in PhilJobs: Jobs for Philosophers, advertisers will be asked to confirm that the hiring institution will follow the above guidelines. If an advertiser does not do so, the advertisement will include a notice to that effect.

It's natural to wonder if the Economics market is not far behind. Skype interviews are already taking place. The current setup requires a department to evaluate and select candidates for preliminary interviews within a month (roughly mid-November to mid-December), which is hardly conducive to mature reflection (and argument).

I don’t often go to empirical talks, but when I do, I fall asleep. Recently, while so engaged, I dreamt of the `replicability crisis’ in Economics (see Chang and Li (2015)). The penultimate line of their abstract is the following bleak assessment:

`Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.’

Eager to help my empirical colleagues snatch victory from the jaws of defeat, I did what all theorists do. Build a model. Here it is.

The journal editor is the principal and the agent is an author. Agent has a paper characterized by two numbers {(v, p)}. The first is the value of the findings in the paper assuming they are replicable. The second is the probability that the findings are indeed replicable. The expected benefit of the paper is {pv}. Assume that {v} is common knowledge but {p} is the private information of agent. The probability that agent is of type {(v,p)} is {\pi(v,p)}.

Given a paper, the principal can inspect it at cost {K}. With probability {p} the inspection process will replicate the findings of the paper. Principal proposes an incentive compatible direct mechanism. Agent reports their type, {(v, p)}. Let {a(v, p)} denote the interim probability that agent's paper is provisionally accepted. Let {c(v, p)} be the interim probability that agent's paper is not inspected given that it has been provisionally accepted. If a provisionally accepted paper is not inspected, it is published. If a paper subject to inspection is successfully replicated, the paper is published. Otherwise it is rejected and, per custom, the outcome is kept private. Agent cares only about the paper being published. Hence, agent cares only about

\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p.

The principal cares about replicability of papers and suffers a penalty of {R > K} for publishing a paper that is not replicable. Principal also cares about the cost of inspection. Therefore she maximizes

\displaystyle \sum_{v,p}\pi(v,p)[pv - (1-p)c(v,p)R]a(v,p) - K \sum_{v,p}\pi(v,p)a(v,p)(1-c(v,p))

\displaystyle = \sum_{v,p}\pi(v,p)[pv-K]a(v,p) + \sum_{v,p}\pi(v,p)a(v,p)c(v,p)[K - (1-p)R].

The incentive compatibility constraint is
\displaystyle a(v, p)c(v,p) + a(v, p)(1-c(v,p))p \geq a(v, p')c(v,p') + a(v, p')(1-c(v,p'))p.

Recall that an agent cannot lie about the value component of their type. We cannot screen on {p}, so all that matters is the distribution of {p} conditional on {v}. Let {p_v = E(p|v)}. For a given {v} there are only 3 possibilities: accept without inspection, reject outright, or provisionally accept and inspect. The first possibility has an expected payoff of

\displaystyle vp_v - (1-p_v) R = (v+R) p_v - R

for the principal. The second possibility has value zero. The third has value { vp_v -K }.
The principal prefers to accept immediately over inspection if

\displaystyle (v+R) p_v - R > vp_v - K \Rightarrow p_v > (R-K)/R.

The principal will prefer inspection to rejection if { vp_v \geq K}. The principal prefers acceptance to rejection if {p_v \geq R/(v+R)}.
Under a suitable condition on {p_v} as a function of {v} (say, {p_v} increasing in {v}), the optimal mechanism can be characterized by two cutoffs {\tau_2 > \tau_1}. Choose {\tau_2} to be the smallest {v} such that

\displaystyle p_v \geq \max\left( \frac{R}{v+R}, \frac{R-K}{R} \right).

Choose {\tau_1} to be the largest {v} such that {p_v \leq \min(K/v, R/(v+R))}.
A paper with {v \geq \tau_2} will be accepted without inspection. A paper with {v \leq \tau_1} will be rejected. A paper with {v \in (\tau_1, \tau_2)} will be provisionally accepted and then inspected.
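
To see the cutoff structure concretely, here is a small numerical sketch. The specification {p_v = v} and the values of {R} and {K} are assumptions of mine, chosen only for illustration:

```python
# Illustrative editor's problem. R is the penalty for publishing a
# non-replicable paper, K the inspection cost; assume p_v = v on (0, 1).
R, K = 1.0, 0.2

def decision(v: float, p_v: float) -> str:
    accept  = (v + R) * p_v - R   # accept without inspection
    inspect = v * p_v - K         # provisionally accept, then inspect
    best = max(accept, inspect, 0.0)
    if best == accept:
        return "accept"
    return "inspect" if best == inspect else "reject"

for v in [0.1 * i for i in range(1, 10)]:
    print(f"v = {v:.1f}: {decision(v, p_v=v)}")
# Output: reject for low v, inspect for intermediate v, accept for high v,
# i.e., the two cutoffs tau_1 and tau_2.
```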

For empiricists, the advice would be to shoot for high {v} and damn the {p}!

More seriously, the model points out that even a journal that cares about replicability and bears the cost of verifying this will publish papers that have a low probability of being replicable. Hence, the presence of published papers that are not replicable is not, by itself, a sign of something rotten in Denmark.

One could improve outcomes by making authors bear the costs of a paper not being replicated. This points to a larger question. Replication is costly. How should the cost of replication be apportioned? In my model, the journal bore the entire cost. One could pass it on to the authors but this may have the effect of discouraging empirical research. One could rely on third parties (voluntary, like civic associations, or professionals supported by subscription). Or, one could rely on competing partisan groups pursuing their agendas to keep the claims of each side in check. The last seems at odds with the romantic ideal of disinterested scientists but could be efficient. The risk is partisan capture of journals which would shut down cross-checking.

From Kris Shaw, a TA for my ECON 101 class, I learnt that the band Van Halen once required that brown M&M's not darken their dressing room door. Why? Maybe it was a lark. Perhaps a member of the band (or two) could not resist chuckling over the idea of a minor factotum appointed to the task of sorting the M&Ms. When the minor factotum was asked what they did that day, the response was bound to elicit guffaws. However, the minor factotum might have made it a point not to wash their hands before sorting the M&Ms. Then, who would be laughing harder?

A copy of the M&M rider can be found here, along with Van Halen's explanation of why it was included:

…the group has said the M&M provision was included to make sure that promoters had actually read its lengthy rider. If brown M&M's were in the backstage candy bowl, Van Halen surmised that more important aspects of a performance–lighting, staging, security, ticketing–may have been botched by an inattentive promoter.

So the rider, apparently, helps screen whether the promoter pays attention to detail. I find the explanation problematic. It suggests that because it is hard to monitor the effort the promoter expends on important things, staging for example, one should monitor something completely irrelevant instead. But then the strategic promoter should shirk on the staging and expend effort on the M&Ms.


Duppe and Weintraub date the birth of Economic Theory to June 1949, the year in which Koopmans organized the Cowles Commission Activity Analysis Conference. It is also counted as conference Zero of the Mathematical Programming Symposium. I mention this because the connections between Economic Theory, Mathematical Programming and Operations Research had, at one time, been very strong. The conference, for example, was conceived of by Tjalling Koopmans, Harold Kuhn, George Dantzig, Albert Tucker, Oskar Morgenstern, and Wassily Leontief with the support of the RAND Corporation.

Herbert Eli Scarf, one of the last remaining links to this period, who straddled Economic Theory and Operations Research like a Colossus, passed away on November 15th, 2015.

Scarf came to Economics and Operations Research by way of Princeton's mathematics department. Among his classmates were Gomory of cutting plane fame, Milnor of topology fame, and Shapley. Subsequently, he went on to RAND (Dantzig, Bellman, Ford & Fulkerson). While there he met Samuel Karlin and Kenneth Arrow, who introduced him to inventory theory. It was in this subject that Scarf made the first of many important contributions: the optimality of (S, s) policies. He would go on to establish the equivalence of the core and competitive equilibrium (jointly with Debreu), identify a sufficient condition for non-emptiness of the core of an NTU game (now known as Scarf's Lemma), anticipate the application of Groebner bases in integer programming (neighborhood systems) and, of course, write his magnificent `Computation of Economic Equilibria'.

Exegi monumentum aere perennius regalique situ pyramidum altius, quod non imber edax, non Aquilo impotens possit diruere aut innumerabilis annorum series et fuga temporum. Non omnis moriar…….

I have finished a monument more lasting than bronze and higher than the royal structure of the pyramids, which neither the destructive rain, nor wild North wind is able to destroy, nor the countless series of years and flight of ages. I will not wholly die………….

You shouldn’t swing a dead cat, but if you did, you’d hit an economist doing data. Wolfers wrote:

“…modern microeconomists are more likely to spend their days knee-deep in large-scale data sets describing the real-world decisions made by millions of people, and less likely to be mired in Greek-letter abstractions.”

Knee-deep usually goes with shit, while mired with bog. I’ll pick bog over shit, but suspect that that was not Wolfers’ intent.

The recent paper by Chang and Li about the difficulty of replicating empirical papers does rather take the wind out of the empirical sails. One cannot help but wonder about the replicability of replicability studies. No doubt, a paper on the subject will be forthcoming.

Noah Smith on his blog wrote:

So the supply of both good and mediocre empirics has increased, but only the supply of mediocre theory has increased. And demand for good papers – in the form of top-journal publications – is basically constant. The natural result is that empirical papers are crowding out theory papers.

Even if one accepts the last sentence, the first can only be conjecture. One might very well think that the supply of mediocre empirical papers is caused entirely by an increase in the supply of mediocre theory papers whose deficiencies are glossed over with a patina of empirics. Interestingly, when reviewers could find nothing nice to say about Piketty's theories they praised his data instead. It's like praising the author of a false theorem by saying that while the proof is wrong, it is long.

The whole business has the feel of tulip mania. Empirical papers as abundant as weeds. Analytics startups as plentiful as hedge funds. Analytics degree programs spreading like herpes. Positively Gradgrindian.

“THOMAS GRADGRIND, sir. A man of realities. A man of facts and calculations. A man who proceeds upon the principle that two and two are four, and nothing over, and who is not to be talked into allowing for anything over.”

In empirical econ classes around the world I imagine (because I’ve never been in one) Gradgrindian figures laying down the law:

“Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.”

I have nothing against facts. I am quite partial to some. But they do not speak for themselves without an underlying theory.

Chu Kin Chan, an undergraduate student from the Chinese University of Hong Kong, has collected the placement statistics of the top 10 PhD programs in Economics from the last 4 years. You can find the report here. In it you will find the definition of top 10 as well as which placements `counted'. Given that not all PhDs in economics who get academic positions do so in Economics departments, you can expect some judgement is required in deciding if a placement counts as a `top 10' or `top 20'.

The results are similar to findings in other disciplines (the report refers to some of these). The top 10 departments place 5 times as many students in the top 20 departments as do those ranked 11 through 20. If you score a top 10 placement as +1, any other academic placement as a 0 and a non-academic placement as a -1, and then compute an average score per school, only one school gets a positive average score: MIT.
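
For concreteness, here is how such a score is computed; the placement list below is made up and not Chan's data:

```python
# Hypothetical placement record for one school: +1 per top-10 placement,
# 0 per other academic placement, -1 per non-academic placement.
score = {"top10": 1, "other_academic": 0, "non_academic": -1}
placements = ["top10", "other_academic", "other_academic", "non_academic", "top10"]
print(sum(score[p] for p in placements) / len(placements))  # 0.2
# A positive average needs more top-10 than non-academic placements.
```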

Chan also compares a ranking of departments by placement with a ranking based on a measure of scholarly impact proposed by Glenn Ellison. What is interesting is that departments that are very close to each other in the scholarly impact rating can differ quite a lot in terms of placement outcomes.

Read in tandem with the Card & Della Vigna study on falling acceptance rates in top journals and the recent Baghestanian & Popov piece on alma mater effects, it makes me glad not to be young again!

Chicago's Booth school surveys select Economics faculty (the IGM panel) on a variety of questions. Panelists are emailed a question and respond electronically, if so moved. They are asked to state whether they agree, strongly agree, disagree, are uncertain etc., as well as provide a level of confidence and, if they wish, some words of explanation. Here is one of the questions:

Using surge pricing to allocate transportation services — such as Uber does with its cars — raises consumer welfare through various potential channels, such as increasing the supply of those services, allocating them to people who desire them the most, and reducing search and queuing costs.

The correct answer to this question is: it depends. See below for the explanation. Back to the IGM panel. What is its purpose? According to the web site:

This panel explores the extent to which economists agree or disagree on major public policy issues. To assess such beliefs we assembled this panel of expert economists. Statistics teaches that a sample of (say) 40 opinions will be adequate to reflect a broader population if the sample is representative of that population.

Yes, but what is the underlying population? The IGM site does not say; instead it summarizes the CVs of the sample:

The panel members are all senior faculty at the most elite research universities in the United States. The panel includes Nobel Laureates, John Bates Clark Medalists, fellows of the Econometric society, past Presidents of both the American Economics Association and American Finance Association, past Democratic and Republican members of the President’s Council of Economics, and past and current editors of the leading journals in the profession. This selection process has the advantage of not only providing a set of panelists whose names will be familiar to other economists and the media, but also delivers a group with impeccable qualifications to speak on public policy matters.

This is the high table of Economists, a group so select that the sample probably is the population. Why bother with the remarks about sampling?

How did the panelists respond to the surge pricing question? One strongly agreed with the statement but with a level of confidence of 1 (which I think is the lowest). This panelist also provided an explanation that makes clear that the reported confidence level was incorrect. Another offered an `Agree' with a level of confidence of 3. Why not declare uncertainty? Or is the panelist trying to say: generally true but with some exceptions? The other responses suggest busy people trying to be helpful (recall Truman) on a task that is low priority for them.

Only one panelist provides an answer that can be interpreted as `it depends’. That panelist reports being uncertain with a level of confidence of 10. This panelist also provides an explanation:

`Consumer plus producer surplus should rise but in the absence of competition consumer surplus may not. With competition consumers will gain.’

To make things concrete, consider a monopolist who faces two states of the world characterized by two demand curves: peak and off-peak, with the off-peak state occurring most of the time. Now compare consumer surplus in two scenarios: the same price in both states of the world, and a different price in each state. In which scenario will consumer surplus be higher? That is a lovely intermediate micro question! In addition, if buyers are liquidity constrained, a price mechanism will not efficiently match rides to riders who value them the most.
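
Here is a minimal numerical sketch of that question, assuming linear demand in each state, zero marginal cost and a monopolist; the numbers are mine, not from the survey:

```python
# Off-peak demand q = 1 - p with probability 0.9; peak demand q = 2 - p
# with probability 0.1. Compare expected consumer surplus under a single
# uniform price and under state-contingent ("surge") prices.

def cs(intercept: float, p: float) -> float:
    """Consumer surplus under linear demand q = intercept - p."""
    q = max(intercept - p, 0.0)
    return 0.5 * q * q

def profit(p: float) -> float:
    return 0.9 * p * max(1 - p, 0.0) + 0.1 * p * max(2 - p, 0.0)

p_uniform = max((i / 1000 for i in range(2001)), key=profit)  # ~0.55
p_off, p_peak = 0.5, 1.0   # state-by-state monopoly prices

print(0.9 * cs(1, p_uniform) + 0.1 * cs(2, p_uniform))  # ~0.196
print(0.9 * cs(1, p_off) + 0.1 * cs(2, p_peak))         # ~0.163
```

In this example surge pricing lowers expected consumer surplus: the monopolist captures the peak-state surplus. With competition driving price toward cost in each state, consumers keep the gains, which is the panelist's point.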

I think the answer to the question posed reveals less about agreement on policy than about the responder's default assumption concerning the nature of the underlying market (passenger transportation).

Because I have white hair, and that so sparse as to resemble the Soviet harvest of 1963, I am asked for advice. Just recently I was asked about `hot' research topics in the sharing economy. `You mean a pure exchange economy?', said I in reply. Because I have white hair etc., I sometimes forget to bite my tongue.

Returning to the topic, the Economist piece I linked to above gets it about right. With a fall in certain transaction costs, trades that were otherwise infeasible are realized. At a high level there is nothing more to be said beyond what we know already about exchange economies.

A closer look suggests something of interest in the role of the mediator (eBay, Uber) responsible for the reduction in transaction costs. They are not indifferent Walrasian auctioneers but self interested ones. eBay and Uber provide an interesting contrast in `intrusiveness'. The first reduces the costs of search and alleviates the lemons problem and moral hazard by providing information and managing payments. It does not, however, set prices. These are left to participants to decide. In sum, eBay, it appears, tries to eliminate the textbook obstacles to a perfectly competitive market. Uber also does these things, but more. It chooses prices and the supplier who will meet the reported demand. One might think eBay does not because of the multitude of products it would have to track. The same is true for Uber: a product on Uber is a triple of origin, destination and time of day. The rider and driver may not be thinking about things in this way, but Uber certainly must in deciding prices and which supplier will be chosen to meet the demand. Why doesn't Uber allow riders to post bids and drivers to post asks?
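
If it did, the textbook benchmark would be a double auction run separately for each such product triple. A minimal sketch of one clearing rule (names and numbers are mine), not a description of anything Uber actually does:

```python
# A k-double auction for one product (an origin-destination-time triple).
# Trade the largest quantity q at which the q-th highest bid still meets
# the q-th lowest ask; price the trades between the marginal bid and ask.

def clear(bids: list[float], asks: list[float], k: float = 0.5):
    bids, asks = sorted(bids, reverse=True), sorted(asks)
    q = 0
    while q < min(len(bids), len(asks)) and bids[q] >= asks[q]:
        q += 1
    if q == 0:
        return 0, None
    return q, k * bids[q - 1] + (1 - k) * asks[q - 1]

print(clear(bids=[9, 7, 5, 3], asks=[2, 4, 6, 8]))  # (2, 5.5): two rides at $5.50
```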

Three items on copyright and revenue all on the same day.

First, is Taylor Swift’s open letter to Apple upbraiding them for not paying royalties to artists for their music during the trial period of its new streaming music service.  It caused the weenies at Apple to change their tune.

Second, a high court ruling in the UK which erased an earlier UK decision that made it lawful for users to copy purchased content for personal use. Related is freedom of panorama, which permits the photographing of copyrighted buildings and sculptures in public places. Up for vote this summer before the European parliament is legislation that would restrict such rights.

The Swift letter echoes the points she made earlier when she pulled her wares from Spotify:

“In my opinion, the value of an album is, and will continue to be, based on the amount of heart and soul an artist has bled into a body of work, and the financial value that artists (and their labels) place on their music when it goes out into the marketplace.”

One of the more poetic renditions of the labor theory of value I’ve read. Here is another line from the same missive:

“Valuable things should be paid for.”

No. It's the added value of a good or service that commands a premium. Pearsall-Smith got this right when writing of the novelists of his age.

“The diction, the run of phrase of each of them seems quite undistinguishable from that of the others, each of whose pages might have been written by any one of his fellows.”

Thus, the question is whether the heart and soul each artist bleeds into their work serve to differentiate it in a way that matters from others. The effectiveness of music recommender systems suggests not.

Enough of `Swiftian' logic; let's turn to the UK high court ruling. The Electronic Frontier Foundation complained that it contained more economic theory than common sense. An irritating remark, as the level of theory barely exceeded what you would find in an intermediate micro-economics course. It makes me wonder whether the pundits at the EFF ever went to college.

The ruling is a perfect example of how consistency can become a procrustean bed. The UK government had earlier made the duplication of copyrighted material for personal use legal. It claimed that its reasons for doing so were consistent with an EU copyright directive that requires the copyright holder to be compensated for revenues lost to copying. The Judge concluded that the government's rulings were, in fact, inconsistent with the EU directive and overturned it, making copying for personal use illegal.

The law, as Dickens said, is an ass (the quadruped, not the posterior). So, let's focus on the economics. The ruling, by the way, quotes Varian's 2005 piece in the Journal of Economic Perspectives as well as Boldrin and Levine.

Suppose I sell you a song in a medium which is costly to reproduce and transport. If you want to listen to the song both at home and in your office you must purchase two copies. Now, a sea change. The medium on which the song is transmitted changes. The cost of duplication and transport is now zero. Am I worse off? If I am, then under the EU directive I should be compensated for this loss.

With this sea change, you would buy one fewer copy. However, I, recognizing that the sea change gives you the same benefit as buying two copies, can simply raise my price to account for this. The High Court ruling called this pricing-in, and the case turned upon whether the music seller, me in this example, could perfectly price-in. If not, then under the EU directive I am entitled (bizarre, I know) to compensation for lost profits.

If the sea change allows you to consume music in ways you previously could not (in the bathroom, in your car at night etc.), then it seems obvious that I could anticipate this and price-in. If the sea change allows you to copy and distribute my music costlessly, then I may be forced to sell my music at a discount or withhold it (see the Varian paper for intermediate cases). Whether I am harmed or not depends on whether you intend to use the sea change for personal use or to compete with me.

Interestingly, the discussion in the ruling, as well as Varian's paper, ignores those who own the devices for transmitting, duplicating, storing and playing the music. Let's use the example in Varian, pg. 11. You are willing to pay $20 for home use of a CD and $10 (actually {10 - \epsilon} to break ties) for office use. The cost of copying is initially infinite.

The revenue maximizing price is clearly $20 for a CD, unless I could use a 2-part tariff. Now a third party develops a technology for copying CDs that is simple and convenient. Copying is now legal. Under the pricing-in story I should just charge $30 (assuming you have the technology). I'm better off and you are no worse off. However, we have ignored the owner of the copying technology. You, the music consumer, have $30 to shell out. I can certainly capture $20 of it, but to capture the remaining $10 I need the owner of the copying technology. Any split of the $10 between us is a Nash equilibrium of the simultaneous pricing game. The point is that the technology that allows one to copy, format-shift etc. complements the music itself. That $10 is a joint gain to the owner of the song and the owner of the copying technology. One might argue that the owner of the copying technology is entitled to the full $10, as it is her innovation that allowed one to capture it. Hence, the copyright holder, me in this example, suffers no loss from the fact that you can now copy my music.
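
The equilibrium claim is easy to check by brute force. A minimal sketch of the simultaneous pricing game on a one-dollar grid; the tie-breaking rule (the buyer buys when indifferent) is my assumption:

```python
# Seller prices the CD at s; the technology owner prices the copier at t.
# The buyer values the CD alone at 20 and the CD plus a copy at 30.

def buys(s: int, t: int) -> str:
    options = {"both": 30 - s - t, "cd": 20 - s, "none": 0}
    return max(options, key=options.get)  # ties resolved in favor of buying more

def payoffs(s: int, t: int):
    choice = buys(s, t)
    return (s if choice != "none" else 0, t if choice == "both" else 0)

prices = range(31)
equilibria = [(s, t) for s in prices for t in prices
              if payoffs(s, t)[0] == max(payoffs(s2, t)[0] for s2 in prices)
              and payoffs(s, t)[1] == max(payoffs(s, t2)[1] for t2 in prices)]
print(equilibria)  # (s, 30 - s) for s = 20, ..., 30
```

The seller always captures at least $20; the remaining $10 splits any which way.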
