
The efficacy of lockdowns was debated at the start of the pandemic, and the debate continues to this day. Sweden, famously, chose not to implement a lockdown. As Anders Tegnell remarked:
`Closedown, lockdown, closing borders — nothing has a historical scientific basis, in my view.'

Lyman Stone of the American Enterprise Institute expresses it more forcefully:

`Here’s the thing: there’s no evidence of lockdowns working. If strict lockdowns actually saved lives, I would be all for them, even if they had large economic costs. But the scientific and medical case for strict lockdowns is paper-thin.’

The quotes above reveal an imprecision at the heart of the debate. What exactly is a lockdown? Tegnell uses a variety of words that suggest a variety of policies being lumped together. Stone is careful to say there is no evidence in support of strict lockdowns, suggesting that `less' strict lockdowns might be beneficial. So, let's speculate about the other extreme: no lockdown, relaxed or otherwise.

The presence of infected individuals increases the cost of certain interactions. If infected individuals don't internalize this cost, then, in the absence of any intervention, we will observe economic activity fall below its efficient level.

How will this manifest itself? Agents may adopt relatively low-cost precautionary actions like wearing masks. On the consumer side, they will substitute away from transactions that expose them to the risk of infection: for example, take-out rather than sit-down dining, and delivery rather than shopping in person. In short, we will see a drop in the demand for certain kinds of transactions. Absent subsidies for hospital care, we should expect an increase in the price of (or wait times for) medical care, further incentivizing precautionary actions on the part of individuals.

The various models of network formation in the face of contagion that we have (e.g., Erol and Vohra (2020)) all suggest we will see changes in how individuals socialize. They will reduce the variety of their interactions and concentrate them in cliques of `trusted' agents.

On the supply side, firms will have to make costly investments to reduce the risk of infection to customers and possibly workers. The ability of firms to pass these costs on to their customers or their suppliers will depend on the relative bargaining power of each. Restaurant workers, for example, may demand increased compensation for the risks they bear, but this will occur at the same time as a drop in demand for their services.

To summarize, a `no lockdown' policy will, over time, come to resemble a lockdown policy. Thus, the question is whether there is a coordinated lockdown policy that is superior to the uncoordinated one that emerges endogenously.

One of the delights of pursuing a relatively young discipline is that one meets its pioneers. As one grows old in the discipline, so do the pioneers, who eventually pass into `the undiscovered country from whose bourn no traveler returns.' Overlooked, at least by me, was that one also meets, in the chrysalis stage, those who will eventually lead the discipline into the next century. It was the untimely passing of William Sandholm on July 6th of this year that brought this to mind.

I first met Bill in 1998. I had just moved to MEDS and he was on his way out as a newly minted PhD. He, a shiny new penny; I, on the way to becoming so much loose change.

Within a decade, Bill rose to prominence as an authority on Evolutionary Game Theory. His book, “Population Games and Evolutionary Dynamics”, became the standard reference for population games. The concept of evolutionary implementation can be credited to him.

Bill was also a provider of public goods. He wrote and made freely available software for working with evolutionary dynamics, and he served on panels and editorial boards.

As I recall Bill, I am reminded of a line from Mary Chase's play Harvey, uttered by the main character, Elwood Dowd:

Years ago my mother used to say to me, she’d say, ‘In this world, Elwood, you must be’ – she always called me Elwood – ‘In this world, Elwood, you must be oh so smart or oh so pleasant.’ Well, for years I was smart. I recommend pleasant. You may quote me.

Bill was both.

Will widely available and effective tests for COVID-19 awaken the economy from its COVID-induced coma? Paul Romer, for one, thinks so. But what will each person do with the information gleaned from the test? Should we expect someone who has tested positive for the virus to stay home and someone who has tested negative to go to work? If the first receives no compensation for staying home, she will leave for work. The second, anticipating that infected individuals have an incentive to go to work, will choose to stay home. As a result, the fraction of the population out and about will have an infection rate exceeding that in the population at large.
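A back-of-the-envelope calculation shows how strong this selection effect can be. The sketch below uses invented parameters: a population infection rate p, and a share h of the uninfected who stay home in anticipation.

```python
# Selection into going out, as described above. Parameters are invented:
# p is the population infection rate, h the share of the uninfected who
# stay home anticipating that the infected will go to work.
p = 0.05
h = 0.40

out_infected = p                  # all infected go out (no compensation to stay home)
out_healthy = (1 - p) * (1 - h)   # uninfected who still venture out
rate_outside = out_infected / (out_infected + out_healthy)
print(f"infection rate outside: {rate_outside:.3f} vs population rate: {p:.3f}")
# -> 0.081 vs 0.050: those out and about are riskier than the population at large.
```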

In a new paper by Rahul Deb, Mallesh Pai, Akhil Vohra and myself, we argue that widespread testing alone will not solve this problem. Testing in concert with subsidies will. We propose a model in which both testing and transfers are targeted. We use it to jointly determine where agents should be tested and how they should be incentivized. The idea is straightforward. In the absence of widespread testing to distinguish between those who are infected and those who are not, we must rely on individuals to sort themselves. They are in the best position to determine the likelihood that they are infected (e.g., based on private information about exposures, how rigorously they have been distancing, etc.). Properly targeted testing with tailored transfers gives them the incentive to do so.

We also distinguish between testing at work and testing `at home’. An infected person who leaves home to be tested at work poses an infection risk to others who choose to go outside. Testing `at home’ should be interpreted as a way to test an individual without increasing exposure to others. Our model also suggests who should be tested at work and who should be tested at home.

Some days ago I learnt that a job offer to a promising postdoc I advise evaporated. Not unexpected in these times, but disappointing nevertheless. There are now about 300 Universities with hiring pauses or freezes in place.

For Universities that are tuition-driven, this is understandable. For those with large endowments, a large portion of which is unrestricted, this is puzzling. It is true that about 75% of all US university endowment funds are invested in equities, and these have declined since the start of the pandemic. But the 3-month Treasury rate is, at the time I write this, 0.22%. Why aren't they borrowing? More generally, why don't we see consumption smoothing?

An interesting paper by Brown, Dimmock, Kang, and Weisbenner (2014) documents how University endowments respond to shocks. They write:

Our primary finding is that university endowments respond asymmetrically to contemporaneous positive and negative financial shocks. In response to contemporaneous positive shocks, endowments tend to leave current payouts unchanged. Such behavior is consistent with endowments following their stated payout policies, which are based on past endowment values and not current returns, in order to smooth payouts (e.g., pay out 5 percent of the past three-year average of endowment values).

However, following contemporaneous negative shocks, endowments actively reduce payout rates. Unlike their response to positive shocks, this behavior is inconsistent with endowments following their standard smoothing rules. This asymmetry in the response to positive and negative shocks is especially strong if we explicitly control for the payout rate that is implied by the universities’ stated payout rules (something we do for a subsample of the endowments for which we have sufficient information to precisely document their payout rules). We also fail to find consistent evidence that universities change endowment payouts to offset shocks to other sources of university revenues. These findings, which we confirm through several robustness checks, suggest that endowments’ behavior differs from that predicted by several normative models of endowment behavior.

They argue that their data supports the idea that Universities are engaged in endowment hoarding, i.e., maintenance of the endowment is treated as an end in itself. The Association of American Universities argues that endowment hoarding is a myth; see item 9 at this link. Their response confirms the three-year average rule but is silent on the asymmetric response to shocks reported above.
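To see what the stated smoothing rule implies, here is a minimal sketch of the `5 percent of the past three-year average' policy and its response to a negative shock. The endowment values are invented for illustration.

```python
# Payout under the smoothing rule quoted above: 5% of the trailing
# three-year average of endowment values. The values are invented.
values = [10.0, 10.5, 11.0, 8.8]  # endowment in $bn; the last year is a 20% drop

def payout(history, rate=0.05, window=3):
    """Payout implied by the stated rule for the most recent year."""
    return rate * sum(history[-window:]) / window

print(f"payout before shock: {payout(values[:-1]):.3f}")  # 0.525
print(f"payout after shock:  {payout(values):.3f}")       # 0.505
# The rule itself implies only a ~4% cut, since the shock carries one-third
# weight in the trailing average; the sharper cuts documented by Brown et al.
# go beyond what the stated rule requires.
```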

More generally, one might ask: what is the purpose of a University endowment? Hansmann (1990) offers an interesting discussion of why a University even has an endowment (other enterprises are run through a mixture of debt and equity). Tobin (1974) articulated a purpose for modeling reasons, which I suspect captures what many have in mind:

The trustees of an endowed institution are the guardians of the future against the claims of the present. Their task is to preserve equity among generations. The trustees of an endowed university … assume the institution to be immortal.

If one takes the principle of intergenerational equity seriously, then, would it not make sense to borrow from a better future into a worse present? Unless, of course, it is expected that the future will be even worse than today.

The race to publish COVID-19 related papers is on, and I am already behind. Instead, I will repurpose a paper by Eduard Talamas and myself on networks and infections which is due out in GEB.

It is prompted by the following question: if you are given the option to distribute—without cost to you or anyone else—a perfectly safe but only moderately effective vaccine for a viral infection, should you? That we’ve posed it means the answer must be no or at least maybe not.

Unsurprisingly, it has to do with incentives. When the risk of becoming infected from contact declines, individuals tend to be less circumspect about coming into contact with others. This is risk compensation, first suggested by Charles Adams in 1879 and popularized by Sam Peltzman in the 1970s.

Therefore, the introduction of a vaccine has two effects. On the one hand, it reduces the probability that an individual becomes infected upon contact. On the other hand, it decreases individuals’ incentives to take costly measures to avoid contact. If the second effect outweighs the first, there will be an increase in infections upon the introduction of a moderately effective vaccine.

These are statements about infection rates, not welfare. Individuals make trade-offs, in this case between the risk of infection and the cost of avoiding it. Therefore, observing that an individual's infection probability will increase upon introduction of a partially effective vaccine is insufficient to argue against introduction.

In our paper, Eduard and I show that the introduction of a vaccine whose effectiveness falls below some threshold could make everyone worse off, even when each individual is perfectly rational and bears the full cost of becoming infected. If the vaccine is highly effective, this outcome is reversed. This is because risky interactions can be strategic complements. An individual’s optimal amount of risky interactions can be increasing in the amount of risky interactions that others take.

To illustrate, call two individuals who engage in risky interactions with each other partners. Every risky interaction that Ann's partner Bob has with Chloe affects Ann's incentives to have risky interactions with Chloe in two countervailing ways. It increases Chloe's infection probability. But it also increases the probability that Ann is infected conditional on Chloe being infected, because if Chloe is infected, chances are that Ann's partner Bob is also infected. Given that a risky interaction between Ann and Chloe only increases the probability that Ann becomes infected when Chloe is infected and Ann is not, the combination of these effects can lead to an increase in Ann's incentives to engage with Chloe and her partners when Bob engages with Chloe.
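The following sketch (my own toy setup, not the model in the paper) makes the point concrete. Each agent is seeded with infection independently, each risky interaction transmits with some probability, and an agent ends up infected if a chain of transmitting interactions connects her to a seed. We compute Ann's marginal infection risk from meeting Chloe, with and without the Bob-Chloe interaction.

```python
# Toy version of the Ann/Bob/Chloe logic. S is the probability each agent is
# independently infected at the outset; each risky interaction (edge)
# transmits with probability Q. Both numbers are invented.
from itertools import product

S, Q = 0.3, 0.8

def prob_A_infected(edges):
    """Exact P(Ann infected), enumerating seed infections and open edges."""
    total = 0.0
    for seeds in product([0, 1], repeat=3):               # initial infections of A, B, C
        p_seed = 1.0
        for z in seeds:
            p_seed *= S if z else 1 - S
        for opens in product([0, 1], repeat=len(edges)):  # which edges transmit
            p_open = 1.0
            for z in opens:
                p_open *= Q if z else 1 - Q
            infected = {n for n, z in zip("ABC", seeds) if z}
            changed = True
            while changed:                                # close under transmission
                changed = False
                for (u, v), z in zip(edges, opens):
                    if z and (u in infected) != (v in infected):
                        infected |= {u, v}
                        changed = True
            if "A" in infected:
                total += p_seed * p_open
    return total

for bc in (False, True):
    base = [("A", "B")] + ([("B", "C")] if bc else [])
    delta = prob_A_infected(base + [("A", "C")]) - prob_A_infected(base)
    print(f"Bob meets Chloe: {bc};  Ann's marginal risk from meeting Chloe: {delta:.3f}")
```

With these parameters, Ann's marginal risk from meeting Chloe is lower when Bob also meets Chloe (roughly 0.071 versus 0.128), which is the sense in which risky interactions can be strategic complements.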

One might ask: given the huge trove of papers on epidemiological models, surely this effect has been identified and discussed before? No, or at least not as far as we could tell. This is because we depart from a standard feature of these models: we allow agents to strategically choose their partners, instead of only allowing them to choose the number of partners and then having matches occur uniformly at random.

This morning, a missive from the Econometric Society arrived in my inbox announcing “two modest fees associated with the submission and publication of papers in its three journals.” As of May 1st, 2020, the Society will assess a submission fee of $50 and a page charge of $10 per page for accepted papers. With papers on the short side running to around 30 pages plus 10-page appendices, this comes out to about $400 in page charges alone. By the standards of the natural sciences this is indeed modest.

At the low end, the American Meteorological Society charges $120 per page with no submission fee. In the middle tier, the largest open-access publishers, BioMed Central and PLoS, charge $1,350–2,250 to publish peer-reviewed articles in many of their journals, while their most selective offerings charge $2,700–2,900. At the luxury end of the market is the Proceedings of the National Academy of Sciences, which starts at $1,590 for 6 pages and rises to $4,215 for a 12-page paper.

My colleague Aislinn Bohren has suggested rewarding referees with free page coupons: publish one page free for each five pages you referee. This may suffer the same fate as the Capitol Hill Baby Sitting co-operative.

In the short run, the effect will be to drive papers to JET and GEB, as not all academics have research budgets that will cover the fees. An alternative is to submit the paper for $50. If accepted, decline to have it published. Send it elsewhere, and send a copy of the acceptance letter to one's promotion and tenure committee. Voilà, a new category in the CV: accepted at Econometrica but not published.


An agent with an infectious disease confers a negative externality on the rest of the community. If the cost of infection is sufficiently high, they are encouraged, and in some cases required, to quarantine themselves. Is this the efficient outcome? One might wonder if a Coasian approach would generate it instead: define a right to walk around when infected, which can be bought and sold. Alas, infection has the nature of a public bad, which is non-rivalrous and non-excludable. There is no efficient, incentive-compatible, individually rational (IR) mechanism for the allocation of such public bads (or goods). So, something has to give. The mandatory quarantine of those who might be infected can be interpreted as relaxing the IR constraint for some.

If one is going to relax the IR constraint, it is far from obvious that it should be the IR constraint of the infected. What if the costs of being infected vary dramatically? Imagine a well-defined subset of the population bears a huge cost from infection while the cost for everyone else is minuscule. If that subset is small, then mandatory quarantine (and other mitigation strategies) could be far from efficient. It might be more efficient for the subset that bears the larger cost of infection to quarantine themselves from the rest of the community.
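A stylized calculation (all numbers invented) illustrates how lopsided the comparison can be:

```python
# Which side should be quarantined? A small group bears a huge infection
# cost; everyone else bears a tiny one. All numbers are invented.
n, n_vulnerable = 1_000, 20
cost_infect_vuln, cost_infect_other = 100.0, 0.1
cost_quarantine = 1.0      # per-person burden of staying isolated
attack_rate = 0.3          # share infected if nobody isolates

# Option 1: quarantine everyone who might be infectious (here, simplified
# to the whole population isolating).
cost_blanket = n * cost_quarantine

# Option 2: only the vulnerable subset isolates; infection runs through the rest.
cost_targeted = (n_vulnerable * cost_quarantine
                 + attack_rate * (n - n_vulnerable) * cost_infect_other)

print(f"blanket: {cost_blanket:.1f} vs targeted: {cost_targeted:.1f}")
# -> 1000.0 vs 49.4: isolating the high-cost subset is far cheaper here.
```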


Six years ago, I decided to teach intermediate microeconomics. I described my views on how it should be taught in an earlier post. The notes for that course grew into a textbook that is now available in Europe, and in the US this April. I am particularly delighted at being able to sport Paolo Uccello's `The Hunt' upon the cover. The publishers, Cambridge University Press, asked me to provide an explanation for why I had chosen it, and it appears on the rear cover. Should you make your way to Oxford, be sure to stop by the Ashmolean Museum to see it, the painting of course, in all its glory. I daydream that, like Samuelson's `Economics', it will sell bigly.


Over a Rabelaisian feast with convivial company, conversation turned to a Twitter contretemps between economic theorists known to us at table. Its proximate cause was the design of the incentive auction for radio spectrum. The curious can dig around on Twitter for the cut and thrust. A summary of the salient economic issues might be helpful for those following the matter.

Three years ago, in the cruelest of months, the FCC conducted an auction to reallocate radio spectrum. It had a procurement phase in which spectrum would be purchased from current holders and a second phase in which it was resold to others. The goal was to shift spectrum, where appropriate, from current holders to others who might use this scarce resource more efficiently.

It is the procurement phase that concerns us. The precise details of the auction in this phase will not matter. Its design is rooted in Ausubel’s clinching auction by way of Bikhchandani et al (2011) culminating in Milgrom and Segal (2019).

The pricing rule of the procurement auction was chosen under the assumption that each seller owned a single license. If that assumption fails, the rule allows a seller with multiple licenses to engage in what is known as supply reduction to push up the price. Even if each seller initially owned a single license, a subset of sellers could benefit from merging their assets and coordinating their bids (or an outsider could come in and aggregate some sellers prior to the auction). A recent paper by my colleagues Doraszelski, Seim, Sinkinson and Wang offers estimates of how much sellers might have gained from strategic supply reduction.

Was the choice of price rule a design flaw? I say, compared to what? How about the VCG mechanism? It would award a seller owning multiple licenses the marginal product associated with their set of licenses. In general, if the assets held by sellers are substitutes for each other, the marginal product of a set will exceed the sum of the marginal products of its individual elements. Thus, the VCG auction would have left the seller with a higher surplus than they would have obtained under the procurement auction assuming no supply reduction. As noted in Paul Milgrom's book, when goods are substitutes, the VCG auction creates an incentive for mergers. This is formalized in Sher (2010). The pricing rule of the procurement auction could be modified to account for multiple ownership (see Bikhchandani et al (2011)), but it would have the same qualitative effect: a seller would earn a higher surplus than they would have obtained under the procurement auction assuming no supply reduction. A second point of comparison would be an auction explicitly designed to discourage mergers of this kind. If memory serves, this reduces the auction to a posted-price mechanism.
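To make the merger incentive concrete, here is a toy procurement VCG computation (the license costs are invented, and this is not the FCC's pricing rule): the buyer must acquire two licenses, and each seller is paid its marginal product.

```python
# Toy VCG procurement: the buyer needs TARGET licenses at minimum cost, and
# each seller is paid its marginal product. All numbers are invented.
from itertools import combinations

TARGET = 2  # number of licenses the buyer must procure

def optimum(sellers):
    """Min-cost way to acquire TARGET licenses; returns (cost, chosen)."""
    pool = [(name, c) for name, costs in sellers.items() for c in costs]
    best = min(combinations(pool, TARGET), key=lambda sel: sum(c for _, c in sel))
    return sum(c for _, c in best), best

def vcg_payments(sellers):
    """Pay each seller the externality it imposes (its marginal product)."""
    _, chosen = optimum(sellers)
    payments = {}
    for name in sellers:
        others = {k: v for k, v in sellers.items() if k != name}
        cost_without, _ = optimum(others)
        others_cost = sum(c for owner, c in chosen if owner != name)
        payments[name] = cost_without - others_cost
    return payments

separate = {"s1": [1], "s2": [2], "s3": [4], "s4": [5]}
merged = {"M": [1, 2], "s3": [4], "s4": [5]}  # s1 and s2 pool their licenses

print(vcg_payments(separate))  # {'s1': 4, 's2': 4, 's3': 0, 's4': 0}
print(vcg_payments(merged))    # {'M': 9, 's3': 0, 's4': 0}
```

Separately, sellers 1 and 2 are each paid 4, for a combined surplus of 5 over their costs; merged, they are paid 9, a surplus of 6. The marginal product of the pair exceeds the sum of the individual marginal products, so merging pays.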

Was there anything that could have been done to discourage mergers? The auction did have reserve prices, so an upper limit was set on how much would be paid for licenses. Legal action is a possibility, but it's not clear whether that could have been pursued without delaying the auction.

Stepping back, one might ask a more basic question: should the reallocation of spectrum have been done by auction? Why not follow Coase and let the market sort it out? The orthodox answer is no because of hold-up and transaction costs. However, as Thomas Hazlett has argued, there are transaction costs on the auction side as well.


Volume 42 of the AER, published in 1952, contains an article by Paul Samuelson entitled `Spatial Price Equilibrium and Linear Programming’. In it, Samuelson uses a model of Enke (1951) as a vehicle to introduce the usefulness of linear programming techniques to Economists. The second paragraph of the paper is as follows:

In recent years economists have begun to hear about a new type of theory called linear programming. Developed by such mathematicians as G. B. Dantzig, J. v. Neumann, A. W. Tucker, and G. W. Brown, and by such economists as R. Dorfman, T. C. Koopmans, W. Leontief, and others, this field admirably illustrates the failure of marginal equalization as a rule for defining equilibrium. A number of books and articles on this subject are beginning to appear. It is the modest purpose of the following discussion to present a classical economics problem which illustrates many of the characteristics of linear programming. However, the problem is of economic interest for its own sake and because of its ancient heritage.

Of interest are the five reasons that Samuelson gives for why readers of the AER should care; a small worked example follows the list.

  1. This viewpoint might aid in the choice of convergent numerical iterations to a solution.

  2. From the extensive theory of maxima, it enables us immediately to evaluate the sign of various comparative-statics changes. (E.g., an increase in net supply at any point can never in a stable system decrease the region’s exports.)

  3. By establishing an equivalence between the Enke problem and a maximum problem, we may be able to use the known electric devices for solving the former to solve still other maximum problems, and perhaps some of the linear programming type.

  4. The maximum problem under consideration is of interest because of its unusual type: it involves in an essential way such non-analytic functions as absolute value of X, which has a discontinuous derivative and a corner; this makes it different from the conventionally studied types and somewhat similar to the inequality problems met with in linear programming.

  5. Finally, there is general methodological and mathematical interest in the question of the conditions under which a given equilibrium problem can be significantly related to a maximum or minimum problem.
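In its simplest form, the Enke problem is a transportation problem, and it is trivial to pose as an LP today. Here is a minimal sketch with invented supplies, demands, and shipping costs (SciPy is assumed):

```python
# Minimal transportation LP in the spirit of the Enke-Samuelson problem.
# All numbers are invented. Variables x[i][j] = shipment from supply
# region i to demand region j, chosen to minimize total transport cost.
import numpy as np
from scipy.optimize import linprog

supply = [20, 30]            # units available in regions S1, S2
demand = [25, 25]            # units required in regions D1, D2
cost = np.array([[1, 4],     # unit cost: S1->D1, S1->D2
                 [3, 2]])    # unit cost: S2->D1, S2->D2

A_eq = [[1, 1, 0, 0],  # S1 ships out exactly its supply
        [0, 0, 1, 1],  # S2 ships out exactly its supply
        [1, 0, 1, 0],  # D1 receives exactly its demand
        [0, 1, 0, 1]]  # D2 receives exactly its demand
res = linprog(cost.flatten(), A_eq=A_eq, b_eq=supply + demand,
              bounds=[(0, None)] * 4)

print(res.x.reshape(2, 2))  # optimal shipments
print(res.fun)              # minimized transport cost
# With recent SciPy, res.eqlin.marginals returns the duals of the regional
# constraints (sign conventions vary); differences in these duals are the
# equilibrium price gaps between regions, which is Samuelson's point.
```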
