In a 2020 paper in the Journal of Economic Literature, Heckman and Moktan argue that the profession’s obsessive focus on top 5 publications has a deleterious effect. In addition to documenting the impact a top 5 publication has on career outcomes, they argue that the hold of these journals distorts the incentives of junior faculty. For example, junior faculty may scrap a possibly good idea if it can’t get published in a top 5, or focus on ideas with a ready-made audience on the relevant editorial boards.
In addition to insufficient experimentation and exploration, there are two other consequences we should expect. As departments outsource their promotion and tenure decisions to the editorial boards of the top 5, we should anticipate increasing balkanization of individual departments. One’s colleagues have less of an incentive to engage with one’s work, and vice versa. There should also be a decline in the willingness of faculty to contribute to the needs of the department. After all, a department is now merely an ensemble of special interest groups.
A second consequence should be an increase in attempts to subvert the primary goal of the peer review process: to provide disinterested but informed assessments of work. One need only look at Computer Science (CS) to see that such a possibility is not far-fetched. Unless economists are immune to the temptations that plague other mortals, we should anticipate the same. Nihar Shah at CMU surveys the problems that beset peer review in CS. As in Economics, there is a small subset of prestige venues for one’s work. Acceptance into these venues affects grants and tenure. The increased stakes have spawned collusive refereeing rings (mutual appreciation societies would be a less pejorative term) that subvert the goal of disinterested but informed review. There is also a strong suspicion that coalitions of scientists agree to include each other as co-authors on multiple papers (a form of risk sharing) so as to maximize the chances that they will make it in.
Nihar’s paper discusses various strategies to inoculate peer review against strategic behavior, but none are perfect. The fundamental problem is that the rewards to acceptance in these select venues exceed the expected penalties one might face.
These issues are part of a larger question: what is the optimal organization of scientific activity? The literature on contests, moral hazard, and mechanism design focuses on the individual component, ignoring other aspects such as rewarding discovery versus verification, incentivizing sharing and exploration, and the decision to enter scientific work. For example, entry may involve high up-front investments in specialized skills. Who will make these investments if the ex-post rewards from doing so are concentrated on a tiny number of winners?
1 comment
October 4, 2022 at 8:58 am
Nihar B. Shah
The questions about incentives in academia are indeed very important, and thank you for highlighting them.
A comment about the issue of collusion…
In computer science, we often have automated assignment of reviewers to papers, which may also include “bidding” by reviewers on which papers they are able to review. This can be gamed.
A natural question is: what if editors hand-pick the reviewers for papers? Can that be gamed? Maybe it can… Editors might assign reviewers by looking at which previous papers the submission builds on, or which previous papers it cites. If an author wants to increase the chances of being assigned to a specific reviewer, they could try to cite that reviewer’s papers more, etc.
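To make the bidding concern concrete, here is a minimal sketch of a bid-based assignment rule. It is an illustration only: the scoring rule (bid times text similarity), the greedy matching, and all names are my assumptions, not any venue’s actual system. The point it demonstrates is that a reviewer who inflates a bid on a confederate’s paper can win the assignment even when a better-matched honest reviewer exists.

```python
# Hypothetical sketch of bid-weighted reviewer assignment.
# Scoring rule and matching procedure are illustrative assumptions,
# not the mechanism of any real conference or journal.

def assign_reviewers(papers, reviewers, bids, similarity, load=1):
    """Greedily give each paper the highest-scoring reviewer with capacity.

    Score = bid * similarity; each reviewer takes at most `load` papers.
    `bids` and `similarity` map (reviewer, paper) pairs to values in [0, 1].
    """
    remaining = {r: load for r in reviewers}
    assignment = {}
    for p in papers:
        best, best_score = None, -1.0
        for r in reviewers:
            if remaining[r] == 0:
                continue
            score = bids.get((r, p), 0.0) * similarity.get((r, p), 0.0)
            if score > best_score:
                best, best_score = r, score
        assignment[p] = best
        if best is not None:
            remaining[best] -= 1
    return assignment


# A colluder with poor topical fit but a maximal bid outscores an
# honest, better-matched reviewer who bids moderately.
papers = ["p1"]
reviewers = ["honest", "colluder"]
similarity = {("honest", "p1"): 0.9, ("colluder", "p1"): 0.5}
bids = {("honest", "p1"): 0.5, ("colluder", "p1"): 1.0}
result = assign_reviewers(papers, reviewers, bids, similarity)
```

Here the colluder’s score is 1.0 × 0.5 = 0.5 versus the honest reviewer’s 0.5 × 0.9 = 0.45, so the colluder gets the paper despite the worse fit.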
If collusion is a concern in journals, one mitigating strategy could be to ensure that any author’s papers are sent to a diverse pool of reviewers over time.
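One way to sketch this diversity-over-time idea, under assumptions of my own (a simple review history and a fixed cooldown of k rounds), is to filter out any reviewer who has recently reviewed the same author before assignment is even run:

```python
# Illustrative sketch of "diverse pool over time": exclude reviewers
# who reviewed this author within the last k rounds. The history format
# and the cooldown rule are assumptions for illustration only.

def eligible_reviewers(author, reviewers, history, k=3, current_round=0):
    """Return reviewers who have not reviewed `author` in the last k rounds.

    `history` maps (reviewer, author) -> round of their most recent review
    of that author; reviewers absent from `history` are always eligible.
    """
    pool = []
    for r in reviewers:
        last = history.get((r, author))
        if last is None or current_round - last > k:
            pool.append(r)
    return pool


# r1 reviewed author "a" last round (excluded); r2 did so long ago and
# r3 never has (both eligible).
history = {("r1", "a"): 9, ("r2", "a"): 5}
pool = eligible_reviewers("a", ["r1", "r2", "r3"], history,
                          k=3, current_round=10)
```

Any assignment rule (editor-chosen or automated) would then draw only from this filtered pool, which caps how often the same reviewer-author pair can recur.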