Acceptance rates and the aesthetics of peer review

January 13, 2016

By Justin Esarey

Based on the contributions to The Political Methodologist’s special issue on peer review, it seems that many political scientists are not happy with the kind of feedback they receive from the peer review process. A recurring theme is that reviewers focus less on the scientific merits of a piece–viz., what can be learned from the evidence offered–and more on whether the piece is to the reviewer’s taste in terms of the questions asked and methodologies employed. While I agree that this feedback is unhelpful and undesirable, I am also concerned that it is a fundamental feature of the way our peer review system works. More specifically, I believe that a system of journals with prestige tiers enforced by extreme selectivity creates a review system in which scientific soundness is a necessary but far from sufficient criterion for publication, meaning that fundamentally aesthetic and sociological factors ultimately determine what gets published and inform the content of our reviews.

As Brendan Nyhan says, “authors frequently despair not just about timeliness of the reviews they receive but their focus.” Nyhan seeks to improve the focus of reviews by offering a checklist of questions that reviewers should answer as a part of their reviews (omitting those questions that, presumably, they should not seek to answer). These questions revolve around ensuring that evidence offered is consistent with conclusions (“Does the author control for or condition on variables that could be affected by the treatment of interest?”) and that statistical inferences are unlikely to be spurious (“Are any subgroup analyses adequately powered and clearly motivated by theory rather than data mining?”).

The other contributors express opinions in sync with Nyhan’s point of view. For example, Tom Pepinsky says:

“I strive to be indifferent to concerns of the type ‘if this manuscript is published, then people will work on this topic or adopt this methodology, even if I think it is boring or misleading.’ Instead, I try to focus on questions like ‘is this manuscript accomplishing what it sets out to accomplish?’ and ‘are there ways that my comments can make it better?’ My goal is to judge the manuscript on its own terms.”

Relatedly, Sara Mitchell argues that reviewers should focus on “criticisms internal to the project rather than moving to a purely external critique.” This is explored more fully in the piece by Krupnikov and Levine, where they argue that simply writing “external validity concern!” next to any laboratory experiment hardly addresses whether the article’s evidence actually answers the questions offered; in a way, the attitude they criticize comes uncomfortably close to arguing that any question that can be answered using laboratory experiments doesn’t deserve to be asked, ipso facto.

My own perspective on what a peer review ought to be has changed during my career. Like Tom Pepinsky, I once thought my job was to “protect” the discipline from “bad research” (whatever that means). Now, I believe that a peer review ought to answer just one question: What can we learn from this article? [1]

Specifically, I think that every sentence in a review ought to be:

  1. a factual statement about what the author believes can be learned from his/her research, or
  2. a factual statement of what the reviewer thinks actually can be learned from the author’s research, or
  3. an argument about why something in particular can (or cannot) be learned from the author’s research, supported by evidence.

This feedback helps an editor learn the marginal contribution that the submitted paper makes to our understanding, informing his/her judgment about publication. It also helps the author understand what s/he is communicating in the piece and whether claims must be trimmed or hedged to ensure congruence with the offered evidence (or whether more evidence must be offered to support claims that are central to the article).

Things that I think shouldn’t be addressed in a review include:

  1. whether the reviewer thinks the contribution is sufficiently important to be published in the journal
  2. whether the reviewer thinks other questions ought to have been asked and answered
  3. whether the reviewer believes that an alternative methodology would have been able to answer different or better questions
  4. whether the paper comprehensively reviews extant literature on the subject (unless the paper defines itself as a literature review)

In particular, I think that the editor is the person in the most appropriate position to decide whether the contribution is sufficiently important for publication, as that is a part of his/her job; I also think that such a decision should be made (whenever possible) by the editorial staff before reviews are solicited. (Indeed, in another article I offer simulation evidence that this system actually produces better journal content, as evaluated by the overall population of political scientists, compared to a more reviewer-focused decision procedure.) Alternatively, the importance of a publication could be decided (as Danilo Freire suggests) by the discipline at large, as expressed in readership and citation rates, rather than by one editor (or a small number of anonymous reviewers); such a system is certainly conceivable in the post-scarcity publication environment created by online publishing.

Of course, as our suite of contributions to TPM makes clear, most of us do not receive reviews that are focused narrowly on the issues that I have outlined. Naturally, this is a frustrating experience. I think it is particularly trying to read a review that says something like, “this paper makes a sound scientific contribution to knowledge, but that contribution is simply not important enough to be published in journal X.” It is annoying precisely because the review concedes that the paper is not flawed, merely that it is not to the reviewer’s taste. It is the academic equivalent of being told that the reviewer is “just not that into you.” It is a fundamentally unactionable criticism.

Unfortunately, I believe that authors are likely to receive more, not less, of this kind of feedback in the future, regardless of what I or anyone else may think. The reason is that journal acceptance rates are so low, and the proportion of manuscripts that make sound contributions to knowledge is so high, that other criteria must necessarily be used to select the papers that will be published from the set of those that could credibly be published.

Consider that in 2014, the American Journal of Political Science accepted only 9.6% of submitted manuscripts and International Studies Quarterly accepted about 14%. The trend is generally downward: at Political Research Quarterly, acceptance rates fell by a third between 2006 and 2011 (to just under 12 percent). I suspect that far more than 10-14% of the manuscripts received by AJPS, ISQ, and PRQ were scientifically sound contributions to political science that could easily have been published in those journals–at least, this is what editors tend to write in their rejection letters!

When (let us say, for argument’s sake) 25% of submitted articles are scientifically sound but journal acceptance rates are less than half that value, editors (and, by extension, reviewers) must necessarily choose on criteria other than soundness when selecting articles for publication. It is natural that the slippery and socially constructed criterion of “importance,” in its many guises, would come to the fore in such an environment. Did the paper address questions you think are the most “interesting”? Did the paper use what you believe are the most “cutting edge” methodologies? “Interesting” questions and “cutting edge” methodologies are aesthetic judgments, at least in part, defined relative to the group of people making them. Consequently, I fear that the peer review process must become as much a function of sociology as of science because of the increasingly competitive nature of journal publication. Insofar as I am correct, I would prefer that these aesthetic judgments come from the discipline at large (as embodied in readership rates and citations) rather than from two or three anonymous colleagues.
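
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python (the 25% soundness figure is the hypothetical from above; the 10% acceptance rate approximates AJPS’s 2014 figure):

    # Minimal sketch: if at most accept_rate of submissions can be accepted,
    # but sound_rate of submissions are scientifically sound, then at least
    # (sound_rate - accept_rate) / sound_rate of the sound papers must be
    # rejected on criteria other than soundness.
    sound_rate = 0.25   # hypothetical share of sound submissions (from the text)
    accept_rate = 0.10  # roughly AJPS's 2014 acceptance rate (9.6%)

    min_rejected_sound = (sound_rate - accept_rate) / sound_rate
    print(f"At least {min_rejected_sound:.0%} of sound manuscripts must be "
          "rejected on grounds other than soundness.")
    # At least 60% of sound manuscripts must be rejected on grounds other than soundness.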

Still, as long as there are tiers of journal prestige and these tiers are a function of selectivity, I would guess that the power of aesthetic criteria to influence the peer review process will persist. Indeed, I suspect that the proportion of sound contributions in the submission pool is trending upward because of the intensive focus of many PhD programs on rigorous research design training and the ever-increasing requirements of tenure and promotion committees. At the very least, the number of submissions is going up (from 134 in 2001 to 478 in 2014 at ISQ), so even if quality is stable, selectivity must rise if the number of journal pages stays constant. Consequently, I fear that a currently frustrating situation is likely to get worse over time, with articles being selected for publication in the most prominent journals of our discipline on progressively more whimsical criteria.
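
To see how much rising submission volume alone compresses acceptance rates, here is a quick illustrative calculation (the ISQ submission counts are from the text; the fixed number of accepted slots is an assumption, set so that the 2014 rate matches the roughly 14% reported above):

    # Illustrative only: assume ISQ's page budget allows a fixed number of
    # articles per year, while submissions grow as reported in the text.
    slots = 67  # assumed constant yearly acceptances (67/478 is about 14%)
    for year, submissions in [(2001, 134), (2014, 478)]:
        print(year, f"implied acceptance rate: {slots / submissions:.1%}")
    # 2001 implied acceptance rate: 50.0%
    # 2014 implied acceptance rate: 14.0%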

What can be done? At the least, I think we can recognize that the “tiers” of journal prestige do not necessarily mean what they once did in terms of scientific quality or even interest to a broad community of political scientists and policy makers. Beyond this, I am not sure. Perhaps a system that rewards authors more for citation rates and less for the “prestige” of the publication outlet might help. But undoubtedly such systems would also have unanticipated and undesirable properties, and it remains to be seen whether they would truly improve scholarly satisfaction with the peer review system.

Footnotes

[1] Our snarkier readers may be thinking that this question can be answered in just one word for many papers they review: “nothing.” I cannot exclude that possibility, though it is inconsistent with my own experience as a reviewer. If a reviewer truly believed that nothing could be learned from a paper, I would hope that the reviewer would provide feedback lengthy and detailed enough to justify that conclusion.