By Sara McLaughlin Mitchell
[Ed. note: This post is contributed by Sara McLaughlin Mitchell, Professor and Chair of the Department of Political Science at the University of Iowa.]
For academics, the peer review process can be one of the most rewarding and frustrating experiences of our careers. Detailed and careful reviews of our work can significantly improve the quality of our published research and identify new avenues for future research. Negative reviews of our work, while also helpful in terms of identifying weaknesses in our research, can be devastating to our egos and our mental health. My perspectives on peer review have been shaped by twenty years of experience submitting my work to journals and book publishers and by serving as an Associate Editor for two journals, Foreign Policy Analysis and Research & Politics. In this piece, I will 1) discuss the qualities of good reviews, 2) provide advice for how to improve the chances for publication in the peer review process, and 3) discuss some systemic issues our discipline faces in ensuring high quality peer review.
Let me begin by arguing that we need to train scholars to write quality peer reviews. When I teach upper level graduate seminars, I have students submit a draft of their research paper about one month before the class ends. I then anonymously assign two other students as peer reviewers for each paper and serve as the third reviewer myself. I send students examples of reviews I have written for journals and provide general guidelines about what improves the quality of a peer review. After I distribute the three peer reviews, students have two weeks to revise their papers and write a memo describing their revisions. Their final research paper grade is based on the quality of their work at each of these stages, including their efforts to review classmates’ research projects.[1]
Writing High Quality Peer Reviews
What qualities do good peer reviews share? My first observation is that tone is essential to helping an author improve their research. If you make statements such as “this was clearly written by a graduate student” or “this paper is not important enough to be published in journal X” or “this person knows nothing about the literature on this topic”, you are not being helpful. These kinds of blanket negative statements can only serve to discourage junior (and senior!) scholars from submitting work to peer reviewed outlets.[2] Thus one should always consider what contributions a paper is making to the discipline and then proceed with ideas for making the final product better.
In addition to crafting reviews with a positive tone, I also recommend that reviewers focus on criticisms internal to the project rather than moving to a purely external critique. For example, suppose an author was writing a paper on the systemic democratic peace. An internal critique might point to other systemic research in international relations that would help improve the author’s theory or identify alternative ways to measure systemic democracy. An external critique, however, might argue that systemic research is not useful for understanding the democratic peace and that the author should abandon this perspective in favor of dyadic analyses. If you find yourself writing reviews where you are telling authors to dramatically change their research question or theoretical perspective, you are not helping them produce publishable research. As an editor, I find it much more helpful to receive reviews that accept the author’s research goals and then provide suggestions for improvement. Along these lines, it is very common for reviewers to say things like “this person does not know the literature on the democratic peace” and then fail to provide a single citation for the research they believe is missing from the bibliography.[3] If you think an author is not engaging with an important literature for their topic, help the scholar by citing some of that work in your review. If you do not have time to add full citations, even providing authors’ last names and the years of publication can be helpful.
Another common strategy that reviewers take is to ask for additional analyses or robustness checks, something I find very useful as a reader of scholarly work. However, reviewers should identify new analyses or data that are essential for checking the robustness of the particular relationship being tested, rather than worrying about all other control variables out there in the literature or all alternative statistical estimation techniques for a particular problem. A person reviewing a paper on the systemic democratic peace could reasonably ask for alternative democracy measures or control variables for other major systemic dynamics (e.g. World Wars, hegemonic power). Asking the scholar to develop a new measure for democracy or to test her model against all other major international relations systemic theories is less reasonable. I understand the importance of checking the robustness of empirical relationships, but I also think we can press this too far when we expect an author to conduct dozens of additional models to demonstrate their findings. In fact, authors are anticipating that reviewers will ask for such things and are preemptively responding by including appendices with additional models. In conversations with my junior colleagues (who love appendices!), I have noted that they are doing a lot of extra work on the front end and potentially getting fewer publications from these materials when they relegate so much of their work to appendices. Had Bruce Russett and John Oneal adopted this strategy, they would have published one paper on the Kantian tripod for peace, rather than multiple papers that focused on different legs of the tripod. I also feel that very long appendices place additional burdens on reviewers who are already paying costs to read a 30+ page paper.[4]
Converting R&Rs to Publications
Once an author receives an invitation from a journal to revise and resubmit (R&R) a research paper, what strategies can they take to improve their chances for successfully converting the R&R to a publication? My first recommendation is to go through each review and the editors’ decision letter and identify each point being raised. I typically move each point into a document that will become the memo describing my revisions and then proceed to work on the revisions. My memos have a general section at the beginning that provides an overview of the major revisions I have undertaken, followed by separate sections for the editors’ letter and each of the reviews. Each point raised by the editors or reviewers is presented, followed by a description of how I revised the paper in light of that comment and the page number where the revised text or results can be found. It is a good idea to identify criticisms that are raised by multiple reviewers because it is imperative to address these issues in your revisions. You should also read the editors’ letter carefully because they often provide ideas about which criticisms are most important to address from their perspective. Additional robustness checks that you have conducted can be included in an appendix that will be submitted with the memo and your revised paper.
As an associate editor, I have observed authors failing at this stage of the peer review process. One mistake I often see is authors becoming defensive in response to the reviewers’ advice. This leads them to argue against each point in their memo rather than to learn constructively from the reviews about how to improve the research. Another mistake is ignoring advice that the editors explicitly provide. The editors are making the final decision on your manuscript, so you cannot afford to alienate them. You should be aware of the journal’s approach to handling manuscripts with R&R decisions. Some journals send the manuscript to the original reviewers plus a new reviewer, while other journals either send it back only to the original reviewers or make an in-house editorial decision. These procedures can dramatically influence your chances for success at the R&R stage. If the paper is sent to a new reviewer, you should expect another R&R decision to be very likely.[5]
Getting a revise and resubmit decision is exciting for an author, but it can also be daunting once one sees how many revisions might be expected. You have to determine how to strike a balance between defending your ideas and revising your work in response to the criticisms you have received in the peer review process. My observation is that authors who are open to criticism and can learn from reviewers’ suggestions are more successful in converting R&Rs to publications.
Peer Review Issues in our Discipline
Peer review is an essential part of our discipline for ensuring that political science publications are of the highest quality possible. In fact, I would argue that journal publishing, especially in the top journals in our field, is one of the few processes where a scholar’s previous publication record or pedigree is not terribly important. My chances of getting accepted at APSR or AJPS have not changed over the course of my career. However, once I published a book with Cambridge University Press, I had many acquisitions editors asking me about ideas for future book publications. There are clearly many books in our discipline that have important influences on the way we think about political science research questions, but I would contend that journal publications are the ultimate currency for high caliber research given the high degree of difficulty of publishing in the best journals in our discipline.[6]
Having said that, I recognize that there are biases in the journal peer review process. One thing that surprised me in my career was how the baseline probability for publishing varied dramatically across different research areas. I worked in some areas where R&R or conditional acceptance was the norm and in other research areas where almost every piece was rejected.[7] For example, topics that have been very difficult for me to publish journal articles on include international law, international norms, human rights, and maritime conflicts. One of my early articles on the systemic democratic peace (Mitchell, Gates, and Hegre 1999) was published in a good IR journal despite all three reviewers being negative; the editor at the time (an advocate of the democratic peace himself) took a chance on the paper. Papers I have written on maritime conflicts have been rejected at six or more journals before getting a single R&R decision. My work that crosses over into international law also tends to be rejected multiple times because satisfying both political science and international law reviewers can be difficult. Other topics I have written on have experienced smoother sailing through journal review processes. Work on territorial and cross-border river conflicts has been more readily accepted, which is interesting given that maritime issues are also geopolitical in nature. Diversionary conflict and alliance scholars are quite supportive of each other’s work in the review process. Other areas of my research agenda fall in between these extremes. My empirical work on militarized conflict (e.g. the issue approach) or peaceful conflict management (e.g. mediation) can draw either supportive or really tough reviewers, a function, I believe, of the large number of potential reviewers in these fields. I have seen similar patterns in advising PhD students. Some students who were working in emerging topics like civil wars or terrorism found their work well-received as junior scholars, while others working on topics like foreign direct investment and foreign aid experienced more difficulties in converting their dissertation research into published journal articles.
Smaller and more insulated research communities can be helpful for junior scholars if the junior members are accepted into the group, as the chances for publication can be higher. On the other hand, some research areas have a much lower baseline publication rate. In my experience, anything interdisciplinary lowers the probability of success, which is troubling from a diversity perspective given the tendency for women and minority scholars to be drawn to interdisciplinary research. As noted above, I have also observed that certain types of work (e.g. empirical conflict work or research on gender) face more obstacles in the review process because there are a larger number of potential reviewers, which also increases the risk that at least one person will dislike your research. In more insulated communities, the number of potential reviewers is small and they are more likely to agree on what constitutes good research. Junior scholars may not know the baseline probability of success in their research area, so it is important to talk with senior scholars about their experiences publishing on specific topics. I typically recommend a portfolio strategy with journal publishing, where junior scholars seek to diversify their substantive portfolio, especially if the research community for their dissertation project is not receptive to publishing their research.
I also think that journal editors have a collective responsibility to collect data across research areas and determine if publication rates vary dramatically. We often report on general subfield areas in annual journal reports, but we do not typically break down the data into more fine-grained research communities. The move to having scholars click on specific research areas for reviewing may facilitate the collection of this information. If reviewers’ recommendations for R&R or acceptance vary across research topics, then having this information would assist new journal editors in making editorial decisions. Once we collect this kind of data, we could also see how these intra-community reviewing patterns influence the long term impact of research fields. Are broader communities with lower probabilities of publication success more effective in the long run in terms of garnering citations to the research? We need additional data collection to assess my hypothesis that baseline publication rates vary across substantive areas of our discipline.
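To make this kind of data collection concrete, here is a minimal sketch of the sort of analysis I have in mind, assuming a hypothetical file of manuscript decisions tagged with reviewer-selected research areas (the file name, column names, and decision labels are illustrative placeholders, not any journal's actual records). It tabulates the rate of R&R-or-better outcomes by research community and tests whether editorial outcomes are independent of research area.

```python
# Hypothetical sketch: do editorial outcomes vary across research communities?
# Assumes a CSV of manuscript decisions tagged with reviewer-selected research
# areas; the file, column names, and decision labels are illustrative only.
import pandas as pd
from scipy.stats import chi2_contingency

decisions = pd.read_csv("journal_decisions.csv")  # columns: research_area, decision

# Treat R&R, conditional accept, and accept as positive outcomes.
decisions["positive"] = decisions["decision"].isin(
    ["R&R", "conditional accept", "accept"]
)

# Baseline rate of R&R-or-better by research community.
rates = decisions.groupby("research_area")["positive"].agg(["mean", "count"])
print(rates.sort_values("mean"))

# Chi-square test of whether decisions are independent of research area.
table = pd.crosstab(decisions["research_area"], decisions["positive"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```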
We also need to remain vigilant in ensuring representation of women and minority scholars in political science journals. While women constitute about 30% of faculty in our discipline (Mitchell and Hesli 2013), the publication rate by women in top political science journals is closer to 20% of all published authors (Breuning and Sanders 2007). Much of this dynamic is driven by a selection effect process whereby women spend less time on research relative to their male peers and submit fewer papers to top journals (Allen 1998; Link, Swann, and Bozeman 2008; Hesli and Lee 2011). Journal editors need to be more proactive in soliciting submissions by female and minority scholars in our field. Editors may also need to be more independent from reviewers’ recommendations, especially in low success areas that comprise a large percentage of minority scholars. It is disturbing to me that the most difficult areas for me to publish in my career have been those that have the highest representation of women (even though it is still small!). We cannot know whether my experience generalizes more broadly without collecting data on topics for conference presentations, submissions of those projects to journals, and the average “toughness” of reviewers in such fields. I believe in the peer review process and I will continue to provide public goods to protect it. I also believe that we need to determine if the process is generating biases that influence the chances for certain types of scholars or certain types of research to dominate our best journals.
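As a rough illustration of the authorship gap noted above, one could ask whether a 20% share of women among published authors is consistent with the roughly 30% faculty baseline. The sketch below uses a simple one-sided binomial test with a made-up sample size; the numbers are placeholders for illustration, not data drawn from the cited studies.

```python
# Illustrative only: is a ~20% share of women among published authors
# consistent with a ~30% faculty baseline? The sample size is a placeholder.
from scipy.stats import binomtest

n_authors = 500                    # hypothetical number of authorships sampled
n_women = round(0.20 * n_authors)  # ~20% women authors

result = binomtest(n_women, n_authors, p=0.30, alternative="less")
print(f"observed share = {n_women / n_authors:.2f}, p-value = {result.pvalue:.4f}")
```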
References
Allen, Henry L. 1998. “Faculty Workload and Productivity: Gender Comparisons.” In The NEA Almanac of Higher Education. Washington, DC: National Education Association.
Breuning, Marijke, Jeremy Backstrom, Jeremy Brannon, Benjamin Isaak Gross, and Michael Widmeier. 2015. “Reviewer Fatigue? Why Scholars Decline to Review their Peers’ Work.” PS: Political Science & Politics 48(4): 595-600.
Breuning, Marijke, and Kathryn Sanders. 2007. “Gender and Journal Authorship in Eight Prestigious Political Science Journals.” PS: Political Science & Politics 40(2): 347-351.
Djupe, Paul A. 2015. “Peer Reviewing in Political Science: New Survey Results.” PS: Political Science & Politics 48(2): 346-352.
Hesli, Vicki L., and Jae Mook Lee. 2011. “Faculty Research Productivity: Why Do Some of Our Colleagues Publish More than Others?” PS: Political Science & Politics 44(2): 393-408.
Link, Albert N., Christopher A. Swann, and Barry Bozeman. 2008. “A Time Allocation Study of University Faculty.” Economics of Education Review 27(4): 363–74.
Miller, Beth, Jon Pevehouse, Ron Rogowski, Dustin Tingley, and Rick Wilson. 2013. “How To Be a Peer Reviewer: A Guide for Recent and Soon-to-be PhDs.” PS: Political Science & Politics 46(1): 120-123.
Mitchell, Sara McLaughlin, Scott Gates, and Håvard Hegre. 1999. “Evolution in Democracy-War Dynamics.” Journal of Conflict Resolution 43(6): 771-792.
Mitchell, Sara McLaughlin and Vicki L. Hesli. 2013. “Women Don’t Ask? Women Don’t Say No? Bargaining and Service in the Political Science Profession.” PS: Political Science & Politics 46(2): 355-369.
Footnotes
[1] While I am fairly generous in my grading of students’ peer reviews given their lack of experience, I find that I am able to discriminate in the grading process. Some students more effectively demonstrate that they read the paper carefully, offering very concrete and useful suggestions for improvement. Students with lower grades tend to be those who are reluctant to criticize their peers. Even though I make the review process double blind, PhD students in my department tend to reveal themselves as reviewers of each other’s work in the middle of the semester.
[2] In a nice piece that provides advice on how to be a peer reviewer, Miller et al. (2013: 122) make a similar point: “There may be a place in life for snide comments; a review of a manuscript is definitely not it.”
[3] As Miller et al. (2013: 122) note: “Broad generalizations—for instance, claiming an experimental research design ‘has no external validity’ or merely stating ‘the literature review is incomplete’—are unhelpful.”
[4] Djupe’s (2015: 346-347) survey of APSA members shows that 90% of tenured or tenure-track faculty reviewed for a journal in the past calendar year, with the average number of reviews varying by rank (assistant professors: 5.5, associate professors: 7, and full professors: 8.3). In an analysis of review requests for the American Political Science Review, Breuning et al. (2015) find that while 63.6% of review requests are accepted, scholars declining the journal’s review requests often note that they are too busy with other reviews. There is reasonable evidence that many political scientists feel overburdened by reviews, although the extent to which extra appendices influence those attitudes is unclear from these studies.
[5] I have experienced this process myself at journals like the Journal of Peace Research, which send a revised paper to a new reviewer after the first R&R is resubmitted. I have only experienced three or more rounds of revisions on a journal article at journals that adopt this policy. My own personal preference as an editor is to make the decision in-house. I have a high standard for giving out R&Rs and thus feel qualified to make the final decision myself. One could argue, however, that by soliciting advice from new reviewers, the final published products might be better.
[6] Clearly there are differences in what can be accomplished in a 25-40 page journal article versus a longer book manuscript. Books provide space for additional analyses, in-depth case studies, and more intensive literature reviews. However, many books in my field that have been influential in the discipline have been preceded by a journal article summarizing the primary argument in a top ranked journal.
[7] This observation is based on my own personal experience submitting articles to journals and thus qualifies as more of a hypothesis to be tested rather than a settled fact.