Peering at Open Peer Review

December 08, 2015

By Danilo Freire

Introduction

Peer review is an essential part of the modern scientific process. Sending manuscripts for others to scrutinize is such a widespread practice in academia that its importance cannot be overstated. Since the late seventeenth century, when the Philosophical Transactions of the Royal Society pioneered editorial review,[1] virtually every scholarly outlet has adopted some form of pre-publication assessment of received works. Although the specifics vary, the procedure has remained largely the same since its inception: submit, receive anonymous criticism, revise, and restart the process if required. A recent survey of APSA members indicates that political scientists overwhelmingly believe in the value of peer review (95%) and that the vast majority of them (80%) see peer review as a useful tool for keeping up with cutting-edge research (Djupe 2015, 349). But do these figures suggest that journal editors can rest on their laurels and leave the system as it is?

Not quite. A number of studies have documented the shortcomings of peer review. The system has been criticised for being too slow (Ware 2008), conservative (Eisenhart 2002), inconsistent (Smith 2006; Hojat, Gonnella, and Caelleigh 2003), and nepotistic (Sandström and Hällsten 2008), as well as biased against women (Wold and Wennerås 1997) and against authors on the basis of their affiliation (Peters and Ceci 1982), nationality (Ernst and Kienbacher 1991), and language (Ross et al. 2006). These complaints have fostered interesting academic debates (e.g. Meadows 1998; Weller 2001), but thus far the literature offers little practical advice on how to tackle the problems of peer review. One often overlooked aspect of these discussions is how to give reviewers incentives to write well-balanced reports. On the one hand, it is not uncommon for reviewers to feel that their work is burdensome and not properly acknowledged. Further, due to the anonymous nature of the reviewing process itself, it is impossible to give a referee proper credit for a constructive report. On the other hand, the reviewers’ right to full anonymity may lead to sub-optimal outcomes, as referees can rarely be held accountable for overly harsh or careless reports (Fabiato 1994).

In this short article, I argue that open peer review can address these issues in a variety of ways. Open peer review consists of requiring referees to sign their reports and requesting that editors publish the reviews alongside the final manuscripts (DeCoursey 2006). Additionally, I suggest that political scientists would benefit from using an existing online repository, or creating a dedicated one, to store referee reports relevant to the discipline. Although these ideas have long been implemented in the natural sciences (DeCoursey 2006; Ford 2013; Ford 2015; Greaves et al. 2006; Pöschl 2012), the pros and cons of open peer review have rarely been discussed in our field. As I argue below, open peer review should be the norm in the social sciences for several reasons. It provides context to published manuscripts, encourages post-publication discussion and, most importantly, makes the whole editorial process more transparent. Public reports also have important pedagogical benefits, offering students first-hand exposure to the best reviewing practices in their field. Finally, open peer review not only allows referees to showcase their expert knowledge, but also creates an additional incentive for scholars to write timely and thoughtful critiques. Since online storage costs are currently low and Digital Object Identifiers[2] (DOIs) are easy to obtain, the ideas proposed here are feasible and can be promptly implemented.

In the next section, I present the argument for an open peer review system in further detail. I address some of the questions referees may have and explain why they should not be reluctant to share their work. I then show how scholars can use Publons,[3] a startup created for this purpose, to publicize their reviews. The last section offers some concluding remarks.

Opening Yet Another Black Box

Over the last few decades, political scientists have pushed for higher standards of reproducibility in the discipline. Proposals to increase openness in the field have often sparked controversy,[4] but they have achieved considerable success. In comparison with just a few years ago, data sets are now widely available online,[5] open-source software such as R and Python is increasingly popular in classrooms, and even version control has been making inroads into scholarly work (Gandrud 2013a; Gandrud 2013b; Jones 2013).

Open peer review (henceforth OPR) is largely in line with this trend towards a more transparent political science. Several definitions of OPR have been suggested, including more radical ones such as allowing anyone to write pre-publication reviews (crowdsourcing) or fully replacing peer review with post-publication comments (Ford 2013). However, I believe that by adopting a narrow definition of OPR – asking only that referees sign their reports – we can better accommodate the positive aspects of traditional peer review, such as author blinding, within an open framework. Hence, in this text OPR is understood as a reviewing method in which both the referees’ identities and their reports are disclosed to the public, while the authors’ identities remain unknown to the reviewers before manuscript publication.

How exactly would OPR increase transparency in political science? As a number of articles on the topic have noted, OPR creates incentives for referees to write insightful reports, or at the very least has no adverse impact on the quality of reviews (DeCoursey 2006; Godlee 2002; Groves 2010; Pöschl 2012; Shanahan and Olsen 2014). In a randomized trial of OPR at the British Journal of Psychiatry, Walsh et al. (2000) show that “signed reviews were of higher quality, were more courteous and took longer to complete than unsigned reviews.” Similar results were reported by McNutt et al. (1990, 1374), who affirm that “editors graded signers as more constructive and courteous […], [and] authors graded signers as fairer.” In the same vein, Kowalczuk et al. (2013) measured differences in review quality in BMC Microbiology and BMC Infectious Diseases and found that signers received higher ratings for their feedback on methods and for the amount of evidence they mobilised to substantiate their decisions. Van Rooyen and her colleagues (1999; 2010) also ran two randomized studies on the subject; although they found no major difference in the perceived quality of the two types of review, reviewers in the treatment group took significantly more time to evaluate the manuscripts than those in the control group. They also note that authors broadly favored the open system over closed peer review.

Another advantage of OPR is that it offers referees a clear way to highlight their specialized knowledge. When reviews are signed, referees receive credit for important, yet virtually unsung, academic contributions. Instead of a rather vague “service to the profession” section in their CVs, referees can provide precise information about the topics they know well and the sort of advice they give prospective authors. Moreover, reports assigned a DOI can be shared like any other piece of scholarly work, which adds to the body of knowledge of our discipline and increases the number of citations referees receive. In this sense, signed reviews can also be useful to universities and funding bodies as an additional means of assessing the expert knowledge of a prospective candidate. And since supervising skills are somewhat difficult to measure, signed reviews offer a useful proxy for an applicant’s teaching abilities.

OPR also provides context for manuscripts at the time of publication (Ford 2015; Lipworth et al. 2011). It is not uncommon for a manuscript to take months, or even years, to be published in a peer-reviewed journal. In the meantime, the text usually undergoes several major revisions, but readers rarely, if ever, see this trial-and-error process in action. With public reviews, everyone can track the changes made to the original manuscript and understand how the referees improved the text before its final version. Hence, OPR makes the scientific exchange visible, provides useful background information on manuscripts, and fosters post-publication discussion by the readership at large.

Signed and public reviews are also important pedagogical tools. OPR offers a rare glimpse of how academic research is actually conducted, making explicit the multiple iterations between authors and editors that usually precede an article’s appearance in print. Furthermore, OPR can fill part of the gap in peer-review training for graduate students: it allows junior scholars to compare different review styles, understand the current empirical and theoretical puzzles of their discipline, and engage in post-publication discussions about topics that interest them (Ford 2015; Lipworth et al. 2011).

One may question the importance of OPR by affirming, as does Khan (2010), that “open review can cause reviewers to blunt their opinions for fear of causing offence and so produce poorer reviews.” The problem would be particularly acute for junior scholars, who might refrain from offering honest criticism of senior faculty members for fear of reprisals (Wendler and Miller 2014, 698). This argument, however, seems misguided. First, as noted above, there is thus far no empirical evidence that OPR is detrimental to the quality of reviews. We can therefore turn this criticism on its head and suggest that it is not OPR that needs to justify itself, but rather the traditional system (Godlee 2002; Rennie 1998). Since closed peer review does not lead to higher-quality critiques, one may reasonably ask why OPR should not be widely implemented on ethical grounds alone. In addition, recent replication papers by young political scientists have successfully pointed out mistakes in the scholarly work of seasoned researchers (e.g. Bell and Miller 2015; Broockman, Kalla, and Aronow 2015; McNutt 2015). These replications have been especially helpful to graduate students building their careers (King 2006), and a priori there is no reason why insightful peer reviews could not have the same positive effect on a young scholar’s academic reputation.

A second criticism of OPR, closely related to the first, is that it leads to higher acceptance rates because reviewers become more lenient in their comments. While there is indeed some evidence that reviewers who opt for signed reports are slightly more likely to recommend publication (Walsh et al. 2000), the concern over excessive acceptance rates seems unfounded. First, higher acceptance can be a positive outcome if it reflects the difference between truly constructive reviews and the overly zealous criticism of the closed system (Fabiato 1994, 1136). Moreover, if a manuscript is deemed relevant by editors and peers, there is no reason why it should not eventually be accepted. Finally, if the reviews themselves are made available, the whole publication process can be verified and contested when necessary. There is little reason to worry about low-quality texts making it into the pages of flagship journals.

Implementation

Open peer reviews are meant to be public by design, so it is important to devote some thought to the best ways to make this information available. Since a few science journals have already implemented variants of OPR, they are a natural starting point for our discussion. The BMJ[6] follows a simple yet effective method of publicizing its reports: all accompanying texts are available alongside the published article in a tab named “peer review.” Readers can access related reviews and additional materials without leaving the main article page, which is quite convenient. Similar methods have been adopted by open access journals such as F1000Research,[7] PeerJ[8] and Royal Society Open Science.[9] Most of these publications let authors decide whether reviewing information should be made public: researchers who do not feel comfortable sharing the comments they receive can opt out of OPR simply by notifying the editors. At BMJ and F1000Research, however, OPR is mandatory.

In this regard, editors must weigh the pros and cons of making OPR compulsory. While OPR is to be encouraged for the reasons stated above, it is also crucial that authors and referees have a say in the final decision. The model adopted by Royal Society Open Science seems to strike the best balance between openness and privacy and could, in theory, serve as a template for political science journals. It allows for four possible scenarios: 1) if both the author and the referee agree to OPR, the signed review is made public; 2) if only the referee agrees to OPR, his or her name is disclosed only to the author; 3) if only the author agrees to OPR, the report is made public but the referee’s name is withheld from both the author and the public; 4) if neither the author nor the referee agrees to OPR, the report is not made public and the author never learns the referee’s name.[10] This method leaves room for all parties to reach an agreement about the publication of complementary texts while still favoring the open system.
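
To make this decision rule concrete, it can be expressed in a few lines of Python. The sketch below is purely illustrative: the function name and output labels are mine, not part of any journal’s actual system.

    # Illustrative encoding of the Royal Society Open Science consent model.
    # The function name and labels are hypothetical, not a real journal API.
    def disclosure_policy(author_agrees: bool, referee_agrees: bool) -> dict:
        """Return what is disclosed for each combination of consents."""
        return {
            # The report is published whenever the author consents (cases 1 and 3).
            "report_public": author_agrees,
            # The referee is named publicly only if both parties consent (case 1).
            "referee_named_publicly": author_agrees and referee_agrees,
            # The author learns the referee's name whenever the referee
            # consents (cases 1 and 2).
            "referee_named_to_author": referee_agrees,
        }

    # Case 3: only the author agrees; the report is public, the referee anonymous.
    print(disclosure_policy(author_agrees=True, referee_agrees=False))

Note how the whole policy reduces to two independent consents: the author controls whether the report is published, while the referee controls who learns his or her name.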

If journals do not want to modify their current page layouts, another option is to make referee reports available in a data repository modeled after Dataverse or figshare.[11] This guarantees not only that reports are properly credited to the reviewers, but also that any interested reader can access the reviews even without an institutional subscription. An online repository greatly simplifies the process of assigning a DOI to each report at the point of publication, so reviews can easily be shared and cited. In this model, journals would be free to choose between uploading reports to an established repository and creating their own virtual services, akin to the several journal-specific dataverses already used to store replication data. Political science departments could implement the same idea to showcase the reviewing work of their faculty.
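
To illustrate how little infrastructure this requires, the snippet below sketches how a repository might register a DOI for a referee report through a DataCite-style REST endpoint. It is a hypothetical example under stated assumptions, not an endorsement of any provider: the DOI prefix, credentials, metadata, and repository URL are all placeholders, and the exact workflow depends on the journal’s registration agency.

    # A minimal sketch of DOI registration for a referee report.
    # The endpoint follows DataCite's REST API conventions; the prefix,
    # credentials, and URLs below are placeholders (assumptions).
    import requests

    payload = {
        "data": {
            "type": "dois",
            "attributes": {
                "prefix": "10.12345",  # placeholder DOI prefix
                "event": "publish",
                "creators": [{"name": "Doe, Jane"}],
                "titles": [{"title": "Referee report for 'Some Article'"}],
                "publisher": "Example Political Science Journal",
                "publicationYear": 2015,
                "types": {"resourceTypeGeneral": "Text"},
                # Landing page where the report itself is hosted
                "url": "https://repository.example.org/reports/42",
            }
        }
    }

    response = requests.post(
        "https://api.datacite.org/dois",
        json=payload,
        auth=("REPOSITORY_ID", "PASSWORD"),  # placeholder credentials
        headers={"Content-Type": "application/vnd.api+json"},
    )
    response.raise_for_status()
    print(response.json()["data"]["id"])  # the newly registered DOI

Once a report has a DOI, it can be cited, tracked, and listed on a CV exactly like any other publication.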

Since such large-scale changes may take time, referees who believe in the merits of OPR can also publish their reports online individually. The startup Publons[12] offers a handy platform for reviewers to store and share their work, either as members of an institution or independently. Publons is free to use and can be linked to an ORCID profile in only a few clicks.[13] For those concerned with privacy, reviewers retain full control over whatever they publish on Publons, and no report is made public without the explicit consent of everyone involved in the publication process. More specifically, the referee must first check that publicizing his or her review complies with the journal’s policies and that the reviewed manuscript has already been published in an academic journal and assigned a DOI. If these requirements are met, the report is made available online.

Publons also has a verification system for review records that allows referees to receive credit even when their full reports cannot be hosted on the website. For instance, reports on rejected manuscripts are not publicized, in order to protect the author’s privacy,[14] and editorial policies often require reviewers to keep their reports confidential. Verified review receipts circumvent these problems.[15] First, the scholar forwards his or her review receipts (i.e. the e-mails sent by journal editors) to Publons. The website checks the authenticity of the reports, either automatically or by contacting the editors, and notifies referees once their reports have been verified. From that point on, reviewers are free to make this information public.

Finally, Publons can be used as a post-publication platform where scholars comment on manuscripts that have already appeared in journals. Post-publication reviews are still uncommon in political science, but they offer a good opportunity for PhD students to demonstrate their knowledge of a given topic and to enter Publons’ list of recommended reviewers. Since graduate students do not engage in traditional peer review as often as university professors, post-publication notes are one of the most effective ways for a junior researcher to join an ongoing academic debate.

Discussion

In this article I have argued that open peer review is an important yet overlooked tool for improving scientific exchange. It fosters higher levels of trust and reliability in academia, makes conflicts of interest explicit, lends credibility to referee reports, gives credit to reviewers, and allows others to scrutinize every step of the publishing process. While some scholars have stressed possible drawbacks of the signed review system, open peer review rests on strong empirical and ethical grounds. Evidence from other fields suggests that signed reviews are of higher quality than their unsigned counterparts; at the very least, they promote research transparency without reducing the average quality of reports.

Scholars willing to share their reports online are encouraged to create a Publons account. As I have tried to show, there are several advantages for researchers who engage in the open system, and as more scholars adopt signed reviews, institutions may follow suit and support open peer review. The move towards greater transparency has increased political science’s credibility with its own members and with the public at large; open peer review is a further step in that direction.

Acknowledgements

I would like to thank Guilherme Duarte, Justin Esarey and David Skarbek for their comments and suggestions.

Notes

  1. Several authors affirm that the first publication to implement a peer review system similar to today’s was the Medical Essays and Observations, edited by the Royal Society of Edinburgh in 1731 (Fitzpatrick 2011; Kronick 1990; Lee et al. 2013). However, the current format of “one editor and two referees” is surprisingly recent and was adopted only after the Second World War (Rowland 2002; Weller 2001, 3–8). An even earlier precedent was set by the Arab physician Ishaq ibn Ali Al-Ruhawi (CE 854–931), who argued that physicians should have their notes evaluated by their peers and, eventually, be sued if the reviews were unfavourable (Spier 2002, 357). Fortunately, his last recommendation has not been strictly enforced in our times.
  2. https://www.doi.org/
  3. http://publons.com/
  4. See, for instance, the debate over replication that followed King (1995) and the ongoing discussion on data access and research transparency (DA-RT) guidelines (Lupia and Elman 2014).
  5. As of 16 November 2015, there were about 60,000 data sets hosted on the Dataverse Network. See: http://dataverse.org/
  6. http://www.bmj.com/
  7. http://f1000research.com/
  8. http://peerj.com/
  9. http://rsos.royalsocietypublishing.org/
  10. http://rsos.royalsocietypublishing.org/content/open-peer-review-royal-society-open-science/
  11. http://figshare.com/
  12. http://publons.com/
  13. https://orcid.org/blog/2015/10/12/publons-partners-orcid-give-more-credit-peer-review
  14. https://publons.freshdesk.com/support/solutions/articles/5000538221-can-i-add-reviews-for-rejected-or-unpublished-manuscripts
  15. https://publons.com/about/reviews/#reviewers/

References

Bell, Mark S, and Nicholas L Miller. 2015. “Questioning the Effect of Nuclear Weapons on Conflict.” Journal of Conflict Resolution 59 (1): 74–92.

Broockman, David, J Kalla, and P Aronow. 2015. “Irregularities in LaCour (2014).”

DeCoursey, T. 2006. “Perspective: the Pros and Cons of Open Peer Review.” Nature.

Djupe, Paul A. 2015. “Peer Reviewing in Political Science: New Survey Results.” PS: Political Science & Politics 48 (02): 346–352.

Eisenhart, Margaret. 2002. “The Paradox of Peer Review: Admitting Too Much or Allowing Too Little?” Research in Science Education 32 (2): 241–255.

Ernst, E, and T Kienbacher. 1991. “Chauvinism.” Nature 352: 560.

Fabiato, Alexandre. 1994. “Anonymity of Reviewers.” Cardiovascular Research 28 (8): 1134–1139.

Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press.

Ford, Emily. 2013. “Defining and Characterizing Open Peer Review: a Review of the Literature.” Journal of Scholarly Publishing 44 (4): 311–326.

———. 2015. “Open Peer Review at Four STEM Journals: an Observational Overview.” F1000Research 4.

Gandrud, Christopher. 2013a. “GitHub: a Tool for Social Data Set Development and Verification in the Cloud.” The Political Methodologist 20 (2): 7–16.

———. 2013b. Reproducible Research with R and RStudio. CRC Press.

Godlee, Fiona. 2002. “Making Reviewers Visible: Openness, Accountability, and Credit.” JAMA 287 (21): 2762–2765.

Greaves, S, J Scott, M Clarke, L Miller, T Hannay, A Thomas, and P Campbell. 2006. “Overview: Nature’s Peer Review Trial.” Nature.

Groves, Trish. 2010. “Is Open Peer Review the Fairest System? Yes.” BMJ 341.

Hojat, Mohammadreza, Joseph S Gonnella, and Addeane S Caelleigh. 2003. “Impartial Judgment by the ‘Gatekeepers’ of Science: Fallibility and Accountability in the Peer Review Process.” Advances in Health Sciences Education 8 (1): 75–96.

Jones, Zachary M. 2013. “Git/GitHub, Transparency, and Legitimacy in Quantitative Research.” The Political Methodologist 21 (1): 6–7.

Khan, Karim. 2010. “Is Open Peer Review the Fairest System? No.” BMJ 341.

King, Gary. 1995. “Replication, Replication.” PS: Political Science & Politics 28 (03): 444–452.

———. 2006. “Publication, Publication.” PS: Political Science & Politics 39 (01): 119–125.

Kowalczuk, MK, F Dudbridge, S Nanda, SL Harriman, and EC Moylan. 2013. “A Comparison of the Quality of Reviewer Reports from Author-Suggested Reviewers and Editor-Suggested Reviewers in Journals Operating on Open or Closed Peer Review Models.” F1000 Posters 4: 1252.

Kronick, David A. 1990. “Peer Review in 18th-Century Scientific Journalism.” JAMA 263 (10): 1321–1322.

Lee, Carole J, Cassidy R Sugimoto, Guo Zhang, and Blaise Cronin. 2013. “Bias in Peer Review.” Journal of the American Society for Information Science and Technology 64 (1): 2–17.

Lipworth, Wendy, Ian H Kerridge, Stacy M Carter, and Miles Little. 2011. “Should Biomedical Publishing Be ‘Opened up’? Toward a Values-Based Peer-Review Process.” Journal of Bioethical Inquiry 8 (3): 267–280.

Lupia, Arthur, and Colin Elman. 2014. “Openness in Political Science: Data Access and Research Transparency.” PS: Political Science & Politics 47 (01): 19–42.

McNutt, Marcia. 2015. “Editorial Retraction.” Science 348 (6239): 1100.

McNutt, Robert A, Arthur T Evans, Robert H Fletcher, and Suzanne W Fletcher. 1990. “The Effects of Blinding on the Quality of Peer Review: a Randomized Trial.” JAMA 263 (10): 1371–1376.

Meadows, Arthur Jack. 1998. Communicating Research. San Diego: Academic Press.

Peters, Douglas P, and Stephen J Ceci. 1982. “Peer-Review Practices of Psychological Journals: the Fate of Published Articles, Submitted Again.” Behavioral and Brain Sciences 5 (02): 187–195.

Pöschl, Ulrich. 2012. “Multi-Stage Open Peer Review: Scientific Evaluation Integrating the Strengths of Traditional Peer Review with the Virtues of Transparency and Self-Regulation.” Frontiers in Computational Neuroscience 6: 33.

Rennie, Drummond. 1998. “Freedom and Responsibility in Medical Publication: Setting the Balance Right.” JAMA 280 (3): 300–302.

Rooyen, Susan van, Tony Delamothe, and Stephen JW Evans. 2010. “Effect on Peer Review of Telling Reviewers That Their Signed Reviews Might Be Posted on the Web: Randomised Controlled Trial.” BMJ 341: c5729.

Ross, Joseph S, Cary P Gross, Mayur M Desai, Yuling Hong, Augustus O Grant, Stephen R Daniels, Vladimir C Hachinski, Raymond J Gibbons, Timothy J Gardner, and Harlan M Krumholz. 2006. “Effect of Blinded Peer Review on Abstract Acceptance.” JAMA 295 (14): 1675–1680.

Rowland, Fytton. 2002. “The Peer-Review Process.” Learned Publishing 15 (4): 247–258.

Sandström, Ulf, and Martin Hällsten. 2008. “Persistent Nepotism in Peer-Review.” Scientometrics 74 (2): 175–189.

Shanahan, Daniel R, and Bjorn R Olsen. 2014. “Opening Peer-Review: the Democracy of Science.” Journal of Negative Results in BioMedicine 13 (2).

Smith, Richard. 2006. “Peer Review: a Flawed Process at the Heart of Science and Journals.” Journal of the Royal Society of Medicine 99 (4): 178–182.

Spier, Ray. 2002. “The History of the Peer-Review Process.” TRENDS in Biotechnology 20 (8): 357–358.

Van Rooyen, Susan, Fiona Godlee, Stephen Evans, Nick Black, and Richard Smith. 1999. “Effect of Open Peer Review on Quality of Reviews and on Reviewers’ Recommendations: a Randomised Trial.” BMJ 318 (7175): 23–27.

Walsh, Elizabeth, Maeve Rooney, Louis Appleby, and Greg Wilkinson. 2000. “Open Peer Review: a Randomised Controlled Trial.” The British Journal of Psychiatry 176 (1): 47–51.

Ware, Mark. 2008. “Peer Review in Scholarly Journals: Perspective of the Scholarly Community – Results from an International Study.” Information Services and Use 28 (2): 109–112.

Weller, Ann C. 2001. Editorial Peer Review: Its Strengths and Weaknesses. Information Today, Inc.

Wendler, David, and Franklin Miller. 2014. “The Ethics of Peer Review in Bioethics.” Journal of Medical Ethics 40 (10): 697–701.

Wold, Agnes, and C Wennerås. 1997. “Nepotism and Sexism in Peer Review.” Nature 387 (6631): 341–343.