Naseema Gangat, MBBS, is a professor of medicine in the Division of Hematology at Mayo Clinic in Rochester, Minnesota.
Scientific publishing is heavily influenced by peer review, which has been the cornerstone of scientific communication for more than three centuries.1 Circa 1731, the Royal Society of Edinburgh initiated the peer review process for its journal Medical Essays and Observations, stating, “Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.” Subsequently, the Royal Society of London formed its “Committee on Papers” in 1752 to review manuscripts for its journal Philosophical Transactions. Since then, peer review has been widely adopted by journals to safeguard against errors and maintain the highest scientific quality. Nonetheless, the effectiveness of peer review remains to be objectively quantified. According to a systematic review on the effects of editorial peer review, published in JAMA, “Editorial peer review, although widely used, is largely untested and its effects are uncertain.”2 Moreover, the process as it stands is unequivocally flawed.3
The topic of peer review was on my mind last year when I attended Elton John’s Farewell Yellow Brick Road Tour. Despite its imperfections, peer review is the “yellow brick road,” or gold standard, for evaluation of scientific publications. Below, I’ve outlined a few points for remediation, which I like to think of as the Farewell Yellow Brick Road (of Peer Review) Tour.
Inside Elton John’s World at Selfridges to celebrate the Farewell Yellow Brick Road, Final U.K. Tour.
Sheer: The Diaphanous Peer Review
Currently, the mainstream peer review format is single-blind: reviewers are aware of author identities, but authors do not have access to reviewers’ identities. Interestingly, randomized trials have shown no significant difference in review quality when reviewers are blinded to the identity of authors.4,5 At the same time, transparency in the review process has long been desired, although whether open review will take over is unclear. In this regard, the degree of transparency (open identities with or without open reports) requires further evaluation. For instance, in 2016, Nature Communications began offering open review, whereby authors opt for a blind or open review at the time of submission and reviewers decide whether to remain anonymous, with reviewer comments and author responses published alongside the paper.6 In 2020, Nature followed suit in promoting public peer review exchanges, and approximately half of its published papers were subsequently accompanied by referee reports. In a randomized trial of open review, the majority (55%) of authors were in favor; however, open review significantly increased the likelihood of reviewers declining to review (35% vs. 23% for identified and anonymous reviewers, respectively), with no effect on the quality of review7 as assessed using a validated review quality instrument.8
Another extension of the open review process that is emerging in the era of open-access publishing and public preprint servers is post-publication review through web platforms like F1000 (formerly Faculty of 1000), ResearchGate, PubPeer, and PubMed Commons, and social media such as X.
Next-Gen Peers
Although the number of scientists has grown exponentially in the 21st century, increasing medical subspecialization has restricted the pool of true “peers” for any given manuscript. Simultaneously, pressures are mounting not only on researchers, who must publish high-impact articles to achieve tenure, but also on editors, who must return timely decisions to authors, and on publishers, who are financially motivated. With the uncontrolled growth of journals and scientific papers, finding willing reviewers continues to be a struggle.
How do we compensate reviewers and incentivize the task at hand? Journals often waive subscription fees for reviewers or provide public credit through Clarivate’s Web of Science Reviewer Recognition service as an indirect form of payment. However, this by no means provides time for a meticulous peer review. Expanding the reviewer pool remains an option, with the potential for artificial intelligence (AI) to help identify a wider range of reviewers. AI could also be leveraged to assist with the review itself, particularly in detecting fabrication and plagiarism.9 At present, data on the effectiveness of AI in peer review are limited; one study assessing agreement between reviewer and GPT-4 comments found a concordance rate of about 30%.10 Cascading journal articles with transferable reviews, together with portable peer review whereby authors provide reviews from a prior journal, offers an opportunity to recycle reviews and combat reviewer burnout. Equally important is formal training in the review process, including editorial fellowships such as those offered by Blood and Blood Advances, a vital step toward preparing the next generation of reviewers.
Taken together, let’s bid Farewell to the Yellow Brick Road; a new paradigm in peer review is desperately needed because the current process no longer matches reality. Whether open peer review will be widely embraced, implemented, and shown to be effective remains to be determined. Further ahead, AI-enhanced peer review holds innovative potential. Until then, I will keep working on my pending reviews!
Naseema Gangat, MBBS
Associate Editor
References
- Kronick DA. Peer review in 18th-century scientific journalism. JAMA. 1990;263(10):1321-1322.
- Jefferson T, Alderson P, Wager E, et al. Effects of editorial peer review: a systematic review. JAMA. 2002;287(21):2784-2786.
- Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99:178-182.
- van Rooyen S, Godlee F, Evans S, et al. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA. 1998;280(3):234-237.
- Justice AC, Cho MK, Winker MA, et al. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA. 1998;280(3):240-242.
- Transparent peer review for all. Nat Commun. 2022;13:6173.
- van Rooyen S, Godlee F, Evans S, et al. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318:23-27.
- van Rooyen S, Black N, Godlee F. Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts. J Clin Epidemiol. 1999;52:625-629.
- Nath KA, Conway M, Fonseca R. AI in peer review: publishing’s panacea or a Pandora’s box of problems? Mayo Clin Proc. 2024;99(1):10-12.
- Liang W, Zhang Y, Cao H, et al. Can large language models provide useful feedback on research papers? A large-scale empirical analysis. arXiv. 2023. doi:10.48550/arXiv.2310.01783.
The content of the Editor’s Corner is the opinion of the author and does not represent the official position of the American Society of Hematology unless so stated.
Have a comment about this editorial? Let us know what you think; we welcome your feedback. Email the editor your response, along with your full name and professional affiliation if you’d like us to consider publishing it, at [email protected].