Most researchers would love to publish their work in a journal with a high impact factor, a fast review process and a low probability of rejection. But publishers have been telling us this is not possible. The consequence is that we either publish in a lower-impact journal and get a fast, quite likely positive decision, or we accept a high risk of rejection in a high-impact journal and prepare for an excruciating review and rejection cascade from one journal to another. New evidence suggests that this strong belief, which has hardened into a common-knowledge myth, is largely false.
Analysis of the correlation between impact factors and rejection rates
In an earlier blog post focused on debunking the myth of selecting for impact, we reported an analysis of 570 journals, which found practically no correlation between impact factors and rejection rates. Certainly, the journals with the highest impact factors had high rejection rates, but so did many journals with the lowest impact factors. Furthermore, many journals with good impact factors had low rejection rates. We concluded that rejection rates above 30% are unreasonable and do not serve research and researchers well.
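For readers who want to reproduce this kind of analysis on their own data, a minimal sketch follows. It assumes a CSV file with columns named "impact_factor" and "rejection_rate" (the file name and column names are illustrative, not the dataset used above) and computes a Pearson correlation on log-scaled impact factors, matching the log scale used in Figure 1.

```python
# Sketch of a correlation check between journal impact factor and
# rejection rate. The CSV path and column names are hypothetical.
import csv
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlation_from_csv(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Impact factors are roughly log-distributed, so correlate on a
    # log scale, as in the scatter plot of Figure 1.
    log_impact = [math.log10(float(r["impact_factor"])) for r in rows]
    rejection = [float(r["rejection_rate"]) for r in rows]
    return pearson_r(log_impact, rejection)
```

A value of `pearson_r` close to zero over a large journal sample is what "practically no correlation" means in the analysis above.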
Rejection rates for a sample of 570 journals with impact factor
Figure 1: 570 journals with publicly stated rejection rates (see list below). Impact factors from Thomson Reuters Web of Science (2014). (Log scale) – reprinted from https://blog.frontiersin.org/2015/12/21/4782/
So what is wrong with the conventional wisdom, and how has Frontiers built some of the world's most highly cited journals in their disciplines while maintaining reasonable rejection rates?
The first problem with the conventional wisdom is that reviewers generally cannot reliably judge and pick high-impact papers (Campanario, 2009; Kravitz, Franks et al., 2010; Herron, 2012; Eyre-Walker and Stoletzki, 2013) – in other words, many rejected manuscripts are just as important as accepted ones. The judgement of "high impact" is highly subjective and hence prone to error. This is the main reason why high rejection rates do not improve a journal's impact.
A second reason is that the impact of a paper depends not just on its quality, but on the type of article, its disciplinary area and importantly on the prestige of the authors (Kokko and Sutherland, 1999; Krell, 2002; Alberts, 2013) – review articles by top authors in cancer research attract more citations than replication studies in metallurgy by unknown authors. So journals that publish the wrong kind of article, in the wrong discipline, by the wrong authors, will always get low impact factors; while journals that publish the right kind of article, in the right discipline, by the right authors will get high ones.
Impact-neutral specialist review process
None of these considerations explains the high citations in Frontiers journals. Rejection rates in Frontiers journals are around 27% for submissions outside of research topics, most manuscripts are published within 3 months of submission, and yet Frontiers citation rates are amongst the very highest compared to journals in the same categories. How is this possible?
We can start with the mandate Frontiers gives its editors and reviewers. Traditional subscription journals instruct their editors and reviewers to choose manuscripts presenting “important”, “novel” results that are of interest to a broad readership. In other words, papers that are likely to attract citations and improve the journal’s impact factor. This is what one may call a strategic selective review. Frontiers does the opposite.
In Frontiers’ impact-neutral, specialist review process, editors and reviewers are asked to improve the paper unless it has irreparable errors. Reviewers are not asked to judge “importance” or “novelty”. We believe this is a subjective evaluation, which we leave for a crowdsourced post-publication review.
To facilitate this new approach, Frontiers has created a novel Interactive Review Forum – an online system where, instead of sending formal requests for revision, reviewers and editors discuss issues with authors for as many iterations as may be necessary. In other words, Frontiers editors and reviewers actively contribute to the quality of submissions. They also take joint responsibility for the published manuscript. Unlike the anonymous reviewers of traditional journals, Frontiers reviewers and editors publicly sign off on the papers they accept.
A new way of making decisions in scholarly publishing
What makes all this possible is the unusual distribution of decision-making power in the Frontiers review process. In traditional journals, the decision to send a paper to review is generally made by the editor – often an employee of the publisher – who is also responsible for the final decision to accept or reject. In Frontiers, it is the responsibility of a completely independent Associate Editor. There are more than 8,000 Associate Editors at Frontiers, drawn from universities all over the world.
The subsequent review is completely formalized, avoiding the concentration of power in a few hands. Thus, final acceptances and rejections both require unanimity between the Associate Editor and the reviewers, and subsequent validation by a chief editor who guarantees the integrity of the process. If a reviewer disagrees with the majority opinion, he or she can withdraw from the process, registering a personal recommendation to reject, after which the Associate Editor may choose either to recommend the article for rejection or – if in disagreement with the reviewer – to invite a new reviewer to replace the one who has withdrawn.
In no case can an editor make an accept/reject decision on her own. In other words, it is difficult for a Frontiers editor or reviewer to block a paper she disagrees with, but easy to reject papers when there is a consensus that the paper has irremediable flaws. The end results are faster decisions, reasonable rejection rates, and better quality.
Proudly publishing all sound and correct research
In traditional subscription journals, editors and reviewers faced with a set of high-quality papers will reject all those that do not meet their criteria for impact – including many good ones; Frontiers editors and reviewers will accept them – including work that is difficult to publish in traditional journals. This includes papers that challenge the conventional wisdom or their own views, papers that report negative results, replication studies and so forth. Many of these papers turn out to be very important indeed.
The take-home message: selective strategic review as practiced by traditional journals is a slow and unreliable process that ends up rejecting or delaying the publication of high-quality papers that deserve to be published and that readers deserve to read. It is a process that wastes researchers’ time and leads to an enormous loss of research opportunities.
There are enough examples to fill volumes; the story of the serial rejections of the CRISPR discovery is just one popular one. In brief, strategic review is unfair, inefficient and wasteful. The specialist impact-neutral review, on the other hand, allows authors to publish high-quality work without delay and with significant improvements suggested by the editors, while simultaneously allowing readers to make their own choices about the work that interests them most.
Of course, Frontiers’ review is not perfect. For instance, many of our editors have asked us to clarify the procedures for rejecting a low-quality paper. Some of the procedures described here are responses to these requests. For example, we have recently introduced a simplified way for reviewers to “recommend rejection”, which allows Associate Editors to act on such cases faster.
But we are moving in the right direction and other open-access publishers have introduced their own innovations with goals very similar to our own. We see light at the end of the tunnel. For the first time, it is becoming possible for authors to publish their work rapidly and gain the impact they deserve without jumping through the hoops of traditional review. This is a disruptive development, with huge implications for the whole scholarly publishing industry. Frontiers is proud to be part of it.
Aarssen, L. W., T. Tregenza, A. E. Budden, C. J. Lortie, J. Koricheva and R. Leimu (2008). Bang for your buck: rejection rates and impact factors in ecological journals. The Open Ecology Journal 1 (1).
Alberts, B. (2013). Impact Factor Distortions. Science 340 (6134): 787. doi: 10.1126/science.1240319.
Campanario, J. M. (2009). Rejecting and resisting Nobel class discoveries: accounts by Nobel Laureates. Scientometrics 81 (2): 549-565. doi: 10.1007/s11192-008-2141-5.
Eyre-Walker, A. and N. Stoletzki (2013). The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations. PLoS Biology 11 (10): e1001675.
Herron, D. M. (2012). Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surgical endoscopy 26 (8): 2275-2280. doi: 10.1007/s00464-012-2171-1.
Kokko, H. and W. J. Sutherland (1999). What do impact factors tell us? Trends in Ecology & Evolution 14 (10): 382-384. doi: 10.1016/S0169-5347(99)01711-5.
Kravitz, R. L., P. Franks, M. D. Feldman, M. Gerrity, C. Byrne and W. M. Tierney (2010). Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS One 5 (4): e10072. doi: 10.1371/journal.pone.0010072.
Krell, F.-T. (2002). Why impact factors don’t work for taxonomy. Nature 415 (6875): 957. doi: 10.1038/415957a.