Selecting for impact: new data debunks old beliefs

One of the strongest beliefs in scholarly publishing is that journals seeking a high impact factor (IF) should be highly selective, accepting only papers predicted to become highly significant and novel, and hence likely to attract a large number of citations. The result is that so-called top journals reject as many as 90-95% of the manuscripts they receive, forcing the authors of these papers to resubmit to more “specialized”, lower impact factor journals, where they may find a more receptive home.
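
(For reference, the two-year impact factor reported in the Journal Citation Reports is, in essence, the following ratio:

    IF(2014) = (citations received in 2014 by items published in 2012–2013) / (citable items published in 2012–2013)

so a journal's IF depends entirely on how often its recent papers are cited.)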

Unfortunately, most of the 20,000 or so journals in the scholarly publishing world follow their example. All of which raises the question: does the strategy work? The evidence suggests it doesn’t.

In Figure 1, we plotted the impact factors of 570 randomly selected journals indexed in the 2014 Journal Citation Reports (Thomson Reuters, 2015), against their publicly stated rejection rates.

[Figure 1: rejection-rate.png]

Figure 1: Impact factors versus publicly stated rejection rates for 570 journals (for sources, see below; the complete dataset is available for download). Impact factors are from the Thomson Reuters Journal Citation Reports (2014). The y-axis is on a log scale.

As Figure 1 shows, there is essentially no correlation between rejection rates and impact factor (r = 0.0023; we assume the sample of 570 journals is sufficiently random to represent the full dataset, given that it spans fields and publishers). In fact, many journals with high rejection rates have low impact factors, and many journals with low rejection rates have impact factors higher than the bulk of journals with rejection rates of 70-80%. Clearly, selecting “winners” is hard, and the belief that obtaining a high impact factor simply requires a high rejection rate is false.
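
For readers who want to check this against the downloadable dataset, the short Python sketch below shows how the correlation and the Figure 1 scatter could be reproduced; the file name and column names are assumptions and should be adjusted to whatever the published file actually uses.

    import pandas as pd
    import matplotlib.pyplot as plt
    from scipy import stats

    # Load the published rejection-rate/impact-factor table
    # (hypothetical file and column names; adapt to the actual download).
    df = pd.read_csv("rejection_rates_vs_impact_factors.csv")

    # Pearson correlation between rejection rate (%) and 2014 impact factor.
    r, p = stats.pearsonr(df["rejection_rate"], df["impact_factor"])
    print(f"n = {len(df)}, r = {r:.4f}, p = {p:.3g}")

    # Scatter plot with a log-scaled y-axis, as in Figure 1.
    plt.scatter(df["rejection_rate"], df["impact_factor"], s=10, alpha=0.5)
    plt.yscale("log")
    plt.xlabel("Rejection rate (%)")
    plt.ylabel("2014 impact factor (log scale)")
    plt.savefig("rejection-rate-vs-impact-factor.png", dpi=150)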

Of course, some journals with 90-95% rejection rates do achieve very high impact factors – they are depicted in the top right of the graph. Critics believe they may achieve this by giving priority to well-established authors and to reports likely to win broad acceptance (i.e. safely within the dogmas of science) – an assurance of immediate citations from the community. As a result, many breakthrough papers are rejected by the high impact factor journals where they are first submitted (see refs 1-3). Another reason could be that specialized journals cannot achieve a high impact factor because their papers are visible only within one of the silos of academic disciplines. Indeed, the highest impact factor journals are those that pre-select papers for their “general interest”.

But regardless of these considerations, the hard facts remain. A vast number of high quality papers are being sacrificed to engineer high impact factors, yet the strategy fails for the vast majority of journals: some of the lowest impact factors have been obtained by journals rejecting 60-70% of their papers, and 80% of the 11,149 journals indexed in the Journal Citation Reports (JCR) have impact factors below 1 (see our Summary Impact Blog). More importantly, some journals do achieve impact factors in the 90th percentile and above without trying to pre-select the most impactful papers and without high rejection rates, showing that impact neutral peer review can work.

A recent ranking of journals provides further evidence that impact neutral peer review can work. The journals published by Frontiers, the youngest digital-age OA publisher, have risen rapidly to the top percentiles in impact factor (see our Summary Impact Blog). More importantly, the total citations generated by these journals have started to outpace those of decades-old and even century-old journals. Total citations of articles in a journal reflect the amount of new research that is partly built on the knowledge within these articles. For example, in the JCR’s Neurosciences category, the Frontiers in Neuroscience field journal generated more citations in 2014 (reported in the 2015 JCR) than all other open-access journals in this category combined, and the third-highest number of total citations among all journals in the category (including all subscription journals). Another example is Frontiers in Psychology. In only four years, this journal has become not only the largest psychology journal in the world, but also the source of the second-highest number of citations in the discipline of psychology (second only to Frontiers in Human Neuroscience). The other Frontiers journals (in Pharmacology, Physiology, Microbiology and Plant Science) follow a similar pattern (see Summary Impact Blog).

In Frontiers, our “impact neutral” peer review is a rigorous specialist review. The main difference from “impact selective” peer review is that editors and reviewers are not asked to try to predict the significance of a paper. Frontiers uses its Collaborative Peer Review and its online interactive forum to intensify the interaction between authors and specialist reviewers. Our high quality editorial boards (see our Editorial Board Demographics) help match the most specialized reviewers to submitted papers.

Based on our experience of conducting impact neutral peer review for the last 8 years, rejection rates of up to 30% are justifiable to ensure that only sound research is published. We also conclude that the specialized collaborative peer review process provided by Frontiers is a highly effective strategy for building high quality and highly influential journals across disciplines. We are excited to see how high total citations can go when peer review focuses on enhancing quality rather than on rejecting papers.

But the story doesn’t end here…

What happens when we remove the field component by normalizing impact factors by field? Does the absence of a correlation still hold true? Find out in Part 2.


References

  1. J. M. Campanario, “Rejecting and resisting Nobel class discoveries: accounts by Nobel Laureates,” Scientometrics, vol. 81, pp. 549-565, 2009.
  2. R. Walker and P. Rocha da Silva, “Emerging trends in peer review—a survey,” Frontiers in Neuroscience, vol. 9, 2015.
  3. A. Eyre-Walker and N. Stoletzki, “The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations,” PLoS Biology, vol. 11, p. e1001675, 2013.
  4. Sources for rejection rates (by publisher):

 

Comments (36)

  1. Maybe you should look at the predictive power of negotiations with Thomson Reuters on their Impact Factor? Numerous anecdotes show that such negotiations work wonders:
    http://blogarchive.brembs.net/comment-n817.html

  2. Could you please make your data and methods freely accessible? Thanks!

  3. Could you make the data available for download somewhere, please?

  4. I’m curious about the highest journals on the far left; I didn’t think journals with IFs of 7 or 8 and rejection rates of 1 or 2% existed. The data available for download doesn’t seem to name any journals though. I guess if this presumably well-kept secret gets out, their rejection rates will rise as they get flooded by submissions from people who haven’t signed up to DORA and are desperate for an IF>7 paper 😉

  5. I know it’s a lot to ask, but if it’s available, would it please be possible to also include journal name in the data file?

  6. Note that lack of correlation may suggest lack of causation, but it certainly does not prove it. We don’t know how much worse these various journals would be if they were to drop their rejection rates.

    Thank you very much for posting the data. Unfortunately, they are close to worthless without the journal names, because all we can do is verify the original regression — one cannot look at trends across journal type, size, etc. Could you post a version with the journal names or ISSN numbers? It would be fine if this new version omitted the impact factor data, if that is necessary for copyright reasons.

    • Thanks for this valuable and interesting analysis. I think the visualization convincingly shows that rejection rate doesn’t associate with impact factor for journals with impact factors < 5. For journals with impact factors ≥ 5, I think you will see a stronger positive association with rejection rate, especially with a larger sample size.

      I would like to thank you for posting the data under a CC-BY license. However, I was disappointed to see that journal identities are omitted from the dataset, “since the goal is not to point at any particular journal, but to better understand the general relationship.” Since the data are already public, just not aggregated, I don’t see the benefit of hiding journal identities. In particular, I appreciate your painstaking collection of rejection rates and would be interested in incorporating it into future data science work. However, without journal names (and preferably a stable identifier like an ISSN or NLM ID), most use cases are lost.

      • James Lloyd // February 7, 2016 at 2:56 am //

        Completely agree that the journal names should be included, along with other identifiers, so the data could be used to its full potential. It would be interesting to break it down by subject area, as people I have spoken to have questioned whether this could be a factor.

  7. Looking at the data, there may be some effect for IF above 6…

    If one were to add subscription prices, it might become even clearer that price is not linked to rejection rate. That is quite telling, since most big players use this excuse (the accepted papers need to pay for the rejected ones) to justify enormous APCs!

    Quite interesting, isn’t it?

  8. This is so interesting. I am somewhat shocked by this, but not surprised.

    But I would love to see the journal names in the table so I can look at the data more closely. Field would also be really useful, but I understand that might be difficult to add. A friend suggested that some journals might get a lot of submissions of auto-generated fake papers. Do you think this could bias some results?

  9. Subsetting the data to exclude journals with very low rejection rates reveals a significant (though weak) correlation. So not “absolutely no correlation” but the conclusion from the analysis is pretty much unchanged, I reckon. See https://twitter.com/robjohnnoble/status/686910201220431872
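
    For concreteness, that subset re-analysis could be sketched as follows (an illustrative sketch only, not the exact code behind the linked tweet; the 20% cutoff, file name and column names are all assumptions):

      import pandas as pd
      from scipy import stats

      # Hypothetical file/column names; the cutoff is an assumption.
      df = pd.read_csv("rejection_rates_vs_impact_factors.csv")
      subset = df[df["rejection_rate"] > 20]  # drop very low rejection rates
      r, p = stats.pearsonr(subset["rejection_rate"], subset["impact_factor"])
      print(f"n = {len(subset)}, r = {r:.3f}, p = {p:.3g}")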

  10. Well, I certainly hope none of the people above arguing for looking at subsections of the data are treating their own data like this:
    – get some data, find no result
    – keep removing data points until you find a result.

    If this is how you do research, there is no need to collect any data at all: if you’re only going to select the data points you like, you might as well just paint your graphs with photoshop and you’ll get whatever you want.

    It is easy for anybody to select the data above in a way that yields a highly significant correlation in any direction one may want. That’s not science, though, that’s art. Drawing your data is easy. Doing science is hard.

    • Agreed. I’m not suggesting that such a subset analysis leads to any conclusion other than that the data might not be entirely patternless, and therefore it might be worth exploring the question further. Perhaps there is an interesting trend that’s masked by the effect of a covariate, such as journal specialty or publication model. Probably not, but it might be worth looking into.

    • Bjorn, this is not a particularly compelling defense of a study that has sub-sampled data from the JCR (a random sample? really? why not use all data? how was the subsample generated?) and now will not tell us what journals are included in that data set.

      • I don’t need to defend this particular post, as looking at subsamples is not even a critique. All I’m saying is that cherry-picking data points from a sample without very hard-hitting, iron-clad arguments constitutes a slippery slope at best, and misconduct at worst. Do it at your own peril.

    • timsmitstim // June 15, 2016 at 9:11 am //

      Bjorn, There could also be something like the ecological fallacy working here. It might well be that the hypothesis perfectly holds within each subsample but disappears when looking at all data combined. (Sorry for this late comment)

      • Of course. In this case, however, it’s probably going to be very difficult to define even one subsample that all here would agree on, let alone find an objective way to subsample. 🙂 I, for one, would imagine I could poke a hole in any subsample of the above you may propose 🙂

  11. This is crazy. The upper-left “outliers” are all Elsevier journals (except for one Frontiers journal)!

    IF 5.84, rejection rate 0%: Curr Op Colloid Surface Sci – Elsevier
    IF 7.117, rejection rate 1%: Curr Op Biotech – Elsevier
    IF 7.037, rejection rate 13%: Frontiers in Neuroendocrinology
    IF 7.776, rejection rate 37%: Advances in Colloid and Interface Science – Elsevier
    IF 7.237, rejection rate 22%: Progress in NMR – Elsevier
    IF 8.235, rejection rate 57%: Gondwana Research – Elsevier
    IF 9.992, rejection rate 58%: Prog Neurobiol – Elsevier
    IF 12.239, rejection rate 31%: Coordination Chemistry Revs – Elsevier

    And that outlier in the center of the figure? 27.417, 58% : Progress in Material Science, published by – go ahead, guess!

    I couldn’t identify the semi-Glam data point at 81% / IF 15. But if the figures are correct, I’m willing to speculate about the publisher!

    (The top-right outliers are the usual suspects – Nature, Science, NEJM, JAMA etc.)

    • Most are invited-review journals (solicited papers), so they are not representative of journals that take direct/unsolicited submissions.

  12. In what fields/disciplines are these journals? I can’t help but wonder if selective sociology journals with lower impact factors than non-selective medical journals explain the lack of relationship. The data you link to doesn’t include the titles of the journals (or even the field/discipline).

  13. One possible confounding factor is that citation rates differ enormously between different disciplines. Mathematics, for example, is a very low citation density field, whereas Neuroscience is a high citation density field. It would be interesting to see what happens if the data were re-analyzed discipline-by-discipline.

  14. Bjorn, I think we’re largely in agreement there.

    My overarching concern here is that Frontiers have themselves published a subsample of the JCR, and without knowing what data are included in that subsample (as you and I have both requested) it is hard to draw definitive conclusions from their results.

    In Rob’s defense, given that we don’t know how Frontiers selected the data in the first place, I think that a bit of exploratory data analysis to see how robust these data are to various manipulations seems entirely worthwhile from the perspective of deciding how seriously to take their conclusions.

    • I agree with you both. The point of subsampling is to question the robustness of the original conclusion and to generate hypotheses not presented in the blog post. For example, might the pattern be distorted by the inclusion of foreign-language journals (which have smaller readerships and hence lower impact factors) or review-only journals (which have low rejection rates due to article commissions)? My guess is that the conclusion will hold up, and there really isn’t much correlation even among comparable journals (i.e. those that might compete for an article), but without access to the full data set we can’t be sure.

  15. Maybe the impact factor does not work?

    Because it only looks at journals, not at other sources such as conferences.

    IF is useless at least in my domain.

  16. I wonder whether a journal that only publishes invited reviews wouldn’t get a very high impact factor with a low rejection rate. After all, if a review is invited, why reject it? :-) Reviews often receive high citation rates, and review journals like the Trends in… and Progress in… series are purely review journals with high IFs. Maybe this has been discussed in the thread before, or I have misunderstood. Either way, there will be no rush to those journals, since only invited reviews are published.

  17. The impact factor calculation as implemented by various sources can vary in terms of how it is constructed (various ‘citable elements’ can be classified in different categories to change the overall metric), and the methods are rarely described transparently, so it is not an objective measurement. Relative to how it is used and the importance this number is granted in academia, this metric has several glaring deficiencies:

    1. It rates the entire journal rather than individual articles (and is then used prospectively to rate individuals publishing in that journal).

    2. It includes review articles, which really should be separated into a different category from primary research.

    3. It is based purely on citation numbers (which are then inferred to imply quality). This simply measures the popularity or ‘bandwagon’ potential of a journal, not its impact on science.

    At best it is only one of many descriptive statistics for a journal, and fails to capture many of the properties of important research articles.

  18. On the other hand, it would be interesting to measure (if possible) how many rejected papers would have garnered many more citations than others that were accepted. It is well known that the selection is not blind, and that papers by authors from important institutions are preferred over others.

  19. Scrutinizing the methodology by which the authors derived their findings is not helpful here. There is some truth to the claims in the article. The mindset that high rejection rates are a journal’s means of assuring quality has to be questioned. Editors and reviewers should review the scientific/research soundness of a manuscript, but not to the extent of deliberate fault-finding. Also, the habit of over-scrutinizing manuscripts from developing countries is still prevalent.

  20. I ran the analysis on the data provided in SPSS and my results differ considerably. I get an R-squared value of 0.021 and a p value (two-tailed) of 0.001. I then ran the test in Excel, since the original graph seems to have been done with that software. Same results. What my analysis shows is that there is a weak but significant correlation. Maybe the author could provide more information on how the R-squared value of 0.0023 was obtained?

  21. Excellent article, thanks for sharing.
