Achieving quality at scale: here’s how we do it with our Collaborative Peer Review
Research output is currently increasing exponentially across the globe. There are more researchers (7-9 million) being trained, more data being produced and more articles (2.5 million) being published than ever before. Scholarly publishing is challenged to adapt to this exponential increase in scientific output and needs to come up with new tools and technologies so that society can benefit from this wealth of research knowledge.
In the traditional publishing model, which originated in the age of printed issues and limited page space, Chief Editors select a few papers out of a bulk of submissions based on potential impact; a subjective method often equated with simultaneously ensuring quality. However, this selectivity for potential impact creates rejection cascades, where the publication of valid research is delayed by months and often years. This is costly for the research community because researchers, reviewers and editors spend time re-writing, re-reviewing and re-evaluating manuscripts. This massively slows down science and, in addition, data shows that high rejection rates are not correlated with high impact factors or citation rates for journals. These rejection cascades are not in the best interest of research, societies or economies.
A break from traditional peer review is necessary, and in the past decades several journals have taken up the cause of offering a peer review focused on publishing correct and valid research, without subjectively selecting for potential impact; PLOS One and many BMC journals are examples. This is also the Frontiers review approach. In addition, we have built a technology platform that is actively focused on safeguarding and enhancing the quality of all valid scholarly articles through our Collaborative Review Forum and our Digital Editorial Office.
Inherently, an impact-neutral review approach means publishing a high volume of articles. The current academic culture associates quality with low publishing volumes and high rejection rates. In this piece, we ask and answer: Is it feasible to maintain quality while handling thousands of papers?
How to ensure quality at scale
Maintaining quality at scale is a challenge, but Frontiers has several key strategies to ensure the highest quality standards possible.
Expert Editorial Boards. First and foremost is the quality of the people involved. Frontiers Chief Editors are leading experts in their field – researchers based at respected institutions from around the world. They are individually selected by Frontiers, and our editorial staff are in close contact with them to discuss the journals’ development and manuscripts in review. Chief Editors recommend their trusted colleagues for appointment as Associate Editors to the Editorial Board, who in turn are in charge of the peer review process. This distributed editorial network of more than 63,000 researchers allows for a broad representation of the community in deciding what is published in their field.
Transparency and accountability. In addition to having the right people on the Editorial Boards, building accountability and transparency into the processes is also important to safeguard quality. Each journal clearly displays the members of its Editorial Board on its homepage with a link to their profiles on Loop, the Frontiers research network. During the review process, Frontiers requires the declaration of any potential conflicts of interest from all editors and reviewers handling a manuscript. Authors are also required to declare any ethical considerations of their study at the time of submission.
Oversight facilitated by technology. At Frontiers, editors have access to manuscripts at any time through their online Digital Editorial Office, an important oversight tool custom-built for Chief Editors. The Digital Editorial Office is part of the Frontiers publishing platform, which also contains the Collaborative Review Forum. We believe technology must facilitate publishing in the 21st century and that digital tools can support quality control efforts. For this reason, we have built and maintained our award-winning publishing platform in-house since 2007, a pioneering effort in the publishing industry. The platform enables us and our editors to handle large volumes of articles without compromising on quality or efficiency.
Rigorous and standardized review process. During our review, reviewers are asked to complete a standardized and comprehensive questionnaire, in place since 2008, allowing them to focus on important aspects for the evaluation of a manuscript.
An example of a question in the review form provided is: “Please comment on the Discussion section. Potential aspects to consider: adequate discussion of research questions or hypothesis (posed in introduction); conclusions supported by data; exhaustive discussion of previously published material (in context to current study).”
Reviewers and handling editors are given clear criteria for how to endorse/accept a manuscript or recommend rejection. To date, Frontiers editors have rejected nearly 10,000 submissions due to objective errors, ethical issues, being out of scope, and other reasons in line with an impact-neutral review mandate.
Two examples of reasons to recommend rejection are: a) The authors are unwilling or unable to address my concerns sufficiently to make this manuscript suitable for publication; b) There are serious concerns about ethical issues in the manuscript that cannot be rectified through author revisions.
Full-length article types, such as Original Research Articles, require two reviewers to endorse the publication before the handling editor can accept a submission. The reviewers can also see each other’s reports and post comments in the review forum until they reach a final recommendation. The handling editor and the reviewers who endorsed publication are named on the published article, making them publicly accountable for the paper. Recommendations for rejection are evaluated by the Chief Editors to ensure a fair process for the authors, in particular if the rejection was recommended before peer review.
In-house quality checks. In addition to the editors and reviewers’ assessments of manuscripts, Frontiers carries out a number of in-house quality checks to ensure publishing standards are upheld and has a dedicated team to handle all quality and ethics issues that arise.
- Pre-review, our internal Ethics and Integrity team runs manuscripts through a list of quality checks, rejects obviously flawed manuscripts directly, and brings other manuscripts that do not fully pass the checks to the attention of Chief Editors and Associate Editors. For example, our editorial team screens all submissions for potential plagiarism with the Crossref Similarity Check powered by iThenticate. Any manuscript with over 15% similarity receives an additional manual verification from our Ethics and Integrity team, who then contacts the authors or editors with the results, depending on the severity of the situation.
- Post-review, we have a final validation stage in our peer-review process to give authors, editors, and our editorial staff a chance to run a final check on the review process and the manuscript files in order to address any pending issues before production starts (missing documents, figure quality, copyright forms, plagiarism, conflict of interest, quality and integrity of the review process itself).
- The team checks for conflicts of interest based on matching affiliations and discusses with Editors how to proceed when potential conflicts of interest arise.
- We alert Editors to potentially controversial or substandard manuscripts that may require additional scrutiny, and we check on manuscripts where reviewers have recommended rejection or where review reports appear short or not diligent.
- We verify the manuscript images at various stages (submission, production) for evidence of manipulation and contact authors when signs of tampering are detected.
- The Ethics and Integrity team also follows up on any concerns and complaints we receive about published articles.
What if something goes wrong? Despite all these measures, no system guarantees complete protection. Misconduct may occur, as may honest errors. Therefore, post-publication corrections, including retractions, are critical for maintaining the integrity of the scholarly record. To date, we have had to retract 19 articles out of 50,000 published due to issues discovered or reported post-publication. This is in line with, and often below, the average retraction rates of other high-quality scholarly publishers.
There are different ways we get alerted to concerns regarding published articles, for example through comments on papers, directly by email (which is the best way for concerned readers to ensure a timely follow-up by the editorial office), or through social media. We have a comprehensive complaints procedure on our website, which also describes our retraction policy.
What is the evidence that it works?
Impact Analysis. Using commonly available citation metrics as an indication of the usage and the quality of journals, Frontiers has achieved exceptional quality while publishing 50,000 articles to date. Out of the 11,365 journals listed in the latest Journal Citation Report 2015 (Thomson Reuters, 2016), the impact factor calculated for the 19 Frontiers journals currently listed was on average in the 88th percentile.
When compared to long-standing and highly selective traditional journals, Frontiers journals are amongst the most-cited journals in Neuroscience, Psychology, Plant Science, Microbiology, and Immunology. You can read the comprehensive impact analysis here.
Author and reviewer surveys. We heavily rely on direct feedback from the communities that we serve. A survey answered by over 10,000 authors, who had a manuscript under review at Frontiers between 2013 and 2015, revealed that 92% rated our collaborative review forum as good or excellent, as did 90% of the 813 surveyed reviewers. Most importantly, 92% of authors also thought the collaborative review actually helped to improve their paper.
Review time and efficiency. Data from June 2016, covering all papers submitted since the launch of Frontiers in 2007, show that 50% of all manuscripts submitted to Frontiers went from submission to acceptance within 40-90 days, while 75% were handled within 25-125 days. This efficiency at such high volumes is unparalleled in scholarly publishing and is only achievable due to our strong focus on technology, while human experts make the critical decisions at the required time points.
Our analysis highlights that our combination of technology-assisted review, editorial boards of experts with a mandate to focus on objective criteria, and authors who are offered a fair chance to rebut comments and revise their paper results in a peer-review process that can be a rewarding experience while safeguarding the quality of manuscripts and yielding highly cited articles and journals.
Achieving quality at scale comes with its fair share of challenges and concerns, but it is possible. The Frontiers model has been designed to evolve and improve with feedback and with each issue encountered. With our focus on innovation, we constantly pre-empt new publishing challenges based on feedback from the academic communities we serve. To achieve quality at scale, we merge our automated, evolvable and scalable platform with expertise from our editorial boards and editorial office staff. This powerful combination allows us to offer a unique peer review process that is constructive, transparent, efficient and rigorous, as well as collaborative.
We would like to thank the many who have provided constructive feedback to help us evolve the Frontiers review process so far beyond the state of the art in scholarly publishing.