Introduction

The peer-review system is one of the cornerstones of quality, integrity and reproducibility in research, and its existence has enabled the scholarly publishing system to function for hundreds of years. Yet it is beset by challenges ranging from perceived bias to lack of transparency, and the system is groaning beneath the weight of the vastly increasing quantity of research being published every year. For the past two centuries the volume of peer-reviewed articles published globally has increased by 3.5% per year, reaching around 2.5 million articles, published in an estimated 28,000 peer-reviewed English-language journals, in 2014. Of course, the burden on peer review is much higher than this, as most papers receive more than one review, and many more papers are reviewed than published.
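
To put that growth rate in perspective, a constant 3.5% annual increase implies that article output doubles roughly every two decades – a back-of-the-envelope calculation offered for context, not a figure from the studies cited here:

    \[
    T_{\text{doubling}} = \frac{\ln 2}{\ln 1.035} \approx 20 \text{ years}
    \]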

Furthermore, papers are increasingly long and complex, requiring more in-depth and time-consuming review. The brevity of Watson and Crick’s 1953 Nature paper on the structure of DNA, which was just one page long with only one figure, would be unthinkable now. With the introduction of reproducibility measures, reviewers and editors are asked to scrutinize more aspects of each paper, including statistics, methods and reagent descriptions. However, the pool of researchers engaging in peer review has not grown to match the increasing demand.

No one has identified a ‘magic pill’ which will solve these challenges for all research formats, across all disciplines, but a number of publishers have experimented with ways to improve peer review. Nature Publishing Group and our sister company Palgrave Macmillan (now both part of Springer Nature, effective May 2015) have been doing so for over a decade, but the last couple of years have seen a ramp-up in these activities as peer reviewers struggle with the increasing quantity of manuscripts and as new technology and business models (such as open access) provide new imperatives for the model to evolve.

Academics overwhelmingly think peer review is important, and surveys have consistently shown that they prefer papers to have been rigorously reviewed. In our 2014 Author Insights survey of over 30,000 authors, 93% of science authors said that the quality of peer review was one of the top three factors they look for when deciding which journal to publish in, and 89% of humanities and social science (HSS) authors said the same. The full data set for this survey is available on Figshare.

In this survey, authors told us that they want us to innovate when it comes to peer review:

  • 70% of authors are frustrated with the time peer review takes
  • 77% think traditional peer review could be made more efficient
  • 67% think publishers should experiment with alternative peer-review methods.

Science: double-blind

In the scientific disciplines, the dominant form of peer review is single-blind: reviewers are anonymous but know the authors’ identity. In humanities and social science journals, however, double-blind peer review is the norm, in which both authors and reviewers are unknown to each other.

Nature Publishing Group journals in the natural sciences have, until recently, abided by the tradition of their disciplines and used single-blind peer review. But since March 2015, the Nature journals have offered double-blind and single-blind review as standard options (with the exception of Nature Communications, which is slated to join the initiative at a later date). The reason for this shift is simple: popular demand.

There has been much scepticism over the years, including amongst editors, about double-blind peer review. Chief amongst these concerns were the lack of demonstrable evidence that blinding reviewers to authors’ identities really improves the quality of reviews, and the realistic observation that in many specialist fields, awareness of works in progress makes attempts at disguising the authors’ identity futile.

But balanced against this scepticism, in survey after survey, scientists have expressed their conviction that double-blind peer review is a good system. In one of the largest studies on peer review – a 2009 international and cross-disciplinary survey of more than 4,000 researchers – 76% of respondents indicated that double-blind was an effective peer-review system likely to prevent ad hominem biases, such as those based on gender, seniority, reputation and affiliation. Nature Publishing Group has seen similar preferences expressed in our own research. Our Author Insights survey found that 78% of respondents thought that double-blind peer review was a good or very good idea.

Importantly, as the editorial announcing the new initiative stated, ‘this sentiment is widely echoed in conversations that our editors have had with young scientists worldwide. These conversations demonstrate a widespread perception that biases based on authorship affect single-blind peer review.’ While the Nature journals’ editors do not necessarily agree about the impact of these potential biases, and are committed to doing their best to prevent and mitigate them, these conversations have contributed greatly to making us reconsider the proposition.

After trialling the offer of double-blind peer review on two journals, Nature Geoscience and Nature Climate Change, in 2013, all Nature journals publishing primary research have, since March 2015, offered their authors the option of withholding their identities from reviewers. Preliminary data shows that uptake has been limited in the first couple of months – less than 14% of submissions overall are going the double-blind route – but these are early days, and as awareness grows, it is likely that uptake will grow too.

Some have expressed concerns that unless double-blind peer review is mandatory, prejudices will remain. But as the editorial explains, ‘Clearly, keeping their identities from reviewers will not always be possible, especially in small and specialist fields. We also continue to promote policies that support researchers who wish to release data early and to discuss their work with their peers before publication, through conferences or by posting research on preprint servers. These routes to publication also compromise anonymity. That is why the double-blind process is optional on all titles. We expect that some will choose it out of concern about biases, others purely on principle.’

Over time, the experience of authors, reviewers and editors will tell, and the initiative will be kept under review to ensure that there is no negative effect on the quality of reviews.

Open peer review

The other main variant of peer review is open peer review. Opinions about open peer review in surveys are more mixed: in the 2009 survey cited above, only 20% of participants considered open peer review effective. Advocates argue that transparency promotes more civil and thoughtful reviewer comments, while critics fear it promotes less critical attitudes. Different variations of ‘open peer review’ carry different levels of openness. Some journals, such as the BMJ and the Frontiers journals, reveal the names of reviewers upon publication. Other publications, such as the EMBO Journal, publish the reviewer reports but keep the reviewers’ identity confidential. Others publish both the names of reviewers and the reports (for example, F1000Research).

Nature experimented with a completely open version of peer review in 2006. At the time, authors were offered a parallel track of peer review alongside the traditional single-blind process. If they opted in, their manuscript was posted on the Nature website and comments were solicited from the broader community, with reviewers identifying themselves. The uptake from authors and reviewers was low, and even when editors solicited input from potential reviewers, the comments received were not sufficiently substantive.

But as publishers continue to experiment with different versions of open peer review, attitudes in the community are changing, and Nature journals remain open to future experimentation.

Our colleagues in the business, humanities and social sciences at Palgrave Macmillan have also experimented with open peer review for monographs. In an April 2013 survey of the Palgrave Macmillan research panel, 67% of academics polled agreed or strongly agreed that ‘Publishers should experiment with alternative peer review methods’ (n = 403). Interviews with our own in-house editors, and with Palgrave Macmillan authors, suggested that ‘traditional’ single- or double-blind peer review, while effective in many ways, has limitations – the most oft-cited example being a small pool of regular reviewers used time and time again, limiting the range of perspectives on a given work. But prior to our January 2014 trial of open peer review for a selection of monograph proposals, no publisher had tried open peer review for scholarly book proposals, and there had been very few experiments in this area at all.

Palgrave Macmillan’s small-scale trial took the form of interactive, crowdsourced open peer review: proposals and sample chapters from just ten titles in economics, sociology and cultural and media studies were posted on a blog-based website and opened to public comment for six weeks.

The proposals and sample chapters had already gone through Palgrave Macmillan’s traditional single-blind review, and the books had been contracted for publication. The open peer review took place soon after this, at a point when most authors were in the process of writing their books.

After the trial closed, Palgrave Macmillan editors discussed the feedback with authors, and authors had the option of taking comments into account as they completed their manuscripts. In interviews, authors tended to see the comments as a useful supplement to the traditional reviews. Several saw the open feedback as akin to an early indication of how their book would be received.

Academics are theoretically open to commenting openly – in a post-trial mini-survey, 75% of respondents agreed or strongly agreed that they would consider commenting in a future open peer review trial (n = 94). However, it may be that unless invitations to review openly are framed as a direct review request, or unless the publisher is plugging into an existing online network, it will be very difficult to generate open comments. In our interviews, the theme that came up again and again was how busy academics are: an indirectly solicited review is always going to rank behind their own research, teaching, service, and paid or directly requested reviews – which is to say, a very low priority.

Peer review decision times

From submission to publication, the publishing process takes time. According to an article by Björk and Solomon (2013) analysing the time from submission to publication in scholarly journals: ‘The shortest overall delays occur in science technology and medical (STM) fields and the longest in social science, arts/humanities and business/economics. Business/economics with a delay of 18 months took twice as long as chemistry with a 9 month average delay.’ In a 2009 Publishing Research Consortium survey, authors reported average review times of about three months. On average, authors regarded review times of 30 days or less as satisfactory, but satisfaction levels dropped sharply beyond three months, and fewer than 10% were satisfied with review times longer than six months. Publishers have long been experimenting with different methods to speed up the publishing process, including the peer review element.

The increase in published research means that the typical reviewer is spending an increasing amount of time peer reviewing others’ work: five hours per review and some eight articles a year, according to the 2015 STM Association report. For many publishers, the slowest step in the publication process remains the evaluation of manuscripts by anonymous experts. As previously mentioned, we found that 70% of our authors are frustrated with the time peer review takes.

In March 2015, Nature Publishing Group conducted a one-month pilot on Scientific Reports, in which authors could opt in, for an additional fee, to a guaranteed time to ‘first decision’ of three weeks. The service was provided in partnership with Research Square and their established peer-review system, Rubriq. The experiment was capped at 40 manuscripts from a pool of approximately 1,800 submissions per month.

Over the four weeks the pilot service was available:

  • we received 25 requests from authors to pay for fast-tracked peer review
  • these authors represented a range of institutions, countries and career stages: professors (16 authors), doctors (eight authors) and one PhD student
  • geographically, the highest number of manuscripts (ten) came from China, but there were submissions from the UK, US, Germany, Finland and Sweden, as well as other Asia Pacific countries, including Japan, Taiwan, Korea and Singapore.

Members of the editorial board of Scientific Reports raised some concerns about the service, including the potential for discrimination against authors who are unable to pay additional fees, and the possibility that it may affect the quality of reviews or encourage unethical behaviour on the part of authors or reviewers. We plan to gather feedback from trial participants, peer reviewers, editorial board members and the wider scientific community to better understand these concerns.

Portable peer review

As the burden on the scientific community of providing peer review increases, we and others are exploring ways of making reviewer reports more portable, so that the same papers do not need to be reviewed over and over by different referees. One of the first cross-industry attempts was the Neuroscience Peer Review Consortium, a cross-publisher group of journals that, since 2008, has exchanged peer reviewer reports and identities at the author’s request and with referee permission; one of our journals, Nature Neuroscience, is part of the consortium. In 2013, BioMed Central (BMC), eLife, the Public Library of Science (PLOS) and the European Molecular Biology Organisation announced that they would give authors of papers they reject the option of making referees’ reports available to other publishers.

For almost ten years we have offered a transfer service within the Nature journals, whereby, at the author’s request, a manuscript rejected by a Nature journal can be transferred, along with the reviewers’ reports and identities, to another Nature journal. The editors of the receiving journal can then take the same reviews into account in reaching a decision. This option has seen increasing success since the launch of Nature Communications, a multidisciplinary journal seeking to publish high-quality papers of interest to specialists across the life, physical and earth sciences. Nature Communications has successfully published many high-quality manuscripts that did not fulfil the criteria of Nature or a sister journal by relying on the same reviewer reports and requesting minor modifications. Our editors are increasingly trying to guide authors who seek to resubmit a rejected manuscript, and since the autumn of 2014, Nature journal editors have been able to consult with each other in order to give authors a better recommendation of where to resubmit a declined paper.

Of course, author choice is central to all these initiatives. At Nature Publishing Group, the decision of where to resubmit, and whether to transfer reviewer reports, always rests with the author. And it is not uncommon for authors to seek a fresh start and bypass the transfer options altogether.

Crediting peer reviewers

There are some examples of publishers and independent providers of peer review paying their reviewers, or offering them payment in kind (for example, Palgrave Macmillan pays its monograph reviewers or offers books, while Collabra by the University of California Press pays its peer reviewers or offers them article processing charge [APC] discounts). Ware and Mabe note that although there is ‘very little demand’ for paid peer review, ‘there does appear, however, to be demand for greater formal recognition for the work of reviewers.’

Because many journals continue to operate anonymous peer review, it can be very difficult to provide proper credit to reviewers. At the Nature journals, reviewers can download a certificate of their peer review activity stating the number of times they have reviewed; we mitigate the risk of breaching anonymity by aggregating the data across many journals. Furthermore, Nature-branded journals send an annual letter of thanks to reviewers who have reviewed three times or more across all Nature journals in the previous calendar year. Just under 4,000 of the 31,000 reviewers from 2014 have received this letter, which recognizes their contributions and offers them a free online subscription to the Nature title of their choice. Crediting peer reviewers is a challenge we continue to try to solve in different ways. However, it would be difficult to create one system which recognizes the myriad standards in peer review activity across disciplines and journals, and credits reviewers accordingly.

Publons is one possible solution: it puts the power for credit back in the hands of researchers, allowing reviewers to upload their own peer review activity, disclosing as much as they like, in compliance with the confidentiality policies of the journals. Some journals, such as GigaScience and PeerJ, automatically upload their open peer review information to Publons. Meanwhile, ORCID, F1000Research and CASRAI, in consultation with many publishers including Nature Publishing Group, are working on a new cross-industry solution to credit peer reviewers by tying peer review activity to ORCID records using digital object identifiers (DOIs). According to Haak (2014), ‘the group will be reviewing data field requirements for citation and data exchange and workflows for associating the review and reviewer with persistent identifiers. They are scheduled to submit draft recommendations to an external review circle in June, prior to finalizing their report.’
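
As a purely illustrative sketch of the kind of record such a scheme might exchange – the field names and structure below are assumptions for illustration, not the actual ORCID/CASRAI schema, which was still being drafted at the time of writing – a portable review-credit entry could tie a reviewer’s ORCID iD to a review identified by a DOI while withholding manuscript details, in keeping with journal confidentiality policies:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ReviewCreditRecord:
        """Hypothetical record crediting one review to a reviewer's ORCID iD.

        Field names are illustrative only; they are not taken from the
        ORCID/CASRAI working group's draft recommendations.
        """
        reviewer_orcid: str    # persistent identifier for the reviewer
        review_doi: str        # DOI assigned to the review activity itself
        journal: str           # journal on whose behalf the review was done
        year: int              # year the review was completed
        disclosure_level: str  # e.g. 'journal-only': no manuscript details exposed

    # Example: credit the reviewer without revealing which manuscript was
    # reviewed, preserving the anonymity of both authors and content.
    record = ReviewCreditRecord(
        reviewer_orcid="0000-0002-1825-0097",   # ORCID's published example iD
        review_doi="10.1234/example-review-1",  # placeholder DOI
        journal="Example Journal of Publishing Studies",
        year=2014,
        disclosure_level="journal-only",
    )

    print(json.dumps(asdict(record), indent=2))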

Conclusion

Our goal continues to be to serve scientists and the advance of science. We see a number of challenges with the peer-review system as it operates today, and believe that at the rate research is growing, the current system is unsustainable in the long term. We remain committed to innovating to credit peer reviewers, and to expediting the peer-review process while not compromising its integrity or overloading busy researchers.

Ultimately, our aim is to provide a better peer-review service to our authors and more support for our editorial board and reviewers. We want to find ways to strike the right balance in offering services to authors, editors and peer reviewers to meet their needs, and we are committed to finding those solutions in collaboration with the research community: authors, reviewers and third parties, including other publishers.

Competing interests

The author has declared no competing interests.