In addition to the many suggestions being made from within academe, we are starting to see potential options being developed by the private sector.
Axios Review
In this model, an author selects four target journals of interest from Axios’ lists and posts the manuscript, which is then reviewed by the service’s team of reviewers, with results promised in four to six weeks. “Our reviewer feedback helps those journals decide whether they want your paper…once referred, over 80% of papers are accepted for journal publication.” Axios’ average time from submission to journal publication is three months, excluding any time needed for revisions. Focused on the fields of ecology and evolution, the editorial board offers “rigorous external peer review” and referral to “the appropriate journal.” Once a journal asks the authors to revise and submit, it has effectively said that: i) the paper is within its scope, ii) it is not fatally flawed, and iii) it could be published in that journal.
F1000Research
This site uses “an author-led process, publishing all scientific research within a few days. Open, invited peer review is conducted after publication, focusing on scientific soundness rather than novelty or impact…Open peer review removes the secrecy and anonymity that can bias the way scientists critique each others’ work. In F1000Research, signed referee reports and author responses are published alongside each article. Authors can publish revised versions of their articles at no extra cost. All articles that pass peer review are indexed in PubMed.” Author fees are significant: research articles are charged $1,000 USD, scaling down to $250 USD for correspondence and other shorter pieces.
Authors are involved in identifying peer reviewers: “The peer-review process for F1000Research is a collaborative process between the authors and the editorial staff. We expect authors to play an important role in the selection of suitable referees for their article and to work closely with us until an appropriate number of reviews have been obtained.” Focused on the life sciences and clinical research, this London-based cousin of Faculty of 1000 was featured in the Charleston Advisor last year.
PeerJ
An open access mega-journal focused on the biomedical sciences, PeerJ was founded by Peter Binfield, formerly of PLoS ONE, and former Mendeley official Jason Hoyt. The company’s “mission is to efficiently publish the world’s knowledge. We do this through Internet-scale innovation and Open Access licensing to save your time, your money, and to maximize recognition of your contributions.” It launched in 2012 with the support of a panel of top-notch researchers (including Nobel winners) and publishers such as SAGE and O’Reilly, and has arrangements with 131 research institutions offering a “pre-pay model fit for any sized institution. When your researchers come to us they’ll be able to utilize these pre-paid plans.” In their model, authors (or their institutions) pay a one-time membership fee that allows them to publish in the journal in perpetuity, although authors are required to participate by reviewing or commenting on at least one submitted paper per year. Readers and libraries pay nothing, and (in a break from other OA publications) fees are levied not per article but per researcher, which results in much lower costs given the average number of articles a researcher publishes over a career. PeerJ was also reviewed in the Charleston Advisor in 2014.
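The economics behind the per-researcher model can be seen in a quick back-of-the-envelope comparison. The sketch below is illustrative only; the article processing charge (APC) and membership fee used here are hypothetical figures, not PeerJ’s actual rates:

```python
# Back-of-the-envelope comparison of two open access fee models.
# All dollar figures are hypothetical assumptions, for illustration only.

def per_article_total(articles, apc=1350.0):
    """Total cost if every article carries an article processing charge."""
    return articles * apc

def membership_total(authors, fee=199.0):
    """Total cost if each author pays a one-time lifetime membership."""
    return authors * fee

# One lab: three co-authors publishing ten papers over their careers.
apc_cost = per_article_total(10)     # 10 * 1350.0 = 13500.0
member_cost = membership_total(3)    # 3 * 199.0 = 597.0

print(apc_cost, member_cost)
```

Because a typical researcher publishes many articles over a career but pays the membership only once, the per-researcher total stays far below the cumulative per-article charges under almost any plausible figures.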
Peerage of Science
Founded in 2012 by Finnish researchers, this for-profit service focuses on the needs of authors as they move their manuscripts through the publication process. “Revenue for Peerage of Science comes from other organizations that want to purchase the peer review service to use in their decision-making, such as publishers, funding organizations, and universities.” In this model, authors submit their manuscripts and set the deadlines for the review process, which is all handled by the company’s manuscript management system. The company maintains a stable of qualified, non-affiliated ‘peers’ who can choose to review the manuscript, and these peer reviews are themselves reviewed to both increase and quantify “the quality of peer review.” This review process has already been approved by a host of journals (who subscribe to the Peerage), or the author can choose to export the peer reviews to a journal of their choice. The list of participating journals isn’t long, but it includes some impressive titles. Participation in reviewing is by invitation only: “Only scientists who have published a peer reviewed scientific article in an established international journal as first or corresponding author will be validated as Peers.” Peer lists are accessed by clicking on a world map to identify reviewers by country, an odd approach to selection. In 2012 the product won the ALPSP Award for Publishing Innovation.
Publons
With a tag line of “get credit for peer review,” this service offers to “help you record, showcase, and verify all your peer review activity. Simply upload your peer review history, choose how much information to disclose, and you’re all set to use your official reviewer record in promotion and funding applications. We collect peer review information from reviewers and from publishers, and produce comprehensive reviewer profiles with publisher-verified peer review contributions that researchers can add to their resume. Reviewers control how each review is displayed on their profile (blind, open, or published), and can add both pre-publication reviews they do for journals and post-publication reviews of any article. Publons is completely free for academics.”
Publons (whose name denotes “the fundamental unit of publishable content—‘the elementary quantum of scientific research which justifies publication’. Whenever you see ‘publon’, think ‘research paper’”) provides merit points for various reviewer activities. The company claims to have more than two thousand reviewers serving the health and science disciplines.
PubPeer
Calling itself the “online journal club,” this company “seeks to create an online community that uses the publication of scientific results as an opening for fruitful discussion among scientists. With PubPeer, scientists can comment on almost any scientific article published with a DOI or preprint in the arXiv.” The value is in the ability to comment on publications with peers “both positively and negatively” providing “another dimension to an article’s ‘impact’ that is independant (sic) of the name of the journal in which it was published. The comments on Pubpeer (sic) are seen by journals and other media outlets and eventually influence future and past publications.”
Their system is organized around a searchable centralized database that can be used to find articles, topics, or researchers of interest. Authors or interest groups are notified whenever their article receives comments. “The chief goal of this project is to provide the means for scientists to work together to improve research quality, as well as to create improved transparency that will enable the community to identify and bring attention to important scientific advancements.” In their system, commenters remain anonymous unless they choose to identify themselves: “Blind peer review has been employed by most major scientific publications in order to allow reviewers the ability to critically assess the work of their peers without fear of retaliations. We believe this to generally be a good system and would like to encourage it’s (sic) usage on PubPeer. However, we recognize that there are some situations in which a commenter might feel compelled to reveal his/her identity (e.g. if they are an author of the publication and would like to explain how experiments were conducted) and we have allowed commenters to reveal their identity on an article-by-article basis.”
Rubriq
This independent peer review service, originally covering the biological and medical sciences, has recently expanded its base to include humanities, engineering, and other disciplinary areas. The company was formed on the principles that reviewers deserve compensation, that “peer review should be timely, objective, rigorous, and transparent in its evaluation criteria,” and that by streamlining the process of peer review, it can save precious time for researchers and those who may benefit from the content of newly published research.
Although the company gives nodding acknowledgment of post-publication peer review and the growing stable of alternative metrics, Rubriq believes that “there should still be pre-publication review to check for plagiarism, conflicts of interest, ethical issues, and other critical factors, as well as to help sort and prioritize large bodies of research.” As a non-profit, the company stresses that “our independence enables us to improve the publishing process for all stakeholder groups. Our goal is that great science finds its best audience as fast as possible.”
ScholarOne
Thomson Reuters’ service has provided “scholarly publishers, societies, and associations with online, flexible workflow solutions since the mid-1990s. In fact, we’re the only peer review system with patented technology.” This long-time standard has a stable of established publishers, societies, and association clients, “state-of-the-art technology and data security, continuous investment in the product and future development, and the reassurance that we are a company that is here to stay.” Over time, ScholarOne has expanded to include manuscripts, books, and proceedings.
SciOR
SciOR, based at Queen’s University, Canada, is a “new online community of researchers in the natural sciences, health sciences, and social sciences—using a unique model for promoting: Efficient and accountable author-directed open (non-blind) peer review; effective reviewer participation incentives and reputation metrics; and rapid dissemination of discovery and commentary.” Potential members register with the system, indicating their review subject areas, and then any member is able to post paper titles and abstracts for potential reviewers to consider. Remuneration to the reviewer is at the determination of the author and reviewer. “A particularly helpful reviewer might in some cases evolve into a co-author of the paper, at the author’s discretion.” Once the manuscript is ready for publication, “papers posted as ‘available for publication’ can be browsed by registered journal editors only. Authors may also invite editors of their choice to register and view the authors’ posting as a submission for publication; or if preferred, authors may submit the reviewed paper—together with copies of its reviews and NCOI declarations from SciOR—directly to a journal through its usual online submission process.”
LIBRE
Another new service, LIBRE (“liberating research”), appears to still be in beta at this time.
Are We Ready for a Change?
Joachim Savelsberg, University of Minnesota Professor of Law and Sociology, and current co-editor of Law & Society Review, believes that a proactive editor is the best source for evaluating manuscripts: “I might welcome any additional information from openly available reviews. But I am interested in the fit of a paper for the particular target group, in my current case the law and society scholarly community. I am also interested in knowing who wrote a review to judge the competency with which it was produced. Also, could it be that those who need to earn some extra income (not necessarily the most dedicated and qualified scholars) would dominate this review business?”
Hames noted in a recent keynote lecture that “being labeled as ‘peer reviewed’ doesn’t mean that the work reported can be considered the absolute ‘truth’ and free of all errors. It means that the report has been looked at and critically assessed by appropriate experts, i.e., people with the relevant expertise and without any conflicting interests that might bias their assessment, hopefully to the best of their ability, and considered suitable for publication. Before publication, authors have usually been asked to address deficiencies, explain discrepancies and clarify any ambiguities, so papers (and the work behind them) get improved as a result. Peer review is, however, only as good and effective as the people managing the process.” [Emphasis added.]
All of this brings up a very key issue, as described well by British physician and researcher Richard Smith: “to my continuing surprise, almost no scientists know anything about the evidence on peer review. It is a process that is central to science—deciding which grant proposals will be funded, which papers will be published, who will be promoted, and who will receive a Nobel prize. We might thus expect that scientists, people who are trained to believe nothing until presented with evidence, would want to know all the evidence available on this important process. Yet not only do scientists know little about the evidence on peer review but most continue to believe in peer review, thinking it essential for the progress of science. Ironically, a faith-based rather than an evidence-based process lies at the heart of science.”
We assume that peer review is learned in the educational process. Sir Mark Walport reported to the UK House of Commons inquiry that peer review is “part of the training of a scientist…for example, journal clubs…Can more be done to train peer reviewers? Yes I think it probably can.” It can and should, and perhaps it is time to focus on this key area. Most of us have received odd assessments from peer reviewers. As a former editor, I see the need for greater oversight of, and interaction with, peer reviewers on the part of editors: advocating for the best submissions, the best reviews, and the best articles is a key obligation editors owe to their publishers, readers, and authors. Journal publishers have a serious responsibility for oversight as well; given the potential harm to their “brand” from assertions of fraud, one would expect a greater role on their part.
At the same hearings, David Sweeney noted a “volume problem.” “Obviously, more research is being done and more findings are being produced. We think that the amount that needs to go through the full weight of the peer review system need not continue to increase. Indeed, we are seeing initiatives in that. As part of our assessment exercise, we require four pieces of work over seven years from academics. In most disciplines, they will publish much more than that, but they do not submit it to the exercise because we are interested in selectively looking at only the best work. We would want to encourage academics to disseminate much of their work in as low burden a way as possible, but submit the very best work for peer review both through journals and then, subsequently, to our system. That is the only way to control the cost of the publication system. We must look for variegated ways of disseminating and quality-assuring the results.”
Mark Walport noted the key responsibility of research organizations for research integrity: “We believe very strongly that the responsibility for the integrity of researchers lies with the employers, so by and large that is the universities for university academics. It is clearly the research institutes for people employed by research institutes…It is the nature particularly of scientific research that errors are found out, and it can’t be in the interests of any good university not to have the research done to the highest possible standard…There is no incentive to cover up.”
And we can’t forget the responsibility of universities, their funders, and governments, which are metric-crazed, looking for statistical “proof” of the value of monies being spent as though we were talking about sales data. As David Colquhoun noted in a 2011 Guardian article: “University PR departments encourage exaggerated claims, and hard-pressed authors go along with them. Not long ago, Imperial College’s medicine department were told that their ‘productivity’ target for publications was to ‘publish three papers per annum including one in a prestigious journal with an impact factor of at least five.’ The effect of instructions like that is to reduce the quality of science and to demoralise the victims of this sort of mismanagement.”
The Critical Role of the Editor
Savelsberg sees the role of editors as most critical: “In my experience, as editors we read the papers and the reviews. We understand which reviews are precious and which ones are not or less so. We know that different reviewers use different standards of evaluation. We thus take their reviews seriously (albeit some more than others), but in the end we are responsible for the journal (not the reviewers) and we decide based on our reading in combination with what reviewers say. I think readers (and authors) face a grave issue of having to reduce mass/complexity. They depend on good criteria. Reputation of authors and of journals are among them. I’d still believe that those who follow such criteria will be most successful.”
Sugimoto sees value in many of these new startup enterprises and believes their “ideas are certainly promising, but they should be done in full acknowledgement of value-add that a (good) editor provides. A high-quality editor is in the position to select particular reviewers that will best be able to reflect on various aspects of the manuscript and lead to subsequent improvement. Given rising interdisciplinarity across all fields, this often means identifying reviewers who can speak to various dimensions of the paper. An editor may also intentionally select a difficult reviewer, in hopes of finding all the weaknesses in a given submission prior to publication. It should be remembered that reviewing is not only about making recommendations for acceptance, but also for improving the quality of the manuscript. Of course, there can be malpractice among editors, whereby reviewers are selected in full knowledge that they will make a recommendation with which the editor concurs. In such cases, removing editorial selection certainly improves the outcome for scholarly publishing. The other benefit of sharing reviews among journals is reducing the overall burden of reviewers.”
“The high-quality journals will continue to maintain standards,” Harnad believes. “The only research for which quality matters is research on which others are trying to build high quality research. When something important (in the sense of something others want to build something upon) is erroneous, it collapses under the weight of any attempt to build on it. The minority of research that is important, solid, and cumulative is also self-corrective. It has higher-standard peer review, but even if that misses something, it comes out in the next iteration, when someone tries to build on it and it collapses.”
James Galipeau, Senior Research Associate at the Knowledge Synthesis Group at the Ottawa Hospital Research Institute, has studied the quality of reporting in research and peer reviewing, and finds that “research on training of peer reviewers found that none of the interventions really had any effect on the quality of reviews. This is very concerning, considering that peer review is supposed to be the main failsafe by which we determine what is ‘good science.’ It seems almost unfathomable that we ask researchers to act as guardians of the integrity of research without offering them anything substantial (or in the best of cases, very little) in return, including proper training, adequate resources, incentives for their time, or rewards for quality work. Every year literally hundreds of billions of dollars of funding money in health and other fields ride on the hope that untrained, unsupported, and undervalued peer reviewers will produce high quality, comprehensive, knowledgeable, and timely reviews.”
“What is needed before we can have a meaningful discussion on the merits of new forms or configurations of peer review,” Galipeau continues, “is the development of a minimum set of evidence-based core competencies for the skill of peer review. Once we know how to train peer reviewers, then we can have a baseline of skills and characteristics, which will allow us to draw comparisons between types of peer review. Without a relatively similar skill set, it’s very difficult to tell which effects are due to the format of the peer review, and which are due to the skills (or lack thereof) of the reviewers. Once we have established a minimum set of evidence-based core competencies for peer reviewers, then we can train reviewers against these competencies to ensure that, at a minimum, they have the knowledge, skills, and abilities needed to conduct an adequate review. We can also then create a system of certification, whereby we can know, trust, and attest to the fact that peer reviewers have met the minimum criteria to adequately perform a review. While certification likely won’t bring an end to poor peer reviews, it certainly has the potential to raise the average quality of reviews.”
On March 6, 2015, we celebrate the 350th anniversary of the first published scholarly journal. This tradition of scholarly publishing has served research, academe, and society well over that time. With respect for this tradition and legacy, it is only fitting to finally and openly give fair consideration to everything that is good, bad, and unchanging in this system, so that we can make the changes needed to move forward with confidence into the next 350 years.
Nancy K. Herther is Librarian for American Studies, Anthropology, Asian American Studies & Sociology at the University of Minnesota, Twin Cities campus. [email protected]