
Is Journal Peer-Review Now Just a Game?

— Milton Packer wonders if the time has come for instant replay

Many believe that there is something sacred about the process by which manuscripts undergo peer-review by journals. A rigorous study described in a thoughtful paper is sent out to leading experts, who read it carefully and provide unbiased feedback. The process is conducted with honor and in a timely manner.

It sounds nice, but most of the time, it does not happen that way.

I have experienced the peer-review process from the perspectives of an author, reviewer, and editor. It is an enormously challenging and unpredictable journey.

Let's start with the authors. Authors who know the strengths and weaknesses of their manuscript do not write the paper in a dispassionate manner. Instead, they craft it for publication in a specific journal, fashioning and directing it toward the journal most likely to be receptive to its content and message. Successful authors are good matchmakers -- and good marketers. In their cover letters, they often sell the paper to the editors.

The editors receive an enormous number of papers. Top journals receive dozens of manuscripts each day. The editors perform an initial cursory review, and reject many papers in an electronic instant. More than half of the submissions do not survive the screening process. The authors are notified quickly about the bad news, but they receive no feedback. Many relatively unknown authors believe this process is biased against them.

If the paper passes the initial hurdle, the editor sends it out for "peer-review" by external experts. But who are these peers? It is nearly impossible to find experts who are knowledgeable and insightful and who are also willing to take the time to perform a thorough and thoughtful review. Luminaries routinely decline most invitations to review manuscripts. Their busy schedules do not afford the opportunity to do a good job, and many believe that there is little return for their investment of time and effort.

In a recent experience as an editor, I suggested four well-known experts as reviewers, but all of them declined. Then I proposed four different knowledgeable investigators, and one accepted. After three rounds of suggestions, I finally found two or three people who would agree to do the review.

Successful editors have identified a core group of people who are willing to do reviews on a regular basis, but sadly, the size of the group is not very large. All too often, editors rely heavily on early- or mid-career investigators, who are anxious to curry favor with the journal -- in the hope that the brownie points they earn might be useful when they submit their own work at a later time.

Do these early- or mid-career investigators do a decent job? Many of them are outstanding, but in too many circumstances, the young reviewer simply does not know the field well or is unfamiliar with the appropriate methods. Such reviewers read the paper as novices, and their lack of expertise and experience shows.

When I act as a reviewer, I am sent copies of the comments of the other people who evaluated the paper. In a distressingly high proportion of instances, I am astonished by the dismal quality of the other reviews. Some reviewers do not appear to have spent any time with the paper at all. Many submit formulaic responses, which fail to provide any useful critique or feedback. Many reviewers miss major and obvious deficiencies, whereas others nitpick a paper to death. Even for a single reviewer, the rigor of the evaluation process is not consistent, probably because the time they can spend varies according to their schedule.

In the end, the process is subject to the luck of the draw, and the editor is faced with reviews that vary enormously in quality. Patently deficient evaluations are discounted. Some reviewers promise a review but fail to deliver. Others send private comments to the editor but none to the authors. If the editor is fortunate, at least one reviewer actually does a thorough job.

The really amazing part? Typically, the reviewers perform their evaluation without any ability to know if the authors accurately described their methods or their data. Often, the reviewers are akin to judges at a beauty pageant. They know what looks good, but sometimes, they might wonder if it is real.

As an author, I am often amazed by the reviews that I receive. Many reviewers seem not to have taken time to read the paper. Some offer tangential ideas that have little to do with the manuscript, presumably to show that they thought about the subject. Others insist that their own work be cited. (The self-serving nature of some comments can be appalling.) Often, I cannot determine how the poor editors ever made an informed decision.

Nevertheless, if the journal has expressed interest in the paper, the authors now must revise the manuscript and write a letter that responds to the feedback of the reviewers. Regardless of the insanity of any individual comment, the authors must respectfully (often obsequiously) embrace each sentence and thank the reviewer for it. All too often, in their zeal to please the reviewers, the authors revise the paper in a way that makes it much worse than the original.

After the entire process is complete, the authors may receive an acceptance letter. This letter offers them the opportunity to have their paper published, but often, only if they pay a fee (even in top-ranked journals). And when the paper finally appears online or in print, the authors have the privilege of seeing their work ignored.

Does this sound like a ridiculous game? It is.

As in the movie WarGames, it is a "strange game," in which "the only winning move is not to play."

Do you think I am exaggerating? Do you believe it is not a game?

British journal editors do not refer to those who provide external feedback as "reviewers." They call them "referees." In the U.S., the term "referee" describes someone who watches a game closely to ensure that the rules are adhered to. But when I act as a reviewer, I do not think of myself as a referee or an umpire. Officials are not allowed to publicly celebrate a great play or moan when a player errs. Peer reviewers can (and are supposed to) do both.

I love writing papers. I adore looking at data, researching the literature, and working with fellows and young faculty. I am delighted when I discover new ways of synthesizing and presenting a concept. And I am truly delighted when my paper is accepted. But given everything that I have just written in this essay, I am clearly a glutton for punishment.

The peer-review process is horribly broken. My dear friend Harlan Krumholz, MD, of Yale University, has written about this for years. In a famed paper, he argued that journals are too slow, too expensive, too limited, too unreliable, too parochial, and too static.

In his efforts to forge a solution, Harlan has been the leading advocate for preprint servers -- platforms that allow authors to post a version of a scholarly paper before formal peer review. The idea has real merit, and its following is growing rapidly.

If you think of peer-review as a game, then the preprint server is easy to understand. If you are fond of football or baseball, just think of it as "instant replay." You can see the paper immediately and repeatedly in its original state from every possible angle. But in the case of preprint servers, the "instant replay" takes place before the play has actually occurred.

Got it? Peer-review is a game -- with a dollop of talent and an abundance of chance. So, is it time for instant replay?

Disclosures

Packer recently consulted for Actavis, Akcea, Amgen, AstraZeneca, Boehringer Ingelheim, Cardiorentis, Daiichi Sankyo, Gilead, J&J, Novo Nordisk, Pfizer, Sanofi, Synthetic Biologics, and Takeda. He chairs the EMPEROR Executive Committee for trials of empagliflozin for the treatment of heart failure. He was previously the co-PI of the PARADIGM-HF trial and serves on the Steering Committee of the PARAGON-HF trial, but has no financial relationship with Novartis.