A Peer Reviewer’s Guide to Not Getting Your Biomedical Research Paper Rejected.
Now, if you are a world-renowned researcher reporting on a cure for cancer or COVID-19, your paper is going to be accepted, even if the manuscript is badly written and sloppily put together. The peer reviewers and editors wherever you submit your paper will work hard with you to correct any flaws in the presentation and get this important information published quickly. Conversely, if your methodology is seriously flawed or your results would only be of interest to two people outside of your lab, your paper is going to be rejected even if it is flawlessly presented.
The vast majority of manuscripts submitted to biomedical research journals, however, fall somewhere in the broad middle ground. The experiments are competently, but not brilliantly, designed, and they are not exceptionally innovative. The data presented advances the understanding of the field but does not transform it. These are not great papers, but they are at least pretty good papers, or, in any case, not completely worthless papers. And this is where the mood of the reviewer matters.
The line between a manuscript being assessed as good enough to publish (accept with revisions) and one being deemed not good enough to publish (reject) is both subjective and malleable. Authors of scientific papers need to realize that a decision to accept a manuscript involves a lot more work on the part of the reviewer than rejecting it. The kind of detailed line-by-line analysis of the paper necessary to identify any points that would require clarification and/or correction prior to publication is tedious and time consuming. On the other hand, if a reviewer decides to reject the manuscript, all he or she needs to do is click on the “reject” box and write a sentence or two stating the reason for rejection. The more you annoy your reviewers with trivial mistakes, and the harder you make them work to understand your data and its significance, the more appealing the reject box will look. Reviewers accept that no manuscript they receive is ever going to be perfect. They have written scientific manuscripts themselves and they know that no matter how many times you proofread your work, you are bound to miss something. A few mistakes in your manuscript, even pretty big ones, can be easily brushed aside if the results and conclusions are sound. However, the effects of multiple errors are cumulative, and reviewers become more and more irritated with the writer with each mistake.
Over the last couple of years, I have been reviewing quite a few manuscripts submitted to middle-tier research journals. These journals are not the kinds of famous names that members of the general public have heard of, but they publish good, solid scientific papers and are well-respected publications within their relatively narrow fields. As I reviewed more and more submissions, I began to notice common patterns of annoying mistakes in many manuscripts. Often, these mistakes required the authors and reviewers to go through two or more rounds of revision before the manuscript was finally accepted. In other cases, manuscripts got killed off even when the actual data they presented looked pretty good. On more than one occasion, my cursor has hovered over the recommendation options, knowing that if I chose the “accept with major revisions” selection, I would likely be asked to re-review the revised manuscript in another four to six weeks, and possibly one more time after that, and dreading the prospect. I also knew, however, that choosing “reject” would mean that I would never have to look at the thing again. If plowing my way through the manuscript the first time felt like torture, the “reject” option looked much more appealing. In short, the more annoying little errors reviewers find in your manuscript, the less likely they are to give you a second chance to fix those mistakes.
Therefore, as a service to the research community, in the hope of sparing researchers the pain of having their manuscripts rejected for reasons unrelated to scientific merit, and sparing reviewers (such as myself) the pain of having to read badly prepared manuscripts, I present the following tips for authors of peer-reviewed scientific papers.
- I cannot emphasize this first point enough. If your first language is not English, please have someone whose first language is English proofread your manuscript before you submit it. It is not necessary for this native English-speaking proofreader to have any expertise in the subject matter, or even to be a scientist. The only requirement is that the person has a firm grasp of English sentence structure, grammar, and punctuation. Being able to speak with colleagues in English is not the same thing as being able to write well in English. Without the context cues of a spoken conversation, correct grammar, punctuation, and word order are essential for clear communication. Furthermore, spell checkers will tell you when something you typed is not a word, but they will not tell you when what you just typed is not the right word. Different languages structure their sentences differently, and there are many words in English that sound the same, or nearly the same, but have very different meanings (homophones). On more than one occasion, I have received manuscripts containing whole pages of text that were unintelligible because the words made no sense in the order in which they were presented. In other cases, sentences, as written, indicated the opposite of what was shown in the accompanying figure and were clearly not what the authors had intended to convey. No paper is going to be accepted if results are misstated or cannot be understood.
- Unless your model system is so common that people far outside of your field are going to be familiar with it, assume that the reader does not know what it is, because this will likely be true for at least one of your reviewers. In order to assess the relevance of a paper’s findings to a wider scientific audience, editors routinely select at least one reviewer who works somewhat outside of the field of study. For example, my research focuses mostly on spinal cord injury, but I have reviewed papers looking at peripheral nerve regeneration, stroke, and eye disease where I have experience with some of the cells and/or key techniques used in the study. Because not all of your readers will be thoroughly familiar with your model system(s), you should describe the model(s) you are using, the condition(s) being modeled, and any key features of the model that are relevant to your experimental design early in the Introduction. You do not have to provide a lot of detail, but the Introduction should contain enough information to help readers understand why you performed your experiments the way you did. For example, if you are using a genetic model of a disease that is sex-linked and fatal before adulthood, that would explain why the experiments were performed only on neonatal male animals. Reviewers need to understand your model system in order to assess the validity of your experimental design and your interpretation of the relevance of your findings. It is generally a good idea to cite a key review or the initial paper describing your model system. Trust me, your reviewer does not want to have to find, download, and read three or four papers before they can even begin to evaluate your manuscript. It puts us in a bad mood and predisposes us to hate the paper.
- In the Methods section, indicate the species, strain, and ages of the animals on which experiments were performed or from which cells or tissues were harvested. The Methods section should be concise but must contain enough detail to allow someone else to replicate your experiments. Age and sex are both key variables in the way that animals and cells respond to treatments.
- Be very clear about how your experiments were performed. If the protocols are complex, include a diagram to clarify the timeline or sequence of events. If your reviewers do not understand what you did, they are not likely to understand your results. If they misunderstand your results, they will reject your conclusions and your manuscript.
- Always include N values for all experiments. Biological systems contain a great deal of variability, and experiments with low N numbers carry a higher risk that a statistically significant difference between groups is actually a false positive (see the simulation sketch after this list). Without N values, the strength of the conclusions cannot be assessed, and the data is essentially meaningless.
- Make sure that your Methods section does not still describe methods for experiments that are no longer part of the paper. As you write your paper, you may find that the results are too complicated to explain in a single report and decide to put some experimental data in the next paper. Also, some people begin writing Methods sections by cutting and pasting methods from other manuscripts or grant proposals. Either way, you may end up with a Methods section that contains descriptions of experiments for which no data is presented in the paper. Few things are more frustrating to a reviewer. It forces the reviewer to go over the manuscript text and figures again and again, just in case they missed something on the first three readings.
- Include data on all appropriate control groups, including, wherever useful, intact untreated control animals. The variability in the normal condition can be important to interpreting the standard deviations or standard errors of the sham and treatment groups. Knowing what the normal, uninjured, wild type, untreated condition looks like also allows readers to assess the magnitude of any therapeutic effect, relative to the need.
- Include information about whether treatments were randomized and whether those collecting and analyzing the results were blinded with respect to treatment conditions. If not, explain why not. In some cases, analysis cannot be blinded because treatments cause visible changes in animals or cell cultures, but this is not critical if the response being assessed is unambiguous, such as survival time.
- Don’t forget the scale bars. All images should have scale bars, and all figure legends should indicate what cells, tissues, or biomaterials are being shown and their orientation. This is especially important when there are no visible landmarks to orient the reader. You know where you took the tissue and how the blocks were oriented for sectioning, but reviewers are not psychic.
- Do not make your figures unnecessarily complicated. Each figure should make a single point, or at most, a few points. Construct each figure around a key point you wish to highlight. Not all data needs to be illustrated in a figure. Some numbers or descriptions of observations can be presented only within the text.
- Describe results in the same order as the data is presented in the figures. Having to jump from Figure 1 to Figure 5, then back to Figures 2 and 3, and then to Figure 7 makes reviewers crazy.
- Be consistent with your use of abbreviations, color coding, symbols, scaling of graphs, etc. If filled black circles are used for the intact untreated control group in the line graph in Figure 1, then filled black circles should be used for this control group in the line graphs of all subsequent figures. Do not label the same control or treatment group data differently in different figures. The control PBS vehicle treatment condition should not be labeled “Control” in some figures and “Cont”, “C”, “Vehicle”, or “Veh” in others. Likewise, do not use the same color scheme to represent different things in different places. If red and blue bars are used to distinguish between the results for male and female animals in one graph, these colors should not be used to represent different age groups or treatment dosages in the next. Do not change the Y axes for PCR data from gene to gene, which makes it harder to assess the relative magnitudes of the effects on the expression of different genes. (One simple way to enforce this kind of consistency in your plotting code is sketched after this list.) Reviewers hate having to keep reading and rereading figure legends just to try to understand what is being presented. Even worse, if the reviewer gets confused and misreads your data, they might reject your paper because they think your conclusions are wrong.
- Make sure that your figure legends do, in fact, match your figures. Figures often undergo multiple changes as manuscripts are revised and revised again. The legend you originally wrote for Figure 2 may no longer have anything to do with what is in the Figure 2 you submitted to the reviewers.
- Do not overstate your findings. If the data you collected was a surrogate marker for functional recovery, do not say that your treatment restored function. This is a big no-no. Inaccurately worded statements like this can be used out of context by unscrupulous people to sell unproven treatments to desperate patients.
- When discussing the therapeutic potential of any particular treatment or treatment strategy, remember to address both the magnitude of any beneficial effects and their robustness. Doubling the number of regenerated CNS axons relative to the injury-alone condition sounds impressive, but not if the percentage of regenerated axons in the treatment condition is still less than 1% of the uninjured nerve tract, because you would generally need to restore at least 5% of normal connectivity for a subject to regain any functionality. Likewise, the potential utility of any intervention would be vastly decreased if it was only effective during a very narrow time window, only with precise dosing, or only for subjects in a narrow age range. Most scientific advances are only incremental. It is fine if your results fall in this category, but you need to be clear about what kind of increment it is.
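To make the point about N values concrete, here is a minimal simulation sketch, not drawn from any particular study: when only a fraction of the hypotheses being tested correspond to real effects (I assume 10% here purely for illustration, along with arbitrary group sizes and effect size), a much larger share of the “significant” p < 0.05 results from small groups turn out to be false positives.

```python
# A minimal sketch, under assumed parameters, of why low-N experiments are risky:
# when power is low, a larger share of the p < 0.05 results come from groups with
# no real difference. The 10% prior, effect size, and group sizes are illustrative
# assumptions, not values from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def share_of_false_positives(n_per_group, n_experiments=20_000,
                             true_effect=1.0, prior_true=0.10, alpha=0.05):
    """Among experiments reaching p < alpha, return the fraction in which the
    two groups actually had identical means (i.e., false positives)."""
    false_pos = true_pos = 0
    for _ in range(n_experiments):
        effect_real = rng.random() < prior_true        # only some hypotheses are true
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_effect if effect_real else 0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)                   # two-sample t-test
        if p < alpha:
            if effect_real:
                true_pos += 1
            else:
                false_pos += 1
    return false_pos / max(false_pos + true_pos, 1)

for n in (3, 10, 30):
    print(f"N = {n:2d} per group: "
          f"{share_of_false_positives(n):.0%} of 'significant' results are false positives")
```

Under these assumptions, the share of misleading “significant” results falls steeply as N per group rises, which is exactly why reviewers cannot judge your conclusions without knowing your N.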
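And to illustrate the point about consistent colors, symbols, and labels across figures: one simple approach is to define the group-to-style mapping once in your plotting code and reuse it for every figure. This is only a sketch with hypothetical group names and made-up numbers (matplotlib assumed), not a prescription for any particular workflow.

```python
# A minimal sketch: one shared style mapping so "Vehicle" is always a black
# circle labeled "Vehicle", in Figure 1 and Figure 5 alike. Group names and
# data below are hypothetical, for illustration only.
import numpy as np
import matplotlib.pyplot as plt

# Single source of truth for how each experimental group is drawn and labeled.
GROUP_STYLE = {
    "Vehicle":   dict(color="black",    marker="o", label="Vehicle"),
    "Low dose":  dict(color="tab:blue", marker="s", label="Low dose"),
    "High dose": dict(color="tab:red",  marker="^", label="High dose"),
}

def plot_timecourse(ax, days, means_by_group, sems_by_group):
    """Plot mean +/- SEM for each group using the shared style mapping."""
    for group, style in GROUP_STYLE.items():
        ax.errorbar(days, means_by_group[group], yerr=sems_by_group[group], **style)
    ax.set_xlabel("Days after treatment")
    ax.legend()

# Example usage with made-up numbers; every figure built this way stays consistent.
days = np.array([1, 7, 14, 28])
means = {g: np.random.default_rng(i).uniform(1, 3, size=4)
         for i, g in enumerate(GROUP_STYLE)}
sems = {g: np.full(4, 0.2) for g in GROUP_STYLE}

fig, ax = plt.subplots()
plot_timecourse(ax, days, means, sems)
ax.set_ylabel("Relative expression")
fig.savefig("figure1.png", dpi=300)
```

The design choice is simply that labels, colors, and markers live in one place, so they cannot drift between figures during revisions.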
Most of the advice I have offered about preparing manuscripts for publication also applies to preparing grant proposals, and for the same reasons. Reviewing grant proposals is also very time consuming. Furthermore, reviewers for government agencies or large nonprofits are often asked to review dozens of proposals in a short period of time. Do not make your reviewers hate you by making their jobs more difficult. Annoying your reviewers will only lower your grant score.