The manuscript is not comprehensive in its presentation of
methods and results
When Robert Boyle performed experiments with the air pump in the
17th century, he communicated his discoveries to
other “natural philosophers”4 through live demonstration.
Where that was not possible, scientific findings and discourse were
generally communicated through personal letters. When the scientific
paper became popular, its purpose was to give readers the impression
that they were there with the author in witnessing the experiment.
Detail was important. Today, we are often taught that the purpose of a
report is not just to communicate the findings, but also to provide the
reader with the information needed to repeat the study. In my experience
as an editor (and reviewer), I have reviewed several manuscripts that
inadequately describe the study methods or omit information
required to interpret the presented results, much less repeat
the study. It may be unclear how participants were selected, how
the data were acquired, what experimental procedures were followed
(e.g. randomization and blinding), and whether the approach to
measurement was valid and reliable. Statistical analyses are often
poorly described. It is not enough to say which tests were used – one
must also state what is being compared and how. In addition, important
metrics related to the quality of the analyses are often not reported
(e.g. the assumptions of the statistical tests and whether those
assumptions were met). Likewise,
the results might not include important information (e.g.
characteristics of study participants, the number who dropped out,
and statistical data such as standard errors and confidence
intervals). I suspect that much of the under-reporting of methods and
results is due to concern about the maximum word count. If so, I
recommend that authors prioritize the justification for the study, and
the methods and results, over discussion of the findings and
editorializing.
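The statistical-reporting points above can be made concrete with a brief sketch in Python. The data, group labels, and analysis choices here are illustrative assumptions, not drawn from any manuscript; the point is the shape of a complete report: state what is being compared, check an assumption of the chosen test, and give an effect estimate with a confidence interval rather than a p value alone.

```python
# Hypothetical example of fuller statistical reporting for a
# two-group comparison. The simulated data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, 40)  # hypothetical outcome scores, group A (n=40)
group_b = rng.normal(55, 12, 35)  # hypothetical outcome scores, group B (n=35)

# What is compared: mean outcome score between groups A and B.
# Check an assumption (approximate normality) before relying on a t-test.
p_norm_a = stats.shapiro(group_a).pvalue
p_norm_b = stats.shapiro(group_b).pvalue

# Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# Report the effect estimate with a 95% confidence interval, not just p.
diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a)
             + group_b.var(ddof=1) / len(group_b))
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"Shapiro-Wilk p: A = {p_norm_a:.2f}, B = {p_norm_b:.2f}")
print(f"Mean difference = {diff:.1f}, 95% CI ({ci[0]:.1f}, {ci[1]:.1f}), "
      f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```

A methods section reported this way tells the reader what was compared, which test was applied and why, whether its assumptions were examined, and the size and precision of the effect.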