Should We Get Rid of the Scientific Paper?
Dr. Patrick D. Huff
Ritchie, Stuart (2022). "The big idea: should we get rid of the scientific paper?" The Guardian. Review and commentary by Dr. Patrick D. Huff, Ed.D. (2,782 words)
Background
Stuart opens his article by positing that the established format of the scientific paper, such as the APA-styled paper used in the United States, is slow, encourages hype, and is difficult to correct. He argues that it may be time to consider a radical overhaul of the publishing process to improve science and the reporting of evidence, findings, and recommendations.
He then asks his readers when they last read a scientific paper, by which he means a hard copy. Stuart recalls that an older academic in his previous university department used to keep all his scientific journals, so that on entering his office you would be greeted by a wall of back issues of journals such as the Journal of Experimental Psychology, Psychophysiology, and the Journal of Neuropsychology.
So, what are scientific journals, and what is their purpose? In academic publishing, a scientific journal is a periodical publication intended to further the progress of science, usually by reporting new research.[i]
Much has changed since the advent of the internet and cloud storage. Stuart argues that few modern-day academics still keep scientific papers in printed form, a practice that dates back to the first scientific journals, established in 1665.[ii] Most academics now curate these papers via the internet, in the form in which they are submitted, reviewed, and published online. During the pandemic, new research was often devoured on social media, an essential part of the unfolding story of Covid-19. Hard copies of journals are increasingly viewed as curiosities, or not viewed at all.
Stuart maintains that "although the internet has transformed the way we read such papers, the overall system for how we publish science remains largely unchanged. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal."
Problems and Insights
Stuart argues that this system comes with big problems. He states, "Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce 'better' results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on." This issue has come under continuing scrutiny in both academic and commercial sectors as pressure grows to share discoveries at the speed of the internet and to keep pace with competitive scientific advances.[iii]
On review, Huff argues that Stuart's observations have merit when weighed against the scientific methods applied and the associated academic submission, review, and publication process.
The present terms of acceptance and the publication of academic studies or papers are primarily bound by the same mechanisms that have governed the process since the late 1600s. However, Huff adds that Stuart has omitted at least one critical component of this process.
The conceived research, study, findings, and subsequent publication approvals must fit what the reviewing committee (peer group) believes to be a popular topic and a compelling read within the community.[iv] In other words, the paper must be powerful enough to attract a wide readership, raise the scientific journal's esteem, and further the sales and profits that support and promote the journal.[v] What Stuart omits from his article, then, is the age-old profit motive of these journals to review, endorse, and publish any scientific paper or report that serves the journal's benefit, and not necessarily that of the researchers and scientists doing the actual hard work in their communities of interest.
Expanding Stuart's Insight
Independent of Huff's insights, Stuart suggests possible fixes to change the way journals work. For example, the decision to publish could be based only on the methodology of a study rather than on its results. Stuart states that this is already happening to a modest extent in a few journals. Maybe scientists could publish all their research by default, and journals would curate rather than decide which results get out. Stuart elaborates by stating that perhaps the scientific community could simply eliminate scientific papers.
On reflection, Huff argues this is a startling suggestion that could seriously undermine the validity of the entire scientific methodology and subsequent publication process. After all, given the present faults of the system, journal peer reviews tend to sort and filter out papers that fail to demonstrate merit and validity. Or so it would seem.
Huff's research group has been engaged in active, independent scientific studies since early 2010, and over that time he has observed what he argues are emerging phenomena in the industry. Research and studies associated with certain high-interest or high-visibility issues appeared to be influenced by third-party (funding) interests, leading to papers or reports being published with conclusions and recommendations that are not supported by empirical scientific method or, worse yet, by sound data.
If Stuart's rapid and openly transparent innovations were adopted, Huff suggests that much of the scientific work published in today's online forums would quickly be exposed as flawed or subject to critical errors or omissions. In many cases, this change in practice would lay open to suspicion many well-respected research organizations and institutes that are actively producing low-quality or erroneous investigative papers and reports. To serious scientists, such practices undermine everything they strive to achieve in their careers: honor, integrity, authority, the respect of peers, and the chance to contribute critical discoveries to their fields and communities. On reflection, then, Huff would argue that eliminating scientific review and publication processes as a "big idea" would likely prove a "big mistake." Nevertheless, Huff agrees that the industry needs innovation to increase efficiency, raise the quality and validity of investigations, and speed the delivery of critical discoveries to the scientific community.
Huff's Experience
Since 2010, Huff's team has observed a substantial increase in published papers whose supporting data (evidence), upon close examination, did not withstand the scrutiny of data-error and pattern-manipulation analytics (validity testing). In short, these reports were discovered to be based on manipulated data yielding false findings and conclusions.
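As an illustration only (neither Stuart's article nor Huff's commentary names the specific analytics involved), one widely known pattern-manipulation screen compares a dataset's leading-digit distribution against Benford's law. A minimal Python sketch, assuming the data are nonzero values spanning several orders of magnitude:

```python
# Minimal sketch of a Benford's-law screen for possible data manipulation.
# Illustrative only: the specific analytics used by Huff's team are not named.
from collections import Counter
import math

def leading_digit(x: float) -> int:
    """Return the first significant digit of a nonzero number."""
    s = f"{abs(x):.15e}"  # scientific notation, e.g. '3.140000000000000e-02'
    return int(s[0])

def benford_chi_squared(values) -> float:
    """Chi-squared statistic comparing observed leading digits to Benford's law."""
    observed = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(observed.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford-expected count for digit d
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2  # compare against a chi-squared critical value with df = 8
```

A large statistic (for example, above roughly 15.5 at the 5% level with eight degrees of freedom) only flags a dataset for closer human review; it is not, by itself, evidence of fraud.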
Huff states that his team began to observe a significant level of erroneous data reporting in 2019 and early 2020 while attempting to investigate the initial stages and effects of the recent pandemic. The degree of false reporting emerging from prestigious institutions became so extensive that Huff's group elected to stop their investigation into the pandemic because of the considerable amount of misinformation and erroneous data published in the scientific community. They concluded that a thorough investigation would be better conducted once the prevailing publications had been corrected by members of the community committed to exposing and correcting such errors, restoring the credibility of the industry and of those associated with it.
Empirical Investigation Practice Standards
The obsession with creating and publishing scientific papers is the stuff of which professional careers, recognition, standing, and funded appointments are made. So Stuart is right when he states, "Scientists are obsessed with papers – specifically, with having more papers published under their name, extending the crucial publications section of their CV." Stuart argues that although it might seem outrageous, the scientific community could do without this process as it presently stands. He continues by stating that this obsession represents at least one aspect of the problem. Paradoxically, the sacred (traditional) status of published, peer-reviewed papers makes it harder to validate their contents (evidence, findings, and conclusions), because many journals place strict limits on the amount of documentation, data, evidence, or written content a publication may include.
Huff adds that although this is the case with many respected journals, in his experience many supporting peer reviewers are grossly under-resourced for the in-depth reviews and degree of analysis necessary to validate the reports approved for publication. In short, many journals and their staff are either under-resourced or their reviewers fail to do their job. Huff has observed such conditions directly while serving as a scientific paper reviewer. In many cases, flawed research papers are not reported or corrected until several years after publication. As a result, many researchers entering the profession make the recurring mistake of citing these papers as valid when their peers have already discovered that the studies suffered critical errors or omissions. The failure to vet and openly disclose flawed research acts like a slow, recurring cancer and is a significant disservice to the scientific community. Fortunately, scientists within any given field tend to pay close attention to previously published research that is suspect.
Scientific Papers and Publication - An Arduous Undertaking
Stuart aptly observes the "messy reality of scientific research." Studies almost always throw up weird, unexpected numbers (outcomes) that complicate any simple interpretation. In addition, journals tend to enforce traditional guidelines, including page or total word counts. Under such constraints, scientists are often forced to oversimplify or, as Stuart puts it, "dumb things down."
Journal Review and Publication Interests and Focus
Stuart states that if a scientist is working towards a significant discovery or a milestone observation associated with a published paper, "the temptation is ever-present to file away a few of the jagged edges" to tell a better story. He notes that in his surveys, many scientists admit to doing just that: editing their papers in ways that make them more attractive while somewhat distorting the underlying science along the way.
Emerging Access and Publication Via the Internet – Studies and Reports
Emergence of online papers and reports
Stuart states that some scientific fields are using online notebooks instead of journals. He argues this emerging practice is beginning to replace the products of the traditional review and publication process ("living fossils," in his phrase) with living documents.
Rapid revisions and corrections
Huff concurs with Stuart on many of his observations, including supporting the scientific community's increased adoption of rapid online papers or research publications. This practice includes the publication of interim (initial) evidence collection, data, and analysis with the appropriate disclaimers. Unfortunately, it is common for scientific papers to contain errors of various types.
This is especially true of large or complex investigations involving hundreds or thousands of pages of documentation. As many of Huff's senior advisors in the industry have argued, researchers and investigators should not allow mundane grammatical or formatting issues, or minor omissions, to hold back the publication of critical studies.
To their point, the evidence collected and initial findings may be too important to hold back publication as the information may assist others in parallel research. In fact, in some cases, an early or initial pre-publication review of a study may assist the core investigator(s) in revealing possible oversights or errors in their methodology or approach. Catching certain types of research design or procedural mistakes along these lines can save valuable resources and often lead to increasing the validity of findings and recommendations of an investigation.
Error-Detection Applications
Not unlike Stuart, Huff's research team regularly uses algorithms to uncover data and scripting errors. Stuart states it is not uncommon to discover that more than 50% of the papers he reviews contain at least one statistical error, and that more than 15% contain errors serious enough to overturn the results.
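Neither Stuart nor Huff names the specific tool behind these scans, so the following is a hypothetical sketch of the kind of consistency check such an algorithm performs: recomputing a paper's two-tailed p-value from its reported t statistic and degrees of freedom, and flagging a mismatch. It assumes SciPy is available:

```python
# Minimal sketch of a statistical-consistency check of the kind described
# above (no specific tool is named in the article): recompute a two-tailed
# p-value from a reported t statistic and flag a disagreement.
from scipy import stats

def check_reported_p(t: float, df: int, reported_p: float,
                     tolerance: float = 0.005) -> bool:
    """Return True if the reported p-value matches the recomputed one."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)  # two-tailed p from t and df
    return abs(recomputed_p - reported_p) <= tolerance

# Hypothetical reported result: "t(28) = 2.10, p = .04"
print(check_reported_p(t=2.10, df=28, reported_p=0.04))   # True: consistent
print(check_reported_p(t=2.10, df=28, reported_p=0.004))  # False: flag for review
```

Run at scale over the test statistics extracted from a body of papers, a check like this is how one arrives at error rates of the sort Stuart cites.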
Under traditional publication methods, correcting this kind of mistake is an extensive undertaking requiring unbudgeted resources. In short, the chief investigator must write to the journal, get the attention of the assigned editor, and persuade them to issue a new, short paper that formally details the correction.
Stuart states that many scientists who request corrections find themselves "stonewalled or otherwise ignored by journals." Imagine the number of errors that litter critical scientific publications but have never been corrected because doing so is viewed as too much of a hassle. Huff argues that, in practice, such errors are often resolved (when discovered) by members of another research team, who make the observation and connect with the authors of the flawed study to confirm the errors and findings.
Considering the Data
Stuart observes that, back in the day, sharing the raw data that formed the basis of a paper with that paper's readers was more or less impossible. Today, it can be done in a few clicks by uploading the data to an open repository. And yet, he states, we act as if we live in the world of yesteryear. Papers still hardly ever have the data attached, a practice that prevents reviewers and readers from seeing the complete picture.
He continues, stating that we could replace papers with mini-websites (sometimes called "notebooks") that openly report the results of a given study. This would give everyone a view of the complete process, including the ability to examine the data, inspect the analysis used, and review the write-up.
Stuart suggests that study datasets should be appended to these websites along with all the statistical code used to analyze them. Given this, anyone could reproduce the complete analysis to confirm or validate the findings. This process would allow the swift introduction of corrections and increase review efficiency, with the date and time of every update logged.
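Stuart describes this only at a high level; the sketch below, with hypothetical file names and a placeholder analysis, shows what such a self-reproducing, update-logged workflow might look like in Python:

```python
# Minimal sketch of the workflow Stuart describes: the raw dataset and the
# analysis code live alongside the write-up, and every rerun is logged with
# a timestamp. File names and the analysis itself are hypothetical.
import csv
import statistics
from datetime import datetime, timezone

def run_analysis(data_path: str = "study_data.csv") -> dict:
    """Recompute the study's summary statistics from the attached raw data."""
    with open(data_path, newline="") as f:
        scores = [float(row["score"]) for row in csv.DictReader(f)]
    return {"n": len(scores),
            "mean": statistics.mean(scores),
            "sd": statistics.stdev(scores)}

def log_update(results: dict, log_path: str = "analysis_log.txt") -> None:
    """Append a timestamped record so every rerun or correction is traceable."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(log_path, "a") as log:
        log.write(f"{stamp}\t{results}\n")

if __name__ == "__main__":
    log_update(run_analysis())  # anyone with the notebook can rerun this
```

Because the data, code, and log travel together, any reader can rerun the analysis and any correction leaves a dated trace, which is precisely the efficiency Stuart is after.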
Stuart states that this online practice would significantly improve the status quo where the analysis and writing of papers go on entirely in private, with scientists or their supporting organizations electing whether to make their results public.
Issues with Investigation Transparency
Stuart argues that throwing light on the entire research process can assist in revealing ambiguities and hard-to-explain contradictions in results. This is the nature of scientific investigations. He continues by stating there are also other potential benefits of this hi-tech way of publishing science. For example, if you were running a long-term study on the climate or child development, it would be a breeze to add new data as it appears.
Traditional approaches to empirical studies or investigations present significant barriers to introducing changes. Some of these barriers have to do with basic tooling and skills. As Stuart suggests, it is easy to write a Word document with your results and send it to a journal, as we do now; it is harder to build a notebook website that weaves together the data, code, and interpretation. Moreover, how would peer review operate in this scenario? It has been suggested that scientists hire "red teams": people assigned to pick holes in the findings and dig into the online research notebook (website) to test and cross-examine the data and analysis. While this would be a great way to ensure the validity of the data and findings, who would pay for it and how the system would work remain open questions for debate among contributing colleagues and negotiation with investors (stakeholders).
Huff notes that, in his experience, this sort of open transparency could only be applied to research operations and procedures within a single department supported by a single organization, or within a collaborative of organizations that share mutual disclosure and cooperation agreements or memorandums of understanding. Opening an investigation's purpose, goals, procedures, progress data, and analysis to colleagues before the investigation is complete would force the chief investigator to realign the pre-approved study budget and allocate resources to excesses that would likely delay, distract from, or derail the investigation. This aside, Huff suggests that although Stuart's proposals are sound and would likely improve the process of scientific studies, they would at the same time introduce a new set of perplexing complications for chief researchers to implement and manage. In addition, as Stuart implies, the principal investigator would need to add further skills and information systems management talent to their team. In Huff's case, this would mean adding other human resources to a team that already consists of 38 principal colleagues.
Huff's research team already uses online document sharing and edit-tracking software that supports rapid and efficient internal team reviews, error detection, and corrections. This software lets project team members open comments and share their thoughts on the design, approach, or possible adjustments to the methodology used. If other research teams have been included in the project, this process is also open to their input. Online technologies and collaboration applications made available to the research public have significantly improved the quality, validity, active internal peer review, and rapid publication of empirical studies, papers, and reports.
Conclusions
Stuart closes his arguments and observations by stating that we have made astonishing progress in so many areas of science, yet we remain stuck with the old, flawed model of publishing research.
Indeed, Stuart states that even the name "paper" harkens back to a bygone age. Given this, Stuart concludes that some fields of science have already moved in the direction he describes, using online notebooks instead of journals. According to Stuart, this emerging practice introduces the curation of what he calls "living documents instead of living fossils." Stuart concludes, "It is time for the rest of science to follow suit." On review, Huff's research and publication experience supports many of Stuart's recommendations.
Additional Concluding Insight
Huff concurs with many of Stuart's suggested innovations, which would certainly improve traditional practices. Certain aspects of the traditional scientific review and publication process need to be advanced so that critical discoveries reach fellow investigators and the broader community in a rapid and timely manner; the current process simply takes too long. However, Huff cautions that, owing to the nature of certain types of research, including their funding mechanisms, proprietary implications, and secure governmental or national interests, many of Stuart's proposals could only be applied to groups conducting open or public studies that are not subject to privacy, proprietary, or sensitivity requirements, including an organization's or institution's contractual security protocols and operating restrictions. Such transparency limitations or security controls have applied to approximately 85% of Huff's work. To Stuart's credit, Huff states that any emerging technologies or modifications that improve the validity of investigations, findings, and recommendations are welcome, as long as the innovations do not undermine the integrity of the core research effort. In conclusion, Huff would argue that Stuart's suggestion of eliminating the scientific paper altogether would be akin to throwing the baby out with the bathwater. That said, he encourages scientists to forge innovations that improve the rapid production and vetting of high-quality, highly valid investigations, with a focus on speeding the delivery of critical discoveries to the scientific community.
[i] "Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors" (2009). Medical History. 53 (1): 1–4. doi:10.1017/s0025727300003288. PMC 2629173. PMID 19190746.
[ii] Bontis, Nick; Serenko, Alexander (2009). "A follow-up ranking of academic journals". Journal of Knowledge Management. 13 (1): 17. CiteSeerX 10.1.1.178.6943. doi:10.1108/13673270910931134.
[iii] Pontille, David; Torny, Didier (2010). "The controversial policies of journal ratings: Evaluating social sciences and humanities". Research Evaluation. 19 (5): 347. doi:10.3152/095820210X12809191250889.
[iv] Bontis, Nick; Serenko, Alexander (2009). "A follow-up ranking of academic journals". Journal of Knowledge Management. 13 (1): 17. CiteSeerX 10.1.1.178.6943. doi:10.1108/13673270910931134.
[v] Pontille, David; Torny, Didier (2010). "The controversial policies of journal ratings: Evaluating social sciences and humanities". Research Evaluation. 19 (5): 347. doi:10.3152/095820210X12809191250889.