The Reproducibility Problem in Science—What’s a Scientist to do? (Part 3 in a series of 3)
Reverend Bayes, probably

In this final instalment of the series, we'll wrap up the examination with possible solutions and remedies.

Well, that was disappointing

Preregistration, or "publicly" registering a plan of research, is one way in which psychologists and other scientists are working toward a solution to the reproducibility problem. Basically, a "…registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through peer-review, publication of the findings is provisionally guaranteed, based on whether the authors follow the proposed protocol. One goal of registered reports is to circumvent the publication bias toward significant findings that can lead to implementation of questionable research practices and to encourage publication of studies with rigorous methods."

But early returns, at least for psychology, are a bit disappointing. In an article entitled "A solution to psychology's reproducibility problem just failed its first test," David Adam described the experience of Aline Claesen, a psychologist at the Catholic University of Leuven (Belgium), who, with her team, looked into 27 preregistration plans for studies to be considered by the journal Psychological Science. Claesen's findings? In all 27 cases, "…researchers deviated from their plan—and in every paper but one, they did not fully disclose these deviations."

To be fair, this is a nascent approach that may have some kinks to work out, and it will take time for it to become a norm across more and more journals. Dan Simons, a psychologist on the faculty at the University of Illinois at Urbana-Champaign, noted, "My guess is that most [authors] were well-intentioned and just didn't know how to do it very well."

New Rules for the Road

Diener and Biswas-Diener suggest the dissemination of replication attempts through outlets such as:

  • Center for Open Science: Psychologist Brian Nosek, a champion of replication in psychology, has created the Open Science Framework, where replications can be reported.
  • Association for Psychological Science: Has registered replications of studies, with the overall results published in Perspectives on Psychological Science.
  • Plos One: Public Library of Science—publishes a broad range of articles, including failed replications, and there are occasional summaries of replication attempts in specific areas.
  • The Replication Index: Created in 2014 by Ulrich Schimmack, the so-called "R-Index" is a statistical tool for estimating the replicability of studies, of journals, and even of specific researchers. Schimmack describes it as a "doping test."
  • Open Science Framework: an open source software project that facilitates open collaboration in science research.
  • Psych File Drawer: Created to address the file drawer problem and allows users to upload results of serious replication attempts in all research areas of psychology. Archives attempted replications of specific studies and whether replication was achieved.

They go on to make the point that “The fact that replications, including failed replication attempts, now have outlets where they can be communicated to other researchers is a very encouraging development, and should strengthen the science considerably. One problem for many decades has been the near-impossibility of publishing replication attempts, regardless of whether they’ve been positive or negative.”
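The R-Index mentioned above lends itself to a short sketch. Schimmack's actual tool is more elaborate, but the core "doping test" idea can be approximated: estimate each study's post-hoc power from its p-value, then penalize a literature whose reported success rate outruns that power. The p-values below are made up for illustration; this is a rough sketch, not Schimmack's implementation.

```python
from statistics import NormalDist, median

nd = NormalDist()

def observed_power(p, alpha=0.05):
    """Post-hoc power implied by a two-sided p-value: the chance a
    replication with the same design would again reach significance."""
    z = nd.inv_cdf(1 - p / 2)          # z-score implied by the p-value
    crit = nd.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    return (1 - nd.cdf(crit - z)) + nd.cdf(-crit - z)

def r_index(p_values, alpha=0.05):
    """Sketch of the R-Index: median observed power, penalized by the
    'inflation' (reported success rate minus median observed power)."""
    med_power = median(observed_power(p, alpha) for p in p_values)
    success_rate = sum(p < alpha for p in p_values) / len(p_values)
    return med_power - (success_rate - med_power)

# A literature where every reported p-value hovers just under .05
# earns a score near zero -- the statistical tell of a doped record:
print(round(r_index([0.04, 0.045, 0.03, 0.049, 0.02]), 2))
```

The intuition: a set of barely-significant results implies mediocre power, and mediocre power cannot honestly produce a 100% success rate.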

Open Science has Six Principles as guides for better scientific inquiry:

  • “Open Methodology: Document the application of methods, and the entire process behind them, as far as practicable and relevant.
  • “Open Source: Use open source technology (software and hardware) and open your own technologies.
  • “Open Data: Make data freely available.
  • “Open Access: Publish in an open manner and make it accessible to everyone (Budapest Initiative).
  • “Open Peer Review: Transparent and traceable quality assurance through open peer review.
  • “Open Educational Resources: Use free and open materials for education and university teaching.”

Ioannidis offers some helpful tips in the form of Corollaries to keep in mind when conducting, reviewing or reading studies:

  • “Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
  • “Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
  • “Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
  • “Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
  • “Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
  • “Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.”

He also notes that small samples, filtered through significance thresholds, tend to yield inflated effect estimates. So, as with most things, bigger is better! When replication studies are done, they should use sample sizes in excess of those in the initial investigation (duh).
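This inflation (sometimes called the "winner's curse") is easy to see in a toy simulation: run many small studies of a modest true effect, keep only the ones that clear the significance bar, and the surviving estimates overstate the truth. The effect size and sample size below are illustrative assumptions, not figures from Ioannidis.

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.3   # modest true difference between groups (illustrative)
N = 20              # per-group sample size: a "small study"
SE = math.sqrt(2 / N)       # std. error of the difference in means
CUTOFF = 1.96 * SE          # roughly the two-sided p < 0.05 cutoff

significant = []
for _ in range(20000):
    group_a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    group_b = [random.gauss(0, 1) for _ in range(N)]
    estimate = statistics.mean(group_a) - statistics.mean(group_b)
    if abs(estimate) > CUTOFF:      # "publish" only if significant
        significant.append(estimate)

# The published estimates cluster well above the true effect of 0.3
print(round(statistics.fmean(significant), 2))
```

A replication sized to the inflated published estimate will be underpowered for the real one, which is exactly why replications should use larger samples than the original.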

Sanjay Basu, a faculty member at both Harvard Medical School and Imperial College London, recommends three approaches:

  1. “Democratize data: While individual health data are and should be private, datasets can be de-identified and shared while ensuring appropriate informed consent and protecting individual privacy. The demand from participants in clinical research studies to make their data available in this way has generated surprising revelations about the results of major drug trials and increased the capacity to make better decisions about health. Data sharing, code sharing, and replication repositories are typically free to use.
  2. “Embrace the null. Null results are much more likely to be true—and are more common—than ‘significant’ results. The excessive focus on publishing positive findings is at odds with the reality of health: that most things we do to improve our health probably don’t work and that it’s useful to know when they don’t. Researchers should focus on how confident they are about their results rather than on whether their results should simply be labeled ‘significant’ or not.
  3. "Be patient. The 19th-century physician William Osler once said that, “The young physician starts life with 20 drugs for each disease, and the old physician ends life with one drug for 20 diseases.” New revelations take time to replicate, and new interventions—particularly new drugs—have safety issues that may become apparent only years after they come on the market. Older therapies may be less effective but may also be most reliably understood. If we demand that new therapies stand the test of time, we offer ourselves the opportunity to be safer as we balance innovation with healthy skepticism.”

p-hacking

David Colquhoun makes the point that using a p-value threshold of 0.05 (the universally accepted bar for presuming you're onto something) means that roughly 30% of the time you are, in fact, incorrect: the "significant" result is a false positive. His recommendation is to "…insist on p≤0.001. And never use the word 'significant'."
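Figures in that range fall out of simple screening arithmetic: if only a minority of the hypotheses a field tests are actually true, far more than 5% of its p < 0.05 results are false. A minimal sketch follows; the prior of 0.10 and power of 0.8 are illustrative assumptions, and Colquhoun's own analysis is more refined.

```python
def false_positive_risk(prior, alpha=0.05, power=0.8):
    """Of all results that clear the p < alpha bar, what fraction are false?
    'prior' is the fraction of tested hypotheses that are actually true."""
    false_pos = alpha * (1 - prior)   # true nulls that fluke past the bar
    true_pos = power * prior          # real effects correctly detected
    return false_pos / (false_pos + true_pos)

# If only 10% of tested hypotheses are real, p < 0.05 misleads often:
print(round(false_positive_risk(0.10), 2))                # 0.36
# Tightening to p <= 0.001 (power held fixed for simplicity) helps a lot:
print(round(false_positive_risk(0.10, alpha=0.001), 2))   # 0.01
```

This is the arithmetic behind his p≤0.001 advice: shrinking alpha starves the pool of flukes while the true positives remain.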

Can Reverend Bayes help us out? Probably.

Some folks have recommended that we shift away from the orthodoxy of p-values altogether and use Bayesian methods instead. Oh, if it were only that easy. Our pal Colquhoun has proposed that the term "significance" no longer be used; that p-values and confidence intervals still be reported; and that, in addition, the risk of a false positive be stated. Matthews suggests that one way around the prior probability problem is to use what's known as the reverse-Bayesian approach. As Colquhoun puts it, "The aim now is to extend the results to a range of p-values, and to present programs (in R), and a web calculator, for calculation of false positive risks, rather than finding them by simulation."
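The reverse-Bayesian flavor of Matthews' suggestion can be illustrated by running simple screening arithmetic backwards: instead of assuming a prior probability and computing the false positive risk, ask what prior would be required to keep that risk acceptably low. This is a simplified sketch of the idea (the power value of 0.8 is an assumption), not Matthews' actual likelihood-ratio method.

```python
def prior_needed(target_fpr, alpha=0.05, power=0.8):
    """What fraction of tested hypotheses must be true for the false
    positive risk to stay at or below target_fpr, given the
    significance threshold alpha and the study's power?"""
    # Solve target = a(1-p) / (a(1-p) + power*p) for the prior p
    a = alpha * (1 - target_fpr)
    return a / (a + target_fpr * power)

# To trust p < 0.05 findings 95% of the time, over half of what a
# field tests would have to be true in the first place:
print(round(prior_needed(0.05), 2))   # 0.54
```

Reasoning backwards from the evidence to the prior it would take makes plain how heavy a lift "p < 0.05" is asking implausible hypotheses to perform.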

Maybe the Reproducibility Problem is really a Falsification Void

Karl Popper is known for his view that falsification, rather than reproducibility, is the sine qua non of scientific understanding: the belief that we can never really "prove" anything, only fail to disprove it. It reminds me of Nassim Nicholas Taleb's point about black swans: they do not exist until the day we see one.

Or, maybe we just need to circle back to our methods…

Feynman elegantly made the case for scientific integrity: "if you’re doing an experiment, you should report everything you think might make it invalid…the idea is to try to give all of the information to help others judge the value of your contribution; not just the information that leads to judgement in one particular direction or another." He also said:

The first principle is that you must not fool yourself—
and you are the easiest person to fool.

Point taken, Richard, point taken.

 #         #         #

If you'd like to learn more or connect, please do at https://DrChrisStout.com. You can follow me on LinkedIn, or find my Tweets as well. Tools and my podcast are available via https://ALifeInFull.org.

If you liked this article, you may also like:

The Reproducibility Problem in Science—Shame on us? (Part 2 in a series of 3)

The Reproducibility Problem—Can Science be Trusted? (Part 1 in a series of 3)

Can AI Really Make Healthcare More Human—and not be creepy?

How to Protect Yourself from Fad Science

Technology Trends in Healthcare and Medicine: Will 2019 Be Different?

Commoditization, Retailization and Something (Much) Worse in Medicine and Healthcare

Fits and Starts: Predicting the (Very) Near Future of Technology and Behavioral Healthcare

Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology

Healthcare Innovation: Are there really Medical Unicorns?

Can (or Should) We Guarantee Medical Outcomes?

A Cure for What Ails Healthcare's Benchmarking Ills?

Why Global Health Matters

Can A Blockchain Approach Cure Healthcare Security's Ills?

Why Medicine is Poised for a (Big) Change

Is This the Future of Medicine? (Part 5)

Bringing Evidence into Practice, In a Big Way (Part 4)

Can Big Data Make Medicine Better? (Part 3)

Building Better Healthcare (Part 2)

Is Technology the Cure for Medicine’s Ills? (Part 1)

Access to Healthcare is a US Problem, Too

