Don't Trust the Science
Treston Wheat, PhD
Geopolitical Risk | Security Expert | Professor | Strategic Intelligence | Policy Wonk Extraordinaire
During recent political debates, especially around the pandemic and climate change, the phrase “trust the science” became a commonplace justification for policy choices. Setting aside the fundamental epistemological problem of “trusting” science, the statement encapsulates a particular kind of appeal to experts, even when those experts advance nakedly partisan talking points not rooted in strong, methodologically sound research. Claims to expertise in public health or gender or climatology are used to supersede policy debates about the tradeoffs of costs and benefits. The truth, however, is that a significant amount of academic research suffers from cognitive biases, methodological errors and inconsistencies, and replication problems. Analysts therefore need a healthy skepticism toward academic research, because in truth one should never simply “trust the science.”
Producing the Literature
When people use the phrase “the science,” what they are referring to is the cumulative academic literature and research reports concerning a specific topic. Expertise is derived from a careful study of, and contributions to, that literature and research. However, the very process by which the literature is produced should be the starting point for skepticism when experts make claims. Those who have worked in academia are all too familiar with this process, but for those who have not gone through it, a description is necessary to understand its limitations.
Academic literature starts with professors or researchers choosing a topic to study. Depending on the depth and complexity of the research, they will have to gain access to grant money to complete the project. Grants, though, are typically awarded to Zeitgeist issues rather than to topics of general interest or greatest importance. Instead, funding follows what grant-making institutions or governments are most interested in, severely limiting the research produced. After the research is done and the article is written, the professor or researcher submits it to a peer-reviewed journal. Although journals have reviewers assess articles blindly (meaning the reviewer does not know who wrote the article), reviewers impose their own research preferences onto submissions and will turn down compelling research because the writer did not cite the articles the reviewer wanted or used a different methodology. In addition, as with grants, researchers must choose topics and approaches that journals desire, meaning that excellent research not aligned with what journals or reviewers prefer is typically rejected out of hand.
The barriers for research to be published should immediately raise skepticism because gatekeepers choose which research gets funded and published. Some brilliant researchers have failed to get their research funded or papers distributed due to these issues.
Replication Crisis
A significant amount of research in both the social sciences and natural sciences faces a replication problem that has devolved into a crisis. This means that the results of a study cannot be replicated (repeated using the same data and/or methods) by other researchers. An assessment in Nature provides some disappointing results in a survey of 1,576 researchers. According to the survey, 70% of researchers had tried and failed to reproduce the research of another scientist, while more than 50% had failed at reproducing their own research. Interestingly, almost a third of respondents still thought the literature should be trusted even while acknowledging the crisis.
Various studies have continued to show a problem. In 2015, a study tried to replicate 100 psychology articles, but only 39 of them could be replicated. In 2018, another study attempted to replicate 28 research articles but could only replicate 14 of them. That same year Nature Human Behaviour tried to replicate 21 studies from Nature and Science but could only replicate 13 of them.
When it comes to citations, one study looked at which kinds of papers are more likely to be cited by other researchers, and it found that papers that could not be replicated were cited far more often (153 times more) than replicable papers. In addition, a large part of academic research is never cited in other literature at all. For example, a study of publications from 1990 to 2007 showed that during that period 12% of medical articles, 27% of natural science articles, 32% of social science articles, and 82% of humanities articles had not been cited five years after publication.
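The replication figures above can be sanity-checked with simple arithmetic. This sketch just recomputes the replication rates from the counts given in the text (the counts come from the article, not from the underlying papers):

```python
# Replication rates implied by the studies cited above (replicated / attempted).
# Counts are taken from the article text, not recomputed from the source papers.
studies = {
    "2015 psychology replication project": (39, 100),
    "2018 social-science replication": (14, 28),
    "2018 Nature Human Behaviour replication": (13, 21),
}

for name, (replicated, attempted) in studies.items():
    rate = replicated / attempted
    print(f"{name}: {replicated}/{attempted} = {rate:.0%}")
```

Across all three efforts, barely more than half of the attempted replications succeeded, which is the quantitative core of the author's point.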
The issue is that just because articles are published does not mean good science was done, and analysts should consider whether novel research studies actually shift paradigms or add knowledge. Crucially, idiosyncratic choices by researchers can contribute to this problem. In 2021, a study looked at that very problem; the study had 161 researchers in 73 research teams use the same data and try to answer the same question. Their conclusion is worth quoting at length:
“We find excessive variation of outcomes. When combined, the 107 observed research decisions taken across teams explained at most 2.6% of the total variance in effect sizes and 10% of the deviance in subjective conclusions. Expertise, prior beliefs and attitudes of the researchers explain even less. Each model deployed to test the hypothesis was unique, which highlights a vast universe of research design variability that is normally hidden from view and suggests humility when presenting and interpreting scientific findings.”
Political Science Isn’t a Real Science
Political science as a discipline is one of the most egregious violators of sound research, which matters because so much security analysis is based on the discipline’s research. Not only does a significant amount of political science research suffer from endogeneity problems and rest on a deeply flawed epistemology; even the basic data used in political science have problems. International relations research assesses the likelihood of war based on a number of variables, but the definition used for war is anachronistic. The Correlates of War (COW) project codes an interstate war as a conflict producing at least 1,000 battlefield deaths; conflicts below that threshold are relegated to the broader category of “militarized interstate disputes.” However, this completely neglects basic changes in modernity, such as the invention of penicillin, which saves lives that previously would have been lost. Researchers have failed to make the definition time dependent. The Falklands War, for example, is not counted as a war because it produced fewer than 1,000 battlefield deaths. As such, researchers cannot accurately assess conflict if they exclude cases that fail to meet an arbitrary and anachronistic definition.
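The coding rule the author objects to is easy to make concrete. This minimal sketch applies a fixed 1,000-death cutoff to a toy conflict list; the Falklands figure (roughly 900 battle deaths) is widely reported, and the other two entries are hypothetical placeholders, not COW data:

```python
# A fixed battle-death threshold, as in the COW coding rule described above.
THRESHOLD = 1000

# Illustrative conflict list. Only the approximate Falklands figure is real;
# conflicts A and B are hypothetical.
conflicts = {
    "Falklands War (1982)": 907,
    "Hypothetical conflict A": 1500,
    "Hypothetical conflict B": 400,
}

# Conflicts at or above the threshold get coded as "wars"; everything else
# falls out of the war dataset entirely.
wars = {name: deaths for name, deaths in conflicts.items() if deaths >= THRESHOLD}
print(sorted(wars))
```

The Falklands drops out of the coded sample despite being, by any ordinary understanding, a war, which is exactly the exclusion the author criticizes.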
Another major problem with such datasets is that the research is based on dyads of all nation-states. This means the COW data look at every pair of countries to determine whether they have gone to war or not. Basic logic shows how nonsensical this is. Why are Estonia and Lesotho in a dyadic relationship? The dyad of those two countries not going to war in any particular year dilutes the data until it is practically meaningless. And that is only one dataset with fundamental problems; others include the Polity IV data series and V-Dem. Before analysts cite such research, they need to understand how the research takes place, including how terms are defined and data are codified.
Developing a Healthy Skepticism
What is healthy skepticism? Healthy skepticism is questioning research and conclusions not merely to question them but because said research might have methodological problems or obvious cognitive biases. It is recognizing that even well-designed structures or excellent research can still have unintended problems. It is also a recognition that truth exists external to the self, and that the scientific process is an effective tool to discover that truth while only being one of many tools available. Healthy skepticism is not blanket contrarianism or a rejection of research because it goes against one’s politics.
All of the problems described above should not lead one to devolve into rampant conspiracism or a rejection of all academic research. What analysts need to take from this is to develop a healthy skepticism whenever they read the academic literature or see it cited. Random online commentators and conspiracy theorists erroneously believe they can just “do their own research,” which really means reading other conspiracy theorists’ rantings devoid of appropriate historical context or any understanding of confounding variables. Conspiracy theories also arise because the average person assumes experts are far more knowledgeable than they actually are, and does not realize how limited experts can be in their ability to think critically and broadly. On that assumption, if diabetes has not been cured already, it must be because the researchers (or their corporate backers) do not want it cured, rather than because researchers simply are not clever enough or have not thought outside the box, as happens with a great many problems.
Analysts need to take a rationalist and realistic approach when it comes to academic research. They should absolutely engage with the literature, but they need a healthy skepticism and not to accept it without question. That means analysts need strong methodological training in order to parse out the relevant problems in a research article before citing it in a report.
Ergo… don’t trust the science (or anyone else for that matter).
Post Script: If you're interested in getting this newsletter through email instead of LinkedIn, please see here.