Is bias research biased?
Boyd Baumgartner
Latent Fingerprint Examiner @ King County, WA | AI Software Development Enthusiast
The question “Is bias research biased?” addresses a fundamental challenge within forensic science and related fields. Bias research often hinges on the claim that "everyone thinks they aren’t biased," a statement that risks creating a logical trap in which both denial and admission of bias are interpreted as evidence of its existence. This circular reasoning introduces an epistemological problem, because the concept becomes unfalsifiable. Bias research frequently aims to infer bias retrospectively by identifying inconsistencies in conclusions, suggesting a correlation between observed variance and biased reasoning. Without a clear causal mechanism, however, these interpretations risk conflating unrelated variables and relying on ambiguous definitions, such as “cognitive bias,” which often lack precision. This ambiguity weakens the validity of bias claims and leaves them vulnerable to substantive criticism.
Challenging these claims further, the broader replication crisis plaguing scientific fields underscores how bias research might overstate its conclusions. Issues such as small sample sizes, selective statistical practices, and the use of inexperienced test groups inflate the significance of findings, drawing parallels to shortcomings across science more broadly. Additionally, proposed solutions like Linear Sequential Unmasking (LSU) lack empirical support in real-world forensic applications, raising doubts about their practicality. Studies show that the effects predicted by LSU are not consistently observed, revealing the limitations of implementing policy on uncertain evidence. The complexity of forensic examinations makes isolating variables difficult, which calls the external validity of findings into question. This tension between internal control and real-world applicability suggests that the credibility of bias research is contingent on rigorous methodologies and clear definitions. Ultimately, skepticism toward broad claims of bias reflects a philosophical critique of overstated findings and an emphasis on maintaining scientific integrity through clarity and reproducibility.
This video, based on a presentation I gave at the Indiana Division of the International Association for Identification, explores these topics and more in depth.
Latent Print and Crime Scene expert
In the first highly publicized bias study, my immediate impression was that the six subjects were subjected to extreme bias by the researchers: before they were shown the prints, they were told that the comparison had been accepted in the broad fingerprint community as an erroneous identification. A small study that Glen Langenburg and I conducted with three test groups at an IAI conference, followed by an additional test group of college forensics students, indicated that proficient latent print examiners will push back strongly if you try to bias them. "Risk averse," as you point out in your talk. And while you can show me anecdotal cases of biased identifications, I have seen no real evidence that bias is a significant factor in the overall practice of latent print comparisons. The greater problem is weak examiners who can be pressured into claiming an identification they don't believe in the first place. That was a bigger issue in the old days, when a detective could come back into the ID Unit and stand over the ID Tech's shoulder urging the identification while the ID Tech was doing the comparison. My personal opinion is that the "problem" of bias is terribly overblown by researchers with their own agendas.