'Bad' Results, Good Studies: Perverse Incentives and Sound Research.

Francesca Gino, a distinguished behavioral scientist at Harvard University, has been on administrative leave since June following allegations of academic fraud. Her renowned research on rule-breaking behavior, featured in her book "Rebel Talent," is now under suspicion after several studies associated with her were scrutinized for containing falsified data. Such a compelling irony naturally caught the attention of major press outlets like The New York Times.

These allegations not only damage Gino's reputation as a researcher but also erode public confidence in the integrity of the scientific process, casting a shadow over scientific inquiry and research in general. They underscore the urgency of more robust oversight mechanisms and greater transparency within academic institutions, and they raise questions about the motivations and incentives that could lead a prestigious Harvard professor to tamper with data. They also prompt us to ask whether the market research industry is safe from such practices.


The Dishonesty Expert Is (Allegedly) Dishonest Herself.

The whistleblower was the data investigation blog “Data Colada”, led by professors Uri Simonsohn (ESADE), Leif D. Nelson (Berkeley), and Joseph P. Simmons (UPenn), which in a four-part series of posts detailed multiple instances of academic misconduct. As a result, Harvard moved to retract papers authored by Gino, including “Evil Genius? How Dishonesty Can Lead to Greater Creativity” and “The Moral Virtue of Authenticity: How Inauthenticity Produces Feelings of Immorality and Impurity”.

One of the most striking cases of alleged data tampering is referred to by the whistleblowers as “My Year School is Harvard.” This study examined how desirable subjects found cleansing products to be, depending on how emotionally "dirty" they felt. The study was conducted as follows:

  • Almost 500 Harvard students completed an online survey in which they were asked to express an opinion about a rather controversial Harvard campus issue and to provide some demographic information. They were then randomly assigned to write an essay about that issue arguing either for or against their own side. After writing the essay, participants rated from 1 to 7 how desirable they found five cleansing products to be. Gino predicted that arguing against your own side would make participants feel “dirty”, which would increase their desire for cleansing products.

The anomaly in this dataset involves how certain students answered the demographic question "Year in School," which typically expects answers like "freshman," "sophomore," and so on. The data showed, however, that some students gave a different response, "Harvard," without specifying their academic year. Interestingly, precisely these respondents showed a dramatically stronger link between feeling dirty and desiring cleansing products than the rest of the sample. Data Colada’s professors pointed out that these cases "were altered to produce the desired effect (...), and if these observations were altered, then it is reasonable to suspect that other observations were altered as well."
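The kind of check behind that finding can be sketched in a few lines: split the rows on the anomalous demographic answer and compare the condition effect in each subset. The data below is simulated purely for illustration; the field names, group sizes, and effect sizes are assumptions, not Gino's actual data.

```python
import random
import statistics

random.seed(0)

# Hypothetical rows mimicking the study's structure: a "Year in School"
# answer, a condition flag, and a 1-7 cleansing-product desirability rating.
def make_row(year, against_own_side, shift):
    base = random.gauss(4.0, 1.0)
    rating = base + (shift if against_own_side else 0.0)
    return {"year": year, "against": against_own_side,
            "rating": max(1.0, min(7.0, rating))}

rows = (
    # "Clean" rows: a plausible, modest condition effect.
    [make_row("freshman", i % 2 == 0, 0.3) for i in range(480)]
    # Anomalous rows: answered "Harvard" and carry an outsized effect.
    + [make_row("Harvard", i % 2 == 0, 3.0) for i in range(20)]
)

def condition_gap(subset):
    """Mean rating difference: against-own-side minus own-side."""
    a = [r["rating"] for r in subset if r["against"]]
    b = [r["rating"] for r in subset if not r["against"]]
    return statistics.mean(a) - statistics.mean(b)

suspicious = [r for r in rows if r["year"] == "Harvard"]
clean = [r for r in rows if r["year"] != "Harvard"]

print(f"effect in 'Harvard' rows: {condition_gap(suspicious):.2f}")
print(f"effect in other rows:     {condition_gap(clean):.2f}")
```

When a handful of rows carries several times the effect of the other ~480, that concentration itself is the red flag, independent of what the overall average says.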

As clumsy as it may sound, Harvard’s retraction effort seems to corroborate their thesis. Data Colada's professors added: "We understand that Harvard had access to much more information than we did, including, where applicable, the original data collected using Qualtrics survey software. If the fraud was carried out by collecting real data on Qualtrics and then altering the downloaded data files, as is likely to be the case for three of these papers, then the original Qualtrics files would provide airtight evidence of fraud. (Conversely, if our concerns were misguided, then those files would provide airtight evidence that they were misguided.)"

It is important to mention that Gino has strongly refuted the allegations and has initiated legal proceedings against Harvard University and members of the Data Colada group. She contends, "While claiming to stand for process excellence, they reached outrageous conclusions based entirely on inference, assumption, and implausible leaps of logic."


Replicability Crisis in Scientific Research

Gino's alleged use of fake data casts doubt on the validity of her research. Behavioral scientists claim that minor "nudges" or slight modifications can change people's behaviors and choices. However, the scientific community has been grappling with a "Replicability Crisis" wherein subsequent studies fail to replicate the results of influential studies. This raises concerns that the effects of these purported "nudges" may not be as significant as initially proclaimed.

In 2015, a large-scale replication attempt on 100 psychology papers succeeded in reproducing the results of only 39 of them, highlighting how difficult consistent findings are to achieve. It is crucial to note that failure to replicate previous findings can result from factors other than intentional fraud: unintentional errors and discrepancies in research conditions can also contribute. Nor is the issue confined to a single field; even in biomedical research there have been instances of unsound research, and the recent resignation of Stanford's president serves as a stark reminder of the need for robust and trustworthy scientific inquiry.
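One innocent reason replications fail is low statistical power: if a true effect is modest and samples are small, an honest replication misses significance much of the time. A minimal simulation, assuming a true effect of 0.4 standard deviations and 20 subjects per group (both numbers are illustrative, not drawn from the 2015 project):

```python
import random
import statistics

random.seed(1)

def study(n=20, true_effect=0.4):
    """Run one two-group study; return True if it reaches ~p < 0.05.

    Uses a crude normal-approximation z test: with a modest n and a
    modest true effect, many honest studies miss significance by chance.
    """
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n
          + statistics.variance(treated) / n) ** 0.5
    return abs(diff / se) > 1.96  # two-sided ~5% threshold

replications = [study() for _ in range(1000)]
power = sum(replications) / len(replications)
print(f"share of 'replications' reaching significance: {power:.0%}")
```

Under these assumptions only roughly a quarter of honest replications "succeed", so a low replication rate is evidence of a problem, but not by itself evidence of fraud.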


Implications of Unsound Research in Market Research

While the market research industry has implemented various measures to uphold ethical standards, it remains vulnerable to unsound research and fraudulent practices. Consultants and market researchers face pressure to secure clients, stay competitive, and deliver favorable results that meet expectations. In fact, the fundamental issues academics cite as drivers of unsound research resonate deeply with market research practitioners:

  • Pressure to rapidly publish groundbreaking research: Scholars contend that even well-meaning researchers may succumb to the temptation of stretching their findings or re-running experiments until they achieve the desired outcome (referred to as "p-hacking"). Emotional attachment to their hypotheses can unconsciously influence researchers' data interpretation and analysis, resulting in selective reporting or data manipulation that aligns with their initial beliefs.
  • Lack of enthusiasm for rigorous quality checks: Peer review stands as a pivotal process in academia, involving experts in the field critically evaluating the quality and validity of research before its publication. However, a concerning issue arises from the diminished prestige and insufficient funding available for rigorously scrutinizing someone else's work. Consequently, detecting practices like p-hacking becomes a demanding endeavor. This imbalance in incentives may inadvertently contribute to a gap in the thoroughness of evaluations and allow unsound research practices to persist within the academic community.
  • Challenges in accurately assessing cutting-edge knowledge: The file drawer problem, also referred to as publication bias, arises when researchers and journals favor publishing studies with positive or significant results, while studies with negative or non-significant findings often go unpublished. This selective reporting distorts the perceived prevalence of certain phenomena, leading to an overestimation of effects and discoveries and ultimately undermining the integrity of scientific investigation. In some fields, the published record can make it seem that little remains to be learned, when in reality the opposite may be true.
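The p-hacking mechanism in the first bullet is easy to demonstrate. The simulation below, a sketch under assumed parameters (30 subjects per group collected in batches, a crude normal-approximation test), models one common variant, "optional stopping": testing after every few new subjects and stopping the moment the result is significant, even though the true effect is exactly zero.

```python
import random
import statistics

random.seed(2)

def p_hacked_result(n=30, peeks=10):
    """Simulate optional stopping: test after each batch of new subjects
    and stop as soon as ~p < 0.05, even though the true effect is zero."""
    control, treated = [], []
    for _ in range(peeks):
        control += [random.gauss(0, 1) for _ in range(n // peeks)]
        treated += [random.gauss(0, 1) for _ in range(n // peeks)]
        if len(control) < 3:  # need a few points to estimate variance
            continue
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(control) / len(control)
              + statistics.variance(treated) / len(treated)) ** 0.5
        if abs(diff / se) > 1.96:  # ~p < 0.05, normal approximation
            return True  # "significant" -- stop and report
    return False

false_positives = sum(p_hacked_result() for _ in range(2000)) / 2000
print(f"false-positive rate with peeking: {false_positives:.0%}")  # well above 5%
```

A researcher who runs each test honestly but gets to choose *when* to stop has silently traded the nominal 5% error rate for something several times larger, which is exactly why pre-registered stopping rules matter.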

In conclusion, securing academic research grants often depends on publishing in reputable journals, which creates perverse incentives. While market research has proprietary aspects that differentiate it from academia (where communalism should be the norm), it shares similar biases, particularly the inclination towards positive results and the existence of perverse incentives.


Extraordinary claims require extraordinary evidence

Adhering to rigorous methodologies, ensuring data accuracy, and maintaining transparency in reporting are essential for establishing trust with clients and the wider public, whether in market research or scientific studies. However, the existence of perverse incentives necessitates continuous efforts to address unreliable studies that may perplex rather than enlighten clients and the public.

As proposed by Stuart J. Ritchie, author of "Science Fictions: Exposing Fraud, Bias, Negligence, and Hype in Science," one potential solution could involve mandating researchers to pre-register their work, thereby reducing the temptation to manipulate conclusions to fit the results; publishing worthiness should be based on methodology and scientific value, regardless of the direction of the outcomes. Additionally, leveraging technology to identify and rectify common errors that infiltrate and skew statistical analysis could be another effective approach.
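The "leveraging technology" idea already has concrete instances; one well-known example is the GRIM test (Brown and Heathers, 2016), which checks whether a reported mean is even arithmetically possible given the sample size when responses are integers, as on a 1-7 Likert scale. A minimal sketch (the scale bounds and example numbers are illustrative):

```python
def grim_consistent(reported_mean, n, max_item=7, decimals=2):
    """GRIM test: for n integer-valued responses on a 1..max_item scale,
    some integer total must round to the reported mean.
    Returns True if the reported mean is arithmetically possible."""
    target = round(reported_mean, decimals)
    return any(round(total / n, decimals) == target
               for total in range(n, max_item * n + 1))

# With n = 25, every achievable mean is a multiple of 1/25 = 0.04:
print(grim_consistent(3.48, 25))  # True  (3.48 * 25 = 87, an integer total)
print(grim_consistent(3.45, 25))  # False (3.45 * 25 = 86.25: impossible)
```

Checks like this can be run automatically over every mean in a published table, flagging impossible values long before anyone requests the raw data.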

Providing data access and facilitating replication of results can enhance the credibility of research and help detect potential problems. As famously stated by Carl Sagan, "extraordinary claims require extraordinary evidence"; therefore, being vigilant about warning signs like findings contradicting existing research or sensationalized results can help identify instances of unsound research, misapplication of statistical techniques, or even instances of fabricated data and unethical conduct.


Update September 2023: Gino launches the website "Breaking My Silence", where she presents her own investigation and claims that HBS used the wrong dataset in its analysis.

Update November 2023: Seven anonymous faculty members have raised concerns about HBS' adherence to fair and transparent investigative processes; they argue that introducing a new policy that imposed restrictive conditions on Gino's ability to defend herself, suggests bias and a departure from established norms.

Update March 2024: An internal 1,300-page report from the Harvard Business School revealed that the faculty committee that led the investigation found Gino responsible for the alleged misconduct and recommended her termination.

Update July 2024: A post by Data Colada compares the original and posted data versions of Study 3A of Gino, Kouchaki, and Casciaro (2020), showing how the data was altered to produce the published result.

Update October 2024: Judge rules that the First Amendment protects Data Colada bloggers sued by Francesca Gino.
