Lies, Damned Lies, and Statistics

"Lies, damned lies, and statistics" is a phrase describing the persuasive power of statistics to bolster weak arguments.

It is also sometimes used to cast doubt on statistics cited to prove an opponent's point.

Last night I watched a startup pitch. The founders presented a slide of statistics showing how effectively their solution addressed a particular problem, and the first thing that came to my mind was exactly this phrase.

In statistics, there are several techniques (sometimes referred to as "tricks") that can be used to manipulate data or present results in a way that supports a particular point of view.

While these methods can be used for legitimate analysis, they can also be misused to mislead or deceive.

When you validate a business case or an investment opportunity, you should be aware of these tricks, which is why I have collected the most common ones for you.

1. Cherry-Picking Data

Selecting only the data that supports a particular conclusion while ignoring data that contradicts it.

Example: A study might report only the time periods where a particular stock performed well, ignoring periods of poor performance.
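To see how this plays out, here is a minimal sketch in Python with a made-up price series: a stock that loses money overall still contains short windows that look spectacular in isolation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stock: slight downward drift over ~5 years of trading days.
daily_returns = rng.normal(-0.0002, 0.01, 1250)
prices = 100 * np.cumprod(1 + daily_returns)

# Honest number: performance over the full period.
print(f"Full period: {prices[-1] / prices[0] - 1:+.1%}")

# Cherry-picked number: scan every 90-day window and report only the best.
best = max(range(len(prices) - 90), key=lambda i: prices[i + 90] / prices[i])
print(f"Best 90-day window: {prices[best + 90] / prices[best] - 1:+.1%}")
```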

2. P-Hacking

Manipulating data or testing multiple hypotheses until a statistically significant result is found, often by increasing the number of tests without proper correction.

Example: Running many different statistical tests on a dataset and only reporting the ones that give a p-value below 0.05.
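You can watch p-hacking happen in a few lines of Python (assuming NumPy and SciPy): run twenty t-tests on pure noise and, on average, about one of them will come back "significant" at the 0.05 level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 20 independent t-tests on pure noise: there is no real effect anywhere.
hits = []
for i in range(20):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)  # drawn from the same distribution as a
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        hits.append((i, round(p, 3)))

# Reporting only these "hits" is p-hacking: with 20 tests at alpha = 0.05,
# roughly one false positive is expected by chance alone.
print(hits)
```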

3. Misleading Graphs

Presenting data in a graph with a misleading scale, axis manipulation, or selective data points to exaggerate or downplay trends.

Example: Using a y-axis that starts at a non-zero value to exaggerate differences between groups.
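The trick is easy to reproduce with matplotlib and two made-up numbers: the same bars, drawn once with the y-axis starting at zero and once truncated.

```python
import matplotlib.pyplot as plt

groups = ["A", "B"]
values = [50.0, 51.5]  # hypothetical: a 3% difference

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(groups, values)
honest.set_ylim(0, 60)           # axis starts at zero: bars look similar
honest.set_title("Honest scale")

misleading.bar(groups, values)
misleading.set_ylim(49.5, 52)    # truncated axis: B towers over A
misleading.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```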

4. Overgeneralization

Drawing broad conclusions from a small or unrepresentative sample.

Example: Conducting a survey in one city and generalizing the results to the entire country.
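A toy simulation with hypothetical numbers shows the size of the error: if the surveyed city holds 10% of the population and differs sharply from the rest, a city-only survey misses the national picture completely.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical country: the surveyed city (10% of the population) supports
# a policy at 70%, while the rest of the country supports it at 40%.
city = rng.random(100_000) < 0.70
rest = rng.random(900_000) < 0.40
country = np.concatenate([city, rest])

print(f"True national support:   {country.mean():.0%}")  # ~43%
print(f"Survey of the city only: {city.mean():.0%}")     # ~70%
```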

5. Omitting the Baseline

Failing to provide a baseline or control group for comparison, making the results seem more significant than they are.

Example: Reporting that a treatment led to a 50% improvement without mentioning that a placebo led to a 45% improvement.
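Using the numbers from the example above, the arithmetic makes the deception obvious:

```python
treatment_improvement = 0.50  # 50% of treated patients improved
placebo_improvement = 0.45    # 45% improved with no treatment at all

# Without the baseline, "50% improvement" sounds impressive.
# With it, the treatment's real contribution is 5 percentage points.
effect_over_baseline = treatment_improvement - placebo_improvement
print(f"Effect over placebo: {effect_over_baseline:.0%}")  # 5%
```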

6. Selective Reporting of Outcomes

Reporting only positive outcomes while ignoring negative or neutral results.

Example: A drug trial that only reports the successful outcomes while ignoring cases where the drug had no effect or caused harm.

7. Data Dredging

Analyzing large volumes of data in search of any statistically significant relationship, often without a prior hypothesis.

Example: Examining multiple variables in a dataset until any two variables show a correlation, then presenting this as meaningful without further validation.
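Here is a minimal dredging sketch in Python: generate 50 columns of pure noise, test all 1,225 pairs, and the "best" correlation will look respectable despite meaning nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 columns of pure noise: no variable is genuinely related to any other.
data = rng.normal(size=(100, 50))
corr = np.corrcoef(data, rowvar=False)

# Dredge: search every pair for the strongest "relationship".
best, pair = 0.0, None
for i in range(50):
    for j in range(i + 1, 50):
        if abs(corr[i, j]) > abs(best):
            best, pair = corr[i, j], (i, j)

# With 1,225 pairs, a correlation around |r| = 0.3 shows up by chance alone.
print(f"Columns {pair}: r = {best:.2f}")
```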

8. Ignoring Confounding Variables

Failing to account for variables that could influence the results, leading to spurious conclusions.

Example: Claiming that ice cream sales cause drowning deaths without accounting for the confounding variable of temperature (both increase during summer).
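A short simulation (with made-up coefficients) reproduces the classic example: let temperature drive both variables, and a strong correlation appears that vanishes once you control for it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Temperature is the hidden common cause of both variables.
temperature = rng.uniform(10, 35, 365)                       # daily highs, C
ice_cream_sales = 20 * temperature + rng.normal(0, 50, 365)
drownings = 0.3 * temperature + rng.normal(0, 2, 365)

# The raw correlation looks alarming...
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"sales vs. drownings: r = {r:.2f}")

# ...but after regressing temperature out of both (a crude partial
# correlation via residuals), the "relationship" disappears.
fit = lambda y: np.poly1d(np.polyfit(temperature, y, 1))(temperature)
r_partial = np.corrcoef(ice_cream_sales - fit(ice_cream_sales),
                        drownings - fit(drownings))[0, 1]
print(f"controlling for temperature: r = {r_partial:.2f}")
```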

9. Manipulating the Sample Size

Choosing a sample size that is too small to detect a real effect, or so large that even trivial effects become statistically significant.

Example: Conducting a survey with only a few participants and claiming the results are representative of the entire population.
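Both failure modes are easy to demonstrate (again assuming NumPy and SciPy): give two groups a trivial 0.02-standard-deviation difference, and the verdict flips from "nothing there" to "highly significant" purely as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# A tiny true effect: the means differ by 0.02 standard deviations.
for n in (30, 1_000_000):
    a = rng.normal(0.00, 1, n)
    b = rng.normal(0.02, 1, n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9}: p = {p:.4f}")

# With n = 30 the effect is invisible; with n = 1,000,000 it is highly
# "significant" even though it remains practically meaningless.
```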

10. Misinterpreting Statistical Significance

Confusing statistical significance with practical significance or misrepresenting what a p-value actually indicates.

Example: Claiming that a treatment is effective based on a p-value below 0.05 without discussing the actual effect size or its practical implications.
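A sketch of the distinction, with hypothetical data: a huge sample makes a 0.1-point difference statistically significant, while the effect size shows it is trivial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Large samples, almost no real difference between the means.
control = rng.normal(100.0, 15, 500_000)
treated = rng.normal(100.1, 15, 500_000)

_, p = stats.ttest_ind(control, treated)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (control.var(ddof=1) + treated.var(ddof=1)) / 2
)

# p < 0.05 only says the difference is unlikely to be pure chance;
# it says nothing about whether a 0.1-point gain matters in practice.
print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")  # d of ~0.007: negligible
```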

11. Simpson's Paradox

Aggregating data without considering subgroups, which can lead to contradictory conclusions when the data is disaggregated.

Example: A treatment might seem effective in the overall population but ineffective or even harmful when broken down by specific demographic groups.
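Counts patterned on the classic kidney-stone study make the paradox concrete: the treatment wins inside every subgroup yet loses in the pooled data, because it was given mostly to the severe cases.

```python
# Textbook-style counts: (recovered, total) per arm and severity subgroup.
groups = {
    "mild":   {"treated": (81, 87),   "control": (234, 270)},
    "severe": {"treated": (192, 263), "control": (55, 80)},
}

pooled = {"treated": [0, 0], "control": [0, 0]}
for severity, arms in groups.items():
    for arm, (recovered, total) in arms.items():
        pooled[arm][0] += recovered
        pooled[arm][1] += total
        print(f"{severity:>6} {arm:>7}: {recovered / total:.0%}")

# Treated wins both subgroups (93% vs 87%, 73% vs 69%)...
for arm, (recovered, total) in pooled.items():
    print(f"pooled {arm:>7}: {recovered / total:.0%}")
# ...yet loses overall (78% vs 83%), because it was assigned
# disproportionately to the harder, severe cases.
```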

12. Non-Comparative Metrics

Presenting data without proper context, such as not comparing it to a relevant benchmark.

Example: Reporting that a company’s profits increased by 20% without mentioning that its competitors’ profits increased by 50%.

13. Double Dipping

Using the same data twice in a way that inflates the significance of the findings.

Example: Reporting an outcome as both a primary and secondary result, thus artificially increasing the perceived importance of the data.

14. Using Relative vs. Absolute Risk

Emphasizing relative risk instead of absolute risk to make a finding seem more significant.

Example: Saying a drug reduces the risk of disease by 50% (relative risk) when the absolute risk reduction is from 2% to 1%.
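The example's arithmetic, spelled out in a few lines:

```python
baseline_risk = 0.02  # 2% of untreated people develop the disease
treated_risk = 0.01   # 1% of treated people do

relative_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_reduction = baseline_risk - treated_risk
nnt = 1 / absolute_reduction  # number needed to treat to prevent one case

print(f"Relative risk reduction: {relative_reduction:.0%}")  # 50%, the headline
print(f"Absolute risk reduction: {absolute_reduction:.0%}")  # 1%, the reality
print(f"Number needed to treat:  {nnt:.0f}")                 # 100 people
```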

Some of these techniques have legitimate uses in careful analysis, but all of them become deceptive when applied without care and transparency.

Ethical statistical practice involves full disclosure of methods, careful interpretation of results, and avoiding the intentional misuse of these tricks.

