What types of meta-analyses SHOULD NOT be used as scientific evidence
Image credit: Adobe stock #445149207

It is common to use published #systematicreviews and #metaanalyses for scientific communication with HCPs. This is particularly true for older molecules: with few new randomized studies being conducted, systematic reviews and meta-analyses are often used as evidence that an older drug's efficacy matches that of newer drugs, or that it is better suited to certain patient profiles. However, #medicalaffairs and #marketing teams should run a few checks before using these publications as #scientificevidence.

While there is a long list of items that determine the robustness of a meta-analysis, many of them are statistical and fall outside the domain of medical affairs and marketing teams, and of most HCPs. Even so, do not assume that HCPs cannot tell a good meta-analysis from a poor one and will readily be convinced by whatever new evidence is shared with them. At least at the level of #KOLs, if not all HCPs, many can separate the wheat from the chaff, so some basic level of checking is necessary before such evidence is used. Skipping it puts your credibility at risk in the eyes of those KOLs and HCPs. Below are some red flags that can be easily checked.

  1. Type of studies included: Any systematic review and meta-analysis will list its included studies in a table. Check whether most, if not all, are randomized controlled trials (RCTs). I once came across a meta-analysis in which 9 of the 11 included studies were RCTs, but those RCTs all had very small sample sizes, while the two non-RCTs had large sample sizes and accounted for 80% of the total pooled population. In other words, 80% of the pooled sample came from non-RCTs, a clear red flag, and I would not use such a publication in my scientific communication. If most included studies do not state their design (i.e., whether they were RCTs or not), that is also a red flag.
  2. 'Comparing apples to oranges' - high heterogeneity between included studies: A meta-analysis that does not report heterogeneity (indicated by the I² statistic, read as 'I-squared') should not be used. Heterogeneity indicates how dissimilar the included studies are. An example is a meta-analysis comparing the efficacy of various statins in preventing cardiac events that mixes studies of patients with no history of cardiac events with studies of patients with known cardiac events, and then presents a combined result for one statin over another. As a rule of thumb, an I² between 30% and 60% indicates moderate heterogeneity, and below 30% indicates low heterogeneity. If statistical heterogeneity is severe (I² > 60%), the meta-analysis should be abandoned.
  3. Keywords used in the search: Do the search keywords match the stated objective of the meta-analysis? I once came across a meta-analysis comparing the efficacy of product A with that of product B, yet the keyword search did not include product B at all. The authors included only head-to-head studies comparing products A and B, and those studies used different formulations of both products, which is known to affect efficacy. Thus, any trial studying product B alone, or a specific formulation of product B, was excluded, and likewise for product A. Even among the included studies, the two with large sample sizes showed product A to be superior, while the small ones favoured product B. Identical formulations of the two products, which should have been the basis of comparison, were never compared at all. The meta-analysis nevertheless concluded that B was superior.
  4. Publication bias: Selective publication of positive studies and exclusion of negative ones results in publication bias, and the example above also illustrates it. Moreover, when only a few studies are included, a funnel plot, the standard tool for assessing publication bias, cannot be used meaningfully.

  5. Too few studies, or studies with small sample sizes: While there is no standard definition of a small sample size, the disease the meta-analysis addresses offers a rationale. For an old drug like atenolol, a meta-analysis of its effect on hypertension that includes studies with sample sizes of only up to 100 would be inadequate, given how prevalent hypertension is. In a meta-analysis of a drug for uncommon cancers, however, even studies with sample sizes of up to 50 might be acceptable. The same holds for the number of studies included: depending on the therapy area, the search criteria need to ensure that the number of studies passing selection is not too few.
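The I² check in point 2 can be computed directly from each study's effect estimate and sampling variance, using the standard formulas: Cochran's Q measures the weighted spread of study effects around the fixed-effect pooled mean, and I² = max(0, (Q − df)/Q) × 100 is the share of that spread beyond what chance alone would explain. A minimal sketch (the function name and the illustrative numbers are my own, not taken from any published meta-analysis):

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I-squared statistic for a set of studies.

    effects   -- per-study effect estimates (e.g. log odds ratios)
    variances -- per-study sampling variances (squared standard errors)
    Returns (Q, I2_percent).
    """
    weights = [1.0 / v for v in variances]            # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1                             # degrees of freedom
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2


# Illustrative: three broadly consistent studies plus one outlier
_, i2 = heterogeneity([0.20, 0.25, 0.22, 0.80], [0.04, 0.05, 0.03, 0.04])
print(f"I2 = {i2:.0f}%")  # prints "I2 = 53%": moderate by the 30-60% rule of thumb
```

Swapping the outlier for a fourth consistent study drives I² toward 0%, which is why the statistic is a quick sanity check on whether pooling was justified at all.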
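For the publication-bias check in point 4, funnel-plot asymmetry is commonly quantified with Egger's regression: regress each study's standardized effect (effect / SE) on its precision (1 / SE); an intercept far from zero hints at small-study effects such as publication bias. A hedged sketch under that standard formulation (function name and numbers are illustrative; a real analysis would also test the intercept's statistical significance, and the test is considered unreliable with fewer than about ten studies, echoing point 5 above):

```python
def egger_intercept(effects, std_errors):
    """Ordinary least-squares intercept of Egger's regression.

    Regresses standardized effect (effect / SE) on precision (1 / SE).
    An intercept well away from zero suggests funnel-plot asymmetry.
    """
    x = [1.0 / se for se in std_errors]                   # precision
    y = [e / se for e, se in zip(effects, std_errors)]    # standardized effect
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return ybar - slope * xbar                            # the intercept


# Symmetric case: identical effects across small and large studies
print(egger_intercept([2.0, 2.0, 2.0], [1.0, 0.5, 0.25]))  # close to 0: no asymmetry signal
```

The design choice here is deliberate simplicity: closed-form OLS with no dependencies, enough to flag a suspicious funnel before handing the question to a statistician.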

The above are some basic criteria that medical affairs and marketing teams can check before picking up a meta-analysis for use as scientific evidence. The list is not exhaustive, as there are several statistical pitfalls besides; nevertheless, it should serve as a basic hygiene check. Otherwise, a smart competitor will not wait long before pointing out the flaws in your evidence to HCPs, denting your credibility. I have myself created presentations for clients to educate HCPs on why evidence from a particular meta-analysis being shared by a competitor should not be relied on.

I hope you find the above list helpful.

________________________________________________________________________________

#medicalaffairs #medicalwriting #medicalwriter #medicalcommunications #brandplan #healthcarecommunications #scientificwriting #freelancemedicalwriter #scicomms #medcomms #advisoryboard

Ashok V K

Head of Marketing Oncology India at Dr. Reddy's Laboratories

8 months ago

Thanks Dr Sangeeta for simplifying and explaining with rationale.

Atul Phatak

Experienced business development professional clinical research Phase I to Phase IV.

8 months ago

Thanks for this useful post.
