Franklin JM, Dejene S, Huybrechts KF, Wang SV, Kulldorff M, Rothman KJ. A bias in the evaluation of bias comparing randomized trials and nonexperimental studies. Presented at the 33rd ICPE International Conference on Pharmacoepidemiology & Therapeutic Risk Management; August 28, 2017; Montreal, Canada. [abstract] Pharmacoepidemiol Drug Saf. 2017 Aug;26(Suppl 2):32. doi: 10.1002/pds

BACKGROUND: Hemkens et al. conducted a meta-analysis to compare estimated treatment effects from randomized trials with estimated effects from observational studies based on routinely collected data (RCD), such as insurance claims and patient registries. They calculated a pooled relative odds ratio (ROR) of 1.31 (95% confidence interval [CI]: 1.03-1.65) across a variety of studies, concluding that RCD studies systematically overestimated protective effects. However, to combine disparate studies, their meta-analysis inverted the results for some clinical questions, forcing all RCD point estimates below 1.
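The consequence of that inversion rule is easy to demonstrate by simulation. The sketch below is ours, not part of the original analyses: the standard errors and heterogeneity are illustrative, an unweighted mean stands in for the random-effects pooling, and the ROR is taken as the trial OR over the RCD OR (the orientation implied by a pooled estimate above 1). Even when both designs estimate the same true effect, choosing which questions to invert based on the observed (noisy) RCD estimate pushes the pooled ROR above 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                            # many simulated clinical questions
se_rcd, se_trial, tau = 0.3, 0.3, 0.4  # illustrative values, not from the data

# Null scenario: the RCD study and the trial estimate the SAME true log-OR.
true_log_or = rng.normal(0.0, tau, n)
log_or_rcd = true_log_or + rng.normal(0.0, se_rcd, n)
log_or_trial = true_log_or + rng.normal(0.0, se_trial, n)

# Selective inversion: whenever the RCD estimate is above 1 (log-OR > 0),
# invert BOTH estimates for that question, so every RCD OR ends up below 1.
flip = np.where(log_or_rcd > 0.0, -1.0, 1.0)
log_ror = flip * (log_or_trial - log_or_rcd)  # log relative odds ratio

# Unweighted mean as a stand-in for the pooled estimate: it drifts above 1
# (about 1.15 here) even though there is no systematic difference.
print(np.exp(log_ror.mean()))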

OBJECTIVES: To evaluate the statistical properties of this pooled ROR and to reanalyze the data using a more appropriate method.

METHODS: We proved that the selective inversion rule employed in the original meta-analysis can positively bias the estimate of the ROR. We then showed that it did so by repeating the random-effects meta-analysis with a different inversion rule, to investigate how strongly the ROR depends on the direction of comparisons. As an alternative to the ROR, we calculated the observed proportion of clinical questions for which the RCD and trial CIs overlapped and compared it with the expected proportion assuming no systematic difference between study types. We focused on 50% CIs because the 95% CIs always overlapped.
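A sketch of the overlap calculation follows (Python; the function names are ours, and the symmetric normal approximation on the log-OR scale is an assumption). Two level-z CIs overlap when the point estimates differ by less than the sum of the half-widths; under no systematic difference the two estimates share a true value, so the expected overlap probability follows directly.

```python
import numpy as np
from scipy.stats import norm

Z50 = norm.ppf(0.75)  # half-width multiplier for a 50% CI (about 0.674)

def cis_overlap(est_rcd, se_rcd, est_trial, se_trial, z=Z50):
    """Do the two z-level CIs on the log-OR scale overlap?"""
    return abs(est_rcd - est_trial) <= z * (se_rcd + se_trial)

def expected_overlap(se_rcd, se_trial, z=Z50):
    """P(overlap) when both estimates share the same true log-OR,
    so their difference is N(0, se_rcd^2 + se_trial^2)."""
    cut = z * (se_rcd + se_trial) / np.hypot(se_rcd, se_trial)
    return norm.cdf(cut) - norm.cdf(-cut)

print(expected_overlap(0.3, 0.3))  # about 0.66 for equal standard errors
```

With equal standard errors the expected overlap of 50% CIs is roughly 66%; averaging over the actual standard-error pairs of the 16 comparisons presumably yields the 60% figure reported below.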

RESULTS: Reanalyzing the data with a different inversion rule yielded an estimated ROR of 0.98 (95% CI: 0.78-1.23), indicating that the ROR is highly dependent on the direction of comparisons. The 50% CIs overlapped for 8 of 16 clinical questions (50%; 95% CI: 25% to 75%), compared with an expected overlap of 60% assuming no systematic difference between RCD studies and trials.
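The interval around the observed overlap proportion can be reproduced with an exact (Clopper-Pearson) binomial CI; the abstract does not state which method was used, so this choice is an assumption that happens to match the reported 25% to 75%.

```python
from scipy.stats import beta

k, n = 8, 16  # clinical questions whose 50% CIs overlapped
lower = beta.ppf(0.025, k, n - k + 1)
upper = beta.ppf(0.975, k + 1, n - k)
print(f"{k / n:.0%} ({lower:.0%} to {upper:.0%})")  # 50% (25% to 75%)
```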

CONCLUSIONS: There was little evidence of a systematic difference in effect estimates between RCD and randomized trials. Estimates of pooled RORs across distinct clinical questions are generally not interpretable and may be misleading.
