Reflections on the RCT DUPLICATE Study and Increasing Confidence in Real-World Evidence
With input from Alind Gupta, Louis Dron, and Jason Simeone.
Randomized clinical trials (RCTs) have long been considered the gold standard for assessing the efficacy of medical treatments, but real-world evidence (RWE) is often more representative of routine clinical care. The RCT DUPLICATE study makes great strides in comparing the two, but the study has its limitations: notably, the lack of quantitative bias analysis.
The RCT DUPLICATE Study: RWE vs. RCTs
The RCT DUPLICATE study offers valuable insights into the comparison between randomized clinical trials (RCTs) and real-world evidence generated from three US claims databases (Optum Clinformatics, MarketScan, and Medicare).1 By emulating the designs of 30 completed and two ongoing RCTs of medications using insurance claims data, the study aimed to determine the level of agreement between the results of each RCT and its real-world emulation. A high degree of concordance was observed between the RCT and database emulation results when the trial's design and measurements could be closely emulated, suggesting that real-world evidence studies can reach conclusions similar to those of RCTs under these conditions.
Limitations: A Call for Further Research
However, there are limitations to the RCT DUPLICATE study that must be considered. First, the study's sample was highly selective and not representative of all RCTs. A significant emphasis was placed on cardiovascular RCTs, whereas regulatory applications of RWD studies have tended to focus on rare diseases and oncology, so the conclusions drawn from this study may not apply to all trial emulations using real-world data. More importantly, the study did not include quantitative bias analysis (QBA) for unmeasured confounders, variables with measurement error, or missing data (perennial issues with RWD), which could have increased confidence in the real-world evidence that was generated.2 The authors do present semiquantitative positive/negative control analyses, a step in the right direction, but such controls may not be available in all cases.
Although the authors note potential sources of bias as limitations, they did not attempt to directly quantify the impact of unmeasured confounders, mismeasurement, and missing data elements on their results. Moreover, the study found weaker concordance among RCTs for which close emulation of certain design elements was not possible. This further highlights the importance of addressing bias and confounding via QBA in these comparisons, something increasingly called for by regulatory agencies.3 Had QBA been performed for known sources of bias, a closer alignment with the target trials might have been possible.
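To make the idea of QBA concrete: one simple and widely used form is the E-value of VanderWeele and Ding, which reports the minimum strength of association (on the risk-ratio scale) that an unmeasured confounder would need to have with both treatment and outcome to fully explain away an observed effect estimate. The sketch below is illustrative only; the risk ratio of 0.75 is a hypothetical input, not a result from the RCT DUPLICATE study.

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (VanderWeele & Ding, 2017).

    Returns the minimum risk ratio an unmeasured confounder would need
    with both exposure and outcome to fully explain away the estimate.
    """
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:
        rr = 1 / rr  # protective effects: work on the reciprocal scale
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical example: an emulation estimates RR = 0.75.
# A confounder would need associations of roughly RR = 2 with both
# treatment and outcome to nullify this estimate.
print(round(e_value(0.75), 2))  # → 2.0
```

Reporting such a number alongside an emulation result lets readers judge whether plausible residual confounding in claims data could account for any divergence from the target trial.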
Despite these limitations, the RCT DUPLICATE study sheds light on the complementary nature of RCTs and real-world evidence.4 RCTs have long been considered the gold standard for assessing the efficacy of medical treatments, as they allow for strong causal inferences. On the other hand, real-world evidence is often more representative of routine clinical care, providing insights into how treatments work in real-world populations.
As the authors note, to understand the complementary roles of RCTs and real-world evidence, both their strengths and limitations must be taken into account. RCTs offer strong internal validity, but their results may not always generalize to real-world settings or to different patient populations. Real-world evidence, in contrast, can provide valuable insights into treatment effects in clinical practice that RCTs cannot capture, but it is subject to bias and confounding due to the lack of randomization.
The RCT DUPLICATE study offers important insights into the potential of real-world evidence studies to complement RCTs in understanding how medications work in clinical practice. However, the study's limitations, particularly the lack of quantitative bias analysis for known sources of bias, highlight the need for further research to validate and build upon these findings. By understanding and addressing the strengths and limitations of both RCTs and real-world evidence studies, researchers and practitioners can make better-informed decisions about the use of these types of evidence in decision-making and treatment planning.
1. Wang, Shirley V., et al. "Emulation of randomized clinical trials with nonrandomized database analyses: results of 32 clinical trials." JAMA 329.16 (2023): 1376-1385.
2. Zhang, Qing, et al. "Real-World Comparative Effectiveness of First-Line Alectinib Versus Crizotinib in Patients With Advanced ALK-Positive NSCLC With or Without Baseline Central Nervous System Metastases." JTO Clinical and Research Reports 4.4 (2023): 100483.
3. FDA, US. "Considerations for the design and conduct of externally controlled trials for drug and biological products guidance for industry." (2023).
4. Subbiah, Vivek. "The next generation of evidence-based medicine." Nature Medicine (2023): 1-10.
Read more from Perspectives on Enquiry & Evidence:
How Target Trial Emulation Can Take the Guesswork Out of Comparative Effect Estimates in Medicare Drug Price Negotiation
Comparative Effectiveness: Methods and Techniques for Better Decision-Making
Can RWE Help Restore Decades of Health Inequalities? Yes, and Here’s How