
Using Quantitative Bias Analysis in Real World Data Strategy

The gold standard for assessing the efficacy of a medicine continues to be the randomized controlled trial (RCT); however, for many reasons (disease rarity and/or ethical concerns), two-arm trials with adequate power may be infeasible. In such cases, a single-arm trial or a purely observational study is conducted. To evaluate comparative treatment effects using data from single-arm trials, non-randomized studies or indirect treatment comparisons (ITCs) are designed to incorporate external data sources. These include datasets from other historical trials or observational, real-world data (RWD).

Any study that uses RWD to generate real-world evidence (RWE) carries the risk of multiple potential sources of bias. These concerns are even more pronounced in ITCs, as clinical trial and RWD cohorts often differ substantially both in patient population characteristics and in the settings under which data are collected. Specifically, selection bias, confounding, and missing data are among the limitations of RWE identified by FDA reviewers.1 These challenges cannot be overcome completely, but they can be accounted for in a principled way by quantifying their effects using a suite of methods falling under the umbrella term quantitative bias analysis (QBA).

Role of quantitative bias analysis for decision-makers

While the concept of quantifying bias, and many QBA methods, are not new, they are not commonly applied in pharmaceutical studies, even as the number of publications using RWD grows rapidly year by year.2 For comparative effectiveness studies between trial and real-world patients, in which the real-world cohort serves as a synthetic or external control arm (SCA or ECA), selection bias is currently most often addressed by inverse probability of treatment weighting (IPTW) and propensity score-based matching. However, these commonly used approaches do not address residual bias, mismeasured or unmeasured confounding, or missing data. Decision-makers often find this inadequate for health technology assessment (HTA), because the robustness of the evidence is unknown in the face of the many uncertainties that may affect treatment effect estimates minimally or up to a critical degree. The result is that HTA bodies and regulators are forced to behave conservatively, which ultimately may translate into patients being unable to access new medicines.
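To make the conventional starting point concrete, below is a minimal sketch of stabilized IPTW, assuming a pooled data frame with a binary "treated" indicator and a hypothetical covariate list; real analyses require far more care in model specification, overlap diagnostics, and balance checking.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_weights(df: pd.DataFrame, covariates: list) -> np.ndarray:
    """Fit a propensity model and return stabilized IPT weights."""
    # Propensity score: probability of being in the trial arm given covariates
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    ps = ps_model.predict_proba(df[covariates])[:, 1]
    p_treated = df["treated"].mean()  # marginal probability of treatment
    # Stabilized weights keep the pseudo-population close to the sample size
    return np.where(df["treated"] == 1, p_treated / ps, (1 - p_treated) / (1 - ps))
```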

A recent publication in Nature Communications titled “Addressing Challenges with Real-World Synthetic Control Arms to Demonstrate the Comparative Effectiveness of Pralsetinib in Non-Small Cell Lung Cancer” demonstrates multiple QBA methods, including approaches for handling missing data and residual confounding.3 By assessing the robustness of estimates arising from ITCs of pralsetinib from a single-arm trial (NCT03037385) against “control” treatment regimens in RWD cohorts constructed from a US electronic health record (EHR) database from Flatiron Health, the study aims to serve as a template for similar studies.

How to use QBA to address missing data

To address missingness in a key measure of disease severity, the Eastern Cooperative Oncology Group (ECOG) Performance Status (PS) score, the analyses systematically assessed how treatment effect estimates change under different assumptions about the missingness mechanism. For more challenging cases, such as when values cannot be assumed to be missing at random, the authors demonstrate how a tipping-point analysis can be used to consider a range of plausible scenarios.

This approach has the advantage of minimizing the number of further assumptions that must be made. The treatment effect estimates were robust against several missingness assumptions, which addressed the concern that excluding patients with missing ECOG PS from the study would invalidate the conclusions drawn.
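As an illustration of the tipping-point idea, here is a hedged sketch, not the authors' code: it mean-imputes missing ECOG PS, then shifts the imputed control-arm values toward worse scores by an increasingly pessimistic delta, refitting a Cox model until (if ever) the treatment effect loses significance. The column names and the linear handling of ECOG PS are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def ecog_tipping_point(df: pd.DataFrame, deltas=np.arange(0.0, 2.01, 0.25)):
    """df columns (hypothetical): time, event, treated, ecog (NaN if missing).
    Assumes a beneficial treatment (hazard ratio < 1) in the base case."""
    for delta in deltas:
        imputed = df.copy()
        miss_ctrl = imputed["ecog"].isna() & (imputed["treated"] == 0)
        imputed["ecog"] = imputed["ecog"].fillna(df["ecog"].mean())  # MAR anchor
        imputed.loc[miss_ctrl, "ecog"] += delta  # pessimistic shift in controls
        fit = CoxPHFitter().fit(imputed, duration_col="time", event_col="event")
        upper_ci = fit.summary.loc["treated", "exp(coef) upper 95%"]
        if upper_ci >= 1.0:
            return delta  # smallest shift at which significance is lost
    return None  # robust over the whole range of shifts examined
```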

How to use QBA to address unmeasured confounding

Unmeasured confounding was assessed by estimating the minimum association that a hypothetical unmeasured confounder would need with both treatment assignment and the outcome of interest in order to nullify the observed treatment effect. A range of these confounder-exposure and confounder-outcome associations was represented on a bias plot, allowing at-a-glance determination of whether the results in question were robust to plausible unmeasured confounding. In this case, the QBA suggested it would be implausible for systematic differences in unmeasured prognostic variables to be large enough to reverse the findings, lending them further support.
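One widely used formalization of this idea is the E-value of VanderWeele and Ding, which gives the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed effect. The sketch below treats a hazard ratio approximately as a risk ratio and is illustrative; it is not necessarily the exact calculation used in the paper.

```python
import numpy as np

def e_value(rr, ci_limit=None):
    """E-value for a risk (or rate) ratio and, optionally, for the CI
    limit closer to the null. Protective effects are inverted first."""
    def _e(r):
        r = 1.0 / r if r < 1 else r
        return r + np.sqrt(r * (r - 1))
    result = {"point": _e(rr)}
    if ci_limit is not None:
        # If the CI crosses the null, no confounding is needed: E-value = 1
        crosses = (rr < 1 <= ci_limit) or (rr > 1 >= ci_limit)
        result["ci"] = 1.0 if crosses else _e(ci_limit)
    return result

# e.g. an observed hazard ratio of 0.50 (95% CI upper limit 0.80) would need
# a confounder associated with both treatment and outcome by a ratio of
# roughly 3.4 (about 1.8 for the CI limit) to fully explain the effect away:
print(e_value(0.50, ci_limit=0.80))
```

Evaluating such thresholds over a grid of confounder-exposure and confounder-outcome associations produces exactly the kind of bias plot described above.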

Custom QBA to account for differences in trial and RWD performance

Lastly, beyond the established methods in the QBA toolkit, tailored methods can be developed to address specific suspected sources of bias in an analysis. One such example is given in the study, where the control regimens in question appeared to demonstrate lower efficacy, in terms of median overall survival, in the RWD than would have been expected in a trial setting. Concordance between RWD and clinical trials is a topic meriting discussion in its own right;4 however, when the goal is to estimate comparative effectiveness between clinical trial and RWD cohorts, every possible effort should be made to identify and understand how such bias could affect the reliability and robustness of the results.

Ultimately, the authors addressed these concerns by designing and executing an analysis based on the tipping-point idea described above. The results showed that the treatment effect estimated by the main analysis was robust against plausible discrepancies between RWD and clinical trial regimen performance.
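For illustration only, a deliberately simple reconstruction of that idea might look as follows: inflate the external control arm's survival times by a growing factor, crediting the real-world regimens with trial-like performance, and record the smallest factor, if any, at which the comparative effect is no longer significant. The multiplicative adjustment and column names here are assumptions, not the authors' implementation.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def performance_tipping_point(df: pd.DataFrame, factors=np.arange(1.0, 2.01, 0.05)):
    """df columns (hypothetical): time, event, treated (1 = trial arm)."""
    for factor in factors:
        adjusted = df.copy()
        controls = adjusted["treated"] == 0
        # Crude proxy: credit the RWD controls with trial-like performance
        # by stretching their survival times by a growing factor
        adjusted.loc[controls, "time"] *= factor
        fit = CoxPHFitter().fit(adjusted, duration_col="time", event_col="event")
        if fit.summary.loc["treated", "exp(coef) upper 95%"] >= 1.0:
            return factor  # smallest inflation that erases the effect
    return None  # robust across the whole range examined
```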

 

“While there is significant concern that comparisons involving single-armed trials will yield biased results, few have thought to ask just how biased such a result is,” said Grace Hsu, Associate Director of Advanced Epidemiology at Cytel Inc. Ms. Hsu was a principal investigator on this project and an early industry pioneer of Quantitative Bias Analysis. “No one thought to ask, just how much bias is there? What we have shown through the implementation of these techniques is that when you measure the bias, it is sometimes infinitesimally small, too small to warrant concern that patients will receive subpar therapies.”

 

Continuing research into QBA by industry, academics, and decision-makers

This collaboration between F. Hoffmann-La Roche and Cytel Inc. is part of their ongoing efforts to raise awareness of QBA by illustrating its practical advantages and demonstrating its value, given the additional expertise and time it requires. An earlier showcase of QBA applications appeared in the publication “Assessment of Alectinib vs Ceritinib in ALK-Positive Non–Small Cell Lung Cancer.”5 Beyond trial-versus-RWD comparisons, studies based entirely on RWD, such as “Association Between Smoking History and Overall Survival in Patients Receiving Pembrolizumab for First-Line Treatment of Advanced Non–Small Cell Lung Cancer,”6 use QBA to support analyses comparing two cohorts both drawn from RWD.

In all the examples cited above, data from Flatiron Health were used as the RWD source. Flatiron Health is known to offer some of the most comprehensive oncology RWD in the world. Yet these studies demonstrate that high-quality RWD sources and QBA complement one another; neither can entirely replace the other.

 

“QBA requires strong expertise,” says Dr. Sreeram Ramagopalan, Global Head of Real-World Evidence for Market Access at Roche. “It was impressive to see how methodological advances can help decision-makers quantify bias present in RWD.”

 

QBA is not a panacea for all issues that may arise in a study, but it can substantially enhance the quality and utility of the evidence generated by providing a principled roadmap for assessing robustness. In turn, the potential of QBA to expand research possibilities and unlock the value of real-world data would help payers make better use of RWE.

To provide a deeper understanding of the applications of QBA, a further collaboration involving Roche, the National Institute for Health and Care Excellence (NICE), PHMR, and academic colleagues at Harvard University, Leiden University, and Imperial College London is in progress, via a study titled “Quantitative Bias Analysis for the Assessment of Bias in Comparisons Between Synthetic Control Arms from External Data and Lung Cancer Trials” (QBASEL). Preliminary results from QBASEL were presented at this year’s HTAi meeting on June 27.

 

References

1 Patel, D., Grimson, F., Mihaylova, E., et al. (2021). Use of External Comparators for Health Technology Assessment Submissions Based on Single-Arm Trials. Value in Health, 24(8), 1118-1125.
2 Booth, C. M., Karim, S., & Mackillop, W. J. (2019). Real-World Data: Towards Achieving the Achievable in Cancer Care. Nature Reviews Clinical Oncology, 16(5), 312-325.
3 Popat, S., Liu, S. V., Scheuer, N., et al. (2022). Addressing Challenges with Real-World Synthetic Control Arms to Demonstrate the Comparative Effectiveness of Pralsetinib in Non-Small Cell Lung Cancer. Nature Communications, 13(1), 1-10.
4 Hsu, G. G., MacKay, E., Scheuer, N., & Ramagopalan, S. V. (2021). Keeping It Real: Implications of Real-World Treatment Outcomes for First-Line Immunotherapy in Metastatic Non-Small-Cell Lung Cancer. Immunotherapy, 13(18), 1453-1456.
5 Wilkinson, S., Gupta, A., Scheuer, N., et al. (2021). Assessment of Alectinib vs Ceritinib in ALK-Positive Non–Small Cell Lung Cancer in Phase 2 Trials and in Real-World Data. JAMA Network Open, 4(10), e2126306.
6 Popat, S., Liu, S. V., Scheuer, N., et al. (2022). Association Between Smoking History and Overall Survival in Patients Receiving Pembrolizumab for First-Line Treatment of Advanced Non–Small Cell Lung Cancer. JAMA Network Open, 5(5), e2214046.

 


 

 
