<img alt="" src="https://secure.lote1otto.com/219869.png" style="display:none;">
Skip to content

Quantifying Uncertainty in RWE Studies with Quantitative Bias Analysis

Missing data and unmeasured confounding are common challenges for researchers, particularly in observational studies and those involving real-world data, jeopardizing the validity of study conclusions. Here, we introduce a useful tool — quantitative bias analysis (QBA) — to address these challenges.

Comparative analysis with synthetic/external control arms: A common stage for quantitative bias analysis

In a previous post introducing the applications and key elements of comparative analysis with synthetic/external control arms, we emphasized the various factors involved in data selection. Even with careful selection after systematic literature and data landscaping, imperfections in the data persist. Examples include missing data in measured variables, measurement error, and entirely unmeasured confounders. Would this render a study infeasible?

Well-designed statistical analysis can mitigate the effects of bias in study estimates, and QBA can further support a study by allowing researchers to quantify how bias can change study conclusions. Regulatory and HTA agencies have referenced QBA in their guidance.

 

Regulatory and HTA agency guidance referencing QBA

  • NICE: “If concerns about residual bias remain high and impact on the ability to make recommendations, developers could consider using quantitative bias analysis.”1

  • FDA: “Sponsors should develop a priori plans for assessing the impact of confounding factors and sources of bias, with quantitative or qualitative bias analyses used to evaluate these concerns. Such prespecified analyses can assist in the interpretation of study results.”2

  • CADTH: “QBAs have several benefits, including identifying sources of systematic error and providing ranges of potential impacts of bias on study results, reducing undue confidence in results and conclusions.”3

 

What is quantitative bias analysis?

Quantitative bias analysis4 is a set of statistical techniques used to assess the potential impact of a wide range of systematic errors (biases) on the results of a research study. QBA goes beyond simply acknowledging the possibility of bias or describing it qualitatively; it allows researchers to quantify the magnitude and direction of the bias, giving a far more complete and nuanced picture of the uncertainty associated with a study.

Some key elements of QBA are as follows:

  • Focuses on systematic errors: QBA targets biases that consistently distort the results in a particular direction, unlike random errors (noise), which can fluctuate from study to study.

  • Provides context and transparency: QBA doesn’t simply present a single number as a measure of bias. It helps researchers interpret the potential impact of bias based on the specific context of the study and associated domain knowledge, such as the prevalence of a suspected unmeasured confounder.

By incorporating QBA into a study, researchers can:

  • Strengthen the credibility of findings: By quantifying potential bias, researchers can demonstrate the robustness of their results against all kinds of data challenges or highlight limitations that need further investigation.

  • Guide future research: Understanding the types of bias that might have affected a current study can inform researchers on how to improve future studies, such as by collecting more comprehensive data or employing more robust methodologies.

QBA is a valuable tool for producing rigorous conclusions, leading to more reliable and trustworthy studies. What follows are some QBA applications for common data challenges.

 

Quantitative bias analysis for missing data: Beyond imputation

Missing data is an obstacle often encountered not only in real-world data but in clinical studies as well: incomplete surveys, lost samples, or unreliable measurements can leave gaps in a study. The missingness mechanisms can be summarized as follows:

  • Missing Completely at Random (MCAR): The missingness is unrelated to any study variables, observed or not. An example would be respondents skipping survey questions purely at random.

  • Missing at Random (MAR): The missingness depends on observed variables but not on the missing values themselves. For example, older participants may be more likely to skip certain questions; if age is recorded, the missingness can be accounted for.

  • Missing Not at Random (MNAR): The missingness depends on the unobserved values themselves, which is the most difficult scenario to address. For example, people with worse health outcomes might be less likely to complete a health survey. The simulation sketch after this list illustrates how MNAR, unlike MCAR, can bias even a simple estimate.
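
Here is that sketch, a minimal simulation in which an MCAR mechanism leaves a simple mean estimate roughly unbiased while an MNAR mechanism distorts it. All numbers are hypothetical and chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical health score for 10,000 patients (illustrative values only).
score = rng.normal(50, 10, 10_000)

# MCAR: each value is missing with a fixed probability, unrelated to the score.
mcar_observed = score[rng.random(score.size) > 0.3]

# MNAR: sicker patients (lower scores) are more likely to be missing.
p_missing = 1 / (1 + np.exp((score - 40) / 5))   # higher for lower scores
mnar_observed = score[rng.random(score.size) > p_missing]

print(f"true mean:       {score.mean():.2f}")
print(f"mean under MCAR: {mcar_observed.mean():.2f}")  # close to the truth
print(f"mean under MNAR: {mnar_observed.mean():.2f}")  # biased upward
```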

 

Figure 1: Conceptual Missing Data Bias Example4

 

A natural response may be to impute the missing data; however, imputation requires assumptions about the mechanism of missingness that can be difficult to justify. QBA offers a way to understand the potential impact of missing data on study results by testing for robustness under a wide range of missingness assumptions.

Based on the type of missing data, QBA can help us understand the potential bias and adjust for it. For MCAR data, the analysis is relatively straightforward, and standard statistical methods might suffice. For MAR and MNAR data, however, QBA shines. It applies a range of assumptions about the missing data mechanism and the observed variables to estimate the missing values and their impact on the results; a tipping point analysis then determines whether any plausible assumption would reverse the study conclusions. This gives a more accurate picture of the true cause-and-effect relationship.
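
As a concrete illustration, the sketch below performs a deliberately simplified delta-adjusted tipping point analysis on simulated two-arm data: missing outcomes in the treated arm are imputed at the observed treated mean shifted by increasingly pessimistic deltas, and the treatment effect is re-tested under each assumption. All values are hypothetical, and a real analysis would typically use multiple imputation rather than the single constant imputation shown here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical two-arm study: a continuous outcome for treated and control
# patients, with roughly 25% of treated outcomes missing (values illustrative).
treated = rng.normal(1.0, 2.0, 200)
control = rng.normal(0.0, 2.0, 200)
missing = rng.random(200) < 0.25
observed_treated = treated[~missing]

# Delta adjustment: impute missing treated outcomes as the observed treated
# mean shifted by delta, then re-test the effect under each assumption.
for delta in np.arange(0.0, -3.1, -0.5):
    imputed = np.full(missing.sum(), observed_treated.mean() + delta)
    full_treated = np.concatenate([observed_treated, imputed])
    _, p = stats.ttest_ind(full_treated, control)
    diff = full_treated.mean() - control.mean()
    print(f"delta={delta:+.1f}  mean diff={diff:+.2f}  p={p:.4f}")

# The tipping point is the delta at which the conclusion changes (e.g., p
# rises above 0.05). If only deltas that domain experts consider implausible
# tip the result, the conclusion is robust to the missing data.
```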

 

Quantitative bias analysis for unmeasured confounding: When not everything important is measured

Another roadblock in causal inference is unmeasured confounding. Confounding variables are those that influence both the exposure and the outcome, potentially distorting the observed relationship between them. The trouble is, sometimes these confounding variables are not measured at all in the data.

One type of QBA for assessing the impact of unmeasured confounding introduces the concept of the E-value: the minimum strength of association that an unmeasured confounder would need to have with both the exposure and the outcome to fully explain away the observed effect. While far easier to calculate than simulating the unmeasured confounder from scratch, the E-value must be interpreted very carefully.
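
For a point estimate on the risk ratio scale, the E-value has a simple closed form (the expression of VanderWeele and Ding): RR + sqrt(RR × (RR − 1)), with protective effects (RR < 1) inverted first. A minimal calculation sketch follows; the function name and example values are ours, chosen for illustration:

```python
import math

def e_value(rr: float) -> float:
    """Point-estimate E-value for an observed risk ratio.

    The E-value is the minimum strength of association, on the risk ratio
    scale, that an unmeasured confounder would need with both the exposure
    and the outcome to fully explain away the observed effect.
    """
    if rr < 1:                 # protective effects: take the reciprocal first
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed risk ratio of 2.0 yields an E-value of about 3.41: a confounder
# associated with both exposure and outcome by a risk ratio of 3.41 each
# could fully explain away the observed association.
print(f"E-value for RR=2.0: {e_value(2.0):.2f}")
print(f"E-value for RR=0.5: {e_value(0.5):.2f}")  # same, by symmetry
```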

 

Importance of contextualizing the E-value

Broadly speaking, a small E-value indicates that even a relatively weak unmeasured confounder could explain away the observed association. A larger E-value means that only a stronger confounder could do so, making unmeasured confounding less of a concern. However, judging what is a “small” versus “large” E-value in isolation can be misleading.

What may be considered a large E-value differs from study to study, because the strength of the exposure-outcome association also differs between studies. It follows naturally that a stronger hypothetical confounder would be needed to overcome a stronger exposure-outcome association. Thus, the E-value should be contextualized for each analysis.

 

Examples of quantitative bias analysis for missing data, unmeasured confounding, and beyond

Cytel’s researchers are industry pioneers in the development and practical application of QBA methodologies, particularly for comparative analyses. NICE’s published RWE framework references Cytel’s work as an example of the practical application of QBA for researchers’ consideration. Examples of published work applying QBA as described here can be found in Nature Communications, JAMA Network Open, and the Journal of Comparative Effectiveness Research.4,5,6,7

QBA targeting highly specific applications can also be designed, such as a case in lung cancer where the real-world patients had systematically poorer survival than those in the comparator trial cohort.8

 

Final takeaways

QBA empowers researchers to fully leverage all types of data and navigate challenges due to missingness, unmeasured confounding, and more. While not a magic bullet, QBA quantifies the potential impact of these issues, allowing researchers to make more informed interpretations of their findings and to highlight the limitations of their studies. This fosters transparency and helps decision-makers understand the robustness of the conclusions drawn.

 

Interested in learning more? For other examples of QBA, with cases tailored to specific research challenges, join Grace Hsu at ISPOR US 2024 in Atlanta, where she will present the workshop: “Quantify to Qualify: A Quantitative Bias Analysis Workshop to Bust Bias and Drive Trustworthy Data-Driven Decision Making in Real-World Evidence.”

Cytel’s Real-World and Advanced Analytics team will be at ISPOR US Booth #1018. Click to book a meeting with our experts:

 

Book a Meeting at ISPOR US

 

 

About Grace Hsu

Grace Hsu is Director, Real-World Evidence, at Cytel, with 9 years of experience in consulting and guiding project strategies. She holds a Master’s degree in Statistics and provides statistical consulting and strategy development for data curation and the application of advanced analytics to clinical and real-world data. Her peer-reviewed publications include work on COVID-19, synthetic/external control arm comparative effectiveness analysis, quantitative bias analysis, Bayesian borrowing, and other methods of indirect comparison for both pharmaceutical research and HTA/regulatory submissions.

 

 

Notes

1 National Institute for Health and Care Excellence. (2022). NICE real-world evidence framework.

2 U.S. FDA. (2023). Considerations for the design and conduct of externally controlled trials for drug and biological products: Guidance for industry.

3 Canada’s Drug and Health Technology Agency. (2023). Guidance for reporting real-world evidence.

4 Thorlund, K., Duffield, S., Popat, S., Ramagopalan, S., Gupta, A., Hsu, G., Arora, P., and Subbiah, V. (2023). Quantitative bias analysis for external control arms using real-world data in clinical trials: A primer for clinical researchers. Journal of Comparative Effectiveness Research, 0(0), e230147.

5 Popat, S., Liu, S. V., Scheuer, N., Gupta, A., Hsu, G. G., Ramagopalan, S. V., Griesinger, F., and Subbiah, V. (2022). Association between smoking history and overall survival in patients receiving Pembrolizumab for first-line treatment of advanced non–small cell lung cancer. JAMA Network Open, 5(5), e2214046.

6 Wilkinson, S., Gupta, A., Scheuer, N., Mackay, E., Arora, P., Thorlund, K., Wasiak, R., Ray, J., Ramagopalan, S., and Subbiah, V. (2021). Assessment of Alectinib vs Ceritinib in ALK-positive non–small cell lung cancer in phase 2 trials and in real-world data. JAMA Network Open, 4(10), e2126306.

7 Ramagopalan, S., Gupta, A., Arora, P., Thorlund, K., Ray, J., and Subbiah, V. (2021). Comparative effectiveness of Atezolizumab, Nivolumab, and Docetaxel in patients with previously treated non–small cell lung cancer. JAMA Network Open, 4(11), e2134299.

8 Popat, S., Liu, S. V., Scheuer, N., Hsu, G. G., Lockhart, A., Ramagopalan, S. V., Griesinger, F., and Subbiah, V. (2022). Addressing challenges with real-world synthetic control arms to demonstrate the comparative effectiveness of Pralsetinib in non-small cell lung cancer. Nature Communications, 13(1), 3500.
