As more payers and HTA agencies turn to real world data to compare the effectiveness of treatments, two sources of bias can become problematic for sponsors. The first comes from 'unmeasured confounders', variables not documented in a dataset that may have influenced the health outcomes of patients. The second comes from other forms of missing data. Being able to identify and quantify such bias is a major step towards convincing decision-makers of the soundness of comparative effectiveness findings derived from real world data.
Cytel scientists led by Dr. Radek Wasiak and Dr. Paul Arora recently worked with Roche to investigate unmeasured confounding and missing data for the comparative effectiveness of alectinib versus ceritinib, two treatments for non-small cell lung cancer. The quantitative bias analysis investigated two questions:
How extreme would an unmeasured confounder need to be to nullify the estimated treatment effect?
How biased would missing values need to be to nullify the effect?
A quantifiable approach to answering these questions demonstrated that the findings of the initial comparative effectiveness study were robust to plausible ranges of bias.
Researchers knew that 47% of the patients in the real world dataset had missing data, but did not know the degree to which this would bias the estimates of the comparative effectiveness study. They first addressed this using multiple imputation, a method in which missing values are estimated by constructing a number of hypothetical (but complete) datasets. These hypothetical datasets are carefully constructed by a statistical model fitted to the observed data; the quantity of interest is then estimated within each completed dataset, and the estimates are pooled across datasets.
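As a rough illustration of the within-then-across logic of multiple imputation, the sketch below uses scikit-learn's IterativeImputer on toy data. The data, the number of imputations, and the estimated quantity are all hypothetical stand-ins, not the study's actual model.

```python
# Illustrative sketch of multiple imputation with pooling across datasets.
# Toy data only; requires numpy and scikit-learn.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # toy covariate matrix
X[rng.random(X.shape) < 0.2] = np.nan   # knock out ~20% of values

M = 5                                   # number of imputed datasets
estimates = []
for m in range(M):
    # sample_posterior=True draws imputations rather than using point estimates,
    # so the M completed datasets differ from one another
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(X)      # one complete dataset
    estimates.append(completed[:, 0].mean())  # estimate within this dataset

pooled = float(np.mean(estimates))            # pool across the M datasets
print(f"pooled estimate: {pooled:.3f}")
```

In a real analysis the within-dataset estimate would be the treatment effect itself, and pooling would follow Rubin's rules to combine both the estimates and their variances.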
Such methods, while gaining popularity, assume that the data are missing at random. (They also cannot account for unmeasured confounders.) The researchers therefore built on the multiple imputation by stress-testing assumptions about the missing values, first against another real-world dataset, and then against the results of an RCT. The RWD-RWD comparison showed that the missing data would have to be extremely skewed to warrant concern, whereas the RWD-RCT comparison suggested that the missing data would have to be so skewed that it could not plausibly be captured by the model.
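One common way to stress-test missing-data assumptions is a tipping-point (delta-adjustment) analysis: imputed values in one arm are shifted by increasingly pessimistic amounts until the estimated effect disappears. The sketch below shows the idea on toy data; the outcomes, arm sizes, and effect measure are hypothetical, not those of the alectinib-versus-ceritinib study.

```python
# Hedged sketch of a tipping-point analysis: shift imputed outcomes in the
# treated arm by delta until the estimated effect is nullified. Toy data only.
import numpy as np

rng = np.random.default_rng(1)
observed_trt = rng.normal(2.0, 1.0, 80)   # observed outcomes, treated arm
observed_ctl = rng.normal(0.0, 1.0, 80)   # observed outcomes, control arm
imputed_trt = rng.normal(2.0, 1.0, 40)    # imputed outcomes, treated arm

def effect(delta):
    """Mean difference in outcomes when imputed treated values are shifted by delta."""
    treated = np.concatenate([observed_trt, imputed_trt + delta])
    return treated.mean() - observed_ctl.mean()

# Scan increasingly pessimistic shifts until the estimated effect crosses zero.
tipping_point = None
for delta in np.arange(0.0, -10.0, -0.5):
    if effect(delta) <= 0:
        tipping_point = delta
        break

print(f"effect nullified at delta = {tipping_point}")
```

The further the tipping point sits from any clinically plausible shift, the more robust the finding is to departures from the missing-at-random assumption.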
Together, these findings show that the initial comparative effectiveness study was robust to bias from missing data.
Plausibility of Confounders
To explore the possibility of an unmeasured confounder, researchers created a model relating exposure, outcomes and the hypothetical prevalence of a confounder. This revealed the degree of imbalance between treatment arms an unmeasured confounder would need, and the strength of correlation it would need with the outcomes, to nullify the effect. If the largest measured imbalance and the strongest measured correlation are significantly smaller than these hypothetical thresholds, an unmeasured confounder strong enough to explain the result becomes implausible.
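The blog does not name the exact sensitivity method used, but one widely used formalization of this idea is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed effect. A minimal sketch, with a hypothetical risk ratio:

```python
# Hedged sketch of the E-value calculation (VanderWeele & Ding).
# The risk ratio 0.55 below is a hypothetical example, not a study result.
import math

def e_value(rr: float) -> float:
    """Minimum confounder strength (risk-ratio scale) needed with both
    treatment and outcome to fully explain away an observed risk ratio."""
    if rr < 1:
        rr = 1 / rr  # the formula is symmetric for protective effects
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(0.55), 2))  # prints 3.04
```

An E-value of about 3 would mean a hidden confounder must roughly triple both the probability of treatment assignment and the risk of the outcome; comparing this against the strongest measured covariate associations is what makes the "plausibility" judgment quantitative.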
Evaluating this discrepancy, the researchers quantitatively determined that the findings of the comparative effectiveness study were robust to unmeasured confounding: it was highly implausible that such a confounder existed, either individually or as a combination of unmeasured confounders.
Quantitative bias analysis can help reassure payers and other decision-makers about the strength of comparative effectiveness studies that make use of real-world data. The findings of Cytel and Roche's investigations are a part of ISPOR's pre-release session.
Click below to download our full list of sessions at Virtual ISPOR.
About the Author:
Dr. Esha Senchaudhuri is a research and communications specialist, committed to helping scholars and scientists translate their research findings to public and private sector executives. At Cytel, Esha leads content strategy and content production across the company's five business units. She received a doctorate in philosophy from the London School of Economics, and is a former early-career policy fellow of the American Academy of Arts and Sciences. She has taught medical ethics at the Harvard School of Public Health (TH Chan School), and sits on the Steering Committee of the Society for Women in Philosophy's Eastern Division, which is responsible for awarding the Distinguished Woman in Philosophy Award.