Suppose two treatments, A and B, need to be compared but have never been compared head-to-head in a clinical trial. In the absence of such a trial, the treatments are typically compared via a third treatment, C (i.e., A to C and B to C), using indirect treatment comparison methods. Recent developments are challenging this status quo: regulatory-grade real world data (RWD) is increasingly available, and newer methods can avoid some of the biases that used to plague the use of observational data.
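As a minimal sketch of the kind of indirect comparison described above, the Bucher method anchors A vs. B on the common comparator C: the indirect log hazard ratio is the difference of the two trial-specific log hazard ratios, and the variances add. All numbers below are hypothetical, purely for illustration.

```python
import math

def bucher_indirect_comparison(log_hr_ac, se_ac, log_hr_bc, se_bc):
    """Indirect A vs. B log hazard ratio via common comparator C (Bucher method)."""
    log_hr_ab = log_hr_ac - log_hr_bc           # anchor on C: (A vs C) - (B vs C)
    se_ab = math.sqrt(se_ac**2 + se_bc**2)      # variances of the two estimates add
    ci = (log_hr_ab - 1.96 * se_ab, log_hr_ab + 1.96 * se_ab)
    return log_hr_ab, se_ab, ci

# Hypothetical published results: A vs. C (HR 0.80) and B vs. C (HR 0.90)
log_hr, se, (lo, hi) = bucher_indirect_comparison(
    math.log(0.80), 0.10,
    math.log(0.90), 0.12,
)
print(round(math.exp(log_hr), 3))  # indirect HR of A vs. B
```

Note that this anchored comparison preserves within-trial randomization, but it still assumes the two trial populations are similar enough for the comparison to carry over.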
This summer Cytel is pioneering two “Head-to-Head Comparisons using Real World Data” studies, one in oncology and one in cardiovascular disease. You can follow the conduct of these investigations through a new Cytel webinar series, which includes Professor Miguel Hernan of Harvard University’s TH Chan School of Public Health. Watch the recording of the first webinar in this series by clicking the button.
Read on for an interview with Professor Hernan.
Cytel: Following your theoretical and applied work on causal inference, you have spent the last few years advocating for increased use of the term ‘causal’ instead of ‘association’ when interpreting the results of studies using real world data. Why have you decided to pursue this ‘crusade’?
Miguel Hernan: It does feel like a crusade sometimes. A serious problem in the medical literature is that the actual research question is often left unspecified. For example, a journal may publish a paper whose stated goal is to estimate “the association between aspirin and stroke” from observational data, but a quick look at the paper shows that editors and authors were actually interested in “the causal effect of aspirin on stroke”. Why not say so? Because there is a misconception that all you can obtain from observational data is associations, not causal effects.
The fact is that all you can obtain from any data, including randomized trial data, are associations. We are just more willing to interpret some associations as causal effects than others, but the bottom line is that our scientific aim in both randomized trials and observational studies is to estimate the causal effect.
We estimate the association because that is all we can estimate from data, but the association is the means not the aim.
So, going back to your question, if we conflate the means (association) and the aim (causal effect), everything becomes confused. For example, if we were truly interested in the association between aspirin and stroke, we would not adjust for confounders in observational studies. We adjust for confounders because we are trying to emulate the process of randomization, that is, because we are truly interested in the causal effect. The medical literature is full of examples in which observational studies yielded implausible results because a reluctance to be explicit about the causal goal led to the application of the wrong analysis methods. Many medical papers seem to be shooting at a causal target without ever telling us what the target is and, as a result, readers cannot determine whether the authors hit the target, or even whether they used a correct methodology for that particular target.
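To make the confounder-adjustment point concrete, here is a minimal sketch of standardization over a single binary confounder: within each stratum the treated and untreated groups are compared, and the stratum-specific risk differences are then averaged over the combined population, mimicking the balance that randomization would have produced. The data and stratum names are invented for illustration.

```python
# Hypothetical counts by confounder stratum:
# stratum -> (treated events, treated n, control events, control n)
data = {
    "low_risk":  (10, 200, 15, 300),
    "high_risk": (30, 100, 50, 120),
}

def standardized_risk_difference(strata):
    """Risk difference standardized to the combined covariate distribution."""
    total = sum(nt + nc for (_, nt, _, nc) in strata.values())
    rd = 0.0
    for (et, nt, ec, nc) in strata.values():
        weight = (nt + nc) / total          # stratum's share of the full population
        rd += weight * (et / nt - ec / nc)  # stratum-specific risk difference
    return rd

rd = standardized_risk_difference(data)
print(f"standardized risk difference: {rd:.4f}")
```

A crude (unadjusted) comparison would mix the strata and be confounded by baseline risk; the standardized estimate removes that imbalance, which is exactly why we would skip this step if mere association, rather than the causal effect, were the goal.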
Cytel: As you think of the drug development process and the evidence generation required to support development of new drugs, what types of questions or problems can head-to-head comparisons using real world data help with? What is the advantage vs. other applicable methods?
Hernan: Often decision makers need to make a decision now, but evidence from a relevant randomized trial is not available (and perhaps never will be). What are we supposed to say to clinicians, patients, regulators, payers, and others who need to decide whether to give treatment A or B? That they cannot decide because a randomized trial of A vs. B doesn’t exist?
The fact is that decision makers will make a decision between A and B, regardless of the quality of the available evidence. A sound use of real world data will allow us to provide information, however tentative it may be, that will improve the quality of their decision making. This is especially true for decisions involving head-to-head comparisons for which no randomized trial is available. We can use the observational data to try to emulate the target trial that would answer the question of interest. The alternative is relying on our gut feeling.
Cytel: What is the benefit of the approach from the perspective of a researcher who is trying to convince a regulator or a payer of the value of their approach, and relatedly, why should life sciences companies be looking at this type of research when considering their evidence generation plans?
Hernan: Unfortunately, a randomized trial may not be feasible or practical (sometimes it may not even be ethical).
But even when it is possible to run a randomized trial, the period between the start of the design of the trial and the publication of its findings is long, typically measured in years. Now suppose the therapy is already in use. Why not use real world data to provide a provisional evaluation of the therapy until the randomized trial is completed?
When the evaluation is done correctly, we have an opportunity to improve decision making during the years in which no other data are available. Again, are there any other data-based alternatives?
Cytel: Will the greater availability of real world data make head-to-head comparisons more valuable or robust from a conclusions perspective?
Hernan: We all agree that, other things being equal, a randomized trial would be preferable to an observational analysis of real world data. The problem is that other things are usually not equal. Even when randomized trial data are available, many important questions remain to be answered. We can then use observational data to emulate a target trial that expands the eligibility criteria of the existing randomized trials. The target trial may also have a longer follow-up and therefore more clinically relevant outcomes, or a richer set of comparisons.
In fact, if randomized trials are available, their results can be used as benchmarks for the target trial emulation. This approach brings the best of both worlds and allows us to maximize the value of the real world data.