The award-winning TOGETHER Trial was designed with the vision of ensuring that COVID-19 therapies are both effective and accessible to the majority of people, especially in low- and middle-income countries. Members of the TOGETHER Trial, led by Principal Researcher Dr. Edward Mills (Cytel & McMaster), studied existing interventions as possible treatments for COVID-19. The TOGETHER Trial recently won the Society for Clinical Trials David Sackett Trial of the Year Award for 2021.
I interviewed Ofir Harari, Senior Research Principal (Statistics) at Cytel, who passionately worked on the TOGETHER Trial from its inception. Ofir has been working in the field of statistics and data analysis since 2007. His experience includes design and analysis of randomized and cluster-randomized clinical trials, Bayesian adaptive designs, statistical emulation, geospatial analysis, and network meta-analysis. At Cytel, Ofir leads projects in the area of real-world analytics. Prior to joining Cytel, Ofir was a postdoctoral fellow at the University of Toronto and Simon Fraser University. Ofir’s interest and expertise lie in the intersection of statistical methodology and software development.
Can you please talk about your involvement in the TOGETHER Trial? How has the journey been from the start to now winning the SCT Trial of the Year award?
First, I must mention that the TOGETHER Trial is primarily the hard work of two people. The driving force behind it is Ed Mills, who conceptualized it, had the courage to pursue it, secured the funding, and to this day remains the public face of the trial, presenting the results to the media and regulatory committees. Then there is Gilmar Reis, who has overseen the entire operation in Brazil. He makes everything tick!
We also need to acknowledge the efforts of the entire team that administers the trial, an operation so complicated that I struggle to even wrap my head around it. There are multiple sites, different drugs and their corresponding placebos, and data entry. I can only admire from a distance the work all these people have done, while I sit in the comfort of my beautiful office and analyze data on my computer.
The first trial was conducted under the umbrella of the Gates Foundation and tested hydroxychloroquine and lopinavir/ritonavir. It used the more traditional, frequentist group-sequential design with a single interim analysis. We had expected a much higher event rate, but it became clear that the original endpoint of hospitalization made adequate power difficult to achieve. It was also problematic in itself, since fewer patients than normal were being hospitalized because of the strain on hospitals in Brazil at the time.
Ed then got significant funding from Patrick Collison to move forward with the trial in a “platform” format, testing many other candidate therapies. We began to rewrite the entire protocol, with Ed determined to go back to his initial plan of conducting a Bayesian adaptive trial with multiple interim analyses. We also changed the primary endpoint to a composite of hospitalization or a six-hour ER (emergency room) visit, as a proxy for the kind of medical condition that would normally result in hospitalization.
In the analysis phase, my good friend and colleague Hinda Ruton did most of the data management and a lot of the analyses as well. Ruton’s incredible work in handling this with minimal past experience cannot be overstated. With each round and each review of our papers, we made improvements until, in our most recent Ivermectin paper published in the New England Journal of Medicine, we finally used Bayesian analysis for all primary and secondary endpoints as well as subgroup analyses.
Along the way, we found that the antidepressant Fluvoxamine showed a clear and substantial signal, reducing the incidence of severe disease in outpatients, and I still hope to see it being prescribed to high-risk individuals. Overall, the recognition that the trial has received is a testament to the quality of the operation and what is on the line, more than anything else.
What are Master Protocols? Why did we select this approach for the TOGETHER Trial?
A master protocol is a trial protocol that is written with the intention of using it repeatedly for multiple therapies over an indefinite period of time. New therapies may be added or an old one may be stopped depending upon the situation, but the design, the endpoints, the subgroups of interest and the analyses performed remain the same. Imagine the amount of time and money it saves in writing new protocols and setting up infrastructure for separate trials.
It was a natural choice for the TOGETHER Trial because the intention was always to (a) test a multitude of therapies, (b) for the same disease (COVID-19), and (c) for the same patient population (outpatients).
Also, the British RECOVERY trial, a large platform trial in an inpatient setting, had made headlines earlier by demonstrating the efficacy of dexamethasone in saving lives. It had showcased the efficiency of that framework, and you cannot go back from there.
Did we use any advanced statistical methods for the TOGETHER Trial? Was there any use of Bayesian methods for this trial?
This is quite a spicy topic! The event rates were assigned prior distributions; the test statistic used for this trial was the posterior probability that treatment is superior to control; and the decision boundaries were framed accordingly (stop for futility if that probability is exceedingly low, stop for efficacy if it is extremely high). All the communications used that language too (posterior probability of efficacy instead of a p-value, posterior medians instead of maximum likelihood estimates, and Bayesian credible intervals instead of frequentist confidence intervals), so everything looks perfectly Bayesian, right? Except that the decision boundaries were calibrated by a frequentist simulation under the null effect to meet the traditional 2.5% (one-sided) type I error rate requirement. Hence, it is really a frequentist design that speaks the Bayesian language. Right or wrong, this is still the expectation set by the regulatory authorities. Although more people are calling for “truly Bayesian” trials that disregard null hypothesis testing considerations altogether, all of today’s seemingly Bayesian adaptive trials follow these guidelines.
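To make the Bayesian test statistic Ofir describes concrete, here is a minimal Monte Carlo sketch of the posterior probability of superiority, assuming (purely for illustration; this is not the trial’s actual model or code) independent Beta(1, 1) priors on the event rate in each arm and invented interim counts:

```python
import numpy as np

rng = np.random.default_rng(42)

def posterior_prob_superiority(events_trt, n_trt, events_ctl, n_ctl,
                               a=1.0, b=1.0, n_draws=100_000):
    """Posterior probability that the treatment event rate is lower than
    control's, under independent Beta(a, b) priors on each rate.
    Illustrative sketch only: priors and counts are assumptions."""
    # Beta prior + binomial likelihood gives a Beta posterior for each arm
    p_trt = rng.beta(a + events_trt, b + n_trt - events_trt, n_draws)
    p_ctl = rng.beta(a + events_ctl, b + n_ctl - events_ctl, n_draws)
    # Monte Carlo estimate of P(p_trt < p_ctl | data)
    return np.mean(p_trt < p_ctl)

# Hypothetical interim data: 40/400 events on treatment vs 60/400 on control
prob = posterior_prob_superiority(40, 400, 60, 400)
print(f"Posterior probability of superiority: {prob:.3f}")
```

In a design like the one described, this probability would be compared to pre-specified efficacy and futility boundaries at each interim look.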
I am not sure how I feel about the use of Bayesian methods for the TOGETHER Trial. On one hand, null hypothesis testing is very counterintuitive and goes against the natural way both scientists and non-scientists think about evidence and uncertainty. On the other hand, not many trial results are replicable, and too many therapies do not stand the test of time. I still feel that the traditional frequentist ways of handling multiple testing provide stronger guardrails against false discoveries. We also need to think about choosing a Bayesian prior that does not skew the results too much, especially when data are not abundant.
Can you talk about some of the challenges we faced during the trial design and how we resolved them?
As I mentioned, the original trial design was a standard group sequential design with futility testing at the (single) interim analysis. We then amended the protocol and changed the schedule to three interim analyses, including futility and superiority tests, plus one final analysis. It was crucial that the type I error rate be controlled for the results to be recognized globally, which made way for a Bayesian/frequentist hybrid design that does not sit well with “orthodox” Bayesians, but that is the current state of things. The superiority/futility thresholds were determined via a very large simulation, with the futility threshold rising at each interim analysis to reflect the diminishing probability of making a discovery as fewer patients remain to be enrolled. If I ever design a similar trial, I will incorporate sample size reassessment as an integral part of the design, so that we can at least consider increasing the number of patients randomized without inflating the type I error rate. Few things are more frustrating than an underpowered trial.
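The calibration step Ofir mentions can be sketched in a few lines: simulate many trials under the null (identical event rates in both arms), check at every look whether the posterior probability of superiority crosses a candidate efficacy boundary, and raise the boundary until the crossing rate is at or below 2.5%. Everything here (event rate, look schedule, priors, trial counts) is invented for illustration; the TOGETHER Trial’s actual simulation was far larger and more detailed.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_superiority(e_t, n_t, e_c, n_c, n_draws=1_000):
    """Monte Carlo posterior P(p_trt < p_ctl) under Beta(1, 1) priors."""
    p_t = rng.beta(1 + e_t, 1 + n_t - e_t, n_draws)
    p_c = rng.beta(1 + e_c, 1 + n_c - e_c, n_draws)
    return np.mean(p_t < p_c)

def type_one_error(threshold, p0=0.15, looks=(150, 300, 450, 600),
                   n_trials=1_000):
    """Fraction of null trials (both arms with event rate p0) in which the
    posterior probability of superiority ever crosses `threshold` at any
    interim or final look. All numbers are illustrative assumptions."""
    hits = 0
    for _ in range(n_trials):
        # Per-patient event indicators for each arm under the null
        trt = rng.random(looks[-1]) < p0
        ctl = rng.random(looks[-1]) < p0
        for n in looks:
            if prob_superiority(trt[:n].sum(), n, ctl[:n].sum(), n) > threshold:
                hits += 1
                break
    return hits / n_trials

# Testing a naive 0.975 cutoff at every look tends to inflate the
# one-sided error beyond 2.5%; a stricter threshold pulls it back down.
print(type_one_error(0.975))
print(type_one_error(0.99))
```

In practice the same simulation machinery also sets the rising futility thresholds, but the efficacy side shown here is enough to illustrate why repeated looks require calibrated boundaries rather than a fixed 0.975 cutoff.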
Are you currently working on anything exciting that you can share with us?
As a member of the Real World and Advanced Analytics division at Cytel, my key role does not include the design or analysis of clinical trials. Currently, I am involved in a large Target Trial Emulation (TTE) project for a big pharmaceutical client, where we try to estimate the per-protocol treatment effect of an advanced non-small cell lung cancer therapy using data that is part randomized clinical trial and part observational.
Along with my colleagues from Canada and the Netherlands, I am also involved in two research projects. One is a novel methodology for indirect treatment comparison (ITC) with non-shared effect modification; this paper has been conditionally accepted by Research Synthesis Methods. The second investigates Bayesian borrowing in a meta-analysis setting, where the patient population or the therapy may change slightly between trials but the trials may still inform each other.
About the Author:
Mansha Sachdev specializes in content creation and knowledge management. She holds an MBA degree and has over 12 years of experience handling various facets of marketing across industries. At Cytel, Mansha is a Senior Content Marketing Manager and is responsible for producing informative content related to the pharmaceutical and medical devices industries.