<img alt="" src="https://secure.lote1otto.com/219869.png" style="display:none;">
Skip to content

Design Concept for a Confirmatory Basket Trial: Interview with Bob Beckman, Part 2


In this blog, we share the second part of our interview with Bob Beckman about a design concept for a confirmatory basket trial. Beckman is Professor of Oncology and of Biostatistics, Bioinformatics, and Biomathematics at Lombardi Comprehensive Cancer Center and the Innovation Center of Biomedical Informatics, Georgetown University Medical Center. The first part of the interview, which focuses on the context of the design, is available to read here. Otherwise, read on to learn more details about this innovative design, which has the potential to drastically increase drug development efficiency. Beckman presented on this topic at Cytel's East User Group Meeting in October.

Now you’ve given the context, can you tell us more about the design itself?
Really, the design is very simple - it’s a funnel. The indications go in, and some are removed or ‘pruned’ at an interim analysis. Those that pass this interim checkpoint continue to the end, and the indications are then pooled together and evaluated for the definitive endpoint.

One of the unique features of our basket trial design is that it includes an interim analysis based on an interim endpoint. So in oncology, for example, the interim analysis might be based on progression-free survival (PFS) or response rate (RR), and the final analysis would be based on overall survival. We would actually recommend that each indication be powered on the interim endpoint, because it is more sensitive.
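To make "powering on the interim endpoint" concrete, here is a minimal sketch (in Python) of a standard per-arm sample-size calculation for a one-sided two-proportion comparison of response rates. The control rate, treatment rate, alpha, and power below are purely illustrative assumptions, not values from the design:

```python
import numpy as np
from scipy.stats import norm

def n_per_arm_rr(p_control, p_treatment, alpha=0.10, power=0.80):
    """Per-arm sample size for a one-sided two-proportion z-test
    (standard normal-approximation formula)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * np.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * np.sqrt(p_control * (1 - p_control)
                                 + p_treatment * (1 - p_treatment))) ** 2
    return int(np.ceil(numerator / (p_control - p_treatment) ** 2))

# Illustrative values only: a 20% control response rate vs. a
# hypothesized 35% on treatment, tested one-sided at alpha = 0.10
print(n_per_arm_rr(0.20, 0.35))  # about 79 patients per arm
```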

".....at the interim analysis, you're asking the question - does the drug appear to be working in this indication based on the interim endpoint?"

For example, 140 patients for each of three indications might be enough to power each on progression-free survival, or fewer if powering on RR. Then for overall survival, you would pool all the indications together at the end. And so at the interim analysis, you're asking the question - does the drug appear to be working in this indication based on the interim endpoint? If it isn't, you remove that indication from the basket and don't carry it forward to the final pool. If it does appear to be working, that indication may qualify for accelerated approval based on the interim endpoint.
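As a rough illustration of the funnel, here is a simulation sketch in Python. This is not the authors' published method: for brevity it uses response rate at both the interim and final looks (the actual design uses overall survival for the final analysis), no additional patients accrue after the interim, and the control rate, effect sizes, and alpha levels are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def run_basket_trial(effects, n_per_arm=140, p_control=0.20,
                     interim_alpha=0.10, final_alpha=0.025, n_sims=10_000):
    """Simulate the funnel: prune each indication on a one-sided
    interim response-rate test, then pool the survivors for a single
    final analysis. `effects` holds each indication's uplift in
    response rate over control (0.0 = the drug is inactive there)."""
    z_int, z_fin = norm.ppf(1 - interim_alpha), norm.ppf(1 - final_alpha)
    pooled_positive = 0
    for _ in range(n_sims):
        surv_t, surv_c = [], []
        for eff in effects:
            t = rng.binomial(n_per_arm, p_control + eff)  # treatment responders
            c = rng.binomial(n_per_arm, p_control)        # control responders
            p_hat = (t + c) / (2 * n_per_arm)
            se = np.sqrt(2 * p_hat * (1 - p_hat) / n_per_arm)
            if se > 0 and (t - c) / (n_per_arm * se) > z_int:
                surv_t.append(t)                          # passes pruning
                surv_c.append(c)
        if not surv_t:
            continue                                      # everything pruned
        # The final pooled test re-uses the interim data: the source of
        # the "random high bias" discussed below
        T, C, N = sum(surv_t), sum(surv_c), n_per_arm * len(surv_t)
        p_hat = (T + C) / (2 * N)
        se = np.sqrt(2 * p_hat * (1 - p_hat) / N)
        if se > 0 and (T - C) / (N * se) > z_fin:
            pooled_positive += 1
    return pooled_positive / n_sims

print(run_basket_trial([0.15, 0.15, 0.15]))       # drug active in all 3
print(run_basket_trial([0.0] * 5))                # 'sugar water' in 5
```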
What are your key recommendations for those considering this approach?
So, our basket trial design begins with selecting the indications that you believe will succeed. We actually recommend that it should be primarily reserved for indications that already have some Phase 2 evidence, preferably from a randomized trial. In addition, there should also be evidence that the biomarker assay works for that particular disease. We also propose that if you have a drug that might be effective for 10 indications, you go ahead and investigate the first few indications in the traditional way. In other words, you show beyond a shadow of a doubt that the drug is working in one indication first. Then you can proceed to investigate the remaining indications within a basket trial.

All of these recommendations are to help reduce your risk of selecting an indication which negatively affects the rest of the indications in the basket.

What are the key challenges of applying the design?
There are a number of challenges. One is making sure that after the ‘pruning’ step you don't have any negative indications remaining. To mitigate this risk, our biggest recommendation is to not begin with any indication in your basket that doesn't already have a lot of evidence. And then you're conducting a powered statistical test of those indications to select only the best ones. Of course, there is still a risk of one indication not working and diluting the results of the others. But you lower that risk by following the common sense guidelines I just mentioned.
The second challenge is controlling the false positive rate (or Type I error) and this is a much more difficult question. 

For the non-statistician, the false positive rate is not the chance that the result of a particular study is wrong. Rather, it’s the chance that a drug which is completely ineffective would nonetheless generate a positive result. In simple terms, if you put sugar water into the trial, how often is it going to tell you that sugar water is the cure for cancer?

Now in a basket trial, where the sugar water is being tried for different indications, controlling this false positive rate is more challenging. Let’s say you're going to try out sugar water in five indications. At the interim analysis you remove two of the indications and pool the other three. What's the chance that the pool is going to be positive? In this example, the truth is that sugar water doesn't work for any of the indications. This is the global null hypothesis - which is to say the drug doesn't work for any of them. When you can control the false positive rate under those circumstances, that's known as weak control of the false positive rate.

It is not easy to achieve even weak control under these circumstances. The act of using interim data to prune the indications, and then still including that data in the final pooled analysis, in itself introduces some bias (“random high bias”). However, we have solved this problem and do achieve weak control of the false positive rate. We do this by aiming for an even lower nominal false positive rate, so that when inflation occurs due to random high bias, the actual false positive rate still remains within the desired range. We provide methods for calculating this lower nominal false positive rate, which is essentially a statistical penalty that we must pay.
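As a rough illustration of how such a penalty might be found by simulation (the authors provide their own methods in the paper listed under Further reading; this grid search is just a stand-in, reusing run_basket_trial() from the sketch above):

```python
import numpy as np

def calibrate_nominal_alpha(target=0.025, n_indications=5,
                            grid=np.linspace(0.005, 0.025, 9)):
    """Grid-search, under the global null, for the largest nominal
    final-analysis alpha whose *empirical* pooled false positive rate
    stays within the target - i.e. the 'statistical penalty'.
    Reuses run_basket_trial() from the earlier sketch."""
    sugar_water = [0.0] * n_indications   # global null: no effect anywhere
    best = grid[0]
    for alpha in grid:                    # ascending nominal alphas
        fpr = run_basket_trial(sugar_water, final_alpha=alpha, n_sims=20_000)
        if fpr <= target:
            best = alpha                  # still within the target
        else:
            break                         # random high bias pushed us over
    return best

print(calibrate_nominal_alpha())  # the reduced nominal alpha to operate at
```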

Now let's imagine we have another drug - in this case, not sugar water. This drug doesn't work for 2 out of 5 indications, but it works for 3. What's the chance that the study will correctly identify only the 3 that it works for, without a fourth one being carried along by accident? And then there are all the permutations of that: for every configuration in which the drug does not work in some indications, those indications should neither survive into the final pool nor appear to be positive. This is called strong control of the false positive rate, or Type I error. Can we control that?

For this design, strong control of the Type I error or false positive rate is obviously desirable but clearly extremely difficult to achieve. At the moment, we are only able to establish the ‘weak control’ I described earlier.
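One way to see concretely why strong control is so much harder is to scan every mixed null/active configuration and estimate, for each one, the chance that a truly null indication rides along in a positive pool; strong control would require bounding the worst of these. Below is a minimal, self-contained sketch in the same simplified setting as before (response rate at both looks, purely illustrative effect sizes and alpha levels, nothing from the authors' actual calculations):

```python
import numpy as np
from itertools import product
from scipy.stats import norm

rng = np.random.default_rng(2)

def false_approval_rate(effects, n=140, p0=0.20, a_int=0.10, a_fin=0.025,
                        n_sims=5_000):
    """Estimate P(pool is positive AND at least one truly null
    indication survived pruning into that pool)."""
    z_int, z_fin = norm.ppf(1 - a_int), norm.ppf(1 - a_fin)
    count = 0
    for _ in range(n_sims):
        surv_t, surv_c, null_in_pool = [], [], False
        for eff in effects:
            t, c = rng.binomial(n, p0 + eff), rng.binomial(n, p0)
            p_hat = (t + c) / (2 * n)
            se = np.sqrt(2 * p_hat * (1 - p_hat) / n)
            if se > 0 and (t - c) / (n * se) > z_int:   # survives pruning
                surv_t.append(t)
                surv_c.append(c)
                null_in_pool |= (eff == 0.0)
        if surv_t and null_in_pool:
            T, C, N = sum(surv_t), sum(surv_c), n * len(surv_t)
            p_hat = (T + C) / (2 * N)
            se = np.sqrt(2 * p_hat * (1 - p_hat) / N)
            if se > 0 and (T - C) / (N * se) > z_fin:   # pooled 'win'
                count += 1
    return count / n_sims

# Worst case over all configurations of 5 indications that contain at
# least one null indication; strong control means bounding this maximum
configs = [c for c in product([0.0, 0.15], repeat=5) if 0.0 in c]
print(max(false_approval_rate(list(c)) for c in configs))
```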


".....we note that the pooled population in the basket design shares some similarity with an all-comer population in a conventional Phase 3 study which may consist of heterogeneous subgroups."


Now you can obviously see the problem with strong control and why it's so difficult. Let's say we have approved three indications based on a pooled result. Could that be because one of the indications is so strong it dominated the statistics? One approach in the end would be to look at all three indications individually, but if you demand statistical proof for each indication then you end up with a trial that doesn't save you anything. You would just have five individual trials powered in the traditional way and administratively bundled together, but without saving any sample size. So currently, we can only say that the pool was positive and if you put sugar water into this trial it probably wouldn't produce that kind of result. Having said all that, we note that the pooled population in the basket design shares some similarity with an all-comer population in a conventional Phase 3 study which may consist of heterogeneous subgroups. While the population on the label after a positive conventional Phase 3 study may exclude certain subgroups, a positive basket trial also doesn’t guarantee the approval of all indications in the pool.

It will be interesting to see under what circumstances health authorities will accept that, because they generally require strong control, which is obviously much easier to achieve when you have only one hypothesis.

And this leads to another challenge, which is that you have to talk with the health authority in advance about the design. Even though our design is much more rigorous and suitable for confirmatory trials than basket trials have been before, the strong control of Type I error issue has not yet been resolved.

How do you hope the design will be applied?
Our hope is that this design will be broadly applied, especially in rare diseases. As I mentioned before, there have been examples of drugs that have been approved on the basis of one tumor shrinking out of five. And our basket trial design would provide a much higher level of evidence to the health authorities. For drugs that are effective, but not spectacular, in rare diseases or for small biomarker-defined subsets of other diseases, the design offers a cost-effective development route.

What feedback have you had so far?
When I present on this topic, the audience reaction has been very positive. I lead the DIA’s Adaptive Design Scientific Working Group, which is mostly statisticians and a few clinicians who have an interest in statistics. This project is actually an outgrowth of that group. When I give talks about this topic, I often recruit several new members to the group because they are excited about the kind of work that the group does. So in that respect, we've had very positive feedback from drug developers.

We would love to get feedback from health authorities, but we haven't had any so far. My sense is that the health authorities would rather give their feedback in the context of a real-life application. In other words, a company approaches them and says: I want to apply this design to my drug - what do you think? And because we don't have strong control of the Type I error, there is still some judgment that they would have to apply on a case-by-case basis.

What’s next for your research?
One of the things that we are interested in is real-world data - by which I mean electronic health records, pharmacy records, or insurance records. One of my colleagues at Georgetown noticed an application for basket trials in cases where a drug is approved for one large disease, and because of its known molecular mechanism of action it's used ‘off-label’ for a bunch of smaller rare diseases that share a similar mechanism. But there's no current evidence about whether this off-label use is effective or not.

"We’re looking into whether you can leverage the electronic health records, pharmacy records and insurance records for this 'off-label' use and use that information to select the indications at the beginning."

This means that some patients may be treated with the drug and not really benefit; on the other hand, in cases where the drug might work, some patients find it's not covered by their insurance due to the lack of evidence.

So the idea is to do a basket trial of some of those smaller indications, answer some of these questions, and provide some evidence. We’re looking into whether you can leverage the electronic health records, pharmacy records and insurance records for this 'off-label' use and use that information to select the indications at the beginning. Sometimes in rare diseases, you don't actually even know what the best endpoint is, but you might be able to get some information from this real-world data about picking an endpoint. You might find that additional real-world data comes out when you're halfway through the trial, and you could also apply some of it to the interim analysis to further sharpen your selection. So we have a simulation project ongoing where we have identified an example of this, and are trying to see if the real-world data improves the performance of the basket trial.

This is another interesting area that we hope we will be able to say something about in a year or two.

We would like to thank both Bob Beckman for sharing his insights and Cong Chen for review support. You can read part 1 of the interview here.

Further reading

Beckman, R., Antonijevic, Z., Kalamegham, R. and Chen, C. (2016), Adaptive Design for a Confirmatory Basket Trial in Multiple Tumor Types Based on a Putative Predictive Biomarker. Clinical Pharmacology & Therapeutics, 100: 617–625. doi: 10.1002/cpt.446

Chen, C. et al. (2016), Statistical Design and Considerations of a Phase 3 Basket Trial for Simultaneous Investigation of Multiple Tumor Types in One Study. Statistics in Biopharmaceutical Research, 8(3): 248–257.


Did you like this article and would you like to learn more about innovative trial strategies in oncology clinical development? Click the button below to download our article 'Are adaptive designs the answer to oncology development success'.

Download Article

 


About Robert Beckman
Robert Beckman, M.D., an oncology clinical researcher and mathematical biologist, has played significant leadership roles in developing new oncology clinical research groups at 4 pharmaceutical companies and in 5 cross-company collaborations, and has brought 23 oncology therapies into first-in-man studies, 5 through Phase 2, and 2 to market. He has co-invented novel clinical strategies for proof-of-concept studies and for early and late biomarker-driven clinical development, including a confirmatory basket trial design. He has developed a new approach to cancer precision medicine based on tumor evolution, which has the potential to dramatically enhance survival and cure rates. His versatile publication record, comprising more than 200 articles and abstracts, ranges from computational chemistry to clinical oncology, emphasizing quantitative approaches. Dr. Beckman is currently Professor of Oncology and of Biostatistics, Bioinformatics, and Biomathematics at Lombardi Comprehensive Cancer Center and the Innovation Center of Biomedical Informatics, Georgetown University Medical Center.
