If you're a company considering an adaptive trial for the first time, you probably have many questions. One of our customers recently sat down with us to discuss their concerns about running an adaptive trial. The ensuing discussion included senior members of Cytel Consulting. Here's a transcript of our conversation.
Q: It was mentioned that there are fewer examples of how adaptive designs can work with cross-over studies. We run a lot of these; is there any advice on how we could use adaptive techniques effectively?
A: Adaptive designs can be very efficient in the context of crossover designs in bioequivalence studies. In particular, when assumptions must be made about the value of the within-subject coefficient of variation, adaptive design techniques like blinded sample size re-estimation offer a practical safeguard.
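To make the point concrete, here is an illustrative sketch (not Cytel's methodology) of why the assumed coefficient of variation matters so much in a 2x2 crossover bioequivalence study. It uses a standard normal-approximation sample size formula for the TOST procedure, assuming a true geometric mean ratio of 1; the function name and defaults are our own assumptions for illustration.

```python
import math
from statistics import NormalDist

def crossover_be_n(cv, alpha=0.05, power=0.80, limit=1.25):
    """Approximate total sample size for a 2x2 crossover average-
    bioequivalence study (normal approximation to the TOST procedure,
    assuming a true geometric mean ratio of 1)."""
    s2 = math.log(1.0 + cv**2)             # within-subject variance, log scale
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided TOST at level alpha
    z_b = NormalDist().inv_cdf(1 - (1 - power) / 2)
    delta = math.log(limit)                # equivalence margin, log scale
    n = math.ceil(2 * (z_a + z_b) ** 2 * s2 / delta ** 2)
    return n + (n % 2)                     # round up to an even total

# If the assumed CV turns out to be optimistic, the required size grows fast:
for cv in (0.20, 0.30, 0.40):
    print(cv, crossover_be_n(cv))
```

Running the loop shows the required total roughly doubling as the CV moves from 20% to 30%, and doubling again toward 40%, which is exactly the risk that blinded sample size re-estimation is designed to absorb mid-trial.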
Q: A lot of these techniques are unfamiliar (and perhaps unintuitive) to our colleagues. What advice would you have on how to persuade people that the implementation of these novel designs does not pose a risk?
A: There are many things in this world that people are unfamiliar with when they first encounter them. No improvement can be made without overcoming unfamiliarity. Since adaptive designs bring many improvements to drug development, it would be very beneficial for your company if your colleagues familiarized themselves with them. As for “unintuitive” designs: adaptive designs are actually very intuitive and logical, and their main concepts and benefits are easy to grasp.
Q: Even when specific measurements are restricted to subsamples, reviewers raise eyebrows at publication time. How will they react to results from studies that were stopped early?
A: When proper procedures (pre-specification, control of the Type I error rate, and adjustment for bias) are implemented, the results of a study that was stopped early are regarded as sound by the scientific and regulatory communities. There are many examples of this.
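Why pre-specified stopping rules matter can be shown with a small Monte Carlo sketch of our own (not part of the conversation): if a trial under a true null effect is tested at an interim look and again at the end, each naively at the unadjusted two-sided 5% level, the overall false-positive rate is inflated well above 5%. Pre-specified group sequential boundaries exist precisely to correct this.

```python
import math
import random

random.seed(1)

def naive_two_look_error(n_sims=20000, crit=1.959964):
    """Simulate null trials with an interim look at half the information
    and a final look, each (naively) tested at |z| > 1.96."""
    rejections = 0
    for _ in range(n_sims):
        z_half = random.gauss(0.0, 1.0)              # interim z-statistic
        z_incr = random.gauss(0.0, 1.0)              # independent increment
        z_full = (z_half + z_incr) / math.sqrt(2.0)  # final z-statistic
        if abs(z_half) > crit or abs(z_full) > crit:
            rejections += 1
    return rejections / n_sims

rate = naive_two_look_error()
print(rate)  # noticeably above the nominal 0.05
```

The simulated rate lands near the analytic value of roughly 8% rather than 5%, which is why boundaries such as O'Brien-Fleming or Pocock must be specified in the protocol before the trial starts.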
Q: Are flexible designs a violation of the “keep it simple” principle?
A: Some adaptive designs are relatively simple; others may be more complex. The main problem is that “keep it simple” is not always a good principle. If a slightly more complex design can de-risk drug development, improve the probability of success, assign more patients to better treatments/doses, discontinue inefficient ones, or reduce costs and development time, then “keep it simple” is a bad principle, and in some cases even an unethical one.
Q: Do adaptive designs save subjects but cost extra in resources administered by study clinicians and statisticians?
A: Adaptive designs can cost a little more per patient, and they also require additional statistical resources. But this additional cost is far outweighed by the benefits. One has to keep in mind that the greatest benefits of adaptive designs are better and more ethical drug development, including an improved probability of success. Reductions in development time and costs are also very appealing. And not to leave the question partially answered: overall, adaptive designs generally reduce the cost of drug development, sometimes substantially.
Q: What is the overhead necessary to ensure blinding and correct treatment assignment in a flexible versus a conventional trial?
A: Most Phase 3 trials are now implemented with some form of flexibility (e.g. a GSD with futility stopping boundary) so companies managing late phase studies are generally used to the implications of interim analyses (or at least reviews) managed through an independent committee (and the impacts of such processes on cost). The upfront costs for the simulation study, regulatory preparations (including defending the design before the agencies), and developing customized software (if necessary) are very much dependent on design and study phase.
Q: How is contracting managed? What are the additional costs (in %) when the study is not stopped at an interim analysis?
A: Generally you budget for the full trial, and certain costs are then reduced if you stop early. One case where this does not always hold is sample size re-estimation: some clients ask only for budgets that assume no increase, and once the trial starts they begin thinking about what would be needed if the sample size were increased (this is often the case for smaller biotechs that seek additional funding to cover any potential increase). Ideally, you would know the full budget up front, with the maximum potential sample size increase included, and also know by how much it would be reduced if the increase is not implemented. As for the percentage of additional costs, this is highly dependent on the trial itself, including whether treatment costs need to be included in the evaluation and how expensive the treatment manufacturing and supply process actually is.
Q: External reviewers (rarely statisticians) sometimes request pairwise t-tests when a linear mixed model has been used, because they mistrust “manipulating” the data with sophisticated statistical modelling. So how would we overcome their mistrust of the advanced statistical reasoning underlying a flexible design?
A: This is unrelated to adaptive design; rather, it is related to modelling. Adaptive designs can be implemented with pairwise tests or with models. It is just that models are more efficient at the exploratory stage of development, while at the confirmatory stage some form of pairwise testing is usually required.