'The aim of a discussion should not be victory but progress.'
This principle, expressed by the French essayist Joseph Joubert, captures the spirit of scientific debate well. Within the clinical development space in particular, the field of adaptive designs has seen its fair share of both discussion and progress. In this blog we’ll take a look at one debated area: the efficiency of adaptive sample size re-estimation (SSR) designs.
Debating Adaptive SSR
Though the statistical framework for confirmatory adaptive designs was introduced back in the late 1990s, it was not applied to actual studies until nearly a decade later. The area of unblinded sample size reassessment sparked particularly vigorous debate, with passionately held positions tabled at conferences and in publications by industry, regulatory, and academic statisticians.
Since 2009 and 2010, when the European Medicines Agency (EMEA) and the FDA, respectively, released their adaptive design guidance documents, adaptive designs have found their way into regulatory review, and some all the way to New Drug Application (NDA) approval.
Among those submitted for regulatory review, the most popular have in fact been designs that propose an SSR. Opponents of these designs often base their criticism on the premise that such designs are inefficient. But let's ask the question:
How applicable is this charge of inefficiency in a practical setting?
The February 10, 2016 issue of Statistics in Medicine (volume 35, pages 350-358) contains a commentary by Mehta and Liu on the paper "25 Years of Confirmatory Adaptive Designs: Opportunities and Pitfalls" by Bauer, Bretz, et al. (Statistics in Medicine 2016, volume 35, pages 325-347). The commentary tackles this question head on.
Mehta and Liu debunk the myth that adaptive sample size re-estimation (SSR) designs should not be used because they are inefficient. They argue that the proper way to compare the efficiencies of non-adaptive and adaptive-SSR designs is to use a common standard for the comparison.
To our knowledge, the Mehta-Liu paper is the first to use such a common standard to compare non-adaptive and adaptive SSR designs.
In the Mehta and Liu paper this is achieved by constructing two types of power plots. The first type plots unconditional power versus the treatment effect δ for both designs, while maintaining the same expected sample size at each value of δ.
Figure 1: Unconditional power comparisons between adaptive and fixed designs: conservative sample size re-estimation.
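To give a concrete flavor of this kind of matched comparison, here is a minimal Monte Carlo sketch. It is an illustration only, not Mehta and Liu's actual computation: the two-stage promising-zone rule, the zone thresholds (0.36 and 0.80), the stage sizes, and the effect size are all assumptions made up for this example. The design combines the stagewise z-statistics with preplanned inverse-normal weights, a standard way to keep the type I error at level α after an unblinded SSR; the simulation reports unconditional power together with the expected per-arm sample size, and a fixed design with that same expected sample size can then be evaluated analytically.

```python
import random
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
Z_A = nd.inv_cdf(0.975)  # critical value for a one-sided test at alpha = 0.025

def simulate_ssr(delta, n1=50, n2_plan=50, n2_max=200,
                 cp_lo=0.36, cp_hi=0.80, n_sims=20000, seed=1):
    """Monte Carlo power and expected per-arm sample size of a two-stage
    promising-zone SSR design (standardized effect delta, sigma = 1).
    The final test combines stagewise z-statistics with preplanned
    inverse-normal weights, which preserves the type I error even when
    the second-stage size is changed at the interim look."""
    rng = random.Random(seed)
    w1 = n1 / (n1 + n2_plan)          # preplanned weights, held fixed
    w2 = 1.0 - w1                     # even if n2 is later increased
    wins, total_n = 0, 0
    for _ in range(n_sims):
        z1 = rng.gauss(delta * sqrt(n1 / 2), 1)        # stage-1 z-statistic
        d_hat = max(z1 * sqrt(2 / n1), 1e-6)           # interim effect estimate
        base = (sqrt(w1) * z1 - Z_A) / sqrt(w2)
        cp = nd.cdf(base + d_hat * sqrt(n2_plan / 2))  # CP at the planned size
        n2 = n2_plan
        if cp_lo <= cp < cp_hi:                        # promising zone: re-size
            need = nd.inv_cdf(cp_hi) - base            # stage-2 drift for CP = cp_hi
            n2 = min(n2_max, max(n2_plan, round(2 * (need / d_hat) ** 2)))
        z2 = rng.gauss(delta * sqrt(n2 / 2), 1)        # stage-2 z-statistic
        wins += sqrt(w1) * z1 + sqrt(w2) * z2 > Z_A    # weighted combination test
        total_n += n1 + n2
    return wins / n_sims, total_n / n_sims

power_adaptive, expected_n = simulate_ssr(delta=0.35)
# Fixed design matched to the adaptive design's expected sample size:
power_fixed = nd.cdf(0.35 * sqrt(expected_n / 2) - Z_A)
```

Plotting `power_adaptive` and `power_fixed` over a grid of δ values would reproduce the shape of a Figure 1-style comparison under these illustrative assumptions.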
The second type is the plot of conditional power at the time of the interim analysis, when data from the trial are at hand and a sample size re-estimation is being considered.
Figure 2: Conditional power of adaptive design: conservative sample size re-estimation.
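As a rough illustration of the conditional-power calculation behind this type of plot, the sketch below uses the standard normal-approximation formula for a two-arm, one-sided z-test, with the combination-test weights fixed at the planned information fraction t so that the second-stage size is free to change. This is a generic textbook formula, not necessarily the exact computation behind the figure, and the interim numbers are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def conditional_power(z1, t, delta_over_sigma, n2_per_arm, alpha=0.025):
    """Probability of rejecting at the final analysis of a one-sided two-arm
    z-test, given interim z-statistic z1 observed at planned information
    fraction t, assuming standardized effect delta_over_sigma and
    n2_per_arm further subjects per arm. The stagewise statistics are
    combined with fixed weights sqrt(t) and sqrt(1 - t), so n2_per_arm may
    differ from the originally planned second stage."""
    z_alpha = nd.inv_cdf(1 - alpha)
    drift = delta_over_sigma * sqrt(n2_per_arm / 2)  # mean of the stage-2 z
    return nd.cdf((sqrt(t) * z1 - z_alpha) / sqrt(1 - t) + drift)

# A promising interim (z1 = 1.0 at t = 0.5) under a standardized effect of 0.3:
cp_planned = conditional_power(1.0, 0.5, 0.3, n2_per_arm=100)  # ~0.64
cp_boosted = conditional_power(1.0, 0.5, 0.3, n2_per_arm=300)  # ~0.97
```

The jump from roughly 64% to roughly 97% conditional power when the second stage is enlarged is exactly the kind of interim-stage gain the Figure 2 comparison is about.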
Efficiency is in the eye of the beholder
As we can see from these two plots, efficiency is really in the eye of the beholder. On the one hand, depending on the type of SSR rule adopted, an adaptive design may suffer a small power loss at the design stage. On the other hand, if the interim results are promising, this loss will be more than compensated for at the interim monitoring stage.
In the end, the sponsor's preference will depend on the utility assigned to making large investments before the trial begins, with no opportunity for subsequent change, versus making a smaller up-front investment with the option to increase the investment if promising results emerge as the trial progresses.
Many thanks to Dr Cyrus Mehta for providing the summary of the Mehta and Liu publication for our blog.