In the 2010 draft FDA ‘Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics’, the agency makes an important distinction between ‘well understood’ and ‘less well understood’ adaptive designs.
‘Well understood’ adaptive designs include approaches such as adaptation of eligibility criteria, adaptation for early stopping, and adaptations to maintain study power based on blinded interim analyses of aggregate data. For these well-understood designs, the FDA has little concern about their potential to be implemented in adequate and well-controlled trials. On the other hand, at least at the time the guidance was drafted, ‘less well understood’ designs (which include approaches such as adaptations for dose selection, adaptation of the patient population based on treatment-effect estimates, and adaptation of endpoint selection based on interim estimates of treatment effect) gave greater concern. Interestingly, the FDA’s ‘Adaptive Designs for Medical Device Clinical Studies: Guidance for Industry and Food and Drug Administration Staff’ does not adopt this distinction.
A recent article, ‘Addressing Challenges and Opportunities of “Less Well-Understood” Adaptive Designs’ (He et al. 2016) (1), takes a look at some of the perceived challenges of these designs and ways in which they may be overcome. The publication is the result of work by a best-practice sub-team formed by the DIA Adaptive Design Scientific Working Group in January 2014. Cytel's Yannis Jemiai is a member of this group and one of the co-authors of the article.
In this blog, we take a look at a few of the challenges outlined and some of the suggested mitigations. One aspect covered in the publication is seamless designs; given the scope of that topic, we'll devote a separate blog to it.
Type I error control
Type I error is an important concern for agencies in relation to adaptive trials. The Type I error rate for a study may be inflated if inadequate adjustment is made for all possible adaptation choices.
Type I error may be controlled using a variety of methods, including weighted or unweighted test statistics, p-value combination methods, and Bonferroni techniques. It may also be possible to justify Type I error control using simulation, although this approach has some challenges. The authors caution that ‘complex methods that may result in difficulty in interpretation of results, from both statistical and clinical perspectives, would not be recommended.’
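To illustrate why a pre-specified combination approach preserves the Type I error rate even under data-driven adaptation, the Python sketch below (our own toy example, not taken from the article; the sample sizes and adaptation rule are hypothetical) simulates a two-stage design using an inverse normal combination test in which the second-stage sample size depends on the interim result:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
Z_ALPHA = NormalDist().inv_cdf(1 - 0.025)   # one-sided 2.5% critical value
W1 = W2 = math.sqrt(0.5)                    # pre-specified combination weights

def stage_z(n, mu=0.0):
    """z-statistic for the mean of n unit-variance observations."""
    xbar = sum(random.gauss(mu, 1.0) for _ in range(n)) / n
    return xbar * math.sqrt(n)

sims, rejections = 20000, 0
for _ in range(sims):
    z1 = stage_z(50)                        # interim look after 50 subjects
    # data-driven adaptation: triple stage 2 when the interim looks weak
    n2 = 150 if z1 < 0.5 else 50
    z2 = stage_z(n2)                        # independent second-stage statistic
    # with fixed weights, W1*z1 + W2*z2 is N(0,1) under H0 regardless of n2
    if W1 * z1 + W2 * z2 > Z_ALPHA:
        rejections += 1

print(round(rejections / sims, 3))          # close to the nominal 0.025
```

The key point is that the weights are fixed in advance; recomputing them from the realized sample sizes would reintroduce the inflation the method is designed to avoid.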
Interim data review process and role of Data Monitoring Committee
Challenge: Knowledge of interim results could introduce bias and compromise the trial.
Obviously, those who are reviewing the interim results should be selected to have the right experience and expertise for the tasks in hand. The knowledge of the interim results should also be tightly restricted to the decision-making group.
Read more: Responsibilities of Data Monitoring Committees: Consensus Recommendations (Bierer et al 2016)
Statistical bias in estimates of treatment effect
Challenge: Treatment effect estimates for some types of adaptations have the potential to overstate the true effect size.
The likelihood of bias increases with the complexity and flexibility of the adaptation rules, with larger sample sizes, and when interim analyses take place later in the trial. Understanding how these factors affect bias is important.
Simulation can also be used effectively to assess and understand the potential for estimation bias in particular situations.
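As a toy illustration of how such a simulation might look (our own example, not from the article; the effect size and sample sizes are hypothetical), the Python sketch below simulates a dose-selection design in which two arms share the same true effect. Because the better-looking arm is selected at the interim, the naive pooled estimate overstates the true effect:

```python
import random
from statistics import mean

random.seed(2)
TRUE_EFFECT = 0.3   # same true effect in both arms (hypothetical value)
N_STAGE = 50        # subjects per arm per stage (hypothetical value)

def arm_mean(n, mu):
    """Observed mean response for n unit-variance subjects."""
    return mean(random.gauss(mu, 1.0) for _ in range(n))

naive_estimates = []
for _ in range(10000):
    est_a = arm_mean(N_STAGE, TRUE_EFFECT)      # interim estimate, dose A
    est_b = arm_mean(N_STAGE, TRUE_EFFECT)      # interim estimate, dose B
    winner_interim = max(est_a, est_b)          # adaptation: keep the better arm
    stage2 = arm_mean(N_STAGE, TRUE_EFFECT)     # fresh data on the selected arm
    # naive analysis pools the selected interim data with the stage-2 data
    naive_estimates.append((winner_interim + stage2) / 2)

bias = mean(naive_estimates) - TRUE_EFFECT
print(round(bias, 3))   # positive: the naive estimate overstates the effect
```

Running a simulation like this at the design stage quantifies the naive bias, and gives a benchmark against which adjusted estimators can be compared.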
Read more: Simulation capabilities in East 6.4
Potential for subject heterogeneity across stages
Changes to inclusion/exclusion criteria and/or the opening of new study sites in new regions or countries due to slow enrollment can increase subject heterogeneity. This is a problem for any trial, but it is of particular concern in adaptive trials because the methods assume consistency between the data used to determine the adaptation and the data collected after it.
To mitigate these challenges, protocol amendments should be minimized where possible. To address geographic issues, it may be prudent for regions or sites to enroll within a similar timeframe; indeed, any measures that encourage sites to accrue subjects at similar rates are helpful. During the course of the trial, it's critical to closely monitor blinded event rates and other aspects to identify any signals that have the potential to compromise a trial's results.
Potential for making decisions based on highly variable and often unreliable interim results
Interim results have the potential to be misleading.
Interim analyses should be well planned and well timed based on the type of trial, the endpoint, and enrollment conditions. The authors note that ‘An important consideration in timing an interim analysis is identifying the earliest point at which enough valid and reliable information has been collected to act upon with confidence’. In general, for a trial to benefit from interim analyses and adaptations, enrollment must be slow enough, and the endpoint observed quickly enough, for the interim results to influence the remainder of the trial.
Potential for overrun of subjects being recruited
Should enrollment continue while an interim analysis is being conducted? On the one hand, it's necessary to consider how to handle subjects enrolled to arms that might not continue. On the other hand, if enrollment is paused, this has an operational impact and may also introduce bias.
A prolonged period of overrun can negatively impact the efficiency of the design and enroll more patients than desired to treatments that may be dropped. Proper planning, including the use of predictive simulations, is key. If there are study-specific concerns about how these patients will be handled, or how their data will be included in the analyses, then efforts should be taken to minimize the overrun.
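As a minimal sketch of such a predictive simulation (the enrollment rate and analysis duration are hypothetical values of our own, not from the article), the Python example below estimates the number of subjects likely to be enrolled while an interim decision is pending, assuming Poisson arrivals:

```python
import math
import random
from statistics import mean

random.seed(3)
ENROLL_RATE = 8      # hypothetical: subjects enrolled per week
ANALYSIS_WEEKS = 6   # hypothetical: weeks to clean data, analyse, and decide

def poisson(rate):
    """Poisson draw via Knuth's multiplication method."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def overrun_count():
    """Subjects enrolled while the interim decision is pending."""
    return sum(poisson(ENROLL_RATE) for _ in range(ANALYSIS_WEEKS))

draws = [overrun_count() for _ in range(5000)]
print(round(mean(draws), 1))   # roughly ENROLL_RATE * ANALYSIS_WEEKS = 48
```

Varying the assumed rate and decision time shows how quickly overrun grows, which can inform whether pausing enrollment during the interim analysis is worth the operational cost.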
The sharing of such best practices in adaptive designs is an important means of promoting understanding, and ultimately removing barriers to their usage. As these steps are consistently taken by the industry, the authors note that:
‘It is not unreasonable to predict that some design types may be in the process of being viewed as sufficiently well understood to allow their broader usage when applied appropriately, even in confirmatory settings.’
It's important to note that while adaptive designs have the potential to improve efficiency and maximize the value obtained from clinical trials, not every trial is a suitable candidate. Cytel statistical consultants have extensive experience in designing and implementing both traditional and adaptive approaches, and work closely with clients to identify the best solution for their specific requirements. To find out more about how we can help, click below.
1) He, W., Gallo, P., Miller, E., Jemiai, Y., Maca, J., Koury, K., Fan, X.F., Jiang, Q., Wang, C. and Lin, M. (2016) ‘Addressing challenges and opportunities of “Less Well-Understood” adaptive designs’, Therapeutic Innovation & Regulatory Science. doi: 10.1177/2168479016663265.
2) Bierer, B.E., Li, R., Seltzer, J., Sleeper, L.A., Frank, E., Knirsch, C., Aldinger, C.E., Levine, R.J., Massaro, J., Shah, A., Barnes, M., Snapinn, S. and Wittes, J. (2016) ‘Responsibilities of data monitoring committees: Consensus recommendations’, Therapeutic Innovation & Regulatory Science, 50(5), pp. 648–659. doi: 10.1177/2168479016646812.