Much like finding the best route on your GPS, simulation-guided clinical trial design can help sponsors select the best path forward, harnessing data to forecast risks, navigate uncertainties, align on goals, and produce pressure-tested clinical trial designs. In the following interview, and as the inaugural post of our new Industry Voices series, Cytel’s Chief Medical Officer Dr. Albert Kim speaks with Dr. Esha Senchaudhuri to discuss many aspects of simulation-guided design – its abilities, its benefits, and much more.
To begin, what does the statistical design of a clinical trial have to do with its strategic priorities?
I think they're intimately related. The strategic priorities of a sponsor should directly influence the statistical design of a clinical trial. Every clinical trial should generate a compelling dataset from which clear decisions can be made – whether your drug is safe and tolerable, whether it's efficacious, what its risk-benefit profile is, whether to trigger the next round of investment for its development, or whether to stop the program altogether. These types of strategic priorities rely on good statistical design because that design enables a clear decision. Conversely, if the statistical design is poor and decisions cannot be made with confidence, the data from a clinical trial has low value.
And you've said in a few places that Simulation-Guided Design helps with this process. What is Simulation-Guided Design? Could you explain in a non-technical sense how simulation works?
Yes, that's a terrific question. There are a lot of assumptions and misconceptions about what simulation is and how we use it at Cytel, so let me start with the fact that simulation as we use it is not some black box comprised of artificial intelligence that magically predicts what your new drug or device will do.
Simulation, as we use it, generates data for teams to decide how to best design their studies based on their priorities. For a given set of assumptions and circumstances, we use the established principles of Monte Carlo simulation to show what might happen in a clinical trial if it were run many times. Critically, this approach helps teams better understand variability and uncertainty – the probability that an outcome will differ from their desired projection. For example, if they believe their new drug might reduce mortality by 20% based on early data, what is the likelihood that they will observe that same result in a larger clinical trial? A simulation-guided approach helps visualize the potential effects of variability in key factors like recruitment rate, treatment effect, and drop-outs.
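To make the idea concrete, here is a minimal Python sketch of the kind of Monte Carlo exercise described above: a drug believed to reduce mortality by 20% is "tested" in many simulated trials to see how much the observed reduction varies. All numbers (control mortality of 30%, 200 patients per arm) are made-up illustrations, and this is not the algorithm Solara® actually uses.

```python
import random

def simulate_trials(n_per_arm=200, p_control=0.30, relative_reduction=0.20,
                    n_sims=10_000, seed=42):
    """Monte Carlo sketch: how variable is the *observed* relative
    mortality reduction across many hypothetical trials, given an
    assumed true reduction?"""
    random.seed(seed)
    p_treat = p_control * (1 - relative_reduction)  # true treated-arm mortality
    observed = []
    for _ in range(n_sims):
        # Simulate deaths in each arm as independent Bernoulli outcomes
        deaths_c = sum(random.random() < p_control for _ in range(n_per_arm))
        deaths_t = sum(random.random() < p_treat for _ in range(n_per_arm))
        if deaths_c > 0:
            observed.append(1 - deaths_t / deaths_c)  # observed relative reduction
    # Fraction of simulated trials whose observed reduction is at least 20%
    at_least_20 = sum(o >= 0.20 for o in observed) / len(observed)
    return observed, at_least_20
```

Running this shows the key point about uncertainty: even when the true effect really is a 20% reduction, roughly half of the simulated trials observe something smaller, simply because of sampling variability.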
So simulations can tell you how variability affects observed results and help teams make informed decisions about how to design a clinical trial so that it has the best chance of success when they actually run it in the real world, out in the wild.
Simulation-Guided Design is a lot like something that people do in everyday life right now. Many of us drive a car from point A to point B with the help of a GPS application on our mobile phones or in our cars. If you use one of those applications, you are using Simulation-Guided Driving. If you tell the app you want to drive from Framingham to Logan Airport, the app tells you how long it would take if you took the highway or local roads, based on a rudimentary simulation. And then you select a route based on the information it gives you.
This is very similar to what our Solara® software platform does for clinical trial design. Solara® can quickly tell a team the estimated duration, cost, and power a particular trial design has under specific scenarios the trial might encounter, and the team can then make a confident decision about which design best fits their priorities.
So these powerful simulations are meant to help sponsors pressure test design options. Could you talk a little bit about what pressure testing designs means in this context?
Sure. Across many industries, pressure testing is a fundamental best practice in how you would design, engineer, and manufacture a product, whether it's a shoe or a house or a car. Let's take the car example. If it drives well in a straight line, that's good. But how does it handle around curves? Or what does it do when it rains? How does your design perform under unanticipated or stressful situations?
So if we apply that same school of thought to clinical trial design, pressure testing means subjecting the design to multiple stressors so you can understand how your trial would perform in less than ideal circumstances.
And a concrete example of that is the treatment effect of a certain drug. Consider a cancer drug that you think will lower mortality by 25%. How would a clinical trial design perform if this drug is more or less effective than that? Let's say it's only 10% effective, or 50% effective – will you still have the ability to make clear decisions from the data generated? How will those scenarios affect when your trial completes, or when you might be able to accelerate your development plans? These kinds of questions are crucial in the clinical development world and can be informed by thoughtful scenario-based pressure testing using simulation-guided trial design.
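The pressure-testing idea above can be sketched with a small simulation: estimate the power of a simple two-arm trial under several assumed effect sizes (10%, 25%, and 50% relative reductions) using a basic two-proportion z-test. The sample size, control-arm rate, and test choice are all hypothetical illustrations, not a recommendation.

```python
import math
import random

def trial_significant(n_per_arm, p_control, p_treat, z_crit=1.96):
    """One simulated trial: two-sided two-proportion z-test at alpha = 0.05."""
    dc = sum(random.random() < p_control for _ in range(n_per_arm))
    dt = sum(random.random() < p_treat for _ in range(n_per_arm))
    pooled = (dc + dt) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    if se == 0:
        return False
    return abs(dc - dt) / n_per_arm / se > z_crit

def power_by_effect(reductions=(0.10, 0.25, 0.50), n_per_arm=500,
                    p_control=0.30, n_sims=2000, seed=7):
    """Simulated power of the same design under different true effects."""
    random.seed(seed)
    power = {}
    for rr in reductions:
        p_treat = p_control * (1 - rr)
        hits = sum(trial_significant(n_per_arm, p_control, p_treat)
                   for _ in range(n_sims))
        power[rr] = hits / n_sims
    return power
```

A design that looks comfortable at a 25% reduction may have very little chance of yielding a clear answer if the true effect is only 10% – exactly the kind of stress scenario worth knowing about before the trial starts.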
Won't there always be operational uncertainties? How do you tie that back to achieving your strategic priorities?
Absolutely. There are many uncertainties when you actually take your clinical trial design and execute it in the real world. I think we would all acknowledge that it's very difficult to predict the future, if not impossible. You don't know which of the circumstances you will encounter, whether they are problems with shipping drug supply, or new government regulations.
But while prediction is difficult, you can plan for the highest risks using simulations to guide your design. For example, if you simulate a scenario where enrollment is slow, your team can understand how that affects your timelines and ability to deliver the critical data you need and react accordingly. Conversely, looking at a simulation of rapid enrollment lets your team understand how quickly you might have the critical data and plan for accelerating your program and getting a novel medicine to patients in need. You can choose to make your trial more robust in certain dimensions so that it can withstand stress.
Simulation-guided design helps provide a clear sense of the boundaries within which your clinical trial can perform successfully so that upon execution, you'll be better prepared as a sponsor to respond to the actual conditions that you see.
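The enrollment scenarios just described can also be sketched in a few lines: model patient accrual as a Poisson process and compare a slow-recruitment scenario against a fast one. The target size and monthly rates below are invented for illustration only.

```python
import random

def months_to_enroll(target_n, patients_per_month, n_sims=2000, seed=3):
    """Monte Carlo sketch of enrollment duration, modeling accrual as a
    Poisson process: total time is the sum of exponential inter-arrival
    times. Returns the median and 90th-percentile duration in months."""
    random.seed(seed)
    sims = []
    for _ in range(n_sims):
        t = sum(random.expovariate(patients_per_month) for _ in range(target_n))
        sims.append(t)
    sims.sort()
    return sims[len(sims) // 2], sims[int(0.9 * len(sims))]

# Hypothetical scenarios: slow (20/month) vs. fast (40/month) accrual of 600 patients
slow_median, slow_p90 = months_to_enroll(600, 20)
fast_median, fast_p90 = months_to_enroll(600, 40)
```

Comparing the two scenarios puts rough numbers on the timeline risk: halving the recruitment rate roughly doubles the expected enrollment period, and the 90th percentile shows how much worse than the median a sponsor should be prepared for.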
So this idea of robustness seems to be key. Can you talk a little bit more about what a robust design is, and best practices for achieving robustness when you design a clinical trial?
There are a couple concepts wrapped up in there. Robustness is a concept that can be applied to many different dimensions. You can make your clinical trial withstand extreme conditions for a number of different aspects.
Going back to the example of treatment effect, let's say the highest priority for a team is to very clearly show that you are better than the standard-of-care medicine. Designing a robust trial from a treatment effect perspective could place extra emphasis on the ability to draw a statistical conclusion about that treatment effect by, for example, expanding enrollment in the study to add power. But that's not the only way to design a clinical trial. Just like there are many ways to design a house, there are many ways to design a trial, and some sponsors choose not to be robust in the treatment effect area. Some might want to place the emphasis on speed. So they want to be robust in the speed element, meaning they will trade speed for cost or trade speed for power. You can't have everything, but using Simulation-Guided Trial Design, you can see how your design aligns with your strategic priorities and then tune your design to invest the most (increase robustness) in those elements of your trial that are most important to you.
So one of the cornerstones of this design aspect for most sponsors is choosing the right endpoint for a clinical trial. How does Simulation-Guided Design, when you talk about aligning strategy with clinical uncertainty, help you choose the right endpoints?
That's a nuanced discussion, and choosing the right endpoints is a deep topic. At a high level, the way Simulation-Guided Trial Design helps you with endpoint selection is not necessarily from a scientific perspective, but from a technical perspective. Choices like a single binary endpoint versus a composite endpoint, or a particular hierarchical testing structure, are complicated issues that have many implications and ripple effects on the way your design will function. Some of them are counterintuitive because they're additive or non-linear.
By showing teams the implications and ripple effects of their choices on power, speed, and cost, Simulation-Guided Trial Design can help inform their decisions about which endpoint strategy best fits their priorities. A conventional limitation of SGTD has been the high resource burden of coding and computation. However, a digital development platform like Solara® can run tens of thousands of simulations within minutes, so a team can make educated choices about their options in a time-efficient manner.
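One of the counterintuitive ripple effects mentioned above is easy to show with a tiny sketch of hierarchical (fixed-sequence) testing, a standard multiplicity-control strategy: endpoints are tested in a pre-specified order, and a failure anywhere in the chain blocks every endpoint after it. The endpoint names and p-values below are purely hypothetical.

```python
def fixed_sequence_test(p_values_in_order, alpha=0.05):
    """Hierarchical (fixed-sequence) testing sketch: endpoints are tested
    in a pre-specified order, each at the full alpha level. Testing stops
    at the first non-significant result; later endpoints cannot be
    claimed, no matter how small their p-values are."""
    wins = []
    for name, p in p_values_in_order:
        if p < alpha:
            wins.append(name)
        else:
            break  # the gatekeeper fails, so everything downstream fails too
    return wins

# Hypothetical readout: PFS misses, so QoL cannot be claimed despite p = 0.001
result = fixed_sequence_test([("OS", 0.01), ("PFS", 0.08), ("QoL", 0.001)])
```

In this example the quality-of-life endpoint has the strongest p-value of all three but still cannot be claimed, because it sits behind a failed gatekeeper. Simulating the design ahead of time surfaces exactly these ordering effects.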
So if sponsors are now comparing tens of thousands of new designs, how can they sort through all of these design options quickly to meet their strategic objectives?
I think this is a crucial point. First of all, it's not necessary to go through tens of thousands of designs to come up with the one that best fits you. But if you didn't know the choices existed in the first place, you wouldn't know what you were missing. A key aspect of Simulation-Guided Trial Design is showing people options they might not have considered, so that clients, sponsors, and clinical trial designers can select the best one consciously rather than rejecting options through omission, ignorance, or lack of time.
If you have 10,000 design options, one way to sort through them quickly is by using objective numerical criteria in a data-driven process. In other words, assigning each design a score.
For example, in Solara®, each design has a numeric weighted score based on how a given sponsor prioritizes factors like speed, power, and cost. It's an easy way to sort and visualize the performance or the goodness of fit for a particular design with your priorities. In general, the highest scoring designs are better fits for the priorities a team has set.
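A minimal version of this kind of weighted scoring is easy to sketch: normalize each attribute so higher is better, apply the sponsor's weights, and sort. This is an illustrative toy, not Solara®'s actual scoring method, and the three candidate designs and their numbers are invented.

```python
def score_designs(designs, weights):
    """Score candidate designs by a weighted sum of normalized attributes,
    then sort best-first. Higher power is better; lower duration and cost
    are better, so those are inverted before weighting."""
    def norm(values, higher_is_better):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [((v - lo) / span) if higher_is_better else ((hi - v) / span)
                for v in values]

    powers = norm([d["power"] for d in designs], higher_is_better=True)
    speeds = norm([d["duration_months"] for d in designs], higher_is_better=False)
    costs = norm([d["cost_musd"] for d in designs], higher_is_better=False)
    scored = [{**d, "score": round(weights["power"] * p + weights["speed"] * s
                                   + weights["cost"] * c, 3)}
              for d, p, s, c in zip(designs, powers, speeds, costs)]
    return sorted(scored, key=lambda d: d["score"], reverse=True)

# Hypothetical candidate designs
designs = [
    {"name": "A", "power": 0.90, "duration_months": 36, "cost_musd": 40},
    {"name": "B", "power": 0.80, "duration_months": 24, "cost_musd": 30},
    {"name": "C", "power": 0.85, "duration_months": 30, "cost_musd": 25},
]
# A speed-first sponsor: speed 0.5, power 0.3, cost 0.2
ranking = score_designs(designs, weights={"speed": 0.5, "power": 0.3, "cost": 0.2})
```

With speed weighted most heavily, the fastest design rises to the top; re-running with a power-heavy weighting surfaces a different winner, which is exactly the "change the lens, see a different best design" behavior described later in the interview.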
Right. So you'd have a set of priorities, you'd figure out how to weight them and then a good simulation platform would actually take those weights and help you sort through the designs based on how you've weighted these different strategic objectives that you've been talking about. I want to quickly shift gears a little bit. It's Cytel, so a question about adaptive designs. When you're dealing with an adaptive design, won't a further element of uncertainty be when an interim analysis is held? How does that play a role in choosing the optimal design for your needs?
Again, this gets back to the concept of how you can make a clear decision based on a dataset. Timing of an interim analysis closely relates to how big or complete the dataset is and, as a consequence of that, how clear a decision you can make on a particular question.
So when you choose to execute an interim analysis is a big choice from a design perspective. Too early, you may not have sufficient power to make a clear decision and the exercise may be futile. Too late and the interim results may not be available to affect the course of your program. Simulations can help you predict what your power is going to be at a certain point in time for an interim analysis dataset and help you make the most intelligent decision about where to place that interim analysis in time.
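The "too early vs. too late" trade-off can be made tangible by simulating the power available at interim looks placed at different fractions of full enrollment. This sketch ignores alpha-spending adjustments that a real group-sequential design would apply, and every number in it is a hypothetical illustration.

```python
import math
import random

def simulated_power(n_per_arm, p_control, p_treat, n_sims=1000,
                    z_crit=1.96, seed=11):
    """Crude simulated power of a two-sided two-proportion z-test at a
    given sample size. A real interim look would also apply an
    alpha-spending rule, which this sketch omits."""
    random.seed(seed)
    hits = 0
    for _ in range(n_sims):
        dc = sum(random.random() < p_control for _ in range(n_per_arm))
        dt = sum(random.random() < p_treat for _ in range(n_per_arm))
        pooled = (dc + dt) / (2 * n_per_arm)
        se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm) or 1.0
        if abs(dc - dt) / n_per_arm / se > z_crit:
            hits += 1
    return hits / n_sims

# Hypothetical trial: 600 per arm at full enrollment, 30% vs. 24% mortality.
# Estimate power if the interim look lands at 25%, 50%, 75%, or 100% of enrollment.
full_n = 600
looks = {f: simulated_power(int(full_n * f), 0.30, 0.24)
         for f in (0.25, 0.5, 0.75, 1.0)}
```

The curve this produces is the designer's raw material: power climbs steeply between early and late looks, so the simulation shows roughly where an interim analysis stops being futile and starts being informative.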
We've talked about multiple objectives that sponsors can have when designing clinical trials. So say that a sponsor has more than one objective – an expedited timeline, say, and a well-powered clinical trial. How can simulation help decision-makers optimize across more than one of these parameters?
Key question here, which goes back to the discussion thread we had earlier on robustness and sorting through 10,000 trials. An important first step is defining what your priorities are, in what order, and with what weight. And a good simulation platform will then apply that decision rule, that weighting function. Suppose the most important thing to me is speed, the second most important is power, and the third is cost. The weighting function will then allow you to numerically evaluate all your different choices through that lens. So that's how you can optimize across a very wide design space full of thousands of designs: by very clearly pre-specifying what your priorities are. The additional luxury of Simulation-Guided Design is that you can change that priority list or weighting and see how that surfaces a different design that might better fit a different set of priorities, should circumstances change.
Right. So if you have a sponsor team with multiple stakeholders and they all have different priorities, you can actually change the weightings of different priorities to see what the results are.
Correct. So let's say you have a group of clinicians that says, "Hey, to be a meaningful addition to treatment in lung cancer, this new medicine has to at least have a 20% reduction on top of standard-of-care." And then let's say a different set of stakeholders says, "Well, in order to be commercially successful, it has to at least be 30% effective on top of standard-of-care." A simulation platform can run a number of designs and scenarios to show you what the design implications are of those two different treatment effects. So you can have a nice dialogue at the team level of the implications of designing for a treatment effect of 20% or 30% because those trials would probably have different durations and timing of interim analyses. Not to mention all the downstream effects on operational aspects of the trial.
So do Simulation-Guided Design tools actually facilitate these discussions? And if so, how do they do so?
We certainly think that they do. In that last example I was just bringing up, if you have data that says focusing on a lower treatment effect is clinically important, but focusing on a higher treatment effect is commercially important, having that discussion at the team level would directly be enabled by having the simulation data saying, okay, we could focus on a 20% treatment effect, but at the risk of running the trial for two years longer than a 30% treatment effect. Or perhaps the team would discuss having an interim analysis at a 25% recruitment stage versus a 50% recruitment stage, and which set of data might heighten the risk of a poor decision. So with all those scenarios on the table, teams can start talking directly about the important strategic issues. It can really raise the level of conversation and the team’s business intelligence.
Right. So as a final question, how do you think Simulation-Guided Design will affect how sponsor teams need to approach collaboration in order to have the most important strategic discussions that they can?
Well, make no mistake, Simulation-Guided Trial Design is alive and well in the product development community right now. It is embedded in the way that people do things. I think the Cytel approach takes it to the next level. A Simulation-Guided Trial Design platform like Solara® really enables this paradigm of digital development, where you can remove the technical coding part of simulations, bring the data to the team and say, here are the different scenarios we might encounter, which are the ones that we're most comfortable with? How are we going to prioritize one design over another, given the multiple different perspectives that are at the table from a team perspective? And I think that’s the groundbreaking thing about how Cytel solutions like Solara® will let teams accelerate the development of their new medicine or device. The technical issues fade into the background, and the key questions and discussions around product development can take center stage.
Well, that all seems very exciting, so thank you so much for discussing Simulation-Guided Design with us today.
Great. My pleasure.
Industry Voices is a new column featuring a series of interviews with Cytel's thought leaders on their respective areas of expertise. Stay tuned for more Industry Voices to come.