
Selecting Your Next Clinical Trial Design Using Quantitative Methods


C-Suite and R&D decision-makers are always striving to make evidence-driven decisions. Yet the rules by which evidence is evaluated can bias those decisions, even when the method of decision-making seems objective. Our Chief Scientific Officer, Dr. Yannis Jemiai, has worked extensively on how to operationalize decision-theoretic tools for clinical development decision-making. Here he introduces three quantitative frameworks that life-sciences decision-makers can quickly incorporate when selecting an optimal design for their next clinical trial.

Why would an empiricist deny that science begins with observation?

The philosopher Karl Popper once began a lecture in Vienna by saying, “Take pencil and paper, carefully observe, and write down what you have observed.”
This met with several confused responses, primarily asking what, exactly, was supposed to be observed. Popper reflected, “Clearly the instruction ‘Observe!’ is absurd. Observation is always selective. It needs a chosen object, a definite task, an interest, a point of view, a problem….” [1]

Popper would go on to formulate an entire system of empirical reasoning, beginning not with observation but with hypothesis: formulate a hypothesis, test whether observation falsifies it, and if it does, reject the hypothesis. This has been a cornerstone of scientific reasoning, and of statistical reasoning, since Popper’s time.

Evidence-driven decision-making now faces a similar problem. We all gather data, quantify those data, and feed them into models, but what is it that we are actually analyzing? New technology has made it easier to organize and interpret data, but integrating this new evidence into decision-making takes further steps. We need to ensure that we are methodically assimilating data and generating evidence with a specific decision context in mind. The challenge is to do so without biasing the evidence-driven decision.

Much has been written on how to structure a better decision process. When we do not take the time to create an evidence-driven strategy, we can introduce bias precisely because we believe we are being objective. We might have a hypothesis about what will work, and then see the data confirming our suspicions. What we do not always take the time to do is check whether the evidence indicates there are better options.

One way to avoid this error of judgment is to create a series of decision rules that reflect our priorities even before evaluating the evidence. These Go/No-Go criteria are conditional rules that say, “If the evidence dictates X, then I will do Y; otherwise I will do Z,” where Z could easily be to explore more options. Yet this approach essentially begs the question, because the evidence can itself suggest which decision rules ought to be chosen and implemented within a given clinical development framework. The simple choice of Go/No-Go rules, unless constructed with care, will not result in the objective evaluation of evidence for a specific decision. My recent book chapters on Go/No-Go decision rules aim to tackle this question and offer related solutions [2].
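To make the idea concrete, here is a minimal sketch in Python of such a pre-specified rule. The effect-size thresholds (0.3 and 0.1) and the decision labels are hypothetical values chosen for illustration, not prescribed standards:

```python
# A minimal sketch of a pre-specified Go/No-Go rule. The thresholds on the
# observed treatment effect and the decision labels are hypothetical,
# fixed before any evidence is examined.

def go_no_go(observed_effect, go_threshold=0.3, no_go_threshold=0.1):
    """Map observed evidence onto a decision fixed in advance."""
    if observed_effect >= go_threshold:
        return "Go"                    # evidence clears the pre-set bar
    if observed_effect <= no_go_threshold:
        return "No-Go"                 # evidence falls below the futility bar
    return "Explore more options"      # the intermediate zone

print(go_no_go(0.35))  # -> Go
print(go_no_go(0.20))  # -> Explore more options
print(go_no_go(0.05))  # -> No-Go
```

The point of fixing the thresholds in advance is that the rule, not the data, determines which decision each evidence value maps to.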

Aside from Go/No-Go rules, there is also a method that uses a quantitative scoring rule to reflect the priorities of decision-makers. Suppose we aim to optimize a decision over two parameters; for a clinical development strategy these might be the upfront cost of a trial and the time it takes to complete it. We can decide, before examining our evidence, to give time to completion a much higher weight than cost. From this we build a weighted scoring rule against which we judge every clinical trial design before us. The scoring rule then ranks the clinical trial designs for us, based on our own priorities [3].
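Below is a minimal Python sketch of such a weighted scoring rule. The candidate designs, the weights, and the max-normalization scheme are all illustrative assumptions:

```python
# A minimal sketch of a weighted scoring rule over two design attributes:
# upfront cost and time to trial completion. Designs, weights, and the
# normalization are illustrative assumptions.

designs = {
    "Design A": {"cost_musd": 12.0, "time_months": 8},
    "Design B": {"cost_musd": 10.0, "time_months": 9},
    "Design C": {"cost_musd": 15.0, "time_months": 7},
}

# Weights fixed before examining the evidence; time to completion is
# weighted much more heavily than cost, per the stated priorities.
WEIGHT_TIME, WEIGHT_COST = 0.8, 0.2

MAX_COST = max(d["cost_musd"] for d in designs.values())
MAX_TIME = max(d["time_months"] for d in designs.values())

def score(attrs):
    """Weighted score; lower cost and shorter time both lower the score."""
    return (WEIGHT_COST * attrs["cost_musd"] / MAX_COST
            + WEIGHT_TIME * attrs["time_months"] / MAX_TIME)

# Rank every candidate design by the pre-specified rule (best first).
for name, attrs in sorted(designs.items(), key=lambda kv: score(kv[1])):
    print(f"{name}: score = {score(attrs):.3f}")
```

In this toy example the heavy weight on time pushes Design C, the fastest, to the top of the ranking even though it is the most expensive.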

Finally, we might simply want to identify all the possible clinical trial designs that optimize one of these parameters without any sacrifice on the other. For example, if for a given cost design A will complete a clinical trial within eight months while design B takes nine, then design A is clearly the preferable alternative. Design B is said to be dominated by A and can be eliminated from consideration.

Identifying which designs are dominated, given our set of priorities, helps narrow the space of decision-making and gives context to our decision rules. Ultimately, it tells us both which clinical trial designs (the dominant, or Pareto-efficient, set) to focus on in our decision-making, and what to benchmark against when making rudimentary calculations of the cost of an additional day, week, or month of trial duration [4].
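The Python sketch below illustrates this Pareto filtering; the candidate designs and their cost and time values are hypothetical:

```python
# A minimal sketch of Pareto filtering: design A dominates design B if A is
# at least as good on both criteria and strictly better on at least one.
# The candidate designs below are illustrative assumptions.

designs = {
    "Design A": (10.0, 8),   # (cost in $M, time in months)
    "Design B": (10.0, 9),   # dominated by A: same cost, one month slower
    "Design C": (8.0, 11),
    "Design D": (12.0, 7),
}

def dominates(a, b):
    """True if a is no worse than b on every criterion and better on one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Keep only designs that no other design dominates: the Pareto set.
pareto_set = {
    name: attrs
    for name, attrs in designs.items()
    if not any(dominates(other, attrs)
               for other_name, other in designs.items()
               if other_name != name)
}
print(pareto_set)
# -> {'Design A': (10.0, 8), 'Design C': (8.0, 11), 'Design D': (12.0, 7)}
```

Design B drops out exactly as in the example above, leaving a smaller frontier of designs worth weighing against one another.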

All three of these quantitative methods use objective standards to help C-Suite and R&D decision-makers guide quantitative, evidence-based decision-making. Technology like Cytel’s new Solara™ provides a way to operationalize them, with advisor algorithms built into the software to facilitate these calculations.

References:

[1] Blum, Bruce I. Beyond Programming: To a New Era of Design, p. 32.
[2] Jemiai, Yannis. Book chapters on Go/No-Go decision rules.
[3] Re-imagining Clinical Trials: Leveraging Statistics & Cloud-Computing to Increase Development Productivity, p. 5.
[4] Hiwa, Satoru; Hiroyasu, Tomoyuki; Miki, Mitsunori. “Design Mode Analysis of Pareto Solution Set for Decision-Making Support,” Journal of Applied Mathematics, vol. 2014, Article ID 520209, 15 pages, 2014. https://doi.org/10.1155/2014/520209

About Yannis Jemiai

Yannis Jemiai plays a pivotal role within Cytel. As Chief Scientific Officer, he has oversight of the corporate-level Scientific Agenda, which includes establishing research portfolios in Bayesian, small-sample, and other flexible designs, as well as complex innovative designs including adaptive trials, master protocols, and MAMS. Yannis also has an extensive portfolio of research in adaptive trial design, financial and pharmaceutical strategy, decision theory, and regulatory affairs.

His own research has been published in numerous statistical journals. Dr. Jemiai earned his Ph.D. from Harvard University, an M.P.H. from Columbia University, and a B.A. in Molecular and Cellular Biology also from Harvard.
