
Computation and Clinical Trial Design: New Directions

Historically, advances in the statistical design of clinical trials have accompanied progress in the science and practice of computation. The early 1990s witnessed increased exploration of adaptive and group sequential methods, in no small part due to the enhanced calculations made possible by software developed a decade prior. The similar expansion of designs and methods over the past two decades, and the novel departures from the traditional two-arm design, have come with the ability to quickly compute more intricate and complex algorithms. By the beginning of the 2010s, the alignment of biostatistics and computation had grown close enough for educators and academics to begin advocating that biostatisticians be well-grounded in computational reasoning, to equip themselves for the uncharted terrain of drug discovery [1].

Recent advances in computational tools have made the construction of high-efficiency clinical trials more rigorous than ever, with the ability to thoroughly explore a design space spanning hundreds of thousands of candidate designs through simulation. Not only does such capability enable the optimization and de-risking of clinical trial design, it also enables sponsors to ask questions they might never have had the opportunity to explore before.
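To make the idea of exploring a design space concrete, here is a minimal sketch of the kind of simulation involved. It is purely illustrative and not a description of Cytel's software: the two-sample t-test, unit standard deviation, and the grid of sample sizes and effect sizes are assumptions chosen for the example.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, effect_size, alpha=0.05, n_sims=2000, rng=None):
    """Monte Carlo estimate of power for a two-arm trial using a two-sample t-test.

    Illustrative assumptions: normally distributed outcomes with unit standard
    deviation; effect_size is the true mean difference between arms.
    """
    rng = rng or np.random.default_rng(0)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treatment = rng.normal(effect_size, 1.0, n_per_arm)
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < alpha:
            rejections += 1
    return rejections / n_sims

# Sweep a small design space: sample size per arm against assumed effect size.
# A full exploration would cover far more dimensions (looks, allocation, timing).
for n in (50, 100, 150, 200):
    for delta in (0.2, 0.3, 0.4):
        print(f"n/arm={n:4d}  effect={delta:.1f}  power={simulated_power(n, delta):.3f}")
```

Scaling a sweep like this across many more design parameters is what produces the hundreds of thousands of simulated scenarios described above.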

Daniel Kahneman, the Nobel prize winning behavioral economist, and his longtime collaborator Amos Tversky have written extensively on how the ‘statistical intuitions’ of decision-makers are often complicated by highly subjective responses to facts and evidence [2]. For example, a trial sponsor might have a very clear sense of whether or not to extend a trial by six months to increase power by 0.5% - but asking whether or not to extend the same trial by seven weeks to increase power by 0.292% requires much more serious consideration. This is not simply because the calculations are a little more difficult, or because tradeoffs between speed and power do not follow a strict linear curve. Rather, Kahneman and Tversky argue that when making decisions under uncertainty, most decision-makers (even those equipped with rigorous and reliable data) tend to rely on ‘heuristics’ rather than hard facts alone. These heuristics rely on intuitions about familiar percentages. We all have a sense of what 5% really means. Few people have strong intuitions about what to do with a percentage like 2.92%.

Yet this is a skill that decision-makers can quickly and easily develop. As it becomes common practice to explore thousands of trial designs, and to map the tradeoffs between speed and power across them, the ability to ask more refined questions about the costs and opportunities of clinical development will become more commonplace. This in turn has the potential to improve calculations of the expected net present value of a clinical trial, and to ground decisions in a more robust pool of evidence.
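As a rough illustration of how such tradeoffs might feed an expected net present value comparison, the toy calculation below weighs the probability of a successful launch against trial cost and delayed revenue. Every figure, and the simple discounting formula, is a hypothetical assumption made for the sake of the example rather than a model any sponsor would use off the shelf.

```python
def expected_npv(power, prob_true_effect, annual_revenue, years_on_market,
                 trial_cost, trial_years, discount_rate=0.10):
    """Toy expected-NPV comparison of trial designs (illustrative assumptions only).

    Probability of launch is approximated as
    P(drug works) * P(trial succeeds | drug works); post-approval revenue is
    discounted back from the end of the trial.
    """
    prob_launch = prob_true_effect * power
    # Discount each year of post-approval revenue back to today.
    revenue_pv = sum(
        annual_revenue / (1 + discount_rate) ** (trial_years + t)
        for t in range(1, years_on_market + 1)
    )
    return prob_launch * revenue_pv - trial_cost

# Compare a faster, slightly lower-powered design with a longer, higher-powered one.
fast = expected_npv(power=0.85, prob_true_effect=0.5, annual_revenue=300e6,
                    years_on_market=8, trial_cost=40e6, trial_years=2.0)
slow = expected_npv(power=0.90, prob_true_effect=0.5, annual_revenue=300e6,
                    years_on_market=8, trial_cost=55e6, trial_years=2.5)
print(f"eNPV, faster design: ${fast/1e6:.0f}M   eNPV, longer design: ${slow/1e6:.0f}M")
```

Even this crude sketch shows why a fractional gain in power has to be weighed against added cost and delayed revenue rather than judged on intuition alone.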

As advances in computation and access to computational techniques open new opportunities for clinical trial design, sponsors will need to identify new routes of questioning and inquiry to draw maximal benefit for the trial. Cytel’s new whitepaper on Reimagined Clinical Trials aims to offer an introduction to this new framework of decision-making, powered by advances in computation.

Read Whitepaper

References:

[1] Nolan, Deborah, and Duncan Temple Lang. "Computing in the statistics curricula." The American Statistician 64.2 (2010): 97-107.

[2] Kahneman, Daniel, and Amos Tversky. "On the study of statistical intuitions." Cognition 11.2 (1982): 123-141. https://apps.dtic.mil/sti/pdfs/ADA099507.pdf

 
