
Cyrus Mehta on the Founding of Cytel


On the occasion of Cytel’s 35th anniversary, co-founder Professor Cyrus Mehta sits down with Dr. Esha Senchaudhuri to discuss the founding of Cytel, the evolution of the industry over the last 35 years, and the ongoing innovations in software and statistical strategy.

Cytel was founded after an extensive collaboration between you and Nitin Patel at places like the Dana-Farber Cancer Institute. What were the problems you were collaborating on that led to the founding of Cytel?

We were primarily collaborating on developing computational methods to perform inferences on small categorical data sets, i.e., exact tests for categorical data. These problems arose from my work at the Dana-Farber Cancer Institute, where we had to decide whether a new drug was safe relative to the control drug. The data were classified by the level of toxicity observed in these clinical trials, and the categories were no toxicity, mild toxicity, moderate toxicity, severe toxicity, and drug-related death, in that order. There were very few patients in the higher categories and more in the lower ones. Data with zeroes and small counts in the higher categories could not be analyzed using the traditional large-sample methods. These problems led us to tackle categorical data differently, primarily by using permutation tests.
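
To make the idea concrete, here is a minimal sketch of a Monte Carlo permutation test comparing ordered toxicity grades between a treatment arm and a control arm. It is an illustration only, not the StatXact implementation, and the grades and group sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ordered toxicity grades (0 = none, ..., 4 = drug-related death)
# for a small two-arm safety comparison; the counts are made up for illustration.
treatment = np.array([0]*8 + [1]*5 + [2]*3 + [3]*1 + [4]*0)
control   = np.array([0]*12 + [1]*4 + [2]*1 + [3]*0 + [4]*0)

observed = treatment.mean() - control.mean()   # simple trend statistic
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)

# Permutation null: re-assign group labels at random and recompute the statistic.
n_perm = 20_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    stat = pooled[:n_treat].mean() - pooled[n_treat:].mean()
    if stat >= observed:
        count += 1

# One-sided Monte Carlo p-value (with the usual +1 correction)
p_value = (count + 1) / (n_perm + 1)
print(f"observed difference in mean grade: {observed:.3f}, p = {p_value:.4f}")
```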

What was the statistical landscape like in the pharmaceutical industry at Cytel’s founding? How does it compare to the landscape now?

It is completely different now. In the 1980s, and part of the 1990s too, clinical trials were outsourced by pharmaceutical companies to the National Institutes of Health. The National Institutes of Health funded cooperative groups of academic centers, especially in oncology, which grouped together and pooled their patients into these clinical trials. Multiple institutions formed one cooperative group, such as the Eastern Cooperative Oncology Group, the Radiation Therapy Oncology Group, or the Children’s Leukemia Group A. Each cooperative group had a coordination center, an operations office, and a statistical center; the member institutions ran the trials, funneled all the data through these centers, and developed the reports.

This model gradually changed over time as the pharmaceutical companies began to conduct clinical trials themselves, with their own data. Today, I believe, there is very little work done by the cooperative groups, and almost all the trials are sponsored by pharmaceutical companies.

How did Cytel move from small sample to group sequential to adaptive design expertise?

It started as a business opportunity: we had to expand our offerings because it was not sufficient to provide statistical software for small-sample problems alone. There was not enough scope for consulting and no design component to it. We were providing tools for analyzing small, sparse, and incomplete data, so pharmaceutical companies could use our software StatXact® and LogXact® to analyze the data on their own. The only consulting opportunity we had was in training them. Group sequential problems, on the other hand, had a larger impact on the business of the sponsoring organization, because with a group sequential design you could get your product to market faster with fewer patients. So, it was a bigger opportunity for us.

The methodology for these group sequential methods was developed by some of the statisticians at Harvard, where I am a faculty member. That allowed me to collaborate with some of those statisticians and persuade them to act as consultants to Cytel in the early days when we were developing the East® software. Later, these group sequential methods evolved further into adaptive methods. Again, we sensed this as an opportunity for Cytel, and our East software evolved further into providing solutions not just for group sequential designs, but for adaptive group sequential designs.


How was East created and how did it become the flagship product that it is today?

In those days, we used to conduct an annual seminar with the pharmaceutical industry. There was a pharmaceutical company called Schering-Plough Corporation, which is now part of Merck. The biostatistics departments at Harvard and Schering-Plough used to hold an annual, two-day symposium at Harvard on different important topics. One of the early ones was on group sequential designs and, subsequently, there were other such workshops on missing data, multiple comparison problems, and numerous other important statistical issues that needed solutions. The workshop on designing group sequential clinical trials, which was perhaps in the early or mid-1990s, inspired me to develop software based on the methods being presented at that symposium. I was able to gain support from the National Cancer Institute through their Small Business Innovation Research program, under which we applied for a grant to develop the East software.

During this process, we collaborated with Anastasios (Butch) Tsiatis, Kyungmann Kim, and Sandro Pampallona, outstanding experts on group sequential methods who were at Harvard at that time. We also received a lot of advice from David DeMets, co-developer of the famous Lan-DeMets error-spending function. Chris Jennison and Bruce Turnbull, who have written the standard textbook on this topic, also extended their support, engaged in technical discussions with us, and offered joint workshops with us for industry statisticians.
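
For readers unfamiliar with error spending, the sketch below is a simplified illustration (not the East implementation) of the O’Brien-Fleming-type spending function popularized by Lan and DeMets: it shows how much of the overall one-sided type one error may be spent at each planned interim look.

```python
from scipy.stats import norm

def obf_spending(t, alpha=0.025):
    """O'Brien-Fleming-type error-spending function of Lan & DeMets:
    cumulative alpha spent by information fraction t (one-sided alpha)."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t ** 0.5))

looks = [0.25, 0.50, 0.75, 1.00]     # planned information fractions
previous = 0.0
for t in looks:
    cumulative = obf_spending(t)
    print(f"t = {t:.2f}: cumulative alpha = {cumulative:.5f}, "
          f"spent at this look = {cumulative - previous:.5f}")
    previous = cumulative

# The first-look boundary follows directly from the alpha spent there; later
# boundaries require the joint distribution of the interim statistics
# (the recursive integration that software such as East performs).
print("first-look boundary z =", round(norm.ppf(1 - obf_spending(looks[0])), 3))
```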

One of Cytel’s greatest innovations is undoubtedly within the realm of sample size re-estimation. What was the innovation of the promising zone?

Around the turn of the century, some papers were being published on adaptive methods. Adaptive methods were the next evolution of group sequential methods, in the sense that one was now able to look at the interim data and adapt the sample size of the study, the number of looks that were needed, or the spacing of the looks. All of this could be done unblinded because there was a model already set up for doing unblinded interim analysis in group sequential designs: an independent data monitoring committee could look at the data without revealing anything to the pharmaceutical sponsor. That made it possible to develop more innovative designs than simply stopping a trial early for overwhelming efficacy or futility.

There was now an operational process for looking at interim data, and at the same time many papers were published on statistical methodology for making adaptive changes based on the interim looks without inflating the type one error. These papers made it clear that this was a good opportunity for East. Hence, we started putting these methods into East, and the term Promising Zone was coined by my colleague Stuart Pocock. We wrote a paper on sample size re-estimation, and we called it the Promising Zone method. We proposed that the best way to implement sample size re-estimation was to partition the interim data into zones. If the interim data were excellent, you need not change the sample size: you could stop early for efficacy or just continue unchanged. Likewise, if the interim data were quite poor, you need not change the sample size: you might stop early for futility or continue unchanged.

There would be a sweet spot in the middle where it might be advantageous to increase the sample size, because having seen the interim data, it may appear that you could increase power. This is called conditional power, that is, power conditional on what you have already seen. So, there would be a good opportunity to increase the sample size and boost the conditional power back to the desired 90% that the sponsor would be interested in. The Promising Zone was defined as the zone in which we could boost the conditional power. Outside of this zone, an increase would either not boost the power high enough, be too expensive, or be unnecessary because the conditional power was already very high.
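
The calculation behind this can be sketched in a few lines. The snippet below is a simplified illustration of conditional power under the "current trend" assumption, together with a toy zone rule for increasing the stage-2 sample size; the interim z-value, sample sizes, zone cut-offs, and cap are all hypothetical, and the actual promising zone methodology of Mehta and Pocock imposes further conditions (which are what preserve the type one error) that are not reproduced here.

```python
from scipy.stats import norm

def conditional_power(z1, n1, n2, z_crit=1.96):
    """Probability of crossing z_crit at the final analysis, given an interim
    z-statistic z1 based on n1 subjects per arm, assuming the observed
    ('current trend') effect persists over n2 additional subjects per arm."""
    drift = z1 * (n2 / n1) ** 0.5
    threshold = (z_crit * (n1 + n2) ** 0.5 - z1 * n1 ** 0.5) / n2 ** 0.5
    return 1.0 - norm.cdf(threshold - drift)

# Hypothetical design: 100 per arm at the interim look, 100 more planned,
# with a cap of 300 additional per arm if the sample size is increased.
z1, n1, n2_planned, n2_max = 2.0, 100, 100, 300
cp_planned = conditional_power(z1, n1, n2_planned)

# A simple zone rule for illustration (cut-offs are hypothetical):
# unfavourable (< 0.30) or favourable (>= 0.90): keep the planned design;
# promising (0.30 to 0.90): increase n2, within the cap, to push
# conditional power back toward 90%.
if cp_planned < 0.30 or cp_planned >= 0.90:
    n2_new = n2_planned
else:
    n2_new = next((n2 for n2 in range(n2_planned, n2_max + 1)
                   if conditional_power(z1, n1, n2) >= 0.90), n2_max)

print(f"conditional power at planned n2: {cp_planned:.3f}")
print(f"re-estimated stage-2 size per arm: {n2_new}, "
      f"conditional power: {conditional_power(z1, n1, n2_new):.3f}")
```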

Some people don’t realize that there are still innovations being added to the promising zone and to sample size re-estimation methods in general, and that these will create ripples across the industry. Would you be able to highlight some of what we will see in the next few years?

There has been a lot of research, parallel to adaptive designs, on testing multiple endpoints or multiple treatment arms. It has been observed that you will gain a lot of efficiency in a clinical trial if you ask more than one question about what you want to discover about the new treatment relative to the standard of care, while at the same time preserving the integrity of the family-wise error rate, which is the multiple-testing equivalent of the type one error in a simple two-arm trial. This methodology has been developed, but what is needed now is to combine it with interim analysis.

Group sequential and adaptive methods focus on taking interim looks at the data and using them to make modifications. Multiple testing, by contrast, has been developed without interim analysis: it allows a single look at the data at the end of the study while asking multiple questions, such as which of many subgroups is doing better; whether a treatment that is already good for overall survival is also good for progression-free survival, or the other way around; or whether the presence or absence of a genetic mutation makes a difference to the patient’s response compared to the population at large. A number of such questions can be asked. If you combine this opportunity with interim analysis, you can refine the questions and focus the sample size only on those questions that look interesting from the interim analysis point of view. Hence, the promise is to combine the methods of adaptive design with the statistical research that has already emerged on multiple testing but has so far involved no interim analysis.
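
As a toy illustration of family-wise error control (not the gatekeeping or graphical procedures typically used in confirmatory trials, and without any interim component), here is the classic Holm step-down procedure applied to a few hypothetical endpoint p-values.

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down procedure: controls the family-wise error rate at alpha
    under any dependence structure among the hypotheses."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank)
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one hypothesis is retained, all larger p-values are too
    return reject

# Hypothetical p-values for overall survival, progression-free survival,
# and a biomarker-positive subgroup analysis.
endpoints = ["OS", "PFS", "biomarker subgroup"]
p_vals = [0.012, 0.030, 0.041]
for name, p, r in zip(endpoints, p_vals, holm_reject(p_vals)):
    print(f"{name}: p = {p:.3f} -> {'reject' if r else 'retain'}")
```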

How do you think software and statistical strategy will shape clinical trials in the next few years?

There is already a huge community building open-source software, but that software, which is useful for research and for exploring new ideas, is probably not ready for regulatory submissions. For regulatory submissions, one needs software that has been vetted and seen to be valid from a regulatory perspective. That is where we have been very successful with East. We are moving forward with our latest software at Cytel, called Solara, which makes it possible to efficiently, intuitively, and quickly explore all the different multiple testing and group sequential options that are available under different scenarios of treatment effects, enrollment rates, and other unknown covariates.

With Solara, you can have a possible design space of thousands of trials and use technology to explore all of them very quickly and find the ones that are worth pursuing further. This is an example of how software can combine with statistical methodology. We are developing many different statistical methods that give us lots of options for design parameters, and we are developing software that allows you to generate all these different statistical models and then explore them quickly with good visualizations. In the end, you can have the best possible design for any given situation.
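
To give a flavor of what such an exploration looks like, here is a generic sketch (not Solara’s actual interface or algorithms) that sweeps a hypothetical grid of assumed treatment effects and power targets for a two-arm, normal-endpoint trial and reports the required sample size for each scenario, so that promising designs can be short-listed.

```python
from itertools import product
from math import ceil
from scipy.stats import norm

def per_arm_n(delta, power, alpha=0.025, sd=1.0):
    """Per-arm sample size for a two-arm comparison of means with known
    standard deviation sd, one-sided level alpha, and the given power."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Hypothetical design space: plausible standardized effects and power targets.
effects = [0.20, 0.25, 0.30, 0.35]
powers = [0.80, 0.90]
max_budget = 600   # largest per-arm size the sponsor could afford (made up)

print("delta  power  n/arm  feasible")
for delta, power in product(effects, powers):
    n = per_arm_n(delta, power)
    print(f"{delta:5.2f}  {power:5.2f}  {n:5d}  {'yes' if n <= max_budget else 'no'}")
```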


In the past few years, you have occasionally argued that private sector companies, even smaller biotechs, could evolve to make use of the advantages of multi-arm and platform trials. How would this work and how would Cytel help?

At present, I think they need a neutral party to set up the infrastructure for a platform trial. If it is a pharmaceutical company that sets up the platform, then it will only be testing its own drugs on it, and it will not be a collaboration with different industry partners. An academic center can probably be the neutral partner. A good model is the STAMPEDE trial, where the Medical Research Council in England set up the infrastructure. In this trial, different molecules from different companies have been tested for prostate cancer over a twenty-year period.

Many successful drugs were discovered in this manner because it did not require running separate Phase II and Phase III trials. You can have many patients right up front, and you can test the new molecules with large sample sizes against the standard control arm. The molecules did not have to be tested in sequence; instead, you could test several at a time against a common control arm. The winners would be taken off the trial and become available to patients, and new molecules could take their place.
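
The arithmetic behind this efficiency is simple. As a rough, hypothetical illustration (ignoring multiplicity adjustments and allocation-ratio optimizations), if each pairwise comparison needs n patients per arm, then testing K molecules in separate two-arm trials needs about 2Kn patients, whereas testing them together against one shared control needs roughly (K+1)n:

```python
# Rough patient-count comparison for K molecules, each needing n patients
# per arm for its comparison against the control (numbers are made up).
n_per_arm = 250
for k in (3, 5, 8):                     # number of experimental molecules
    separate = 2 * k * n_per_arm        # K independent two-arm trials
    platform = (k + 1) * n_per_arm      # K arms sharing one control arm
    print(f"K={k}: separate trials {separate}, shared control {platform}, "
          f"patients saved {separate - platform}")
```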

When we attended the ASCO meetings every year, we would very often find a new discovery from the STAMPEDE trial being announced. The reason this works is that there are plenty of new molecules coming out of small biotech companies and large pharmaceutical companies. When you have a plethora of molecules, you can test them simultaneously against the standard of care rather than testing them sequentially. These are exciting new molecules, and there is no shortage of physicians who want to put their patients on these trials. Consequently, there is no difficulty in recruiting patients or finding new molecules. In fact, the Medical Research Council has very strict criteria for which new molecules it will accept for testing, because it gets requests from several companies.

Within therapeutic areas, Cytel’s greatest contributions have arguably been in oncology and cardiovascular. How have these areas developed in the past 35 years and how has Cytel contributed?

Cardiovascular trials have advanced in a very specific way. The clinicians who work in the cardiovascular area have become more sophisticated about statistical methods. They love partnering and working with statisticians. They also understand the data really well, I would say much more than the oncologists. They have used group sequential methods a lot and have no difficulties with them. But they have not made much use of biomarkers yet and are now looking to design trials that incorporate biomarkers so they can reduce the size of their trials. Typically, their trials have been quite large because they have been so successful with their treatments.

There is a group called the Heart Failure Collaboratory, which is a public-private partnership between statisticians and cardiologists. They meet once a year, and they are now starting to look at more sophisticated problems. Typically, they have been looking at time-to-event trials, and the standard has always been to assume that the hazard ratio of the treatment arm to the control arm is constant. They have discovered in recent trials that this assumption does not always hold and are starting to look at new methods, in partnership with experts on these questions: how will you analyze the data if the proportional hazards assumption is not valid? I think there is a good opportunity here for Cytel to work one step earlier: how will you design trials when you do not know whether the proportional hazards assumption will hold? If you can design trials so well that you have the right sample size whether or not the proportional hazards assumption holds, that would be a holy grail. Some of us have been working on that question.
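
A small simulation with entirely hypothetical numbers (and not any specific Cytel method) shows why this matters: under a delayed treatment effect, the hazard ratio estimated early in follow-up differs from the one estimated later, so a design sized under a single constant hazard ratio can be badly off.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000                            # large n so estimates are near the truth
lam0, lam1, delay = 0.10, 0.05, 6.0    # hypothetical monthly hazards, 6-month delay

# Control arm: constant hazard lam0. Treatment arm: hazard lam0 before the
# delay, then lam1 afterwards (piecewise exponential via the inverse CDF).
control = rng.exponential(1 / lam0, n)
e = rng.exponential(1.0, n)
treatment = np.where(e <= lam0 * delay, e / lam0, delay + (e - lam0 * delay) / lam1)

def hazard_in_window(times, start, stop):
    """Events divided by person-time accrued inside [start, stop)."""
    person_time = np.clip(np.clip(times, None, stop) - start, 0, None).sum()
    events = np.sum((times >= start) & (times < stop))
    return events / person_time

for start, stop in [(0.0, delay), (delay, 36.0)]:
    hr = hazard_in_window(treatment, start, stop) / hazard_in_window(control, start, stop)
    print(f"window [{start:.0f}, {stop:.0f}) months: estimated hazard ratio ~ {hr:.2f}")
```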

In oncology, they have moved further ahead than cardiology in terms of looking at biomarkers. They recognized long ago that large, all-comers oncology trials are often not successful because the population is too heterogeneous, so they are starting with smaller subgroups and trying to develop methods where the molecule is targeted at specific biomarkers only. Here, there are opportunities for adaptive designs, which we have been involved with, but only with the smaller biotech companies. Large pharmaceutical companies are generally more cautious about using new methods.

As it is a 35-year anniversary celebration, where do you think the industry will be in 2050 (about 35 years from now)?

That is a very difficult question. Technology advances at an exponential pace. This acceleration is something that my friend Ray Kurzweil has been demonstrating over the years with specific examples from different fields. If you look at how we have moved, for instance, from doing computations on a slide rule to, 50 years later, doing computations in the cloud, then the kind of computing power that will be available 30 or 50 years from now is unimaginable. How the biopharmaceutical industry will exploit this power is another question. My guess is that they will be able to use predictive models very successfully. But again, this will be combined with medical advances, which are also accelerating; new therapies are developed much more rapidly now. Hence, it is very hard to predict what will happen 50 years from now.

Are there any words of wisdom you would like to share with young people in the field now?

I think young people should follow their passion and pursue their interests in research, and not think of it as just a job. If they’re interested in something and find it promising, even if it is not fashionable, then they should pursue it and not worry about whether it is going to be popular. They should believe in it themselves and follow their instinct.

 

Read the full 35th anniversary interview series here:

Download Ebook

 
