
Nitin Patel on 35 Years of Technological Innovation


On the occasion of Cytel’s 35th anniversary, co-founder Professor Nitin Patel sits down with Dr. Esha Senchaudhuri to discuss the founding of Cytel, its evolution over the last 35 years, and his vision for the future of the field.

Tell us a little bit about the founding of Cytel. What led to the need for a software company that paved the way for innovative designs?

Well, it all began with the research that Cyrus and I were conducting; we came across problems that were of common interest to both of us. He was very interested in applications and biostatistics, and I really enjoyed working on algorithms. Together, we managed to get funding and grants that enabled us to begin developing Cytel's first product. We also saw a big change in computer technology in the 1980s, the so-called microcomputer revolution, which created an opportunity for small companies to develop software using federal research grants and bring it to market.

Before Cytel was launched, I was in India and there was no internet, so we had to mail drafts to each other describing our research ideas. But it all eventually worked out. It looked like it was possible to do some work centered on new problems that could be solved with sophisticated algorithms on inexpensive desktop computers. The rapid growth in these computers in the United States meant that there would be a market if we developed a software product based on these algorithms. Marvin Zelen, who was the chair of the biostatistics department at Harvard, gave us a couple of statistical problems he thought were very important and unsolved. We realized that solving these problems would require constructing algorithms that combined ideas from the fields of statistics and scientific computing with optimization methods we had learned in operations research. That is what got us started on the journey of building Cytel.


Was the combination of statistics with operations research (OR) and computer science new in the 1980s?

Yes, it was. It turns out statistics and OR are closely related, but they started in different application areas. Modern statistics began largely in agriculture, while OR began in mathematical modeling for military applications. The use of mathematical models was common to both, and there were many overlapping methodologies. I have always found it difficult to draw a line between the two.

Performing calculations rapidly on mechanical calculators was essential to applying statistics and OR methods in practice, and computers were an obvious upgrade from calculators. However, computer science grew up alongside both fields and developed far more sophisticated methods than were common practice with calculators. We had to use some of these sophisticated methods so that the massive calculations required could be done within acceptable time limits. These methods were not common knowledge in the fields of statistics and OR in the 1980s.

What was the technological landscape like at Cytel’s founding? What were biomedical experts expecting to achieve with such new technological tools?

The most important aspect of the technological landscape at Cytel's founding was that the IBM PC had established itself as a fast-growing presence in companies and universities, and the DOS operating system provided a stable platform for software development. Our earlier research programs had migrated from minicomputers at Harvard and at the Indian Institute of Management in Ahmedabad to a DEC personal computer that Cyrus owned. It became obvious to us at this time that we needed to move to the IBM PC, and it was clear that biostatisticians in academia and industry would be interested in statistical software that ran on it. There was a worry among companies and regulatory authorities in the biopharmaceutical industry that many commonly used methods relied on large-sample approximations. There were questions about how reliable these methods were when, for instance, you had rare events in a large sample, or a small sample overall. Some statisticians were looking for ways to avoid making those assumptions. That is what gave us a start, and we got a number of statisticians from industry and academia to write supporting letters for our grant proposals. This was extremely helpful.

At the same time, we found that selling to big pharmaceutical companies was much more difficult and took longer than we had expected. Fortunately, since we had grant funding, we could continue to operate through long sales cycles. Change is always hard, and it is especially hard in larger companies. The pharmaceutical industry is known to be quite conservative, and rightly so, as it deals with matters of life and death. However, things have been changing recently, largely due to various environmental factors. The industry was far more conservative back then than it is today.

As Cytel celebrates its 35th anniversary, tell us a little bit about what you see as Cytel's greatest technological and methodological achievements.

I would point to our algorithms for solving problems related to the analysis of small samples and of rare events in large samples. It was exciting to solve difficult problems in computational statistics with novel tools that involved integrating ideas across disciplines, a founding strength of Cytel. Cytel's greatest strength in terms of innovation is being able to create solutions that bridge theory and practice, using methods that integrate statistics, operations research, and computer science.

Professor Marvin Zelen, who sat on Cytel's Board of Directors, argued in 1982 that statistics and computing would someday become inseparable fields. A number of statisticians criticized the view at the time. What was the debate about, and how has Cytel navigated it over the years?

I agree with Marvin that statistics and computing are tightly coupled. Computing enables many statistical methods to be useful in practice, and statistics plays a big role in solving important problems in computer science (for example, Monte Carlo simulation). Also, the vast amounts of data collected by computers today have to be organized, sampled, and cleaned. These methods are modern versions of the kind of data cleaning that has been done for census data for many years, but they require algorithms that work at scale.

We spoke with several statisticians who were not very comfortable with recently developed optimization methods and computer science techniques. But they understood the techniques at a high level, and we were able to convince them of the usefulness of these methods through collaborative research projects, papers, and seminars.

New innovations in operations research also played a key role in early software development for clinical trial design. What is operations research and what was its contribution to design?

INFORMS, the professional society of OR in the United States, defines OR as “the scientific process of transforming data into insights to make better decisions.” Many statisticians would feel that transforming data into insights is the core of what they do. You can see why I have trouble drawing a line between the two disciplines conceptually!

I like to think OR focuses more on optimization, heuristics, and systems thinking. OR and statistics often use very similar models; differences in terminology and emphasis often mask a common underlying mathematical structure. East, for example, uses recursive methods for the analysis of group sequential designs that follow the dynamic programming paradigm of OR, and maximum likelihood estimation in statistics is fundamentally an optimization problem.
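To make the dynamic programming connection concrete, here is a minimal sketch in Python (chosen here purely for illustration; it is not East's actual engine) of the classic recursion for the boundary-crossing probabilities of a two-sided group sequential design with equally spaced looks. The sub-density of the partial-sum statistic at each look is obtained from the previous look's sub-density by numerical convolution, exactly a forward dynamic-programming pass.

```python
import numpy as np
from scipy.stats import norm

def crossing_probabilities(z_bounds, n_grid=2001, s_max=12.0):
    """P(first boundary crossing at look k), k = 1..K, under H0."""
    K = len(z_bounds)
    s = np.linspace(-s_max, s_max, n_grid)    # grid for the partial sum S_k
    ds = s[1] - s[0]
    g = norm.pdf(s)                           # density of S_1 = X_1 ~ N(0, 1)
    probs = []
    for k in range(1, K + 1):
        bound = z_bounds[k - 1] * np.sqrt(k)  # Z-scale boundary on the S scale
        inside = np.abs(s) <= bound
        probs.append(g[~inside].sum() * ds)   # mass that crosses at look k
        if k < K:
            g = np.where(inside, g, 0.0)      # mass that continues
            # Dynamic-programming step: g_{k+1}(t) = integral of
            # g_k(u) * phi(t - u) du, a convolution with the N(0, 1)
            # increment, evaluated on the same grid.
            g = norm.pdf(s[:, None] - s[None, :]) @ g * ds
    return probs

# Pocock's constant for K = 3 looks at two-sided alpha = 0.05 is about 2.289,
# so these three crossing probabilities should sum to roughly 0.05.
print(crossing_probabilities([2.289, 2.289, 2.289]))
```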

Your recent work argues for the importance of aligning statistics and analytics with business strategy. Could you highlight some of the main ideas you are now positing?

I am working on extending statistical models to include analytics that incorporate commercial and financial components into the design of clinical trials and of sequences of clinical trials. I have also been using a systems perspective to build drug-supply analytics that integrate trial-execution decisions with trial design. In the past, I have also worked on designing trials from program and portfolio perspectives.
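As a toy illustration of what folding a financial component into design choice might look like (the function and all numbers below are invented for this sketch, not taken from the interview), a candidate design can be scored by a simple expected net present value:

```python
# Hypothetical expected-NPV scoring of a trial design. All inputs are
# illustrative assumptions, not figures from the interview.
def expected_npv(power, trial_cost_m, market_value_m, p_effect=0.5):
    """E[NPV] in $M: the market value is realized only if the drug truly
    works (probability p_effect) and the trial succeeds (power)."""
    return p_effect * power * market_value_m - trial_cost_m

# A larger trial buys more power at a higher cost; which design wins
# depends on the commercial upside.
print(expected_npv(power=0.80, trial_cost_m=30, market_value_m=500))  # 170.0
print(expected_npv(power=0.90, trial_cost_m=45, market_value_m=500))  # 180.0
```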

I've also been developing visualization methods that enable statisticians and clinicians to interactively inject judgmental considerations into the search for the best trial design among thousands of candidate designs, using a decision support framework.
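One ingredient of such a decision support framework might be reducing thousands of simulated designs to the Pareto-efficient set before a statistician explores them interactively. The sketch below uses hypothetical field names and simulated data, not Cytel's actual schema; it keeps only designs not dominated on two criteria, power and expected sample size.

```python
import numpy as np

def pareto_efficient(power, expected_n):
    """Mask of designs not dominated by any other design
    (maximize power, minimize expected sample size)."""
    n = len(power)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Design i is dominated if some design is at least as good on
        # both criteria and strictly better on at least one.
        dominated = (power >= power[i]) & (expected_n <= expected_n[i]) & \
                    ((power > power[i]) | (expected_n < expected_n[i]))
        if dominated.any():
            keep[i] = False
    return keep

rng = np.random.default_rng(0)
power = rng.uniform(0.70, 0.95, size=5000)      # simulated design metrics
expected_n = 200 + 400 * power + rng.normal(0, 20, size=5000)
mask = pareto_efficient(power, expected_n)
print(f"{mask.sum()} Pareto-efficient designs out of {len(mask)}")
```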

What is different about how sponsors now approach clinical development strategy compared to how it was being done 35 years ago, 25 years ago, or 10 years ago? Today we have all of this computing technology, and dynamic visuals are emerging with Solara, but in your view has there been a shift?

Yes, there has been a major shift. The most obvious one is the vast increase in the use of simulation approaches in trial design compared to earlier years.
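Here is a minimal sketch of what the simulation approach means in the simplest case (all settings are illustrative): estimate the power of a two-arm trial with a normal endpoint by Monte Carlo rather than by a closed-form formula. The same recipe generalizes directly to designs too complex for formulas.

```python
import numpy as np
from scipy.stats import ttest_ind

def simulated_power(n_per_arm=100, effect=0.4, alpha=0.05,
                    n_sims=10_000, seed=1):
    """Monte Carlo estimate of power for a two-arm t-test."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)   # unit-variance endpoint
        treated = rng.normal(effect, 1.0, n_per_arm)
        if ttest_ind(treated, control).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print(simulated_power())   # roughly 0.80 for these illustrative settings
```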

In the past 10 years, the acceptance of adaptive designs has also increased. Back in the day, when Cyrus started talking to people about adaptive designs, they did not understand the need for them. Moreover, in some settings the old methods worked very well, while in others they did not. So it is important to know when adaptive designs should be selected over traditional designs.

One of the biggest criticisms of clinical trials is that they work with a very narrow segment of the potential population and then show promising results, but in medical practice the promise frequently does not materialize. Earlier, one made assumptions to select a design and did some sensitivity analysis around it. Now there is a much larger effort to examine a range of scenarios that reflect what might actually happen. We look for robust solutions rather than designs that are optimal only under tenuous assumptions. Sponsors are looking for designs that will stand up to a lot of uncertainty.
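One simple way to operationalize that robustness (a sketch with invented numbers, using a closed-form normal approximation to power rather than any specific sponsor's method): evaluate each candidate design over a grid of plausible effect sizes and rank designs by worst-case power instead of power under a single assumed effect.

```python
import numpy as np
from scipy.stats import norm

def power(n_per_arm, effect, alpha=0.05):
    """Approximate power of a two-sided z-test, unit-variance endpoint."""
    se = np.sqrt(2.0 / n_per_arm)
    return norm.sf(norm.ppf(1 - alpha / 2) - effect / se)

scenarios = [0.25, 0.30, 0.35, 0.40]   # plausible true effect sizes
designs = [100, 150, 200, 250]         # candidate per-arm sample sizes
for n in designs:
    worst = min(power(n, e) for e in scenarios)
    print(f"n/arm = {n}: worst-case power = {worst:.2f}")
```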

Given all these changes, what do you think the field will look like 35 years from now? Do you predict any paradigm shifts? You said that Cytel was prepared for computing because we could understand, even in the 1980s, 1990s, and early 2000s, the synergy between statistics and operations research. Based on your position now, what do you see coming downstream toward us?

Today, in software development at Cytel, statisticians develop prototypes for functions in products in R. These are too slow to deploy as engines in industrial-strength products. Software engineers code the logic in C and C++ to provide the necessary speed. This transfer is time-consuming and often requires rework when the production engines are tested. One of the biggest developments I can see coming up in the next few years is a language called Julia, which solves this two-language problem. Julia has largely been used in physics and engineering domains, but it has begun to be deployed in products in the life sciences. I believe Julia has the potential to greatly reduce development costs as well as time to market for our software products.

Julia's syntax is designed to feel familiar to Python users, which reduces the learning time for statisticians and engineers, and it provides built-in support for parallel programming that today requires a large amount of customization work.

Another major advantage is that programs in Julia interface smoothly with programs in R and C++, so we can transition gradually to deploying Julia in our products alongside earlier programs written in those languages. We have a small research project in the Innovation Center to explore the promise of Julia.

Would you say we are heading toward a universal computer language?

I don't think there will ever be one universal language; there will always be specialized languages. Especially now, when computing is reaching into so many different fields, each discipline will want a language that best captures the constructs most meaningful to it.

Do you predict a specialist model or a generalist model in 2050?

I think it will be a combination of both. We need specialists because of the accelerated pace of learning in so many fields. But we also need generalists to provide the connective tissue that is essential for effective cross-disciplinary teams.


Are there any bits of wisdom you would share with young people in the field today about how to push the boundaries of science forward?

In my opinion, there is room for play in work; in other words, there should be some elements of your work that seem like just plain fun. One needs to be ready to experiment and be open to change to avoid becoming obsolete. It is good to have some breadth along with depth in your specialization. I always say, try to be at least a T-shaped person, if not a π-shaped person: have breadth across your skill set, plus depth in one or two areas.

 

Read the full 35th anniversary interview series here:

Download Ebook

 
