
Highlights from the 2020 Virtual CDISC EU Interchange by Angelo Tinazzi

In early March, when countries around the world started implementing lockdowns, the European CDISC Committee (E3C) together with CDISC decided to cancel our physical event in Berlin, planned for April 1-2, 2020. It was a tough decision, but unavoidable and necessary.

We did not let this dampen our spirits and immediately came up with an alternative plan: go virtual with the event! In only two weeks the team managed to pull together a revised program, and registrations opened on the CDISC website. The scale of the event went from Europe-only to global, and around 300 people attended worldwide. In the end, the event was a hit. Everything worked out very well, with no major technical disruptions, and the speakers respected their allocated time slots.

In this two-part blog post, I share a summary of the sessions I was able to attend, while simultaneously ensuring business continuity for my regular projects.

Our captain and E3C chair, Jörg Dillert of Oracle, opened the event on April 1, introducing the E3C with the best possible analogy: football (or soccer, for Americans).

Although I am not a big fan of plenary sessions, this time I pushed myself to be more attentive and attended all of them. I was particularly impressed by the “cry of alarm” session by Professor James N'Dow from the University of Aberdeen. He explained not only that “90% of what is published in the scientific literature is unreliable and unfit to trigger a change of Clinical Practice Guideline Recommendations”, but also that the remaining “10% high quality is ignored by a significant number of urologists”. This is a cry of alarm in his own medical field, but I believe the same can be said of many other medical areas. It is where the medical scientific community is struggling, and better use of data, networking and international collaboration is the probable solution.

One of the benefits of attending the CDISC-EU Interchange, for me, is the availability of regulatory updates on data submission topics from the three main health authorities: the US FDA, the Japanese PMDA and the European EMA. This year, PMDA and FDA provided a good set of technical details and clarifications on their current plans and positions on certain specific topics, and that is my focus for this post.

Dr. Yuki Ando, PMDA, described their requirement that for all submissions after March 31, 2020, clinical studies should be submitted in a format conforming to the CDISC standards. Dr. Ando also spoke about PMDA’s position on the use of specific software for validating datasets’ CDISC conformance, clarifying that they do not expect sponsors to use any specific software or version, as long as sponsors are able to validate their data package against the PMDA validation rules.

Dr. Matilde Kam from the CDER division of the FDA also clarified several expectations of FDA biostatisticians when they receive a data submission package, particularly during the “preliminary filing review”. Here are some guidelines worth noting:

  • Define-xml should be sufficiently detailed
  • Datasets containing the primary endpoints should be clearly identified
  • Analysis datasets should be sufficiently structured and defined to permit the analysis and reproduction of the primary endpoints
  • Safety data should be organized to permit analysis across different trials within a given NDA submission
  • Data should be accessible, sufficiently documented and of required quality
  • Analysis datasets should be traceable back to raw/SDTM data, and should allow reproduction of the sponsor’s results

Dr. Kam also presented some interesting results from an FDA internal survey project to identify easy versus difficult-to-review NDA/BLA data submissions, which reported areas of improvement in submission data packages for sponsors. Among the 42 reviewed submissions, received between 2015 and 2019, “40% of them were classified as Difficult to Review”. The primary challenge was the lack of documentation and detail provided in the ADRG, the define-xml and the submitted analysis programs: “94% of difficult submissions received an information request for programs and 40% of difficult submissions did not provide any programs vs 100% in easy submissions.”

Dr. Kam also stressed the following aspects:

For the ADRG:
  • Be clear and concise
  • Include graphics to show data dependencies to facilitate traceability understanding

For the define-xml:
  • Use English and pseudo code; include complete logic and details about input datasets and parameters
  • Describe derivations in the define-xml rather than in the ADRG (with a few exceptions, such as when derivations are too complex and require several steps to describe; in such cases, the derivations provided in the define-xml can be supported by additional details in the ADRG)

For submitted analysis programs:
  • Ensure the programs are not too complex to be understood
  • Avoid extensive use of macros, and provide any macros used for the analysis
  • Elaborate on the steps involved
  • Avoid extensive data manipulation in analysis programs, and cover this in the ADaM datasets instead (a minimal sketch follows this list)
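
To illustrate the point about keeping analysis programs simple and self-contained, here is a minimal sketch of what such a program could look like. It is written in Python/pandas purely for illustration (submitted programs are more commonly SAS or R), and the dataset and variable names (adeff.xpt, PARAMCD, ANL01FL, AVAL, TRTP) are assumed ADaM-style conventions, not examples taken from the presentation:

    # Minimal, illustrative sketch only: a self-contained analysis step that reads
    # one analysis-ready ADaM dataset and summarizes the primary endpoint with
    # almost no data manipulation, because the derivations are assumed to already
    # live in the ADaM data.
    import pandas as pd

    # Read the analysis dataset (XPT is the usual submission transport format)
    adeff = pd.read_sas("adeff.xpt", format="xport", encoding="utf-8")

    # Keep only the primary-endpoint parameter and the records flagged for analysis
    primary = adeff[(adeff["PARAMCD"] == "PCHG") & (adeff["ANL01FL"] == "Y")]

    # Descriptive statistics of the analysis value by planned treatment group
    summary = (
        primary.groupby("TRTP")["AVAL"]
               .agg(n="count", mean="mean", sd="std")
               .round(2)
    )
    print(summary)

The intent is that a reviewer can open a single program, see which dataset and records feed the primary analysis, and rerun it without untangling layers of macros or rebuilding derived variables.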

For ADaM datasets:
  • Make use of intermediate datasets
  • Keep SDTM variables to improve traceability, e.g. --SEQ
  • Use flags to indicate records or subjects to select
  • Use DTYPE when imputations are performed
  • Provide all datasets used for analysis (a short illustrative sketch follows this list)
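
As a small illustration of the traceability features above, here is a sketch of a few rows of an ADaM-style dataset built in Python/pandas. The variable names (USUBJID, LBSEQ, PARAMCD, AVAL, DTYPE, ANL01FL) follow common SDTM/ADaM conventions, but the dataset and its values are invented for illustration and are not taken from the presentation:

    # Illustrative sketch only: a few invented rows of an ADaM-style dataset
    # showing the traceability features listed above.
    import pandas as pd

    adlb = pd.DataFrame(
        {
            "USUBJID": ["01-001", "01-001", "01-002"],
            "LBSEQ":   [4, 7, None],       # SDTM --SEQ kept so each record traces back to its source
            "PARAMCD": ["ALT", "ALT", "ALT"],
            "AVAL":    [31.0, 28.0, 35.0],
            "DTYPE":   ["", "", "LOCF"],   # populated only when the analysis value is imputed
            "ANL01FL": ["Y", "", "Y"],     # flag identifying the records selected for analysis
        }
    )
    print(adlb)

Keeping the source sequence number, together with DTYPE and analysis flags, lets a reviewer see which SDTM records fed each analysis value, which values were imputed, and which records were actually used in the analysis.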

All these recommendations are in line with the implementation of CDISC standards, particularly ADaM, and with the official CDISC trainings we provide as authorized CDISC ADaM instructors. Moreover, I was glad to see the stress on quality, as my presentation “Re-mastering the define.xml and its brother the reviewer guide” also emphasized the need for better-quality CDISC documentation.

In the next part of this blog, I will give you a brief overview of the other sessions I attended at CDISC-EU Interchange and my overall takeaway from the event.


 

Access the recording of Angelo’s presentation at CDISC-EU Interchange.


For the “CDISC 360 preliminary demonstrations”, CDISC has organized a separate virtual event.

 

About Angelo Tinazzi

Angelo Tinazzi is Senior Director, Statistical Programming, Clinical Data Standards and Clinical Data Submission at Cytel. He is a well-published and recognized expert in statistical programming with over 20 years' experience in clinical research. The application of CDISC standards in different therapeutic areas has been part of his core expertise since 2003, in particular in the context of data submission to health authorities such as the FDA and PMDA.

Angelo is an authorized CDISC instructor and member of the CDISC ADaM Team as well as the CDISC European Committee where he also manages the Italian-speaking CDISC User Network.

 
