One of my wife’s favorite TV shows is ‘Quattro Ristoranti’ (Four Restaurants). In each episode, four restaurants of the same style are assessed and the one with the best evaluation wins the prize. One of the first things the presenter Alessandro Borghese, a famous Italian chef, does while visiting a restaurant is to assess (of course!) the kitchen and how clean it and its tools are. This assessment can have a big impact on the final outcome regardless of the quality of the food served: the state of the kitchen and its cleanliness influences Borghese’s faith in the chef’s work.
This is exactly what can happen in a data submission to health authorities such as the FDA: the efficacy and safety of your drug are of course what matter, but lack of traceability, or poor or insufficient documentation, might trigger questions and concerns from the reviewer. While this might not change the final outcome of your submission, approval could be delayed if the reviewer starts questioning what you have done, requesting changes or new deliverables to clarify aspects that were not sufficiently clear in your original submission.
In my career, I have been exposed to several studies requiring the use of CDISC standards, either as lead study programmer or as a CDISC subject matter expert reviewing CDISC packages. In this capacity, I have seen many define.xml files and reviewer’s guides that demonstrate how differently individual "users" and companies approach the same mapping issue in SDTM or the same analysis "modelling" in ADaM. I have also observed wide variation in the level of detail provided, for example, in a reviewer’s guide or in the computational algorithm used to describe a derivation in an ADaM define.xml.
Moreover, people very often pay different levels of attention to detail, perhaps (incorrectly) thinking the details do not matter. If, for example, you created an SDTM dataset and assigned the wrong label to a variable (not as per the SDTM IG), or did not assign a label at all, and Pinnacle 21 flagged this in its validation report, why not correct the issue in the dataset instead of leaving it as it is and justifying it in the reviewer’s guide as a "Programmatic Error"?
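A check like this can be automated before Pinnacle 21 even runs. Here is a minimal sketch in Python, assuming the dataset's labels are available as a simple mapping from variable name to label; the IG labels shown are real AE-domain labels, but the dictionary and function are illustrative, not part of any Pinnacle 21 tooling:

```python
# Compare the labels actually assigned in a dataset against the labels
# required by the SDTM IG, so mismatches are fixed in the data rather
# than justified in the reviewer's guide as a "Programmatic Error".

# Small excerpt of SDTM IG labels for the AE domain (illustrative subset)
IG_LABELS = {
    "AETERM": "Reported Term for the Adverse Event",
    "AEDECOD": "Dictionary-Derived Term",
    "AESEV": "Severity/Intensity",
}

def check_labels(dataset_labels, ig_labels=IG_LABELS):
    """Return (variable, found, expected) for every variable whose
    label is missing or differs from the IG label."""
    issues = []
    for var, expected in ig_labels.items():
        found = dataset_labels.get(var)
        if found != expected:
            issues.append((var, found, expected))
    return issues

# Example: AESEV has a wrong label and AEDECOD has none at all
labels_in_dataset = {
    "AETERM": "Reported Term for the Adverse Event",
    "AESEV": "Severity",
}
for var, found, expected in check_labels(labels_in_dataset):
    print(f"{var}: found {found!r}, expected {expected!r}")
# AEDECOD: found None, expected 'Dictionary-Derived Term'
# AESEV: found 'Severity', expected 'Severity/Intensity'
```

In practice the dataset side of the comparison would come from the XPT metadata rather than a hand-written dictionary, but the principle is the same: catch and fix the mismatch at the source.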
We see, both from the mock-up/test submissions we run with the FDA and from those our clients run with their CROs, that agency concern about such details is increasing. Examples include variables in define.xml that should have controlled terminology assigned but do not, improper use of variables in ADaM, and incomplete reviewer’s guide documentation, such as not providing a rationale for every warning and error Pinnacle 21 reported during validation of your package. All of these issues attract FDA concern, and you are advised to correct them.
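The first of those issues — variables that should have controlled terminology assigned but do not — can be pre-screened programmatically. A minimal Python sketch, assuming a Define-XML 2.0 file (ODM v1.3 namespace); the fragment, OIDs, and the set of CT-expected variables below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Default namespace used by Define-XML 2.0 (built on ODM v1.3)
ODM = "{http://www.cdisc.org/ns/odm/v1.3}"

def missing_codelists(define_xml, expected_ct_vars):
    """Return variable names that should have controlled terminology
    but whose ItemDef carries no CodeListRef in define.xml."""
    root = ET.fromstring(define_xml)
    missing = []
    for item in root.iter(f"{ODM}ItemDef"):
        name = item.get("Name")
        if name in expected_ct_vars and item.find(f"{ODM}CodeListRef") is None:
            missing.append(name)
    return missing

# Tiny illustrative fragment: SEX has a CodeListRef, AESEV does not
snippet = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <ItemDef OID="IT.DM.SEX" Name="SEX" DataType="text">
    <CodeListRef CodeListOID="CL.SEX"/>
  </ItemDef>
  <ItemDef OID="IT.AE.AESEV" Name="AESEV" DataType="text"/>
</ODM>"""
print(missing_codelists(snippet, {"SEX", "AESEV"}))  # -> ['AESEV']
```

A real pre-screen would take the expected-CT list from the SDTM IG metadata rather than hard-coding it, but even this simple pass catches the gap before a reviewer does.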
You may think these are minor issues because they do not ultimately impact any results. However, you are risking your credibility with the FDA reviewer, who may conclude that your package is not of good quality. Further, if you do not address these issues in time, you might receive a request from the FDA to correct them just when you think you are done and your package is ready to be delivered. Fixing, for example, an SDTM dataset can have a "cascade" effect, as you might also need to re-run other datasets (ADaM included) or even re-generate the outputs.
Proper documentation is of vital importance and can be a success factor for your submission. Do not cut corners! Try to imagine that you are the "recipient" of such a package and check, for example, whether the way you explained your derivation in define.xml is clear enough. Do not hesitate to find alternative ways if you identify that define.xml is not the most appropriate "tool" to describe a complex derivation. Ask yourself whether you need more than 1,000 characters to describe your derivation, and whether you might instead describe your complex algorithm in a separate document (e.g. a PDF that you hyperlink from the define.xml). Note that while Pinnacle 21 may report an error when a define.xml field exceeds 1,000 characters, this is no longer an issue when submitting to the FDA, so define fields longer than 1,000 characters are now acceptable.
This topic, together with other recommendations, will be the subject of my poster "The « CDISC Stupidario » (the CDISC Nonsense)", which I will present at PhUSE in Frankfurt. If you are attending the conference on November 4-7, do not hesitate to join me and my Cytel colleagues at the poster session on Sunday night or at Cytel booth number 6, and see how we can help make your submission package "FDA concern-free".
In the meantime, click the button below to watch our webinar replay "Decisions for your next trial: When to adopt the CDISC data standard".
This is the first blog in a new series from Angelo Tinazzi, 'The Good Data Submission Doctor'.
About the author
Angelo Tinazzi is Director, Statistical Programming, Clinical Data Standards and Clinical Data Submission at Cytel. He is a well-published and recognized expert in statistical programming with over 20 years' experience in clinical research. The application of CDISC standards in different therapeutic areas has been part of his core expertise since 2003, in particular in the context of data submissions to health authorities such as the FDA and PMDA.
Angelo is an authorized CDISC instructor and member of the CDISC ADaM Team as well as the CDISC European Committee where he manages the Italian-speaking CDISC User Network.