All Times EDT
In the two years since the release of the draft ICH E9 Addendum (R1), the scientific community and industry have made great progress in further advancing this topic and resolving various issues with implementing the guidance in clinical practice. The COVID-19 pandemic caused a dramatic increase in the number of intercurrent events (ICEs), serving as a natural stress test for the proper handling of ICEs in study protocols and statistical analysis plans. This short course will cover our learning from implementing the addendum during the pandemic. Specifically, the outline of the short course is:
• The key elements and concepts of the ICH E9 (R1) Addendum
• Defining estimands in the presence of ICEs using potential outcomes based on the causal inference framework
• Handling ICEs using a mix of strategies depending on the type of ICE, emphasizing improved practices for collecting the reasons for treatment discontinuations (a major type of ICE)
• Overview of statistical methods for handling missing values and guidance on using appropriate methods tailored to the estimand(s) of interest
• Overview of principal stratification methods in the context of estimands incorporating ICEs (if time permits)
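One of the imputation strategies studied by the instructors (return-to-baseline multiple imputation; see Qu & Dai 2021 in the publication list) can be sketched in a few lines. This is a toy illustration with made-up numbers and a deliberately crude variance model, not the authors' method:

```python
import random
import statistics

def return_to_baseline_impute(baseline, post, seed=0):
    """Replace missing post-treatment values (None) with draws centered at
    each subject's own baseline, i.e. assume the treatment effect has
    'washed out' after the intercurrent event. The spread used here is a
    crude stand-in estimated from completers' within-subject changes."""
    rng = random.Random(seed)
    changes = [p - b for b, p in zip(baseline, post) if p is not None]
    sd = statistics.stdev(changes) if len(changes) > 1 else 0.0
    return [p if p is not None else rng.gauss(b, sd)
            for b, p in zip(baseline, post)]

# hypothetical HbA1c-like values; None marks data missing after an ICE
baseline = [7.1, 8.0, 7.5, 9.2]
post = [6.5, None, 7.1, None]
completed = return_to_baseline_impute(baseline, post)
```

In a real multiple-imputation analysis this draw would be repeated across several imputed datasets and the results combined with Rubin's rules.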
Yongming Qu, PhD, Sr. Research Fellow at Eli Lilly and Company. Dr. Qu obtained his PhD in statistics from Iowa State University in 2002. He has extensive experience supporting and leading Phase 1-4 clinical development in the pharmaceutical industry. He is an ASA Fellow and has been an active researcher in improving statistical methods in drug development, with more than 80 articles published in statistical and medical journals. Recently, he has published several articles on estimands and missing data imputation and is a technical leader driving the implementation of ICH E9 (R1) in clinical studies at Eli Lilly and Company. He has recently given invited presentations on estimands and missing data at the EFSPI Webinar (2020), ENAR webinar (2020), BASS (2020), China DIA (2020), PSI (2021), DIA webinar (2021), and RISW invited sessions (2019, 2021).
Ilya Lipkovich, PhD, Sr. Research Advisor at Eli Lilly and Company. Dr. Lipkovich received his PhD in Statistics from Virginia Tech in 2002 and has 20 years of statistical consulting experience in the pharmaceutical industry. He is an ASA Fellow and has published more than 50 articles in statistical and medical journals on subgroup identification in clinical data, analysis with missing data, and causal inference, as well as the book “Analyzing Longitudinal Clinical Trial Data: A Practical Guide.” He has frequently taught short courses and webinars on these topics. Recently, he has published several articles connecting estimands with missing data and causal inference and co-authored the book “Estimands, Estimators and Sensitivity Analysis in Clinical Trials.”
Selected publications co-authored by instructors that are relevant to the short course:
Lipkovich, I., Ratitch, B., & Mallinckrodt, C. H. (2020). Causal inference and estimands in clinical trials. Statistics in Biopharmaceutical Research, 12(1), 54-67.
Luo, J., Ruberg, S. J., & Qu, Y. (2021). Estimating the treatment effect for adherers using multiple imputation. Pharmaceutical Statistics, in press. arXiv preprint arXiv:2102.03499.
Mallinckrodt, C.H., Bell, J., Liu, G., Ratitch, B., O'Kelly, M., Lipkovich, I., Singh, P., Xu, L., & Molenberghs, G. (2020). Aligning estimators with estimands in clinical trials: Putting the ICH E9(R1) guidelines into practice. Therapeutic Innovation & Regulatory Science, 54(2), 353-364.
Mallinckrodt, C., Molenberghs, G., Lipkovich, I., and Ratitch, B. (2020), Estimands, Estimators and Sensitivity Analysis in Clinical Trials, Chapman & Hall/CRC Biostatistics Series, Boca Raton, FL: Chapman & Hall/CRC Press.
Mallinckrodt, C.H., Lin, Q., Lipkovich, I., Molenberghs, G. (2012). A structured approach to choosing estimands and estimators in longitudinal clinical trials. Pharmaceutical Statistics,11, 456-461.
Qu, Y., & Dai, B. (2021). Return-to-baseline multiple imputation for missing values in clinical trials. arXiv preprint arXiv:2111.09423.
Qu, Y., & Lipkovich, I. (2021). Implementation of ICH E9 (R1): A few points learned during the COVID-19 pandemic. Therapeutic Innovation & Regulatory Science 55, 984–988.
Qu, Y., Luo, J., & Ruberg, S. J. (2021). Implementation of tripartite estimands using adherence causal estimators under the causal inference framework. Pharmaceutical Statistics, 20(1), 55-67.
Qu, Y., Shurzinske, L., & Sethuraman, S. (2021). Defining estimands using a mix of strategies to handle intercurrent events in clinical trials. Pharmaceutical Statistics, 20(2), 314-323.
Ratitch, B., Bell, J., Mallinckrodt, C., Bartlett, J. W., Goel, N., Molenberghs, G., ... & Lipkovich, I. (2020). Choosing estimands in clinical trials: Putting the ICH E9 (R1) into practice. Therapeutic Innovation & Regulatory Science, 54(2), 324-341.
Ratitch, B., Goel, N., Mallinckrodt, C., Bell, J., Bartlett, J.W., Molenberghs, G., Singh, P., Lipkovich, I., & O’Kelly, M. (2020). Defining efficacy estimands in clinical trials: Examples illustrating ICH E9 (R1) guidelines. Therapeutic Innovation & Regulatory Science, 54(2), 370-384.
Wiens, B.L., & Lipkovich, I. (2020). The impact of major events on ongoing noninferiority trials, with application to COVID-19. Statistics in Biopharmaceutical Research, 12(4), 443–450.
Zhang, Y., Fu, H., Ruberg, S. J., & Qu, Y. (2021). Statistical inference on the estimators of the adherer average causal effect. Statistics in Biopharmaceutical Research, 1-4.
As part of addressing the complexity, depth, and breadth of drug development, the 21st Century Cures Act mandates that the FDA explore the use of novel designs, including Bayesian designs, in drug development. To satisfy this mandate, the FDA published the guidance “Adaptive Designs for Clinical Trials of Drugs and Biologics” in 2019 and “Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products” in 2020. To date, Bayesian analyses have been used in Phase I and Phase II studies. They have also been used in late-phase trials, e.g., to help provide the substantial evidence and needed confidence for the approval of belimumab.
In this course, Dr. Yuan and Dr. Travis will give an overview of Bayesian methods used to leverage external data in the design and analysis of a new study of interest. External data broadly include historical data, natural history data, data from similar populations, and data on drugs from the same drug class. The Bayesian framework provides a flexible way to integrate external data to improve inference for the study of interest, potentially addressing many practical issues in clinical trials, such as the scarcity of pediatric patients and patients with rare diseases, high cost, and lack of efficiency.
In Part I, Dr. Yuan will give an overview of static and dynamic borrowing methods, including the Bayesian hierarchical model (BHM), power prior, commensurate prior, and robust meta-analytic predictive prior, developed between 2001 and 2011, as well as the multisource exchangeability model, calibrated BHM, optimal BHM, and elastic prior, developed between 2011 and 2021. The objective of these methods is to encourage information borrowing when historical and trial data are “similar” and refrain from information borrowing when historical and trial data are “different”. The pros and cons of the methods, as well as the connections among them, will be elucidated.
In Part II, Dr. Travis will discuss potential applications in clinical trials from a regulatory perspective and present a case-study example (or mock example) to demonstrate the use of these methods. Mock R code will be provided during the case-study illustration.
During the training, participants will (1) learn the general ideas of Bayesian approaches to borrowing information from external data and the assumptions these approaches make; (2) understand the difference between static and dynamic borrowing, as well as their potential applications; (3) understand the mechanisms for controlling borrowing strength and the links between the methods; and (4) apply these methods to real examples with hands-on programming training.
Instructors’ background:
Ying Yuan, PhD, is the Bettyann Asche Murray Distinguished Professor and Deputy Chair in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center. Dr. Yuan has published over 100 statistical methodology papers on innovative Bayesian adaptive designs, including early-phase trials, seamless trials, biomarker-guided trials, and basket and platform trials. The designs and software developed by Dr. Yuan’s lab have been widely used in medical research institutes and pharmaceutical companies. Dr. Yuan is the Chair of the Data Safety Monitoring Board (DSMB) of MD Anderson Cancer Center and an elected Fellow of the American Statistical Association.
James Travis is a lead mathematical statistician supporting the Division of Pediatric and Maternal Health in the Center for Drug Evaluation and Research. He is a member of the Complex and Innovative Trial Designs Program steering committee and the Office of Biostatistics Bayesian and Pediatrics committees. He has provided many training sessions on Bayesian approaches to FDA colleagues. He has research interests in the use of external data in the analysis of pediatric clinical trials. He joined the FDA in 2014 and received his PhD from the University of Maryland Baltimore County.
In May 2021, the U.S. Food and Drug Administration (FDA) released a revised draft guidance for industry on “Adjustment for Covariates in Randomized Clinical Trials for Drugs and Biological Products”. Covariate adjustment is a statistical analysis method for improving precision and power in clinical trials by adjusting for pre-specified, prognostic baseline variables. Here, the term “covariates” refers to baseline variables, that is, variables that are measured before randomization such as age, gender, BMI, and comorbidities. The resulting sample size reductions can lead to substantial cost savings, and also can lead to more ethical trials since they avoid exposing more participants than necessary to experimental treatments. Though covariate adjustment is recommended by the FDA and the European Medicines Agency (EMA), many trials do not exploit the available information in baseline variables or only make use of the baseline measurement of the outcome.
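The precision gain from adjusting for a prognostic baseline covariate can be seen in a toy simulation. The sketch below uses a simple ANCOVA-type estimator with entirely hypothetical data; it is an illustration of the general idea, not the workshop's software or recommended estimator:

```python
import random

def adjusted_diff(y, x, t):
    """ANCOVA-type covariate-adjusted treatment effect: the difference in
    outcome means, corrected by the pooled within-arm slope of y on x
    times the chance imbalance in covariate means between arms."""
    y1 = [yi for yi, ti in zip(y, t) if ti]
    y0 = [yi for yi, ti in zip(y, t) if not ti]
    x1 = [xi for xi, ti in zip(x, t) if ti]
    x0 = [xi for xi, ti in zip(x, t) if not ti]
    my1, my0 = sum(y1) / len(y1), sum(y0) / len(y0)
    mx1, mx0 = sum(x1) / len(x1), sum(x0) / len(x0)
    num = sum((a - mx1) * (b - my1) for a, b in zip(x1, y1)) + \
          sum((a - mx0) * (b - my0) for a, b in zip(x0, y0))
    den = sum((a - mx1) ** 2 for a in x1) + sum((a - mx0) ** 2 for a in x0)
    beta = num / den
    return (my1 - my0) - beta * (mx1 - mx0)

# simulated trials: strongly prognostic baseline covariate, true effect = 1
rng = random.Random(1)
unadj, adj = [], []
for _ in range(500):
    t = [i % 2 == 0 for i in range(100)]
    x = [rng.gauss(0, 1) for _ in range(100)]
    y = [1.0 * ti + 2.0 * xi + rng.gauss(0, 1) for ti, xi in zip(t, x)]
    m1 = sum(yi for yi, ti in zip(y, t) if ti) / 50
    m0 = sum(yi for yi, ti in zip(y, t) if not ti) / 50
    unadj.append(m1 - m0)
    adj.append(adjusted_diff(y, x, t))
var_u = sum((e - 1) ** 2 for e in unadj) / 500
var_a = sum((e - 1) ** 2 for e in adj) / 500
# var_a should be markedly smaller than var_u
```

Both estimators are unbiased here; the adjusted one simply has far less variance, which is the source of the sample size reductions described above.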
In Part 1 of the workshop, we introduce the concept of covariate adjustment. In particular, we explain what covariate adjustment is, how it works, when it may be useful to apply, and how to implement it (in a preplanned way that is robust to model misspecification) for a variety of scenarios. We demonstrate the impact of covariate adjustment using completed trial data sets in multiple disease areas. We provide step-by-step, clear documentation of how to apply the software in each setting. Participants will have the time to apply the software tools on the different datasets in small groups.
In Part 2 of the workshop, we present a new statistical method that enables us to easily combine covariate adjustment with group sequential designs. The result will be faster, more efficient trials for many disease areas, without sacrificing validity or power. This approach can lead to faster trials even when the experimental treatment is ineffective; this may be more ethical in settings where it is desirable to stop as early as possible to avoid unnecessary exposure to side effects. The new statistical method and software will be demonstrated using the same real datasets as in the first part. We will provide step-by-step, clear documentation of how to apply the software in these different settings. Participants will have the time to apply the new software tools on the different datasets in small groups.
Course outline:
40 minutes: Introduction and overview of covariate adjustment (Michael Rosenblum)
10 minutes: Discussion (Q&A time)
5 minutes: Break
20 minutes: Software tools demonstration on covariate adjustment (Michael Rosenblum)
25 minutes: Small group work applying software tools on covariate adjustment (Michael Rosenblum and Kelly Van Lancker)
10 minutes: Break
30 minutes: Presentation of new statistical method to combine covariate adjustment and group sequential designs (Kelly Van Lancker)
10 minutes: Discussion (Q&A time)
15 minutes: Software tools demonstration (Joshua Betz)
5 minutes: Break
30 minutes: Small group work applying software tool on group sequential designs (Michael Rosenblum, Kelly Van Lancker and Joshua Betz)
10 minutes: Discussion (Q&A time)
Target Audience:
The intended audience consists of clinicians and statisticians with statistical training. Participants should be familiar with the following concepts: Type I error, power, bias and variance.
Course Objectives:
1. Participants will learn about the benefits and limitations of using covariate adjustment to analyze data from randomized trials, and how it can be applied to improve precision and speed up trials.
2. Participants will learn key concepts from the recent (May 2021) draft guidance from the FDA on covariate adjustment.
3. Participants will gain experience implementing covariate adjustment on simulated data sets.
Instructors Background:
Michael Rosenblum is a Professor of Biostatistics at Johns Hopkins Bloomberg School of Public Health. His research is in causal inference with a focus on developing new statistical methods and software for the design and analysis of randomized trials, with clinical applications in HIV, Alzheimer’s disease, stroke, and cardiac resynchronization devices. He is funded by the Johns Hopkins Center for Excellence in Regulatory Science and Innovation for the project: “Statistical methods to improve precision and reduce the required sample size in many phase 2 and 3 clinical trials, including COVID-19 trials, by covariate adjustment”.
Dr. Kelly Van Lancker is a postdoctoral researcher in the Biostatistics Department of the Johns Hopkins Bloomberg School of Public Health. She has obtained a PhD in statistics from Ghent University in Belgium. Her primary research interests are the use of causal inference methods in clinical trials and obtaining valid inference when the analysis involves data-adaptive methods, such as variable selection.
Josh Betz is an Assistant Scientist in the Biostatistics department of the Johns Hopkins Bloomberg School of Public Health, and part of the Johns Hopkins Biostatistics Center. His research includes the design, monitoring, and analysis of randomized trials in practice and developing software to assist with randomized trial design.
Relevant Papers:
Wang, B., Susukida, R., Mojtabai, R., Amin-Esmaeili, M., and Rosenblum, M. (2021) Model-Robust Inference for Clinical Trials that Improve Precision by Stratified Randomization and Adjustment for Additional Baseline Variables. Journal of the American Statistical Association, Theory and Methods Section. https://www.tandfonline.com/doi/full/10.1080/01621459.2021.1981338
Benkeser, D., Diaz, I., Luedtke, A., Segal, J., Scharfstein, D., and Rosenblum, M. (2020) Improving Precision and Power in Randomized Trials for COVID-19 Treatments Using Covariate Adjustment, for Binary, Ordinal, or Time to Event Outcomes. Biometrics. This paper was selected to be a discussion paper. https://doi.org/10.1111/biom.13377
Van Lancker, K., Vandebosch, A., & Vansteelandt, S. (2020). Improving interim decisions in randomized trials by exploiting information on short-term endpoints and prognostic baseline covariates. Pharmaceutical statistics, 19(5), 583-601. https://doi.org/10.1002/pst.2014
The COVID-19 pandemic has ignited broad worldwide interest in the development of vaccines. Because of their biological nature, there are many unique statistical issues and challenges in designing and analyzing vaccine clinical trials. Some examples include using immunogenicity to evaluate vaccine effects and manufacturing consistency; the rigorous large studies needed to demonstrate efficacy due to the low incidence rate of disease; identifying and using biomarkers based on correlates of protection; stringent safety requirements because of broad administration to millions of healthy individuals; and the application of innovative designs to speed up development.
This half-day short course will provide an overview of study designs and analysis methods for vaccine clinical trials. Following a general introduction to vaccine development, the course will cover statistical methods for the analysis of immunogenicity, efficacy, and safety. Unique features of vaccine trials, such as non-inferiority designs, lot consistency, correlates of protection, super-superiority studies, and the handling of low-incidence events, will be discussed. Case examples from various vaccine programs will be presented.
Course outline:
1) Introduction to vaccine development and design considerations;
2) Statistical methods for immunogenicity, including non-inferiority comparison, handling of missing data, issues for multivalent vaccines, and lot consistency;
3) Statistical methods for efficacy, including conditional exact methods, adaptive dose-ranging and seamless designs, and Bayesian methods;
4) Correlates of protection, modeling efficacy from immunogenicity markers;
5) Evaluation of safety.
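The central efficacy quantity in such trials is vaccine efficacy, VE = 1 - RR, where RR is the relative risk of disease in the vaccine versus placebo arm. For intuition only, here is a minimal sketch using the large-sample normal approximation on log(RR), with hypothetical counts; regulatory analyses of low-incidence events typically rely on the exact and conditional methods covered in the course:

```python
import math

def vaccine_efficacy_ci(x_v, n_v, x_p, n_p, z=1.96):
    """Point estimate and approximate 95% CI for VE = 1 - RR via the
    textbook normal approximation on log(RR). x_* are case counts,
    n_* are arm sizes. Returns (VE, lower bound, upper bound)."""
    rr = (x_v / n_v) / (x_p / n_p)
    se = math.sqrt(1 / x_v - 1 / n_v + 1 / x_p - 1 / n_p)
    hi = math.exp(math.log(rr) + z * se)   # upper limit for RR
    lo = math.exp(math.log(rr) - z * se)   # lower limit for RR
    return 1 - rr, 1 - hi, 1 - lo          # invert: high RR -> low VE

# hypothetical trial: 8 cases among 15,000 vaccinees, 80 among 15,000 placebo
ve, ve_lo, ve_hi = vaccine_efficacy_ci(8, 15000, 80, 15000)
```

Note how the confidence interval is driven almost entirely by the small case counts, which is why very large trials are needed despite dramatic point estimates.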
Instructors’ background:
Dr. Wenji Pu is a Statistics Director at GlaxoSmithKline (GSK) plc and has more than 18 years of experience in designing and analyzing clinical trials. At GSK, Wenji has worked on many vaccine programs, including respiratory syncytial virus (RSV), herpes zoster, and flu vaccines, and has been a resource for statistical expertise within the GSK vaccine statistics group. His research interests include repeated measurements, categorical data analysis, survival analysis, missing data, and Bayesian adaptive design.
Abstract: The use of open-source R is evolving in drug discovery, research, and development for study design, data analysis, visualization, and report generation in the pharmaceutical industry. In this workshop, we will explore strategies to use R to prepare tables, listings, and figures in a clinical study report and how to prepare the eCTD submission packages for those TLFs and associated source code. The workshop will have three parts.
Part 1, Delivering TLFs in CSR: provides general information with examples to create tables, listings, and figures.
Part 2, Clinical Trial Project: provides general information with examples to manage a clinical trial A&R project.
Part 3, eCTD Submission package: provides general information in preparing a submission package related to clinical study report (CSR) in electronic Common Technical Document (eCTD).
The training is based on an open-source book “R for Clinical Study Reports and Submission” available at https://r4csr.org/ with a demo project https://github.com/elong0527/esubdemo.
Prerequisite: This is an intermediate-level training. We assume participants are familiar with data manipulation in R. Good references include Hands-On Programming with R (https://rstudio-education.github.io/hopr/) and R for Data Science (https://r4ds.had.co.nz/).
The International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) issued an ICH Harmonized Guideline: ICH E9 (R1) Addendum. The US Food and Drug Administration adopted this as guidance in May 2021. The ICH E9 (R1) Addendum introduces the estimand framework for clinical trials to obtain clear and interpretable treatment effects, which enables clear assessment of benefits and risks of treatments. The estimand framework is intended to facilitate the dialogue on drug/biologic development among review disciplines, as well as between Sponsor and Regulator. This course introduces the estimand framework to statisticians, provides tools to specify clinical questions of interest precisely and facilitate cross-disciplinary collaboration, and highlights key concepts with illustrative examples. Multiple opportunities for questions are included throughout the course as well as an interactive practice session for the audience.
Target audience: Statisticians
Objectives: At the end of this training the attendees will be able to:
• Understand the fundamentals of the estimand framework.
• Identify relevant discussion topics and methods for successful cross-disciplinary collaboration, including precise specification of clinical questions of interest.
• Recognize and apply important estimand considerations.
Prerequisites: None
Outline
• Module 1: Overview of the Estimand Framework (Alexei C. Ionan)
• Module 2: Key Considerations (John Scott)
• Module 3: Real Example, Including Productive Cross-Disciplinary Interactions (Susan Mayo and Miya Paterniti)
Relevant Materials
ICH E9 (R1) Addendum on Estimands and Sensitivity Analysis in Clinical Trials to the Guideline on Statistical Principles for Clinical Trials, Step 4, 20 November 2019 https://database.ich.org/sites/default/files/E9-R1_Step4_Guideline_2019_1203.pdf
Instructor Background
Alexei C. Ionan is a Mathematical Statistician in the Division of Biometrics IX of the Office of Biostatistics, supporting application review in the Office of Oncologic Diseases at the FDA. He has over 17 years of experience evaluating, developing, and applying statistical methods in oncology. He leads multiple groups at the FDA. His research interests include Bayesian methods, estimands, decision theory, causal inference, predictive biomarkers, early detection of cancer, and optimal design.
John Scott is Director of the Division of Biostatistics in the FDA's Center for Biologics Evaluation and Research, where he has also served as Deputy Director and as a statistical reviewer for blood products and for cellular, tissue and gene therapies. Prior to joining the FDA in 2008, he worked in psychiatric clinical trials at the Western Psychiatric Institute and Clinic of the University of Pittsburgh Medical Center. He was one of FDA's representatives on the ICH E9(R1) expert working group. Dr. Scott is also the CBER lead for 21st Century Cures and PDUFA VI efforts in Complex and Innovative Trial Design and has been heavily involved in a number of FDA's statistical policy and outreach projects, including the 2019 Adaptive Design Guidance for Drugs and Biologics, the 2020 Guidance on Interacting with the FDA on Complex Innovative Trial Design, and the ICH E20 expert working group on adaptive designs. Dr. Scott has taught numerous internal and external short courses on topics including benefit-risk assessment, multiple endpoints, adaptive clinical trial design, and Bayesian analysis. Dr. Scott holds a Ph.D. in Biostatistics from the University of Pittsburgh, an A.M. in Mathematics from Washington University in St. Louis, and a B.A. in Liberal Arts from Sarah Lawrence College. He is a Fellow of the American Statistical Association and is a past Editor of the journal Pharmaceutical Statistics.
Miya Paterniti is a clinical team leader in the Division of Pulmonology, Allergy, and Critical Care within the Office of New Drugs in the FDA’s Center for Drug Evaluation and Research. She received her M.D. at the University of Maryland and completed her internal medicine residency and fellowship in Allergy and Clinical Immunology at The Johns Hopkins School of Medicine. She is also an Assistant Professor at The Johns Hopkins School of Medicine and a practicing allergist. She has been working on estimands for several years and has presented at multiple conferences on estimands.
Susan Mayo is a senior biostatistical reviewer in Division III, Office of Biostatistics within the Office of New Drugs in the FDA’s Center for Drug Evaluation and Research. She received M.S. degrees from Louisiana State in Applied Statistics and Marine Sciences and began her working career in 1986. She has served as a statistician in small to mid-sized biotechnology and drug delivery companies, consulted for several years with small device companies, and served for many years in large pharmaceutical companies. She joined Office of Biostatistics in 2018 and serves with her colleagues in the Division of Pulmonology, Allergy, and Critical Care. Her interests include estimands, safety statistics, statistical graphics, benefit-risk assessment, and the study of change, used in implementation of these important concepts for benefiting drug development and ultimately, public health.
A Bayesian approach provides a formal framework for incorporating external information into the statistical analysis of a clinical trial. There is intrinsic interest in leveraging all available information for an efficient design and analysis. This allows trials with smaller sample sizes or unequal randomization. Examples include early-phase drug development, occasionally Phase III trials, and special areas such as medical devices, orphan indications, and extrapolation in pediatric studies. Recently, the 21st Century Cures Act and PDUFA VI have encouraged the use of relevant historical data for efficient design. An appropriate statistical method in this context needs to leverage “borrowing” of information while considering the heterogeneity between the historical and current trials. In this short course, we will cover different statistical frameworks for incorporating trial-external evidence, with real-life examples.
We begin by introducing the meta-analytic-predictive (MAP) framework for borrowing historical data. The MAP approach is based on a Bayesian hierarchical model that combines evidence from different sources. It provides a prediction for the current study based on the available information while accounting for the inherent heterogeneity in the data. This approach can be used widely in different clinical trial applications. These applications will be demonstrated using the R package RBesT, the R Bayesian evidence synthesis tools, freely available from CRAN.
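The course demonstrations use RBesT in R. For intuition only, the special case of the MAP prediction with a known between-trial heterogeneity SD has a closed form, sketched here in Python with hypothetical inputs (the full approach places a prior on the heterogeneity and uses MCMC):

```python
def map_prior(means, ses, tau):
    """MAP prediction for a new trial's control parameter under a
    normal-normal hierarchical model with KNOWN between-trial SD tau.
    means/ses are historical point estimates and standard errors.
    Returns (predictive mean, predictive SD) for the new trial."""
    # precision-weighted estimate of the population mean mu
    w = [1.0 / (tau ** 2 + s ** 2) for s in ses]
    mu = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    var_mu = 1.0 / sum(w)
    # predictive distribution adds back the between-trial variability
    return mu, (tau ** 2 + var_mu) ** 0.5

# hypothetical historical control response rates and standard errors
m, sd = map_prior(means=[0.30, 0.25, 0.35], ses=[0.05, 0.06, 0.05], tau=0.02)
```

The predictive SD is always at least tau: even infinitely many historical trials cannot shrink the prior below the between-trial heterogeneity, which is what guards against over-borrowing.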
In the second part of the short course, we focus on the propensity score-integrated power prior approach. The power prior is a useful class of informative priors for external control data: it discounts the likelihood of the external control data directly using a power parameter. However, the choice of the power parameter is tricky in real-life applications. An integrated propensity score-based method, combined with the power prior, can be useful in this context. A two-stage approach provides a paradigm for conducting a comparative observational, non-randomized study within the premarket regulatory setting. The power parameters are calculated using trial data and external controls divided into homogeneous strata by propensity score. Methodological and practical aspects will be discussed to facilitate real-life implementation.
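The core discounting idea of the power prior is easiest to see in the conjugate beta-binomial case. The sketch below uses hypothetical counts and is a caricature of the discounting step only, not the full two-stage propensity score method described above:

```python
def power_prior_posterior(x_c, n_c, x_h, n_h, a0, a=1.0, b=1.0):
    """Posterior Beta(a', b') parameters for a response rate when the
    external control likelihood is raised to the power a0 in [0, 1].
    a0 = 0 ignores the historical data; a0 = 1 pools it fully."""
    return (a + x_c + a0 * x_h,
            b + (n_c - x_c) + a0 * (n_h - x_h))

# hypothetical: 12/40 responders in the current control arm,
# 45/120 responders in the external control cohort
no_borrow = power_prior_posterior(12, 40, 45, 120, a0=0.0)
full_pool = power_prior_posterior(12, 40, 45, 120, a0=1.0)
half = power_prior_posterior(12, 40, 45, 120, a0=0.5)
```

With a0 = 0.5 the 120 external patients contribute the information of only 60, which is exactly the "discounting" knob whose data-driven choice the course addresses.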
Outline
I. Introduction: Motivation and general framework
II. Overview of available methods
III. Meta-analytic Predictive (MAP) Prior
a. MAP approach for the analysis of a new trial using historical controls
b. Design Considerations
c. Implementation in real-life using RBesT: Case studies
d. Extension of meta-analytic framework
IV. Propensity score approaches
a. Score-integrated power prior
b. Composite likelihood approach
c. Design and sample size considerations
d. Real life application
V. Regulatory perspective of using trial external information
VI. Concluding Remarks and Discussion
PRESENTER(S):
Dr. Satrajit Roychoudhury is a Senior Director and a member of the Statistical Research and Innovation group at Pfizer Inc. Prior to joining Pfizer, he was a member of the Statistical Methodology and Consulting group at Novartis. He started his career as a research statistician at the Schering-Plough Research Institute (now Merck & Co.). He has 15 years of extensive experience working in different phases of clinical trials. His primary expertise is the implementation of innovative statistical methodology in clinical trials. He has co-authored several publications and book chapters in this area and provided statistical training at major conferences. His areas of research include survival analysis, model-informed drug development, and Bayesian methods in clinical trials.
Dr. Ram C. Tiwari is a Senior Director and head of Statistical Methodology at the BMS Lawrenceville site in New Jersey. In this position, Ram is responsible for promoting the use of novel statistical methods and innovative clinical study designs in drug development. Prior to joining BMS, he spent 20+ years serving at NIH/NCI and the FDA, and over 20 years in academia as Professor and Chair of the Department of Mathematics at the University of North Carolina at Charlotte. He received his PhD from Florida State University and is a Fellow of the American Statistical Association. Ram has published over 200 papers on various topics in statistics, and a book, “Signal Detection for Medical Scientists: Likelihood Ratio Test-based Methodology” (2021, Taylor & Francis).
The amount of real-world data (RWD) collected from sources other than protocol-driven clinical studies is increasing rapidly. Such sources include procedure or disease registries, electronic health records, electronic insurance claims databases, and patient-reported outcomes. The clinical evidence derived from analysis of these RWD is considered real-world evidence (RWE), which can complement the knowledge derived from traditional well-controlled clinical trials. Statistically, leveraging RWE can be viewed as "borrowing" data collected on patients from RWD sources to augment a prospective investigational study and reduce the required number of prospectively enrolled patients. Leveraging RWE can therefore save time and cost for the investigational study, thereby improving the efficiency of regulatory decision-making.
Incorporating RWD in regulatory decision-making demands much more than "mixing" RWD with investigational clinical trial data. The RWD has to undergo appropriate analysis for deriving the right RWE. Moreover, such analysis has to be integrated with the design and analysis of the investigational study for regulatory decision-making. The standard clinical trial toolbox does not offer ready solutions for incorporating RWD. Therefore, there is an unmet need for sound clinical trial design and analysis for leveraging RWE in clinical evaluations.
Course Outline and Main Topics:
In this course, the instructors will cover a series of methods they have developed for leveraging real-world data in clinical trial design and analysis. Notably, this work has been recognized by the FDA: in 2020 it received The FDA CDRH Excellence in Scientific Research Award (External Evidence Methods Research, Group) and The FDA Scientific Achievement Award (Excellence in Analytical Science, Group) for extraordinary achievements in the timely development and active promotion of novel statistical methods for leveraging real-world evidence to support regulatory decision-making.
In Part I of the course, we introduce a new method for proposing performance goals (numerical target values pertaining to effectiveness or safety endpoints in single-arm medical product clinical studies) by leveraging RWE. The method applies entropy balancing to address possible patient dissimilarities between the study’s target patient population and existing real-world patients, and can take into account operational differences between clinical studies and real-world clinical practice.
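Entropy balancing reweights real-world patients so that weighted covariate moments match the target population. A one-covariate sketch (exponential tilting weights with the multiplier solved by bisection; hypothetical ages) conveys the idea; the actual method balances many moments simultaneously:

```python
import math

def entropy_balance_weights(x, target_mean, tol=1e-12):
    """Weights w_i proportional to exp(lam * x_i), with lam chosen so the
    weighted mean of x equals target_mean. One-covariate illustration of
    entropy balancing; assumes target_mean is close to mean(x)."""
    m = sum(x) / len(x)
    xc = [xi - m for xi in x]          # center to keep exp() stable
    t = target_mean - m

    def wmean(lam):
        w = [math.exp(lam * v) for v in xc]
        return sum(wi * v for wi, v in zip(w, xc)) / sum(w)

    lo, hi = -1.0, 1.0                 # wmean is increasing in lam; bisect
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if wmean(mid) < t:
            lo = mid
        else:
            hi = mid
    w = [math.exp(((lo + hi) / 2) * v) for v in xc]
    s = sum(w)
    return [wi / s for wi in w]

x = [50, 55, 60, 65, 70]                   # ages of real-world patients
w = entropy_balance_weights(x, 62.0)       # target: trial-population mean age
```

Among all weightings that hit the target moments, entropy balancing picks the one closest (in Kullback-Leibler divergence) to uniform weights, which keeps the effective sample size as large as possible.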
In Part II of the course, we introduce a method that extends the Bayesian power prior approach for a single-arm study to leverage external RWD. The method uses propensity score methodology to pre-select a subset of RWD patients that are similar to those in the current study in terms of covariates, and to stratify the selected patients together with those in the current study into more homogeneous strata. The power prior approach is then applied in each stratum to obtain stratum-specific posterior distributions, which are combined to complete the Bayesian inference for the parameters of interest.
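A schematic of this stratified borrowing idea, with hypothetical counts and per-stratum discount factors (the actual PS-integrated method derives the discounting from propensity score overlap and performs full Bayesian inference on the combined posterior):

```python
def ps_stratified_power_prior(strata):
    """Caricature of the PS-integrated power prior: within each propensity
    score stratum, a power prior (beta-binomial form) discounts the external
    control data by a0; stratum posterior means are then combined, weighted
    by the current trial's stratum sizes. Each stratum dict holds current
    (x_c, n_c) and external (x_h, n_h) counts plus a discount a0."""
    means, weights = [], []
    for s in strata:
        a = 1 + s["x_c"] + s["a0"] * s["x_h"]
        b = 1 + (s["n_c"] - s["x_c"]) + s["a0"] * (s["n_h"] - s["x_h"])
        means.append(a / (a + b))      # stratum posterior mean response rate
        weights.append(s["n_c"])       # weight by current-trial stratum size
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# hypothetical strata: more borrowing where PS overlap is good
strata = [
    {"x_c": 5, "n_c": 20, "x_h": 18, "n_h": 60, "a0": 0.8},  # good overlap
    {"x_c": 6, "n_c": 20, "x_h": 10, "n_h": 50, "a0": 0.2},  # poor overlap
]
rate = ps_stratified_power_prior(strata)
```

Stratifying first means the discounting is applied only among patients who are already comparable on covariates, so the borrowed information is less confounded than naive pooling.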
In Part III of the course, we introduce several extensions of the PS-integrated method in Part II. These extensions include 1) a frequentist PS-integrated composite likelihood approach for incorporating RWE in single-arm clinical studies; 2) leveraging multiple RWD sources in single-arm medical product clinical studies; 3) leveraging RWD for the evaluation of diagnostic tests for low prevalence diseases; 4) augmenting both arms of a randomized controlled trial by leveraging RWD; and 5) PS-integrated approach for survival analysis.
In Part IV of the course, we describe an R package, psrwe, that implements a PS-integrated power prior (PSPP) method, a PS-integrated composite likelihood (PSCL) method, and a PS-integrated weighted Kaplan-Meier estimation (PSKM) method for the methods in Parts II and III. Illustrative examples are provided to demonstrate each of the approaches.
In Part V of the course, we introduce a propensity score-based Bayesian non-parametric Dirichlet process mixture model that summarizes subject-level information from randomized and RWD to draw inference on the causal treatment effect in exploratory analysis.
Instructors’ background:
Dr. Chenguang Wang is a Senior Director and the Head of Statistical Innovation at Regeneron. Previously, Dr. Wang was an Associate Professor with Johns Hopkins University and an FDA Mathematical Statistician at CDRH. Dr. Wang has extensive experience in clinical trial design and analysis in the regulatory setting. Dr. Wang also holds B.S. and M.S. degrees in Computer Science and has abundant experience developing user-friendly statistical software.
Dr. Nelson Lu is a team leader in the Division of Biostatistics, Center for Devices and Radiological Health, Food and Drug Administration. Dr. Lu has extensive experience in the design and analysis of clinical trials and studies involving RWD for pre-market regulatory submissions. Prior to joining the FDA, he worked at Wyeth.
Dr. Wei-Chen Chen is a computational and mathematical statistician at FDA CDRH. Dr. Chen reviews clinical trial submissions for therapeutic devices. He also specializes in statistical computing and high-performance computing using R, C, Fortran, MPI, and ZeroMQ.
References:
• Wang C, Rosner GL. A Bayesian nonparametric causal inference model for synthesizing randomized clinical trial and real-world evidence. Stat Med. 2019;38(14):2573-2588.
• Wang C, Li H, Chen WC, Lu N, Tiwari R, Xu Y, Yue LQ. Propensity score-integrated power prior approach for incorporating real-world evidence in single-arm clinical studies. J Biopharm Stat. 2019;29(5):731-748.
• Wang C, Lu N, Chen WC, Li H, Tiwari R, Xu Y, Yue LQ. Propensity score-integrated composite likelihood approach for incorporating real-world evidence in single-arm clinical studies. J Biopharm Stat. 2020;30(3):495-507.
• Li H, Chen WC, Lu N, Wang C, Tiwari R, Xu Y, Yue L. Novel Statistical Approaches and Applications in Leveraging Real-World Data in Regulatory Clinical Studies. Health Services and Outcomes Research Methodology. 2020. doi: 10.1007/s10742-020-00218-4
• Chen WC, Wang C, Li H, Lu N, Tiwari R, Xu Y, Yue LQ. Propensity score-integrated composite likelihood approach for augmenting the control arm of a randomized controlled trial by incorporating real-world data. J Biopharm Stat. 2020;30(3):508-520.
• Li H, Chen WC, Lu N, Song C, Wang C, Tiwari R, Xu Y, Yue L. Mitigating Study Power Loss Caused by Clinical Trial Disruptions Due to the COVID-19 Pandemic: Leveraging External Data via Propensity Score-Integrated Approaches. Statistics in Biopharmaceutical Research. 2020. doi: 10.1080/19466315.2020.1860813
• Lu N, Wang C, Chen WC, Li H, Song C, Tiwari R, Xu Y, Yue LQ. Leverage multiple real-world data sources in single-arm medical device clinical studies. J Biopharm Stat. 2021:1-17. doi: 10.1080/10543406.2021.1897994.
• Chen WC, Li H, Wang C, Lu N, Song C, Tiwari R, Xu Y, Yue LQ. Evaluation of diagnostic tests for low prevalence diseases: a statistical approach for leveraging real-world data to accelerate the study. J Biopharm Stat. 2021;31(3):375-390.
• Li H, Chen WC, Wang C, Lu N, Song C, Tiwari R, Xu Y, Yue LQ. Augmenting Both Arms of a Randomized Controlled Trial Using External Data: An Application of the Propensity Score-Integrated Approaches. Stat Biosci. 2021. doi: 10.1007/s12561-021-09315-5. PMC8214051.
• Wang C, Gary R, Bao T, Lu N, Chen WC, Li H, Tiwari R, Xu Y, Yue LQ. Leveraging Real-World Evidence for Determining Performance Goals for Medical Device Studies, Statistics in Medicine, 2021. Accepted.
• Lu N, Wang C, Chen WC, Li H, Song C, Tiwari R, Xu Y, Yue LQ. Propensity Score-Integrated Power Prior Approach for Augmenting the Control Arm of a Randomized Controlled Trial by Incorporating Multiple External Data Sources. J Biopharm Stat. 2021. Accepted.
ICH E9(R1) has defined estimands for pharmaceutical clinical trials more precisely and more thoroughly than any previous document in the pharmaceutical industry. Of particular note is the clarity around intercurrent events (IEs) and the impact of IEs on one’s ability to define, infer, and assess an estimand. The manner in which such IEs are handled determines not only WHAT is to be estimated (the estimand), but also HOW inference should be performed (e.g., via an estimator and its corresponding uncertainty assessment), including the definition and treatment of missing data. While there have been many publications, scientific sessions, and other venues for discussing and debating this topic, much less attention has been paid in pharmaceutical clinical trials to one of the strategies described in ICH E9(R1), namely principal stratification.
The learning objectives of this course are:
• To explain the importance of considering principal stratification when estimating the treatment effect of a new pharmaceutical agent;
• To demonstrate the use of the Tripartite Estimand Approach (TEA), which includes estimation of the treatment effect in the principal stratum of patients who would adhere to their study treatment (1);
• To demonstrate the use of principal stratification based on the early response of a biomarker that is predictive of a longer-term clinical outcome (2, 3).
Principles, theory, and methods will be covered, with accompanying examples and R code for executing specialized principal stratification analyses.
=====
1 - Akacha M, Bretz F, Ruberg SJ (2017) Estimands in Clinical Trials - Broadening the Perspective. Statistics in Medicine 36, 1: 5-19.
2 - Ridker PM, MacFadyen JG, Everett BM, Libby P, Thuren T, Glynn RJ; CANTOS Trial Group. Relationship of C-reactive protein reduction to cardiovascular event reduction following treatment with canakinumab: a secondary analysis from the CANTOS randomised controlled trial. Lancet. 2018 Jan 27;391(10118):319-328. doi: 10.1016/S0140-6736(17)32814-3.
3 - Bornkamp, B, Rufibach, K, Lin, J, et al. Principal stratum strategy: Potential role in drug development. Pharmaceutical Statistics. 2021; 20: 737– 751. https://doi.org/10.1002/pst.2104.
=====
Course Outline
Introduction (Ruberg – 40 minutes)
Rationale for estimating the direct treatment effect (instead of the treatment policy effect)
The Tripartite Estimand Approach (TEA) – Principal strata defined by adherence
Principal strata defined by early biomarker response
Causal Inference (Sabbaghi – 60 minutes)
The Rubin Causal Model (RCM) and Potential Outcomes
Statistical theory and Adherence Average Causal Estimators (AdACE) related to clinical trials
Break (20 minutes)
Examples with corresponding R code (Sabbaghi – 60 minutes)
Applications of the TEA compared to ITT, MMRM and composite strategies
Diabetes
Alzheimer’s Disease
Applications for early response biomarkers
Cardiovascular disease
Psychiatry
Epilogue (Ruberg – 30 minutes)
Putting principal stratification into the overall ICH E9(R1) context
Other disease states for consideration
Other issues for analysis and interpretation of results from principal stratum
Labeling considerations
Floor Discussion – Q&A (20 minutes)
=====
Instructor Background
Stephen J Ruberg, PhD
Dr. Ruberg was in the pharmaceutical industry for 38 years, where he worked in all phases of drug development and commercialization – from R&D to Business Analytics. He retired from Lilly at the end of 2017. In his last 10 years at Lilly, he formed the Advanced Analytics Hub for which he was the Scientific Leader and ultimately the Distinguished Research Fellow. Since his retirement, he has formed his own consulting company, Analytix Thinking, which is dedicated to teaching good statistical principles and to consulting on analytical strategies for organizations. He is also an Adjunct Professor of Statistics in the Department of Statistics at Purdue University.
He has been a Fellow of the American Statistical Association (ASA) since 1994, was given the Career Achievement Award by Quantitative Scientists in the Pharmaceutical Industry and was elected a Fellow of International Statistics Institute.
Dr. Ruberg is the originator of the Tripartite Estimand Approach and one of the key authors of its original publication (1). That treatise suggested the use of causal inference as a mechanism for estimating the treatment effect in the principal stratum of patients who would adhere to their randomized study medication. He has spoken on many occasions on this concept at statistical meetings and as part of his consulting practice. Furthermore, he has co-authored several publications that derived estimators for adherers (Adherers Average Causal Effect – AdACE) as well as their application to real clinical trials. He has participated in additional research to improve the original estimators, including the use of multiple imputation and the derivation of variance estimators.
Dr. Ruberg is an oft-invited speaker with considerable experience in teaching and communicating statistical concepts in understandable and compelling ways.
/////
Qu, Y., Fu, H., Luo, J., Ruberg, S.J. (2020) A General Framework for Treatment Effect Estimators Considering Patient Adherence. Stat Biopharm Res 12:1, 1-18.
Qu, Y., Luo, J., Ruberg, S.J. (2021) Implementation of Tripartite Estimands Using Adherence Causal Estimators Under the Causal Inference Framework. Pharmaceutical Statistics, 20(1): 55-67. doi.org/10.1002/pst.2054.
Luo, J., Ruberg, S., Qu, Y. (2021) Estimating the treatment effect for adherers using multiple imputation. Pharmaceutical Statistics (accepted, to appear).
Zhang, Y., Fu, H., Ruberg, S. J. & Qu, Y. (2021) Statistical inference on the estimators of the adherer average causal effect, Statistics in Biopharmaceutical Research, DOI: 10.1080/19466315.2021.1891965
/////
Arman Sabbaghi
Dr. Arman Sabbaghi is a Visiting Scholar in the Department of Statistics at the University of California, Berkeley during the 2021 – 2022 academic year, and an Associate Professor in the Department of Statistics and Associate Director of the Statistical Consulting Service at Purdue University. He became an Elected Member of the International Statistical Institute in 2020. He received his PhD in Statistics from Harvard University in 2014. Dr. Sabbaghi's research interests are in Bayesian data analysis, experimental design, and causal inference. Specific major objectives of his current research are (1) the development of new causal inference methods for the analysis of clinical trials and Big Observational Data plagued by nonadherence, (2) the creation of mathematical tools that facilitate the characterization of broad classes of experimental designs for the study and improvement of processes in engineering and the physical sciences, and (3) the development of efficient and interpretable statistical frameworks and machine learning algorithms for modeling and quality control in additive manufacturing systems.
Advances in cell and gene engineering technologies have given rise to exponential growth in the development of cell and gene therapies (CGT) around the world. The potential benefits of CGT have been explored in a broad range of therapeutic areas including oncology, rare diseases, diabetes, cardiovascular disease, and CNS disorders. In 2017, the FDA approved Kymriah, a CAR-T therapy, in what was considered a landmark approval for this novel class of therapies; it has since also been approved for patients with diffuse large B-cell lymphoma (DLBCL). What are cell and gene therapies? What are their unique challenges compared to conventional modalities such as small molecules and monoclonal antibody drugs? How can statisticians respond to these unique challenges and bring value to the clinical discussion? This short course will open with a comprehensive review of CGT, followed by the topics listed below.
Course Outline:
1. Introduction to cell and gene therapy
 a. Types of CGT
 b. Concepts and rationale
2. Clinical and pharmacological considerations
 a. Endpoint selection
 b. Manufacturing challenges
 c. PK/PD modeling
 d. Operational considerations
3. Statistical considerations
 a. Design options in Phase 1
 b. Design options in Phase 2/3
4. Regulatory considerations
 a. Review of regulatory guidance
 b. Regulatory concerns
5. Case studies
Background of Instructors: Dr. Weidong Zhang has 20 years of experience in drug development across multiple therapeutic areas, including oncology, inflammation and immunology, and gene therapy technology. He has taught numerous short courses on many occasions, including the ASA/FDA Industry Statistical Workshop, DIA, ICSA, and MBSW.
Dr. Srinand Nandakumar is a Senior Director of Biostatistics at Nurix Therapeutics. He has worked on several indications involving allogeneic CAR-T therapies, monoclonal antibodies, and small-molecule therapies as novel treatment approaches for cancer, immune disorders, and CNS disorders. His research interests include the development of strategic designs and methodology for integrating RWE into clinical research.
Lynn Navale has over 20 years’ experience in clinical development, with 18 years’ experience working in oncology development. With a background in biostatistics and mathematics, she oversees the technical design, analysis, and data acquisition for clinical trials and clinical development strategy.
Ms. Navale has served as Vice President, Biometrics at Allogene since March 2021. Prior to Allogene, Ms. Navale was the Vice President of Biometrics at Kite Pharma, where she developed and led the Biometrics function including biostatistics, statistical programming, and data management and served as the Biometrics team leader for the U.S. and EU regulatory approvals of Yescarta. While at Kite, she led the statistical design of other Yescarta and Tecartus trials, including ZUMA-2, -3, -4, -5, and ZUMA-7, one of the first randomized trials of anti-CD19 CAR T cell therapy. Previously, from 2003 to 2014, she worked at Amgen in roles of increasing responsibility within Clinical Development Biostatistics where she worked on the US filing of Vectibix and led statistical efforts for the phase 1 through phase 3 development of trebananib. She began her career at Baxter BioScience and was the lead statistician for the trial that led to the U.S. regulatory approval of Advate. Ms. Navale has a B.S. in Math from the University of Michigan and an M.S. in Biostatistics from the University of California Los Angeles.
Donna Rivera is the associate director for pharmacoepidemiology at the US Food and Drug Administration Oncology Center of Excellence. She leads the Oncology Real-World Evidence Program, focusing on the use of real-world data and real-world evidence for regulatory purposes, as well as managing the real-world data research portfolio strategy and development of regulatory policy. Rivera has extensive research experience in the use of real-world data to advance health equity, observational study designs and methodological approaches, and appropriate uses of real-world data in medical development to increase patients' access to effective therapies. She is currently a Scientific Executive Committee member for the COVID-19 and Cancer Consortium and leads Project Post COVIDity, a collaborative real-world data effort to assess longitudinal sequelae of COVID-19 in patients post infection.
Nancy Dreyer is senior vice president and chief scientific officer, emerita for real-world solutions at IQVIA and adjunct professor of epidemiology at The University of North Carolina at Chapel Hill. She is responsible for driving innovation in medical product development and commercialization using passive and/or active collection of real-world data to generate evidence for regulators, clinicians, patients, and payers. A fellow of both the International Society of Pharmacoepidemiology and DIA, she is well-known for her thought leadership. Dreyer has helped advance the use of real-world evidence for regulatory purposes, influencing the content of recent guidelines by regulators in the US, Europe, and China, each of which cite one or more of her publications. Her substantial executive and field experience have helped hone her pragmatic views.
Theodore (Ted) Lystig is senior vice president and chief analytics officer at BridgeBio, where he provides leadership and guidance in the use of robust statistical and research design methods. He also holds the position of adjunct assistant professor within the division of biostatistics at the University of Minnesota. Lystig is an elected fellow of the American Statistical Association and an elected member of the International Statistical Institute. He is a founding officer and past chair for the ASA Section on Medical Devices and Diagnostics and an executive committee member of the Clinical Trials Transformation Initiative. Lystig is an internationally recognized industry leader in statistical methodology, especially in active surveillance for medical devices. He is a frequent speaker at international statistics meetings and has given invited seminars at venues such as Stanford, Harvard, and the US Food and Drug Administration.
Pallavi Mishra-Kalyani is a supervisory mathematical statistician in the Division of Biometrics V, Office of Biostatistics, which supports the Office of Oncology Drugs at the Center for Drug Evaluation and Research. Since joining the FDA in 2015, Mishra-Kalyani has contributed to the efforts to understand and address statistical issues related to the potential use of external controls, real-world data, and real-world evidence for regulatory purposes. Her research interests include statistical methods for observational data, causal inference, and nonrandomized trial design, and she has organized and participated in several statistics and oncology workshops, conferences, and working groups in these areas.
This town hall will be an open discussion with invited panelists who bring a wide variety of experience using digital technologies in the pharmaceutical and medical device space. The session will cover many aspects of how digital technologies have changed the way we do healthcare research. Fitting with the theme of the workshop, the panel will discuss openly how technologies, the pandemic, war, and access to healthcare have shaped our approaches to healthcare research. From industry, the statistician who analyzed data from the CHIEF-HF trial will discuss his experience with the technologies that made that remote clinical trial possible. A panelist from diagnostics will give a high-level overview of the modeling process, specifically simulation of disease processes (in particular, diabetes mellitus) and the lifestyle and medication models that have been created. From the FDA, panelists will discuss regulatory perspectives on the statistical challenges of remote clinical trials and the data collection challenges arising from the use of mobile technologies.
Dr. Hong Lu is an assistant director in the Division of Biostatistics at CDRH of U.S. FDA. Dr. Lu supervises the statistical team that reviews in vivo diagnostic devices, which includes a great number of digital health products such as mobile medical apps and software devices. Dr. Lu holds a doctorate degree in statistics from the University of Michigan, Ann Arbor.
Dr. Andrew Potter is a mathematical statistician in the Division of Biometrics I at CDER of the U.S. FDA, supporting review work in the Division of Psychiatry. He also leads digital health technology initiatives in the Office of Biostatistics in CDER. His research interests include the use of digital health technologies in clinical trials and the analysis of high-frequency outcome data, and he is involved in FDA working groups on this topic. He received his PhD in Biostatistics from the University of Pittsburgh and his bachelor’s degree in physics from Cornell University.
CV Damaraju has more than 25 years of experience in the pharmaceutical industry across multiple therapeutic areas. He is currently a statistical leader at Janssen R&D, LLC, supporting the Medical Affairs, Cardiovascular & Metabolism franchise. He also serves on the BASS Program Committee and is an elected officer of the ASA New Jersey Chapter. His latest publication, on the CHIEF-HF study, appeared in Nature Medicine. His ongoing work involves the design and analysis of clinical trials integrated with digital health technology and real-world data streams.
Gail Kongable is the Senior Manager, Simulation and Modeling in Abbott Rapid Diagnostics Division. She is a Family Nurse Practitioner with over 30 years of work experience in the nursing field. She has over 21 years of experience with simulation and modeling research and analytical services. She has over 25 publications in peer-reviewed medical journals just within the last five years.
Rare diseases are defined differently in each region by its respective regulatory health agency, usually based on the population of that region, to separate rare from common diseases. Intuitively, if a condition is regulated as a rare disease, there is a relatively small patient population and limited knowledge for properly diagnosing and treating the condition. This poses an obvious challenge for drug development, where clinical trials usually rely on central-limit-theorem-based statistics that require large sample sizes. Encouraged by many legislative incentives, more and more drug manufacturers are investigating promising orphan products for treating rare diseases, and statisticians have risen to the challenge of tackling the small-sample-size issues associated with orphan drug development. In the FDA draft guidance entitled “Rare Diseases: Common Issues in Drug Development”, the natural history study is emphasized and is the first component listed for consideration in the clinical development of an orphan product. A well-designed natural history study can characterize a rare disease in terms of disease progression, endpoint selection and validation, and treatment duration. The same guidance also discusses the importance of external historical controls for reducing the number of clinical trial subjects and gaining efficiency in demonstrating a treatment effect. Real-world data is an intuitive resource for both natural history studies and historical controls. However, there remains a gap in appropriate methodologies for real-world data usage in the rare disease setting due to selection bias and temporal effects, while other common real-world data issues, such as missing data and reproducibility, are often less concerning in this setting. This creates a unique platform for statisticians and researchers to think outside the box and propose novel approaches.
In this session, we will discuss the FDA guidance on rare disease clinical programs in detail, with balanced representation from industry and regulatory agencies. We will summarize how to better design natural history studies and how to utilize real-world data to serve the needs of clinical development for rare diseases. Details will include statistical considerations, innovative methods, and trial designs, supported by real case examples.
Statistical leadership is vital to the continued improvement of pharmaceutical drug development. It is well documented that the cost and time of drug development continue to grow at unsustainable rates. The statistical leader plays an important role in ensuring that drugs are developed more efficiently via innovative designs and analyses, that the right candidates move forward, and that decisions are made within a quantitative framework that properly balances all available data. Additionally, many lessons were learned and much statistical leadership was demonstrated throughout the COVID pandemic, and now is the time to capitalize on those lessons to ensure they are carried forward. There is tremendous opportunity for statisticians to lead the pharmaceutical industry into the next generation of drug development.
In this session, statistical leaders from across industry and regulatory agencies will discuss the current state of statistical leadership in drug development and opportunities to utilize statistical leadership to drive drug development forward. These leaders will also reflect on historical lessons that showcase the value of statistical leadership and discuss keys to successful leadership. The intent of this session is to identify specific, actionable next steps for partnering together to dramatically shift the role of the statistician as a leader in all phases of the drug development process. This session is in partnership with the Biopharmaceutical Statistics Leadership Consortium.
This will be a panel discussion in which panelists have specific roles and provide responses pertinent to their function and experience. The panel will include confirmed participants in the following roles: Industry Statistician, Industry Clinician (EU), FDA Statistical Division Head, FDA Clinical Team Leader, ICH E9(R1) Addendum Expert, and former European Medicines Agency Statistician. In addition, a panelist will respond from the perspective of prescribers and patients. A sampling of questions follows:
• What are relevant questions for different stakeholders (health authorities, prescribers, payers, sponsors, etc.) in a trial? Any examples?
• Are estimand proposals driving innovation in terms of posing more focused clinical questions and proposing novel ways to handle intercurrent events? Or are we only seeing analytic approaches similar to existing pre-estimand approaches?
• How would estimand choices relate to labeling language?
• Now that sponsors are submitting estimand descriptions in their protocols/SAPs, is anything changing in how the primary efficacy endpoint is analyzed? Or has the estimand framework generally been used to describe and support the standard analyses?
• The treatment policy strategy is widely recommended by health authorities, but it can be implemented only if data can be retrieved after the intercurrent events handled by this strategy. For example, how likely are patients who have discontinued treatment to be followed up at the targeted clinical visit? How much data needs to be retrieved for a meaningful analysis?
• How are treatment discontinuations handled? Does the reason for discontinuation make a difference in how they are handled in the analysis?
• Are there differences in preference for estimand strategies among health authorities?
• Should limitations of an estimator (e.g., stronger assumptions or less desirable operating characteristics) influence the questions of interest in a trial?
Recent approvals of tumor-agnostic indications for pembrolizumab, larotrectinib, and entrectinib have generated tremendous interest in developing targeted medicines through the tumor-agnostic pathway. Despite this excitement, study designs and the regulatory framework for the tumor-agnostic pathway remain nascent. In this session, the ASA BIOP Oncology Methods SWG Master Protocol Sub-team and the DIA Innovative Design Scientific Working Group (IDSWG) oncology sub-team will jointly discuss statistical considerations in study designs with the potential to support the tumor-agnostic approval pathway and the challenges in such designs and strategies, including the types of study designs when a pre-specified biomarker is involved, control of type I error, and the number of tumor types included in the study. Considerations on biomarker and/or companion diagnostic development to fulfill regulatory requirements will be shared. Statistical considerations on common challenges and concerns in the approval of such an indication will also be reviewed, including, but not limited to, demonstrating homogeneity of efficacy across tumor types with small sample sizes and the evidence required for accelerated approval versus post-market requirements. Case studies as well as hypothetical scenarios will be provided during the discussion.
The increasing availability of medical data and advancements in the field of artificial intelligence (AI)/machine learning (ML) have resulted in the rapid growth of AI/ML-based software as a medical device. Fundamentally, ML algorithms analyze large amounts of data to identify useful patterns for making predictions and recommendations. Because of their close dependence on the data, the performance of ML algorithms can be highly sensitive to shifts in the data due to changes in clinical practice patterns, patient case mix, epidemiology, and more. This has led the FDA, along with other regulatory agencies, to publish a document on Good Machine Learning Practice that includes the guiding principle "Deployed models are monitored in real-world use with a focus on maintaining or improving safety and performance." Nevertheless, there are many open statistical questions on how quality assurance and improvement of ML algorithms should be performed, including how to appropriately utilize real-world data streams and how to minimize the risk of introducing deleterious model updates. This session will bring together perspectives from academia, industry, and the FDA to discuss recent advances in statistical methodology for ensuring the long-term safety and effectiveness of AI/ML-based software as a medical device and the many challenges that lie ahead.
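As one small, hedged illustration of the monitoring principle described above (not drawn from the GMLP document itself), a deployed classifier's batch accuracy can be tracked against a binomial control limit around its validated performance; the rates and batch sizes below are invented:

```python
import numpy as np

def accuracy_alarm(correct, n, p0, z=3.0):
    """True when a monitoring batch's accuracy falls below a lower
    z-sigma binomial control limit around the validated rate p0."""
    lcl = p0 - z * np.sqrt(p0 * (1 - p0) / n)
    return correct / n < lcl

p0 = 0.90                              # assumed accuracy from premarket validation
print(accuracy_alarm(356, 400, p0))    # 89% batch, within limits -> False
print(accuracy_alarm(312, 400, p0))    # 78% batch after a data shift -> True
```

Real-world monitoring is harder than this sketch suggests: labels arrive late or not at all, the case mix drifts, and any model update triggered by such alarms must itself be validated against the risk of deleterious changes.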
Bayesian methods have attracted great attention and have been applied increasingly to clinical trials since the FDA issued the guidance on Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products in 2020. With its ability to utilize historical data or real-world data (RWD), Bayesian analysis allows us to shorten development timelines, reduce the number of participants in a clinical trial, or increase the amount of information for more efficient and robust statistical inference, and thus lower the total cost. One key assumption for the application of Bayesian analysis is consistency between the historical and current datasets. Propensity scores are often used to match patients and alleviate concerns about inconsistency, and a power prior can further be applied for dynamic data borrowing based on the observed level of inconsistency. Moreover, information embedded in a surrogate endpoint, or in the treatment effect on a surrogate endpoint, can enhance the assessment of the probability of success (PoS) of a phase III study. In this session, innovative Bayesian methods will be presented. These will include a stratified propensity score-integrated power prior approach to augment a treatment group (in a single-arm study or a two-arm randomized controlled trial) from data sources such as real-world and historical clinical studies containing subject-level outcomes and covariates. A more robust bivariate Bayesian method that links the treatment effects on surrogate and phase III endpoints for the assessment of PoS for a phase III study will be discussed. Other recent novel Bayesian methods will also be shared by the speakers, with simulations and examples used to compare performance and illustrate applications, to inspire future research.
In May 2021, the U.S. Food and Drug Administration (FDA) released a revised draft guidance for industry on “Adjustment for Covariates in Randomized Clinical Trials for Drugs and Biological Products”. This guidance discusses adjustment for covariates in the statistical analysis of randomized clinical trials in drug development programs. It specifically focuses on the use of prognostic baseline factors to improve precision for estimating treatment effects. Despite regulators such as the FDA and the European Medicines Agency recommending covariate adjustment, it remains highly underutilized, leading to inefficient trials in many disease areas. This is especially true for binary, ordinal, and time-to-event outcomes, which are quite common in COVID-19 trials and are, moreover, prevalent as primary outcomes in many disease areas (e.g., Alzheimer’s disease, stroke). Research and guidance on this topic could therefore not be more timely. In response to the FDA draft guidance on covariate adjustment, this session invites experts representing a variety of viewpoints from academia, the pharmaceutical industry, and the FDA. The aim of this session is to discuss and address some key obstacles that lead to the underutilization of covariate adjustment, which were brought up in the comments submitted to the FDA in response to their May 2021 draft guidance.
In 2020, generic drugs accounted for 88% of the prescription drugs dispensed in the U.S. With the passage of the Generic Drug User Fee Amendments (GDUFA) in 2012, generic drugs have helped to reduce the time and cost of development of therapeutic products without compromising safety and effectiveness. Likewise, the Biologics Price Competition and Innovation Act (BPCI Act) of 2009 created an abbreviated licensure pathway for biosimilar products that are demonstrated to have no clinically meaningful differences from an FDA-approved biological product. The market share of biosimilar products has spiked in recent years, following the same trajectory as generic drugs.
However, statistical research is still lagging in these areas. For locally acting generic drugs, a three-arm parallel clinical endpoint bioequivalence (BE) study is often used to establish BE between a generic (T) and an innovator drug (R). Clinical endpoint BE studies, however, are much more expensive than traditional pharmacokinetic (PK) BE studies. Because of potential inconsistency between the original NDA study and the ANDA study for generics, some clinical endpoint BE studies may over- or under-estimate key study design parameters and end up enrolling more subjects than necessary or failing to demonstrate BE due to insufficient power. For biosimilar studies, when there are multiple reference products (e.g., an EU-approved product and a US-licensed product), a pharmacokinetic/pharmacodynamic (PK/PD) bridging study is often conducted to bridge the clinical data from the original region (e.g., Europe) to the new region (e.g., the United States) in support of the biosimilar regulatory submission in the new region. The purpose is to avoid duplicative clinical trials for establishing clinical similarity between a proposed biosimilar product and the reference product in the new region, provided there is no relevant demographic difference between the two regions. How to optimize these BE and biosimilar study designs to improve their effectiveness and efficiency is a pressing task for both regulators and applicants under the Biosimilar User Fee Amendments (BsUFA II) and the Generic Drug User Fee Amendments (GDUFA II).
In this session, speakers from FDA and academia/industry will propose innovative adaptive and alternative statistical designs for clinical endpoint BE studies and PK/PD biosimilar bridging studies. Statistical models and methods under the proposed designs will be discussed, and power, sample size, and type I error rate will be compared with those of traditional study designs using simulation. An FDA expert in the field will serve as discussant and provide recommendations based on these talks on optimizing biosimilar and bioequivalence studies.
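For reference, the conventional average-bioequivalence criterion that the clinical endpoint designs above aim to improve upon (two one-sided tests on log-transformed PK parameters, with the 80-125% limits) can be sketched as follows. The critical value and the example data are illustrative assumptions; in practice the t quantile comes from the degrees of freedom of the crossover ANOVA.

```python
import math
from statistics import mean, stdev

def tost_be(log_diffs, lower=math.log(0.80), upper=math.log(1.25), t_crit=1.734):
    """Two one-sided tests (TOST) for average bioequivalence on
    log-transformed T/R comparisons (e.g., within-subject crossover
    differences). t_crit is the upper 5% t quantile for the relevant
    degrees of freedom (here assumed df = 18, t_{0.95,18} ~ 1.734)."""
    n = len(log_diffs)
    m, se = mean(log_diffs), stdev(log_diffs) / math.sqrt(n)
    # 90% CI for the log geometric mean ratio
    ci = (m - t_crit * se, m + t_crit * se)
    # BE concluded if the entire CI lies within [log 0.80, log 1.25]
    return ci, lower < ci[0] and ci[1] < upper

# Hypothetical log(T/R) differences for 19 subjects (df = 18)
ci, passes = tost_be([0.01 * i for i in range(-9, 10)])
```

The insufficient-power failures mentioned above correspond to a CI that straddles one of the limits even when the true ratio is close to 1.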
This session includes the following two presentations:
Presentation 1: Title: Study Design for Assessment of Biosimilars with Multiple References Speaker: Shein-Chung Chow, Professor, Duke University, Former Associate Director for Biosimilars at OB/FDA
Presentation 2: Title: A Novel Two-Stage Sequential Adaptive Comparative Clinical Endpoint Bioequivalence (BE) Study Design
Speaker: Wanjie Sun, Lead Statistician, DBVIII/OB/CDER/FDA
Discussant: Stella Grosser, Division Director, DBVIII/OB/CDER/FDA
The ICH E9 (R1) guidance on estimands and sensitivity analysis was finalized by the FDA in May 2021. The Pharmaceutical Industry Working Group on Estimands in Oncology has continued to review and refine key clinical trial issues in light of the estimands framework. This panel discussion addresses some of the work the Working Group’s task forces have been engaged in, looking more closely at issues common in oncology trial design.
Questions we'll discuss include:
How can we encourage consistent analysis and interpretation of Duration of Response and Time to Response in clinical trials?
What is the clinical question of interest if patients receive the option of subsequent therapy?
How does concern about causal estimands impact the way we do time-to-event trials?
What do we mean by follow-up time in a clinical trial?
Statisticians have acquired a comprehensive understanding of the statistical principles that apply to clinical trials. We have abided by those principles in developing procedures and methods for clinical trial design and analysis. However, the COVID-19 pandemic has posed enormous challenges for statisticians, who must adapt to an ultra-fast development pace while holding to those statistical principles.
Based on our experiences developing COVID therapeutics as well as preventive vaccines, it can be challenging to pre-specify the "important details"—as advised in ICH E9—of a clinical trial with regard to design, conduct, or analysis. The disease can change, as new variants of the virus emerge; the treatment can change, as other interventions (e.g., vaccinations) are added as concomitant medications; and the patient population can change de facto, as CDC guidance on vaccination recommendations or personal protective equipment usage changes. Moreover, with the continuously evolving pandemic, it becomes almost impossible to pre-specify the treatment effect or even the "background" event rate. As one consequence, the typical interim analysis strategy may fail to work. At the final analysis, cases have been reported in which the pre- and post-interim analysis results differ drastically, which obfuscates the interpretation of the findings even with the most straightforward design.
Meanwhile, the battle against the pandemic also provides opportunities for statisticians to develop and improve complex innovative trial designs (CID). Platform trials have become almost necessary: in one master protocol, multiple therapeutic or vaccine candidates are evaluated in multiple cohorts (e.g., different patient populations based on oxygen use), and treatment arms and cohorts are added to or dropped from ongoing trials based on clinical and statistical findings. In each group (a specific treatment and patient cohort), phase 1/2/3 seamless and adaptive designs are often applied, and sample size re-estimation becomes essential. Indisputably, the pandemic is expediting the progress of CID development.
In this session, speakers from the FDA, industry, and academia will report the statistical challenges they have experienced when designing, conducting, analyzing, and reviewing COVID-19 therapeutic and vaccine clinical trials. They will also discuss practical and innovative approaches that they have implemented or that can potentially be applied to address the issues. Specifically, the following topics will be presented.
First, given the urgent need, simultaneously testing multiple treatment arms (e.g., different combinations of monoclonal antibodies) and multiple endpoints (e.g., hospitalization or death, and symptom resolution) can be advantageous and even necessary. At the same time, it is also desirable to stop the trial early when an efficacy signal is detected in one or more treatment arms. This, however, creates multiple hurdles for interim analysis and multiplicity control. The speaker will discuss the issues in detail, together with possible solutions based on actual trial development experience.
Second, non-inferiority (NI) trials can be deemed necessary for COVID-19 therapeutic development because of ethical considerations. Nevertheless, identifying a clinically and statistically appropriate NI margin can be a formidable task, since the "background" rates of infection, or of clinical outcomes such as hospitalization or death, change constantly. The speaker will discuss practical approaches for determining the NI margin and for conducting sensitivity analyses to evaluate the possible consequences of the pre-specified NI margin.
Third, the temporal effect, i.e., the effect of shifts in patients’ characteristics, trial conduct, and other features of a clinical trial, clearly cannot be ignored in COVID-19 trials the way it might be in typical settings. The speaker will discuss a statistical procedure for accounting for temporal effects in COVID-19 clinical trials in the context of confirmatory decision-making.
Last but not least, the unique statistical challenges of developing a vaccine expeditiously, without compromising safety and effectiveness, in the context of an outbreak will also be discussed. These include determining correlates of protection, modifying ongoing phase 3 clinical studies to evaluate long-term safety and efficacy once a vaccine becomes available during the study, and designing clinical studies to evaluate the safety and efficacy of new candidate vaccines, or against new variants, after vaccine authorization or licensure.
The goal of personalized medicine is to narrow the reference class to yield more patient-specific effect estimates that support more individualized clinical decision making. Patients in a trial differ from one another in many ways that can affect the outcome of interest and the potential for benefit. Heterogeneity of treatment effects (HTE) is the variation in how individuals respond to a treatment. Treatment benefit often varies among individuals in a trial due to differences in important demographic or disease characteristics, such as gender, race, age, region, and disease subtype. HTE is also observed in the study-level parameters of a meta-analysis because of imbalances in important disease characteristics (known as “effect modifiers”). Appropriate assessment of HTE is critical to regulators, pharmaceutical companies, policy makers, researchers, and patients. In current practice, the primary analysis focuses on the treatment benefit observed in the overall population. However, this "average" benefit may not apply to all patients in the trial because benefit may be heterogeneous across subgroups. Conventional subgroup analysis is often inadequate for addressing HTE, due to uncontrolled false-positive detection of treatment benefit and high variability from the limited data in each subgroup. Bayesian methods are particularly useful in this context. The FDA Center for Drug Evaluation and Research (CDER) started an initiative to publish drug trials snapshots (DTS) for the public. Estimated treatment effects from Bayesian hierarchical models are included in several published DTS to provide more precise efficacy results for different groups of patients.
This session will focus on several case studies on the application of Bayesian methods to estimating heterogeneous treatment effects in clinical trials and regulatory decisions. The real-life examples include, but are not limited to, using a Bayesian hierarchical model in preparing DTS, borrowing information from one region to support regulatory approval in another region, and borrowing information from one disease subtype to support a separate indication for another similar but different subtype. This session will feature speakers from industry and a regulatory agency.
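The hierarchical-model idea behind these subgroup analyses can be sketched in a simplified form: a normal model in which each subgroup estimate is partially pooled toward the precision-weighted overall mean. The between-subgroup variance `tau2` is assumed known here purely for illustration; in a full Bayesian analysis it would receive a prior and the model would be fit by MCMC. The numbers below are hypothetical.

```python
def shrink_subgroup_effects(effects, ses, tau2):
    """Partial pooling of subgroup treatment effects under a normal
    hierarchical model: theta_j ~ N(mu, tau2), estimate_j ~ N(theta_j, se_j^2).
    Each subgroup estimate is shrunk toward the precision-weighted
    overall mean; noisier subgroups (larger se_j) are shrunk more."""
    w = [1.0 / (se ** 2 + tau2) for se in ses]
    mu = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    return [(tau2 * e + se ** 2 * mu) / (tau2 + se ** 2)
            for e, se in zip(effects, ses)]

raw = [0.8, 0.1, 0.4]   # hypothetical subgroup estimates (e.g., log scale)
ses = [0.3, 0.3, 0.1]   # their standard errors
shrunk = shrink_subgroup_effects(raw, ses, tau2=0.05)
```

The shrinkage pulls the noisy extreme subgroups (0.8 and 0.1) toward the center while barely moving the precisely estimated one, which is how such models temper false-positive subgroup signals.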
Real-world data (RWD) have emerged as an important data source for deriving real-world evidence (RWE) for medical and healthcare policy research and decision making. The innovative use of RWD leads to improvements in trial design, reductions in study size, and enhanced statistical inference. RWE can also be used to answer questions that cannot be addressed using data from randomized clinical trials. Nonetheless, the analysis of RWD poses great challenges. Given the inherently non-interventional nature of the real-world setting, confounders need to be accounted for. Therefore, many existing methods for analyzing data from a randomized clinical trial are not directly applicable, and use of the causal inference framework is preferable. Comparability of patients across treatments is a key requirement for a valid assessment of treatment effect. Propensity scores based on individual patient data (IPD) are often used for patient matching to ensure comparability. Moreover, many investigations using RWD involve multi-site datasets, and sharing sensitive IPD is subject to strict regulations and is logistically prohibitive. In such scenarios, propensity scores for individual patients cannot be derived by the user, and alternative methods must be applied to match RWD or account for heterogeneity across multiple data sources. In this session, speakers from regulatory agencies, industry, and academia will share their current thinking and new methodologies developed to meet these challenges in RWD analysis. They will cover novel methods for leveraging external IPD/RWD and the specification of summary statistics (without the need for IPD), provided by the data owners, for data matching across multiple RWD sources in comparative analyses. Other technical issues, such as the handling of informative missing data, may also be discussed; for example, healthier patients tend to have more missing covariates because their health information is captured less frequently in a real-world setting.
Over the past decade, drug development in oncology has shifted from cytotoxic agents to drugs with new mechanisms of action, such as cancer immunotherapies, targeted therapeutics, T-cell engagers, and others. A key challenge for these new agents is that the assumption "more is better" in terms of dosage may no longer hold true. As a result, health authorities, and especially the FDA, are requiring more thorough dose finding and dose optimization prior to the initiation of pivotal trials. Initiatives such as the FDA's Project Optimus exemplify the effort to develop new guidance for cancer drug makers to test a wider range of doses early in development. These requirements and recommendations shift focus away from determining the maximum tolerated dose, which only considers the toxicities of the drug, toward identifying an optimal biological dose, which takes into account overall efficacy and tolerability. Such optimization requires consideration of complex mechanisms of action, schedule optimization, long-term drug tolerability, and possibly novel pharmacodynamic endpoints. Consequently, thoughtful study designs, translational data, and statistical modeling play an increasingly important role. The paradigm shift will require efforts from multiple parties, including regulators and industry sponsors, which fits perfectly with the theme of the ASA Biopharmaceutical Section Regulatory-Industry Statistics Workshop. This session will feature speakers and discussants from industry, academia, and the FDA (all speakers confirmed) to share real examples, recent statistical innovations in this field, and their views on these recent developments and their impact on oncology drug development in the pharmaceutical industry.
Are the log-rank and score tests valid* when covariate-adaptive randomization is used in oncology trials? This must have been the underlying concern behind FDA’s recent requests for re-randomization tests in studies using Pocock and Simon’s minimization, which dynamically balances the treatment allocations across a large number of prognostic factors. Aside from minimization, another example of covariate-adaptive randomization is the stratified permuted block design, commonly applied when the number of factors is small. Balancing treatment allocation across multiple prognostic factors via covariate-adaptive randomization is now necessary in virtually all oncology trials, which makes the above question relevant to both practitioners and regulators. Theory on valid inference about the treatment effect with time-to-event endpoints and covariate-adaptive randomization had been lacking until very recently. When the model is misspecified in trials with stratified randomization, the widely applied robust score test [Lin and Wei (1989); https://doi.org/10.1080/01621459.1989.10478874] was shown to be conservative by the groundbreaking work of Ye and Shao (2020; https://doi.org/10.1111/rssb.12392). This result, however, had not been established for minimization except via simulations. Interestingly, model misspecification caused by the omission of some factors from the analysis model is more common in trials where minimization is warranted. Recent work by Johnson, Gekhtman and Kuznetsova (draft manuscript, 2021) extends the theory developed by Ye and Shao (2020) to show that both the log-rank and the robust score tests are conservative under minimization if the model is misspecified.
In this session, we propose to raise awareness of the emergent advances on this much-debated problem and to elucidate their implications through streamlined technical talks and a discussion.
*A hypothesis test is valid if its type I error is no larger than a given significance level.
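The re-randomization test requested by FDA in this setting can be sketched generically: hold the responses fixed, regenerate treatment assignments with the trial's own randomization procedure, and refer the observed statistic to the resulting distribution. For brevity, the toy example below uses complete randomization and a difference in means as stand-ins; in practice `randomize_fn` would re-run Pocock and Simon's minimization on the trial's covariates, and the statistic would be the log-rank or score statistic. All numbers are hypothetical.

```python
import random

def rerandomization_pvalue(stat_fn, randomize_fn, observed_assign,
                           n_rerand=2000, seed=0):
    """Monte Carlo re-randomization test: responses stay fixed, treatment
    labels are regenerated with the trial's randomization procedure, and
    the observed statistic is compared to that reference distribution
    (two-sided via |statistic|, with an add-one Monte Carlo p-value)."""
    rng = random.Random(seed)
    t_obs = abs(stat_fn(observed_assign))
    hits = sum(abs(stat_fn(randomize_fn(rng))) >= t_obs
               for _ in range(n_rerand))
    return (hits + 1) / (n_rerand + 1)

# Toy illustration under a null of no effect
outcomes = [0.1, 1.2, -0.3, 0.8, 0.5, -0.6, 0.9, 0.2, 1.5, -0.1]

def diff_in_means(assign):
    t = [y for y, a in zip(outcomes, assign) if a == 1]
    c = [y for y, a in zip(outcomes, assign) if a == 0]
    return sum(t) / len(t) - sum(c) / len(c)

def complete_rand(rng):
    assign = [1] * 5 + [0] * 5  # 1:1 allocation, stand-in for minimization
    rng.shuffle(assign)
    return assign

p = rerandomization_pvalue(diff_in_means, complete_rand,
                           [1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
```

Because the reference distribution is generated by the actual allocation procedure, this test attains (up to Monte Carlo error) the nominal type I error even where the model-based log-rank or score test is conservative.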
Circulating tumor DNA (ctDNA), the genetic material released from tumor cells into the blood, offers great opportunities in the molecular diagnosis and monitoring of cancer, such as early detection, patient selection, and monitoring and predicting treatment responses in the neoadjuvant and adjuvant settings. However, many challenges still need to be addressed before ctDNA can be used as an “early endpoint” to predict long-term cancer survival outcomes in early-stage cancers. In this session, we will discuss the challenges and variability in ctDNA detection and assess clinical and statistical considerations that would help allow the use of ctDNA as a meaningful endpoint in regulatory decision-making.
Real-world evidence (RWE) is playing an increasing role in health care decisions, as indicated in the FDA's recent draft guidances on RWE in 2021. The magnitude and heterogeneity of real-world data (RWD) bring an opportunity to utilize innovations in machine learning (ML) and causal inference to generate RWE for medical and regulatory decision-making. Causal inference from RWD is challenging due to confounding, treatment switching, and related issues. Propensity scores (PS) are commonly used but require stringent model assumptions. Doubly robust estimators, including double score matching (DSM), augmented inverse propensity weighting, and targeted maximum likelihood estimation, have emerged as flexible methods for causal inference, requiring correct specification of only the PS model or the outcome model. However, incorporating ML into doubly robust estimation is a challenging and active research topic; approaches include the super learner, double ML, and model averaging. Estimating individual treatment effects (ITE) is important in personalized medicine, and different ML techniques provide approaches for estimating the ITE. Most approaches first use ML to predict potential outcomes under each candidate treatment and then derive individualized treatment regimens (ITR). Recent literature shows the advantages of estimating ITR from doubly robust estimates of ITE, but it remains unclear which ML-based strategies work best for ITR. This session drives toward best practices for using ML in causal inference at both the population and the individual level. We present recent work from industry, academia, and regulatory agencies on this topic. The session will be especially appealing to researchers working on the design and analysis of RWE.
List of invited speakers: • Shu Yang (North Carolina State University): “A unified framework for DSM: theory, balance measure, and practice” • Di Zhang (FDA): “Regulatory overview of using machine learning in causal estimation” • Ilya Lipkovich (Eli Lilly and Company): “Evaluation of different analytic strategies for estimating optimal ITR in Real-world data”
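The augmented inverse propensity weighted (AIPW) estimator mentioned above can be sketched in a few lines. The simulated data, the constant propensity score, and the deliberately misspecified outcome models below are illustrative assumptions chosen to show the double-robustness property; in RWD applications both nuisance models would be estimated, possibly by ML.

```python
import random

def aipw_ate(data, ps_fn, mu1_fn, mu0_fn):
    """AIPW estimate of the average treatment effect. Consistent if
    either the propensity model ps_fn OR the outcome models
    (mu1_fn, mu0_fn) are correctly specified."""
    total = 0.0
    for a, y, x in data:  # treatment (0/1), outcome, covariate
        e, m1, m0 = ps_fn(x), mu1_fn(x), mu0_fn(x)
        total += (m1 - m0
                  + a * (y - m1) / e
                  - (1 - a) * (y - m0) / (1 - e))
    return total / len(data)

# Simulated data with a true treatment effect of 2 and true PS of 0.5
rng = random.Random(42)
data = []
for _ in range(2000):
    x = rng.gauss(0, 1)
    a = 1 if rng.random() < 0.5 else 0
    y = 2 * a + x + rng.gauss(0, 1)
    data.append((a, y, x))

# Correct PS, deliberately wrong outcome models: estimate is still near 2
est = aipw_ate(data, lambda x: 0.5, lambda x: 0.0, lambda x: 0.0)
```

Swapping in correct outcome models with a wrong propensity score also recovers the true effect, which is the sense in which the estimator protects against single-model misspecification.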
The 21st Century Cures Act was signed into law nearly five years ago, and we are near the end of PDUFA VI (2018-2022). These two regulations mark milestones for the FDA’s emphasis on enhancing both the use of real-world evidence (RWE) in regulatory decision-making and the agency’s capacity to review complex innovative designs (CID). Moreover, they provided a platform for knowledge sharing. The pharmaceutical industry has increasingly collaborated with multiple health agencies on both fronts since the regulations. As a result, many real cases that either incorporate RWE in the trial or implement CID have surfaced. They are either examples to follow or lessons to learn. Statisticians and clinical trial researchers have established several implementation guides for CID, ranging from adaptive designs to master protocols, addressing both operational and statistical issues. There has also been a surge of methodologies in RWE research, including propensity score models and Bayesian frameworks. Recently, master protocols with RWE components have emerged, mostly in the oncology and hematology therapeutic areas. Combining the two poses unique challenges. For example, if RWE is introduced to augment the control arm in a platform trial in which only concurrent control data are used for each treatment comparison, how the RWE can be leveraged in each comparison should be carefully calibrated with appropriate statistical models. In addition, challenges arise when RWE is used to refine parameter estimates for optimizing the design of clinical trials. The DIA Innovative Design Scientific Working Group (IDSWG) Oncology team is tasked with investigating both CID and RWE usage in clinical trials, in order to establish implementation guidelines tailored for oncology trials. This session is organized by this working group to discuss statistical and operational considerations in RWE incorporation within a master protocol setting.
Case studies from real examples and simulation will be shared to illustrate this type of clinical trial design. Speakers from industry, academia and regulatory agencies will present guidance and lessons learned from their perspectives.
In recent years, the quantitative analysis of medical images has received mounting attention for extracting diagnostic, prognostic, or predictive information from medical images for patient diagnosis, treatment, and management. It is well known that the readers who review and interpret these images play a critical role in this decision-making process. However, variability among readers (radiologists, pathologists, different facilities, etc.) in their interpretive performance can be quite extensive due to subjectivity, differences in training and experience, and other factors. Such reader variation, both intra- and inter-reader, therefore makes the evaluation of medical products very challenging. In this session, we will invite speakers representing different stakeholders to investigate multiple aspects of the challenges imposed by reader variation in clinical trials that involve medical imaging analysis. Based on their experience in clinical trial design, analysis, or regulatory review, the speakers will discuss 1) the measurement and assessment of reader variation, 2) study designs and analyses for reducing reader variation (e.g., with the assistance of artificial intelligence), and 3) novel statistical methods for evaluating the safety and effectiveness/efficacy of medical products in the presence of reader variation. The topics presented in this session will provide insights and generate discussion among statisticians on how to handle reader variation in pre-market clinical trials involving medical images.
An unprecedented number of new cancer targets are in development, and most are being developed in combination therapies. The rationale for combination therapy is to combine drugs that work by different mechanisms, thereby improving efficacy and decreasing the likelihood of developing cancer resistance. A few key questions in combination therapy development are: how to select combination partners; at what stage of drug development to demonstrate the contribution of components (CoC), and to what degree CoC needs to be demonstrated; and how to better use external data (RWE or historical clinical trials) to design more efficient trials in the combination setting. Back in 2013, FDA published guidance for industry on the co-development of two or more new investigational drugs for use in combination. The guidance emphasizes new investigational drugs and advocates factorial designs, which in general may not be efficient. This session builds on last year's session on a similar topic, expanding further to design strategies that improve trial efficiency while meeting regulatory requirements. In this session, we will discuss the selection of combination partners using predictive biomarkers, synthesizing information from publications and historical trial data; design considerations for proof-of-concept trials aimed at screening potentially active combinations with limited resources; and statistical methods to evaluate the contribution of components. We will share regulatory perspectives, including examples of combination trials where multi-arm multi-stage (MAMS) designs might be considered.
Following the initial draft guidance for industry on master protocols for oncology trials in October 2018, FDA recently released the finalized guidance in March 2022, which includes considerations of specific types of designs in a master protocol, i.e., basket, umbrella, and platform trials that could potentially expedite drug development by increasing statistical and/or operational efficiencies. For oncology basket trials where a single investigational agent (or combination) is evaluated across different tumor types (baskets), “information-borrowing” as an effective strategy is widely utilized and conveniently handled in the Bayesian framework by most established approaches. However, the underlying hierarchical model assumptions may not hold due to patient-level heterogeneity, and basket homogeneity/exchangeability may be alternatively characterized to reflect potentially different efficacy benchmarks. In addition, the decision-making process involving which basket(s) may be terminated early due to futility and which basket(s) are promising for further late-stage development could be further improved by utilizing “dual criteria” that account for both the reference and target response rates rather than the current binary rules which can be arbitrary. This session will feature pharmaceutical industry and regulatory agency speakers to share their perspectives and insights on the aforementioned practical considerations of information-borrowing and informed decision-making in basket trials.
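One way to formalize the "dual criteria" idea described above is a go/no-go rule that requires strong posterior evidence of exceeding the reference response rate and at least even posterior odds of exceeding the target rate. The sketch below assumes a conjugate beta-binomial model for each basket; the rates `p0` and `p1` and the thresholds `c0` and `c1` are hypothetical placeholders, not the task force's actual proposal.

```python
from math import comb

def beta_tail(a, b, c):
    """P(p > c) for p ~ Beta(a, b) with integer a, b, via the binomial
    identity I_c(a, b) = P(Bin(a + b - 1, c) >= a)."""
    n = a + b - 1
    return sum(comb(n, k) * c ** k * (1 - c) ** (n - k) for k in range(a))

def dual_criteria_go(x, n, p0, p1, c0=0.975, c1=0.5, a0=1, b0=1):
    """'Go' for one basket with x responders of n patients: require strong
    evidence the response rate exceeds the reference p0 AND at least even
    posterior odds of exceeding the target p1 (thresholds illustrative)."""
    a, b = a0 + x, b0 + n - x
    return beta_tail(a, b, p0) >= c0 and beta_tail(a, b, p1) >= c1

# Hypothetical basket: 12/20 responders vs. reference 15%, target 35%
print(dual_criteria_go(12, 20, p0=0.15, p1=0.35))
```

Relative to a single binary rule against `p0`, the second criterion prevents a "go" for baskets whose rates merely clear the reference but fall well short of the target worth pursuing in late-stage development.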
The design and analysis of medical device studies pose different statistical challenges for diagnostic and therapeutic (non-diagnostic) medical devices. Examples of diagnostic medical devices include complex equipment that produces images such as x-rays or scans, immunoassays, and blood glucose monitors, whereas therapeutic devices include pacemakers, catheters, and devices that use radiation to treat cancer and tissue defects. Statistical issues for these kinds of devices include the use of sham controls, the inability to perform blinded studies, non-inferiority, repeated measures, and historical controls. Diagnostic devices pose a very diverse set of challenges, especially artificial intelligence (AI)-enabled medical devices. This session will address experiences with therapeutic and diagnostic devices, covering regulatory and statistical challenges and innovations in study design and methodology.
Session Description: The FDA’s Oncology Center of Excellence recently launched Project Optimus, which will develop new guidance to address issues relating to dose optimization in early clinical trials assessing the safety and efficacy of oncology drugs. This signals a paradigm shift, with new scrutiny and greater emphasis from the FDA on optimizing the dose of novel oncology drugs. The shift has sparked tremendous interest as well as questions about why, when, and how to optimize dose. In this session, three confirmed speakers from the FDA, industry, and academia will speak. The FDA speaker, one of the leaders of the FDA’s dose optimization effort, will share her experience, in particular why and when dose optimization is important for drug development. The speaker from industry, who has extensive experience and has served as a panelist in a dose optimization forum, will discuss the experience, current practice, and challenges of dose optimization in industry. The speaker from academia will discuss the statistical challenges of dose optimization and provide design strategies to address them.
FDA speaker Title: Changing the Dosing Paradigm for Oncology Drugs Abstract: In this talk, we will review the historical dosing paradigm for oncology drugs, including why drugs have been dosed at the maximum tolerated dose (MTD). We will identify why this approach is not optimal for the development of non-cytotoxic agents. We will discuss the importance of planning a dose optimization strategy early in clinical development. We will highlight key components of a successful dose optimization strategy, which include sufficient characterization of pharmacokinetics, incorporation of pharmacodynamic endpoints, evaluation of dose- and exposure-response relationships, and use of randomized dose trials when appropriate. Case examples will be incorporated to illustrate important concepts. By the end of the talk, participants will understand why dose optimization is an essential component of developing safe and effective oncology drugs and how to design and implement a successful plan for dose optimization.
Industry speaker Title: Considerations for Oncology Dose Optimization Designs from an Industry Perspective Abstract: The recent FDA Project Optimus has had a profound impact on oncology drug development. Across industry, we have seen feedback from the agency requesting more dose-finding studies for oncology drugs. How to balance development speed with rigor in dose optimization has become an important topic for drug companies. Oncology drugs have historically been given a pass because of the belief that “more is better,” dating back to the genesis of chemotherapy. However, this is no longer necessarily true for agents with new mechanisms of action, greater efficacy, and different safety profiles than older drugs. For example, chronic, low-grade toxic effects may be more relevant and may interfere with prolonged administration, hinder adherence, and therefore result in disease progression. Looking at toxicities other than DLTs (dose-limiting toxicities), and beyond the short DLT window (typically one cycle), is therefore important. For the subsequent dose expansion or phase II study, more studies are proposed to evaluate multiple dose levels (or to include a control), and subsequent confirmatory studies will also be adapted depending on the outcomes of earlier trials. In this presentation, we will share our experience addressing these questions with innovative statistical methodology and its implementation considerations.
Academia speaker Title: Bayesian Adaptive Designs for Dose Optimization Abstract: FDA recently released the Guidance for Benefit-Risk Assessment for New Drug and Biological Products and launched Project Optimus, which will develop new guidance to address issues relating to dose optimization in early clinical trials assessing the safety and efficacy of oncology drugs. This highlights the importance of dose optimization in the era of targeted therapy and immunotherapy, and the need for a paradigm shift from the maximum tolerated dose (MTD) to the optimal biological dose (OBD), with the goal of maximizing the risk-benefit tradeoff for patients. In this talk, from the statistical and trial design viewpoint, I will contrast fundamental differences between the identification of the OBD and the MTD, and highlight challenges of dose optimization. I will discuss adaptive design strategies to address these challenges, including model-based designs and model-assisted designs for dose optimization. Trial examples will be used to illustrate the methodology.
In October 2021, FDA issued final guidance for industry (GFI) documents that describe pathways for animal drug sponsors to use approaches such as adaptive study designs, real-world evidence, and biomarkers to establish drug effectiveness, and more detailed recommendations on how to leverage data collected from foreign countries to support approval of their products in the U.S. FDA’s recommendations and current thinking were issued in four separate GFIs: • Use of Data from Foreign Investigational Studies to Support Effectiveness of New Animal Drugs (NADs) • Use of Real-World Data and Real-World Evidence to Support Effectiveness of NADs • Biomarkers and Surrogate Endpoints in Clinical Studies to Support Effectiveness of NADs • Adaptive and other Innovative Designs for Effectiveness Studies of NADs
These GFIs are intended to encourage animal drug sponsors to consider innovative approaches in investigations to support the approval of new animal drugs. The recommendations closely align with those already issued by FDA’s other medical product centers. However, animal drug evaluation presents unique challenges in the use of innovative approaches, due in part to small study sizes, difficulties in assessing animal response, and inherent variability of target populations. The session has three identified speakers. The FDA speaker will provide a broad introduction of the new guidance and the speaker from industry will offer industry perspective regarding opportunities to advance drug development using the innovative approaches outlined in the guidance documents. The speaker from academia will discuss current methods and opportunities for collaboration and research within the framework of the key statistical principles outlined in the guidance. This session will be a timely forum for statisticians and regulators in industry, academia, and FDA to discuss the new guidance and its implementation in veterinary drug approvals, along with lessons learned from human medical product applications.
Adaptive seamless designs have gained popularity for reducing the time and number of patients a clinical program needs to discover, develop, and demonstrate the benefits of a new drug. Instead of conducting separate development phases, seamless designs combine two trial phases into one trial, typically Phase 1/2, Phase 2a/2b, or, most commonly, Phase 2/3. The adaptive seamless design is therefore advantageous over the conventional study design in reducing the duration of the clinical trial and achieving greater efficiency by using data from both stages. Although the benefits are appealing, adaptive seamless designs require significant effort in trial planning and implementation regarding statistical methodology and operational considerations. This session will focus on researchers’ recent efforts in innovative adaptive seamless clinical study designs. In particular, Dr. Chow will present an overview of statistical methods for the analysis of different types of seamless adaptive designs, considering the same or different study objectives and endpoints at different stages; a case study will also be presented. Dr. Jin will present a framework for seamless phase 2/3 designs with multiple endpoints or multiple treatment arms that does not inflate the family-wise Type I error under a mild assumption; simulations will also be presented with illustrative examples. Dr. Scott from the FDA will provide a discussion of the two speakers’ presentations from regulatory and statistical perspectives.
Safety assessments represent an important component of new drug applications in both pre-market and post-market settings. Central to the assessment of drug safety for a pre-market application is assessing the adequacy of the testing for safety and determining the significance of the adverse events and their impact on the approvability of the drug (risk/benefit analysis). It is also needed to describe the safety issues that should be included in product labeling should the drug be approved, and to decide whether additional safety studies and/or risk-management plans are needed. To address these objectives, many questions are investigated, e.g., the incidence, timing, duration, seriousness, reversibility, and recurrence of adverse events (AEs); the duration of follow-up; the impact of dose (reduction); the timing and frequency of treatment; and the effects of rescue treatments on AEs. In the absence of more information about the relationship of AEs to treatment duration, for example, the selection of a specific number of patients to be followed for 1 year is to a large extent a judgment call based on the probability of detecting a given AE frequency level and practical considerations. For products intended for long-term treatment of non-life-threatening conditions (e.g., continuous treatment for 6 months or more, or recurrent intermittent treatment where cumulative treatment equals or exceeds 6 months), ICH E1 and the FDA have generally recommended that 1500 subjects be exposed to the investigational product (with 300 to 600 exposed for 6 months, and 100 exposed for 1 year). Hence, a large database has been helpful in making risk-benefit decisions to determine who may or may not benefit from the product. However, even though clinical trials provide important information on a drug’s efficacy and safety, it is impossible to have complete information about the safety of a drug at the time of approval. Some adverse events may not be observed in clinical trials because of the trials' limited size and duration.
Alternatively, the frequency of an adverse event may be inadequately characterized. Registries, large safety studies, and existing large electronic health databases—such as electronic health record systems, administrative and insurance claims databases, and registries—are therefore used to monitor the safety of approved medical products in near real time. The true picture of a product’s safety thus evolves over the months and even years that make up a product’s lifetime in the marketplace.
This session will focus on new statistical techniques for assessing safety in product development. The speakers will address the questions outlined above through the following topics from their current research:
1. Bayesian Estimate of Risk Ratios and Absolute Risk Differences in Integrated Analysis of Pre-market Safety 2. Efficient Methods for Signal Detection from Correlated Adverse Events in Clinical Trials 3. Advanced Novel Visual Analytics of Drug Safety Data, Tools, and Resources
As the regulatory environment becomes progressively more receptive toward utilizing real-world evidence, a spectrum of real-world data (RWD) incorporation techniques in trial conduct and analysis has seen increasing interest and adoption across different stages of drug development. One focus of leveraging RWD is to reduce the sample size required for making efficacy claims, so that a sufficient number of patients can be enrolled/included over a reasonable time period to meet the desired statistical power to demonstrate efficacy of an investigational treatment. To this end, some approaches based on propensity scores have been popularized and are primarily used to create matched groups of patients that are controlled for confounding given a set of baseline covariates. However, these methods have some limitations in practice. For instance, iterative balance checking on the measured confounders and tweaks to the exposure models are often inevitable to achieve joint balance, which can be time-consuming and less objective; when the available sample size in the investigational treatment arm is very small, standard matching/weighting techniques may not be feasible due to the limited representation of the target population; moreover, the subtleties of what should be matched, and what is actually being estimated, often lack full consideration when propensity-score-based methods are developed or applied. In this session, proposals to address these issues will be presented by participants from the regulatory agency and industry. Clinical trial examples will be discussed.
Novel approaches for flexible and efficient decision-making in drug development rely on innovative trial designs. The patient’s voice, translated into a conceptualization of “success” for a research outcome, is a key driver in designing clinical trials, and is linked to the improvement of the treatment benefit/risk profile over existing standards of care, decreased health care costs, and/or accelerated time to approval. These metrics can in turn be linked to development efficiency. Efficiency can be viewed from the perspectives of multiple stakeholders, not only at the trial level but also at the program and portfolio levels. We begin the session with an overview of the concept of efficiency. The next presentation will provide an example: a trial in the medical device field for a pediatric population with multiple interim analyses and an adaptive time point of assessment. The session concludes with patients’ points of view and FDA perspectives on concepts of innovative trial designs.
Presenters: • Jie Xu, JNJ Vision • Zoran Antonijevic, Abond CRO • Gigi McMillan, LMU Bioethics Institute • Pat Furlong, Parent Project Muscular Dystrophy (PPMD) • FDA speaker
The topic of this session was inspired by a plenary session from RISW2021, in which the role of ‘statistical thinking' in the era of big data was examined, with emphasis on statistical collaboration across multiple quantitative disciplines. As the complexity of pharmaceutical sciences problems grows at a rapid pace, so do the challenges of statistical innovation. And the very definition of what “statistical innovation” means keeps evolving. It’s often unclear to today’s professionals what the next direction of statistical innovation will be a few years from now, and how our profession will be shaped by the world of Data Science and AI. In this session we build upon these questions by examining how different statistical innovation groups approach this challenge. We believe modern innovation is about building a complex combination of skills that go far beyond the technical abilities of any one individual and leverage the power of collaboration across various quantitative disciplines. We illustrate the concept by presenting a few case studies highlighting this strategy. The first talk will cover industry strategies on how to build and maintain “cutting edge” future skills to keep pace with rapid pharmaceutical development. It will include a case study of a large cross-functional project to embed quantitative decision making in the clinical operations space. The second talk will be from an FDA speaker, providing insight into how the agency foresees the key topics of future innovation in clinical trials and builds its analytical and computational capabilities accordingly. This will be illustrated with an overview of the PDUFA VII initiative. We will conclude with a panel discussion to tie it all together and to provide additional perspectives from academia, technology companies, and various industry representatives. The aim of this session is to assist statistical professionals in getting a balanced view of possible future paths in statistical innovation.
The document ICH E9 (R1) has brought much attention to the concept of estimands in the clinical trials community. With the release of the FDA guidance on estimands and sensitivity analysis in clinical trials in May 2021, statisticians in the pharmaceutical industry have been revising templates of protocols and statistical analysis plans to incorporate the "five attributes" of an estimand—treatment, population, variable, intercurrent events, and population-level summary—the "five strategies" for addressing intercurrent events—treatment policy strategies, hypothetical strategies, composite variable strategies, while-on-treatment strategies, and principal stratum strategies—and other critical concepts such as sensitivity analysis, supplementary analysis, etc.
Most of the ongoing efforts focus on identifying, understanding, and handling various intercurrent events in a clinical trial within the estimand framework. Nonetheless, there are valuable, even crucial, points made in the ICH E9 (R1) addendum that were not fully fleshed out, nor well discussed in the literature.
In this session, speakers from the FDA and industry will report several cases and scenarios they have experienced or noticed when implementing the statistical principles in the addendum in their work and draw attention to several facets of estimands besides intercurrent events.
First, in the context of observational studies, we will discuss how weighting schemes can be connected to estimands, or more specifically to one of its five attributes identified in the addendum, and the attribute of a population within the Rubin Causal Model. Three estimands are examined from both theoretical and practical perspectives. Factors that may be considered in choosing among these estimands are discussed.
Second, we will discuss how to develop a novel estimand by integrating tumor burden in the treatment effect evaluation in oncology clinical trials. Although intercurrent events and missing data are not the major issues in the settings we consider, we will illustrate that the sequential process emphasized in the addendum—trial objective, estimand, main estimator/estimate and sensitivity estimator/estimate—is still valuable for estimand development and implementation.
Third, we observe that hypothetical strategies are somewhat overused based on our experience. Moreover, the hypothetical strategies are often accompanied by mixed-effects model for repeated measures (MMRM) approaches for handling unobserved data. We will explore the potential risks associated with the hypothetical strategies plus the MMRM approach by simulation studies and discuss several alternative options to this approach.
Surrogate endpoints have been used in drug development in various therapeutic areas to predict clinical effect. The use of surrogate endpoints can substantially expedite drug development programs, leading to increased interest in developing and analyzing surrogate endpoints. However, it is always challenging to evaluate surrogacy in clinical trials, and even more difficult to precisely quantify the relationship between a surrogate endpoint and the clinical endpoint, which can be crucial at the trial planning stage.
In this session, we will present and discuss some recent advances in using surrogate endpoints to predict clinical effect. Dr. Luan Lin from Biogen will present a new modeling framework to evaluate surrogacy of a biomarker on a clinical endpoint. She will demonstrate the application of the method to a rare and progressive neurodegenerative disease trial. Our second speaker, Dr. Ronan Fougeray from Servier, will present a method to incorporate information from a surrogate endpoint for early futility decision making in an adaptive clinical trial. He will demonstrate the application of the method to a confirmatory phase III oncology trial. Dr. Therri Usher from the FDA will lead the discussion for this session.
Artificial Intelligence (AI) and Machine Learning (ML) are making headlines in clinical research, especially in areas such as drug discovery, digital imaging, disease diagnostics, and genetic testing. Although the vision of AI/ML in precision medicine is alluring, there is a need to distinguish genuine potential from hype. In this session, we will invite thought-provoking speakers from FDA, Industry and Academia to talk about the statistical challenges AI/ML are facing in precision medicine and how to overcome their limitations to unleash the great potential of machine learning-powered precision medicine.
Group sequential design (GSD) is one of the greatest statistical innovations in modern statistics. Sequential analysis started about 100 years ago with Dodge and Romig (Dodge and Romig, 1929), and was further advanced by Wald with sequential hypothesis testing (Wald, 1945). As it stands today, GSD and its variations have become the most popular methods in trial design and monitoring. Many statisticians consider GSD a standard and straightforward approach in a clinical trial. Is this a true statement? Is there any chance we may misuse GSD in clinical trials? Are there any caveats or precautions we need to consider other than statistics?
In this session, we will invite renowned experts in this field to address the questions above. The presenters will give a historical review of the development of GSD and its variations. Most importantly, real-world experiences with the application of GSD in trial design and monitoring will be presented. Regulatory agencies will share their experience interacting with sponsors on the implementation and interpretation of GSD. Future directions of GSD development will also be discussed.
FDA’s ‘Project Optimus’ encourages sponsors to move away from conventional dose-finding for modern cancer therapies to achieve better dose optimization. In oncology and cellular therapy, determining the dose with the optimal benefit-risk profile has always been a critical topic in driving the success of drug development, and it requires comprehensive data-driven decision making in phase I and II trials before committing to a pivotal trial. To protect patients from toxic dose levels while efficiently exploring the possible therapeutic dose levels, adaptive dose-finding designs with considerations beyond dose-limiting toxicity (DLT) in cycle 1 are recommended for candidate dose level nominations. Additionally, to further characterize the exposure-response relationship, multiple arms with randomization of patients to two or more candidate doses can be utilized to obtain a full spectrum of information and guide decision making. In cellular therapy, dose optimization is even more challenging due to the complex mechanism of action, variability in dose levels, and manufacturing capability. Thus, to address the unconventional dose-response relationship, umbrella trials with multiple dose levels are extremely important. This calls for careful calibration of the appropriate design and estimation. Therefore, as inspired by Project Optimus, we are proposing a session to invite experts from FDA, industry, and academia to discuss and share their insights on innovative strategies guiding decision making for dose optimization in oncology and cellular therapy phase I and II studies. It will cover, but is not limited to, the aspects below: 1. Novel dose finding designs beyond dose-limiting toxicities and/or from beyond the first treatment cycle; 2. Modeling efforts incorporating a full spectrum of information for exposure-response characterization; 3. Master protocol, umbrella trial, and platform trial designs for estimation and decision making; 4. 
Information borrowing from historical studies to improve the estimation accuracy; 5. Adaptive randomization in umbrella trial (i.e., multiple dose expansion cohorts).
Introduction
A major goal of any clinical development program is to implement the most efficient clinical trials to demonstrate the clinical benefit of a new drug. Traditionally, in oncology drug development, in order to achieve this goal and gain regulatory approval of a new drug, sponsors had to establish safety and antitumor activity, and then demonstrate efficacy benefits against an active comparator, usually the standard of care (SOC), with randomized comparative clinical trials being the gold standard. As the dynamics of oncology drug development shifted, with increasing demand for reduced time to drug approval and demonstration of greater clinical benefits, and with a transition from conventional chemotherapy to targeted agents, the traditional drug development paradigm has changed.
Motivation
Single-arm trials may now be sufficient to support regulatory approvals (conditional/accelerated) of targeted cancer drugs in molecularly selected patient populations or in situations when conducting a large comparative trial is impractical or unfeasible, e.g., in populations of patients with rare cancers. One of the biggest limitations of single-arm trials is the lack of a reference for comparison.
Problem Statement
Can we use prior data of study patients as a clinically relevant reference for comparison? That is, can we use each study patient as his/her own control, comparing the benefit achieved on the last prior therapy with the benefit of the study drug? Due to the natural history of cancer, progression-free or disease-free time is shorter, and responses usually decline, on subsequent lines of therapy. Spontaneous remissions are possible but unfortunately extremely rare. If a new drug has an anti-tumor effect, it may change the natural history of the disease and provide improved treatment benefit compared to the prior therapy.
Statistical approach (examples)
Response data (BOR observed with the study drug vs. BOR from the last prior therapy): • Proportion of patients with equal or better BOR on the study drug; proportion of patients with better BOR on the study drug • Formal test to compare paired proportions • Visualization (e.g., Sankey plot?)
Time-to-event data: • Growth modulation index (GMI) = "Trt effect on study drug" / "Trt effect on last prior therapy" [Von Hoff (1998)] • GMI of 1 or above as a sign of activity; GMI of 1.33 or above as a marker of meaningful clinical activity of a new treatment • Visualization (e.g., waterfall plot?) • Formal test to compare right-censored paired data • Using time on treatment as a surrogate endpoint for the time patients are deriving clinical benefit when PFS data are unavailable • Visualization (e.g., K-M plot?)
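The formal test to compare paired proportions of BOR outcomes is commonly McNemar's test, which uses only the discordant pairs; the abstract does not name a specific test, so this is an illustrative assumption. A minimal exact (binomial) version in Python:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired binary outcomes.

    b: patients with a better BOR on the study drug only
    c: patients with a better BOR on the prior therapy only
    (concordant pairs do not enter the test)
    Returns a two-sided p-value under H0: discordant pairs split 50/50.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # two-sided exact binomial tail probability, capped at 1
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# hypothetical counts: 8 patients improved on the study drug, 2 on the prior therapy
p_value = mcnemar_exact(8, 2)  # ≈ 0.109, not significant at the 5% level
```

In practice one would use an established implementation (e.g., `statsmodels.stats.contingency_tables.mcnemar`); this sketch only shows the mechanics.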
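The GMI calculation and the thresholds above are simple in the uncensored case; a minimal sketch (deliberately ignoring right-censoring, which the formal paired test must handle):

```python
def growth_modulation_index(ttp_study, ttp_prior):
    """GMI = time to progression on the study drug divided by time to
    progression on the last prior therapy [Von Hoff (1998)]."""
    return ttp_study / ttp_prior

def interpret_gmi(gmi):
    """Thresholds as stated above: >= 1.33 meaningful clinical activity,
    >= 1 a sign of activity."""
    if gmi >= 1.33:
        return "meaningful clinical activity"
    if gmi >= 1.0:
        return "sign of activity"
    return "no improvement over prior therapy"

# hypothetical patient: 8 months on the study drug vs. 5 months on the prior therapy
gmi = growth_modulation_index(8, 5)   # 1.6
label = interpret_gmi(gmi)            # "meaningful clinical activity"
```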
Summary
Intra-patient analysis in a single-arm trial is not a replacement for a randomized controlled trial, but rather a way to generate supportive evidence (surrogate measures) when an alternative solution is not feasible and/or is time-consuming, or when an interpretation of surrogate endpoints is beneficial for hypothesis generation and/or regulatory interactions.
Questions for discussion
Statistical perspective: shall we consider prospectively planning intra-patient analyses with precise definition of endpoints and formal statistical hypotheses? Shall we optimize data collection to capture more details on prior therapy in a more standardized way? What is the best statistical methodology to use?
Regulatory perspective: can intra-patient analysis support a regulatory claim or make a review process easier?
Payer perspective: can intra-patient analysis also help supporting payer negotiations?
Covariate adjustment through linear or nonlinear models is often used in the analysis of clinical trial data, as it leads to efficiency gains when the covariates are prognostic for the outcome of interest. Careful consideration is required when adjusting for covariates in nonlinear models such as logistic regression and Cox regression. For these nonlinear models, the inclusion of baseline covariates can change the key treatment effect (estimand). For example, the conditional treatment effect can differ from the unconditional treatment effect due to non-collapsibility, as described in the draft FDA guidance released in May 2021.
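Non-collapsibility of the odds ratio can be shown with a small numeric sketch (hypothetical coefficients): even with a perfectly randomized treatment and no confounding, the marginal odds ratio differs from the conditional one.

```python
from math import exp

def expit(z):
    return 1.0 / (1.0 + exp(-z))

# hypothetical logistic model: logit P(Y=1 | T, X) = b0 + bT*T + bX*X,
# with randomized binary treatment T and a prognostic covariate X ~ Bernoulli(0.5)
b0, bT, bX = -1.0, 1.0, 2.0

def marginal_risk(t):
    # average over the covariate distribution (X independent of T by randomization)
    return 0.5 * expit(b0 + bT * t) + 0.5 * expit(b0 + bT * t + bX)

def odds(p):
    return p / (1.0 - p)

conditional_or = exp(bT)  # identical in both X strata by construction: ~2.72
marginal_or = odds(marginal_risk(1)) / odds(marginal_risk(0))  # ~2.23, closer to 1
```

The gap between ~2.72 and ~2.23 is purely non-collapsibility, not confounding, which is why the choice between conditional and unconditional estimands matters for logistic (and Cox) models.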
It is important that the statistical analysis aligns with the estimand of interest. For the unconditional treatment effect, covariate-adjusted estimators have been developed over the past few years which are typically robust to misspecification of the regression models used. Despite the extensive literature and recommendations by the FDA on the statistical theory and properties of these methods, practical experience is limited, and a few open questions require further discussion, especially for time-to-event outcomes. For example:
• What are the considerations in choosing between the conditional and unconditional effect as the primary estimand? Are there specific scenarios in which the unconditional (or conditional) treatment effect is more clinically meaningful? How should conditional and unconditional treatment effects be communicated to all stakeholders involved in a clinical trial?
• How should existing methods for estimating the unconditional treatment effect be implemented in clinical trials, and how can their limitations be addressed?
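One common estimator of the unconditional treatment effect is standardization (g-computation). The sketch below is a minimal nonparametric version for a randomized trial with a single discrete covariate (an illustrative simplification; in practice a regression model replaces the stratum means, and the abstract does not commit to a particular estimator):

```python
from collections import defaultdict

def standardized_risk_difference(records):
    """Nonparametric g-computation for the unconditional risk difference.

    records: iterable of (t, x, y) with binary treatment t, discrete
    covariate x, and binary outcome y. Assumes every (t, x) stratum is
    nonempty. Stratum-specific risks are averaged over the marginal
    distribution of x, then contrasted between arms.
    """
    events = defaultdict(int)   # event count per (t, x) stratum
    sizes = defaultdict(int)    # size per (t, x) stratum
    x_count = defaultdict(int)  # marginal distribution of x
    n = 0
    for t, x, y in records:
        events[(t, x)] += y
        sizes[(t, x)] += 1
        x_count[x] += 1
        n += 1

    def standardized_risk(t):
        return sum((x_count[x] / n) * (events[(t, x)] / sizes[(t, x)])
                   for x in x_count)

    return standardized_risk(1) - standardized_risk(0)
```

With exactly balanced arms this coincides with the crude difference in event rates; its value lies in regaining efficiency, or correcting chance imbalance, when x is prognostic.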
Evaluations of these methods are ongoing within communities. This session will focus on addressing some of the above-mentioned scientific questions and share experience of real-life implementation of covariate adjustment methods. This session will feature prominent participants from industry, academia and regulatory agency, including one speaker and three panelists from FDA.
The 21st Century Cures Act (Cures Act) of 2016 encouraged sponsors to explore and investigate the use of Real-World Data (RWD) and Real-World Evidence (RWE) in facilitating regulatory decision-making, including the demonstration of a drug's efficacy and safety. Since its enactment, more and more real-world databases have been initiated, with higher data quality and often built-in data extraction and analysis tools. The FDA, in turn, must fulfill its mandate to issue guidance about the use of RWE to help support an approved drug seeking a new indication or meeting post-approval study requirements. In the last quarter of 2021, the FDA published a series of guidance documents on RWD utilization standards and strategies. Each guidance has its specific purpose, ranging from electronic health records (EHRs) and medical claims (“Real-World Data: Assessing Electronic Health Records and Medical Claims Data To Support Regulatory Decision-Making for Drug and Biological Products”), to registries (“Real-World Data: Assessing Registries to Support Regulatory Decision-Making for Drug and Biological Products Guidance for Industry”) in support of submissions for drug and biologic approval, including data standards specific to RWD (“Data Standards for Drug and Biological Product Submissions Containing Real-World Data”) and other considerations (“Considerations for the Use of Real-World Data and Real-World Evidence To Support Regulatory Decision-Making for Drug and Biological Products”) in regulatory decision-making. These draft guidance documents had an open comment period inviting drug developers and external stakeholders to weigh in on strategies for utilizing RWD. Since then, there have been several comments on the public docket and many more discussions in the public domain on this subject. This panel will bring together experts and stakeholders from industry, RWD vendors, and regulatory agencies to discuss different aspects of the guidance and its implications.
Panelists from the FDA will speak to the requirements and standards expected of RWD sources and how RWD could expedite their review process. Data vendors will address concerns, difficulties, and the impacts of implementing all or portions of these guidance documents. Panelists from the pharmaceutical/academic setting will address the challenges faced while using these guidance documents. The panel will address points of concurrence and identify areas that could be improved with additional guidance or clarity. Possible next steps for furthering the use of RWD in clinical research will also be explored.
AI/ML methods are often mentioned in connection with health data. Current applications are in the fields of disease diagnosis and event prediction. For an example of the first category in an ICU setting, see https://pubmed.ncbi.nlm.nih.gov/32152583/; for the latter category, see Bayer's app MyIUS (https://www.bayoocare.com/en/), developed for a hormonal birth control device. However, it seems that the application of AI/ML methods in the setting of an initial NDA has not yet gained much interest. This session therefore aims to contribute to the application of AI/ML methods in the regulated field of late-stage clinical development. Topics will be as follows: 1. Industry perspective: Case studies where AI/ML methods have been applied in the drug development process: a. Which methods have been applied, and in which settings? b. Where have these methods provided additional insights? c. How have these methods been communicated to internal stakeholders? d. Is there an optimal balance between the prediction performance of an AI/ML model and its interpretability? 2. Regulatory perspective: a. Is the FDA considering the application of these methods? b. Do these methods have the potential to impact the review process, and are there cases where such methods have been used? c. Given adequate and validated prediction performance, could AI/ML algorithms trained with submission data subsequently be used for guiding treatment decisions, i.e., be a “digital CDx”? 3. Operational topics: a. AI/ML methods to increase data quality. 4. Technical perspective: a. Validation and quality control of AI/ML analysis software coded in open-source languages like R/Python. b. Life cycle, CI/CD, and continuous quality control of AI/ML algorithms used with patient data (including post-approval scenarios). We are currently in contact with stakeholders who can cover these aspects. The session will consist of three presentations (industry and regulatory) plus a panel discussion.
Tissue biopsy remains the standard of practice for cancer diagnosis. However, it may be invasive and costly. Liquid biopsy-based tests using circulating tumor DNA/cell-free DNA (ctDNA/cfDNA) are developing rapidly and finding applications in precision medicine companion diagnostics (CDx). Recently, there has also been a lot of research on liquid biopsy-based tests regarding their potential in monitoring a patient's response to treatment. In this session, we will discuss challenges with study design and statistical analysis for evaluating such liquid biopsy-based diagnostic tests.
The benefit-risk assessment of a new medical product is complex and involves trade-offs between often conflicting multiple efficacy and safety endpoints, in addition to the different methodologies for benefit and risk assessments. Therefore, succinctly describing the benefit-risk profile and communicating the trade-offs between benefits and risks in a clear and transparent manner, using all available evidence, is critical for regulatory decision-making and individual patient management. Bayesian analysis, in addition to conventional approaches, provides an alternative framework to perform such quantitative assessments of the benefit-risk trade-off by allowing formal utilization of prior information and integrating different sources of information and uncertainty. With an increased emphasis on improving the benefit-risk assessment process at FDA, there has been an increase in efforts on the sponsors’ side for quantitative benefit-risk assessment, often within a Bayesian framework. Innovative Bayesian methods for benefit-risk assessment, along with empirical examples, will be presented in this session. Demonstration of a software package developed by the presenters may also be included. Through such illustrations, presenters will discuss various research experiences with Bayesian benefit-risk methods, including its strengths, limitations, and potential future applications.
Randomized clinical trials (RCTs) are the gold standard for demonstrating the efficacy and safety of a new drug, device, or intervention because randomization ensures balance between arms and guards against confounding. However, RCTs can be very expensive and can take many years to complete, especially for rare diseases and small clinical trials, mainly due to the lack of enough patients. In addition, for life-threatening diseases commonly seen in oncology and hematology, RCTs may not be feasible and could be unethical. Therefore, single-arm studies utilizing real-world data (RWD) from electronic health record systems, the literature, or registries, either serving as a control or providing prior information to assist in trial planning or efficacy estimation for the new drug, have become attractive. Given the need for comparability between study data and real-world data in terms of study design components, patient population, and efficacy and safety measurements, the practical use of RWD in efficacy analyses can be challenging. In particular, it is well known that RWE can only correct for biases due to measured confounding, not for unmeasured confounding. Missing data arising from differences in data collection between the ongoing study and the historical data can be another serious concern. Although many sensitivity analyses, such as Bayesian twin regression, the E-value, and the high-dimensional propensity score, have been proposed to study the robustness of an association to potential unmeasured confounding and missing data, how to make the final decision by incorporating the sensitivity analyses is not yet clear. This session is intended for audiences who are interested in utilizing RWD/RWE in oncology and hematology clinical trials, to help them gain a clear understanding of the challenges and learn how to correctly use sensitivity analyses to study efficacy robustness in the presence of unmeasured confounding.
Speakers from regulatory agencies, pharmaceutical companies, and academia will share their latest research, practical trial examples, and potential methods.
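As a concrete example of the sensitivity analyses named above, the E-value (VanderWeele and Ding, 2017) has a closed form: for an observed risk ratio RR > 1, E = RR + sqrt(RR × (RR − 1)). A minimal sketch of the point-estimate version (confidence-limit E-values apply the same formula to the limit closest to the null):

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio RR: the minimum strength of
    association, on the risk-ratio scale, that an unmeasured confounder
    would need with both treatment and outcome to fully explain away
    the observed association."""
    if rr < 1:
        rr = 1.0 / rr  # for protective effects, invert the risk ratio first
    return rr + math.sqrt(rr * (rr - 1.0))

# Example: an observed RR of 2.0 yields an E-value of about 3.41
print(round(e_value(2.0), 2))
```

A large E-value indicates that only a strong unmeasured confounder could account for the observed effect, supporting robustness of the single-arm comparison.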
Buyse (2010) extended generalized pairwise comparisons (GPC) to the analysis of multiple outcomes using their hierarchical order of importance. Under GPC, each patient in the treatment group is compared with every patient in the control group; the more important outcomes (e.g., death) are considered first, followed by less important endpoints (e.g., a non-fatal outcome such as disease progression in an oncology study). The notion of a "win" resulting from such comparisons is a very attractive idea from Pocock et al. (2012). Over the past decade, the win ratio (ratio of win proportions; Pocock et al. 2012), the net benefit (difference in win proportions; Buyse 2010), and the win odds (odds of win proportions; Dong et al. 2019) have been developed and comprehensively studied. These three win statistics test the same null hypothesis of equal win probabilities in the two groups. As nonparametric methods, they can handle semi-competing-risk situations (i.e., fatal plus non-fatal outcomes) and non-proportional-hazards situations (e.g., the delayed treatment effect typically seen in immuno-oncology). Their flexibility allows a composite of multiple endpoints of any data type (e.g., time-to-event, continuous, ordinal). The win statistics have been used in practice (e.g., in the design and analysis of Phase III trials) and in support of regulatory approvals (e.g., tafamidis for the treatment of cardiomyopathy per the ATTR-ACT trial). In this session, we will have two speakers and a panel to present and discuss win statistics with respect to (1) theoretical development, (2) regulatory experience, (3) practical considerations, and (4) future perspectives.
References: 1) Buyse M. 2010. Generalized pairwise comparisons of prioritized outcomes in the two-sample problem. Statistics in Medicine 29(30):3245-3257.
2) Pocock SJ, Ariti CA, Collier TJ, Wang D. 2012. The win ratio: a new approach to the analysis of composite endpoints in clinical trials based on clinical priorities. European Heart Journal 33(2):176-182.
3) Dong G, Hoaglin DC, Qiu J, Matsouaka RA, Chang Y, Wang J, Vandemeulebroecke M. 2019. The win ratio: on interpretation and handling of ties. Statistics in Biopharmaceutical Research 12(1):99-106.
4) Maurer MS, Schwartz JH, Gundapaneni B, et al.; ATTR-ACT Study Investigators. 2018. Tafamidis treatment for patients with transthyretin amyloid cardiomyopathy. New England Journal of Medicine 379(11):1007-1016.
5) Label for VYNDAQEL/VYNDAMAX https://www.fda.gov/media/126283/download (e.g., see Page 13).
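The pairwise-comparison scheme described above can be sketched directly. This minimal illustration computes point estimates of the three win statistics for complete data with a hierarchy of outcomes; it ignores censoring, stratification, and variance estimation, which the cited papers address:

```python
def compare(t, c):
    """Compare one treatment patient with one control patient on a
    hierarchy of outcomes (most important first; higher value = better).
    Returns +1 for a treatment win, -1 for a loss, 0 for a tie."""
    for x, y in zip(t, c):
        if x > y:
            return 1
        if x < y:
            return -1
    return 0  # tied on every outcome in the hierarchy

def win_statistics(treatment, control):
    """Win ratio, net benefit, and win odds from all pairwise comparisons.
    Assumes at least one loss so the win ratio is defined."""
    wins = losses = ties = 0
    for t in treatment:
        for c in control:
            r = compare(t, c)
            if r > 0:
                wins += 1
            elif r < 0:
                losses += 1
            else:
                ties += 1
    n = wins + losses + ties
    win_ratio = wins / losses
    net_benefit = (wins - losses) / n          # in [-1, 1]
    win_odds = (wins + 0.5 * ties) / (losses + 0.5 * ties)
    return win_ratio, net_benefit, win_odds

# Each patient: (survival indicator, progression-free months) - illustrative data
print(win_statistics([(1, 10), (1, 5)], [(1, 7), (0, 3)]))
```

Note how the hierarchy works: pairs are decided on survival first, and progression-free time is consulted only when survival is tied.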
Diagnostic tests often involve more than a single analyte or biomarker, e.g., a multivariate index assay that combines the values of multiple variables, or complex genomic signatures such as microsatellite instability (MSI) and tumor mutational burden (TMB). The analytical and clinical validation of these complex biomarkers can differ from single-analyte/biomarker validation, which creates distinct challenges for study design and statistical analysis. In this session, we will discuss the various challenges associated with analytical and clinical validation studies for evaluating such complex biomarkers.