Since Professor Sir David Cox proposed the hazard ratio (HR) and its inference procedure in 1972, the HR has been routinely used to quantify the between-group difference for time-to-event data. More generally, the proportional hazards model has been commonly used for association and prediction analyses. Other models in survival analysis, for example, the accelerated failure time model and transformation models, are only occasionally utilized in practice. Ideally, a summary measure for the group contrast should be model-free to avoid model misspecification at the analysis stage. However, very few model-free measures are available. The most popular ones are based on either the median failure time or the event rate at a specific time point of the study. Both measures are local summaries and cannot capture the short- or long-term survival profile.
In the past few years, issues with utilizing the HR have been extensively discussed. The validity of the HR estimate depends on a strong model assumption; that is, that the ratio of the two hazard functions is constant over time. When the proportional hazards assumption is not met, the HR estimate is difficult, if not impossible, to interpret. In fact, the parameter that the empirical HR estimates is not a simple weighted average of the true HR over time, and it generally depends on the censoring distributions. This is highly undesirable. Moreover, even when the proportional hazards assumption is plausible, a single ratio such as the HR estimate between two groups makes it difficult to assess the clinical utility of the study therapies, since there is no reference hazard value available as a benchmark.
This short course will have two parts. In the first half, we will cover a brief history of the development of survival data analysis and basic theories of survival analysis for one- and two-sample problems. In the second half of this short course, we will illustrate the issues with the HR and present an alternative, the t-year mean survival time (restricted mean survival time, RMST). It is the mean event-free time up to a specific time point, say, t years. The RMST was introduced in the statistical literature in 1949 but did not receive much attention until recently. Below is the list of topics we will discuss in this short course.
Topics to be covered
The first half
- Brief history of the development of survival data analysis methodology
- One-sample problem
  o Survival function, hazard function, and cumulative hazard function
  o Kaplan-Meier method
- Two-sample problem
  o Testing: log-rank test and its large-sample properties via martingale processes; other non-parametric tests
  o More flexible nonparametric testing procedures
  o Estimation: hazard ratio and its inference based on the partial likelihood (see the R sketch after this list)
- Regression
  o Cox regression with general covariates
  o Large-sample inference on the Cox model via martingale and empirical processes
  o Regression-based prediction and resampling methods
  o Regression analysis for clustered data, recurrent events, and competing risks
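As a taste of the first-half material, here is a minimal sketch in R, assuming the survival package and its bundled lung data (illustrative only, not the course's own examples):

library(survival)

# One-sample: Kaplan-Meier estimate of the survival function
km <- survfit(Surv(time, status) ~ 1, data = lung)
summary(km, times = c(180, 365))   # estimated survival at roughly 6 and 12 months

# Two-sample testing: log-rank test comparing the two sexes
survdiff(Surv(time, status) ~ sex, data = lung)

# Estimation: hazard ratio via the Cox partial likelihood
coxph(Surv(time, status) ~ sex, data = lung)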
The second half: More on estimation of the between-group difference
- Issues with the hazard ratio (HR)
- Alternative measures
- Restricted mean survival time (RMST) (see the R sketch after this list)
- Power comparison between RMST-based and HR-based tests
- Empirical choice of a truncation time for RMST
- Study design based on RMST
- RMST for stratified analysis of survival data
- RMST for non-inferiority trials
- RMST to quantify long-term survival benefit
- RMST to assess duration of response
- RMST in the presence of competing risks
- Generalization of RMST for recurrent event time data analysis
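To illustrate the RMST itself, the following is a minimal R sketch, assuming the survRM2 package and simulated data (the truncation time tau and all parameters are illustrative):

library(survRM2)

set.seed(1)
n <- 200
arm <- rbinom(n, 1, 0.5)                                # 1 = treatment, 0 = control
event_time <- rexp(n, rate = ifelse(arm == 1, 0.08, 0.12))
cens_time <- runif(n, 0, 12)                            # administrative censoring
status <- as.integer(event_time <= cens_time)           # 1 = event observed
obs_time <- pmin(event_time, cens_time)

# RMST in each arm up to tau = 8, with difference, ratio, and CIs
fit <- rmst2(time = obs_time, status = status, arm = arm, tau = 8)
print(fit)

The RMST difference is interpreted directly as the gain (or loss) in mean event-free time up to tau, which gives it the clinical interpretability that a lone HR lacks.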
Overview: The Bayesian paradigm provides a hierarchical and practical way of expressing complicated models through a sequence of simple conditional distributions, making it useful for the simple to complex data structures required to address multiple phases of clinical trials, particularly medical device trials, for which the usual placebo-controlled experiments are often difficult to conduct. In recent years there have been tremendous efforts to develop Bayesian analytics for leveraging data from sources outside a prospectively designed study, referred to as external data, such as various Real-World Data (RWD) sources, historical clinical data, and data from multiple trials within a grand hierarchical structure. The development of appropriate statistical models and related inference is therefore warranted, based not only on solid theoretical guarantees but also on ensuring that such complex models are applicable and interpretable in practical settings for modern clinical trials. Thus, one of the primary aims of the proposed short course is to present modern analytical tools that are easily accessible to practitioners, providing a glimpse of the theoretical background supplemented by many practical examples derived from real case studies. This will be accomplished by illustrating a set of numerical examples (using standard software), beginning with simple two-arm trials and moving to more complex hierarchical models that handle several data irregularities (e.g., missing values, censored observations) commonly faced by practitioners of clinical trials and observational studies.
Primary Focus: The tutorial will begin with a quick overview of general-purpose Bayesian methods for randomized controlled trials (RCTs) using various study designs, with special emphasis on medical device trials. The second part of the tutorial will involve more realistic and complex models that have recently emerged in the modern era, used by pharmaceutical industries and regulatory agencies (e.g., FDA), and will then showcase the use of modern Bayesian machine learning (BML) methods through various real case studies. Throughout the tutorial, practical applications and worked-out examples will be emphasized without getting into the theoretical underpinnings of the methods, but relevant literature will be provided for those wishing to learn more in-depth notions of Bayesian ML tools. Participants with basic knowledge of probability theory and the statistical inference framework will find the short course useful in expanding their standard toolkit toward advanced use of Bayesian analytical methods for clinical trials. The concepts and methods discussed will be demonstrated using popular software packages (R and SAS) developed by the presenters, but they are implementable in any other software capable of coding Markov chain Monte Carlo (MCMC) methods.
Course Outline and main topics:
Part I - Introduction to Bayesian Methods for Clinical Trials
1. Basics of Bayesian Methods for RCTs
2. Predictive Distributions and Sample Size Determination
3. Computational Methods using Monte Carlo Methods (see the R sketch after this outline)
Part II - Primer on Bayesian Software
1. JAGS through R
2. SAS through PROC MCMC and PROC BGLIMM
Part III – Hierarchical Models in Clinical Practice
1. Linear and generalized linear models
2. Multi-level models
3. Penalized regression models with data irregularities (e.g., missing and censored values)
4. Bayesian machine learning methods accounting for estimation uncertainty
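As a flavor of Part I, here is a minimal sketch, assuming a beta-binomial model for a two-arm trial with illustrative priors and data (not drawn from the course materials):

set.seed(42)
n_mc <- 100000

# Observed responses: 18/40 on treatment, 10/40 on control; Beta(1, 1) priors
post_trt <- rbeta(n_mc, 1 + 18, 1 + 22)
post_ctl <- rbeta(n_mc, 1 + 10, 1 + 30)

# Monte Carlo estimate of the posterior probability that treatment beats control
mean(post_trt > post_ctl)

The same posterior draws can feed predictive distributions for sample size determination, the topic of Part I.2.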
Course Outcomes:
i. Attendees will become familiar with the essential concepts and computational methods of Bayesian analytics.
ii. Attendees will learn how to deal with practical data irregularity issues that arise especially in multilevel modeling.
iii. Attendees will become comfortable with using software to conduct Bayesian inference using machine learning models.
iv. Attendees will have access to worked-out examples presented in R and SAS.
v. Attendees will likely have limited access to an early edition of a book draft that the instructors are currently co-authoring.
Instructors’ background:
Professor Sujit Kumar Ghosh is currently a Full Professor in the Department of Statistics at North Carolina State University (NCSU). He has over 25 years of experience in conducting, applying, evaluating, and documenting statistical analyses of biomedical and environmental data. Prof. Ghosh is actively involved in teaching, supervising, and mentoring graduate students at the doctoral and master's levels. He has supervised over 35 doctoral students and recently published a popular book titled "Bayesian Statistical Methods," co-authored with Brian Reich, which is being used as a textbook at several universities. He has delivered multiple webinars sponsored by the ASA Biopharmaceutical Section and has also presented a webinar for the DIA's KOL lecture series organized by the BSWG. He has served as an invited discussant for several sessions at previous ASA Biopharmaceutical Regulatory-Industry workshops. He has published over 115 refereed articles. He is an elected fellow of the ASA and has also served as the Deputy Director at SAMSI (NC).
Dr. Amy Shi is a senior research statistician developer in the Advanced Statistical Methods Department at SAS Institute Inc. Her main responsibility is developing and enhancing the Bayesian capabilities of SAS software, with a focus on generalized linear mixed models, discrete choice models, and multilevel hierarchical settings. She is the developer of the BGLIMM procedure. She has a PhD in biostatistics from the University of North Carolina at Chapel Hill. Amy has given numerous tutorials and taught short courses at various statistical conferences, such as JSM, CSP, and Bayes-Pharma.
In recent years, the rapid increase in the volume, variety, and accessibility of digitized RWD and RWE has presented unprecedented opportunities for their use throughout the drug product lifecycle. We believe RWD and RWE will lead statistical innovation in the healthcare industry and regulatory decision-making in the coming decades. In clinical development, RWD and RWE have the potential to improve the planning and execution of clinical trials and to create a virtual control arm for a single-arm trial for accelerated approval and label expansion. From the product lifecycle perspective, effective insights gleaned from RWE inform the relative benefits of drugs, comparative effectiveness, price optimization, and new indications. Aiming to present a wide range of RWE applications throughout the lifecycle of drug product development, we have written a book, "Real-World Evidence in Drug Development and Evaluation," published in February 2021. We are excited to share this comprehensive RWD/RWE knowledge with researchers at the 2021 Regulatory-Industry Statistics Workshop.
The goal of the short course is to serve as a resource for practitioners who wish to apply these modern statistical and analytic methods in drug research and development. The course will cover essential statistical methodology for causal inference and recent practical case studies that adopted RWD and RWE in clinical development and evaluation.
Course outline:
Part I: Introduction of RWD and RWE in drug development and evaluation
• Background and introduction
• RWD data sources
• Agency guidance
• Challenges & opportunities
Part II: Analytical tools for RWD and RWE
• Causal inference basics
• Propensity score adjustment for RWD (see the R sketch after this outline)
• Sensitivity analysis for unmeasured confounding
Part III: Utilizing RWD and RWE in clinical development
• Utilize synthetic control to support single-arm studies
• Utilize natural history studies for rare disease development
• Utilize RWD/historical data for label expansion
• Practical considerations for using RWD/historical data in clinical development
Part IV: Utilizing RWD and RWE in post-marketing drug development
• Patient-reported outcomes and analysis
• Benefit-risk assessment methods
• Real-world evidence in health care decision making
• Statistical methods used for post-marketing surveillance
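As a glimpse of the propensity score adjustment covered in Part II, here is a minimal R sketch using inverse probability of treatment weighting on simulated data (all names and parameters are illustrative):

set.seed(9)
n <- 1000
x <- rnorm(n)                                   # measured confounder
trt <- rbinom(n, 1, plogis(0.8 * x))            # nonrandomized treatment
y <- 2 + 1 * trt + 1.5 * x + rnorm(n)           # true treatment effect = 1

ps <- fitted(glm(trt ~ x, family = binomial))   # estimated propensity scores
w <- ifelse(trt == 1, 1 / ps, 1 / (1 - ps))     # ATE weights

coef(lm(y ~ trt))["trt"]                # unadjusted: biased by confounding
coef(lm(y ~ trt, weights = w))["trt"]   # IPW-adjusted: close to the true effect 1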
Recommended Textbook: Yang, H. and Yu, B. (2021). Real-World Evidence in Drug Development and Evaluation. Chapman and Hall/CRC, Boca Raton, FL. https://www.taylorfrancis.com/books/edit/10.1201/9780429398674/real-world-evidence-drug-development-evaluation-harry-yang-binbing-yu
Instructor background: Dr. Binbing Yu is a Director in the Oncology Statistical Innovation group at AstraZeneca. He serves as a statistical expert across the whole spectrum of the drug R&D process, including early-clinical and clinical research, design, operation and manufacturing, clinical pharmacology, oncology medical affairs, and post-marketing surveillance. He obtained his PhD in Statistics from the George Washington University. His primary research interests are clinical trial design and analysis, cancer epidemiology, causal inference in observational studies, PK/PD modeling, and Bayesian analysis. He was previously the Biometry Section Chief at the National Institute on Aging. He has nearly 80 publications in scientific and statistical journals and has published a book on statistical methods for immunogenicity.
Dr. Bo Lu is a Professor of Biostatistics in the College of Public Health at the Ohio State University. He is an elected fellow of the American Statistical Association. He obtained his PhD in Statistics from the University of Pennsylvania. His primary research interests cover causal inference with observational data; matching/weighting adjustment for complex designs, including multiple treatment arms, time-varying treatment initiation, and complex survey weights; Bayesian nonparametric modeling for heterogeneous causal effects; and statistical methods for survey sampling. He has been PI on both federal and local government-funded research grants on causal inference methodology. He also has extensive collaborations with the pharmaceutical industry on utilizing causal inference methods to leverage RWD in drug discovery.
Dr. Qing Li is a senior manager in the statistical methodology group under the statistics and quantitative science department at Takeda Pharmaceutical Company. His responsibilities include statistical methodology development and consultation for real-world evidence and advanced adaptive designs, from proof-of-concept to late-phase studies, across multiple therapeutic areas including oncology, gastroenterology, rare disease, and vaccines. His research interests include propensity score methods, RWE, adaptive designs, immuno-oncology designs, and surrogate endpoints. He obtained his MS and PhD degrees in biostatistics from the University of Iowa.
Classical clinical trials are conducted in a sequence defined by phases, for example, phases 1, 2, and 3, with the last phase aiming to confirm the treatment effect and support regulatory approval. The high attrition rate and increasing cost of drug development demand innovations in clinical trial design. Adaptive seamless designs (ASDs) combine two phases, i.e., phases 2 and 3, with an opportunity to modify the design at the interim analysis and potentially increase the probability of success of the study.
An ASD can take a variety of forms. The most commonly used ASDs include treatment/dose selection and population enrichment, with the flexibility of sample size re-estimation and early stopping. Treatment/dose selection methods have been used in many therapeutic areas, while population enrichment designs are mostly employed in oncology studies. Significant progress and advancement of statistical methodology around ASDs have been achieved in the statistical community. However, there is still confusion, and challenges remain, when it comes to designing a specific trial. For example, when there is great uncertainty about the true effect size, how does the initial assumption of treatment effect affect the overall study power? How do other factors, such as the interim analysis schedule and choice of boundaries, affect the operating characteristics? Should all patients from the two stages be used for the final statistical inference? What is the regulatory agency's position on this type of design? Is there a best practice for some scenarios, i.e., when to use what?
In this short course, concepts and theoretical background will first be introduced to facilitate a better understanding of the merits of ASDs. Considerable attention will be devoted to solving practical issues in implementing ASDs, including treatment selection and population enrichment. We propose a structured, comprehensive strategy that can help achieve adequate power and a cost-effective sample size in trial design when there is great uncertainty about the true effect size. Case studies in oncology and other therapeutic areas will be discussed. Demonstrations will be performed using the R/Shiny-based software DACT (Design and Analysis for Clinical Trials). Attendees will be granted access to DACT for training purposes.
In statistical methodology research and in statistical practice across wide-ranging application areas from medicine to finance, statistical simulations are indispensable for showing the operating characteristics of a given design against alternatives and for comparing models that aim to explain the variability and overall profile of a given outcome variable of interest. Depending on the response variables of interest in such simulations, whether univariate or multivariate, iterative or non-iterative, simulation designs must be considered very carefully to produce generalizable, repeatable, and reproducible conclusions on any given platform, and this task is far more difficult, and more under-recognized, than is commonly imagined. In this short course, we will introduce simple to more complex simulation designs and the importance of simulation size; we will describe potential pitfalls that may not be easily recognizable and suggest what data and metadata should be captured for a clear description of the simulation results. We plan to carry out examples in both SAS and R to show the similarities and differences between the two platforms. The course will be provided in four modules:
Module-1: Simulating data for univariate random variables following the Gaussian distribution, Student's t-distribution, Gamma distribution and its special cases, Beta distribution, Binomial distribution, Poisson distribution, etc.
Module-2: Simulation designs for one-sample hypothesis testing for continuous, binary, and survival endpoints. In this module, we will also illustrate iterative simulation designs such as Phase-I dose escalation designs and Simon's two-stage designs.
Module-3: Simulation designs for two- or more-sample hypothesis testing for continuous, binary, and survival endpoints. One of the main focuses here will be empirical power calculations for randomized clinical trials (see the R sketch following this outline).
Module-4: Simulation designs for multivariate random variables and designs that require iterative processing.
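For instance, the empirical power calculation highlighted in Module-3 might look like the following minimal R sketch (effect size, sample size, and simulation size are all illustrative):

set.seed(2021)
n_sim <- 10000; n_per_arm <- 50; delta <- 0.5    # assumed standardized effect

reject <- replicate(n_sim, {
  x <- rnorm(n_per_arm)                          # control arm
  y <- rnorm(n_per_arm, mean = delta)            # treatment arm
  t.test(x, y)$p.value < 0.05                    # two-sided test at alpha = 0.05
})
mean(reject)   # empirical power; cross-check against power.t.test(n = 50, delta = 0.5)

Note how the simulation size n_sim governs the Monte Carlo error of the power estimate, one of the points the course emphasizes.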
Digital health technologies (DHTs) are technologies that use computing platforms, digital connectivity, software, and/or sensors for healthcare and related uses. These technologies span a wide range of applications from general wellness to medical devices. These products are also used as diagnostics, therapeutics, or adjuncts to medical products (devices, drugs, and biologics). They may also be used to develop or study medical products. DHTs include use of electronic technologies such as artificial intelligence (AI), software as a medical device, and mobile medical applications.
DHTs are moving healthcare from the clinic to patients by improving understanding of patient behavior and physiology outside traditional clinical settings and enabling early therapeutic interventions. DHTs, such as sensors and other telehealth tools, provide important opportunities in clinical trials to gather information directly from patients at home (decentralized clinical trials), and to gather frequent or continuous medical data from patients as they go about their lives. DHTs can use advanced algorithms, susceptible to potential errors, which may lead to malfunction or misinterpretation of health data. Therefore, regulatory science tools and methods, such as simulations to test algorithm performance, need to be developed to protect data integrity and improve overall reliability of DHTs.
In this course, the first speaker, Susan Murphy, will discuss micro-randomized trials and reinforcement learning for constructing personalized mobile DHTs for behavioral modification, with application to individuals at risk of adverse cardiovascular events. Chad Gwaltney will then discuss case studies where clinical trial endpoints are being developed based on data from continuous remote monitoring of symptoms and physical activity. The third speaker, Andrew Potter, will discuss statistical perspectives on DHTs from FDA/CDER. In the fourth presentation, Berkman Sahiner and Matthew Diamond will present an overview of the CDRH Digital Health Center of Excellence approach to DHTs, with a focus on artificial intelligence/machine learning-based radiological devices.
Under the 21st Century Cures Act, the FDA is directed to develop a program to evaluate how real-world evidence (RWE) can potentially be used to support approval of new indications for approved drugs or to support/satisfy post-approval study requirements. This brings new opportunities to utilize statistical innovations and advances that are critical for assessing and addressing data quality as well as establishing causal inference based on real-world data (RWD) for regulatory decision-making. However, designing a valid RWD-based study and generating high-quality RWE face numerous challenges such as confounding, treatment switching, and missing information including informative censoring. To date, propensity score methods have been predominantly used to design and analyze RWE studies in a regulatory setting, with a focus on confounding control. Although propensity score methods can be a useful tool for addressing the aforementioned statistical challenges, recent innovations such as targeted maximum likelihood estimation (TMLE), which can utilize an ensemble of machine learning algorithms, can be a much more powerful and efficient tool for addressing those challenges. TMLE also has a doubly robust property that protects against misspecification of either the propensity score model or the outcome regression model (but not both).
This course is a continuation of a previous short course entitled "SC4 - Causal Inference for Real-World Evidence: Propensity Score Methods and Case Study", which was provided at the 2020 ASA Biopharmaceutical Section Statistics Workshop. The previous short course covered the causal inference framework and propensity score-based methods (matching and inverse probability weighting), and discussed how to design and analyze an RWD-based study using these methods under a question of interest coherent with the ICH E9 regulatory definition of an estimand. As a continuation, this proposed short course will consist of two parts. In Part 1, Dr. Hana Lee from FDA will provide a general overview of causal inference and various methods for confounding control, including propensity score methods, outcome regression-based methods (g-computation), and doubly robust methods. Dr. Lee will explain how recent innovations in statistics, such as various machine learning algorithms, can be used to draw causal inference, with a brief introduction to TMLE. Then Dr. Susan Gruber, an expert in TMLE and Founder and Principal of Putnam Data Sciences LLC, will cover topics in targeted learning, machine learning, and TMLE. In addition, Dr. Gruber will discuss statistical considerations on the use of TMLE for an RWE study (e.g., how to select candidate machine learners and associated tuning parameters) and how to write a statistical analysis plan accordingly. The course will then wrap up with a case-study example that demonstrates a practical use of TMLE for drug safety and efficacy evaluation. Mock R code will be provided during the case-study illustration.
During the training, participants will learn (1) the foundations of the causal inference framework and its necessary assumptions, (2) various classes of causal methods for confounding control, (3) the theory and application of TMLE, including best practices for applying TMLE to an RWE question of interest, and (4) how to write a statistical analysis plan following the targeted learning framework and using TMLE.
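For orientation, here is a minimal R sketch of a TMLE analysis, assuming the tmle package and simulated data (this is not the course's case study; the mock R code provided there will be more elaborate):

library(tmle)

set.seed(7)
n <- 500
W <- data.frame(W1 = rnorm(n), W2 = rbinom(n, 1, 0.5))   # measured confounders
A <- rbinom(n, 1, plogis(0.4 * W$W1 + 0.5 * W$W2))       # treatment assignment
Y <- rbinom(n, 1, plogis(-1 + A + 0.6 * W$W1))           # binary outcome

# Doubly robust TMLE estimate of the average treatment effect
fit <- tmle(Y = Y, A = A, W = W, family = "binomial")
fit$estimates$ATE    # point estimate, variance, CI, and p-value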
Instructors Background: 1. Martin Ho is the Head of Biostatistics at Google. He provides biostatistical leadership across the total product life cycles of clinical and consumer products at Google, from ideation, research and development, and early-phase and confirmatory clinical studies for regulatory approvals, to post-market health technology assessments. Prior to Google, Martin was Associate Director at the Office of Biostatistics and Epidemiology at the Center for Biologics Evaluation and Research and Associate Director at the Office of Clinical Evidence and Analysis at the Center for Devices and Radiological Health, U.S. Food and Drug Administration. Before joining the FDA, he worked as a biostatistician at various contract research organizations for 10 years. Martin earned a master's degree in statistics from the University of Wisconsin, Madison.
2. Susan Gruber: Susan Gruber, PhD, MPH, MS, is Founder and Principal of Putnam Data Sciences, LLC, a statistical and data analytics consulting firm. Dr. Gruber received her master's in public health and a doctoral degree in Biostatistics from the University of California at Berkeley, and a master's degree in Computer Science from the University of California at San Diego. Former positions include Senior Director of the IMEDS Methods Research Program for the Reagan-Udall Foundation for the FDA, and Director of the Biostatistics Center in the Department of Population Medicine at Harvard Pilgrim Health Care Research Institute. Dr. Gruber's work focuses on the development and application of data-adaptive methodologies for improving the quality of evidence generated by observational and randomized health care studies. She is a leading expert in Targeted Learning, an efficient double-robust approach to obtaining unbiased estimates of causal parameters. In addition to authoring foundational papers, she wrote the first publicly available software for applying TMLE in point treatment settings and for estimating the marginal mean outcome of a multiple time-point intervention. She is also an expert in Super Learning, an ensemble machine learning approach to predictive modeling. Dr. Gruber's previous short courses and webinars include: (1) An Introduction to Targeted Maximum Likelihood Estimation of Causal Effects. Putnam Data Sciences Targeted Learning Webinar Series. March, 2020. (2) Targeted Learning for Data Adaptive Causal Inference in Observational and Randomized Studies. Third Seattle Symposium on Healthcare Data Analytics, Seattle, Washington, October, 2018. (3) Beyond Logistic Regression: Machine Learning for Propensity Score Estimation. FDA CDER Office of Biostatistics Division of Biometrics VII, Silver Spring, Maryland, January, 2018. (4) An Introduction to Super Learning for Prediction. Takeda Pharmaceuticals, Inc., Boston, Massachusetts, June, 2017.
Abstract: As the paradigm of drug development shifts to personalized medicine and targeted therapies, the pool of eligible clinical trial patients becomes increasingly small, and there is a need for rapid learning and confirmation of clinically meaningful treatment effects. Master protocols, including umbrella, basket, and platform trials, promote innovation in clinical trials and aim at improving efficiency, avoiding duplication and competition, and accelerating the drug development process. Though master protocols used to be sponsored primarily by nonprofit organizations, academic institutes, and government agencies, mostly in the oncology area, in recent years there has been a growing trend of the pharmaceutical industry conducting clinical trials under master protocols in oncology and other therapeutic areas. Regulatory agencies across the globe have issued guidance on master protocols. In 2018, the ASA Biopharmaceutical Section Oncology Scientific Working Group (SWG) was chartered to explore innovative statistics in oncology drug development, and a sub-team on master protocols in oncology was formed. In this short course, instructors from the Oncology SWG master protocol sub-team will provide an overview of master protocols, their regulatory landscape, statistical methodologies, special statistical considerations, and challenges and opportunities, both statistical and operational. The last part of this short course will include a case study of a pediatric platform trial, NCI-COG Pediatric MATCH, with detailed illustrations of how to implement the considerations discussed in the earlier part of the short course. See below for the structure of the short course:
Part 1: Overview of master protocol
a. Overview of master protocol: Definitions, Examples
b. Review of Regulatory Landscape: US and Rest-of-World regulatory guidance
c. Practical Considerations
Part 2: Novel statistical methodologies used in master protocol trials, statistical and operational consideration
Part 3: Case Study of Pediatric Platform Trial: NCI-COG Pediatric MATCH
Instructors’ background: The confirmed instructors gave a short course under the same title at the 2020 Deming Conference. Nicole and Cindy have been the co-leads of the Master Protocol sub-team under the ASA Biopharmaceutical Section Oncology Scientific Working Group (SWG) since 2018. Jingjing, Nicole, and Cindy are the current co-leads of the Joint DIA-ASA Master Protocol Working Group. The ASA WG manuscript “Practical guide and recommendations for master protocol framework: basket, umbrella and platform trials” is under review.
Relevant presentations and short course by the instructors:
1. Master Protocol and Its Application, Session D Tutorial, Deming Conference, Dec. 2020
2. Type-I Error Considerations in Master Protocols with Common Control in Oncology Clinical Trials, FDA Oncology Center of Excellence & ASA Biopharmaceutical Section’s Virtual Discussion, Oct. 2020
3. Master Protocol in Pediatric Cancer Trials, ASA Biopharmaceutical Regulatory/Industry Statistical Workshop 2020, Sep. 2020
4. Practical Considerations for Master Protocol Framework - Basket, Umbrella and Platform Trials, JSM, Aug. 2020
Target audience:
1. Statisticians working in the pharmaceutical industry
2. Epidemiologists/statisticians working in the field of epidemiology
3. Statisticians working on applications from clinical medicine
4. Researchers who are interested in the applications and methods of modern meta-analysis
Prerequisites for participants: None.
Computer and software requirement: Some familiarity with R programming.
Description: Comparative effectiveness research aims to inform health care decisions concerning the benefits and risks of different prevention strategies, diagnostic instruments, and treatment options. A meta-analysis is a statistical method that combines the results of multiple independent studies to improve statistical power and to reduce certain biases relative to individual studies. Meta-analysis also has the capacity to contrast results from different studies and to identify patterns and sources of disagreement among those results. The increasing number of prevention strategies, assessment instruments, and treatment options for a given disease condition has generated a need to simultaneously compare multiple options in clinical practice using rigorous multivariate meta-analysis methods. This short course, co-taught by Drs. Chu and Chen, who have collaborated on this topic for more than a decade, will focus on the most recent developments in multivariate and network meta-analysis methods. The course will offer a comprehensive overview of new approaches, modeling, and applications of multivariate and network meta-analysis. Specifically, the instructors will discuss contrast-based and arm-based network meta-analysis methods for multiple treatment comparisons; network meta-analysis methods for multiple diagnostic tests; multivariate methods to visualize, detect, and correct for publication bias; and multivariate meta-analysis methods for estimating the complier average causal effect in randomized clinical trials with noncompliance.
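As a small appetizer in R, here is a standard univariate random-effects meta-analysis, assuming the metafor package and its bundled BCG vaccine data (the course itself goes well beyond this, to multivariate and network models):

library(metafor)

# Per-study log relative risks and sampling variances from 2x2 counts
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

# Random-effects pooling via REML; summary reports tau^2 and I^2 heterogeneity
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)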
This proposed course will be a sequel to my BIOP2019 short course on composite endpoints (with 65+ attendees). This follow-up is intended to keep the applied audience up to date with the rapid and exciting methodological developments in the field and to provide more hands-on instruction on their real-life implementation.
Composite outcomes combining death and (possibly recurrent) nonfatal events, such as hospitalization and tumor progression, have long been the endpoint of choice for the primary analysis of many clinical trials. The past couple of years, in particular, have seen tremendous progress in the statistical methodology for such endpoints. Compared with the existing approaches introduced in the 2019 prequel, the new methods are noteworthy for: 1. clear and interpretable definitions of estimands in compliance with the ICH E9(R1) guideline; 2. fuller utilization of recurrent nonfatal events while prioritizing death; 3. versatile modeling strategies for general outcome types. In addition to the methodological advancements, several user-friendly R packages have also emerged to assist in their practical implementation. Given such remarkable developments, this follow-up course is highly necessary and will benefit statisticians and clinicians alike in their research and practice.
Syllabus:
1. Introduction. 1.1 Definition and examples; 1.2 ICH E9(R1) and its implications; 1.3 Traditional methods.
2. Nonparametric analysis. 2.1 Estimands and their inference (curtailed win ratio, restricted proportion/mean time in favor of treatment); 2.2 Sample size calculation; 2.3 Real examples using R-packages WR and rmt (a simplified win-ratio computation is sketched after this syllabus).
3. Semiparametric regression analysis. 3.1 Overview of regression models; 3.2 Model specification, inference, and sensitivity analysis (win-ratio regression, generalized semiparametric proportional odds model); 3.3 Real examples using R-packages WR and GSPO.
4. Prospects for future work (interim analysis, high-dimensional baseline characteristics, etc.).
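The win-ratio idea in section 2.1 can be conveyed in a few lines of base R: compare every treatment-control pair, deciding first on death and only then on hospitalization. The sketch below uses simulated data and a common fixed follow-up, and omits the inference that the WR and rmt packages provide:

set.seed(3)
n <- 100
death <- c(rexp(n, 0.10), rexp(n, 0.07))   # death times: control, then treatment
hosp  <- c(rexp(n, 0.30), rexp(n, 0.20))   # first-hospitalization times
fu <- 5                                     # common follow-up (years)

wins <- losses <- 0
for (i in (n + 1):(2 * n)) for (j in 1:n) {      # i = treatment, j = control
  # Prioritize death: compare death times first, then hospitalization
  if      (death[j] < min(death[i], fu)) wins   <- wins + 1
  else if (death[i] < min(death[j], fu)) losses <- losses + 1
  else if (hosp[j]  < min(hosp[i],  fu)) wins   <- wins + 1
  else if (hosp[i]  < min(hosp[j],  fu)) losses <- losses + 1
}
wins / losses   # win ratio > 1 favors treatment; ties are discarded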
Statistical innovations are instrumental in constructing a roadmap for deriving Real-World Evidence (RWE) from various sources of Real-World Data (RWD) in order to establish causal interpretations of the evidence for regulatory considerations. To seize this opportunity, a group of regulators, industry professionals, and academic researchers formed the ASA BIOP RWE Scientific Working Group in April 2018 with the purpose of supporting the development of RWE, informing best practices to address statistical considerations, and discussing a path forward for the use of RWD/RWE in regulatory decision-making. As the Working Group has entered Phase 2, we propose a reporting session to share some interim results on three key statistical considerations for RWE: (1) Estimand - what are we going to estimate? As defined in ICH E9(R1), an estimand is a precise description of the treatment effect reflecting the clinical question posed by the trial objective. The first talk will discuss how to apply this first key step in a real-world setting. (2) Data - what are we using to estimate the estimand? As stated in FDA's RWE Framework, the strength of RWE depends on the reliability and relevance of the underlying data. Data should be selected based on their suitability to address specific regulatory questions. The second talk will discuss this second key step, statistical considerations regarding the use of RWD. (3) Inference - how are we estimating the estimand based on the data? RWE studies create a challenge for establishing causal inference that must be addressed to support effectiveness and safety decisions. Appropriate application of the causal inference roadmap is critical. The third talk will discuss this third step and a path forward for causal inference using RWD. The last key step is the interpretation of findings.
The drug development process remains lengthy and costly. Improving trial designs and methods provides great opportunities but also comes with challenges. Regulatory agencies continue to encourage sponsors to apply innovative trial designs and methods by issuing various new guidelines. We are in a new era of modernizing drug development. In this session, industry, regulatory, and academic speakers will share their experiences and thoughts on applying innovative designs and methods to enhance clinical trial flexibility and efficiency. More specifically, in certain disease areas there may be a large placebo effect. A two-stage sequential parallel comparison design can be utilized to address this issue. With this design, treatment effects can be assessed for different patient subgroups defined by the outcome of stage 1 and then combined for various purposes. Methods for handling intercurrent events and missing data for this specific design within the estimand framework stipulated in ICH E9(R1) will be covered in the presentations. In addition, enhanced doubly robust causal estimands will be discussed for trials with incomplete data and imperfect/no randomization. Simulation studies, performed to evaluate the finite-sample performance, demonstrate clear superiority of the new method over several commonly used doubly robust procedures even when the propensity score and outcome models are misspecified. Examples will be used to illustrate the applications of the novel designs and methods. Speakers:
• Hui Quan (confirmed), Sanofi: Novel design for dealing with large placebo effect and the corresponding method for missing data handling, hui.quan@sanofi.com
• James Hung (confirmed), FDA: Sequential Parallel Comparison Designs in CNS Clinical Trials, hsienming.hung@fda.hhs.gov
• Ming Tan (confirmed), Georgetown University: Enhanced Doubly Robust Causal Estimands in Clinical Trials with Incomplete Data and Imperfect/No Randomization, Ming.Tan@georgetown.edu
The COVID-19 pandemic has had effects on multiple operational aspects of clinical trials. Three notable areas of impact are an increase in pandemic-related missing data, disruption to patient recruitment, and acceleration of decentralized trials with alternative methods of conducting patient follow-up. In the area of missing data, operational metrics that are leading indicators of potential issues with the primary efficacy endpoints have been helpful. Work will be presented on sophisticated metrics that can flag problems even for complex endpoints such as progression-free survival (PFS). These measures can be used as QTLs and/or in centralized monitoring at a study level, and can be summarized to quantify risk at a portfolio level. In the area of patient recruitment, recruitment in complex trials is often facilitated by the use of stochastic models. The COVID-19 pandemic brought an additional layer of complexity to recruitment modelling: as countries went into shutdown, sponsors faced the dilemma of re-planning study delivery and re-balancing their portfolios "on the fly". That translated into the need to adapt the recruitment model to account for the effect of COVID-19. Work will be presented on an approach that blends Poisson-Gamma models, real-time recruitment data updates, and epidemiological modelling of COVID-19 spread. In the area of decentralized trials, the use of alternative mechanisms to conduct patient follow-up, such as telemedicine and remote assessments, in addition to core clinic visits has been accelerated by the COVID-19 pandemic. Clinical trial sponsors had to react quickly to incorporate different methods for managing the conduct of ongoing clinical trials while assuring the robust collection of safety and efficacy data. Lessons learned will be shared on how the impact of changes in study conduct was managed, along with the consequences for the planned analyses.
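In the recruitment area, the Poisson-Gamma backbone of such models is simple to sketch in R (parameters below are purely illustrative; the production models additionally blend in real-time data and COVID-19 epidemiological inputs):

set.seed(5)
n_sites <- 40; months <- 12
rate <- rgamma(n_sites, shape = 2, rate = 4)          # site rates, patients/month
enrol <- sapply(rate, function(r) rpois(months, r))   # months x sites counts
cumsum(rowSums(enrol))                                # projected cumulative accrual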
Platform trials combine the advantages of multi-arm trials with the flexibility of classical drug development based on independent studies. Based on an overarching master protocol, treatments can enter and leave the platform trial over time. An important factor in the substantial statistical efficiency gain of platform trials is the potential to share data from control arms. In this session we will discuss statistical issues arising from the sharing of controls, focusing especially on challenges associated with the incorporation of non-concurrent controls in treatment-control comparisons. While non-concurrent controls can improve statistical efficiency, they may introduce bias through time trends if these are not appropriately accounted for. Other aspects include the impact of correlated test statistics caused by shared controls; potential information leakage of control group effect estimates, which may become known when results from stopped treatment arms are reported; and the potential to change the control group, for example if one of the experimental treatments becomes the new standard of care.
In this session, we discuss Bayesian and frequentist methods to maximize the information gain from shared controls, while maintaining the robustness of clinical trials with respect to type I error rates and bias.
FDA often relies on Advisory Committees to provide independent advice on whether a drug in development is safe and effective. The FDA advisory committee meeting (ACM) is a critical part of the NDA/BLA process when the committee is called for review. In this session, we will discuss the objective and process of the ACM and share the experiences of statisticians in different roles from past ACMs.
Speakers from the FDA, industry, advisory committees, and consulting companies may be invited to share their insights: 1) Industry perspective: How does a sponsor prepare for the meeting, and what role does the sponsor's statistical function play? 2) FDA perspective: What is the role of an FDA statistician in an ACM, and how does he/she prepare for the meeting? 3) Consulting company: Share experience and advice on how to make the meetings more effective and productive. 4) Statistical advisor on the AC: What is the unique role of the statistical advisor(s) on the AC? What material and information must be ready before he/she can vote?
Patients with cancer or other recurrent diseases may undergo a series of therapies in the course of their disease. Time-to-event analyses comparing treatments can be complicated when subsequent therapies, such as salvage treatments, are given during follow-up. For example, acute lymphoblastic leukemia patients may receive hematopoietic stem cell transplantation (HSCT) as a subsequent therapy, or a kidney transplant may be given to patients being treated with dialysis. In clinical trials, subsequent therapy may be considered an intercurrent event that complicates the interpretation of the experimental therapy's effect. Additionally, the use of subsequent therapy may be nonrandom. For example, certain subsequent therapies may be administered only to patients who had beneficial responses to the experimental treatment, resulting in an imbalance in the use of subsequent therapy between study arms. An ITT approach that ignores subsequent treatments may answer one scientific question of interest, but may not be relevant in certain clinical contexts where the primary interest is in quantifying a treatment effect without the contribution of a subsequent therapy. Clear specification of the scientific question of interest, the study design, and an analysis with a causal interpretation are essential for a successful clinical trial.
In this session, speakers will present the perspectives of FDA, academic, and industry experts on novel methods to analyze the impact of subsequent therapy on time-to-event endpoints.
The ICH E9 (R1) Addendum on Estimands and Sensitivity Analysis in Clinical Trials (Step 4) was finalized in November 2019 and subsequently implemented by many regulatory agencies. Where are we now? How has the ICH E9(R1) Addendum improved drug/biologic development? What are the common challenges and ways to address them? How can we improve interdisciplinary communication? What open questions remain, and what future directions should we consider? These and other questions will be discussed in this session by clinical and statistical representatives from industry and the FDA. Talks will cover experience with using the estimand framework from industry and regulatory perspectives: 1) common challenges and ways to address them; 2) keys to productive interdisciplinary collaboration; 3) the impact of estimand framework use on drug/biologic development; 4) open questions and future directions. Duration: 75 minutes (all speakers and panelists confirmed). 1. Talk 1: Devan Mehrotra (Merck, ICH E9 (R1) Expert Working Group member, statistical) – 20 minutes. 2. Talk 2: Miya Okada Paterniti (FDA, Lead Physician, clinical) – 20 minutes. 3. Panel Discussion – 20 minutes: Panel Member 1: David Lebwohl (Intellia Therapeutics, clinical), Panel Member 2: John Scott (FDA, CBER, statistical), Panel Member 3: Bohdana Ratitch (Bayer, statistical), Panel Member 4: Sylva Collins (FDA, CDER, statistical). 4. Audience Questions and Answers – 10 minutes.
The increased cost and high attrition rate of drug development demand innovations in every aspect of the process. In response to these challenges, FDA launched a Complex Innovative Trial Design (CID) Pilot Meeting Program in 2018 to support the use of complex adaptive, Bayesian, and other novel clinical trial designs. In early 2020, the Experimental Cancer Medicine Centre (ECMC) network in the UK published guidelines for all stages of CID trials in the hope of reducing the time it takes to get innovative treatments to patients with cancer.
Complex innovative designs include, but are not limited to, complex adaptive designs such as multi-arm multi-stage designs and Bayesian designs. CID designs have been applied in a few therapeutic areas across all phases, with many focusing on oncology and rare diseases. CID trials present many challenges, including clinical operations, interpretation, and statistical challenges around type I error control and inference. In addition, CID designs require additional simulations and sponsor-regulator review meetings due to the complexity of the methods.
In this session, recent development of complex adaptive designs and Bayesian methods will be discussed. Experts from both industry and regulatory agencies will share their experiences regarding the design implementation, sponsor-regulatory agency interactions, best practices and lessons learned.
Artificial intelligence (AI) uses algorithms and software to combine large amounts of data, learn automatically from patterns or features in the data, and interpret underlying complex phenomena. AI, currently at the center of the medical horizon, is expected to be used on an ongoing basis to change care pathways by expediting early detection and improving patient access to needed healthcare. The regulatory issues and challenges with endpoint selection, study design, subject selection, and statistical analyses are currently under scrutiny, given the complexity and sophistication with which the data are analyzed compared to traditional statistical methods. In this session, we will present and discuss statistical and regulatory challenges for diagnostic medical devices based on artificial intelligence.
In the past 40 years, the field of statistics has witnessed tremendous methodological innovations that have contributed immensely to the development and regulatory evaluation of medical products, including drugs, biologics and medical devices. Examples of such methodological innovations include those in the areas of adaptive design, covariate adjustment, multiple endpoints, missing data, estimand, subgroup analysis, causal inference, Bayesian techniques, artificial intelligence, machine learning, biomarkers, diagnostic test evaluation, to name but a few. However, as new technologies and new data sources emerge, more methodological innovations are in demand, for example for generating real-world evidence from suitable real-world data, utilizing master protocols, and developing precision medicine. In this session, an experienced group of industry and regulatory statisticians will discuss some methodological innovations achieved in the field of statistics in the past 40 years and how these innovations help improve the development and regulatory evaluation of medical products, as well as look ahead to the challenges and opportunities we face. Drs. Greg Campbell of GCStat Consulting, LLC, Paul Gallo of Novartis, John Scott of FDA/CBER and Lilly Yue of FDA/CDRH, have been confirmed to present from industry and FDA perspectives.
The Journal of Biopharmaceutical Statistics (JBS) is preparing a special issue on real-world evidence (RWE). This special issue features innovative statistical methodologies and case studies that incorporate real-world data (RWD) or RWE into medical product development and regulatory decision-making, especially in areas of unmet medical need. For instance, in clinical trials for the treatment of rare diseases, or of rare, genetically targeted subsets of common diseases that need efficacious treatment, randomization may not be feasible due to subject availability and ethical concerns. Using external controls by borrowing information from RWD sources or historical trials can provide relevant evidence (i.e., RWE) to strengthen the evidence from single-arm trials. The use of novel hybrid approaches to forming a control arm in oncology trials will be another highlight of this issue. Further, this issue covers innovative adaptive designs that utilize historical information from different populations in planning pediatric trials. With the advances of technology and increasingly available RWD/RWE, design strategies and data analyses in drug development will become substantially more efficient. This session will focus on recently published special-issue articles with innovative thinking from the perspectives of the pharmaceutical industry and FDA. The objective is to highlight new and diverse uses of RWD and RWE and to stimulate discussion and/or further research and innovation. Speaker 1 from industry. Tentative topic: assessing the safety and effectiveness of a malaria vaccine using real-world data. Speaker 2 from industry or government. Tentative topic: a case study of glaucoma device approval with RWE --- China regulatory experience. Speaker 3 from government. Tentative topic: leveraging multiple real-world data sources in single-arm clinical studies using a propensity score-integrated composite likelihood approach.
Targeted therapies naturally have subgroups. This session discusses current oversights in stratified analyses comparing treatments in randomized controlled trials (RCTs):
1. Using efficacy measures such as the odds ratio (OR) and hazard ratio (HR) can make a prognostic biomarker appear predictive (e.g., the OAK trial results in Gandara et al. 2018), targeting the wrong patients, because the inference is affected by a covert factor even with ignorable treatment assignment in an RCT. As shown analytically and with real immunotherapy patient-level data, the OR and HR cannot meet the causal estimand requirement of ICH E9(R1) (a numeric illustration follows this list).
2. Subgroup Mixable Estimation (SME) gives causal inference in an RCT by accounting for the prognostic effect, the covert factor hiding in plain sight, when mixing the relative response (RR) and the ratio of median survival times (RoM).
3. Current computer packages compound the mistake by mixing on the logarithmic scale, creating an artificial estimand for the whole population that changes depending on how the population is divided into subgroups.
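A back-of-the-envelope R calculation illustrates point 1 (the numbers are invented): even with a constant within-subgroup OR, the marginal OR drifts because of the prognostic factor.

p_ctl <- c(bm_pos = 0.50, bm_neg = 0.10)   # control response rates by biomarker
or_subgroup <- 3                            # identical OR in both subgroups
odds <- function(p) p / (1 - p)
p_trt <- or_subgroup * odds(p_ctl) / (1 + or_subgroup * odds(p_ctl))

# Equal-sized subgroups, 1:1 randomization: mix on the probability scale
odds(mean(p_trt)) / odds(mean(p_ctl))   # about 2.33, not 3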
In this session, different perspectives from FDA and industry experts will be shared on these causal estimands.
An unprecedented number of new cancer targets are in development, and most are being developed in combination therapies. Back in 2013, FDA published a guidance for industry on the codevelopment of two or more new investigational drugs for use in combination. The guidance emphasizes new investigational drugs and advocates factorial designs. This has been challenging from the industry perspective due to its demand for larger trials and longer development times. In this session, we will show how to design smart trials that adequately demonstrate component contribution in combination therapy.
The rationale for combination therapy is to use drugs that work by different mechanisms, thereby decreasing the likelihood that resistant cancer cells will develop. Among all the potential combinations of immunotherapy, chemotherapy, and targeted therapy, what strategies should be taken for selecting which combinations to develop further, and how should combination-therapy clinical trials be designed from Phase I to Phase III? In this session, we will present a new approach to predicting efficacy endpoints (such as PFS, ORR, DoR, and tumor size) of combination therapies based on efficacy information from each component of the combination and discuss its statistical implications for Phase II and III trial design and monitoring. We will also look at leveraging historical data in the design and analysis of Phase 2 randomized controlled combination trials to improve the efficiency of drug development programs and reduce sample size without loss of power. In summary, this session not only celebrates the innovations to date, but also looks toward the future of new statistical techniques for facilitating data-driven Phase II/III designs and decision-making in oncology combination therapy development.
The session will highlight real-life experiences implementing Bayesian designs in proof-of-concept and pivotal COVID-19 vaccine trials. The speakers and discussant will reflect on the key methodological considerations, regulatory interactions, and operational challenges. A Bayesian sequential proof-of-concept (POC) trial to investigate the efficacy of Bacillus Calmette-Guérin (BCG) in providing protection against COVID-19 infections through "trained immunity" will be presented. The main design consideration is to provide a framework to rapidly establish POC for the vaccine efficacy of BCG under a constantly evolving incidence rate and in the absence of prior efficacy data. The trial design is based on taking several interim looks and calculating the predictive power with the current cohort at each look. Decisions to stop the trial for futility or to stop enrollment for efficacy are made based on the last two predictive power computations. We will also discuss the statistical aspects of the Pfizer Phase III Bayesian adaptive vaccine trial. This trial incorporates multiple interim analyses, each based on achieving a sufficiently high Bayesian posterior probability of vaccine efficacy. The trial also incorporates early stopping for futility based on Bayesian predictive probabilities. The talk will include key statistical aspects and regulatory challenges of the design. We will then give the regulatory perspective on the use of master protocols to accelerate drug development during the pandemic. The three presentations will be followed by an open discussion led by the Session Chair.
In randomized clinical trials, adjustments for baseline covariates at both the design and analysis stages are highly encouraged by regulatory agencies. At the design stage, covariate-adaptive randomization (CAR) is most widely applied as “Balance of treatment groups with respect to one or more specific prognostic covariates can enhance the credibility of the results of the trial" (EMA, 2015); at the analysis stage, covariates can be used to gain efficiency as "Incorporating prognostic baseline factors in the primary statistical analysis of clinical trial data can result in a more efficient use of data to demonstrate and quantify the effects of treatment with minimal impact on bias or the Type I error rate" (FDA, 2021).
In response to the FDA draft guidance on covariate adjustment released in May 2021, and the heated discussion around it, this session invites experts from academia, regulatory agencies, and industry to present state-of-the-art methods from a practical perspective and their connections to the FDA draft guidance. A moderated Q&A will be hosted, in which the panelists will discuss the broad impact, offer opinions for better practice, and respond to open questions from meeting attendees.
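As context for that discussion, the efficiency gain from analysis-stage covariate adjustment is easy to see in a toy R simulation (a minimal sketch with invented parameters):

set.seed(11)
n <- 300
x <- rnorm(n)                      # prognostic baseline covariate
trt <- rbinom(n, 1, 0.5)           # simple randomization
y <- 1 + 0.5 * trt + 2 * x + rnorm(n)

summary(lm(y ~ trt))$coef["trt", ]       # unadjusted: larger standard error
summary(lm(y ~ trt + x))$coef["trt", ]   # ANCOVA: same estimand, more precise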
The ICH E9 (R1) addendum outlines a clearly defined estimand, which describes the quantity to be estimated for a specific study objective in clinical trials. The estimand framework requires alignment of the study objectives, study design, study conduct, data analysis, and interpretation of results. A clearly defined estimand provides additional benefits in trial design, analysis, and interpretation. After identifying the trial objective for a clinical study, an estimand is defined through the specification of five attributes to address a specific study objective. One important attribute in the construction of the estimand is the handling of intercurrent events (IEs). Appropriate statistical methods need to be planned as primary analyses to evaluate the estimand, which may require certain assumptions for handling the IEs; sensitivity analyses are then needed to assess those assumptions.
In this session, we will review the estimand framework and present examples of estimands and advanced statistical methods for handling IEs, drawn from different aspects and stages of clinical trials. Speakers from the pharmaceutical industry and a regulatory agency who are highly engaged in this area will share their recent research and experiences.
With an increasingly competitive and dynamic oncology space, advancing successful therapies and terminating unsuccessful therapies earlier is crucial. Additionally, traditional oncology clinical trial endpoints such as objective response rate and progression-free survival may not fully characterize the clinical benefit of innovative oncology products, due to the atypical patterns of response (e.g., delayed response, durable response) and progression (e.g., pseudoprogression) seen with immune checkpoint inhibitors. These difficulties have led to enormous interest and research in exploring novel early endpoints in oncology clinical trials. With an efficacy endpoint that can be measured early but still fully characterizes the clinical benefit, the pharmaceutical industry and the regulatory agency hope to optimize compound development with go/no-go decisions, minimize the exposure of cancer patients to ineffective therapies, and expedite the development of oncology drug and biologic products. A great deal of research has been done in this area, such as (1) developing alternative metrics that incorporate the atypical response and progression dynamics, response duration, and optimal bars to fully characterize the clinical benefit of immune checkpoint inhibitors; (2) validating their surrogacy for the clinical endpoint of overall survival; and (3) evaluating the early endpoint of minimal residual disease to help the pharmaceutical industry plan its use as a biomarker in clinical trials or to support marketing approval in hematologic disease. In this session, we anticipate two speakers from the regulatory agency and industry with extensive experience in oncology development to share their insights about new statistical developments and related applications in evaluating the efficacy of cancer therapies in the areas discussed above, followed by panel discussions.
Reference intervals (RIs) represent estimated distributions of reference values from healthy populations of comparable individuals and are an integral component of laboratory diagnostic testing and clinical decision-making in human and veterinary medicine. Establishment of RIs involves intricate study design and statistical issues, including how to select individuals for a reference population and how to report results from a reference database (e.g., linear regression, quantile regression, nonparametric estimation of stratified data). There are existing recommendations for the establishment and use of RIs from the Clinical and Laboratory Standards Institute (CLSI) and the American Society for Veterinary Clinical Pathology (ASVCP), but challenges remain. This session is designed to present current practices as well as discuss clinical and statistical challenges in the establishment and use of RIs.
This session will start with an introductory presentation by the speaker from the University of Wisconsin-Madison to highlight the procedural steps for establishing de novo population-based RIs as well as bring to light alternatives applicable to specific situations. The speaker from FDA's Center for Veterinary Medicine (CVM) will then share her experiences using RIs in the review of animal drug applications. Topics include the challenges and opportunities associated with the use of RIs for the interpretation of clinical pathology data in target animal safety studies in veterinary species. Finally, the speaker from FDA's Center for Devices and Radiological Health (CDRH) will discuss study design and statistical analysis issues for medical tests with RIs and give a summary of current developments and challenges.
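As a minimal illustration of the nonparametric estimation mentioned above, the sketch below computes a central 95% reference interval from the 2.5th and 97.5th sample percentiles, with bootstrap confidence intervals around each reference limit. The simulated analyte values and the sample size of 120 (a commonly cited minimum for nonparametric RIs) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reference_interval(values, lower=0.025, upper=0.975, n_boot=2000):
    """Nonparametric central 95% reference interval with bootstrap
    confidence intervals for each reference limit."""
    x = np.sort(np.asarray(values))
    lo, hi = np.quantile(x, [lower, upper])
    # Bootstrap the two reference limits jointly.
    boot = np.array([
        np.quantile(rng.choice(x, size=x.size, replace=True), [lower, upper])
        for _ in range(n_boot)
    ])
    lo_ci = np.quantile(boot[:, 0], [0.05, 0.95])
    hi_ci = np.quantile(boot[:, 1], [0.05, 0.95])
    return (lo, hi), lo_ci, hi_ci

# Illustrative reference sample of 120 healthy individuals (skewed analyte).
sample = rng.lognormal(mean=4.0, sigma=0.25, size=120)
(ri, lo_ci, hi_ci) = reference_interval(sample)
print(f"RI: {ri[0]:.1f} to {ri[1]:.1f}")
print(f"90% CI for lower limit: [{lo_ci[0]:.1f}, {lo_ci[1]:.1f}]")
print(f"90% CI for upper limit: [{hi_ci[0]:.1f}, {hi_ci[1]:.1f}]")
```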
A clinical trial with an extensive amount of missing data may not provide reliable evidence to support the efficacy or safety of an intervention. Although strategies and statistical methods for dealing with missing data are available and commonly applied, the need for novel statistical methods in innovatively designed small trials (e.g., in rare diseases and pediatric patient populations) is attracting increasing attention. For example, in evaluating a drug for a rare disease, a single-arm study may be considered with an external control. The external control might be formulated from historical trials or real-world data (RWD). How to successfully handle the missing data in both the single-arm trial and the external control, and how to analyze the trial with missing data, is under debate among statisticians. Another challenging example is the handling of missing data in patient-reported outcomes (PROs). When a historical control is considered, the versions of the PRO instrument may differ; because missing data can have a different impact due to the version difference, the assessment of treatment effect may become considerably more complex.
In this session, we will invite experts from regulatory agencies, academia, and the pharmaceutical industry to share their unique experiences and methods in trials with a significant amount of missing data. A panel discussion will follow to share experiences and explore potential solutions.
The COVID-19 public health emergency has impacted all clinical trials, including ANDA bioequivalence studies. Challenges arose from quarantines, site closures, travel limitations, interruptions to the supply chain for the investigational product, and many more. These challenges led to difficulties in recruitment; in meeting protocol-specified procedures, including administering or using the investigational product or adhering to protocol-mandated visits and lab/diagnostic testing; and in the statistical analysis and interpretation of the study results. How to minimize the impact of COVID-19 on the study conclusion, assure the safety of trial participants, maintain compliance with good clinical practice, and minimize risks to trial integrity is an imposing task for everyone involved. In this session, statisticians from industry and the agency will discuss the statistical considerations in the modification of study design, trial conduct, and statistical analysis in ANDA bioequivalence studies in the presence of an unprecedented public health emergency. This session will include two presentations followed by a discussion. The first speaker (Pina D'Angelo, VP from Innovaderm) will discuss how to handle missing data caused by the pandemic in efficacy and equivalence studies. Various methods of imputation will be compared, and case studies will be presented showing the impact of different imputation methods on the efficacy and bioequivalence results. The second speaker (Wanjie Sun, Lead Mathematical Statistician from FDA) will discuss the challenges of COVID-19 for bioequivalence studies and general recommendations, from study design to statistical analysis, in PK and clinical bioequivalence studies. In particular, Dr. Sun will focus on adaptive designs for clinical endpoint BE studies to cope with the impact of the pandemic. Dr. Stella Grosser (Division Director of OB/DBVIII, FDA) will provide a discussion of the two speakers' presentations from regulatory and statistical perspectives.
Adaptive clinical trial designs have been employed in clinical development for a few decades. In 2010, FDA issued its first guidance for industry on the topic, which signified the growth of, and demand for, innovative adaptive designs. While the drug development community has benefited from this innovation, the complexity of adaptive designs also poses great challenges. In response, FDA released two new guidance documents on adaptive designs and complex innovative designs in 2019. In a review of EMA and FDA advice letters over more than five years regarding proposed adaptive phase II or phase II/III trials, 20% of the 59 studies were not accepted. Concerns that led to the rejections by the agencies were insufficient justification for the adaptation, inadequate type I error rate control, and study bias (Van Norman, 2019; Elsäßer et al., 2014). The time required for reviews of adaptive trials by the FDA and EMA was a median of 12.2 and 14 months, respectively, which was 6 to 7 weeks longer than the review times of non-adaptive-design studies. Frequently cited issues in reviews of adaptive designs were inadequate statistical power, risk of ineffectively evaluating doses, type I errors, and inadequate blinding (Van Norman, 2019).
In this session, well-known experts in adaptive designs from both industry and regulatory agencies will share their real-world experiences regarding design implementation and sponsor-regulator interactions. Participants will benefit from the success stories as well as the lessons learned from each presenter. A comprehensive review of adaptive designs and case studies in a variety of therapeutic areas will be presented. Future directions of adaptive design will also be discussed.
Unique challenges and opportunities are presented by drug development involving small populations, such as orphan diseases and pediatric indications. Although there are several incentive programs to expedite the drug approval process for such development, the same regulatory (FDA) approval standard applies as for common conditions, i.e., substantial evidence regarding the effectiveness and safety of the drug. However, recruiting pediatric patients with rare conditions is challenging due to the limited number of patients and other ethical concerns. Given the requirement for substantial evidence coupled with limited access to patients in small populations, innovative designs and analyses are much needed to leverage historical data when appropriate, optimize the value of patient data, and minimize patient risk. For some rare diseases, patients experience recurrent episodes of the outcome or event of interest. Drug developers occasionally consider a design that incorporates multiple episodes. However, within-patient correlation and its impact on the efficacy evaluation are often not considered or are underestimated. We will discuss several innovative methods of borrowing historical data and the use of Bayesian sequential methods in small populations, as well as highlight applications. One presentation will describe a simulation study exploring the operating characteristics when the design and analysis include recurrent episodes; various correlation structures and recurrence rates were considered and their impacts assessed, and both the gains and the concerns of such a design will be discussed. An academic presenter will discuss a Bayesian sequentially monitored trial that allows early stopping for efficacy or futility. Another representative will present their research and experience on using a Bayesian two-stage adaptive design to enrich trial decision making by borrowing information from an adult trial, through a case example.
As new statisticians start their careers, or as statistical leaders progress to new and more senior levels, there is a need to learn and adapt their leadership skills. Often, the skills that enabled a statistician's success in his or her current role will not enable him or her to be fully effective at a level with greater scope and responsibility without some adjustments to the skill set and the way those skills are applied. Statistical thought leaders from academia, industry, FDA, and consulting services will share their experience and perspectives by looking at some of the key characteristics that leaders need across different leadership levels and discussing how those characteristics evolve across various levels of leadership to enhance career development, including the balance and blending of technical and "softer" skills. This townhall session will include a panel and open audience engagement to raise awareness of the importance of soft skills for career advancement and to explore practical guidance for how to lead in one's current role while looking ahead for opportunities to prepare and develop leadership skills for the next role at different stages of one's career.
Questions to be discussed: 1. What skills are foundational to leadership as a statistician? In other words, what skills need to be developed early in one's career that will continue to be important as the statistician progresses to more senior levels? 2. How does one's communication style need to change as one progresses to more senior levels as a statistician? 3. Which leadership level is the most difficult transition? 4. What leadership skills do more senior leaders need that leaders at earlier levels did not?
Nonalcoholic steatohepatitis (NASH) is associated with an increased risk of liver-related mortality and cardiovascular disease. Currently, a liver biopsy is the only generally accepted method for the diagnosis of NASH and assessment of its progression toward cirrhosis. No drug treatments are currently approved for NASH. Therefore, a therapeutic agent needs to be developed as quickly as possible using expedited regulatory pathways and innovative study designs. Many challenges exist in the clinical development of a therapeutic for NASH: a majority of patients are undiagnosed, there are uncertainties in the assessment of disease progression, and sequential liver biopsies are required. In a published paper (C. Filozof et al., 2017), the authors present an approach for the clinical development of a therapeutic for NASH. The approach involves adaptive seamless clinical trial designs, consideration of a surrogate endpoint that is "reasonably likely to be associated with outcomes," and the accelerated approval pathway. Three adaptive seamless design approaches have been proposed: Phase 2/3, Phase 3/4, and Phase 2/3/4. These designs may include interim analyses and/or sample size readjustments. A seamless design may reduce the overall sample size of the trial while allowing patients to continue after each phase. These designs can take advantage of the limited number of patients willing to have multiple liver biopsies. The adaptive seamless design would need to include a process aimed at controlling the overall Type I error rate across the multiple stages. The procedure for review of biopsies (independent read) will also need to be addressed, specifically the reading criteria, how to handle discrepancies, and their impact on the analysis. The presenters for this session will review the study design options for NASH, including interim analyses, the clinical endpoints including surrogate endpoints, and sample size considerations at each phase of an adaptive seamless design.
Over its long history, the Biopharmaceutical (BIOP) Section of the American Statistical Association (ASA) has fostered community and created a shared sense of purpose among statisticians in the medical product industry. BIOP was founded as the Pharmaceutical Subsection of the Biometrics Section in 1968. As a response to the subsection’s tremendous growth in membership over the succeeding 12 years, it was granted full section status in February 1980. This approval came with one condition, that at least 500 ASA members would join the new Section. Once the membership milestone was reached, the Biopharmaceutical Section (BIOP) formally came into existence on January 1st, 1981. Since its inception, the Biopharmaceutical Section has been one of the largest and most active sections within the ASA. BIOP sponsors the annual Regulatory-Industry Statistics Workshop and the biennial Nonclinical Biostatistics Conference which have been very successful at creating community among statisticians from across government, industry, and academia. BIOP members receive valuable benefits such as web-based training courses; paper, poster, and scholarship awards; membership to scientific working groups; leadership training; and mentoring services. This panel discussion will celebrate the 40th Anniversary of the Biopharmaceutical Section and discuss how the Section will continue to lead the way to address the challenges of medical product development in the 21st century. Six past chairs will participate in this panel discussion.
Heterogeneous diseases are medical conditions that have several etiologies or phenotypes, or that present high variability in clinical manifestation. Heterogeneity adds complexity for clinical researchers conducting clinical studies, particularly in the choice of endpoints. In COVID-19, for example, severely ill hospitalized patients may have difficulty recovering within 14 days of receiving treatment but may benefit from not progressing to ventilator use or death. In general, patients with a specific phenotype may perform well on one specific endpoint, but that endpoint may not show a strong effect for other patients in the overall population exhibiting other phenotypes. In this case, if a single endpoint is used, the benefit gained on other individual outcomes is not counted and is diminished. The standard approach in this case is to define a composite outcome based on the individual outcomes. Using COVID-19 as an example, the composite outcome might be time to either ventilator use or death. However, such an approach treats all individual outcomes equally, which ignores the difference in severity between them (e.g., death is worse than ventilator use). In addition, a less important outcome may "mask" a more important outcome. In contrast, a win ratio analysis accounts for the relative priorities of the components. More precisely, the win ratio is defined as the ratio of the number of patients on the investigational arm who fared better versus worse (wins versus losses) in comparison to the control arm over a set of individual outcomes ranked by clinical importance (i.e., the odds of "winning"). In this session, we will focus the discussion on different application examples and different research angles on the win ratio and its application in heterogeneous diseases or diseases where benefit is derived in multiple domains. We will also discuss novel and open statistical issues in this area, including confidence intervals, the impact of censoring, and competing risks.
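As a concrete illustration of this definition, the sketch below computes a win ratio from hierarchically ranked outcomes: each pair of patients is compared on the most important outcome first, with ties falling through to the next outcome. The two-outcome hierarchy, the 28-day follow-up values, and the simplified tie handling (exact equality, ignoring the censoring subtleties the session will discuss) are illustrative assumptions.

```python
from itertools import product

def win_ratio(treated, control):
    """Win ratio over all treated-vs-control pairs.

    Each subject is a tuple of outcomes ordered by clinical importance,
    e.g. (days to death, days to ventilator use) over a fixed window;
    larger values (longer event-free time) are better. Fully tied pairs
    contribute to neither wins nor losses.
    """
    wins = losses = 0
    for t, c in product(treated, control):
        for t_k, c_k in zip(t, c):
            if t_k > c_k:
                wins += 1
                break
            if t_k < c_k:
                losses += 1
                break
    return wins / losses  # assumes at least one loss

# Illustrative data: (days to death, days to ventilator use), 28-day window,
# where 28 means the event did not occur during follow-up.
treated = [(28, 28), (28, 15), (20, 10), (28, 28)]
control = [(28, 28), (25, 12), (18, 9), (28, 20)]
print(f"win ratio: {win_ratio(treated, control):.2f}")  # 9 wins / 5 losses = 1.80
```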
Since 2014, the use of synthetic controls, defined as a statistical method for evaluating the comparative effectiveness of an intervention using a weighted combination of external controls, has been growing in successful regulatory submissions in rare diseases and oncology. In the next few years, the utilization of synthetic controls is likely to increase further within the regulatory space, owing to co-existing improvements in medical record collection, statistical methodologies, and sheer demand.
As fully powered randomized controlled trials are generally infeasible in these settings, novel and efficient analytic strategies are sought with every challenge encountered. Some of the existing methods and applications revolve around using synthetic control patients from control arms of earlier-phase trials or patient registries, incorporating prior information from early-phase trials, computing shrinkage estimators of effects in rare subsets of disease, sharing control groups across protocols, etc. In this session, we will focus on new defensible strategies arising from existing or novel applications of synthetic controls. The focus is on practical relevance and real applications rather than the theoretical elegance of the methods. Examples of these strategies include dynamic adaptive designs with a calibration on the use of synthetic controls at the interim analysis, statistical methods for the use of multiple controls such as multiple-treatment propensity scores and matching within quantiles of serial propensity scores, composite likelihoods and the use of propensity score weights, and enhanced doubly robust methods for propensity score weighting of synthetic controls. Perspectives will also be offered on examples seen in the regulatory setting and on the pitfalls of using these methodologies.
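As one minimal sketch of how propensity score weights can align an external control pool with a trial population (one of the strategies listed above), the code below fits a logistic propensity model for trial membership and applies ATT-style odds weights to the external controls. The simulated data, covariate model, and weighting scheme are illustrative assumptions, not any specific submission's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Illustrative data: a single-arm trial (treated) plus an external control
# pool, with three baseline covariates X and a continuous outcome y.
n_trial, n_ext = 100, 400
X_trial = rng.normal(0.3, 1.0, size=(n_trial, 3))   # trial pop shifted
X_ext = rng.normal(0.0, 1.0, size=(n_ext, 3))
y_trial = rng.normal(1.0 + 0.2 * X_trial.sum(axis=1), 1.0)  # true effect = 1.0
y_ext = rng.normal(0.0 + 0.2 * X_ext.sum(axis=1), 1.0)

# Fit a propensity model for membership in the trial cohort.
X = np.vstack([X_trial, X_ext])
z = np.concatenate([np.ones(n_trial), np.zeros(n_ext)])
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
ps_ext = ps[n_trial:]

# ATT-style odds weights re-weight external controls toward the covariate
# distribution of the trial population.
w = ps_ext / (1.0 - ps_ext)
control_mean = np.average(y_ext, weights=w)
effect = y_trial.mean() - control_mean
print(f"weighted synthetic-control estimate of treatment effect: {effect:.2f}")
```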
In precision medicine, which targets the right treatments to the right patients at the right time, a tri-center draft guidance by CDRH, CDER, and CBER recommends that therapeutic products be co-developed with their IVD companion diagnostics (CDx). Because a drug and its CDx are often developed by different companies in different phases, a lag may occur in which a market-ready CDx test is unavailable for the pivotal trial of the drug to use for enrollment, which may eventually impact the drug approval due to the lack of a cleared CDx. In such a situation, a clinical trial assay (CTA, a prototype of the CDx) is used to enroll patients while the proposed CDx assay undergoes analytical validation. Hence, there is a need to design an IVD bridging study between the CTA and CDx assays to demonstrate that the performance characteristics of the proposed CDx are similar to those of the CTA, particularly in analytical concordance and clinical utility. The validation steps are recommended to reflect an unbiased intent-to-treat population by CDx results. In pivotal clinical trials, the original trial samples can serve as a sample pool for the validation. Challenges may arise in estimating efficacy for the CDx-defined intent-to-treat population using the results from a pivotal trial enrolled by CTA status and a bridging study, given that the efficacy data of the pivotal trial by CTA can be missing or biased for a CDx-defined population due to the discrepancy between CTA and CDx, and need to be adjusted for CDx status. Such an adjustment depends on many factors, e.g., the prevalence of a biomarker status (e.g., CDx+), the level of agreement between CDx and CTA, and trial sample retest availability. These challenges call for special attention to trial designs when concurrently validating a drug and its CDx. In the session, we will present an overview of IVD CDx bridging study designs and address the challenges and possible solutions. The speakers are from FDA, pharma, and device companies.
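To give a flavor of the kind of adjustment described above, the sketch below backs out a response rate for the CDx+ population from a trial enrolled by CTA status, treating the observed CTA+ response rate as a mixture of CDx+ and CDx- responders. This is a stylized illustration under strong simplifying assumptions (known prevalence, PPA/NPA, and an assumed CDx- response rate), not the session's recommended methodology.

```python
def cdx_plus_response(rr_cta_pos, rr_cdx_neg, prevalence, ppa, npa):
    """Back out the response rate in the CDx+ population from a trial
    enrolled by CTA+ status.

    prevalence: P(CDx+) in the screened population
    ppa: positive percent agreement, P(CTA+ | CDx+)
    npa: negative percent agreement, P(CTA- | CDx-)
    rr_cdx_neg: assumed response rate among CDx- patients (sensitivity input)
    """
    # P(CDx+ | CTA+) by Bayes' rule.
    p_true_pos = ppa * prevalence
    p_false_pos = (1.0 - npa) * (1.0 - prevalence)
    w = p_true_pos / (p_true_pos + p_false_pos)
    # Observed rate is a mixture: rr_cta_pos = w * rr_cdx_pos + (1 - w) * rr_cdx_neg.
    return (rr_cta_pos - (1.0 - w) * rr_cdx_neg) / w

# Illustration: 40% observed response in CTA+ patients, 30% biomarker
# prevalence, 95% PPA, 90% NPA, assumed 10% response if truly CDx-.
adj = cdx_plus_response(0.40, 0.10, 0.30, 0.95, 0.90)
print(f"adjusted CDx+ response rate: {adj:.3f}")  # ~0.474
```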
Real-World Evidence (RWE) has been highlighted by regulatory decision makers as an important source to complement clinical trials and bring new insights to drug development and postmarket research. Recently, the underlying data supporting the generation of RWE have been rapidly expanding from traditional clinical care and electronic health record (EHR) sources to mobile devices, biosensors, and other wearables. The breadth and heterogeneity of real-world data, and its position within the big data landscape, unlock an important opportunity to consider new analytic capabilities that assist in drug development, particularly in the realm of machine learning (ML) and its role in the generation of RWE. In 2019, the FDA published a discussion paper highlighting its approach toward premarket review for ML-driven analytics. This paper envisions a regulatory framework for the evaluation of future contributions of ML, and in particular focuses on ML's unique property of adapting to data dynamics over time in ways that traditional statistical approaches fail to reveal. Recent discussions between the FDA and the Patient Engagement Advisory Committee (PEAC) surfaced a need to concretely define the standards related to model validation, evaluating the training data population, and defining the level of communication and transparency necessary when submitting filings that leverage ML algorithms. This session seeks to discuss validation and best practices critical to the assessment of ML methods for regulatory use, while also surfacing ML methods that have successfully assisted regulatory decision making in the past. These methods can include increasing sample sizes and the efficiency of cohort selection, assisting in novel variable development and the handling of missing data, and ensemble approaches to improving comparative effectiveness estimates.
Randomized clinical trials are traditionally conducted in the context of demonstrating the efficacy and safety of medical products. For many disease indications, a high placebo response in clinical trials can hamper the detection of a treatment effect. Enrichment designs and other innovative designs have been proposed to address these issues and the concern over inadequate recruitment of patients for pediatric and rare disease trials.
Enrichment designs, such as sequential parallel comparison designs (SPCD) and sequential multiple assignment randomized trials (SMART) by means of re-randomization, have been implemented in clinical trials recently and have shown success and efficiency. However, the advantages of re-randomization in enrichment designs and its practical use still need to be further explored.
This session is intended for audiences who are interested in clinical trials that implement re-randomization and who want to gain a clear understanding of the utility and technical challenges. Speakers from regulatory agencies, pharmaceutical companies, and academia will share their latest research, practical trial examples, and potential solutions.
The conversation about estimands has moved into implementation across the industry. Statisticians and their clinical colleagues are sorting through the various disease areas and study designs to create precise definitions of estimands that are clinically meaningful and acceptable to regulatory agencies. This session will feature an enactment of meetings between industry statisticians and clinicians and their regulatory counterparts. A hypothetical Alzheimer's disease treatment will be the focus of the discussion. The session will consist of three parts. The first part will portray an End of Phase 2 Meeting between the industry and regulatory participants. In this part of the session, the industry Sponsor will present the rationale for its Phase 3 trial estimands. Regulators will challenge and advise the Sponsor. The second part of the session takes place after the hypothetical trial is complete, when the Sponsor shares results with regulators in a Pre-NDA meeting. There will be challenges and debates. The third part of the session will be a floor discussion and Q&A with the audience.
The hypothetical study and results will be constructed to maximize the opportunity for different perspectives and approaches. Both statistical and clinical viewpoints will be included. Rather than a series of formal presentations and a discussant, ideas and proposals will be presented by the Sponsor, and regulators will share their perspectives in the context of the regulatory interactions. At the end of the session, the audience will be asked to vote on the clinical questions of interest and corresponding estimands considered useful at each stage in the development.
In clinical studies, a result with statistical significance is often misinterpreted as a clinically important result. However, statistically significant results are not necessarily clinically significant, and vice versa. Statistical significance measures the probability that a study's results are due to chance, while clinical significance determines whether the results of the trial are likely to impact current medical practice. In this session, we will invite three experts from FDA, industry, and academia to discuss real examples related to statistical and clinical significance and how to balance the two in evaluating treatment effects.
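A toy numerical example of the distinction: with a large enough sample size, an effect far below any minimal clinically important difference (MCID) can still be highly statistically significant. The endpoint, sample sizes, and the 5 mmHg MCID below are illustrative assumptions.

```python
import math
from scipy import stats

# Two-arm trial: 1 mmHg mean reduction in blood pressure (SD 10 per arm),
# far below an assumed clinically meaningful difference of 5 mmHg.
n_per_arm = 5000
diff, sd, mcid = 1.0, 10.0, 5.0

se = sd * math.sqrt(2.0 / n_per_arm)       # standard error of the difference
z = diff / se
p = 2 * (1 - stats.norm.cdf(z))            # two-sided p-value
print(f"z = {z:.2f}, p = {p:.2e}")         # z = 5.00, p ~ 6e-7: significant
print(f"effect {diff} mmHg < MCID {mcid} mmHg: not clinically significant")
```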
The design of time-to-event trials relies on modelling the complex relationship between the number of events, sample size, and patient follow-up. This relationship is further complicated by the fact that survival curves rarely satisfy the assumption of proportional hazards. We present three scenarios in which the usual sample size calculations for event-driven trials, based on the assumption of a constant hazard ratio, can lose considerable power, while more sophisticated modelling approaches at either the design or interim monitoring stages can recover the lost power. (1) For immuno-oncology trials characterized by delayed response, one may use "modestly weighted" logrank statistics for the final analysis (Magirr and Burman, 2018) that, unlike the Fleming-Harrington weights, ensure adequate power while also controlling error under the strong null hypothesis. (2) It is often difficult to determine at the design stage whether to increase the sample size, increase follow-up, or combine the two options to obtain a target number of events. At the interim analysis stage, however, one can use Bayesian methods to learn from the observed data and select the combination that produces the shortest study duration for a specified predictive power; Type I error is controlled by frequentist methods. (3) Maurer and Bretz (2013) have extended the use of graphical methods for multiplicity control to group sequential designs. This application, combined with related alpha-spending approaches to deal with complexities in the evaluation of multiple hypotheses, will be presented. In addition, methods for taking advantage of known correlations to relax group sequential bounds are presented as extensions of designs with a single analysis. Speakers: Pranab Ghosh, Pfizer; Rajat Mukherjee, Cytel; Keaven Anderson, Merck. Discussant: Cyrus Mehta, Cytel.
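For reference, the "usual" event-driven calculation referred to above is often based on Schoenfeld's approximation, sketched below for a two-sided log-rank test under 1:1 allocation; the hazard ratio, alpha, and power inputs are illustrative, and under a delayed treatment effect the realized power of the unweighted log-rank test can fall well short of this nominal target.

```python
import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.90, allocation=0.5):
    """Schoenfeld's approximation for the number of events needed by a
    two-sided log-rank test under a constant (proportional) hazard ratio."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (allocation * (1 - allocation) * math.log(hr) ** 2)

# About 508 events are needed to detect HR = 0.75 with 90% power at
# two-sided alpha = 0.05 under 1:1 allocation.
print(round(required_events(0.75)))
```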
Rare diseases across different biological indications demand alternative strategies for the feasibility of clinical studies. Small patient populations limit study design and implementation. Also, the genetic basis of, or the co-morbidities associated with, many rare diseases might be confounding factors in the study outcomes. The US Food and Drug Administration has issued guidance on common issues in rare disease drug development (January/February 2019), on natural history studies (March 2019), and on gene therapy for rare diseases (July 2018). Some strategies to consider: • Improved study designs (e.g., factorial designs, response-adaptive randomization) to minimize the number of participants. • Use of a synthetic control arm, with adjustment for confounding factors either by improved study design or by propensity score matching. • Efficient statistical analysis or incorporation into a larger evidence context (small individual studies with definitive evidence). • Applying effective statistical outcome measures (e.g., continuous, surrogate, or composite endpoints). • Planning and early collaboration with regulatory authorities, which could generate a pathway to drug development in a timely manner. • Focused recruitment of patients who exhibit a higher probability of the outcome.
Novel statistical methods applicable in orphan drug development include sample size re-estimation, dynamic borrowing through power priors, and fallback tests for co-primary endpoints. Model-based approaches, such as nonlinear mixed effects modeling and Bayesian approaches, could also be applied. In this session, we plan to discuss clinical designs adaptable to rare diseases and the different statistical methodologies.
Endpoints are a cornerstone of scientific research in drug development, and the development of clinically relevant, accurate, reliable, and reproducible endpoints is critical to deploying them successfully in their context of use. This field has been heavily researched and documented via publications and regulatory guidances in the context of clinical trials, while it is still nascent in the context of real-world data, which is fast evolving into a formidable data source for evidence generation in scientific and regulatory applications, in large part due to the 21st Century Cures Act and the FDA RWE framework. Given inherent differences in the ascertainment and quality of outcomes in real-world versus clinical trial settings, the development of endpoints using data from real-world sources, such as electronic health records (EHRs), claims, and registries, carries special nuances that demand close attention and robust evaluation. Only recently have consortiums facilitated by organizations like the Duke-Margolis Center for Health Policy started to closely evaluate and specify a scientific framework for the development of real-world endpoints. Understanding how real-world endpoints perform within a pre-specified framework, as well as how they compare with those used in clinical trials, is an important step in assessing their validity and enhancing their utility across use cases, including regulatory decision making. This session will discuss the methodology and key considerations involved in the development of real-world endpoints, including the scientific framework, data considerations, and analytical strategy, highlighting case studies with applications in the regulatory setting. While the main focus will be on endpoints in oncology, particularly tumor response-related endpoints, relevant examples from other therapeutic areas will also be considered.
This session aims to discuss different tools to support R-based submissions in a biopharmaceutical regulatory setting, with special attention given to the validation of the software. Statistical analysis software used for new drug applications in the biopharmaceutical regulatory setting has to fulfill specific requirements. Although the SAS programming language is the current standard, the Food and Drug Administration (FDA) does not require the use of any specific software for statistical analyses. The FDA provides a clear definition of validation (see the Glossary of Computer System Software Development Terminology), which can be broken down into three core components: accuracy, reproducibility, and traceability. With these three components in mind, we present ideas and suggestions reflecting the current thinking of the R Validation Hub working group and collaboration partners, a cross-industry initiative funded by the R Consortium. Our mission is to support the adoption of R within a biopharmaceutical regulatory setting. A comprehensive framework is discussed in a white paper (see "A risk-based approach for assessing R package accuracy within a validated infrastructure"). For the risk assessment, we differentiate two types of R packages: 1) core and recommended packages, which are shipped with the base installation and whose rigorous software development lifecycle assures minimal risk, and 2) contributed packages, which may vary in their accuracy and development rigor and can be assessed by various metrics. We focus our attention on validating contributed packages. We present some of the tools that provide a workflow to evaluate the quality of a set of R packages: the R package riskmetric and an associated Shiny application for performing risk assessments, and we discuss things to consider when testing R packages. Lastly, we offer some insights into standard tables for submissions and the r2rtf package.
Central to assessing drug safety in pre-market drug development is the identification and evaluation of adverse events (AEs) during clinical trials. Some serious AEs may occur only after the six months of exposure typically seen in a premarket randomized trial; other AEs may increase in frequency or severity with time. Selection of the number of patients to be followed is based on the estimated probability of detecting a given AE frequency. Patient or product registries with primary data collection can increase the sample size available for making risk-benefit decisions. Common study designs for long-term extension studies include uncontrolled, open-label trials, with patients recruited from multiple studies and with different inclusion/exclusion criteria, prior or concomitant medications, and lengths of follow-up. Other trials have mid-study modifications from a placebo-controlled to an open-label design when it becomes apparent that the original trial is not enrolling enough participants. Yet other studies, particularly for well-established indications, use a randomized design with active comparators. Emphasis has been placed on designs and data collection for efficient assessment of safety that provide a robust assessment of the long-term risks of investigational drugs. Patient or product registries can be used for broader, valid, and resource-efficient assessment of safety. The focus is on practical designs for pre-market safety registries and appropriate statistical methods for risk assessment. Design considerations include strategies to avoid type II errors, inclusion of an active control for contextualization of signals, and operational efficiency for better retention of patients. Analyses of drug safety databases need to consider biases and gaps in the data. A robust discussion of the outlined issues will be presented. Session speakers and panelists have extensive experience in these settings.
The development and evaluation of COVID-19 vaccines, treatments, and tests continues to evolve to meet new challenges, including the emergence of new SARS-CoV-2 variants. In this session, leading biostatisticians will speak on the latest developments and statistical aspects of vaccine, treatment, and test evaluation. A panel session will follow, in which statistical and regulatory aspects will be discussed, with questions encouraged.
In response to the high interest in and rapid development of statistical methods utilizing historical data and real-world evidence (RWE) for clinical trials and observational studies, we propose a session consisting of five experts in this area from academia, a regulatory agency, and industry. The session aims to share and discuss their most recent developments in research on borrowing information from historical data and RWE to improve the design and decision making of clinical studies. A unique aspect of the session is that it covers this important topic from three angles: 1) academic researchers (speakers Peter Müller and Brian Hobbs) will share new methodological developments addressing modeling, inference, and theory; 2) a regulatory expert (speaker Sue-Jane Wang) will provide the perspective of regulatory advancement on the use of RWE with novel research; and 3) industry leaders (speakers Sammi Tang and Heinz Schmidli) will share experiences in the implementation and application of these statistical methods. The session is likely to attract a large audience in the space of modern clinical trial methods, especially those interested in approaches that utilize real-world and historical data to improve the efficiency of clinical trials and drug development.
The presentation titles of the five speakers are: 1) Brian Hobbs, Bayesian mediation modeling for phase III oncology trial design; 2) Peter Müller, Single arm trials with a synthetic control arm built from RWD; 3) Heinz Schmidli, Robust use of external control information in clinical trials; 4) Rui (Sammi) Tang, A roadmap to using historical controls in clinical trials – by Drug Information Association Adaptive Design Scientific Working Group (DIA-ADSWG); 5) Sue-Jane Wang, Performance of LTMLE in the presence of missing patterns for prospective control-matched longitudinal cohort studies.
The recent advances in cell and gene therapy usher in an exciting new era in drug development. While transformative clinical responses to cell and gene therapy have been observed in hematological indications and rare diseases, e.g., B-cell malignancies, multiple myeloma, hemophilia, and sickle cell disease (SCD), there are also many unique challenges in the design and analysis of cell and gene therapy clinical trials. The statistical challenges facing cell and gene therapy development include, but are not limited to: (1) identifying the optimal dose considering safety and efficacy outcomes simultaneously; (2) appropriate borrowing from historical information; (3) utilizing RWE as an external control for single-arm registration-enabling trials; (4) using master protocols (e.g., multiple expansion cohorts or platform trials) to optimize operational resources, cost, and timelines for early cell and gene therapy development; (5) designing efficient confirmatory RCTs that mitigate potential non-proportional hazards patterns, which would otherwise result in power loss; and (6) the estimand framework for single-arm trials and RCTs in cell and gene therapy.
This session will feature speakers from industry, academia and regulatory agency to share their insights and highlight the recent statistical innovations to address the challenges of cell and gene therapy trials. List of invited speakers:
• Yuan Ji (University of Chicago): “The Joint i3+3 (Ji3+3) design for phase I/II adoptive cell therapy clinical trials”
• Brian Hobbs (University of Texas at Austin): “Statistical design considerations for trials that study multiple indications”
• Zhenzhen Xu (FDA): “Design for immune-oncology clinical trials involving non-proportional hazards patterns”
• Yang Song (Vertex): “Confirmatory statistical procedures with multiple IAs in a single arm setting”
The ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials has been of great interest to industry and regulatory statisticians and has raised important issues of interpretation and implementation. The Pharmaceutical Industry Working Group on Estimands in Oncology, with over 30 industry statisticians representing over 20 companies plus regulatory, clinical, and academic participants, has been working since 2018 on developing a common understanding of the estimands framework, supporting dialog between industry and regulatory participants, and developing best practices.
This session's presentations are drawn from white papers and journal manuscripts representing the work of three of the Working Group's active subteams: the Treatment Switching Subteam, the Censoring Mechanisms Subteam, and the Hematology Subteam. These subteams have surveyed the field, engaged in extensive discussions across the industry, and developed overviews and recommendations within their respective areas that should be of broad interest to industry statisticians and the clinical trial community. The presentations should be of interest both to people involved in oncology clinical trials and to people working on implementing and understanding the practical implications of the ICH E9 (R1) estimands guidance.
The COVID-19 pandemic impacted people as well as economies worldwide in 2020. FDA and pharmaceutical companies are at the frontline, bringing treatment options and vaccines to protect the public. In the last two months of 2020, Pfizer, Moderna, and AstraZeneca delivered positive news regarding their vaccine trials. Many other companies were working hard as well, hoping to bring more protective vaccines to the market.
Vaccine research and development normally takes 10-15 years. However, this process was accelerated significantly for COVID-19 vaccines: from March 2020 to the first study reporting positive results, it took 8 months! This acceleration required not only hard work, but also innovations in all aspects of research and development, statistics included. Statisticians played a strategic role in designing the clinical trials, defining clinical endpoints, and analyzing the study data. Seamless designs, adaptive designs, Bayesian designs, sequential monitoring, etc., were seen in many trials. The development of these novel, innovative statistical methodologies in earlier years has come to fruition at the right time.
Statisticians from FDA, academia and industry will share the grueling but yet rewarding experiences in developing COVID-19 vaccines. Potential speakers for the session include:
Dr. Tsai-Lien Lin, CBER, FDA
Dr. Bo Fu, Sanofi Pasteur
Dr. Satrajit Roychoudhury, Pfizer
Dr. Peter Gilbert, Fred Hutch
The Bayesian framework can provide high utility for efficient and innovative drug development. Advantages of Bayesian methods include flexibility in decision making, the ability to incorporate prior information, and a natural accommodation of heterogeneity. The past three decades have witnessed vast development and impact of Bayesian adaptive designs in clinical trials. The following recently published FDA guidance documents mention Bayesian methods explicitly: 1) Adaptive Design Clinical Trials for Drugs and Biologics Guidance for Industry (December 2019); 2) Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products (December 2020); 3) Demonstrating Substantial Evidence of Effectiveness for Human Drug and Biological Products (December 2019; draft). Evolution in approaches and current challenging issues in the use of Bayesian adaptive designs in modern clinical trials will be discussed, including the following: 1) statistical pitfalls in the elicitation of prior distributions for Bayesian adaptive designs, including how to systematically quantify the prior effective sample size and assess robustness in prior specification; 2) how to handle patient heterogeneity in designing trials, including some recent proposals on how to adaptively identify and test patient heterogeneity and how to efficiently select the promising subgroups as the trial proceeds; 3) statistical issues arising from potential conflict between concurrent and non-concurrent control data in platform trials, including an in-depth understanding of the use of non-concurrent control arm data and its impact on the Type I error rate; and 4) new statistical challenges arising from the development of trials for COVID-19 treatments, including a discussion of a Bayesian integrated sequential trial design to expedite the drug development process for COVID-19 and improve the efficiency of resource use. Four experts (3 speakers and 1 discussant) from academia, a regulatory agency, and industry have confirmed to join the session.
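As a small illustration of the prior effective sample size idea in item 1), the sketch below uses a conjugate beta-binomial model, where a Beta(a, b) prior behaves like a + b previously observed patients and the posterior mean is the correspondingly weighted average of the prior mean and the data mean. The prior parameters and interim data are illustrative assumptions.

```python
# For a Beta(a, b) prior on a response rate, a + b is a simple measure of
# the prior's effective sample size (ESS): the prior "acts like" a + b
# previously observed patients.
def beta_ess(a, b):
    return a + b

# A Beta(6, 14) prior (mean 0.30) contributes roughly 20 patients' worth
# of information; combined with n = 40 observed patients it carries
# 20 / (20 + 40) = 1/3 of the weight in the posterior mean.
a, b = 6.0, 14.0
n, x = 40, 16                      # 16 responders out of 40 patients
posterior_mean = (a + x) / (a + b + n)
prior_mean, data_mean = a / (a + b), x / n
w_prior = (a + b) / (a + b + n)    # weight the prior carries
assert abs(posterior_mean - (w_prior * prior_mean + (1 - w_prior) * data_mean)) < 1e-12
print(f"prior ESS = {beta_ess(a, b):.0f}, posterior mean = {posterior_mean:.3f}")
```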
Platform trials can combine the efficiency of multi-arm trials with the flexibility of classical drug development based on independent studies. Based on an overarching master protocol, treatments can enter and leave the platform trial over time. Operational and statistical efficiency gains can be achieved through a shared trial infrastructure, adaptive design features, and shared (control-arm) data. As the platform trial concept extends the framework of current clinical trials, traditional concepts need to be adjusted to the new paradigm. Current regulatory guidance requires control of the study-wise type I error rate. In contrast, it has been argued that a per-treatment error rate should be controlled in platform trials. However, when multiple treatments are tested in a common framework, it seems reasonable to assess the overall operating characteristics of the platform trial, such as the probability of false positive decisions, the probability of identifying the best treatment, and so on. In this session, we will discuss novel statistical approaches to address multiplicity in platform trials, discuss which error rates to control (and for which family of hypotheses), and consider the role of criteria such as the independence of hypotheses, the sharing of control data, and the phase of the trial. Session structure: 1. Talk (academia); 2. Talk (industry); 3. Discussant (FDA); 4. Discussant (EMA).
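To make the contrast between per-treatment and study-wise error rates concrete, the following Monte Carlo sketch estimates the probability of at least one false positive when several truly null arms are compared against a shared control. The number of arms, the normal outcome model, and the one-sided per-comparison alpha of 0.025 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# K experimental arms, each tested against a shared control at one-sided
# per-comparison alpha = 0.025. The shared control correlates the test
# statistics, but the chance of at least one false positive (a study-wise
# error) still far exceeds the per-treatment error rate.
K, n, n_sims = 4, 100, 20_000
z_crit = 1.959964  # ~ Phi^{-1}(0.975)

false_any = 0
for _ in range(n_sims):
    control = rng.normal(0, 1, n)
    hits = 0
    for _ in range(K):
        arm = rng.normal(0, 1, n)  # all arms truly null
        z = (arm.mean() - control.mean()) / np.sqrt(2 / n)
        hits += z > z_crit
    false_any += hits > 0
print(f"per-comparison alpha: 0.025, study-wise error: {false_any / n_sims:.3f}")
```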
Digital data, endpoints, and analyses: regulatory guidance for clinical trials
The 21st Century Cures Act (Cures Act) has ushered in multiple pathways to incorporate the perspectives of patients into the development of drugs, biological products, and devices in FDA's decision-making process. One important development has been the rapid adoption of digital health technologies (DHTs) in pharmaceutical trials. The capabilities offered by DHTs allow modernization of clinical trial designs, including the use of real-world evidence and clinical outcome assessments, which will speed the development and review of novel medical products. Digital health data collected from DHTs capture patient-reported outcomes that describe the physical activities and broader functional status of patients. Endpoints based on these data vary widely in their contexts of use, definitions, and time frames. Issues relating to within-patient variability in digital measures, validation of endpoints, and methods of analysis are of prime importance to pharmaceutical researchers and regulatory reviewers alike. Consolidated regulatory guidance addressing novel digital endpoints, data collection and standardization strategies, patient privacy concerns, and analysis techniques is expected to be very timely and useful. In this session, we explore aspects of digital health technologies useful for adoption in clinical trials and the applicable regulatory perspective and guidance.