Following the commitments of the PDUFA VI initiative, the FDA recently updated its (draft) guidance on adaptive designs, prominently emphasizing the role played by simulation in designing complex clinical trials. Simulation is a well-recognized tool in designing adaptive trials, as evidenced by the number and variety of publications on the subject. However, there is still a lack of clarity and consistency regarding what constitutes a good “backbone” of a simulation process and how to document it. The latter is of crucial importance when the simulation report becomes a key design justification document. To address this need, a group of industry statisticians with extensive experience in designing adaptive trials got together (under the sponsorship of the DIA Adaptive Design Working Group) and summarized a core set of requirements constituting “good simulation practices” for a few types of commonly used adaptive designs. This course and the companion reference paper are products of this collaboration. While the work was motivated by the goal of creating a quality simulation report, this course focuses instead on the proper planning and customization of the simulation experiment to the trial design or problem at hand. Key components of trial simulation will be covered: linking the question of interest to statistical models and assumptions, and documenting those appropriately to facilitate discussion within cross-functional teams. These concepts will be illustrated using examples of popular designs such as dose-escalation and dose-ranging trials, confirmatory trials with stopping rules and sample size re-estimation, and multi-stage designs. General topics such as adequate error control (in both Bayesian and frequentist settings) will be covered as well. The goal of this course is to develop a basic understanding of what is essential when designing a “simulation experiment” to justify a trial’s design, and how to balance simulation efficiency with scientific rigor.
1. Introduction and motivation: adaptive designs and regulatory landscape, rising role of simulation in clinical trial design
2. Models and assumptions as an essential part of simulation; the transition from real life to mathematical models and linking them to the study questions
3. Individual simulation “building blocks”
4. Some common examples of adaptive design simulations illustrating concepts in #2-3: dose-escalation and dose-ranging trials, early stopping rules and sample size re-estimation, confirmatory multi-stage designs
5. Simulation size and assuring adequate error control (false positive and false negative conclusions in the context of both Bayesian and frequentist designs)
6. Conclusions: a review of the current state of the simulation field with a discussion of possible future developments.
Ref: C. Mayer, I. Perevozskaya, S. Leonov, V. Dragalin, Y. Pritchett, A. Bedding, A. Hartford, P. Fardipour, G. Cicconetti, “Simulation Practices for Adaptive Trial Designs in Drug and Device Development,” Statistics in Biopharmaceutical Research, to appear, 2019.
• Inna Perevozskaya, Ph.D. currently leads the US division of the global Advanced Biostatistics and Data Analytics group within GSK. The group provides statistical and strategic leadership within GSK as well as consultations to leaders across R&D, focusing specifically on innovative clinical trial design and quantitative decision making. In this role, Inna serves as an adaptive design consultant to teams across various therapeutic areas and helps shape strategic decision making with respect to statistical innovation within GSK. Inna is a core member of the BIO Innovative Clinical Trial Taskforce and the DIA Adaptive Design Working Group; for the latter, she co-leads a sub-team dedicated to simulation best practices across industry. Prior to joining GSK, Inna held positions of increasing responsibility and leadership as a project statistician and adaptive design consultant at Merck, Wyeth, and Pfizer. She holds an MS in Mathematics from Moscow State University and a PhD in Statistics from the University of Maryland, where she specialized in novel dose-escalation designs for oncology. Her research and consulting experience has resulted in ~30 publications in peer-reviewed journals and several awards.
• Greg Cicconetti, Ph.D. is a research fellow in the Statistical Innovation Group at AbbVie. Greg began his career as an assistant professor of statistics at Muhlenberg College before joining industry in 2005. In his roles at GlaxoSmithKline and AbbVie, Greg has gained extensive experience in survival and longitudinal trials, Bayesian methodology, and statistical learning. He has used simulation on the trials he has supported to guide teams regarding trial design, monitoring, and sensitivity analyses. In his current role Greg assists study teams in determining decision criteria to be used at interim analyses, effectively marrying simulation and visualization to build team consensus. Greg is also a member of the DIA Scientific Working Group on Adaptive Designs and the ASA Biopharmaceutical Section’s Software Working Group.
Clinical biomarkers are becoming indispensable in designing clinical trials, owing to factors including: 1) the heterogeneity of patient populations defined by their molecular profiles; 2) complex disease etiology that demands a deeper and broader understanding of biology; and 3) challenges in evaluating the PK/PD activity of new agents. Rich biomarker data has presented drug developers with unprecedented opportunities to design clinical trials with better precision and efficiency. For example, biomarker enrichment designs allow us to test a drug across the full spectrum of patient populations according to their biomarker status. In addition, analysis of biomarker data in clinical trials can provide guidance for objective and robust decisions to de-risk clinical development.
The first part of this course will give a comprehensive overview of clinical biomarkers and major technologies, such as next generation sequencing (NGS), for biomarker discovery and quantification. Statistical considerations and challenges, such as data normalization, biomarker threshold development, and using biomarkers for decision making in clinical development, will be discussed in detail.
The second part of this course will focus on the strategy of biomarker-assisted study designs, which is important for assessing biomarker performance and reliability with regard to patient stratification for safety and efficacy. Specifically, novel designs, including Bayesian adaptive designs, and their merits and limitations will be discussed. Statistical methodologies and implications for regulatory submissions will also be presented. Case studies will be discussed for illustration.
On Behalf of the ASA Safety Working Group
Traditional evaluation of the strength of evidence for establishing the efficacy and safety of health interventions is two-tiered. In the top tier is the gold standard of randomized clinical trials (RCTs), and in the lower tier are observational studies and other sources of real world evidence (RWE). However, this two-tiered view of evidence from clinical investigations is not nuanced enough for today’s needs and methodologies. There is growing demand for fast, timely and relevant public health data on patient safety. This has resulted in increasing expectations for well-designed, well-executed and well-reported observational studies. In addition, there is rising demand for using RCTs to understand treatment effects in a more real-world setting. To face these challenges and opportunities, the ICH and various regulatory authorities are developing guidance to incorporate RWE and RCTs into relevant decision making. These ideas are reflected in the recent update of ICH E2C for periodic safety update reports, the E6/E8 renovation paper, the ICH E9(R1) estimand discussion, and the recent FDA framework on the use of real world evidence (2018). These opportunities are particularly relevant and potentially rewarding in safety monitoring and evaluation, and they motivated the ASA Safety Working Group to form a new work stream on Integrating and Bridging RCT and RWE for Safety Decision Making. This tutorial session is based on the research of this work stream and may include the following topics: 1) statistical and design considerations for real world evidence in health decision making; 2) statistical and design considerations for randomized pragmatic trials; and 3) selected topics in advanced analytics for multi-source safety data.
Ideally, causal effects of novel medical treatments are estimated from randomized clinical trials with complete follow-up and perfect adherence. However, when loss to follow-up and/or non-adherence occur, the intention-to-treat effect can under-estimate the effect of treatment relative to a placebo, or over-estimate the effect of treatment relative to an active comparator. Under-estimating the effect of treatment is particularly concerning when assessing safety outcomes. Since loss to follow-up and non-adherence are inherently post-randomization events, attempting to adjust for these events using traditional statistical methods can induce bias and lead to spurious results. However, methods exist to adjust these analyses for differential loss to follow-up without inducing bias. These methods can improve the utility of trial results for clinical practice by 1) ensuring accurate estimates of the potential for harm, and 2) providing estimates of per-protocol effects, which are patient-centered causal effects. This workshop will help trialists understand when novel methods to adjust for post-randomization variables are required, and provide worked examples of how to apply these methods in practice. Participants should have some familiarity with regression but need not have prior experience with causal inference methods. Participants will be provided with a sample dataset and sample code in R, Stata, and SAS, and should bring a laptop with their preferred statistical software installed.
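As one illustration of the kind of adjustment involved, here is a minimal sketch of inverse probability of censoring weighting (IPCW), a standard method for handling differential loss to follow-up; the specific methods covered in the workshop may differ, the workshop's own materials use R, Stata, and SAS, and all numbers below are hypothetical:

```python
# Each record is (outcome y, estimated P(remaining uncensored | covariates))
# for an uncensored patient; in practice the censoring probabilities come
# from a fitted censoring model. All numbers are hypothetical.
records = [(1, 0.9), (0, 0.8), (1, 0.5), (0, 0.9), (1, 0.8)]

naive_estimate = sum(y for y, _ in records) / len(records)

# IPCW: up-weight each uncensored patient by 1/p so the weighted sample
# resembles the trial with complete follow-up.
ipcw_estimate = sum(y / p for y, p in records) / sum(1 / p for _, p in records)

# The patient least likely to stay in follow-up (p = 0.5) had the event,
# so the naive estimate understates the event rate relative to IPCW.
```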
The primary efficacy endpoint of many phase-III clinical trials consists of multiple distinct types of outcomes. Such composite endpoints are most common in cardiology trials, where heart failure, myocardial infarction, stroke, and death are combined into Major Adverse Cardiac Events (MACE), and they appear in oncology trials as well. The use of composite endpoints has many advantages, including a larger number of events and avoidance of multiplicity issues. However, it also presents some unique challenges, such as coherent formulation of a composite-measure estimand in accordance with the recently published ICH E9(R1) guidelines and statistical methods that account for the internal hierarchy among the component outcomes. Recent years have seen great strides in meeting those challenges with newly developed methodology such as the win ratio and the proportion in favor of treatment. However, not all who work with composite endpoints are familiar with these exciting developments or are cognizant of their merits relative to the traditional approach of focusing on the first component outcome. In this proposed course, we aim to give a systematic methodological overview of the design and analysis of clinical trials with composite endpoints. An outline of the course is presented below. Whenever new methodology is on the table, the lecture will be complemented, and hopefully reinforced, by in-class demonstration of real data analysis using R.
1. Rationale and Challenges
2. Two-Sample Comparison
2.1. The Win Ratio (WR) and Net Benefit (NB)
2.2. Null and alternative hypotheses
2.3. What are the estimands?
2.4. Sample size calculations
3. Semiparametric regression
3.1. The Proportional Win model: WR in regression
3.2. Efficiency considerations
3.3. Model diagnostics and remedial steps
4. Group sequential trials and adaptive designs
4.1. Stage-wise analysis
4.2. Futility rules
4.3. Sample size calculations for group sequential trials
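To make the win ratio concrete: every treatment patient is compared with every control patient, first on the most serious outcome (death), then on the nonfatal event, and the win ratio is the number of treatment "wins" divided by "losses". A minimal sketch with hypothetical data, ignoring the censoring subtleties a real analysis must handle:

```python
INF = float('inf')  # event not observed during follow-up

# Each patient: (time to death, time to nonfatal event); data are hypothetical.
treatment = [(INF, 5.0), (INF, INF), (8.0, 3.0)]
control = [(6.0, 2.0), (INF, 4.0)]

wins = losses = 0
for t_death, t_mi in treatment:
    for c_death, c_mi in control:
        if t_death != c_death:      # compare on death first (longer is better)
            wins += t_death > c_death
            losses += t_death < c_death
        elif t_mi != c_mi:          # then on the nonfatal event
            wins += t_mi > c_mi
            losses += t_mi < c_mi
win_ratio = wins / losses           # 5 wins, 1 loss -> 5.0
```

The hierarchy matters: a treatment patient who dies later "wins" even if they had an earlier nonfatal event, which is exactly what distinguishes the win ratio from time-to-first-event analysis.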
In statistical methodology research, simulations are among the primary ways to demonstrate the operating characteristics of a proposed method against existing methods. Depending on the response variables of interest, simulation studies must be designed very carefully to produce generalizable and reproducible conclusions independent of the statistical platforms used, and this task is much more difficult, and more under-recognized, than applied statisticians tend to think. In this short course, we will have four modules covering univariable and multivariable simulation models, as well as conditional or iterative simulation models, as a set of simulation projects. In all of these projects, we will describe potential pitfalls that may not be easily recognizable and suggest what metadata to capture to achieve computing efficiency as well as reproducibility. We plan to carry out examples in both SAS and R to show the similarities and differences between the two platforms, and plan to have a ‘design studio’ to provide guidance on simulation challenges shared by the attendees.
The course will be designed as four modules: Module-1: Simulating data for univariate random variables following the Gaussian, Student’s t, Gamma (and its special cases), Beta, Binomial, and Poisson distributions, among others.
Module-2: Simulation designs for one-sample hypothesis testing for continuous, binary, and survival endpoints. In this module, we will also illustrate iterative simulation designs such as Phase-I Dose Escalation Design, and Simon’s Two-stage designs.
Module-3: Simulation designs for two- or more-sample hypothesis testing for continuous, binary, and survival endpoints. A main focus here will be empirical power calculations for randomized clinical trials.
Module-4: Simulation designs for Multivariate random variables and designs that require iterative processing. We will compare and contrast SAS and R in terms of efficiency in simulation design.
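As a flavor of the empirical power calculations in Module-3, here is a minimal Python sketch for a two-arm trial with a continuous endpoint (the course itself works in SAS and R; the function and its defaults are ours, for illustration):

```python
import random
import statistics

def empirical_power(n_per_arm, delta, sigma, z_crit=1.96, n_sims=2000, seed=1):
    """Estimate the power of a two-sample z-comparison by simulating trials:
    draw both arms, test, and count the rejection rate."""
    rng = random.Random(seed)  # seeded for reproducibility
    rejections = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, sigma) for _ in range(n_per_arm)]
        active = [rng.gauss(delta, sigma) for _ in range(n_per_arm)]
        se = ((statistics.variance(control) + statistics.variance(active))
              / n_per_arm) ** 0.5
        z = (statistics.mean(active) - statistics.mean(control)) / se
        rejections += abs(z) > z_crit
    return rejections / n_sims
```

With 64 patients per arm and a standardized effect of 0.5, the estimate lands near the analytic power of about 0.80; with delta = 0 it instead estimates the type I error rate, near 0.05. The Monte Carlo error of such estimates shrinks only with the square root of `n_sims`, one of the under-recognized design issues the course highlights.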
Abstract: In this short course, we will describe the new FDA CDER/CBER draft guidance on adaptive designs. The 2018 document, which replaced the 2010 draft, provides guidance on the appropriate use of adaptive designs for clinical trials to provide evidence of the effectiveness and safety of a drug or biologic. We will describe the key principles for designing, conducting, analyzing, and reporting the results from a clinical trial with an adaptive design. We will also address a number of special topics, such as the use of simulations in adaptive design planning and the use of Bayesian adaptive design features. At the conclusion of this short course, participants should be able to: • Define an adaptive design and discuss important advantages and limitations of adaptive designs. • Describe four important principles for clinical trials with an adaptive design. • Provide examples of the types of design modifications that can be incorporated into an adaptive design. • Outline the types of information FDA needs to evaluate an adaptive design and to evaluate results from a trial with an adaptive design. • Discuss special considerations in adaptive design, including the use of simulations, the use of Bayesian features, adaptations in time-to-event settings, and adaptations based on a potential surrogate or intermediate endpoint.
Instructors: John Scott is Director of the Division of Biostatistics in the FDA's Center for Biologics Evaluation and Research, where he has also served as Deputy Director and as a statistical reviewer for blood products and for cellular, tissue and gene therapies. Gregory Levin is Associate Director of the Division of Biometrics II in the Office of Biostatistics in the FDA’s Center for Drug Evaluation and Research, and has primarily been involved in reviews of pulmonary, allergy, rheumatology, metabolism, and endocrinology products. John and Greg have multiple publications on adaptive designs and were lead writers of the 2018 draft guidance.
In late 2016, the US Congress passed into law ‘The 21st Century Cures Act’, which instructed the FDA to update its guidance on adaptive designs. The legislation refers to adaptive designs as ‘modern’ and ‘novel’ methods, and strongly promotes the use of these innovative methods.
Although the need for flexible sample size designs (FSSD), a class of adaptive designs, has become clearer over time, their adoption has not been satisfactory. Confusion and misunderstanding associated with fully flexible sample size designs are widespread. Issues that have contributed to the delay in applying the new methods include confusion about the objectives of FSSD, confusion about how to evaluate the adaptive performance of FSSD, the misconception that the traditional group sequential design (GSD) is more efficient, a lack of understanding of the full potential of FSSD, and unawareness of how to perform design optimization.
This short course will address the above issues based on the instructors’ research. The presentation will touch upon different FSSDs but focus on the method developed by Cui, Hung and Wang (Biometrics, 1999), or the CHW method. It will be shown that under the CHW design, the sample size of a clinical trial can be determined before or after the start of the trial (Cui et al., Cont. Clin. Trials, 2017). With design optimization, the CHW design is uniformly, or approximately uniformly, more efficient than the GSD (Cui and Zhang, Stat. in Med., 2018). These findings will substantially change the way clinical studies are sized and designed. Mathematical arguments, application examples, and simulation results will be given in the classroom to facilitate understanding.
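The heart of the CHW method can be stated in a few lines: the stage-wise statistics are combined with weights fixed by the originally planned stage sizes, so changing the stage-2 sample size at the interim does not disturb the null distribution of the combined statistic. A minimal sketch of that combination rule (our naming; a sketch of the idea, not a full implementation of the design):

```python
import math

def chw_statistic(z1: float, z2: float, n1: int, n2_planned: int) -> float:
    """Combine independent stage-wise z-statistics with prespecified weights,
    as in Cui, Hung and Wang (1999). The weights come from the *planned*
    stage sizes, so they stay fixed even if the actual stage-2 sample size
    is re-estimated at the interim, which preserves the type I error rate."""
    w1 = math.sqrt(n1 / (n1 + n2_planned))
    w2 = math.sqrt(n2_planned / (n1 + n2_planned))
    return w1 * z1 + w2 * z2
```

With equal planned stages both weights are sqrt(1/2); the combined statistic is then referred to the usual critical value of the original (group sequential) design.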
After completing the training, attendees are expected to have a much better understanding and appreciation of the essence of the flexible sample size design, and to develop the hands-on capability to design optimal flexible sample size trials that improve the quality of drug development programs.
Instructor 1 Lu Cui is a Sr. Director in Statistics and Research Fellow in the Department of Data and Statistical Sciences at AbbVie. As the Head of Immunology Clinical Statistics, Lu leads the statistical support for the company’s immunology drug development programs covering multiple disease areas, including GI, rheumatology, and dermatology. As an applied statistician, drawing on over 20 years of experience working in government and in the pharma industry, Lu has pursued statistical research on a broad range of topics. He has a particular interest in adaptive clinical trial designs and their applications, with over 14 publications on the subject devoted to improving the quality and efficiency of clinical trials. The CHW method, which he co-invented, has become one of the most popular flexible sample size designs and has been implemented in statistical computing software. Lu received his PhD in Statistics from the University of Rochester, NY. He is a recipient of the FDA Award of Merit and has been an Elected Member of the International Statistical Institute since 2006.
Instructor 2 Lanju Zhang is a Director in Statistics and Research Fellow in the Department of Data and Statistical Sciences at AbbVie. He leads a group providing statistical support to emerging immunology clinical programs. His research interests include adaptive design, multi-region clinical trials, real world evidence, and nonclinical statistics. He has published two books (both with Springer) and more than 40 papers, including 12 papers and book chapters on adaptive designs. He is an Associate Editor of the Journal of Biopharmaceutical Statistics. He received his PhD in Statistics in 2005 from the University of Maryland, Baltimore County, MD.
Real world data and evidence (RWD&E) have been increasingly used in drug development and healthcare decision-making since the passage of the 21st Century Cures Act on December 9, 2016. The US FDA is developing a framework and guidance for evaluating RWD&E to support approvals of new drugs or devices, or new indications for previously approved drugs, and to support post-approval studies for monitoring safety and adverse events for further regulatory decision-making. Whereas pharmaceutical companies use RWD&E to support clinical development activities and to seek evidence to inform health technology assessment (HTA) decisions, the healthcare community uses RWD&E to develop guidelines and decisions to support medical practice and to assess treatment patterns, costs and outcomes of interventions. Although high performance computing tools, artificial intelligence and machine learning algorithms have been conveniently applied to RWD, there are still substantial challenges in deriving RWE from RWD and in using the RWE in drug development and healthcare decision-making. This short course aims to provide the audience with practical interdisciplinary approaches and applications using RWD&E in product development, regulatory decision-making, and healthcare delivery, with case studies given throughout the presentation.
Course outline:
1. Introduction
2. Real World Data
3. Statistical and Machine Learning Methods for Healthcare Decision Analysis
4. Disease Diagnosis, Patient Heterogeneity and Adherence
5. Health Technology and Health Economic Assessment
6. Risk Models and Outcome Prediction
7. Benefit-Risk Assessment
8. Causal Inference Using Real World Data
9. Analysis of Data Generated from Mobile Devices
10. Public Health Surveillance and Pharmacovigilance
11. Real World Data to Support Clinical Development
12. Pragmatic Trials and CER Trials
A Bayesian approach provides the formal framework to incorporate external information into the statistical analysis of a clinical trial. There is intrinsic interest in leveraging all available information for an efficient design and analysis, allowing trials with a smaller sample size or with unequal randomization. Examples include early-phase drug development, occasionally phase III trials, and special areas such as medical devices, orphan indications, and extrapolation in pediatric studies. Recently, the 21st Century Cures Act and PDUFA VI have encouraged the use of relevant historical data for efficient design. An appropriate statistical method in this context needs to leverage “borrowing” of information while accounting for the heterogeneity between the historical and current trials. In this short course, we will cover the statistical frameworks for incorporating trial-external evidence, with real-life examples.
We will introduce the meta-analytic-predictive (MAP) framework for borrowing historical data. The MAP approach is based on a Bayesian hierarchical model that combines the evidence from different sources. It provides a prediction for the current study based on the available information while accounting for the inherent heterogeneity in the data. The approach can be used widely across different clinical trial applications.
In the second part of the short course, we will focus on three key applications of the MAP approach in clinical trials. These applications will be demonstrated using the R package RBesT (the R Bayesian evidence synthesis tools), freely available from CRAN. The aim of the short course is to teach the MAP approach and to enable participants to apply it themselves with the help of RBesT.
I. Introduction: Motivation and general framework (15 min)
II. Methods for the analysis of a new trial using historical controls (45 min) a. Overview of available methods b. Meta-analytic Predictive (MAP) Prior and extension
III. Practical implementation I (45 mins) a. Introduction b. Prior derivation with RBesT with real life examples
IV. Design a new trial using historical controls (30 min) a. Design a new trial with MAP: Key consideration b. Complexities and challenges for implementation
V. Practical implementation II (30 mins) a. Real life example of designing a new trial with RBesT b. Assessment of operating characteristics
VI. Extension of Meta Analytic Framework (30 mins) a. Meta-analytic Combined (MAC) Approach b. Extrapolation
VII. Practical Session III (30 mins) a. Implementation of MAC using RBesT b. Example of extrapolation
VIII. Concluding Remarks and Discussion (15 min)
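As a taste of the borrowing idea underlying the course, here is a deliberately over-simplified, conjugate beta-binomial sketch; a real MAP prior is derived from a Bayesian hierarchical meta-analysis of several historical trials (as implemented in RBesT), not from the crude discounting below, and all numbers are hypothetical:

```python
# Historical control data, condensed into a Beta prior that is discounted
# to acknowledge historical-vs-current heterogeneity (the discount factor
# is an ad hoc stand-in for a proper between-trial heterogeneity model).
hist_responders, hist_n = 12, 60
discount = 0.5
a = 1 + discount * hist_responders
b = 1 + discount * (hist_n - hist_responders)
prior_ess = a + b                 # rough effective sample size borrowed

# Conjugate update with the current trial's (small) control arm:
new_responders, new_n = 5, 20
post_a = a + new_responders
post_b = b + (new_n - new_responders)
posterior_mean = post_a / (post_a + post_b)
```

The course's methods replace the ad hoc `discount` factor with a principled hierarchical model for between-trial heterogeneity and add robustification against prior-data conflict, while the effective-sample-size idea carries over directly.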
Dr. Satrajit Roychoudhury is a Senior Director and a member of the Statistical Research and Innovation group at Pfizer Inc. Prior to joining Pfizer, he was a member of the Statistical Methodology and Consulting group at Novartis. He started his career as a research statistician at the Schering-Plough Research Institute (now Merck & Co.). He has 12+ years of extensive experience working in different phases of clinical trials. His primary expertise is the implementation of innovative statistical methodology in clinical trials. He has co-authored several publications and book chapters in this area and provided statistical training at major conferences. His areas of research include survival analysis, model-informed drug development, and Bayesian methods in clinical trials.
Dr. Sebastian Weber is an Associate Director in the Statistical Methodology and Consulting group at Novartis. He joined Novartis 5+ years ago in his first industry role and has led the historical control working group at Novartis for more than 4 years. Sebastian has extensive experience in designing Oncology phase I dose-escalation trials and in the use of historical control data in clinical trials, and most recently has been involved in pediatric drug development programs, where he applies extrapolation concepts. His research interests include the application of pharmacometrics in statistics, model-based drug development, and the application of Bayesian methods for drug development.