Following the commitments of the PDUFA VI initiative, the FDA recently updated its draft guidance on adaptive designs, prominently emphasizing the role of simulation in designing complex clinical trials. Simulation is a well-recognized tool for designing adaptive trials, as evidenced by the number and variety of publications on the subject. However, there is still a lack of clarity and consistency regarding what constitutes a good “backbone” for a simulation process and how to document it. The latter is of crucial importance when the simulation report becomes a key design justification document. To address this need, a group of industry statisticians with extensive experience in designing adaptive trials came together, under the sponsorship of the DIA Adaptive Design Working Group, and summarized a core set of requirements constituting “good simulation practices” for several commonly used types of adaptive designs. This course and the companion reference paper are products of that collaboration. While the work was motivated by the goal of creating a quality simulation report, this course focuses instead on properly planning and customizing the simulation experiment to the trial design or problem at hand. Key components of trial simulation will be covered: linking the question of interest to statistical models and assumptions, and documenting those appropriately to facilitate discussion within cross-functional teams. These concepts will be illustrated using examples of popular designs such as dose-escalation and dose-ranging trials, confirmatory trials with stopping rules and sample size re-estimation, and multi-stage designs. General topics such as adequate error control (in both Bayesian and frequentist settings) will be covered as well. The goal of this course is to develop a basic understanding of what is essential when designing a “simulation experiment” to justify a trial’s design, and how to balance simulation efficiency with scientific rigor.
1. Introduction and motivation: adaptive designs and regulatory landscape, rising role of simulation in clinical trial design
2. Models and assumptions as an essential part of simulation; the transition from real life to mathematical models and linking it to the study questions
3. Individual simulation “building blocks”
4. Some common examples of adaptive design simulations illustrating concepts in #2-3: dose-escalation and dose-ranging trials, early stopping rules and sample size re-estimation, confirmatory multi-stage designs
5. Simulation size and assuring adequate error control (false positive and false negative conclusions in the context of both Bayesian and frequentist designs)
6. Conclusions: review of current state of simulation field with discussion of possible future developments.
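Item 5 of the outline (simulation size and error control) lends itself to a small worked example. The course materials do not prescribe a formula, but a common rule of thumb ties the number of simulation runs to the binomial Monte Carlo standard error of the estimated error rate; the sketch below, in Python for convenience (trial simulations are more often run in R or SAS), illustrates the arithmetic.

```python
import math

def mc_standard_error(p, n_sims):
    """Monte Carlo standard error of an estimated error rate near p
    based on n_sims independent simulation runs."""
    return math.sqrt(p * (1 - p) / n_sims)

def sims_needed(p, target_se):
    """Smallest number of runs so that the Monte Carlo SE of an
    error-rate estimate near p does not exceed target_se."""
    return math.ceil(p * (1 - p) / target_se ** 2)

# Estimating a 2.5% one-sided type I error rate to within 0.1% (1 SE)
# requires roughly 24,000 simulated trials:
n = sims_needed(0.025, 0.001)
```

The same arithmetic explains why crude searches over design parameters can use a few thousand runs, while the final type I error confirmation usually needs tens of thousands.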
Ref: C. Mayer, I. Perevozskaya, S. Leonov, V. Dragalin, Y. Pritchett, A. Bedding, A. Hartford, P. Fardipour, and G. Cicconetti, “Simulation Practices for Adaptive Trial Designs in Drug and Device Development,” Statistics in Biopharmaceutical Research, to appear, 2019.
• Inna Perevozskaya, Ph.D. currently leads the US division of the global Advanced Biostatistics and Data Analytics group within GSK. The group provides statistical and strategic leadership within GSK, as well as consultation to leaders across R&D, focusing specifically on innovative clinical trial design and quantitative decision making. In this role, Inna serves as an adaptive design consultant to teams across various therapeutic areas and helps shape strategic decision making with respect to statistical innovation within GSK. She is a core member of the BIO Innovative Clinical Trial Taskforce and the DIA Adaptive Design Working Group; for the latter, she co-leads a sub-team dedicated to simulation best practices across the industry. Prior to joining GSK, Inna held positions of increasing responsibility and leadership as a project statistician and adaptive design consultant at Merck, Wyeth, and Pfizer. She holds an MS in Mathematics from Moscow State University and a PhD in Statistics from the University of Maryland, where she specialized in novel dose-escalation designs for oncology. Her research and consulting experience has resulted in ~30 publications in peer-reviewed journals and several awards.
• Greg Cicconetti, Ph.D. is a research fellow in the Statistical Innovation Group at AbbVie. Greg began his career as an assistant professor of statistics at Muhlenberg College before joining industry in 2005. In his roles at GlaxoSmithKline and AbbVie, Greg has gained extensive experience in survival and longitudinal trials, Bayesian methodology, and statistical learning. He has used simulation on the trials he has supported to guide teams regarding trial design, monitoring, and sensitivity analyses. In his current role Greg assists study teams in determining decision criteria to be used at interim analyses, effectively marrying simulation and visualization to build team consensus. Greg is also a member of the DIA Scientific Working Group on Adaptive Designs and the ASA Biopharmaceutical Section’s Software Working Group.
Clinical biomarkers are becoming indispensable in designing clinical trials owing to factors including (1) the heterogeneity of patient populations defined by their molecular profiles, (2) complex disease etiology that demands a deeper and broader understanding of biology, and (3) the challenges of evaluating the PK/PD activity of new agents. Rich biomarker data have presented drug developers with unprecedented opportunities to design clinical trials with better precision and efficiency. For example, biomarker enrichment designs allow us to test a drug across the full spectrum of patient populations according to their biomarker status. In addition, analysis of biomarker data in clinical trials can provide guidance for objective and robust decisions to de-risk clinical development.
The first part of this course will give a comprehensive overview of clinical biomarkers and of major technologies for biomarker discovery and quantification, such as next-generation sequencing (NGS). Statistical considerations and challenges, such as data normalization, biomarker threshold development, and the use of biomarkers for decision making in clinical development, will be discussed in detail.
The second part of this course will focus on strategies for biomarker-assisted study designs, which are important for assessing biomarker performance and reliability with regard to patient stratification for safety and efficacy. Specifically, novel designs, including Bayesian adaptive designs, will be discussed along with their merits and limitations. Statistical methodologies and implications for regulatory submissions will also be presented. Case studies will be discussed for illustration.
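The “biomarker threshold development” topic mentioned above can be made concrete with a toy sketch. One common approach (not necessarily the one taught in this course) is to pick the cutoff that maximizes Youden’s J = sensitivity + specificity − 1 on labeled data; the Python fragment below uses entirely hypothetical biomarker scores.

```python
def youden_threshold(scores_pos, scores_neg):
    """Scan candidate cutoffs (the observed scores) and return the one
    maximizing sensitivity + specificity - 1, with 'biomarker-positive'
    meaning score >= cutoff."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s >= cut for s in scores_pos) / len(scores_pos)
        spec = sum(s < cut for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical assay readouts for responders vs. non-responders:
responders = [3.2, 4.1, 5.0, 4.6, 3.9, 5.4]
nonresponders = [1.1, 2.0, 2.8, 3.5, 1.7, 2.4]
cut, j = youden_threshold(responders, nonresponders)
```

In practice, threshold development also has to address assay variability, over-optimism from in-sample selection, and validation in an independent cohort, all of which this sketch ignores.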
On Behalf of the ASA Safety Working Group
Traditional evaluation of the strength of evidence for establishing the efficacy and safety of health interventions is two-tiered. In the top tier is the gold standard of randomized clinical trials (RCTs), and in the lower tier are observational studies and other sources of real world evidence (RWE). However, this two-tiered view of evidence from clinical investigations is not nuanced enough for today’s needs and methodologies. There is growing demand for fast, timely and relevant public health data on patient safety. This has resulted in increasing expectations for well-designed, well-executed and well-reported observational studies. In addition, there is rising demand for using RCTs to understand treatment effects in a more real world setting. To face these challenges and opportunities, the ICH and various regulatory authorities are developing guidance to incorporate RWE and RCTs into relevant decision making. These ideas are reflected in the recent update of ICH E2C for periodic safety update reports, the E6/E8 renovation paper, the ICH E9 R1 estimand discussion, as well as the recent FDA framework on the use of real world evidence (2018). These opportunities are particularly relevant and potentially rewarding in safety monitoring and evaluation, and serve as the motivation for the ASA Safety Working Group to form a new work stream on Integrating and Bridging RCT and RWE for Safety Decision Making. This tutorial session will be based on the research of this work stream. It may include the following topics: 1. Statistical and design considerations for real world evidence in health decision making 2. Statistical and design considerations for randomized pragmatic trials 3. Selected topics on advanced analytics for multi-source safety data
Ideally, causal effects of novel medical treatments are estimated from randomized clinical trials with complete follow-up and perfect adherence. However, when loss to follow-up and/or non-adherence occur, the intention-to-treat effect can under-estimate the effect of treatment relative to a placebo, or over-estimate the effect of treatment relative to an active comparator. Under-estimating the effect of treatment is particularly concerning when assessing safety outcomes. Since loss to follow-up and non-adherence are inherently post-randomization events, attempting to adjust for them using traditional statistical methods can induce bias and lead to spurious results. However, methods exist to adjust these analyses for differential loss to follow-up without inducing bias. These methods can improve the utility of trial results for clinical practice by 1) ensuring accurate estimates of the potential for harm, and 2) providing estimates of per-protocol effects, which are patient-centered causal effects. This workshop will help trialists understand when novel methods to adjust for post-randomization variables are required, and provide worked examples of how to apply these methods in practice. Participants should have some familiarity with regression but need not have prior experience with causal inference methods. Participants will be provided with a sample dataset and sample code in R, Stata, and SAS, and should bring a laptop with their preferred statistical software installed.
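The bias-free adjustments alluded to above include inverse probability weighting. As a hedged illustration, separate from the workshop’s own R/Stata/SAS materials, the toy simulation below shows how weighting completers by the inverse probability of remaining in follow-up recovers the full-cohort mean that a completers-only analysis would underestimate; all quantities are invented.

```python
import random

random.seed(1)

def simulate(n=100_000):
    """Toy cohort where a baseline covariate x drives both dropout and
    outcome; completers are re-weighted by 1 / P(remain | x)."""
    full_sum = naive_sum = naive_n = wt_sum = wt_n = 0.0
    for _ in range(n):
        x = random.random()                 # baseline covariate
        y = 2.0 * x + random.gauss(0, 1)    # outcome everyone would have
        full_sum += y
        p_remain = 0.9 - 0.6 * x            # high-x patients drop out more
        if random.random() < p_remain:      # subject completes follow-up
            naive_sum += y
            naive_n += 1
            w = 1.0 / p_remain              # inverse probability weight
            wt_sum += w * y
            wt_n += w
    return full_sum / n, naive_sum / naive_n, wt_sum / wt_n

true_mean, naive_mean, ipw_mean = simulate()
# true_mean and ipw_mean are both close to E[y] = 1.0, while the naive
# completers-only mean is pulled toward the low-x (low-outcome) patients
```

The weights here are known by construction; in a real trial they must be estimated, for example from a regression of dropout on baseline and time-varying covariates, which is where the workshop’s methods come in.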
The primary efficacy endpoint of many phase-III clinical trials consists of multiple distinct types of outcomes. Such composite endpoints are most common in cardiology trials, where heart failure, myocardial infarction, stroke, and death are combined into Major Adverse Cardiac Events (MACE), and they appear in oncology trials as well. The use of composite endpoints has many advantages, including a larger number of events and avoidance of multiplicity issues. However, it also presents some unique challenges, such as coherent formulation of a composite-measure estimand in accordance with the recently published ICH E9(R1) guidelines, and statistical methods that account for the internal hierarchy among the component outcomes. Recent years have seen great strides in meeting those challenges with newly developed methodology such as the win ratio and the proportion in favor of treatment. However, not all who work with composite endpoints are familiar with these exciting developments or cognizant of their merits relative to the traditional approach of focusing on the first component outcome. In this proposed course, we aim to give a systematic methodological overview of the design and analysis of clinical trials with composite endpoints. An outline of the course is presented below. Whenever new methodology is on the table, the lecture will be complemented, and hopefully reinforced, by in-class demonstration of real data analysis using R.
1. Rationale and Challenges
2. Two-Sample Comparison
2.1. The Win Ratio (WR) and Net Benefit (NB)
2.2. Null and alternative hypotheses
2.3. What are the estimands?
2.4. Sample size calculations
3. Semiparametric regression
3.1. The Proportional Win model: WR in regression
3.2. Efficiency considerations
3.3. Model diagnostics and remedial steps
4. Group sequential trials and adaptive designs
4.1. Stage-wise analysis
4.2. Futility rules
4.3. Sample size calculations for group sequential trials
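A minimal sketch may help orient readers to the win ratio idea in item 2.1: every treated patient is compared with every control patient, judging first on the more serious outcome (death) and, only if that comparison is undecided within shared follow-up, on the less serious one (hospitalization). The course demonstrations use R; this Python fragment and its data layout are purely hypothetical.

```python
def pairwise_win(t, c):
    """Return 'win', 'loss', or 'tie' for a treated-vs-control pair.
    Each subject is (death_time, hosp_time, follow_up); an event time is
    None if the event was not observed during follow-up."""
    fu = min(t[2], c[2])                      # shared follow-up window
    for k in (0, 1):                          # 0: death, 1: hospitalization
        t_ev = t[k] if t[k] is not None and t[k] <= fu else None
        c_ev = c[k] if c[k] is not None and c[k] <= fu else None
        if c_ev is not None and (t_ev is None or t_ev > c_ev):
            return "win"                      # control had the event first
        if t_ev is not None and (c_ev is None or c_ev > t_ev):
            return "loss"
    return "tie"

def win_ratio(treated, control):
    """Total wins divided by total losses over all treated-control pairs."""
    wins = losses = 0
    for t in treated:
        for c in control:
            r = pairwise_win(t, c)
            wins += r == "win"
            losses += r == "loss"
    return wins / losses

# Hypothetical subjects: (death_time, hosp_time, follow_up) in months
treated = [(None, 30, 36), (None, None, 36), (24, 10, 36)]
control = [(18, 6, 36), (None, 12, 36), (None, None, 24)]
wr = win_ratio(treated, control)
```

Note how the hierarchy is respected: a pair is decided on hospitalization only when neither subject’s death is observed first within the common follow-up window.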
In statistical methodology research, simulation is among the standard ways to demonstrate the operating characteristics of a proposed method against existing methods. Depending on the response variables of interest, simulation studies must be designed very carefully to produce generalizable and reproducible conclusions, independent of the statistical platform used, and this task is much more difficult and under-recognized than applied statisticians tend to think. In this short course, we will have four modules covering univariable and multivariable simulation models, as well as conditional and iterative simulation models, presented as a set of simulation projects. In all of these projects, we will describe potential pitfalls that may not be easily recognizable and suggest what metadata to capture to achieve computing efficiency as well as reproducibility. We plan to carry out examples in both SAS and R to show similarities and differences between the two platforms, and to hold a ‘design studio’ to provide guidance on simulation challenges shared by the attendees.
The course will be designed as four modules: Module-1: Simulating data for univariate random variables following the Gaussian, Student’s t, gamma (and its special cases), beta, binomial, and Poisson distributions, among others.
Module-2: Simulation designs for one-sample hypothesis testing for continuous, binary, and survival endpoints. In this module, we will also illustrate iterative simulation designs such as Phase-I Dose Escalation Design, and Simon’s Two-stage designs.
Module-3: Simulation designs for two- or more-sample hypothesis testing for continuous, binary, and survival endpoints. One of the main focuses here will be empirical power calculations for randomized clinical trials.
Module-4: Simulation designs for Multivariate random variables and designs that require iterative processing. We will compare and contrast SAS and R in terms of efficiency in simulation design.
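As a taste of the empirical power theme in Module-3, here is a hedged Python sketch (the course itself works in SAS and R): power is estimated as the fraction of simulated trials whose test statistic clears the critical value, using a normal approximation for brevity.

```python
import math
import random
import statistics

random.seed(2024)

def empirical_power(n_per_arm, delta, sigma=1.0, n_sims=2000):
    """Estimate power of a two-sample test by simulation: simulate many
    trials under the alternative and count rejections (two-sided 5% level,
    normal approximation to the t critical value for brevity)."""
    z_crit = 1.959964
    hits = 0
    for _ in range(n_sims):
        x = [random.gauss(0.0, sigma) for _ in range(n_per_arm)]
        y = [random.gauss(delta, sigma) for _ in range(n_per_arm)]
        se = math.sqrt(statistics.variance(x) / n_per_arm +
                       statistics.variance(y) / n_per_arm)
        z = (statistics.mean(y) - statistics.mean(x)) / se
        hits += abs(z) > z_crit
    return hits / n_sims

# 64 per arm to detect a half-SD effect: analytic power is about 0.80
power = empirical_power(n_per_arm=64, delta=0.5)
```

The same skeleton generalizes to binary and survival endpoints by swapping the data-generating step and the test statistic, which is exactly the kind of modular reuse the course emphasizes.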
Abstract: In this short course, we will describe the new FDA CDER/CBER draft guidance on adaptive designs. The 2018 document, which replaced the 2010 draft, provides guidance on the appropriate use of adaptive designs for clinical trials to provide evidence of the effectiveness and safety of a drug or biologic. We will describe the key principles for designing, conducting, analyzing, and reporting the results from a clinical trial with an adaptive design. We will also address a number of special topics, such as the use of simulations in adaptive design planning and the use of Bayesian adaptive design features. At the conclusion of this short course, participants should be able to: • Define an adaptive design and discuss important advantages and limitations of adaptive designs. • Describe four important principles for clinical trials with an adaptive design. • Provide examples of the types of design modifications that can be incorporated into an adaptive design. • Outline the types of information FDA needs to evaluate an adaptive design and to evaluate results from a trial with an adaptive design. • Discuss special considerations in adaptive design, including the use of simulations, the use of Bayesian features, adaptations in time-to-event settings, and adaptations based on a potential surrogate or intermediate endpoint.
Instructors: John Scott is Director of the Division of Biostatistics in the FDA's Center for Biologics Evaluation and Research, where he has also served as Deputy Director and as a statistical reviewer for blood products and for cellular, tissue and gene therapies. Gregory Levin is Associate Director of the Division of Biometrics II in the Office of Biostatistics in the FDA’s Center for Drug Evaluation and Research, and has primarily been involved in reviews of pulmonary, allergy, rheumatology, metabolism, and endocrinology products. John and Greg have multiple publications on adaptive designs and were lead writers of the 2018 draft guidance.
In late 2016, the US Congress passed into law ‘The 21st Century Cures Act’, which instructed the FDA to update its guidance on adaptive designs. The legislation refers to adaptive designs as ‘modern’ and ‘novel’ methods, and elevates the use of these innovative methods to the highest level.
Although the need for flexible sample size designs (FSSD), a kind of adaptive design, has become clearer over time, their uptake has not been satisfactory. Confusion and misunderstanding associated with fully flexible sample size designs are widespread. Issues that have largely contributed to the delayed application of the new methods include confusion about the objectives of FSSD, confusion about how to evaluate the adaptive performance of FSSD, the misconception that the traditional group sequential design (GSD) is more efficient, a lack of understanding of the full potential of FSSD, and unawareness of how to optimize designs.
This short course will address the above issues based on the instructors’ research. The presentation will touch upon different FSSDs but focus on the method developed by Cui, Hung and Wang (Biometrics, 1999), the CHW method. It will be shown that under the CHW design, the sample size of a clinical trial can be determined before or after the start of the trial (Cui et al., Cont. Clin. Trials, 2017). With design optimization, the CHW design is uniformly, or approximately uniformly, more efficient than GSD (Cui and Zhang, Stat. in Med., 2018). These findings will largely change how clinical studies are sized and designed. Mathematical arguments, application examples, and simulation results will be given in the classroom to facilitate understanding.
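The key property of the CHW method can be sketched in a few lines. Stage-wise z-statistics are combined with weights fixed at the design stage, so the combined statistic remains standard normal under the null hypothesis even when the stage-2 sample size is changed after looking at interim data. The Python simulation below is an illustrative assumption (a made-up adaptation rule, not classroom material) demonstrating that type I error stays at the nominal level.

```python
import math
import random

random.seed(7)

def chw_statistic(z1, z2, w1_sq=0.5):
    """Cui-Hung-Wang-type combination: independent stage-wise z-statistics
    combined with pre-specified weights (w1^2 + w2^2 = 1), so the null
    distribution is N(0,1) regardless of data-driven sample size changes."""
    return math.sqrt(w1_sq) * z1 + math.sqrt(1.0 - w1_sq) * z2

def null_rejection_rate(n1=25, n_sims=20_000, z_crit=1.959964):
    """Simulate null trials with a data-driven stage-2 sample size and
    check that the CHW test still rejects at roughly the 5% level."""
    rejects = 0
    for _ in range(n_sims):
        z1 = sum(random.gauss(0, 1) for _ in range(n1)) / math.sqrt(n1)
        n2 = 100 if z1 > 1.0 else 25   # enlarge after a promising interim
        z2 = sum(random.gauss(0, 1) for _ in range(n2)) / math.sqrt(n2)
        rejects += abs(chw_statistic(z1, z2)) > z_crit
    return rejects / n_sims

rate = null_rejection_rate()
```

Replacing `chw_statistic` with a naive pooled statistic that re-weights by the realized sample sizes would inflate the error rate under the same adaptation rule, which is precisely the problem the CHW weighting avoids.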
After completing the training, attendees are expected to have a much better understanding and appreciation of the essence of the flexible sample size design, and to have developed the hands-on capability to design optimal flexible sample size trials that improve the quality of drug development programs.
Instructor 1 Lu Cui is a Sr. Director in Statistics and Research Fellow in the Department of Data and Statistical Sciences at AbbVie. As the Head of Immunology Clinical Statistics, Lu leads the statistical support for the company’s immunology drug development programs, covering multiple disease areas including GI, rheumatology, and dermatology. As an applied statistician, stimulated by over 20 years of experience working for the government and in the pharma industry, Lu has continued statistical research on a broad range of topics. He has a particular interest in adaptive clinical trial designs and their applications, with over 14 publications on the subject devoted to improving the quality and efficiency of clinical trials. The CHW method, which he co-invented, has become one of the most popular flexible sample size designs and has been implemented in statistical computing software. Lu received his PhD in Statistics from the University of Rochester, NY. He is a recipient of the FDA Award of Merit and has been an Elected Member of the International Statistical Institute since 2006.
Instructor 2 Lanju Zhang is a Director in Statistics and Research Fellow in the Department of Data and Statistical Sciences at AbbVie. He leads a group providing statistical support to emerging immunology clinical programs. His research interests include adaptive design, multi-region clinical trials, real world evidence, and nonclinical statistics. He has published two books (both with Springer) and more than 40 papers, including 12 papers and book chapters on adaptive designs. He is an Associate Editor of the Journal of Biopharmaceutical Statistics. He received his PhD in Statistics in 2005 from the University of Maryland, Baltimore County, MD.
Real world data and evidence (RWD&E) have been increasingly used in drug development and healthcare decision-making since the passage of the 21st Century Cures Act on December 9, 2016. The US FDA is developing a framework and guidance for evaluating RWD&E to support approvals of new drugs or devices, or new indications for previously approved drugs, and to support post-approval studies for monitoring safety and adverse events for further regulatory decision-making. Whereas pharmaceutical companies use RWD&E to support clinical development activities and to seek evidence to inform health technology assessment (HTA) decisions, the healthcare community uses RWD&E to develop guidelines and decisions to support medical practice and to assess treatment patterns, costs and outcomes of interventions. Although high performance computing tools, artificial intelligence and machine learning algorithms have been conveniently applied to RWD, there are still substantial challenges in deriving RWE from RWD and in using the RWE in drug development and healthcare decision-making. This short course aims to provide the audience with practical interdisciplinary approaches and applications using RWD&E in product development, regulatory decision-making, and healthcare delivery, with case studies given throughout the presentation.
Course outline: 1. Introduction 2. Real World Data 3. Statistical and Machine Learning Methods for Healthcare Decision Analysis 4. Disease Diagnosis, Patient Heterogeneity and Adherence 5. Health Technology and Health Economic Assessment 6. Risk Models and Outcome Prediction 7. Benefit-Risk Assessment 8. Causal Inference Using Real World Data 9. Analysis of Data Generated from Mobile Devices 10. Public Health Surveillance and Pharmacovigilance 11. Real World Data to Support Clinical Development 12. Pragmatic Trials and CER Trials
A Bayesian approach provides a formal framework for incorporating external information into the statistical analysis of a clinical trial. There is intrinsic interest in leveraging all available information for an efficient design and analysis, allowing trials with smaller sample sizes or unequal randomization. Examples include early-phase drug development, occasionally phase III trials, and special areas such as medical devices, orphan indications, and extrapolation in pediatric studies. Recently, the 21st Century Cures Act and PDUFA VI have encouraged the use of relevant historical data for efficient designs. An appropriate statistical method in this context needs to leverage the “borrowing” of information while accounting for the heterogeneity between the historical and current trials. In this short course, we'll cover statistical frameworks for incorporating trial-external evidence, with real-life examples.
We will introduce the meta-analytic-predictive (MAP) framework for borrowing historical data. The MAP approach is based on a Bayesian hierarchical model which combines the evidence from different sources. It provides a prediction for the current study based on the available information while accounting for inherent heterogeneity in the data. This approach can be used widely in different clinical trial applications.
In the second part of the short course, we will focus on three key applications of the MAP approach in clinical trials. These applications will be demonstrated using the R package RBesT (R Bayesian evidence synthesis tools), which is freely available from CRAN. The aim of the short course is to teach the MAP approach and enable participants to apply it themselves with the help of RBesT.
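To make the borrowing mechanics concrete before the practical sessions, here is a stripped-down Monte Carlo sketch of a MAP prior under a normal-normal hierarchical model. It is not RBesT (which estimates the between-trial heterogeneity, fits mixture approximations, and supports robustification); here the between-trial SD tau is fixed and the historical numbers are invented.

```python
import math
import random

random.seed(11)

# Hypothetical historical control data: (estimate, standard error)
# from three prior studies of the same control regimen.
historical = [(0.45, 0.08), (0.52, 0.10), (0.48, 0.07)]

def map_prior_draws(data, tau=0.05, n_draws=50_000):
    """Monte Carlo sketch of a MAP prior with known between-trial SD tau:
    draw the population mean mu from its posterior given the historical
    studies (flat prior on mu), then predict a new study's true
    parameter theta_new ~ N(mu, tau)."""
    weights = [1.0 / (se ** 2 + tau ** 2) for _, se in data]
    mu_hat = sum(w * y for (y, _), w in zip(data, weights)) / sum(weights)
    mu_sd = math.sqrt(1.0 / sum(weights))
    return [random.gauss(random.gauss(mu_hat, mu_sd), tau)
            for _ in range(n_draws)]

draws = map_prior_draws(historical)
map_mean = sum(draws) / len(draws)
map_var = sum((d - map_mean) ** 2 for d in draws) / len(draws)
# map_var exceeds the posterior variance of mu by roughly tau**2: the
# between-trial heterogeneity is what discounts the historical borrowing
```

The widening of the predictive distribution relative to the posterior for the population mean is the essential point: the larger tau is, the less the historical data are worth for the new trial.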
I. Introduction: Motivation and general framework (15 min)
II. Methods for the analysis of a new trial using historical controls (45 min) a. Overview of available methods b. Meta-analytic Predictive (MAP) Prior and extension
III. Practical implementation I (45 mins) a. Introduction b. Prior derivation with RBesT with real life examples
IV. Design a new trial using historical controls (30 min) a. Design a new trial with MAP: Key consideration b. Complexities and challenges for implementation
V. Practical implementation II (30 mins) a. Real life example of designing a new trial with RBesT b. Assessment of operating characteristics
VI. Extension of the Meta-Analytic Framework (30 mins) a. Meta-analytic Combined (MAC) Approach b. Extrapolation
VII. Practical Session III (30 mins) a. Implementation of MAC using RBesT b. Example of extrapolation
VIII. Concluding Remarks and Discussion (15 min)
Dr. Satrajit Roychoudhury is a Senior Director and a member of the Statistical Research and Innovation group at Pfizer Inc. Prior to joining Pfizer, he was a member of the Statistical Methodology and Consulting group at Novartis. He started his career as a research statistician at the Schering-Plough Research Institute (now Merck & Co.). He has 12+ years of extensive experience working in different phases of clinical trials. His primary expertise includes the implementation of innovative statistical methodology in clinical trials. He has co-authored several publications and book chapters in this area and has provided statistical training at major conferences. His areas of research include survival analysis, model-informed drug development, and the use of Bayesian methods in clinical trials.
Dr. Sebastian Weber is an Associate Director in the Statistical Methodology and Consulting group at Novartis, which he joined 5+ years ago in his first industry role. He has led the historical control working group at Novartis for more than 4 years. Sebastian has extensive experience in designing oncology phase I dose-escalation trials and in the use of historical control data in clinical trials, and has most recently been involved in pediatric drug development programs, where he applies extrapolation concepts. His research interests include the application of pharmacometrics in statistics, model-based drug development, and the application of Bayesian methods to drug development.
The oncology drug development paradigm has changed over the last several years. With novel targeted and immunotherapies, clinical trial design in oncology has evolved to help accelerate drug development and provide patients timely access to highly effective therapies. Study designs such as seamless expansion cohorts and single-arm studies are increasingly used to support accelerated regulatory approvals.
In rare disease settings with unmet need (e.g., patients with rare mutations, pediatric patients), interpreting results from a single-arm study requires the construction of an external control arm. Constructing contemporaneous and clinically relevant external cohorts from high-quality real-world databases could provide a robust basis of comparison for evaluating the effectiveness of promising therapies in oncology drug development.
The objective of the presentations in this session will be to examine whether contemporaneous longitudinal data from curated electronic health record (EHR) databases of cancer patients can be used to (1) create a control arm for a single-arm study and (2) support novel design approaches such as a “hybrid control” in a randomized trial (enhancing the control arm). We intend to evaluate the reliability of developing external controls from real-world data through case studies. For the development of a “hybrid control” arm, we will evaluate the use of appropriate methodologies, including Bayesian approaches, to bring the appropriate level of evidence from real-world data into a clinical trial to enhance the control, while promoting the internal validity of such novel designs.
Ever since the missing data report “The Prevention and Treatment of Missing Data in Clinical Trials” was published by the National Research Council in 2010, missing data prevention and handling has been a popular regulatory topic that has caught the attention not only of researchers and trialists, but also of statisticians and clinicians. Furthermore, the draft ICH E9 addendum on estimands and sensitivity analysis in clinical trials was issued in mid-2017, and it has been consulted throughout trials by both regulators and industry. Despite the many efforts and novel strategies for minimizing missing data, it is inevitable in most clinical trials, however small its impact on the final treatment effect evaluation. Hence, most regulatory agencies continue to recommend pre-specified missing data handling strategies for the primary analyses, along with sensitivity analyses. Such recommendations push missing data analysis research and implementation to new heights. The most popular primary missing data handling strategies are variations of multiple imputation, while the most favored sensitivity analysis is the tipping-point analysis based on the primary multiple imputation analysis. Nevertheless, most research has focused on superiority trials with binary or continuous endpoints. There is still room for further advancement in less common trial designs and endpoints, such as non-inferiority trials and time-to-event and longitudinal endpoints. The seemingly straightforward multiple imputation and tipping-point analyses cannot be adopted easily in these situations, or, more importantly, do not constitute a reasonable investigation of the impact of missing data on study conclusions. This session will venture to address some of these unique challenges with revamped multiple imputation models and tipping-point analyses.
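The tipping-point idea mentioned above can be sketched very simply: shift the values imputed for missing treatment-arm patients by an increasingly unfavorable penalty delta until the treatment effect loses significance, and judge whether that tipping delta is clinically plausible. The Python fragment below is a deliberately simplified single-imputation version with invented numbers; real analyses use full multiple imputation with Rubin’s rules.

```python
import math
import statistics

# Hypothetical change scores; three treated patients are missing and are
# imputed at the control-arm mean before the penalty is applied.
control = [0.1, -0.4, 0.3, 0.0, 0.2, -0.1, 0.4, 0.1, -0.2, 0.0]
treated_obs = [1.0, 0.7, 1.3, 0.9, 0.6, 1.1, 0.8]
n_missing = 3
base_impute = statistics.mean(control)      # control-based imputation

def z_stat(treated, ctrl):
    """Two-sample z-statistic with sample variances."""
    se = math.sqrt(statistics.variance(treated) / len(treated) +
                   statistics.variance(ctrl) / len(ctrl))
    return (statistics.mean(treated) - statistics.mean(ctrl)) / se

def tipping_delta(step=0.1, max_delta=5.0, z_crit=1.959964):
    """Smallest penalty applied to the imputed values that drops the
    test statistic below the two-sided 5% threshold."""
    delta = 0.0
    while delta <= max_delta:
        treated = treated_obs + [base_impute - delta] * n_missing
        if abs(z_stat(treated, control)) < z_crit:
            return delta
        delta = round(delta + step, 10)
    return None   # result never tips within the scanned range

tip = tipping_delta()
```

In the longitudinal and time-to-event settings the session targets, the complication is that no single delta-shift of imputed values captures the missingness mechanism, which is why revamped imputation models are needed.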
The 2017 draft addendum to the ICH E9 guideline on Statistical Principles for Clinical Trials introduces an estimand framework aimed at aligning trial objectives and statistical analyses based on a precise definition of the inferential quantity of interest, the estimand. One of the attributes of an estimand is the strategy for handling intercurrent events, i.e., post-baseline events which preclude observation of the endpoint of interest or affect its interpretation, to reflect the scientific question of interest. Although causal estimands are not explicitly mentioned in the addendum, the hypothetical and principal stratum strategies for addressing intercurrent events lead to causal estimands.
This session will focus on causality in a time-to-event setting and provide examples where a causal estimand in a drug development program is desirable. The principal stratum strategy, rarely applied in drug development, will be in scope. Alternatives to the hazard ratio effect measure will be embedded in the estimand framework, reviewed under causality considerations, and their underlying assumptions and possible sensitivity analyses will be discussed.
This session will also consider causal inference methodology applied in oncology while analyzing overall survival in the presence of treatment switching. Different estimands in this setting will be presented, illustrating the impact of the estimand choice on study design, data collection, trial conduct, analysis, and interpretation.
The session will include three talks by members of the Joint EFSPI SIG for Estimands in Oncology and FDA representatives.
How to optimize study design has been extensively discussed for superiority/efficacy trials, but seldom for bioequivalence (BE) or biosimilar studies. For locally acting generic drugs, a three-arm parallel clinical endpoint BE study is often used to establish BE between a generic (T) and an innovator drug (R). BE is established if equivalence is demonstrated between T and R AND superiority is established for T over placebo (P) and for R over P. In practice, however, there are times when equivalence passes but one or both superiority tests fail. For certain biosimilar products, both pharmacokinetic (PK) and clinical efficacy equivalence are required. However, the assumed PK and efficacy parameters are sometimes inaccurate, resulting in either failure to establish equivalence or unnecessary costs. Therefore, optimizing study designs for BE and biosimilar studies in order to improve their effectiveness and efficiency is a pressing task under both the Biosimilar User Fee Amendments (BsUFA II) and the Generic Drug User Fee Amendments (GDUFA II). In this session, three speakers from the FDA and academia/industry will discuss various optimized study designs for BE and biosimilar studies. Speaker 1 will discuss the challenges encountered when reviewing clinical endpoint BE studies, using case examples, and provide recommendations for study designs. Speaker 2 will introduce a novel adaptive clinical endpoint study with interim analysis and sample size re-estimation to avoid an over-powered or under-powered clinical trial. Speaker 3 will discuss an adaptive seamless design for establishing PK and efficacy equivalence in developing biosimilars, to remedy the risk of mis-specification of the PK and efficacy parameters. An expert in the field will comment and provide recommendations based on the three talks.
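The three-arm decision rule described above (equivalence of T vs. R plus superiority of both T and R over placebo) can be written down compactly. The sketch below is an illustrative assumption, not the speakers’ material: it uses a shared standard error, a hypothetical equivalence margin, and two one-sided tests (TOST) for the equivalence piece.

```python
import math

Z = 1.644854  # one-sided 5% normal critical value

def superior(mean_a, mean_b, se):
    """One-sided test that arm A beats arm B."""
    return (mean_a - mean_b) / se > Z

def equivalent(mean_t, mean_r, se, margin):
    """TOST: the T-R difference is confidently inside +/- margin,
    i.e., both one-sided tests against the margins reject."""
    d = mean_t - mean_r
    return (d + margin) / se > Z and (margin - d) / se > Z

def be_decision(mean_t, mean_r, mean_p, se, margin):
    """Three-arm clinical endpoint BE decision: equivalence of T vs. R
    AND superiority of both T and R over placebo P."""
    return (equivalent(mean_t, mean_r, se, margin)
            and superior(mean_t, mean_p, se)
            and superior(mean_r, mean_p, se))
```

Writing the rule this way makes the failure mode in the abstract visible: equivalence can pass while a superiority comparison against placebo fails, so all three conjuncts must be powered jointly when optimizing the design.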
Designing appropriate clinical trials to support approval of products for the treatment of rare diseases is hard. FDA defines an orphan product as one intended to treat a disease affecting fewer than 200,000 people in the US, but some rare diseases may affect 1,000 or fewer US patients. One might assume rare means homogeneous, but that is far from the truth: patients with a disorder may suffer from a subset of a list of symptoms on a daily basis. Because the disease is rare, the natural history is not always well understood, and accruing patients is difficult, the development of meaningful endpoints is hard and poses statistical challenges. We propose a session with an introduction from a physician with extensive experience designing clinical trials for rare diseases (e.g., inherited disorders such as Gaucher's disease). This will be followed by two statisticians who will talk about ways of approaching complex endpoints in clinical trials (we have asked L.-J. Wei of Harvard, who is interested but not yet confirmed, and George Kordzakhia, an FDA statistician with extensive experience in rare diseases, will provide some perspectives on the subject). We will then have a panel and an interactive discussion (with the speakers and two additional panel members, one from pharma and one from FDA) with opportunity for audience involvement. Dr. Mike Hale, an experienced moderator who heads Biostatistics at Shire, a pharma company with serious interest in developing products for rare diseases, will moderate the session.
Master protocols, including basket, umbrella, and platform trials, provide improved efficiency in addressing broader questions on the effect of multiple drugs and/or multiple sub-populations in one trial, as compared to multiple independent trials. In addition to the operational complexities, the statistical challenges are not trivial, but the development of novel statistical designs and analysis methods to accommodate those challenges has been falling behind. FDA recently published a guidance on master protocols with major focus on design and statistical considerations. This includes sample size considerations for achieving adequate power in nonrandomized studies, comparative analysis using a common control arm, allocation of biomarker-defined subgroups, and adaptive design strategies for sample size modification, adding and dropping an arm, etc. Although the master protocol concept has gained a lot of momentum recently, statistical issues and practical challenges need to be fully addressed to pave the way for its broader application. In this session, experts and practitioners will share their experience with basket, umbrella, and platform trials with a focus on innovative statistical considerations.
Medical diagnostic tests often provide a binary response: a positive or negative result for a target condition. In some cases, however, incorporating a "gray zone" or intermediate zone, leading to more than two results, seems reasonable. Using an intermediate/gray zone to define a 3x2 table is more appropriate than ignoring the test scores in these zones. The six-cell matrix (3x2 table), however, would serve limited purpose if clinicians cannot apply the additional information provided by the conditional operating characteristics to make effective patient-management decisions. A decision-analytic approach utilizing pre-test and post-test probability of the target condition can be used for efficient categorization of more than two results for such tests.
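The decision-analytic step described above can be illustrated with Bayes' theorem in odds form, where each of the three test categories (positive, gray zone, negative) carries its own likelihood ratio derived from the 3x2 table. The likelihood-ratio values and the pre-test probability below are hypothetical.

```python
# Sketch of converting a pre-test probability of the target condition into a
# post-test probability via a category-specific likelihood ratio (LR) for a
# three-level test result. All numeric values are hypothetical.

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: post-odds = pre-odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical category-specific LRs, as might be estimated from a 3x2 table:
LR = {"positive": 8.0, "gray": 1.0, "negative": 0.1}

pre = 0.30  # assumed pre-test probability of the target condition
for result, lr in LR.items():
    print(result, round(post_test_probability(pre, lr), 3))
```

Note that a gray-zone result with LR near 1 leaves the pre-test probability essentially unchanged, which is exactly the information a 2x2 dichotomization would have distorted.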
With the 21st Century Cures Act enacted, stakeholders from industry, academia, and regulatory agencies have been exploring how machine learning and artificial intelligence (AI) can help drive medical innovation and accelerate medical product development. Machine learning and AI technologies have been developed to support bedside clinical decision-making. They also open many exciting opportunities for developing innovative ways to synthesize evidence from clinical trial data and real-world big data to support pharmaceutical development. While machine learning and AI are quickly evolving in medical fields, many stakeholders feel that we are entering uncharted waters, given the lack of experience, especially in the regulatory area, and technical uncertainties such as "black box" machine learning/AI tools used in clinical decision support. Because machine learning and AI are new to regulatory science, presentations and discussion on this topic have been rare in the regulatory context. This session will provide an open forum for speakers from industry, academia, and regulatory agencies to share their up-to-date methodological research, practical experience, and regulatory considerations. We hope that this session will bring machine learning and AI closer to our clinical/pharmaceutical statistics community, and inspire discussion, collaboration, and consensus building on this new front.
Demonstration of vaccine efficacy (VE) has relied on evaluating clinical disease endpoints through randomized, double-blind, placebo-controlled trials (RCTs) in selected populations. However, this is not a one-size-fits-all approach, as it may not be feasible when the disease incidence rate is low. Potential causes of a low incidence rate include the availability of a previously licensed vaccine and diseases that are rare outside of outbreaks. In such cases, immune response may be used to infer VE when correlates of protection have been established.
In contrast, observational methods are more common in the assessment of vaccine effectiveness (VEV) post licensure, due to ethical concerns over the inclusion of a placebo group when an efficacious vaccine is readily available. Furthermore, VE is estimated under the restricted and well-defined conditions of clinical trials, which may differ markedly from the actual effectiveness in the field when conditions are less ideal (e.g., poor compliance) or the population differs. Real-world evidence can complement data from conventional large-scale RCTs and shed light on the effectiveness of a vaccine. In combination with post-licensure surveillance, observational VEV studies play a crucial role in evaluating the benefits and risks of a vaccine, especially when the actual incidence rate is dissimilar to that observed in pre-licensure studies.
In this session, speakers from industry, public health, and regulatory agencies will discuss contemporary challenges in the evaluation of vaccine efficacy and effectiveness. Topics include, but are not limited to, study design, statistical considerations for traditional RCTs for efficacy, and utilizing real-world evidence (e.g., observational studies) to assess vaccine effectiveness.
The FDA’s approval of three gene therapy products in 2017 opened the door to a radically new class of treatments, and even cures, of diseases, which was thought impossible decades ago. Additionally, product development in rare diseases has been challenging, and many rare diseases have identified causal genes. As a result, there is high hope and belief that gene therapies, through gene editing, replacement, or delivery of a new gene, can treat and even cure disease. These factors have motivated rapid development of gene therapies for various severe diseases in the drug development community. In addition to existing issues pertaining to product development for rare diseases, gene therapies have unique features that differ from small molecules and other biologic products, for example, one dose for a lifetime effect, and safety implications for patients due to permanent modification of the human genome. These features pose challenges for the design and assessment of safety and efficacy in clinical development programs. FDA recently refreshed and published a number of guidances on the development of gene therapies, for example, the considerations in hemophilia gene therapy development. Therefore, it is beneficial for the community to discuss, for the first time at the workshop, issues and considerations pertaining to the design and development of gene therapies. In this session, we invite speakers from FDA and industry to discuss statistical considerations and more general development issues important to the design of gene therapy clinical trials and the assessment of clinical data, such as first-in-human study design, dose selection, randomization, utilization of external data, efficacy endpoints, and safety considerations. Three speakers will provide their perspectives on this topic: • Shiowjen Lee, PhD, Center for Biologics Evaluation and Research (CBER), FDA • Jessie Gu, PhD, Novartis • John Zhong, PhD, Biogen
The recent rapid development of big data analytical methods has made it possible to bring artificial intelligence (AI), which mimics human cognitive functions by computer, into the healthcare domain. AI can not only reveal clinically relevant information from the massive amount of healthcare data, but also assist clinical decision making. For example, the IBM Watson system, which includes both machine learning and natural language processing modules, may provide treatment recommendations that are coherent with physician decisions. The US Food and Drug Administration (FDA) published the guidance Software as a Medical Device (SaMD): Clinical Evaluation in 2017, and in 2018 permitted marketing of a medical device that uses a series of deep learning detectors to search for lesions specific to diabetic retinopathy. To explore recent AI developments in regulatory science, this session will present topics from both scientific and regulatory perspectives.
Heterogeneity of treatment effects (HTE) is the variation in how individuals respond to a treatment. Treatment benefit often varies among individuals in a trial due to differences in important disease characteristics. In current practice, the primary analysis focuses on the treatment benefit observed in the intent-to-treat (ITT) population. However, this "average" benefit may not be applicable to all patients in the trial because the benefit may be heterogeneous across subgroups. The conventional way of assessing HTE is subgroup analysis using data from each subgroup independently: it divides the trial population into subgroups based on each potentially influential disease characteristic and then performs separate analyses. This traditional subgroup analysis is prone to false-positive signal detection (treatment benefit), and the estimated treatment effects have high variability due to the limited data in each subgroup. HTE is also observed in the study-level parameters of meta-analyses because of imbalance in important disease characteristics (known as "effect modifiers"). Appropriate assessment of HTE is critical to regulators, pharmaceutical companies, policy makers, researchers, and patients. The key challenges are assessing heterogeneity and identifying subgroups of patients likely to benefit from the treatment with a certain degree of precision. Current practice does not address those concerns adequately.
This session will focus on different statistical approaches to assessing HTE in confirmatory and exploratory analyses, along with their advantages and disadvantages. It will further reflect on some recent state-of-the-art methodologies, such as Bayesian shrinkage. The speakers and discussant will discuss possible strategies for communicating HTE to all stakeholders involved in clinical trials. This session will feature four prominent participants (three speakers and one discussant) from industry, academia, and a regulatory agency.
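As a rough illustration of the Bayesian shrinkage idea mentioned above, the sketch below pulls raw subgroup estimates toward a precision-weighted overall mean under a simple normal-normal model. The data, the fixed between-subgroup variance tau^2, and the function name are all assumptions for illustration, not any speaker's method.

```python
# Minimal Bayesian shrinkage sketch: subgroup treatment-effect estimates are
# pulled toward the overall mean, with noisier subgroups shrunk more. A
# normal-normal model with a fixed (assumed) between-subgroup variance tau^2
# is used purely for illustration; the data are made up.

def shrink(estimates, std_errors, tau2):
    """Posterior means under y_i ~ N(theta_i, se_i^2), theta_i ~ N(mu, tau2)."""
    # Precision-weighted overall mean
    weights = [1.0 / (se**2 + tau2) for se in std_errors]
    mu = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    # Shrinkage factor B_i = tau2 / (tau2 + se_i^2); B_i -> 0 as se_i grows,
    # so the noisiest subgroup estimates move furthest toward mu.
    return [mu + tau2 / (tau2 + se**2) * (y - mu)
            for y, se in zip(estimates, std_errors)]

subgroup_effects = [0.8, 0.1, -0.3]   # hypothetical raw subgroup estimates
std_errors       = [0.4, 0.2, 0.5]    # larger SE -> stronger shrinkage
print([round(x, 2) for x in shrink(subgroup_effects, std_errors, tau2=0.05)])
```

This illustrates why shrinkage tempers the false-positive problem of independent subgroup analyses: an extreme estimate from a small subgroup is discounted toward the overall effect.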
The FDA guidance on real-world evidence issued in August 2017 and the Framework for FDA's Real-World Evidence Program released in December 2018 describe the current practice on the use of real-world data (RWD) for evidence generation and outline a blueprint for evaluating RWD and real-world evidence (RWE) for use in regulatory decisions to support the safety and effectiveness of medical products. While regulatory agencies gain more experience in using RWD and RWE in product approval, there are still substantial challenges, and hence opportunities, in deriving RWE from a variety of RWD for regulatory use in product development and decision-making. This session comprises four presentations by statisticians from academia, industry, and FDA: two focus on practice and perspectives on the use of RWD and RWE in regulatory decision making for medical devices and drug products, respectively, and the other two on methodological aspects of using RWD for causal inference, which is essential in transforming RWD into valid RWE.
Speakers and Their Affiliations and Presentation Titles:
Yi Huang, PhD, University of Maryland Baltimore County. Presentation Title: Comparison of Causal Methods for Average Effect Estimation Allowing Covariate Measurement Error Using Simulation Studies
Jie Chen, PhD, Merck Research Laboratory. Presentation Title: Real world data, machine learning and causal inference
Rongmei Zhang, PhD, US FDA, Center for Drug Evaluation and Research. Presentation Title: Real World Evidence for Regulatory Decision Making in Drug Safety and Efficacy
Lilly Yue, PhD, US FDA, CDRH. Presentation Title: Incorporating Real World Evidence for Regulatory Decision Making in Medical Device Evaluation
Basket trials are clinical studies that test one investigational product in patients who share the same gene mutation but have different cancer types or other cancer characteristics. This is in contrast to the traditional trial design, in which each clinical trial studies one drug in one indication. Basket trials have been attracting more interest given the rapid development of precision medicine and biomarker discovery. While allowing one master protocol to simultaneously evaluate one drug in multiple cancer types brings efficiency and expedition to drug development (FDA Guidance, 2018), it does not mean clinical trialists can take a "discount" on safeguarding patient safety and trial integrity.
A data monitoring committee (DMC) is usually employed to monitor safety and efficacy during the trial. The added complexities of a basket trial bring both operational and statistical challenges for DMCs working to safeguard patient safety. The composition of the DMC may differ given that multiple cancer types are studied under the master protocol. Although Renfro (2016) described several statistical challenges in master protocol designs, there has been little discussion of the safety perspective of basket trials. Should the DMC be broken down by cancer type, or should one large committee monitor all the sub-studies? How would DMC(s) be alerted by a safety signal from one sub-study, and react properly, without overreacting, to the other sub-studies? How will interim analysis results from some sub-studies impact the entire master protocol? Additional questions may be considered.
In this session, the presenters will share their thoughts from both industry and regulatory perspectives on their experiences implementing safety monitoring strategies in basket trials, and what they have learned about improving the organization of DMCs and the appropriate interpretation of data.
Mobile devices with apps are increasingly common and are starting to be used in clinical investigations for many purposes. Several FDA guidances relate to this, including Use of Electronic Health Record Data in Clinical Investigations (July 2018), Computerized Systems Used in Clinical Investigations (May 2007), Mobile Medical Applications (February 9, 2015), and Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims (December 2009). Potential subjects use apps to locate clinical investigations. Entities that pursue clinical investigations (e.g., pharmaceutical companies, CROs, private and public research groups) use apps and/or mobile devices with apps to find participants (recruitment), to assess eligibility, and to collect clinical investigation data. They are promoted as reducing timelines, finding the right participants, and obtaining more accurate data for scales, PROs, etc., in the sense of real-world, real-time data from the participants themselves. This is happening across therapeutic areas and in all phases of clinical investigations. Statisticians for clinical investigations need to understand how these tools can be helpful and what the challenges are. Some devices bring in almost continuous data, and there is a need to decide how to use these data. Some devices may have the capability to bring in data not directly related to the investigation, and there is a need to determine the value of these data for research. We propose to invite individuals who have experience with mobile devices and apps in clinical investigations to discuss their value, challenges, and impacts on all aspects of clinical investigations, such as design, assessments, and analyses, as well as individuals who understand the guidances.
Time-to-event outcomes are often used as the primary endpoint for clinical trials in many disease areas. Most randomized controlled trials with a time-to-event outcome are designed and analyzed using the log-rank test and the Cox model under the assumption of proportional hazards (PH). The log-rank p-value evaluates the statistical significance of the treatment effect, and the hazard ratio (HR) from the Cox model quantifies that effect. The log-rank test is most powerful under PH. In practice, however, non-PH patterns are often observed in clinical trials; in particular, patterns of delayed treatment effects have recently been observed across immuno-oncology trials. To mitigate the resulting power loss, an increase in sample size and/or a delay in study readout is needed, which often delays the availability of the therapy to patients with unmet medical needs. Alternative tests and estimation methods under non-PH for the primary analysis could increase the probability of success, shorten the time to bring new treatments to patients, and provide a more accurate description of the treatment effect. In this session, speakers from industry, health authorities, and academia will propose novel methods for analyzing time-to-event outcomes under non-PH, compare the operating characteristics of different methods, and discuss statistical considerations in the design, conduct, and analysis of clinical studies when non-PH is suspected or has been observed. The presentations and discussion in this session will generate new ideas, create common ground in ongoing discussions, and hopefully provide guidance for future clinical research.
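One frequently discussed alternative summary under non-PH (not specifically named in this abstract) is the restricted mean survival time (RMST), the area under the Kaplan-Meier curve up to a truncation time, which remains interpretable even when the hazard ratio does not. A minimal pure-Python sketch, assuming no tied event times and made-up data:

```python
# Kaplan-Meier estimate plus restricted mean survival time (RMST) up to tau.
# Illustrative sketch: assumes one subject per time point (no tied times).

def km_rmst(times, events, tau):
    """Area under the Kaplan-Meier curve up to the truncation time tau.

    times:  follow-up times; events: 1 = event observed, 0 = censored.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, prev_t, prev_s, rmst = 1.0, 0.0, 1.0, 0.0
    for t, event in data:
        if t > tau:
            break
        if event:  # the KM curve steps down only at event times
            rmst += prev_s * (t - prev_t)       # area of the flat segment
            surv *= (1.0 - 1.0 / at_risk)       # KM multiplier (no ties)
            prev_t, prev_s = t, surv
        at_risk -= 1                            # censored subjects leave risk set
    rmst += prev_s * (tau - prev_t)             # final segment up to tau
    return rmst

# Made-up data: five subjects, all with observed events at times 1..5.
print(km_rmst([1, 2, 3, 4, 5], [1, 1, 1, 1, 1], tau=5))  # 3.0
```

Comparing RMST between arms (a difference in areas up to tau) gives a treatment-effect summary whose meaning does not depend on the PH assumption.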
Machine learning (ML) has an increasing number of applications to drugs, devices, and other areas of health care. ML has been used for: analyzing quantitative structure–activity relationship (QSAR) or absorption, distribution, metabolism, excretion, and toxicity (ADMET) models for drug discovery (Panteleev et al., 2018); identifying skin cancer from images (Stanford AI Lab); recognizing the locations of transcription start sites (TSSs) in a genome sequence (Libbrecht and Noble, 2017); text-mining for pharmacovigilance (Cocos et al., 2017); and predicting adverse reactions using FAERS data (Chen, 2018).
Statisticians and ML developers share many common goals, such as prediction, classification (supervised learning), and clustering (unsupervised learning). Statistics and ML also share many common techniques and methods. However, ML models cannot be understood in the same manner as traditional statistical models; for example, the concept of building a machine-learned structure based on observed data differs from the concept of modeling based on a pre-specified model structure. Machine learning involves a different approach to data analysis than many statisticians are used to, yet statisticians' expertise and knowledge regarding uncertainty, inference, and trial design are invaluable in developing and evaluating its use in the medical arena. This session invites experienced academic, industry, and regulatory speakers to talk about recent developments and applications of ML, and the roles statisticians can play.
Highlights: Dr. Grace Kim will present an application using a quantitative computer-aided diagnosis score from volumetric HRCT scan images as a clinical study primary endpoint. Dr. Youran (Ryan) Qi will present a new framework for simulating a Phase 3 clinical trial based on Phase 2 clinical trial data and real-world data, with an innovative deep learning method to predict Ctrough and the treatment effect in the Phase 3 trial. Dr. Jae Joon Song will present a project that uses machine learning for generating real-world evidence to monitor changes in prescription opioid use and guide proactive pharmacovigilance of drug abuse.
Since the establishment of the Best Pharmaceuticals for Children Act (BPCA) in 2002 and the Pediatric Research Equity Act (PREA) in 2003, there has been significant progress in pediatric drug development. However, substantial challenges still exist, such as logistical, technical, and ethical barriers. Specifically, children are vulnerable, cannot consent for themselves, and may not respond to medications in the same way as adults. Moreover, in pediatric settings there are often no known active comparators, parents are reluctant to put their children at risk, and the pediatric population generally has low disease prevalence. These challenges translate into pediatric studies with small sample sizes and without suitable control groups.
With the realization of these challenges and the consideration of having adequate evidentiary standards for pediatric drug development, it is critical that we incorporate innovative statistical design and analysis methods to develop needed pediatric drugs. Based on prior successful clinical trials and statistical research, many approaches have been proposed to tackle the challenges. These include the application of two-stage re-randomization designs, the utilization of historical control information to reduce the size of the placebo arm, Bayesian designs and analyses, and basket designs to enroll pediatric patients with multiple indications having a single targeted biomarker, etc.
In this session, three speakers are invited to share their experience with innovative designs and advanced statistical methodologies in pediatric drug development. A presentation from FDA will summarize the statistical designs applied or proposed for pediatric trials; an academic presenter will discuss a Bayesian sequentially monitored trial that allows early stopping for efficacy or futility; and a representative from industry will share research and experience on using a Bayesian design through a case example.
This session will bring together FDA and industry clinicians and statisticians who have worked on the Getting the Questions Right (GTQR) series, based on applying ICH E9(R1) to design clinical trials with clinically meaningful endpoints acceptable to regulatory authorities. Most organizations rely on their statisticians to use the "best approaches". A randomized, double-blind, placebo-controlled study is accepted as the gold standard. Randomization, however, does not protect from bias due to events that occur after randomization, e.g., death, discontinuation of treatment, treatment switching, taking rescue medication, or missing data due to various reasons. At present, these post-randomization events are dealt with implicitly, based on choices made about data collection and statistical analysis.
Analysis specifications related to key endpoints that were not pre-specified in the protocol are viewed with suspicion by regulators. To obtain greater transparency and clarity from regulators, the Draft ICH E9(R1) Addendum, "Statistical Principles for Clinical Trials: Estimands and Sensitivity Analysis in Clinical Trials", was issued in June 2017.
An estimand includes: (a) the patient population targeted by the scientific question; (b) the variable (or endpoint), obtained for each patient, that is required to address the scientific question; (c) the specification of how to account for intercurrent events to reflect the scientific question of interest; and (d) the population-level summary (e.g., means) for the variable, which provides, as required, a basis for comparison between treatment conditions.
The GTQR series conducted as 6 Webinars in 2018 brought together multiple stakeholders to advance the understanding and implementation of Estimands. This session will share key learnings that attendees can apply to improve the design and analysis of clinical trials.
In the recent FDA statement on its new strategic framework for real-world evidence (https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm627760.htm), the Sentinel system was highlighted as a good example of using RWE for regulatory decision making. In this session, we will examine the structure, technology, and statistical methodologies that have enabled Sentinel's success. We will also examine safety evaluation as a continuum from RCT to RWE, and discuss how the FDA RWE framework can be implemented in holistic safety evaluation. We will invite speakers from the US FDA and members/advisors of the ASA BIOP section safety working group.
Safety data from various sources present many challenges with regard to curation, analysis, interpretation, and reporting. Safety outcomes have high variability in measurement and are multidimensional and interrelated in nature. Traditionally, safety data in drug research have come primarily from clinical trials using structured data sources. Newer sources of safety data within and outside clinical trials, including spontaneous reporting systems, real-world, and social media data sources, have heightened the need to identify new approaches to analyze and present these data in an insightful way. Additionally, the availability of high-speed computing and the influx of digital sensors in the premarketing and postmarketing arenas have led to additional sources of safety data and readily available tools and statistical techniques. Visual analytics blends data visualization with statistical and data mining techniques to create visualization modalities that help researchers make sense of safety data, with emphasis on how computation and visualization complement each other for effective analysis. This session will discuss some of the independent and collaborative efforts, and open-source and online tools, that can be leveraged for visual analytics of safety data from clinical trials, spontaneous reporting systems, social media sources, and electronic medical and healthcare records.
Traditional survival analysis, featuring time-to-event endpoints analyzed with the log-rank test, has been the gold standard in the evaluation of long-term safety or clinical outcome endpoints. However, it has been recognized that this convention has a few drawbacks. First, when multiple events are of interest, time to the first event is usually used so that traditional survival analysis can be applied with little to no modification. All events are then treated equally, and the relationships among the events are not taken into account in the analysis; this may mask the actual effect or risk by ignoring the frequency and number of a subject's later events. Moreover, events are rarely viewed with equal importance by patients and clinical practitioners: different weights should be assigned to the events of interest and incorporated into the analysis, while all events of interest are still considered. In addition, the log-rank test, as a non-parametric method, may not be the most powerful tool for comparisons between treatment groups; other methodologies, in which censoring contributes information to the tests, have been investigated for use in time-to-event evaluations. This session will take another look at the objectives of these long-term safety or clinical outcome trials and propose alternative endpoints and associated methodologies to use the data collected in a clinical trial more efficiently. Case examples will be presented for each proposal. Advantages, disadvantages, past experiences, and future guidance will also be illustrated by both regulatory and industry presenters.
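One methodology in the spirit described above (not named in the abstract) is the win ratio, which compares every treated patient with every control patient on a clinically prioritized hierarchy of outcomes, so that the most important events dominate while all events contribute. The hierarchy and data below are hypothetical.

```python
# Minimal win-ratio sketch for a prioritized composite endpoint. Each patient
# is a tuple of outcomes ordered from most to least clinically important,
# where larger values are better (e.g., a later death time). The data and the
# two-level hierarchy are hypothetical.

def win_ratio(treatment, control):
    """Wins / losses over all treatment-vs-control patient pairs.

    For each pair, outcomes are compared in priority order; the first
    non-tied outcome decides the pair. Assumes at least one loss occurs.
    """
    wins = losses = 0
    for t_pat in treatment:
        for c_pat in control:
            for t_val, c_val in zip(t_pat, c_pat):
                if t_val > c_val:   # treatment patient fares better
                    wins += 1
                    break
                if t_val < c_val:   # control patient fares better
                    losses += 1
                    break
                # tie on this outcome: fall through to the next priority level
    return wins / losses

# (death time, time to first hospitalization); larger = better on both
trt = [(36, 10), (24, 30)]
ctl = [(24, 12), (30, 20)]
print(win_ratio(trt, ctl))  # 3.0
```

Unlike a time-to-first-event analysis, the lower-priority outcome still decides pairs that are tied on the most important one, so no collected event is discarded.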
Although randomized controlled trials represent the gold standard for the approval of a medical product, they have logistical and analytical limitations, in addition to ethical concerns (such as in rare diseases), and may not reflect the actual use of medical products in a real-world setting. Given that clinical trials have experienced increased cost, time, and design complexity, modernization of the product development process is essential. Exploring and understanding the roles of historical controls and real-world data is one such area. Historical control data have long been used in product development; however, the temporal effect of historical controls and the use of publications for extraction of historical controls have made their use controversial. Meanwhile, since the 21st Century Cures Act was signed into law, FDA has been launching initiatives to fulfill the legislation. One of its provisions (3023) requires FDA to evaluate the potential use of real-world evidence to help support the approval of a new indication for a previously approved product and to satisfy post-approval study requirements. FDA regulates therapeutic treatments including biologics, devices, and drugs. Because of their different modalities, the considerations for clinical data collection differ for drugs, biologics, and medical devices. For example, drugs have been approved based on data from registry-like case series and on registry data as external controls, while registry data have been used to update the long-term efficacy of vaccine products. Recognizing that work is ongoing, it is valuable to understand FDA's experiences and approaches in the use of historical controls and real-world data in regulatory decision making and medical product approval.
In this session, we invite the following speakers from FDA Centers (CBER, CDER, and CDRH) to present their current experiences and approaches: • Elizabeth Teeple, PhD, Center for Biologics Evaluation and Research (CBER) • Pallavi Mishra-Kalyani, PhD, Center for Drug Evaluation and Research (CDER) • Yun-Ling Xu, PhD, Center for Devices and Radiological Health (CDRH)
Although biomedical technology has advanced, chronic diseases such as Alzheimer's disease, autoimmune diseases, and diabetes remain difficult to treat. While researchers continue to search for effective treatments for these diseases, the long duration and slow progression of chronic diseases raise many challenges for the design and analysis of chronic disease clinical trials. For example, chronic disease trials require a long treatment period, leading to a high percentage of missing data and concerns about the validity of statistical analysis models; on the other hand, if the efficacy of a drug for a chronic disease is only studied in relatively short-term clinical trials, can we, and how would we, extend the short-term findings to the long term? What are clinically meaningful efficacy endpoints, and what are appropriate statistical methods to analyze them?
In this session, we will discuss the statistical issues in chronic disease clinical trials. Our first speaker is Professor Michael Donohue from the University of Southern California. He is also a member of the American Statistical Association Alzheimer’s Disease Scientific Working Group. He will discuss estimands and estimation methods for Alzheimer’s disease clinical trials. Our second speaker is Dr. Sue-Jane Wang from CDER/FDA. She will talk about statistical approaches to biomarker endpoints in the context of chronic disease clinical trials with examples. Dr. James Hung from CDER/FDA will be the discussant for this session.
The discovery and development of medical products generates large amounts of data, from genomic data and patient-level data to data from large health care networks. How to better extract and exploit knowledge from these data is a question equally challenging for industry, academia, and regulatory agencies. The rapid development of sophisticated data visualization and analytical techniques provides powerful tools for data scientists and statisticians to examine data in ways beyond traditional approaches. With advances in computing techniques and software, multiple software packages, languages, and tools are available for data exploration and visualization, including SAS JMP and R Shiny. These tools make it possible to explore large amounts of data, create meaningful and dynamic visuals, identify trends and gain deep insights, and make inferences and predictions with modern computing techniques. In this session, we will bring three experts to elaborate on new tools for tackling modern-day data analysis problems in the biopharmaceutical and medical device fields. The speakers will share their visions of these new advances and demonstrate how the tools are integrated into the operation of drug development and are making a great impact on the way we see and understand data. Douglas Robinson from Novartis will discuss how to use dynamic displays to maximize the value of data, Zhiheng Xu from FDA will discuss data mining and visualization in the regulation of medical devices, and Zachary Skrivanek from Eli Lilly will share recent work from Lilly’s Advanced Analytics Hub.
Pediatric drug development faces substantial clinical, technical, logistical, and ethical challenges. Since the establishment of BPCA (Best Pharmaceuticals for Children Act) and PREA (Pediatric Research Equity Act), both the pharmaceutical industry and FDA have devoted considerable effort to improving drug development for pediatric patients. As current approaches to pediatric drug development largely lag behind, FDA and European regulators are adopting a more proactive approach and requiring sponsors to discuss pediatric requirements earlier in the drug development process. Advance planning and collaboration among regulatory, formulation, toxicity, PK/PD, statistical, and clinical teams are critical for the success of pediatric drug development. Many key questions define the strategic plan, such as when to start enrolling pediatric patients, how to select dose formulations and dose regimens, and how to move from older children to younger children. Addressing these key questions requires rigorous planning and innovative thinking in all aspects of clinical development. Statisticians play a critical role in the entire process, often driving decisions on study design, dose selection, extrapolation plans, and sample size justification. To improve the process and to save time and effort, it is important for statisticians and clinicians in both regulatory agencies and industry to come together to exchange knowledge and share experiences. In this session, speakers from FDA and the pharmaceutical industry will share their insights on how to address some of the key questions for strategic planning and execution of pediatric clinical trials.
The draft ICH E9(R1) Addendum on “Estimands and Sensitivity Analysis in Clinical Trials” describes a structured framework that includes the specification of an estimand (i.e., the “treatment effect to be estimated”) in the presence of intercurrent events, a main method of estimation (estimator), and sensitivity estimators to explore the robustness of inferences from the main estimator to deviations from its underlying assumptions. This session will explore various advances in statistical methodology motivated by the estimand framework and the five potential strategies for accounting for intercurrent events (treatment policy, composite, hypothetical, principal stratification, and while on treatment). This includes methods for estimating the treatment difference for subjects in the trial who can adhere to one or both treatments, based on the potential outcomes framework, as well as an alternative framework based on true outcomes. In addition, as the Addendum places the missing-data problem in the broader context of intercurrent events, sensitivity analyses should also be considered in this broader context, rather than merely as sensitivity to missing data assumptions. This session will also share methods for addressing sensitivity to the assumptions made within each of the five general strategies for accounting for intercurrent events.
As immunotherapy becomes a key therapeutic pillar in oncology, it raises unique issues worth special consideration in clinical trial design, such as biomarkers, combination strategies, and endpoints. For example: 1) Biomarker and combination strategy: how do we design confirmatory trials in a manner consistent with modern tumor biology for monotherapy and combinations? 2) Endpoint: how do we design trials that account for the special characteristics of pseudo-progression and delayed clinical effect? And how do we design trials when there may not be a good early endpoint to predict clinical benefit, e.g., in maintenance trials?
In this session, we will specifically discuss recent innovations in confirmatory clinical trial designs to increase operational efficiency and optimize patient outcomes in the context of development of cancer immunotherapies.
Classic phase II trials are designed to evaluate a single treatment in patients with a particular cancer type, and Simon’s two-stage design has been popular for this purpose. However, a change in emphasis in oncology drug development has occurred that involves the use of tumor genomics to guide molecularly targeted drugs. We may now have a trial that tests a new therapy across many disease types simultaneously, where all patients carry the same mutation: a new class of trials in which the drug is tested simultaneously across subgroups defined by different tumor types (or baskets). In this design, we need to answer “Does the drug work in any subgroup?” and “Does the response differ between subgroups?” The drug may have a similar response across all subgroups, or may be active in some subgroups and inactive in others. An advantage of basket designs is the ability to share, or borrow, information across subgroups. Statistical models for phase II basket trials need to consider the heterogeneity between subgroups, the sample size, and control of Type I and II errors. One design is an aggregation design [Cunanan, et al, 2017]: in Stage 1, determine the response in each subgroup and whether the subgroups are homogeneous; in Stage 2, either continue allocating to the aggregate group and perform a single test for efficacy, or continue only in selected baskets and perform a separate efficacy test for each basket. Bayesian models exploit the similarity between baskets (subgroups) and borrow information across them [Simon et al., Neuenschwander et al.]. One such model, EXNEX (exchangeability EX and non-exchangeability NEX), allows borrowing between subgroups while avoiding overly optimistic borrowing for extreme subgroup outcomes. In this session, we will review these methods, address the necessary sample sizes and control of Type I and II errors, and discuss how these methods are consistent with the FDA Guidance for Master Protocols [Sept 2018].
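The borrowing idea above can be illustrated with a deliberately simplified sketch (not the EXNEX model itself): each basket’s response rate is shrunk toward the pooled rate by a fixed number of prior pseudo-observations. The function name and the `prior_n` tuning value are illustrative assumptions.

```python
import numpy as np

def shrinkage_estimates(responses, n, prior_n=10.0):
    """Shrink per-basket response-rate estimates toward the pooled rate.

    A simplified stand-in for hierarchical borrowing: each basket's raw
    rate is pulled toward the pooled rate with strength prior_n
    (pseudo-observations). EXNEX goes further by mixing an exchangeable
    component with a basket-specific (non-exchangeable) one.
    """
    responses = np.asarray(responses, dtype=float)
    n = np.asarray(n, dtype=float)
    pooled = responses.sum() / n.sum()
    return (responses + prior_n * pooled) / (n + prior_n)

# Four baskets, 20 patients each: two inactive (10%) and two active (30%).
rng = np.random.default_rng(7)
true_rates = np.array([0.1, 0.1, 0.3, 0.3])
n = np.full(4, 20)
x = rng.binomial(n, true_rates)
raw = x / n
borrowed = shrinkage_estimates(x, n)
```

Note the trade-off this makes visible: borrowing moves every basket toward the pooled rate, including genuinely extreme baskets, which is exactly the behavior EXNEX is designed to temper.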
This session proposal is submitted on behalf of the ASA BIOP RWE Scientific Working Group.
Data from “real world” clinical practice and medical product utilization – outside of clinical trials – are regarded as an increasingly pragmatic source of evidence generation that holds high potential to increase efficiency and improve clinical development and life cycle management of medical products.
Robust RWE will not only leverage an increasing amount of real-world data, but also weave together different sources of data, such as clinical data, registries, and electronic health records, to bridge the gaps between efficacy and effectiveness. However, many challenges remain. We will propose a general framework for the use of RWE whose key elements include, but are not limited to: • define the scientific question in a regulatory-specific context, • identify potential real-world data (RWD) sources, • evaluate RWD quality and standards in a fit-for-purpose manner, • determine the clinical study design according to available RWD, • specify appropriate analytic approaches with consideration of advanced analytics, and • analyze and interpret the study outcomes while being aware of the study design’s strengths and limitations.
The regulatory, scientific, and ethical issues in using RWE have yet to be fully understood and addressed. These issues present ample interesting research topics for biostatisticians, such as how to ascertain outcome measures, how to mitigate biases and potential confounding in both study design and analysis, how to understand and define appropriate estimands, and how to obtain a clear understanding of how the different data sources and their quality affect study design, analysis, and interpretation.
In this session, invited speakers will discuss and provide insight on the above challenges and opportunities. Potential speakers and discussant will include experts from regulatory agencies, academia, and industry.
FDA CDRH defines artificial intelligence, under digital health, as a device or product that can imitate intelligent behavior or mimic human learning and reasoning. Artificial intelligence techniques such as machine learning, neural networks, and deep learning have been used in medical device areas such as medical imaging and biomarker development. AI serves as a critical component in many innovative medical devices, yet aspects of it remain opaque. To gain a better understanding of this emerging technology and its benefits, challenges, and risks to medical decision making, this session intends to share thoughts and generate discussion among professionals and stakeholders from academia, industry, and FDA.
Therapeutic protein products can elicit drug-specific immune responses in people receiving the protein therapeutic. Because anti-drug antibodies (ADA) may negatively affect the safety and efficacy of protein therapeutics, detection and management of ADAs are essential components of drug development. A key component of ADA assay development is determination of the cut point, by which samples are deemed either positive or negative. Although FDA published draft guidance in 2019 on the development and validation of ADA assays, in which a statistical procedure for cut point analysis is described, procedures for determining cut point values are far from settled and remain the subject of vigorous debate. In this session, three ADA assay experts, representing industry and regulatory perspectives, will together provide fresh insight on the latest advances in ADA assay cut point determination.
Utilizing observational studies to generate real-world evidence and support regulatory decision-making is of increasing interest. Due to their non-randomized nature, the absence of confounding cannot be assumed in observational studies when the association between a given exposure and a given outcome is investigated. Failure to account for confounding in the study design and statistical analysis can lead to biased results and ultimately to incorrect inference. Meanwhile, observational studies and statistical models rely on assumptions, which can range from how a variable is defined or summarized to how a statistical model is chosen and parameterized. The validity of all inferences from any analysis depends on the extent to which these assumptions are met. In this session, speakers will discuss methods to assess the effect of confounders and minimize the resulting biases on study results. Sensitivity analyses that alter underlying assumptions to evaluate the robustness of results will also be discussed.
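As a toy illustration of the confounding problem described above, the following sketch simulates a single confounder with a known propensity score and compares a naive treatment-arm contrast to an inverse-probability-weighted one. All variable names and numeric values are illustrative assumptions, not methods attributed to the session speakers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Confounder z drives both treatment assignment and outcome.
z = rng.normal(size=n)
propensity = 1.0 / (1.0 + np.exp(-z))          # true P(T = 1 | z)
t = rng.binomial(1, propensity)
y = 2.0 * t + 3.0 * z + rng.normal(size=n)     # true treatment effect = 2

# Naive contrast ignores z and absorbs its confounding effect (biased high).
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse-probability weighting (Hajek form) reweights each arm so the
# confounder distribution matches the full population; the estimate
# lands near the true effect of 2.
w1 = t / propensity
w0 = (1 - t) / (1 - propensity)
ipw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()
```

In practice the propensity score is unknown and must itself be estimated from covariates, which is one reason the model and design assumptions discussed in this session matter.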
Structured benefit-risk assessment in human drug evaluation has been the subject of many conference sessions, publications, and books. Quantitative assessment of benefit versus risk impacts on public health is a key component of sponsor NDA/BLA submissions and FDA’s regulatory review. Given that benefit-risk is of key importance to stakeholders, this session highlights examples of how benefit-risk planning and analysis affects sponsor and regulatory decisions. Such decisions include whether to develop a medicine, to continue development as the medication profile emerges, to approve a medicine, and/or to issue a warning or modify labeling. How has benefit-risk assessment affected decision-making within sponsor companies? How has it affected the approach to making decisions in CDER? What are best practices in benefit-risk planning, analysis, and communication to ensure effective and transparent decision-making? This session will demonstrate tangible ways that benefit-risk evaluation has contributed to, and can effectively contribute in the future to, medicines development and regulatory decision-making. Presentations will refer to case studies and review industry and regulatory perspectives. The session will conclude with a discussion of key challenges and best practices for benefit-risk approaches in decision-making. Presentations:
1. Advancing benefit-risk assessment for human drug review – Sara Eggers, Office of Strategic Programs, CDER, FDA
2. Applications of Benefit-Risk Assessment and Patient Preferences to Inform Decision Making – Drug Development, FDA Advisory Committee, and Regulatory Submission Examples – Eva Katz and Rachael DiSantostefano, Department of Epidemiology, Janssen R&D, LLC.
3. Best practices for benefit-risk evaluation to ensure effective and transparent decision-making – Gregory Levin, Office of Biostatistics, CDER, FDA
Historical data come from previous trials with a similar setting and can provide information relevant to the research questions of the current trial. Historical data borrowing uses historical information in both the design and the analysis of a new clinical trial. In the design stage, it can reduce the number of patients and hence reduce costs and timelines. In the analysis stage, it can improve the precision of the estimates. Therefore, historical data borrowing can increase statistical power for hypothesis testing or reduce the type I error rate, provided the historical information is sufficiently similar to the data from the current trial. On the other hand, heterogeneity often exists among the historical trials and between the current trial and the historical trials, which limits the use of historical data in new clinical trials.
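One widely cited borrowing device is the power prior, which raises the historical likelihood to a power a0 between 0 (no borrowing) and 1 (full pooling). Below is a minimal sketch for a binary endpoint, assuming a Beta(1, 1) initial prior and a fixed a0; the function name and the example counts are hypothetical.

```python
def power_prior_beta(x_cur, n_cur, x_hist, n_hist, a0):
    """Return (alpha, beta) of the Beta posterior for a response rate when
    historical successes/failures are down-weighted by a0 in [0, 1],
    starting from a Beta(1, 1) initial prior."""
    alpha = 1.0 + x_cur + a0 * x_hist
    beta = 1.0 + (n_cur - x_cur) + a0 * (n_hist - x_hist)
    return alpha, beta

# Current trial: 12/40 responders; historical trial: 30/60 responders.
for a0 in (0.0, 0.5, 1.0):
    a, b = power_prior_beta(12, 40, 30, 60, a0)
    print(f"a0={a0}: posterior mean = {a / (a + b):.3f}")
# a0=0.0: 0.310, a0=0.5: 0.389, a0=1.0: 0.422
```

As a0 grows, the posterior mean moves from the current-trial rate (0.30) toward the historical rate (0.50), which is precisely why borrowing helps when the trials are similar and hurts when they are not.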
During this session, presenters from industry, academia, and regulatory agencies will discuss their recent research, in theory and in application, on historical data borrowing in clinical trials. Dr. Ivan Chan, Vice President of Statistics at AbbVie, will present novel Bayesian approaches for historical data borrowing with continuous endpoints and their applications. Professor Matthew A. Psioda from UNC will present the use of patient-level historical data to make historical data borrowing more efficient. Dr. David Ohlssen, Group Head of Statistical Methodology at Novartis, will summarize state-of-the-art statistical methodologies for historical data borrowing in clinical trial design and analysis. A speaker from FDA will present regulatory thoughts and statistical principles for historical data borrowing in clinical trial design and analysis. This novel and important information will benefit audiences engaged in research and practice on historical data borrowing.
Pain, especially chronic pain, affects millions of humans and animals. Jensen (2016) reports that 43% of Americans suffer from some type of chronic pain, accounting for up to $635 billion annually in medical costs and lost productivity. Reid et al. (2013) assert that animals suffer more from pain than do humans because they cannot understand why pain occurs or anticipate pain relief. Our ability to assess pain in a valid and reliable way is essential to meet the growing demand to treat and manage pain more effectively in humans and animals. Dr. Jean Recta (FDA) will examine pain assessment methods in animals relative to advances in pain assessment in humans. She will also discuss strengths and challenges in the use of animal pain assessment tools from a regulatory statistics perspective. Dr. Dottie Brown (Elanco Animal Health) will present on minimizing observation bias and placebo effects in chronic pain studies. Observation bias is of particular concern with subjective outcomes such as owner or veterinarian assessments of pain. Placebo effects may occur due to regression to the mean. Many diseases, particularly chronic ones like osteoarthritis, have waxing and waning signs. Owners are more likely to seek out enrollment in a trial when those signs are at a peak. Over time, even without intervention, these animals will cycle back to their average level of symptom burden or disability, so animals in control groups may show improvement. Dr. Ciprian Crainiceanu (Johns Hopkins University) will discuss wearable computing for characterization of activity when pain occurs in the free-living environment. He will demonstrate the use of novel pattern recognition methods to extract the relevant movement signals, functional data analysis of these signals to quantify participant-specific differences between activity during periods when pain is reported or absent, and population-level methods for characterizing daily patterns of activity as a function of pain levels.
Drug development is becoming increasingly costly due to high attrition rates. To reduce cost and improve the success rate, evidence-based decision making is critical for clinical development. Key decisions in early drug development include, but are not limited to: 1) do we see enough evidence to confirm the drug’s proof of mechanism? 2) do current efficacy and safety support further development? 3) can we modify (stop, accelerate, or enroll more patients in) an ongoing trial with accumulating data? 4) do we have the optimal dose and dose regimen for future studies? Traditional decision making rarely depends on formal quantification and relies mostly on an ambiguous process involving subjective expert opinion. It is therefore critical to develop a quantitative, evidence-based decision making framework that can provide a full picture of benefit and risk. Making good decisions involves three important factors. First, efficient and flexible trial designs allow collection of appropriate data for early adaptations. Second, appropriate fit-for-purpose metrics are needed to allow robust decision making. Third, appropriate statistical methods are critical for the valid statistical inference that robust decision making requires. Numerous novel designs and statistical approaches for objective decision making have been developed in the literature, involving both frequentist and Bayesian approaches. Bayesian approaches have gained significant attention recently due to their inherent flexibility and intuitive interpretation; however, real-life implementations of such designs and analyses are still rare. In this session, renowned speakers from FDA and industry will share their experience in developing novel designs and statistical approaches for robust decision-making in early clinical development. The practical utility of recent approaches, including Bayesian methods, will be discussed, with real-world examples illustrating Go/No-Go decisions.
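As a concrete example of a quantitative decision rule of the kind described above, the sketch below implements one common posterior-probability formulation of a Go/No-Go criterion based on a target value (TV) and a minimum value (MV). The thresholds, function name, and numbers are illustrative assumptions, not a rule endorsed by the session speakers.

```python
from statistics import NormalDist

def go_no_go(post_mean, post_sd, tv, mv, p_go=0.80, p_stop=0.10):
    """Posterior-probability Go/No-Go rule (illustrative thresholds).

    Go when the treatment effect is likely to exceed the target value (TV),
    No-Go when it is unlikely to even exceed the minimum value (MV),
    and Consider otherwise. The posterior is assumed normal.
    """
    post = NormalDist(post_mean, post_sd)
    p_exceed_tv = 1.0 - post.cdf(tv)   # P(effect > TV | data)
    p_exceed_mv = 1.0 - post.cdf(mv)   # P(effect > MV | data)
    if p_exceed_tv >= p_go:
        return "Go"
    if p_exceed_mv <= p_stop:
        return "No-Go"
    return "Consider"

print(go_no_go(5.0, 1.0, tv=3.0, mv=1.0))   # -> Go       (P(effect > TV) ~ 0.977)
print(go_no_go(0.5, 1.0, tv=3.0, mv=1.0))   # -> Consider
print(go_no_go(-2.0, 1.0, tv=3.0, mv=1.0))  # -> No-Go    (P(effect > MV) ~ 0.001)
```

The point of such a rule is that the "Consider" zone makes the ambiguity explicit, rather than leaving borderline evidence to unstructured expert debate.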
Natural history control groups have been used in orphan product clinical development programs for treatment effect assessment for decades. The use of such groups in common disease development programs, however, is very limited. With the advancement of medicine, more and more medical products are on the market and available to patients. This inevitably creates tough competition for patient enrollment in clinical trials and difficulties in other operational considerations, and may ultimately delay approvals of new molecular entities that could be more effective than existing medical products. Recently, interest in making greater use of historical or external data has attracted much attention. It is well recognized that historical and external data can potentially improve the efficiency of a clinical trial. Besides using subject-level data from a historical data reservoir in an attempt to mimic a randomized parallel-group trial, researchers have realized there is much more potential in historical data utilization. For example, through meta-analysis of historical data, one can choose more sensitive endpoints and primary analysis models. Moreover, instead of using subject-level data, a Bayesian framework can be introduced to leverage historical data at the population level. This session will showcase a few real case examples that use historical data beyond choosing a matching historical control. New methodologies and trial designs will also be discussed with case examples and/or simulations. Both industry and regulatory experiences will be shared, followed by lessons learned and future guidance.
The transition to precision medicine approaches in cancer has sparked a much-needed shift in the design and implementation of clinical trials. For example, umbrella trials are frequently used to evaluate the efficacy of targeted treatments in groups of patients with the same cancer type, while basket trials evaluate the effectiveness of a drug based on its underlying mode of action rather than strictly on the specific form of cancer it was intended to treat. Recently, a promising new approach in precision medicine is developing therapies for patients with specific molecular characteristics, agnostic to cancer site. In this approach, rather than requiring separate development programs for each disease site, development is based on biomarkers irrespective of organ site or histology. For example, the Food and Drug Administration (FDA) approved pembrolizumab, a programmed death 1 (PD-1) inhibitor, for the treatment of adult and pediatric patients with unresectable or metastatic, microsatellite-instability–high (MSI-H) or mismatch-repair–deficient (dMMR) solid tumors, regardless of tumor site or histology. There are many statistical issues in this new precision medicine approach from both device and therapy perspectives, yet these issues are less known to the community. In this session, experts from FDA, the device and pharmaceutical industries, and academia will discuss statistical issues and methods in clinical trial design and data analysis for this emerging precision medicine approach.
According to a recent review article by Tang et al. [Nature Reviews Drug Discovery 17, 783-784 (2018)], by the end of October 2018 the number of immune-oncology agents under development had increased to as many as 3,394, the majority of which are in the preclinical or phase 1 stage. Despite such a concentration of resources in this explosively developing field, the statistical methodology for early oncology clinical development, on the other hand, has not been fully optimized to embrace the special needs and challenges of immune-oncology. Wages et al. summarized the main challenges of early-phase study design for immunotherapies from a statistical perspective [Journal for ImmunoTherapy of Cancer 6, 81 (2018)]. Late-onset toxicities, drug combinations, novel clinical endpoints, and design of expansion cohorts were identified as the primary issues for phase 1 development, among others. In August 2018, a draft guidance released by the FDA, “Expansion Cohorts: Use in First-In-Human Clinical Trials to Expedite Development of Oncology Drugs and Biologics”, emphasized challenges including collecting and reporting new safety information, confirming the recommended phase 2 dose, and evaluating preliminary anti-tumor activity. In this session, we will invite speakers from academia, the regulatory agency, and industry with extensive experience in immune-oncology early development to share their insight and innovation on several key issues. Opinion leaders will join the panel discussion to summarize the challenges and brainstorm ideas to overcome them. Not only can these conversations be of practical value for statisticians working in early-stage immune-oncology to learn the state-of-the-art approaches, but they may also inspire thinking differently and being innovative when new challenges emerge.
Bayesian approaches have attracted increasing interest from clinical researchers because they provide an analytical pathway to effectively utilize available information and enhance efficiency and precision in clinical trials.
Bayesian designs have been widely used in early phase clinical trials, and they are also attractive in late phase efficacy trials, especially in rare disease areas and pediatric trials with low disease incidence. Development can be expedited if we can leverage historical data from previous trials or other populations to augment the efficacy evaluation and reduce the sample size of the current trial. However, several challenges in Bayesian designs need to be carefully considered at the design stage, such as choosing proper prior distributions, handling heterogeneity, and developing computationally feasible approaches.
In this session, innovative methods and practical considerations for Bayesian design in clinical trials will be discussed, covering both methodologies and applications. Speakers from the pharmaceutical industry who are highly engaged in this area will share their experience and research on Bayesian methods in the design and analysis of clinical trials. The speakers will present Bayesian designs for pediatric clinical trials, applications of Bayesian methods in benefit-risk assessment, and Bayesian methods in early development clinical trials. Discussion of these methods, including regulatory perspectives, will follow.
List of invited speakers: • Amarjot Kaur, PhD, Merck & Co., email@example.com, “Bayesian framework for pediatric drug development” • Margaret Gamalo, Eli Lilly & Co., firstname.lastname@example.org, “Applications of Bayesian Methods in Indirect Comparisons of Benefit Risk” • Mani Lakshminarayanan, Complete HEOR Solutions (CHEORS), email@example.com, “Use of Bayesian Methods in Shaping New Early Development Clinical Trials Paradigm”
Session discussant: Frank Harrell, Vanderbilt University, firstname.lastname@example.org
Getting the questions right is a critical part of any scientific pursuit. ICH E9(R1) laid out a good conceptual framework for endpoints using the estimand concept. In clinical development, the concept is important not only for clinical trial design but also for safety and benefit-risk evaluation at the compound level. This session will examine estimands in safety and benefit-risk assessment. Potential topics include, but are not limited to:
• “Challenges of Safety and Dual Benefit-Risk Estimands”: Safety endpoints are multifaceted with frequency, severity, timing, duration and reversibility. They are most often underpowered, unexpected and impossible to impute. Safety estimands are impacted differently by intercurrent events than efficacy estimands. The pairing of safety and efficacy estimands to conduct a benefit-risk assessment is even more challenging. In this talk, the challenge of safety estimands will be discussed first, then dual estimands for BRA will be discussed with examples.
• “Safety Estimands: A Regulatory Perspective”: One of the objectives of FDA CDER’s Office of Biostatistics Safety and Benefit-Risk working group has been to develop best practices for the Office’s evaluation of safety and benefit-risk, including the choice of safety estimands and appropriate methods to evaluate them. This talk will describe the working group’s current thinking regarding safety estimands, and will use an example indication to illustrate the thought process on different considerations when defining and evaluating safety estimands.
• “Creating a Benefit-Risk Estimand from My Drug Program’s Efficacy and Safety Estimands”: This talk will use a real example indication, beginning with a benefit-risk value tree, to discuss how a benefit-risk estimand may be created from the efficacy and safety estimands. The structured thinking of value trees and estimands will demonstrate how benefit-risk assessment may be considered in the planning phase of a clinical program, based on the scientific questions that require elucidation in this example clinical program.
Panel discussants will be invited to comment on this hot area. We hope to trigger more discussion and deeper thinking on the estimand of safety and benefit risk assessment.