Interest in causal inference as a statistical field has grown tremendously over the past decade. While this is partially motivated by the growing availability of non-randomized observational data, it has also been realized that ideas from the field of causal inference provide a useful framework for inference in randomized studies. For example, the ICH E9 addendum on estimands uses the counterfactual viewpoint from causal inference to define treatment effects in clinical trials.
In this training we will outline the basic ideas of causal inference (causal effect, potential outcomes, standardization, and inverse probability weighting). We will also illustrate how it relates to questions and concepts encountered in randomized clinical trials (intent-to-treat, per-protocol analyses, covariate adjustment in regression analyses, among others) and to the strategies defined in the ICH E9 addendum on estimands. While the causal inference framework is in many respects aligned with pharmaceutical statistics traditions, there are also areas where it sheds new light on established practice, which we will outline as well.
1) Introduction to causal inference and potential outcomes (~45min)
2) Causal inference in randomized clinical trials and drug development (~45min)
3) Estimation methods for causal inference: Standardization and inverse probability weighting, including the connection to treatment effect parameters in regression models (~60min; a minimal R sketch of both estimators follows this outline)
4) Application example: Treatment switching (~30min optional)
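To give a flavor of item 3, the following minimal R sketch computes both estimators on simulated data; the variable names and data-generating model are invented for illustration and are not part of the course materials.

    # Simulated data: binary treatment a, binary outcome y, one covariate x
    set.seed(1)
    n <- 500
    x <- rnorm(n)
    a <- rbinom(n, 1, plogis(0.5 * x))            # treatment depends on x
    y <- rbinom(n, 1, plogis(-1 + a + 0.8 * x))   # outcome depends on a and x

    # Standardization (g-computation): model the outcome, average over x
    fit_y <- glm(y ~ a + x, family = binomial)
    mu1 <- mean(predict(fit_y, data.frame(a = 1, x = x), type = "response"))
    mu0 <- mean(predict(fit_y, data.frame(a = 0, x = x), type = "response"))
    mu1 - mu0                                     # standardized risk difference

    # Inverse probability weighting: model the treatment, weight by 1/PS
    ps <- fitted(glm(a ~ x, family = binomial))
    w  <- a / ps + (1 - a) / (1 - ps)
    weighted.mean(y[a == 1], w[a == 1]) - weighted.mean(y[a == 0], w[a == 0])

Under correct model specification, both estimators target the same marginal (counterfactual) risk difference.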
Instructor(s) background: All three instructors have extensive experience implementing the ICH E9 addendum over the past few years (including contributions to health authority interactions) and with how causal inference ideas and techniques can be used to define estimands as well as estimation techniques that reflect clinical questions in randomized clinical trials. The instructors have successfully presented trainings on this topic on multiple occasions.
Akacha, M., Bretz, F., Ohlssen, D., Rosenkranz, G., and Schmidli, H. (2017). Estimands and Their Role in Clinical Trials. Statistics in Biopharmaceutical Research, 9, 268-271.
Degtyarev, E., ..., Bornkamp, B., et al. (2019). Estimands and the Patient Journey: Addressing the Right Question in Oncology Clinical Trials. JCO Precision Oncology, 3, 1-10.
Magnusson, B., Schmidli, H., Rouyrre, N., and Scharfstein, D. (2019). Bayesian inference for a principal stratum estimand to assess the treatment effect in a subgroup characterized by postrandomization event occurrence. Statistics in Medicine, 38, 4761-4771.
Bornkamp, B., and Bermann, G. (2019). Estimating the treatment effect in a subgroup defined by an early post-baseline biomarker measurement in randomized clinical trials with time-to-event endpoint. Statistics in Biopharmaceutical Research, to appear, https://doi.org/10.1080/19466315.2019.1575280
Artificial intelligence (AI), or machine learning (ML), has been used in drug discovery by biopharmaceutical companies for nearly 20 years. More recently, AI has also been used for disease diagnosis and prognosis in healthcare. For the analysis of clinical trial data and the prediction of individual patient outcomes for precision medicine, similarity-based machine learning (SBML) has recently been proposed for oncology and rare disease clinical trials, without the requirement of big data. The course will focus on supervised learning, including similarity-based learning and deep learning neural networks. We will also introduce unsupervised, reinforcement, and evolutionary learning methods. In addition, initiatives and innovative thinking on AI addressing key challenges in the pharmaceutical industry will be discussed. The short course aims at conceptual clarity and mathematical simplicity. R code will be provided with examples for implementation. The course materials are based on the instructor’s book, Artificial Intelligence in Drug Development, Precision Medicine, and Healthcare, forthcoming in February 2020.
The course will cover:
(1) Introduction to AI: Classic Statistics versus AI; Past, Current, and Future of AI in Drug Development, Medicine, and HealthCare
(2) Deep Learning Neural Networks: Convolutional Neural Network (CNN); Recurrent Neural Network (RNN); Long Short-term Memory Networks (LSTMs); Deep Belief Network (DBN)
(3) Similarity-Based Methods: Similarity-Based Machine Learning; Kernel Method; Nearest-Neighbors Method; Support Vector Machine
(4) Overview of Unsupervised, Reinforcement, Collective Intelligence, and Evolutionary Learning Methods
Goals: Attendees will learn common AI methods in drug development and medical research, and will be able to use these methods with R to analyze clinical trial and other data and to interpret the results.
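As a small taste of the similarity-based methods in section (3), here is a minimal R sketch of a k-nearest-neighbors classifier using the class package and the built-in iris data; it is a generic illustration, not the instructor's SBML implementation.

    # k-NN: predict each held-out case from its k most similar training cases
    library(class)
    set.seed(1)
    idx  <- sample(nrow(iris), 100)                    # random train/test split
    pred <- knn(train = iris[idx, 1:4], test = iris[-idx, 1:4],
                cl = iris$Species[idx], k = 5)
    mean(pred == iris$Species[-idx])                   # out-of-sample accuracy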
This one-day short course will cover a variety of sequentially adaptive phase I-II clinical trial designs that use both efficacy and toxicity to optimize dose, the dose pair of a two-agent combination, two doses given in sequence, or dose and schedule. The course will begin with an explanation of fundamental flaws with the conventional paradigm that separates phase I and phase II, followed by an overview of phase I-II designs. The remainder of the course will cover specific designs, with each illustrated by a practical application, including basic design structure, establishing numerical values of design parameters and prior parameters, and computer simulations to establish operating characteristics. Examples will include designs based on either efficacy-toxicity probability trade-offs, elicited joint utilities of efficacy and toxicity, methods for dealing with late onset outcomes, personalized (precision) dose-finding, optimizing molecularly targeted agents, two-agent combination trials, and methods for dealing with drop-outs.
Morning Lectures (Professor Thall)
• Flaws with the Conventional Phase I → Phase II Paradigm
• Phase I-II Trials: Using Both Efficacy and Toxicity
• Efficacy-Toxicity Trade-Off Based Designs
• Utility Based Designs
Afternoon Lectures (Professor Yuan)
• Model-assisted Designs
• Designs with Late Onset Outcomes
• Optimizing Molecularly Targeted and Immunotherapy Agents
• Personalized Dose Finding
The two over-arching objectives of this short course are (1) to show the attendees the many serious flaws with the conventional approach that separates phase I from phase II, so that they may avoid this approach when possible, and (2) to present practical alternative phase I-II designs and methods.
Yuan Y, Nguyen HQ, Thall PF. Bayesian Designs for Phase I-II Clinical Trials. Chapman & Hall/CRC Biostatistics Series. 2016.
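As a simple illustration of the utility-based designs covered in the morning lectures, the following R sketch scores three doses by posterior expected utility, assuming elicited utilities for the four joint (efficacy, toxicity) outcomes, independent beta-binomial models, and invented counts; actual phase I-II designs use richer dose-outcome models.

    # Elicited utilities of the four joint outcomes (illustrative values)
    util <- c(eff_tox = 20, eff_notox = 100, noeff_tox = 0, noeff_notox = 40)
    n    <- c(12, 12, 12)                    # patients treated at 3 doses
    eff  <- c(2, 5, 8); tox <- c(0, 2, 6)    # efficacy / toxicity counts
    post_eff <- (eff + 0.5) / (n + 1)        # posterior means, Beta(0.5, 0.5) priors
    post_tox <- (tox + 0.5) / (n + 1)
    # Posterior expected utility per dose (efficacy, toxicity independent)
    eu <- post_eff * post_tox * util["eff_tox"] +
          post_eff * (1 - post_tox) * util["eff_notox"] +
          (1 - post_eff) * post_tox * util["noeff_tox"] +
          (1 - post_eff) * (1 - post_tox) * util["noeff_notox"]
    which.max(eu)                            # dose with highest expected utility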
Under the 21st Century Cures Act, the FDA is directed to develop a program to evaluate how real-world evidence (RWE) can potentially be used to support approval of new indications for approved drugs or to support/satisfy post-approval study requirements. This brings new opportunities to utilize statistical innovations and advances that are critical to assessing and addressing data quality, as well as establishing causal inference based on real-world data (RWD) for regulatory decision-making. However, designing a valid RWD-based study and drawing inferences from it face numerous challenges, such as confounding, treatment switching, and missing information. Propensity score methods offer powerful and flexible approaches to designing and analyzing RWE studies that can efficiently address the aforementioned statistical challenges. This course will start with a general overview of causal inference and related methods, including matching and inverse probability weighting (IPW). We will discuss how to design and analyze an RWD-based study using these methods under a question of interest that is coherent with the ICH E9 regulatory definition of an estimand. This course will provide hands-on training based on simulated data and implementation code in R. During the training, participants will learn how to (1) efficiently examine the balance of covariate distributions in the target population before and after matching or weighting, (2) fit different causal models, (3) deal with the tails of the PS distribution, and (4) interpret study findings based on the selected causal model. We will then illustrate a case-study example that demonstrates a practical use of IPW for drug safety evaluation; mock SAS code will be provided during the case-study illustration. The course will wrap up with an introduction to some recently developed methodologies, such as overlap weights, which place emphasis on clinical equipoise, and targeted maximum likelihood estimation, which utilizes machine learning algorithms.
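To make workflow items (1)-(3) concrete, here is a bare-bones R sketch on simulated data: fit a propensity model, trim the tails of the PS distribution, form ATE weights, and check balance with weighted standardized mean differences. It is an illustration only, not the course's hands-on material.

    set.seed(1)
    n  <- 1000
    x1 <- rnorm(n); x2 <- rbinom(n, 1, 0.4)            # baseline covariates
    trt <- rbinom(n, 1, plogis(-0.3 + 0.8 * x1 + 0.5 * x2))
    ps  <- fitted(glm(trt ~ x1 + x2, family = binomial))
    keep <- ps > 0.05 & ps < 0.95                      # trim PS tails
    w <- ifelse(trt == 1, 1 / ps, 1 / (1 - ps))        # IPW (ATE) weights
    smd <- function(x, trt, w) {                       # weighted standardized diff.
      m1 <- weighted.mean(x[trt == 1], w[trt == 1])
      m0 <- weighted.mean(x[trt == 0], w[trt == 0])
      (m1 - m0) / sqrt((var(x[trt == 1]) + var(x[trt == 0])) / 2)
    }
    sapply(list(x1 = x1[keep], x2 = x2[keep]),
           function(x) smd(x, trt[keep], w[keep]))     # should be near 0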
Immunotherapy has emerged as a promising treatment option for cancer in recent years. A major challenge in immuno-oncology development is the delayed onset of treatment effects due to the mechanism of immunotherapy, which violates the proportional hazards (PH) assumption. This is often referred to as the non-proportional hazards (NPH) problem. In contrast to the PH assumption, NPH constitutes a broad class of alternative hypotheses. A suitable design for time-to-event data with potential NPH needs to be flexible enough to incorporate the uncertainty about the type of NPH and provide robust inference. Different alternative design and analysis approaches for immuno-oncology trials will be discussed. These include the piecewise log-rank test, weighted log-rank tests, combination tests, and Kaplan-Meier based methods (e.g., restricted mean survival time). We will introduce a new MaxCombo test, which provides robust power under different NPH scenarios. The short course will cover the analysis methodology, a general design framework and strategies, sample size calculation, strategies for interim analysis, evaluation of operating characteristics, and the steps necessary for protocol implementation. All methodologies will be illustrated with real-life examples and implemented with the available R package simtrial.
Outline:
1. Introduction
2. Alternative methods for design and analysis
3. Introduction to the R package simtrial
4. Designing a trial with potential NPH
5. Design using simtrial
6. Group sequential design with potential NPH
7. Summary and discussion
Instructor(s): Dr. Satrajit Roychoudhury is a Senior Director and a member of the Statistical Research and Innovation group at Pfizer Inc. His areas of research include survival analysis and Bayesian methods. Dr. Keaven M. Anderson is a Distinguished Scientist and head of the Methodology Research biostatistics group at Merck. His interests include group sequential design, survival analysis, and applications of multiplicity control.
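As a pointer for the weighted log-rank tests in the outline, base R's survival package already implements the Harrington-Fleming G-rho family via the rho argument of survdiff; note that rho > 0 up-weights early events, whereas the late-event weightings typically used for delayed effects, and the MaxCombo combination itself, require dedicated tools such as simtrial. A minimal sketch on the built-in aml data:

    library(survival)
    survdiff(Surv(time, status) ~ x, data = aml, rho = 0)  # standard log-rank
    survdiff(Surv(time, status) ~ x, data = aml, rho = 1)  # early-event weighting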
Ask a group of statisticians, “What do effective leaders do?” and you’ll hear a sweep of answers: Leaders set strategy; they demonstrate technical competence; they influence and collaborate across functions; they possess strong communication and negotiation skills. Then ask: “What should leaders do?” If the group is seasoned, you’ll likely hear one response: The leader’s singular job is to get results.
The recently endorsed Leadership-in-Practice Committee (LiPCom) of the Biopharmaceutical Section of the ASA seeks to present an interactive, practical leadership workshop. The workshop will introduce real-life scenarios common to statistical practice that require leadership skills to drive results.
Most statisticians don’t enter their careers intending to become leaders, and statistical leadership does not always accompany titles. A statistician becomes a leader by recognizing a problem that matters and turning bold scientific, strategic, or organizational objectives into reality. Statisticians who can do this are increasingly valued in today’s data-driven organizations. As stated by 2012 ASA President Bob Rodriguez, “Leadership ability is a prerequisite for the growth of our field because statistics is an interdisciplinary endeavor and our success ultimately depends on getting others to understand and act on our work.”
This half-day workshop will provide a forum to explore multiple dimensions of leadership and expose ways that statisticians can leverage these leadership skills in driving results.
International Council for Harmonisation (ICH) and Food and Drug Administration (FDA) Guidance for Industry (GFI) E9 defines the Bayesian approach as “Approaches to data analysis that provide a posterior probability distribution for some parameter (e.g., treatment effect), derived from the observed data and a prior probability distribution for the parameter. The posterior distribution is then used as the basis for statistical inference.” Bayesian statistics has been widely applied in many areas. In this session, speakers from FDA, industry, and academia will discuss different applications of the Bayesian approach. The FDA speaker will share her experience using Bayesian statistics to evaluate the efficacy of animal cell therapies: with hierarchical models, information across different trials may be combined to increase the power to detect a treatment effect, while power prior models can be used to borrow information from previous trials to increase the power of the current trial. The speaker from industry led the FDA effort for the Guidance for Industry and FDA Staff “Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials” (2010). He will discuss how Bayesian hierarchical models and power priors, in the context of borrowing data from historical studies, impact sample size calculations and the weight of the priors for medical device clinical trials. The speaker from academia will introduce elastic priors and their application in designing clinical trials with adaptive information borrowing. An important feature of the elastic prior is that it is constructed based on, and thus automatically satisfies, a set of information borrowing constraints prespecified by regulatory agencies or researchers. For example, when the difference in treatment effect between the current trial and historical data is greater than a prespecified clinically significant margin, no information should be borrowed; within that margin, information borrowing is appropriate.
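To make the power prior concrete, here is a minimal R sketch for a binomial response rate with a fixed borrowing weight a0, under a conjugate Beta(1, 1) initial prior; all counts are hypothetical, and the hierarchical and elastic-prior approaches discussed in the session generalize this basic idea.

    a0 <- 0.5                     # borrowing weight: 0 = none, 1 = full pooling
    x_hist <- 30; n_hist <- 100   # historical responders / patients
    x_cur  <- 14; n_cur  <- 40    # current-trial responders / patients
    # Power prior: historical likelihood raised to a0, conjugate Beta update
    shape1 <- 1 + a0 * x_hist + x_cur
    shape2 <- 1 + a0 * (n_hist - x_hist) + (n_cur - x_cur)
    qbeta(c(0.025, 0.5, 0.975), shape1, shape2)   # posterior median and 95% CrI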
Regulatory approval of a medical product is usually based on the balance of the benefits and risks that patients may experience while taking the product. However, a quantitative assessment of the benefit-risk profile is always problematic due to the lack of a universally accepted standard for how benefits and risks should be compared. In addition, incorporating patient baseline characteristics in benefit-risk analysis can be challenging, as it may introduce multiplicity due to unplanned subgroup analyses. This session includes four presentations covering a composite endpoint approach for treatment benefit evaluation, Bayesian benefit-risk assessment, longitudinal interval binary recurrent events, and testing for trend in benefit-risk analysis.
For diagnostic devices, new technologies emerge quickly. Next-generation sequencing (NGS) is increasingly used in precision oncology because of its ability to identify many mutations simultaneously. In addition to traditional tissue-based biopsy, liquid biopsy tests have become an attractive alternative. Companion diagnostics (CDx) are tests that provide information essential for the safe and effective use of a corresponding therapeutic product, such as a drug. Multiplexed tumor profiling tests, which may include CDx, assess many biomarkers whose associated clinical evidence spans a range and is constantly changing as new science emerges. In this session, we will discuss the statistical challenges of analytically and clinically validating device performance for these newly emerged technologies.
Adaptive designs have attracted increasing interest from clinical researchers and sponsors because of their flexibility: they allow prospectively planned modifications of one or more aspects of the design based on analysis of accumulated data from participants enrolled in the trial. Trials with adaptive designs are potentially more efficient, informative, and ethical, since they can adjust to information that was not available when the trial began. The 2018 FDA document “Adaptive Designs for Clinical Trials of Drugs and Biologics” provides guidance on the appropriate use of adaptive designs for clinical trials. Even with the regulatory acceptance demonstrated in the guidance, the complexity and limitations of adaptive designs inevitably require sophisticated statistical methods to minimize the chance of introducing bias and to avoid erroneous conclusions.
In this session, we will focus on new frontiers of statistical methodologies for adaptive strategies in clinical trials, including adaptations to patient allocation through new algorithms for covariate-adaptive randomization, adaptive seamless phase 2/3 design using response adaptive randomization, seamless 2-in-1 designs, and adaptive flexible strategies in challenging experimental settings. Speakers from pharmaceutical industry and regulatory agency who are highly engaged in this area will share their recent research experiences.
List of Invited Speakers: Jonathan Hartzel, PhD, Merck & Co., “Covariate-adaptive randomization through algorithms for minimization and the implementation with an R package”; Ivan Chan, PhD, AbbVie Inc., “Accelerating clinical development with seamless phase 2/3 design using response adaptive randomization”; Cong Chen, PhD, Merck & Co., “Adaptive 2-in-1 design and extensions”; Sue Jane Wang, PhD, FDA, “Adaptive flexible strategies in challenging experimental settings”.
Sparse blood sampling occurs when only a few samples can be collected from each subject in a clinical trial. Destructive sampling is the extreme case of sparse sampling in which only one sample can be collected from each subject. Because samples are not collected from each subject at every designed sampling time point, it is not possible to calculate traditional pharmacokinetic (PK) parameters for each subject, such as the maximum blood concentration (Cmax) of a drug and the area under the concentration curve (AUC) from time 0 to the last time point before the first unquantifiable value after Cmax. Cmax and AUC have been the key parameters for evaluating drug bioequivalence (BE), i.e., that a generic drug is equally bioavailable compared to its respective Reference Listed Drug (RLD) in the rate and extent to which the active ingredients are absorbed and become available at the site of drug action. How to design a BE study, and which PK parameters should be used, under sparse sampling has not been well established.
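For reference, with a complete concentration-time profile, Cmax and AUClast reduce to simple per-subject computations, as in the R sketch below with made-up values; it is precisely this computation that sparse or destructive sampling rules out, motivating the pooled and model-based approaches discussed in this session.

    time <- c(0, 0.5, 1, 2, 4, 8, 12)           # hours
    conc <- c(0, 4.1, 6.3, 5.2, 3.0, 1.1, 0.4)  # ng/mL
    cmax <- max(conc)                           # maximum observed concentration
    auc  <- sum(diff(time) * (head(conc, -1) + tail(conc, -1)) / 2)  # trapezoidal
    c(Cmax = cmax, AUClast = auc)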
In this session, we have invited and confirmed three experienced and knowledgeable speakers: one from industry and one from each of two FDA centers. They will tackle the recent challenges of sparse or destructive sampling from their own perspectives. Our speakers will propose innovative solutions for designing BE studies and for analyzing and assessing bioequivalence when traditional PK parameters cannot be calculated for each study subject.
Our speakers have kindly provided their presentation titles below:
• “Some Possibly Useful Thoughts on BE Assessments in the Case of Sparse Sampling” by Martin Wolfsegger, Ph.D., Director, Plasma Derived Therapies & Pharmacometrics, Takeda
• “Permutation Bioequivalence Test under Sparse Sampling and Small Sample Size” by Jing Han, Ph.D., Mathematical Statistician, FDA/CDER
• “Evaluation of Partial AUC in BE Studies using Destructive Sampling Design” by Shasha Gao, Ph.D., Mathematical Statistician, FDA/CVM
Although randomized clinical trials are considered the gold standard for generating clinical evidence, the use of real-world evidence to evaluate the efficacy and safety of medical interventions is gaining interest. Regulatory bodies around the world have put out guidance and position papers on the potential use of real-world data in regulatory settings. In this session, we will outline the key points to consider from various guidelines and position papers, including those from the FDA, PMDA, EMA, and China CDE. We will share a fit-for-purpose example of utilizing real-world data for a regulatory filing. Statistical and operational challenges of implementing the study will be addressed. Best practices for close collaboration between statisticians and observational researchers, leveraging each other's strengths, will be discussed. Speakers will be from industry, academia, and regulatory agencies.
Vaccine development is a lengthy, expensive process that typically takes multiple candidates and many years to produce a licensed vaccine. Conducting vaccine efficacy trials during a pandemic like COVID-19 poses unique challenges. Trials in resource-limited settings may face severe logistical constraints, and transmission can be highly localized and hard to predict. Furthermore, the ethics of a trial in the face of an emergency are complex: in the current high-mortality situation, populations may not accept randomized, controlled trials with placebo groups. Finally, pandemics generate simultaneous demand for vaccines around the world, yet a potential vaccine needs to be brought to people before the next wave begins. Such important public health decisions will therefore most likely be based on a relatively small amount of information on vaccine efficacy. Innovative design and statistical analysis approaches are crucial to extract all possible information on vaccine efficacy for robust decision-making.
This session will focus on the key statistical and regulatory aspects of developing a COVID-19 vaccine during a pandemic. Eminent speakers and panelists from academia, industry, and FDA will present the key design, analysis, and review aspects required to bring the most promising candidates to patients.
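As one small example of the statistical reasoning involved: under 1:1 randomization and equal follow-up, conditional on the total case count, the number of vaccine-arm cases is binomial with p = (1 - VE) / (2 - VE), so an exact binomial interval converts directly into one for vaccine efficacy. The R sketch below uses hypothetical counts.

    x_vacc <- 8; x_plac <- 42                # hypothetical case split
    1 - x_vacc / x_plac                      # point estimate of VE = 1 - RR
    ci_p <- binom.test(x_vacc, x_vacc + x_plac)$conf.int
    rev(1 - ci_p / (1 - ci_p))               # exact 95% CI for VE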
The purpose of this session is to provide insight to statisticians on effective communication of Bayesian clinical trial design and analysis to statisticians, clinicians, regulatory affairs, and regulators to ensure alignment and mutual understanding.
Explaining the value of a Bayesian approach to clinicians, and even to statisticians, can be challenging. The situation is further exacerbated by the myth that regulatory authorities do not accept Bayesian methods. Communication clarity and a custom approach for each audience are needed to succeed in convincing the parties involved.
This session has three parts:
1. “Effective Communication of Bayesian Design and Analysis to Statisticians” - FDA speaker
What design and analysis components should be specified? What operating characteristics should be assessed? How to conduct simulation experiments? How to specify the decision criteria? How to report the findings? References will be made to multiple FDA guidances for industry, relevant literature, and personal experience.
2. “Effective Communication of Bayesian Design and Analysis to Clinicians” - Industry speaker
Effective communication to clinical professionals requires a different strategy than communicating with statisticians. What details should be omitted? What baseline knowledge is useful to clinicians for a clear understanding? How to translate statistical elements in terms of clinical relevance? We will draw on the results of a recent Drug Information Association-Bayesian Scientific Working Group survey assessing Bayesian statistics knowledge, training suggestions, and perspective within the medical community. Furthermore, lessons learned from the experiences in industry and the DIA working group will be discussed.
3. Panel discussion with experts from industry and regulatory agencies
Mathematical models that simulate complex physical processes are widely used in regulatory health. These models are typically made up of large systems of differential equations and are implemented by complex computer code. Examples include pharmacokinetic models, which enable prediction of the absorption, distribution, metabolism, and excretion of chemicals in the body; the EPA's Pesticide in Water Calculator (PWC), which estimates pesticide concentrations in water bodies; and mathematical models for infectious diseases, which help in understanding disease dynamics.
Goals in using mathematical models include: predicting unknown output for known sets of inputs; optimizing the functions by finding representative maximum or minimum values; visualizing the output in relation to a set of inputs; calibrating the models to fit observed sets of inputs; and integrating to obtain the average over sets of inputs. Analyzing data from these models involves manipulating large datasets comprising a large number of inputs related to multiple outputs; in addition, the relationship between sets of inputs and outputs is sometimes unknown.
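As a toy instance of such a model, the R sketch below solves a one-compartment pharmacokinetic system with first-order absorption using the deSolve package; the parameters are invented, and the regulatory models described above are far larger.

    library(deSolve)
    pk <- function(t, y, p) {
      with(as.list(c(y, p)), {
        dGut  <- -ka * Gut                  # first-order absorption from depot
        dCent <-  ka * Gut - ke * Cent      # first-order elimination
        list(c(dGut, dCent))
      })
    }
    out <- ode(y = c(Gut = 100, Cent = 0), times = seq(0, 24, by = 0.5),
               func = pk, parms = c(ka = 1.2, ke = 0.25))
    head(out)   # amount-time profile; concentration = Cent / volume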
In this session, researchers in biomedical research and regulatory health will discuss their experiences using mathematical models, to enable an exchange of ideas when facing data challenges. Speakers include Dr. Brandon Gallas (FDA, CDRH) and Dr. Andrew Miglino (FDA, CVM).
Dr. Miglino is a physical scientist; his work at the FDA involves using computer code to implement models that explore environmental problems. His focus is on molecular dynamics, sorption mechanics, diffusion modeling, and advective-dispersive systems.
Dr. Gallas provides mathematical, statistical, and modeling expertise to the evaluation of medical imaging devices at the FDA. His main areas of research are image quality, computer-aided diagnosis, imaging physics, and the design, execution, and statistical analysis of reader studies.
FDA released a guidance, “Policy for Device Software Functions and Mobile Medical Applications,” on September 27, 2019, covering “mobile apps that meet the definition of a device” and that are intended either to be used as an accessory to a regulated medical device or to transform a mobile platform into a regulated medical device. Mobile medical applications (commonly called mobile medical apps) provide new types of data and statistical challenges in evaluating performance for regulatory decisions. The apps span the gamut from measuring motion and sleep-related disorders to detecting atrial fibrillation, to list a few among many. The associated challenges lie in the sheer size of the data, the complexity of the signals, and the choice of appropriate metrics to evaluate performance. This session will address regulatory experiences with mobile medical apps and statistical challenges in methodology and reporting.
Vaccines have been playing a critical role in public health through the prevention of infectious diseases. An effective vaccine could prevent serious diseases, reduce disease burden on families and communities, or even eradicate diseases (e.g., smallpox). Vaccine research and development have been and will remain important in this ever-changing world. In developing new vaccines and managing the life cycles of existing vaccines, new challenges keep emerging, and innovations follow.
As an integral part of vaccine development, statisticians also come up with innovations in study designs and analysis methodologies. Speakers from academia, industry, and government will share their experiences with some of the following innovations for solving challenges in vaccine development: biomarker assessment (correlates of risk and correlates of protection), especially for first-in-class vaccines; bridging of vaccine efficacy to new settings; multiplicity challenges in multivalent vaccines (multiple serotypes of a pathogen) or due to co-administration with routine vaccines, which results in testing many hypotheses to show non-inferiority of the routine vaccines with vs. without the investigational vaccine; evaluation of effects for new serotypes added on top of an existing vaccine; implementation of real-world data/evidence (e.g., hybrid trial designs); application of estimands in vaccine trials to address intercurrent events; and safety assessment/monitoring.
Potential speakers for the session include:
Dr. Peter Gilbert, Fred Hutch; Dr. Lihan Yan, CBER, FDA; Dr. Judy Pan, Sanofi Pasteur; Dr. Jianing Li, Merck & Co., Inc.
Typical oncology practice often includes not only an initial front-line treatment but also subsequent treatments. For example, acute lymphoblastic leukemia patients may receive hematopoietic stem cell transplantation as a subsequent therapy, or a kidney transplant may be given to patients being treated with dialysis. In clinical trials, subsequent therapy is a nuisance covariate that may complicate the interpretation of the experimental therapy: it is usually non-random and may correlate with the patient's response to the front-line treatment, thereby affecting the outcome of the primary endpoint. Conventional intent-to-treat (ITT) analyses that ignore subsequent treatments may be misleading, because they evaluate the combined effect of the front-line and subsequent treatments. The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) guideline suggests that it is not advisable to adjust the main analyses for covariates measured after randomization, because they may be affected by the treatment. This raises a challenge in analyzing data that include post-randomization treatments, or subsequent therapy. In this session, speakers will present statistical considerations regarding this issue from regulatory, academic, and industry perspectives. Novel methods for analyzing the impact of subsequent therapy on time-to-event endpoints will be discussed. Both simulations and real case studies will be used to evaluate the pros and cons of different statistical approaches.
“Covariate adjustment” in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates”). The baseline variables could include, for example, age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We describe recent developments in statistical methods for covariate adjustment, and present case studies that demonstrate their benefits and limitations. A focus is on methods that are robust to model misspecification. Our speakers are from the FDA, NIH, and academia, in order to give multiple viewpoints on covariate adjustment. The intended audience consists of clinicians and statisticians who design and/or analyze clinical trials. We aim to provide practical, useful information to help guide trial designers in getting the most out of covariate adjustment, while avoiding some pitfalls.
Dr. Daniel Rubin, an FDA statistician, will present on the recent FDA draft guidance (which he helped to write) on using analysis of covariance (ANCOVA) to analyze trials with continuous-valued or change score outcomes.
Dr. Michael Rosenblum, Associate Professor of Biostatistics at Johns Hopkins Bloomberg School of Public Health, will present case studies of the value added by covariate adjustment in trials from the following disease areas: stroke, Alzheimer's disease, depression, schizophrenia.
Dr. Min Zhang, Associate Professor, Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, will present on covariate adjustment methods for time-to-event outcomes.
Dr. Michael Proschan, an experienced NIH trialist and expert on statistical analysis and monitoring of trials, will present on methods for selecting the variables to be used in covariate adjustment.
Please note: I have not yet confirmed speaker availability.
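As a minimal illustration of the topic of Dr. Rubin's talk, the R sketch below computes an ANCOVA-adjusted treatment effect and the equivalent model-based marginal (standardized) estimate on simulated randomized-trial data; it is a sketch for intuition, not the guidance's recommended analysis.

    set.seed(1)
    n   <- 200
    age <- rnorm(n, 60, 10)                      # baseline covariate
    trt <- rbinom(n, 1, 0.5)                     # randomized assignment
    y   <- 0.5 * trt + 0.05 * age + rnorm(n)     # continuous outcome
    fit <- lm(y ~ trt + age)
    coef(fit)["trt"]                             # ANCOVA-adjusted effect
    # Marginal standardization: predict all patients under trt = 1 and trt = 0
    # (identical to the coefficient here; remains valid with interactions added)
    mean(predict(fit, data.frame(trt = 1, age = age)) -
         predict(fit, data.frame(trt = 0, age = age)))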
In 2019, FDA published two draft guidances on drug development for rare diseases: Rare Diseases: Common Issues in Drug Development and Rare Diseases: Natural History Studies for Drug Development. Both guidances discuss key challenges in developing novel therapies for rare diseases, despite the increasing number of orphan drug approvals since 2014. Among the various topics reviewed, the guidances highlight the utility of natural history studies and, more broadly, real-world data (RWD) in demonstrating the effectiveness and safety of novel therapies. While much of this discussion focuses on regulatory approval, it is worth noting that RWD support the whole life cycle of drug development. In fact, RWD are particularly important for studying rare diseases, given the often large knowledge gaps between the cause of a disease and a plausible therapeutic approach, and between developing a drug and delivering it to patients. In this session, we invite regulators, industry representatives, and academic researchers to discuss issues that are important at different stages of drug development for rare diseases, and to review case studies on how RWD can be generated and used to accelerate the search for novel therapies in the following areas:
• Improving knowledge of the disease epidemiology and natural history
• Assisting the design of (patient-centric) clinical development programs
• Supporting regulatory approval
• Characterizing disease burden to assist health-technology assessment
Much has been written and hyped about Bayesian, adaptive, and complex clinical trials. This short course focuses on the practical details of each of these innovations. We will discuss what each of these concepts are, why they may improve a clinical trial, and the practical ramifications of each of them. A focus will be on providing multiple examples of each.
The What: This section explains each of these three innovations. What are adaptive designs, and what are not? What are the ramifications of using a Bayesian analysis in a clinical trial? Finally, either of these concepts can create a situation in which calculating the operating characteristics is impossible without clinical trial simulation, and thus we label these trials complex innovative designs. Examples of clinical trials labeled as adaptive, Bayesian, and complex will be presented.
The Why: The second focus of the course is why these innovations may be preferred in a clinical trial. Each of them can improve a trial to provide better answers, more answers, and greater efficiency. In addition, there can be an impact on the patients in the clinical trial. All of these ramifications will be discussed. Again, examples of trials will be presented to demonstrate the potential promise of these innovations.
The How: Perhaps most important is how we utilize these innovations. How does one go about creating an adaptive design? What are the steps, and what are the potential missteps, in creating an adaptive design? Practical advice will be presented on good practices in the construction of adaptive, Bayesian, and complex clinical trials. Examples of trials will be presented with a focus on how they were constructed.
Scott Berry, Ph.D., is President and a Senior Statistical Scientist at Berry Consultants, LLC, and has taught numerous award-winning short courses.
All statisticians at some point in their careers are frustrated by the rejection of their ideas or by resistance when they try to drive change by implementing a new method or process. Through time and experience, some statisticians learn to overcome these issues and successfully influence their collaborators and business partners. How do they do it? What skills do they practice? The simple answer is leadership - skills that you can learn to get people to listen to your perspective, trust your proposals, and invest in your ideas. With leadership skills, your ideas and innovation have a much greater chance of being adopted and of impacting the drugs you develop and the patients you serve. This short course will focus on the leadership skills necessary to influence others and to take new ideas and innovative thinking into practice. The course will provide:
- A brief introduction to leadership, why it is important, and its role in creating impact through innovation
- A discussion of the critical skills required to influence and get collaborators to buy into your ideas, including communication, business acumen, and networking
- What it takes to develop and exhibit a professional presence that will allow you to establish and maintain the ability to influence decisions & strategy and impact your organization
Although this course will not turn you into an instant leader, it will provide you with knowledge of what it takes to improve as a leader and an initial direction & focus to get you started on your leadership journey.
This course will include select topics and concepts from two successful ASA leadership courses: Preparing Statisticians for Leadership (taught annually at JSM since 2014) and Leading with Executive Presence (developed and taught as part of Lisa LaVange’s ASA presidential leadership initiative in 2018). Dr. Gary Sullivan will be the course instructor.
Both Statistics and Machine Learning (ML), which is loosely called Artificial Intelligence in the media, are fields of learning from data. The fact that they share many underlying mathematical principles and theories overshadows the fact that they are based on different philosophies. Ignoring these differences has caused confusion among some statisticians and prevented them from effectively using some ML technologies. This short course is uniquely designed as an introduction to ML for statisticians. It avoids spending time on topics statisticians are already familiar with; instead, it emphasizes the areas unique to ML and draws connections between the two fields where there are superficial similarities. This course was recently taught six times at Merck, and its success shows its effectiveness in helping statisticians learn ML. The course has five sections: (1) What ML is: similarities and differences between ML and Statistics, and an overview of the subareas within ML; (2) Supervised learning workflow and methods: the workflow and key concepts of supervised learning tasks, and popular ML methods (e.g., SVM, boosting machines, random forests); (3) Model inference: using trained models to predict, to select variables, and to gain insights into the data; (4) Unsupervised learning methods; (5) An introduction to deep learning.
After obtaining his Ph.D. in 2001 with research focusing on modern ML, Junshui Ma has worked on many ML topics and projects across academia and industry, producing a dozen ML journal papers covering diverse topics. Andy Liaw has worked in related areas since obtaining his Ph.D. in 1997. He is the author of the first popular R package for random forests, and the associated paper has been cited 10K+ times. A paper they coauthored on predicting compound activities using deep learning methods became one of the most cited papers in that area.
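As a small taste of section (2), the sketch below fits a random forest with the randomForest package mentioned above, using the built-in iris data; the settings are illustrative.

    library(randomForest)
    set.seed(1)
    fit <- randomForest(Species ~ ., data = iris, ntree = 500, importance = TRUE)
    fit$confusion        # out-of-bag confusion matrix
    importance(fit)      # variable importance (cf. section 3 on model inference)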
Following the tremendous success in oncology drug development in the last decade (e.g., the emergence of immune checkpoint inhibitors as a new backbone therapy in multiple tumor indications, and cell-based therapies), recent years have witnessed explosive growth in the number of new drugs and vaccines in cancer trials. While the expectation is high, it is unrealistic to expect all of them to have the same success, as the improved standard of care (SOC) has raised the hurdle for demonstrating effectiveness. It is imperative to apply cost-effective designs to the development of these new agents.
In this short course, I will focus on three issues in contemporary oncology development: 1) efficacy screening after dose finding; 2) transition of a new program from early stage to late stage; and 3) biomarker hypotheses in Phase 3 confirmatory trials. On efficacy screening, I will present a variety of optimal basket designs, including an extension of Simon's optimal designs from single-arm to multi-arm trials. I will also make a distinction between assessing whether any of the test drugs is effective and whether the test drug is effective in any of the tumor indications, and show its impact on design strategies. On the transition from early stage to late stage, I will extend the 2-in-1 design that has quickly become popular since its publication, and then discuss advanced utilization of intermediate endpoints for making transitional decisions in an operationally seamless design with dose selection and in a statistically seamless 2-in-1 design. The use of a predictive biomarker can substantially complicate a Phase 3 trial design; I will discuss the various program-level and trial-level options for test drugs with a predictive biomarker hypothesis, including adaptive population expansion designs.
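For context on the single-arm building block behind these basket designs, Simon's optimal two-stage design can be computed with the clinfun package, as in the sketch below with illustrative rates; the multi-arm extensions discussed in the course go beyond this.

    library(clinfun)
    # Null response rate 20%, target 40%, type I error 5%, power 80%
    ph2simon(pu = 0.2, pa = 0.4, ep1 = 0.05, ep2 = 0.2)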
The FDA recently issued a draft guidance, “Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products.” As stated in the guidance, trial designs that might be considered novel or complex innovative include those that formally borrow external or historical information in the data analysis. Utilizing all available information through data borrowing saves resources, shortens development timelines, and can be the more ethical choice because it limits the number of patients in the placebo control group. This is particularly essential in rare disease areas, where recruiting patients is very challenging. The assumption underlying the validity of data borrowing is consistency between the historical and current studies. Matching patients' baseline characteristics, among other approaches, can ensure consistency regarding the patient population. On the other hand, dynamic borrowing using a power prior in a Bayesian analysis lets the amount of data borrowed depend on the observed homogeneity in the study endpoint between the two data sources. The combination of the two techniques should ease the major concerns about data borrowing. In this session, regulatory, industry, and academic speakers will share their thoughts on historical data borrowing. More specifically, the discussion will cover patient selection/matching methods and the frequentist operating characteristics (power and Type I error probability) of Bayesian hypothesis testing. In addition, non-inferiority and superiority assessments in the Bayesian framework will be considered. Simulation results and examples will illustrate the applications of the methods.
Tools and methodologies are being developed to consolidate, organize, and structure real-world data (RWD) to generate research-grade evidence and ensure that confounding variables are accounted for in analyses. A synthetic control arm (SCA) is a novel clinical trial design element that mimics a placebo arm, utilizes RWD, and enables rapid “Go/No Go” decisions. An SCA requires that the disease be predictable and that the standard of care be well-defined and stable. Analytic techniques such as natural language processing and statistical and machine learning methods are needed to extract relevant information from structured and unstructured data. FDA released a framework for its real-world evidence program in December 2018. Like any novel research initiative, the proposed use of historical control data to build an SCA has associated risks: selection bias and historical time effects are obvious risk factors. Simulation studies can aid in understanding the bias-variance trade-off and, more generally, the influence of the historical control data. Careful statistical planning and design, along with a thorough understanding of the characteristics of the target population of interest, are therefore required to circumvent some of these risks. In this session, we plan to discuss data mining and machine learning for the generation of SCAs, adaptation of SCAs to clinical trials, and associated statistical techniques.
One of the most important considerations in designing clinical trials is the choice of outcome measures. These outcome measures could be clinically meaningful endpoints that are direct measures of how patients feel, function, and survive. Alternatively, indirect measures, such as biomarkers that include physical signs of disease, laboratory measures, and radiological tests, often are considered as replacement endpoints or potential “surrogates” for clinically meaningful endpoints (Fleming TR 2012). These surrogate endpoints are sometimes employed in clinical trials and, when used judiciously, can accelerate and focus the study of new therapies and can greatly enhance our understanding of their mechanisms of action. This session will feature speakers from industry, academia, and a regulatory agency who will discuss their insights into the challenges of analyzing surrogacy, the pros and cons of currently available methods for assessing surrogate endpoints, and the advantages, disadvantages, and specific statistical considerations associated with the use of surrogate endpoints. Several examples from oncology trials will be provided.
Drug development in oncology, and in the immuno-oncology (I-O) area in particular, is constantly accelerating. The desperate need for new therapies for patients drives pharmaceutical companies to invest tremendous resources. However, the challenges in development are substantial, including long timelines and high costs that delay patients' access to new medicines. Furthermore, complex mechanisms and new approvals have a significant impact on the probability of success of programs, with ever-changing benchmarks. There are several key scientific questions in I-O development: dose level, combination agent, and population selection. The traditional clinical development pathway requires multiple phase I and II studies with an enormous number of patients and a long timeline. Considering the extremely high cost and “to be proved” drug mechanisms, plus patient enrollment challenges due to many other ongoing clinical trials at investigational sites, it is critical to have an optimal design that answers the key questions using patients' data efficiently and improves the probability of success. Therefore, innovative seamless phase I/II designs with consideration of treatment selection and population enrichment are of great interest. This parallel session will review and discuss recent advances, challenges, and opportunities in using complex trial designs and analyses to expedite drug development and regulatory pathways, including the role of simulation and best practices. Presentations will be given by a group of experts from industry, regulatory agencies, and academia:
• Yuan Ji, University of Chicago: “ROBOT: A Robust Bayesian Hypothesis Testing Method for Basket Trials”
• Jingjing Ye, FDA: “A working platform trial for rare cancers”
• Jianchang Lin, Takeda Pharmaceuticals: “Flexible Semiparametric Meta-Analytic-Predictive Prior for Historical Control Borrowing”
• Brian Hobbs, Cleveland Clinic: “Designing trials for potentially non-exchangeable patient subpopulations”
Oncology is a competitive therapeutic area where the landscape is constantly changing. Some treatments produce remarkable responses, with complete eradication in some cases, but nearly all treatments face drug resistance, ultimately ceasing to work for many patients. It is speculated that combination therapy can overcome resistance and gain greater potency by attacking the cancer at multiple points on cell signaling pathways or by attacking multiple pathways. Many future combination regimens are expected to have PD-(L)1 as the backbone, and many patients will be PD-(L)1 failures in some respect. This poses a unique challenge for the future of oncology drug development. Traditionally, developers have built combinations only from drugs already approved as monotherapy by the regulatory agencies. In the last decade, we have observed companies increasingly develop dual-novel or novel-novel combinations, in which neither component drug has been approved for use alone. In some situations, one of the component drugs may be from a class in which a drug has been approved, such as PD-(L)1, or the component drugs are not yet approved for the specific patient population. In response to this trend, FDA issued a guidance on the codevelopment of combination therapies in 2013. Complex study designs that accommodate more trial arms with the accrual of an extensive number of patients, or that consider single-arm combination therapy with external/historical controls, are increasingly required, although they face unprecedented regulatory challenges. Trial sponsors and regulators will need to balance the level of evidence needed for approval against data that may already be available, to ensure equipoise and expedite development. This session will have speakers from both regulatory agencies and industry to discuss the challenges and strategies in novel-novel combination therapy development, dose finding, and confirmatory evidence generation.
The selection of an external control using real-world data (RWD) is an extremely important part of the study design for rare disease studies or single-arm studies. Real-world data and evidence have been increasingly used in regulatory and healthcare decision-making since the passage of the 21st Century Cures Act on December 9, 2016, which has significantly broadened the potential of external controls. The source data for an external control can come from RWD, including electronic health records (EHR), patient registries, and laboratory databases, or from other randomized clinical trials (RCTs). The external control can be a group of patients treated in an earlier timeframe, or treated during the same time period but in another trial or registry. The control group is chosen such that detailed information is available, including pertinent individual subject data on demographics, baseline status, concomitant therapy, and course on study. The most common way to select an external control group has been propensity score (PS) matching. The PS is typically estimated using a logistic regression model that incorporates all variables that may be related to the outcome and/or the treatment decision. Tan (2019) proposed an expansion of the PS model: a causal inference method that not only has the double robustness property but also extends the propensity score model and the regression model to semiparametric models with monotonicity constraints on the nonparametric parts. Another method is to borrow information through the use of a prior and then apply Bayesian methodology. Our presentation will explore these and other models for selecting the external control group and for the subsequent trial analysis. The presenters will also discuss how their methods were applied in clinical trials.
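To make the PS matching step concrete, here is a minimal base-R sketch of 1:1 nearest-neighbor matching without replacement on the logit propensity score, using simulated stand-ins for trial and external-control patients; production analyses would use dedicated tooling and richer covariate sets.

    set.seed(1)
    x   <- c(rnorm(100, mean = 0.5), rnorm(300))  # covariate; trial arm shifted
    trt <- rep(c(1, 0), c(100, 300))              # 1 = trial, 0 = external control
    lps <- qlogis(fitted(glm(trt ~ x, family = binomial)))  # logit of the PS
    pool <- which(trt == 0)                       # controls not yet matched
    match_id <- sapply(which(trt == 1), function(i) {
      j <- pool[which.min(abs(lps[pool] - lps[i]))]  # closest remaining control
      pool <<- setdiff(pool, j)                      # match without replacement
      j
    })
    matched <- c(which(trt == 1), match_id)       # row indices of matched sample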
Treatment effect heterogeneity (different patients respond differently to treatments) plays an increasingly essential role in evaluating the efficacy of treatments. With technology advancements, besides the traditional baseline demographics and disease endpoints, more patient characteristics data (e.g., -omics data) are now being collected at an unprecedented scale. In parallel, many advanced statistical learning methods have recently been developed to handle high-dimensional data. High-dimensional data from multiple sources, in combination with advanced analytical tools, offer great promise for addressing heterogeneity of treatment effects. At the same time, the complex nature of disease response to treatment still poses great challenges to statisticians. For example, can we identify an optimal treatment for a given patient rather than finding the right patient for a given treatment? Instead of being entirely data-driven, can we incorporate domain knowledge into statistical learning to obtain biologically more interpretable and meaningful results?
This session invites industry, academic, and regulatory speakers to discuss the challenges and recent developments in addressing heterogeneity of treatment effects. Dr. Pingye (Eric) Zhang from Merck & Co., Inc. will present a method for value-function-guided subgroup identification. Dr. Qi Long from the University of Pennsylvania will talk about knowledge-guided statistical learning methods in precision medicine. The discussant for the session will be from FDA.
The COVID-19 pandemic has had a significant impact on clinical trials around the world. Recent studies have shown that approximately 5000 studies in different therapeutic areas are currently affected. The areas of impact are vast, both operational (e.g., site initiation and visits, site audits, drug supply) and scientific (e.g., changes in population, data collection). The impact of COVID-19 also poses new risks to the interpretation of trial results and their broad applicability to future clinical practice. The pandemic may cause significant loss to follow-up or missed planned visits, eventually resulting in a large amount of missing data. The potential consequences include the inability to acquire the data needed to meet trial objectives or to implement pre-specified analysis plans, with obvious impacts on clinical development programs. Other consequences involve changes in the target population, informative drop-outs, and the inability to assess the primary estimand of interest. While some of these issues may be addressed through operational excellence, the study design and planned statistical analyses will need to be adjusted to address them without compromising the integrity of the trial. The impact of the COVID-19 outbreak on the conduct and reporting of clinical trials is also recognized by regulatory agencies, and important guidelines have been issued and updated (FDA and EMA).
This session will focus on the statistical considerations necessary for the design and analysis of clinical trials conducted during the pandemic. Both scientific and regulatory impacts will be discussed. The session consists of eminent speakers and panelists from industry and different regulatory agencies.
Bayesian methods have emerged as particularly helpful in combining disparate sources of information while maintaining reasonable traditional frequentist characteristics. They help to borrow information strategically across heterogeneous sources (e.g., different disease subgroups, different regions, different studies). The combined information may provide enough justification for smaller or shorter clinical studies without sacrificing the goal of evidence-based medicine. Modern computing power and algorithms now make it possible to take advantage of Bayesian continuous knowledge building. Moreover, recent FDA guidances (Adaptive Design Clinical Trials, 2019; Interacting with the FDA on Complex Innovative Trial Designs, 2019 (draft)) recognize Bayesian methods as a scientifically rigorous and safe experimental approach to clinical trials, as well as a statistically sound way of incorporating prior knowledge to make better decisions.
This session will focus on various applications of Bayesian statistics in clinical trials, including designing non-inferiority trials, borrowing trial-external information for smaller and more efficient designs in rare disease and pediatric populations, and detecting safety signals for a drug from real-world data. The proposed session consists of eminent speakers and a discussant from FDA, industry, and academia.
People love to hear stories, but we scientists are generally not good storytellers. How can we harness the power of storytelling to effectively communicate clinical data and artificial intelligence? The discovery and development of medical products generates a large amount of data, from genomic data and patient-level data to large healthcare networks. How to better extract knowledge from data and exploit it is a question equally challenging for industry, academia, and regulatory agencies. In this session, we will bring in three experts to share their invaluable insights on using storytelling and advanced analytics to bridge the gap between clinical data, machine learning, statistics, and the mind of the audience.
The final ICH E9(R1) Addendum on “Estimands and Sensitivity Analysis in Clinical Trials” (2019) recommends a change in mindset in how clinical trials are planned. According to this guidance, trial protocols should include precise descriptions of the treatment effect reflecting the clinical question(s) posed by the trial objectives, achieved by defining trial-relevant estimands. Main and sensitivity estimators would need to be pre-specified for each estimand.
Implementation of this framework has been initiated across companies in the pharmaceutical industry. Regulatory discussions on estimands have also been conducted across projects from different disease areas. Different working groups on estimands have been formed and there has been an increasing number of publications on this topic.
This townhall will discuss the current impact of the implementation of the estimand trial planning framework. The townhall panelists will include key clinical and statistical leaders from both FDA and Industry, including a senior FDA clinical policy maker, members of the ICH E9(R1) Expert Working Group from FDA and Industry, as well as influential experts on this topic from Industry and Academia. Important questions will be asked, including:
• How does this guidance shape regulatory interactions, including those for submissions and labeling?
• How are both clinicians and statisticians adopting this framework in project teams across disease areas?
• How can we translate clinical questions into clear trial objectives and scientific questions of interest?
• What are the key steps to implement this framework, including using protocol and SAP templates?
Each member of the panel will have a short presentation on a pre-determined question and the audience will have the opportunity to share additional points of view and engage with more questions.
Confirmed participants: Robert Temple (FDA), John Scott (FDA), Frank Bretz (Novartis), Craig Mallinckrodt (Biogen), Scott Emerson (U Washington)
Virtual (or decentralized) clinical trials (VCTs) have been introduced into medical research and drug development. These trials take advantage of online technology (apps, social media, monitoring devices, etc.), digital inclusion platforms (recruitment, endpoint measurement, adverse reactions), and direct-to-residence delivery of medications or devices, allowing subjects to be home-based at every stage of the clinical trial. VCTs can provide subjects with access to clinical research even if they reside far from a traditional medical center. This digital transformation has also changed the mindset of both subjects entering clinical trials and researchers performing them. VCTs have already been conducted in overactive bladder and osteoarthritis trials. The characteristics of a VCT are: 1) potential subjects are identified based on the subject’s online searches and medical activities; 2) qualification for the trial is determined based on EMR, laboratory or imaging visits, or self-reported information; 3) consent is obtained via the internet; 4) subjects receive treatment via home delivery; and 5) subjects report information via electronic monitoring or self-report. Issues to be considered in designing and implementing these clinical trials are: 1) the possible diversity of the study population (including equitable selection and selection bias considerations); 2) the impact on the inclusion and exclusion criteria; 3) the impact on randomization and masking of study treatment; 4) the impact on data collection and verification; 5) the statistical methods used to analyze such data; and 6) the impact on regulatory submissions of the clinical trial data. Our session will explore these topics. Each presenter will discuss his/her views on virtual clinical trials and how they can be integrated into our medical research programs.
The recent advances in chimeric antigen receptor (CAR) T-cell therapy herald an exciting new era in cancer treatment. While dramatic clinical responses to CAR-T treatment in B-cell malignancies and multiple myeloma generate enthusiasm, the unique features of such therapy pose new challenges in the study design and statistical analysis of clinical trials. The major challenges facing CAR-T trials include: (1) As autologous CAR-T therapy may be subject to manufacturing failure, should randomization occur before or after the product is successfully manufactured? (2) Eligible subjects may receive bridging therapy during the manufacturing period, prior to administration of the CAR-T product. How should the treatment effect of interest be properly defined given the joint effect of the bridging therapy and the CAR-T product? (3) The various non-proportional hazards patterns manifested in CAR-T trials violate the proportional hazards assumption, resulting in underpowered studies when conducted in the conventional fashion. How can a novel study design salvage the power loss? (4) How can the manufacturing process and trial eligibility criteria be improved via artificial intelligence and machine learning? This session will highlight recent innovations in statistical methodology addressing the unique challenges of CAR-T trials and feature speakers from industry, academia and a regulatory agency who will share their insights.
This session proposal is submitted on behalf of the ASA Biopharm Section Real-World Evidence Scientific Working Group (WG).
Data from “real world” clinical practice and medical product utilization – outside of clinical trials – are regarded as an increasingly important source of evidence generation, with high potential to increase efficiency and improve the clinical development and life cycle management of medical products. US and EU regulatory agencies, public-private partnerships and health technology assessment organizations have launched major initiatives to address the concerns and considerations in the use of real-world evidence (RWE) to inform regulatory decision making. In August 2017, the FDA issued a final guidance document on the use of RWE to support medical device regulatory decision making. Additionally, FDA has committed to exploring the use of RWE in drug regulatory decision making through stakeholder engagement, demonstration projects, and guidance development.
This WG comprises a range of statisticians from regulatory agencies, industry, and academia, working in a rapidly developing and promising area in which statisticians need to play a central role. It is organized in two workstreams. The first workstream focuses on the use of RWE for label expansion. The second focuses on the use of RWE to inform study design and the use of external controls. Both workstreams have conducted extensive landscape and gap assessments, and the results of the overall assessment were presented at the 2019 BIOP RISW. Now in its second year, the WG is moving from the landscape and gap assessment phase to a research phase. The research topics identified in the initial phase include estimands in the RW setting, outcome ascertainment using machine learning, causal frameworks, computable phenotypes, and complex exposure patterns. These research topics and details will be covered in the 2020 RISW.
The United States is in the throes of an opioid epidemic. More than 130 people die every day from opioid-related drug overdoses. An estimated 40% of opioid overdose deaths involve a prescription opioid [1]. The misuse, abuse, and diversion of prescription drugs is a significant public health problem. As a consequence of the increasing rate of prescription drug abuse, regulatory requirements for the assessment of the abuse potential of CNS-acting drugs have become increasingly stringent, particularly in the United States. A large focus of the abuse potential evaluation of drug products is the human evaluation of pharmacological effects, both pharmacokinetic and pharmacodynamic (PD), typically assessed by conducting a human abuse potential (HAP) study. There are two types of HAP studies: those designed to evaluate new molecular entities (NMEs) and those that assess abuse-deterrent opioid formulations (ADFs). Final FDA guidances for each type of study were published in 2017 and 2015, respectively. Each guidance outlines detailed recommendations for statistical analysis. However, many issues have emerged in actual practice that make complying with the guidance recommendations difficult, such as hypothesis testing margins, study variability, selection of endpoints, and the best way to present the totality of evidence. This session will present the perspectives of key players involved in the design and analysis of HAP studies, including statisticians, pharmacologists and clinicians from FDA and industry. The goal is to discuss the challenges and present potential solutions for these emerging issues. The discussion will focus on areas where improvement can be made, including the evaluation of next-generation ADFs, statistical modeling assumptions, and approaches within the study design to reduce the variability of primary and secondary outcome measures.
Reference: 1. https://www.cdc.gov/drugoverdose/epidemic/index.html
In 2020, executive-level statistics leaders of leading biopharmaceutical companies formalized a consortium to facilitate communication and non-competitive collaboration: to help industry be optimally positioned to face the challenges and seize the opportunities in drug development that are emerging on many fronts; to support the visibility, health and advancement of the statistics profession; and, where appropriate, to provide a unified voice on key topics or issues. One recent example was the consortium-sponsored industry proposal around COVID-19 published in 2020 with broad representation across companies. This session will introduce the consortium further and allow company leaders across industry to share their perspectives on how the consortium can help or be utilized by the broader community on a wide variety of topics their organizations are engaged with, including novel designs, innovation, leadership, technology and the ever-changing clinical trial landscape. Time will be given for open panel Q&A as well.
We live in the day and age of an abundance of data: from traditional randomized controlled clinical trials to observational studies, patient registries and expert opinions, to name a few. Due to recent advances in technology, access to these data has become easier (e.g., the Transcelerate initiative). Not surprisingly, the desire to utilize all this available data in drug development decisions to benefit patients and the public has increased as well. Having access to large amounts of data can be overwhelming, with a common reaction being to engage with information selectively. The Bayesian decision-making framework helps avoid this pitfall by creating a structured approach that incorporates prior information based on relevant data into study design and decision making. Both PDUFA VI and the 21st Century Cures Act call for expanding the use of complex innovative designs, which include trial designs with formal incorporation of prior knowledge and the use of external or historical control data. Several PhRMA member companies have pioneered these approaches in their development decisions and will share their experiences. The two talks from industry statistical methodology experts share a common element of Bayesian decision making as a key driving force behind development decisions, but will also highlight nuances reflecting each company’s approach. The panel discussion following the talks will reflect views from other companies and regulators. The talks will cover prior elicitation, synthesis of data (with case studies), application of an award-winning Bayesian quantitative decision-making framework, and the use of conditional assurance to de-risk studies.
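As a small illustration of the assurance idea mentioned above, here is a minimal Monte Carlo sketch in base R for a two-arm trial with a normal endpoint; the prior, sample size, and design parameters are purely illustrative assumptions, not any company's actual framework.

  # Minimal sketch (base R): assurance = power averaged over a prior on the effect.
  set.seed(1)
  n_per_arm <- 100                        # hypothetical sample size per arm
  sd_y      <- 1                          # assumed outcome SD
  alpha     <- 0.025                      # one-sided significance level
  theta     <- rnorm(1e5, 0.25, 0.15)     # prior draws for the true treatment effect
  se_diff   <- sd_y * sqrt(2 / n_per_arm) # SE of the between-arm difference
  power     <- pnorm(theta / se_diff - qnorm(1 - alpha))  # power given each theta
  mean(power)                             # assurance: expected probability of success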
1. K. Price or M. Thomann (Eli Lilly)
2. T. Montague or I. Perevozskaya (GSK)
3. Panel: speakers + S. Roychoudhury (Pfizer) + J. Davis (FDA) + R. Beckman (Georgetown Univ.)
It is well known that estimates of treatment effect from real-world studies are subject to systematic biases, including selection bias, information bias and confounding bias. While the first two can largely be tackled through study design, confounding bias can be very challenging due to the complex relationships among potentially confounding variables, whether measured or unmeasured and time-independent or time-dependent, which complicate the estimation of the causal effect of an intervention. This session includes four presentations discussing recent developments in causal inference methodologies for estimating treatment effects in real-world studies, including pragmatic clinical trials.
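As a simple example of the kind of confounding adjustment the session will discuss, the following base R sketch applies inverse probability weighting with a single measured confounder; the data are simulated and all variable names are hypothetical.

  # Minimal sketch (base R): inverse probability weighting for a binary treatment.
  set.seed(1)
  n   <- 5000
  x   <- rnorm(n)                             # a measured confounder
  trt <- rbinom(n, 1, plogis(0.8 * x))        # treatment assignment depends on x
  y   <- 1 + 0.5 * trt + 0.7 * x + rnorm(n)   # outcome depends on treatment and x
  ps  <- predict(glm(trt ~ x, family = binomial), type = "response")
  w   <- ifelse(trt == 1, 1 / ps, 1 / (1 - ps))  # weights (unstabilized, for brevity)
  # weighted mean difference estimates the marginal causal effect (truth here: 0.5)
  weighted.mean(y[trt == 1], w[trt == 1]) - weighted.mean(y[trt == 0], w[trt == 0])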
Yaru Shi, PhD, Merck Research Laboratories, Merck & Co., Inc. Presentation Title: Causal inference for self-controlled case-series studies
Ying Wu, PhD, Southern Medical University. Email: firstname.lastname@example.org. Presentation Title: Targeted estimation of stochastic treatment effects
Rachael Phillips, University of California at Berkeley. Email: email@example.com. Presentation Title: Targeted Machine Learning for Causal Inference with Real-World Data
Fang Liu, PhD, Merck Research Laboratories, Merck & Co., Inc. Email: firstname.lastname@example.org. Presentation Title: Mediation analysis in pragmatic clinical trials
A final guidance on Estimands and Sensitivity Analyses has just been posted, and there is a clear need to discuss what comes next, namely what analyses ought to follow. Repeated measures designs are very common in many indications, but relying on large-sample theory in a rare disease setting is a challenge because studies are smaller. Often, when the outcomes are quantitative, two sets of options exist: a slope-based outcome and a change-from-baseline outcome. Both can be analyzed using a mixed model. For a change-from-baseline outcome, the analysis is generally referred to as a mixed model repeated measures (MMRM) approach, while a longitudinal mixed model is often used for the slope-based outcome. For either approach, an analysis that assumes all missing data are missing at random can be misleading.
Any mixed model requires one to specify a covariance model, and an obvious question is how flexible the choices should be. For example, is an unstructured covariance matrix required, even though it does not take advantage of the time-series nature of the data? In rare diseases, loss of efficiency can mean the difference between success and failure. Will the newer approaches to defining an estimand, without assuming that incomplete data are missing at random, affect these choices? The longitudinal mixed model is a slope-based approach in which the outcome is the measured value at each time point and time is treated as a continuous variable, with time, treatment group, a time-by-treatment-group interaction, and stratification factors as covariates. One alternative approach is to estimate each individual’s slope and then perform an ANCOVA to test for differences between groups. Again, one would need to consider what impact, if any, intercurrent events have on these decisions.
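For concreteness, the approaches described above might be specified as follows in R with the nlme package. The dataset dat and the variable names (chg, base, y, id, visit, visitn, trt, time) are hypothetical, and baseline and stratification covariates are simplified for brevity.

  # Minimal sketch (R, nlme): MMRM for change from baseline, with visit as a
  # factor, unstructured within-subject correlation, and per-visit variances.
  library(nlme)
  mmrm_fit <- gls(chg ~ base + trt * visit, data = dat,
                  correlation = corSymm(form = ~ visitn | id),   # visitn: integer visit index
                  weights     = varIdent(form = ~ 1 | visit))
  # Longitudinal (slope-based) mixed model: time continuous, random intercept
  # and slope per subject.
  slope_fit <- lme(y ~ trt * time, random = ~ time | id, data = dat)
  # Two-stage alternative: per-subject OLS slopes, then ANCOVA across groups.
  slopes <- sapply(split(dat, dat$id), function(d) coef(lm(y ~ time, data = d))[2])
  trt_id <- sapply(split(dat, dat$id), function(d) d$trt[1])
  summary(lm(slopes ~ trt_id))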
The purpose of this session is to help clarify the pros and cons of these two broad approaches to repeated measures, with a focus on rare diseases.
Interim analyses are widely planned and implemented in modern clinical trials. An interim analysis is an analysis of data performed prior to the formal completion of a trial to allow early stopping (for futility or efficacy) or other trial adaptations (e.g., sample size adjustment). Various methodologies have been developed in these areas. However, recent experiences in clinical trials with interim analyses highlight the importance of scrutinizing such designs and their implementation. In this session, new research on innovative methods for interim analysis will be presented. Statistical controversies, potential pitfalls and lessons learned in interim analysis will be illustrated. Practical challenges of planning and implementing interim analyses will also be covered. Three industry speakers will first present their research and experiences in the planning and/or implementation of interim analyses in clinical trials. An FDA speaker will then give a regulatory perspective on the pros and cons of interim analysis planning, associated procedures, and other important considerations in the design and implementation of interim analyses in clinical trials.
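As one small, concrete piece of interim-analysis planning, the sketch below (R, using the mvtnorm package) computes two-look efficacy boundaries with a Lan-DeMets O'Brien-Fleming-type alpha-spending function. The information fractions and alpha are illustrative assumptions, and dedicated packages (e.g., gsDesign) would normally be used in practice.

  # Minimal sketch (R): O'Brien-Fleming-type spending boundaries for two looks.
  library(mvtnorm)
  alpha  <- 0.025                      # one-sided overall type I error
  t_frac <- c(0.5, 1.0)                # information fractions at the two looks
  # Lan-DeMets OBF-type spending: cumulative alpha spent at each look
  spend  <- 2 * (1 - pnorm(qnorm(1 - alpha / 2) / sqrt(t_frac)))
  z1 <- qnorm(1 - spend[1])            # first-look efficacy boundary
  # sequential z-statistics correlate as sqrt(t1/t2); solve for the second boundary
  corr <- matrix(c(1, sqrt(0.5), sqrt(0.5), 1), 2)
  f  <- function(z2) (spend[2] - spend[1]) -
          pmvnorm(lower = c(-Inf, z2), upper = c(z1, Inf), corr = corr)[1]
  z2 <- uniroot(f, c(1, 4))$root
  c(z1 = z1, z2 = z2)                  # roughly 2.96 and 1.97 for these inputs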
The industry is in need of rational guideposts for the decision to invest the next $40-50 million in a late-phase clinical trial. A good starting solution is Lalonde’s dual-criterion approach, which consists of injecting probabilistic considerations into pre-defined decision criteria anchored in the concepts of the Lower Reference Value (LRV) and the Target Value (TV).
In this session, we will embed this approach in the context of good principles of interdisciplinary decision making and demonstrate its virtues and limitations. We will discuss alternative methods, such as decisions based on Bayesian statistics that allow the incorporation of external information, and assurance-based decisions.
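To illustrate the dual-criterion idea, the base R sketch below evaluates one common probabilistic formulation of go/no-go against an LRV and a TV. The estimate, standard error, thresholds, and probability cutoffs are all hypothetical choices for illustration, not the definitive Lalonde rule.

  # Minimal sketch (base R): probabilistic dual-criterion go/no-go evaluation.
  est <- 0.30; se <- 0.12     # hypothetical effect estimate and standard error
  LRV <- 0.10; TV <- 0.35     # lower reference value and target value
  # approximate posterior probabilities under a normal likelihood and flat prior
  p_LRV <- 1 - pnorm(LRV, mean = est, sd = se)   # P(effect > LRV | data)
  p_TV  <- 1 - pnorm(TV,  mean = est, sd = se)   # P(effect > TV  | data)
  decision <- if (p_LRV >= 0.80 && p_TV >= 0.30) "Go" else
              if (p_TV < 0.10) "No-Go" else "Consider"
  list(p_LRV = p_LRV, p_TV = p_TV, decision = decision)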
Two speakers will cover the industry perspective – one Ph.D. statistician from a mid-sized organization, and one M.D. from a major pharmaceutical company. A discussant from FDA will bring in the agency’s perspective.
The NIH reports that there are approximately 7,000 rare diseases affecting more than 25 million Americans. Approximately 80% are caused by a single-gene defect, and about half affect children. Because many of these diseases are serious and/or life-threatening conditions and most have no approved therapies, there remains a significant unmet medical need for effective treatments. Product development for rare diseases poses numerous challenges, including: (1) scarce, heterogeneous and dispersed patient populations, which result in difficulties identifying suitable patients for clinical trials; (2) poor understanding of the diseases, which makes it difficult to set clinical endpoints, outcome measures or surrogate (biomarker) endpoints in clinical trials; (3) sensitive subpopulations, such as neonates or pediatric patients, which provoke additional ethical considerations when conducting clinical trials. Beyond these challenges, there are other considerations for new classes of treatments that aim to cure rare diseases, such as gene therapies. By targeting a single-gene defect, gene therapy often does not need repeated dosing; one dose can have an effect that lasts a lifetime. Novel trial designs and analyses have been proposed to overcome the aforementioned challenges. This session will invite speakers from FDA, industry and academia to discuss trial design and analysis considerations that are important to the development and approval of products for rare diseases, including frequentist approaches and the potential use of Bayesian approaches with information borrowing, adaptive designs, and the assessment of safety and efficacy for small molecules, gene therapies and other novel treatments. Examples will be provided.
The opportunities for statisticians to have leadership impact in the pharmaceutical industry and regulatory world are considerable at all levels. The Leadership-in-Practice Committee (LiPCOM), a newly formed committee within the Biopharmaceutical Section, is organizing this session to create a forum for gaining insights and triggering discussion on how statisticians in the pharmaceutical industry and regulatory world can uniquely influence decision-making and therefore, raise visibility around their leadership impact. This session will feature presentations from two statistical leaders who will share their experiences on how they evolved into influential leaders during their careers. They will offer thoughts on what types of leadership skills and competencies are required for statisticians, especially given the changing environment (e.g., real-world evidence, big data) in the pharmaceutical/regulatory space, as well as explore the potential for statisticians to broaden their leadership impact and advance their career outside statistics. A panel discussion will follow the presentations. The panelists will discuss the topics and questions from the presentations based on their own experience.
The use of Bayesian trial designs is attractive and recognized as an important tool for improving design efficiency, since it enables the incorporation of earlier trial results or other existing knowledge through the specification of prior distributions, thereby potentially reducing time, cost, and unethical exposure of subjects to inferior treatments. The incorporation of external data into a current trial is, however, not without controversy. The Bayesian framework combines prior data and information with current trial results, through various approaches, to make informed decisions in a timely manner. This is thought to be a critical step in bringing innovation to the development and approval of medical products. There are numerous uses of Bayesian methods for medical product development in the literature: for example, the use of prior adult trial results to augment and plan the design of a pediatric study, Bayesian models in subgroup analysis, Bayesian methods to aid decision making in platform trials, and historical data to design a trial for a rare disease indication. Many researchers have expressed great interest in understanding regulatory perspectives and experiences regarding the utility of Bayesian approaches in design and analysis for the approval of medical products. In this session, we invite speakers from FDA Centers (CBER, CDER, and CDRH) to present their experiences and perspectives on the use of Bayesian approaches in designs and analyses. Examples and case studies will be provided.
For slowly progressing diseases, such as nonalcoholic fatty liver disease (NAFLD), including nonalcoholic steatohepatitis (NASH), and sickle cell disease (SCD), it may take years or even decades to observe disease-related morbidity and mortality. For these chronic diseases, drug developers usually cannot afford to wait a prolonged period to test and market drugs based solely on hard, objective clinical endpoints. To reduce the burden of disease, regulatory agencies now offer an accelerated approval pathway that allows registration trials for some chronic diseases to be conducted using good biomarkers as surrogate endpoints. Nevertheless, what makes a good surrogate endpoint that is reasonably likely to predict outcome benefit still needs to be carefully examined and validated with strong statistical evidence. In addition, before trials can be launched, the major clinical and statistical components, including the study design, the exact study population, and the selection of biomarker and surrogate endpoints, should be prospectively defined. In this session, we will bring together experts from FDA, academia and the pharmaceutical industry to share their insights, experiences and recommendations on overcoming the challenges and issues in clinical development when using biomarker endpoints for accelerated approval.
Speakers (2): Dr. Weiya Zhang, FDA, Weiya.Zhang@fda.hhs.gov; Dr. Gene Pennello, FDA, Gene.Pennello@fda.hhs.gov. Panelists (2): Dr. Peter Mensenbrink, Novartis, email@example.com; Dr. Shein-Chung Chow, Duke University, firstname.lastname@example.org
ICH released the final version of the ICH E9(R1) “Addendum on Estimands and Sensitivity Analysis in Clinical Trials” on Nov 20, 2019. During the two years prior to its finalization, the estimand became a popular topic, and many stakeholders actively assessed its impact on clinical trials from both methodological and operational perspectives. With the finalization of the addendum, it is time to evaluate how the estimand framework is used in actual practice, including designing clinical trials, planning statistical analyses, answering clinical questions of interest, and interacting with health authorities. In this session, we will invite speakers from both regulatory agencies and industry to share their experience applying the estimand framework in their organizations and their perspectives on next steps.
The Food and Drug Administration recently launched the Complex Innovative Trial Design (CID) Pilot Program as a deliverable under the sixth iteration of the Prescription Drug User Fee Amendments. The goal of the program is to facilitate the advancement and use of complex adaptive, Bayesian, or other innovative clinical trial designs requiring simulations to estimate operating characteristics. One avenue for achieving this goal is through the sharing of regulatory case examples. The ability of the FDA to publicly share examples prior to approval of a therapeutic agent is a unique aspect of the Pilot Program. This session will include examples from the CID Pilot Program with the goal of advancing the use of appropriate innovative trial designs.
This session will feature three speakers, two industry representatives and an FDA representative: JonDavid Sparks, Eli Lilly and Company; Stephen Lake, Wave Life Sciences; Gregory Levin, Food and Drug Administration.
When hazards are non-proportional, conventional log-rank tests and the Cox model may be suboptimal or even invalid, and discussions of clinical trial design and analysis have mainly focused on the restricted mean survival time. In this session, we present four novel methods for non-proportional hazards (NPH) that have become readily applicable, and illustrate them with real clinical and regulatory examples.
1) Motivated by the delayed onset of treatment effects, an NPH situation typically observed in immuno-oncology, we present the MaxCombo test, developed as part of the Cross-Pharma Non-Proportional Hazards Working Group. It can provide robust inference when the treatment effect varies over time.
2) Cancer immunotherapy often shows a mixture of short-term risk reduction and long-term survival benefit. We propose a change-sign weighted log-rank test for settings with crossing hazards. The appealing features of this design are increased study efficiency and the ability to detect both short-term risk reduction and long-term survival benefit.
3) When there are multiple time-to-event outcomes, competing risks may occur; e.g., death precludes the occurrence of progression in oncology trials. Traditional competing risks methods assume independent censoring. We derive a new consistent estimator of the cumulative incidence functions in the presence of independent and/or dependent censoring.
4) When prioritized multiple time-to-event endpoints are of interest, conventional time-to-first-event analyses treat all outcomes as equally important. Instead, the win ratio considers more important outcomes first (e.g., death is considered first, then progression). The win ratio has gained popularity (e.g., the tafamidis approval in 2019) but has so far been applied only at the population level; we propose a covariate-adjusted win ratio (a minimal sketch of the unadjusted version appears below). All speakers and the discussant are confirmed: Satrajit Roychoudhury (Pfizer), Jianrong Wu (Univ of Kentucky), Judith Lok (Boston Univ), Gaohong Dong (iStats Inc), and James Huang (FDA).
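Since the win ratio may be less familiar, here is a minimal base R sketch of the unadjusted, pairwise version for two prioritized outcomes (death first, then progression). The function name, inputs, and censoring handling are simplified, hypothetical choices for illustration only; ties and indeterminate pairs are left uncounted.

  # Minimal sketch (base R): unadjusted win ratio over all treatment-control
  # pairs, prioritizing death over progression.
  win_ratio <- function(death_t, death_e, prog_t, prog_e, arm) {
    trt <- which(arm == 1); ctl <- which(arm == 0)
    wins <- 0; losses <- 0
    for (i in trt) for (j in ctl) {
      # death first: a pair is decided if one subject is known to live longer
      if (death_e[j] == 1 && death_t[i] > death_t[j])      wins   <- wins + 1
      else if (death_e[i] == 1 && death_t[j] > death_t[i]) losses <- losses + 1
      # otherwise fall through to progression
      else if (prog_e[j] == 1 && prog_t[i] > prog_t[j])    wins   <- wins + 1
      else if (prog_e[i] == 1 && prog_t[j] > prog_t[i])    losses <- losses + 1
    }
    wins / losses
  }
  # usage with hypothetical event-time/indicator vectors and a 0/1 arm vector:
  # win_ratio(death_t, death_e, prog_t, prog_e, arm)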
There is growing interest in using real-world data and evidence (RWD/RWE) for clinical trial design and analysis and for regulatory decision making. For example, RWD/RWE can be used as an external control in single-arm trials or to augment the concurrent control group of a randomized controlled trial. The most challenging issue in using external data is the comparability of external controls with the trial patients. In addition, as stated in the Framework for the FDA’s Real-World Evidence Program, multiple data sources may be used in deriving RWE in drug development and regulatory settings, which raises another challenge: how to combine different sources of RWD. RWE can also be enhanced by modeling the inclusion and exclusion criteria of a clinical trial. This session includes four presentations: two on novel statistical and machine learning approaches to leveraging RWD, one on combining multiple RWD sources, and another on modeling inclusion and exclusion criteria to enhance RWE.
Lilly Yue, PhD, Deputy Director, Division of Biostatistics, Center for Devices and Radiological Health, U.S. Food and Drug Administration. Email: email@example.com. Presentation Title: Novel Statistical Methods for Leveraging Real-World Data in Regulatory Settings
Zhiwei Zhang, PhD, NIH/NCI. Email: firstname.lastname@example.org. Presentation Title: Adjusting for population differences using machine learning methods
Gang Li, PhD, Janssen Pharmaceuticals. Email: GLi@its.jnj.com. Presentation Title: Perspective Plan for Studies Combining Real World Data Sources
Thomas Jemielita, PhD, Merck Research Laboratories, Merck & Co., Inc. Email: email@example.com. Presentation Title: Enhancing Real World Evidence by Modeling the Clinical Inclusion/Exclusion Criteria
The totality-of-evidence approach has been applied across FDA’s decision processes, from therapeutic product approvals to food nutrition labeling. In this session, we provide an overview of totality of evidence, which can encompass data collected from multiple investigational studies, including pre-clinical and clinical trials, as well as real-world data from various sources. To meet the statutory requirement of substantial evidence, careful evaluation of biochemistry, toxicology, clinical pharmacology, and clinical/PRO efficacy and safety data on multiple therapeutic options (e.g., doses) forms the regulatory foundation on which product approval decisions can be made. Following the overview, we focus on recent methodological advances in evaluating effectiveness by incorporating data from multiple clinical trials, multiple doses, and multiple endpoints, as well as from both clinical trials and natural history studies as real-world evidence. A prospective meta-analytical approach is presented to address the challenges of comparability and completeness when incorporating data from natural history studies. We will invite statisticians and clinicians from industry, academia, and FDA to share their experience and vision for the application of totality-of-evidence approaches.