International Council for Harmonization (ICH) and Food and Drug Administration (FDA) Guidance for Industry (GFI) E9 defines the Bayesian approach as “Approaches to data analysis that provide a posterior probability distribution for some parameter (e.g., treatment effect), derived from the observed data and a prior probability distribution for the parameter. The posterior distribution is then used as the basis for statistical inference.” Bayesian statistics has been widely applied in many areas. In this session, speakers from FDA, industry, and academia will discuss different applications of the Bayesian approach. The FDA speaker will share her experience using Bayesian statistics to evaluate the efficacy of animal cell therapies. With hierarchical models, information across different trials may be combined to increase the power to detect a treatment effect, while power prior models can be used to borrow information from previous trials to increase the power of the current trial. The speaker from industry led the FDA effort for the Guidance for Industry and FDA Staff “Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials” (2010). He will discuss how Bayesian hierarchical models and power priors, in the context of borrowing data from historical studies, affect sample size calculations and the weight given to the priors in medical device clinical trials. The speaker from academia will introduce elastic priors and their application in designing clinical trials with adaptive information borrowing. An important feature of the elastic prior is that it is constructed based on, and thus automatically satisfies, a set of information borrowing constraints prespecified by regulatory agencies or researchers. For example, when the difference in treatment effect between the current trial and historical data is greater than a prespecified clinically significant margin, no information should be borrowed; within that margin, information borrowing is appropriate.
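To make the power prior idea concrete, here is a minimal Python sketch for a binomial endpoint; all counts and the discount parameter a0 are hypothetical, and this is an illustration of the general technique rather than any speaker's method:

```python
from scipy.stats import beta

# Hypothetical counts, for illustration only.
x0, n0 = 24, 60   # historical trial: responders out of enrolled
x, n = 14, 30     # current trial
a0 = 0.5          # power-prior discount: 0 ignores history, 1 pools fully

# With a Beta(1, 1) initial prior, raising the historical binomial likelihood
# to the power a0 keeps the posterior in the Beta family, with fractional
# "effective" counts contributed by the historical data.
posterior = beta(1 + a0 * x0 + x, 1 + a0 * (n0 - x0) + (n - x))
lo, hi = posterior.interval(0.95)
print(f"posterior mean {posterior.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
```

An elastic prior, by contrast, would let a0 itself depend on the observed discordance between the current and historical data, shrinking borrowing toward zero outside the prespecified margin.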
Regulatory approval of a medical product is usually based on the balance of benefits and risks that patients may experience while taking the product. However, a quantitative assessment of the benefit-risk profile is often problematic due to the lack of a universally accepted standard for how benefits and risks are compared. In addition, incorporating patient baseline characteristics into benefit-risk analysis can be challenging, as it may introduce multiplicity through unplanned subgroup analyses. This session includes four presentations covering a composite endpoint approach for treatment benefit evaluation, Bayesian benefit-risk assessment, longitudinal interval-censored binary recurrent events, and testing for trend in benefit-risk analysis.
For diagnostic devices, new technologies emerge quickly. Next-generation sequencing (NGS) is increasingly used in precision oncology because of its ability to identify many mutations simultaneously. In addition to traditional tissue-based biopsy, liquid biopsy tests have become an attractive alternative. Companion diagnostics (CDx) are tests that provide information essential for the safe and effective use of a corresponding therapeutic product, such as a drug. Multiplexed tumor profiling tests, which may include CDx claims, assess many biomarkers whose associated clinical evidence is constantly changing as new science emerges. In this session, we will discuss the statistical challenges of analytically and clinically validating device performance for these newly emerged technologies.
Adaptive designs have attracted increasing interest from clinical researchers and sponsors due to their flexibility to allow prospectively planned modifications of one or more aspects of the design based on analysis of accumulated data from enrolled participants in the trial. Trials with adaptive designs are potentially more efficient, informative, and ethical because they can adjust to information that was not available when the trial began. The 2018 FDA document “Adaptive Designs for Clinical Trials of Drugs and Biologics” provides guidance on the appropriate use of adaptive designs for clinical trials. Even with the regulatory acceptance demonstrated in the guidance, the complexity and limitations of adaptive designs inevitably require sophisticated statistical methods to minimize the chance of introducing bias and to avoid erroneous conclusions.
In this session, we will focus on new frontiers of statistical methodologies for adaptive strategies in clinical trials, including adaptations to patient allocation through new algorithms for covariate-adaptive randomization, adaptive seamless phase 2/3 designs using response adaptive randomization, seamless 2-in-1 designs, and adaptive flexible strategies in challenging experimental settings. Speakers from the pharmaceutical industry and the regulatory agency who are highly engaged in this area will share their recent research experiences.
List of Invited Speakers: Jonathan Hartzel, PhD, Merck & Co., “Covariate-adaptive randomization through algorithms for minimization and the implementation with an R package”; Ivan Chan, PhD, AbbVie Inc., “Accelerating clinical development with seamless phase 2/3 design using response adaptive randomization”; Cong Chen, PhD, Merck & Co., “Adaptive 2-in-1 design and extensions”; Sue Jane Wang, PhD, FDA, “Adaptive flexible strategies in challenging experimental settings”.
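As a rough illustration of the minimization idea behind covariate-adaptive randomization (a sketch only, not Dr. Hartzel's algorithm or R package; arms, factors, and the biased-coin probability are all invented for this example):

```python
import random

def minimization_assign(patient, history, factors, arms=("A", "B"), p_best=0.8):
    """Pocock-Simon-style minimization (sketch): choose the arm that minimizes
    total marginal imbalance across stratification factors, with a biased coin
    so the allocation retains some randomness."""
    imbalance = {}
    for arm in arms:
        score = 0
        for f in factors:
            # Arm counts among prior patients sharing this factor level.
            counts = {a: sum(1 for prev, assigned in history
                             if prev[f] == patient[f] and assigned == a)
                      for a in arms}
            counts[arm] += 1  # hypothetically assign the new patient here
            score += max(counts.values()) - min(counts.values())
        imbalance[arm] = score
    best = min(arms, key=imbalance.get)
    others = [a for a in arms if a != best]
    if all(imbalance[a] == imbalance[best] for a in others):
        return random.choice(arms)  # tie: plain randomization
    return best if random.random() < p_best else random.choice(others)

# Toy usage with two hypothetical stratification factors:
history = []
for patient in [{"sex": "F", "site": 1}, {"sex": "M", "site": 1},
                {"sex": "F", "site": 2}, {"sex": "F", "site": 1}]:
    arm = minimization_assign(patient, history, factors=("sex", "site"))
    history.append((patient, arm))
print(history)
```

Real implementations differ in how marginal imbalance is weighted and in the choice of biased-coin probability; the sketch only conveys the core bookkeeping.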
Sparse blood sampling occurs when only a few samples can be collected from each subject in a clinical trial; destructive sampling is the extreme case in which only one sample can be collected from each subject. Because each study subject does not have blood samples at every designed sampling time point, it is not possible to calculate traditional pharmacokinetic (PK) parameters for each subject, such as the maximum blood concentration (Cmax) of a drug and the area under the concentration curve (AUC) from time 0 to the last time point just before the first unquantifiable value after Cmax. Cmax and AUC have been the key parameters for evaluating bioequivalence (BE), i.e., whether a generic drug is equally bioavailable relative to its respective Reference Listed Drug (RLD) in the rate and extent to which the active ingredients are absorbed and become available at the site of drug action. How to design a BE study, and what PK parameters should be used, under sparse sampling has not been well established.
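For readers less familiar with these parameters, the following sketch shows the standard per-subject Cmax and linear-trapezoidal AUC under rich sampling, and one naive pooled alternative under sparse sampling; all concentration values are hypothetical, and the pooled approach is shown only to illustrate why the sparse setting needs special methods:

```python
import numpy as np

# Hypothetical rich-sampling profile for one subject (time in h, conc. in ng/mL).
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])
c = np.array([0.0, 12.1, 18.4, 15.2, 9.8, 4.1, 1.7, 0.3])

cmax, tmax = c.max(), t[c.argmax()]
auc_last = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)  # linear trapezoidal rule
print(f"Cmax = {cmax} ng/mL at {tmax} h; AUC(0-tlast) = {auc_last:.1f} ng*h/mL")

# Under sparse or destructive sampling no subject has a full profile, so one
# naive option is a pooled ("mean-profile") AUC built from per-time-point means
# across subjects (hypothetical means below); quantifying its variability for a
# BE test is exactly the kind of open problem this session addresses.
mean_conc = {0.5: 11.8, 2: 14.9, 8: 4.4, 24: 0.4}
ts = np.array(sorted(mean_conc))
cs = np.array([mean_conc[k] for k in sorted(mean_conc)])
pooled_auc = np.sum(np.diff(ts) * (cs[:-1] + cs[1:]) / 2)
print(f"pooled mean-profile AUC = {pooled_auc:.1f} ng*h/mL")
```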
In this session, we have invited and confirmed three experienced and knowledgeable speakers: one from industry and two from FDA centers. They will tackle the recent challenges of sparse or destructive sampling from their respective perspectives. Our speakers will propose innovative solutions for designing BE studies and for analyzing and assessing bioequivalence when traditional PK parameters cannot be calculated for each study subject.
Our speakers have kindly prepared their presentation titles below:
• Presentation Title 1: “Some Possibly Useful Thoughts on BE Assessments in the Case of Sparse Sampling” by Martin Wolfsegger, Ph.D., Director, Plasma Derived Therapies & Pharmacometrics, Takeda
• Presentation Title 2: “Permutation Bioequivalence Test under Sparse Sampling and Small Sample Size” by Jing Han, Ph.D., Mathematical Statistician, FDA/CDER
• Presentation Title 3: “Evaluation of Partial AUC in BE Studies using Destructive Sampling Design” by Shasha Gao, Ph.D., Mathematical Statistician, FDA/CVM
Although randomized clinical trials are considered to be the gold standard for generating clinical evidence, the use of real-world evidence to evaluate the efficacy and safety of medical interventions is gaining interest. Regulatory bodies around the world have put out guidance and position papers on the potential use of real-world data in regulatory settings. In this session, we will outline the key points-to-consider from various guidelines and position papers, including those from the FDA, PMDA, EMA, and China CDE. We will share a fit-for-purpose example of utilizing real-world data for a regulatory filing. Statistical and operational challenges of implementing the study will be addressed. Best practices for close collaboration between statisticians and observational researchers, to leverage each other's strengths, will be discussed. Speakers will be from industry, academia, and regulatory agencies.
Vaccine development is a lengthy, expensive process and typically takes multiple candidates and many years to produce a licensed vaccine. Conducting vaccine efficacy trials during a pandemic like COVID-19 poses unique challenges. Trials in resource-limited settings may face severe logistical constraints, and transmission can be highly localized and hard to predict. Furthermore, the ethics of a trial in the face of an emergency are complex. In the current high-mortality situation, populations may not accept randomized, controlled trials with placebo groups. Finally, pandemics generate simultaneous demand for vaccines around the world, yet a potential vaccine needs to be brought to people before the next wave begins. Therefore, such important public health decisions will most likely be based on a relatively small amount of information on vaccine efficacy. Innovative design and statistical analysis approaches are crucial to extract all possible information on vaccine efficacy for robust decision-making.
This session will focus on the key statistical and regulatory aspects of developing a COVID-19 vaccine during a pandemic. Eminent speakers and panelists from academia, industry, and FDA will present the key design, analysis, and review aspects required to bring the most promising candidates to patients.
The purpose of this session is to provide insight to statisticians on effective communication of Bayesian clinical trial design and analysis to statisticians, clinicians, regulatory affairs professionals, and regulators to ensure alignment and mutual understanding.
Explaining the value of a Bayesian approach to clinicians and even to statisticians can be challenging. The situation is further exacerbated by a myth that regulatory authorities do not accept Bayesian methods. Communication clarity and a custom approach for each audience are needed to succeed in convincing the parties involved.
This session has three parts:
1. “Effective Communication of Bayesian Design and Analysis to Statisticians” - FDA speaker
What design and analysis components should be specified? What operating characteristics should be assessed? How should simulation experiments be conducted? How should the decision criteria be specified? How should the findings be reported? References will be made to multiple FDA guidances for industry, relevant literature, and personal experience. (A toy simulation of design operating characteristics appears after this session outline.)
2. “Effective Communication of Bayesian Design and Analysis to Clinicians” - Industry speaker
Effective communication to clinical professionals requires a different strategy than communicating with statisticians. What details should be omitted? What baseline knowledge is useful to clinicians for a clear understanding? How to translate statistical elements in terms of clinical relevance? We will draw on the results of a recent Drug Information Association-Bayesian Scientific Working Group survey assessing Bayesian statistics knowledge, training suggestions, and perspective within the medical community. Furthermore, lessons learned from the experiences in industry and the DIA working group will be discussed.
3. Panel discussion with experts from industry and regulatory agencies
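The simulation sketch referenced in part 1 above is given here; it is not drawn from any speaker's material, and the design, prior, and thresholds are invented purely to show what "assessing operating characteristics by simulation" can look like:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

def prob_success(p_true, n=60, target=0.30, theta=0.975, a=1, b=1,
                 n_sims=100_000):
    """Toy single-arm Bayesian design: declare success if the posterior
    probability that the response rate exceeds `target` is above `theta`.
    Returns the Monte Carlo probability of success at a given true rate."""
    x = rng.binomial(n, p_true, size=n_sims)        # simulated trial outcomes
    post_prob = beta.sf(target, a + x, b + n - x)   # Pr(p > target | x)
    return float(np.mean(post_prob > theta))

print("type I error at p = 0.30:", prob_success(0.30))  # null response rate
print("power at p = 0.50:      ", prob_success(0.50))   # hoped-for rate
```

Communicating such a table of success probabilities across plausible true rates is often the common ground between Bayesian designers, frequentist reviewers, and clinicians.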
Mathematical models that simulate complex physical processes are widely used in regulatory health. These models are typically made up of large systems of differential equations and are implemented in complex computer code. Examples include pharmacokinetic models, which enable the prediction of the distribution, metabolism, and excretion of chemicals in the body; the EPA's Pesticide in Water Calculator (PWC), which estimates pesticide concentrations in water bodies; and mathematical models for infectious diseases that help in understanding disease dynamics.
Goals in using mathematical models include: predicting unknown outputs from known sets of inputs; optimizing the functions by finding representative maximum or minimum values; visualizing the output in relation to a set of inputs; calibrating the models to fit observed sets of inputs; and integrating to obtain the average over sets of inputs. Analyzing data from these models involves manipulating large datasets comprising a large number of inputs related to multiple outputs; in addition, the relationship between a set of inputs and the outputs is sometimes unknown.
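As a small, self-contained example of the kind of model in question (a one-compartment pharmacokinetic model, with all parameter values assumed for illustration):

```python
import numpy as np
from scipy.integrate import odeint

def one_compartment(y, t, ka, ke):
    """One-compartment oral-absorption ODE model: the gut amount drains into
    a central compartment, which eliminates the drug at rate ke."""
    a_gut, a_central = y
    return [-ka * a_gut, ka * a_gut - ke * a_central]

ka, ke = 1.2, 0.15        # absorption and elimination rates (1/h), assumed
dose, vd = 100.0, 20.0    # oral dose (mg) and volume of distribution (L), assumed
t = np.linspace(0, 24, 97)
amounts = odeint(one_compartment, [dose, 0.0], t, args=(ka, ke))
conc = amounts[:, 1] / vd  # central-compartment concentration (mg/L)
print(f"predicted Cmax = {conc.max():.2f} mg/L at t = {t[conc.argmax()]:.2f} h")
```

Prediction here means evaluating the model at new inputs (ka, ke, dose); calibration means adjusting those inputs until the predicted curve fits observed concentrations.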
In this session, researchers in biomedical research and regulatory health will discuss their experiences using mathematical models, to enable an exchange of ideas when facing data challenges. Speakers include Dr. Brandon Gallas (FDA, CDRH) and Dr. Andrew Miglino (FDA, CVM).
Dr. Miglino is a physical scientist; his work at the FDA involves using computer code to implement models that explore environmental problems. His focus is on molecular dynamics, sorption mechanics, diffusion modeling, and advective-dispersive systems.
Dr. Gallas provides mathematical, statistical, and modeling expertise to the evaluation of medical imaging devices at the FDA. His main areas of research are image quality, computer-aided diagnosis, imaging physics, and the design, execution, and statistical analysis of reader studies.
FDA released a guidance, “Policy for Device Software Functions and Mobile Medical Applications,” on September 27, 2019, explaining that it covers “mobile apps that meet the definition of a device and either are intended: · to be used as an accessory to a regulated medical device; or · to transform a mobile platform into a regulated medical device.” Mobile medical applications (commonly called mobile medical apps) present new types of data and statistical challenges in evaluating performance for regulatory decisions. The apps span the gamut from measuring motion and sleep-related disorders to detecting atrial fibrillation, to list a few among many. The associated challenges lie in the sheer size of the data, the complexity of the signals, and the choice of appropriate metrics to evaluate performance. This session will address regulatory experiences with mobile medical apps and statistical challenges in methodology and reporting.
Vaccines have been playing a critical role in public health through the prevention of infectious diseases. An effective vaccine can prevent serious disease, reduce the disease burden on families and communities, or even eradicate a disease (e.g., smallpox). Vaccine research and development have been and will remain important in this ever-changing world. In developing new vaccines and managing the life cycles of existing vaccines, new challenges keep emerging, and innovations follow.
As an integral part of vaccine development, statisticians have also produced innovations in study designs and analysis methodologies. Speakers from academia, industry, and government will share their experiences with some of the following innovations for solving challenges in vaccine development: biomarker assessment (correlates of risk and correlates of protection), especially for first-in-class vaccines; bridging of vaccine efficacy to new settings; multiplicity challenges in multivalent vaccines (multiple serotypes of a pathogen) or due to co-administration with routine vaccines, which results in testing many hypotheses to show non-inferiority of the routine vaccines with vs. without the investigational vaccine; evaluation of effects for new serotypes added on top of an existing vaccine; real-world data/evidence implementation (e.g., hybrid trial designs); application of estimands in vaccine trials to address intercurrent events; and safety assessment/monitoring.
Potential speakers for the session include:
Dr. Peter Gilbert, Fred Hutch; Dr. Lihan Yan, CBER, FDA; Dr. Judy Pan, Sanofi Pasteur; Dr. Jianing Li, Merck & Co., Inc.
Typical oncology practice often includes not only an initial front-line treatment but also subsequent treatments. For example, acute lymphoblastic leukemia patients may receive hematopoietic stem cell transplantation as a subsequent therapy, or a kidney transplant may be given to patients being treated with dialysis. In clinical trials, subsequent therapy is a nuisance covariate that may complicate the interpretation of the experimental therapy: it is usually non-random and may correlate with the patient's response to the front-line treatment, thereby affecting the outcome of the primary endpoint. Conventional intention-to-treat (ITT) analyses that ignore subsequent treatments may be misleading, because they evaluate the combined effects of the front-line and subsequent treatments. The International Council for Harmonization (ICH) guidelines suggest that it is not advisable to adjust the main analyses for covariates measured after randomization, because they may be affected by the treatment. This raises a challenge in analyzing data that include post-randomization treatment, or subsequent therapy. In this session, speakers will present statistical considerations on this issue from regulatory, academic, and industry perspectives. Novel methods for analyzing the impact of subsequent therapy on time-to-event endpoints will be discussed. Both simulations and real case studies will be used to evaluate the pros and cons of different statistical approaches.
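A toy simulation, invented solely to illustrate the attenuation problem described above (it is not any speaker's method, and the hazards, switch time, and switch probability are all assumptions), shows how effective subsequent therapy in the control arm pulls an ITT-style hazard-ratio estimate toward 1:

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 200_000  # large n so Monte Carlo noise is negligible

# Exponential survival: the experimental front-line therapy truly halves the
# hazard (HR = 0.5). Control patients still event-free at month 6 switch,
# with 60% probability, to an effective subsequent therapy that also halves
# their hazard from the switch time onward (valid draw by memorylessness).
h_ctl, h_exp, t_switch = 0.10, 0.05, 6.0
t_trt = rng.exponential(1 / h_exp, n)
t1 = rng.exponential(1 / h_ctl, n)
switch = (t1 > t_switch) & (rng.random(n) < 0.6)
t_ctl = np.where(switch, t_switch + rng.exponential(1 / (h_ctl / 2), n), t1)

def crude_hazard(t, cutoff=36.0):
    """Events per unit person-time with administrative censoring at `cutoff`."""
    return (t <= cutoff).sum() / np.minimum(t, cutoff).sum()

print("true front-line HR: 0.50")
print(f"ITT-style HR estimate: {crude_hazard(t_trt) / crude_hazard(t_ctl):.2f}")
```

The estimated ratio exceeds 0.50 because the control arm's outcomes are improved by the subsequent therapy, which is exactly the confounding the session's methods aim to untangle.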
“Covariate adjustment” in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates”). The baseline variables could include, for example, age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We describe recent developments in statistical methods for covariate adjustment, and present case studies that demonstrate their benefits and limitations. A focus is on methods that are robust to model misspecification. Our speakers are from the FDA, NIH, and academia, in order to give multiple viewpoints on covariate adjustment. The intended audience consists of clinicians and statisticians who design and/or analyze clinical trials. We aim to provide practical, useful information to help guide trial designers in getting the most out of covariate adjustment, while avoiding some pitfalls.
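As a minimal, self-contained illustration of the idea (not drawn from any speaker's material; the data, effect size, and covariate strength are simulated), the following sketch contrasts an unadjusted comparison with an ANCOVA-adjusted one; when the baseline covariate is prognostic, the adjusted estimator typically has a visibly smaller standard error:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
baseline = rng.normal(50, 10, n)   # prognostic baseline covariate
treat = rng.integers(0, 2, n)      # 1:1 randomization
outcome = 0.8 * baseline + 5.0 * treat + rng.normal(0, 8, n)  # true effect = 5

# Unadjusted two-group comparison vs. covariate-adjusted (ANCOVA) estimate.
unadj = sm.OLS(outcome, sm.add_constant(treat.astype(float))).fit()
adj = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, baseline]))).fit()
print(f"unadjusted: {unadj.params[1]:.2f} (SE {unadj.bse[1]:.2f})")
print(f"adjusted:   {adj.params[1]:.2f} (SE {adj.bse[1]:.2f})")
```

Both estimators are unbiased by randomization; covariate adjustment buys precision, which is the "value added" the case studies quantify.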
Dr. Daniel Rubin, an FDA statistician, will present on the recent FDA draft guidance (which he helped to write) on using analysis of covariance (ANCOVA) to analyze trials with continuous-valued or change score outcomes.
Dr. Michael Rosenblum, Associate Professor of Biostatistics at Johns Hopkins Bloomberg School of Public Health, will present case studies of the value added by covariate adjustment in trials from the following disease areas: stroke, Alzheimer's disease, depression, schizophrenia.
Dr. Min Zhang, Associate Professor, Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, will present on covariate adjustment methods for time-to-event outcomes.
Dr. Michael Proschan, an experienced NIH trialist and expert on statistical analysis and monitoring of trials, will present on methods for selecting the variables to be used in covariate adjustment.
Please note: I have not yet confirmed speaker availability.
In 2019, FDA published two draft guidances on drug development for rare diseases: Rare Diseases: Common Issues in Drug Development Guidance for Industry and Rare Diseases: Natural History Studies for Drug Development. Both guidances discuss key challenges in developing novel therapies for rare diseases, notwithstanding the increasing number of orphan drug approvals since 2014. Among the various topics reviewed, the guidances highlight the utility of natural history studies and, more broadly, real-world data (RWD) in demonstrating the effectiveness and safety of novel therapies. While much of this discussion focuses on regulatory approval, it is worth noting that RWD support the whole life cycle of drug development. In fact, RWD are particularly important for studying rare diseases, given the often large knowledge gap between the cause of a disease and a plausible therapeutic approach, and between developing a drug and delivering it to patients. In this session, we invite speakers from regulatory agencies, industry, and academia to discuss issues that are important at different stages of drug development for rare diseases, and to review case studies on how RWD can be generated and used to accelerate the search for novel therapies in the following areas:
• Improving knowledge of the disease epidemiology and natural history
• Assisting the design of (patient-centric) clinical development programs
• Supporting regulatory approval
• Characterizing disease burden to assist health-technology assessment
Much has been written and hyped about Bayesian, adaptive, and complex clinical trials. This short course focuses on the practical details of each of these innovations. We will discuss what each of these concepts are, why they may improve a clinical trial, and the practical ramifications of each of them. A focus will be on providing multiple examples of each.
The What: This section explains each of these three innovations. What are adaptive designs, and what are not? What are the ramifications of using a Bayesian analysis in a clinical trial? Finally, either of these concepts can create a situation in which calculating the operating characteristics is impossible without clinical trial simulation, and thus we label these trials complex innovative designs. Examples of clinical trials labeled as adaptive, Bayesian, and complex will be presented.
The Why: The second focus of the course is why these innovations may be preferred in a clinical trial. Each of these innovations can improve a trial to provide better answers and more answers, more efficiently. In addition, there can be an impact on the patients in the clinical trial. All of these ramifications will be discussed. Again, examples of trials will be presented to demonstrate the potential promise of these innovations.
The How: Perhaps most important is how we utilize these innovations. How does one go about creating an adaptive design? What are the steps, and what are the potential missteps, in creating an adaptive design? Practical advice will be presented on good practices in the construction of adaptive, Bayesian, and complex clinical trials. Examples of trials will be presented with a focus on how they were constructed.
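To make "operating characteristics via clinical trial simulation" concrete, here is a toy two-stage design with a futility look; it is a sketch under invented assumptions (sample sizes, the crude futility rule, the alpha level), not course material:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def operating_chars(p_ctl, p_trt, n1=50, n2=50, n_sims=50_000):
    """Toy two-stage design: after n1 per arm, stop for futility unless the
    treatment arm is ahead; otherwise enroll n2 more per arm and run a
    one-sided pooled two-proportion z-test at alpha = 0.025."""
    x1c = rng.binomial(n1, p_ctl, n_sims)
    x1t = rng.binomial(n1, p_trt, n_sims)
    go = x1t > x1c  # crude, non-binding futility rule
    xc = x1c + rng.binomial(n2, p_ctl, n_sims)
    xt = x1t + rng.binomial(n2, p_trt, n_sims)
    n = n1 + n2
    pooled = (xc + xt) / (2 * n)
    se = np.sqrt(np.maximum(pooled * (1 - pooled) * 2 / n, 1e-12))
    z = (xt / n - xc / n) / se
    win = go & (z > norm.ppf(0.975))
    return win.mean(), 1 - go.mean()

a, stop0 = operating_chars(0.30, 0.30)  # under the null
b, stop1 = operating_chars(0.30, 0.45)  # under the alternative
print(f"null: reject {a:.3f}, early stop {stop0:.3f}")
print(f"alt:  reject {b:.3f}, early stop {stop1:.3f}")
```

Even this toy shows the trade-off a designer must quantify: the futility look saves patients under the null at some cost in power under the alternative.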
Instructor background:
Scott Berry, Ph.D., is President and a Senior Statistical Scientist at Berry Consultants, LLC, and has taught numerous award-winning short courses.
All statisticians at some point in their careers are frustrated by the rejection of their ideas, or by resistance when they try to drive change by implementing a new method or process. Through time and experience, some statisticians learn to overcome these issues and successfully influence their collaborators and business partners. How do they do it? What skills do they practice? The simple answer is leadership - skills that you can learn to get people to listen to your perspective, trust your proposals, and invest in your ideas. With leadership skills, your ideas and innovation have a much greater chance of being adopted and of impacting the drugs you develop and the patients you serve. This short course will focus on the leadership skills necessary to influence others and to take new ideas and innovative thinking into practice. The course will provide:
- A brief introduction to leadership, why it is important, and its role in creating impact through innovation
- A discussion of the critical skills required to influence collaborators and get them to buy into your ideas, including communication, business acumen, and networking
- What it takes to develop and exhibit a professional presence that will allow you to establish and maintain the ability to influence decisions & strategy and impact your organization
Although this course will not turn you into an instant leader, it will provide you with knowledge of what it takes to improve as a leader and an initial direction & focus to get you started on your leadership journey.
Additional Information
This course will include select topics and concepts from two successful ASA leadership courses: Preparing Statisticians for Leadership (taught annually at JSM since 2014) and Leading with Executive Presence (developed and taught as part of Lisa LaVange’s ASA presidential leadership initiative in 2018). Dr. Gary Sullivan will be the course instructor.
Both Statistics and Machine Learning (ML), the latter loosely called Artificial Intelligence in the media, are fields of learning from data. The fact that they share many underlying mathematical principles and theories overshadows the fact that they are based on different philosophies. Ignoring the differences has caused confusion among some statisticians and prevented them from effectively using some ML technologies. This short course is uniquely designed as an introduction to ML for statisticians. It avoids wasting time on topics that statisticians are already familiar with; instead, it emphasizes the areas unique to ML and draws connections between the two fields where the similarity is only superficial. This course was recently taught six times at Merck, and its success shows its effectiveness in helping statisticians learn ML. The course has 5 sections: (1) What ML is: similarities and differences between ML and Statistics, and an overview of the subareas within ML; (2) Supervised learning workflow and methods: the workflow and key concepts of supervised learning tasks, and popular ML methods, e.g., SVM, boosting machines, and random forests; (3) Model inference: using trained models to predict, to select variables, and to gain insights into the data; (4) Unsupervised learning methods; (5) An introduction to deep learning.
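A compact sketch of the supervised learning workflow in section (2), on synthetic data (the dataset, model, and hyperparameters are placeholders, not course material):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real classification problem.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
# Hold out data for honest evaluation: the ML analogue of out-of-sample testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_te, scores):.3f}")

# A crude form of "model inference" (section 3): which inputs drive predictions?
top5 = model.feature_importances_.argsort()[::-1][:5]
print("most important features:", top5)
```

The emphasis on a held-out evaluation set, rather than in-sample fit statistics, is one of the workflow differences between ML practice and classical statistical modeling that the course highlights.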
After obtaining his Ph.D. in 2001 with research focusing on modern ML, Junshui Ma has worked on many ML topics and projects across academia and industry, producing a dozen ML journal papers covering diverse topics. Andy Liaw has been working in related areas since he obtained his Ph.D. in 1997. He is the author of the first popular R package for random forests, and the associated paper has been cited more than 10,000 times. A paper they coauthored on predicting compound activities using deep learning methods became one of the most cited papers in that area.
Following the tremendous success in oncology drug development in the last decade (e.g., the emergence of immune checkpoint inhibitors as a new backbone therapy in multiple tumor indications, and cell-based therapies), recent years have witnessed explosive growth in the number of new drugs and vaccines in cancer trials. While expectations are high, it is unrealistic to expect all of them to have the same success, as the improved standard of care (SOC) has raised the hurdle for demonstrating effectiveness. It is imperative to apply cost-effective designs to the development of these new agents.
In this short course, I will focus on three issues in contemporary oncology development: 1) efficacy screening after dose finding; 2) transition of a new program from early stage to late stage; and 3) biomarker hypotheses in Phase 3 confirmatory trials. On efficacy screening, I will present a variety of optimal basket designs, including an extension of Simon's optimal designs for single-arm trials to multi-arm trials. I will also make a distinction between assessing whether any of the test drugs is effective and whether the test drug is effective in any of the tumor indications, and show its impact on design strategies. On transition from early stage to late stage, I will extend the 2-in-1 design that has quickly become popular since its publication, and then discuss advanced utilization of intermediate endpoints for making transitional decisions in an operationally seamless design with dose selection and a statistically seamless 2-in-1 design. The use of a predictive biomarker can substantially complicate a Phase 3 trial design. I will discuss the various program-level and trial-level options for test drugs with a predictive biomarker hypothesis, including adaptive population expansion designs.
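For context on the single-arm building block mentioned above, the operating characteristics of a classic Simon two-stage design can be computed exactly from binomial probabilities; the sketch below uses the well-known design for p0 = 0.10 vs. p1 = 0.30 (stop after stage 1 unless responses exceed r1 = 1 of n1 = 10; declare promising if total responses exceed r = 5 of n = 29) and is an illustration of the standard calculation, not of the multi-arm extensions discussed in the course:

```python
from scipy.stats import binom

def simon_oc(p, r1, n1, r, n):
    """Probability that a Simon two-stage design declares the drug promising
    at true response rate p: continue past stage 1 only if stage-1 responses
    exceed r1, and conclude efficacy only if total responses exceed r."""
    return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
               for x1 in range(r1 + 1, n1 + 1))

print(f"type I error: {simon_oc(0.10, 1, 10, 5, 29):.3f}")  # ~0.05 nominal
print(f"power:        {simon_oc(0.30, 1, 10, 5, 29):.3f}")  # ~0.90 nominal
print(f"PET under H0: {binom.cdf(1, 10, 0.10):.3f}")        # early-stop prob.
```

Optimal basket designs generalize exactly this kind of calculation across multiple arms or indications, trading expected sample size against error rates.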