Since Professor Sir David Cox proposed the hazard ratio (HR) and its inference procedure in 1972, the HR has been routinely used to quantify the between-group difference for time-to-event data. More generally, the proportional hazards model has been commonly used for association and prediction analysis. Other models in survival analysis, for example the accelerated failure time model and transformation models, are occasionally utilized in practice. Ideally, a summary measure for the group contrast should be model-free to avoid model misspecification at the analysis stage. However, very few model-free measures are available. The most popular ones are based on either the median failure time or the event rate at a specific time point of the study. Both measures are local summaries and cannot capture the short- or long-term survival profile.
In the past few years, issues with utilizing the HR have been extensively discussed. The validity of the HR estimate depends on a strong model assumption; that is, the ratio of the two hazard functions is constant over time. When the proportional hazards assumption is not met, the HR estimate is difficult, if not impossible, to interpret. In fact, the parameter that the empirical HR estimates is not a simple weighted average of the true HR over time, and it generally depends on the censoring distributions. This is highly undesirable. Moreover, even when the proportional hazards assumption is plausible, it is difficult to assess the clinical utility of the study therapies from a single ratio such as the HR estimate between two groups, since there is no reference hazard value available as a benchmark.
This short course will have two parts. In the first half, we will cover a brief history of the development of survival data analysis and basic theories of survival analysis for one- and two-sample problems. In the second half of this short course, we will illustrate issues with the HR and present an alternative: the t-year mean survival time (restricted mean survival time, RMST). It is the mean event-free time up to a specific time point, say, t years. The RMST was introduced in the statistical literature in 1949, but had not received much attention until recently. Here is the list of topics we will discuss in this short course.
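Concretely, the RMST at a truncation time tau is the area under the survival curve up to tau. The following is an illustrative sketch only (not part of the course materials); the step-function survival values are hypothetical, and in practice one would integrate a Kaplan-Meier estimate.

```python
# Minimal sketch: RMST as the area under a survival step function up to tau.
# The inputs below are hypothetical, not from the course materials.

def rmst(times, surv, tau):
    """Area under a right-continuous step survival curve S(t) up to tau.

    times: sorted times at which S drops; surv: value of S just after each drop.
    S(t) = 1 for t before the first drop.
    """
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)   # rectangle between successive drops
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)     # final piece up to the truncation time
    return area

# Example: S drops to 0.8 at t=1, 0.5 at t=2, 0.2 at t=4; truncation tau=3,
# so RMST = 1*1 + 0.8*1 + 0.5*1 = 2.3 event-free years on average.
print(rmst([1, 2, 4], [0.8, 0.5, 0.2], 3.0))
```

Unlike the HR, this quantity has a direct clinical reading: average event-free time over the first tau years.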
Topics to be covered
The first half
- Brief history of the development of survival data analysis methodology
- One-sample problem
  o Survival function, hazard function, and cumulative hazard function
  o Kaplan-Meier method
- Two-sample problem
  o Testing: log-rank test and its large-sample properties via martingale processes; other non-parametric tests
  o More flexible non-parametric testing procedures
  o Estimation: hazard ratio and its inference based on the partial likelihood
- Regression
  o Cox regression with general covariates
  o Large-sample inference for the Cox model via martingale and empirical processes
  o Regression-based prediction and resampling methods
  o Regression analysis for clustered data, recurrent events, and competing risks
The second half: More on estimation of the between-group difference
- Issues with the hazard ratio (HR)
- Alternative measures
- Restricted mean survival time (RMST)
- Power comparison between RMST-based and HR-based tests
- Empirical choice of the truncation time for RMST
- Study design based on RMST
- RMST for stratified analysis of survival data
- RMST for non-inferiority trials
- RMST to quantify long-term survival benefit
- RMST to assess duration of response
- RMST in the presence of competing risks
- Generalization of RMST for recurrent event time data analysis
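As a small illustration of the one-sample material in the first half, the Kaplan-Meier estimator can be sketched in a few lines. This is our toy code, not the course's; the data are hypothetical.

```python
# Illustrative Kaplan-Meier estimator on toy data (not course material).
from collections import Counter

def kaplan_meier(times, events):
    """Return (t, S(t)) pairs at each distinct event time.

    times: observed follow-up times; events: 1 = event observed, 0 = censored.
    """
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    s, out = 1.0, []
    for t in sorted(deaths):
        at_risk = sum(1 for x in times if x >= t)   # subjects still at risk just before t
        s *= 1.0 - deaths[t] / at_risk              # multiplicative KM step
        out.append((t, s))
    return out

# Toy data: events at times 2 and 5, one subject censored at time 3.
print(kaplan_meier([2, 3, 5], [1, 0, 1]))
```

The censored observation at time 3 contributes to the risk set at time 2 but not at time 5, which is exactly the mechanism the course formalizes via counting processes.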
Overview: The Bayesian paradigm provides a hierarchical and practical way of expressing complicated models through a sequence of simple conditional distributions, making it useful for the simple to complex data structures required to address multiple phases of clinical trials, particularly medical device trials, for which the usual placebo-controlled experiments are often difficult to conduct. In recent years there have been tremendous efforts to develop Bayesian analytics for leveraging data from sources outside a prospectively designed study, referred to as external data, such as various Real-World Data (RWD) sources, historical clinical data, and data from multiple trials within a grand hierarchical structure. Development of appropriate statistical models and related inference is therefore warranted, resting not only on solid theoretical guarantees but also on ensuring that such complex models are applicable and interpretable in practical settings for modern clinical trials. Thus, one of the primary aims of the proposed short course is to present modern analytical tools that are easily accessible to practitioners, providing a glimpse of the theoretical background supplemented by many practical examples derived from real case studies. This will be accomplished by illustrating a set of numerical examples (using standard software), beginning with simple two-arm trials and moving to more complex hierarchical models that handle several data irregularities (e.g., missing values, censored observations) commonly faced by practitioners of clinical trials and observational studies.
Primary Focus: The tutorial will begin with a quick overview of general-purpose Bayesian methods for randomized controlled trials (RCTs) using various study designs, with special emphasis on medical device trials. The second part of the tutorial will involve more realistic and complex models that have recently emerged and are used by pharmaceutical industries and regulatory agencies (e.g., FDA), and will then showcase modern Bayesian machine learning (BML) methods through various real case studies. Throughout the tutorial, practical applications and worked-out examples will be emphasized without getting into the theoretical underpinnings of the methods, but relevant literature will be provided for those wishing to learn more in-depth notions of Bayesian ML tools. Participants with basic knowledge of probability theory and the statistical inference framework will find the short course useful in expanding their standard toolkit to advanced Bayesian analytical methods for clinical trials. The concepts and methods discussed will be demonstrated using popular software packages (R and SAS) developed by the presenters, but they are implementable in any other software capable of coding Markov chain Monte Carlo (MCMC) methods.
Course Outline and main topics:
Part I - Introduction to Bayesian Methods for Clinical Trials
1. Basics of Bayesian Methods for RCTs
2. Predictive Distributions and Sample Size Determination
3. Computational Methods using Monte Carlo Methods

Part II - Primer on Bayesian Software
1. JAGS through R
2. SAS through PROC MCMC and PROC BGLIMM

Part III – Hierarchical Models in Clinical Practice
1. Linear and generalized linear models
2. Multi-level models
3. Penalized regression models with data irregularities (e.g., missing and censored values)
4. Bayesian machine learning methods accounting for estimation uncertainty
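As a taste of the Part I material, the posterior probability that a treatment arm beats control in a simple two-arm binary-endpoint trial can be approximated by Monte Carlo sampling from conjugate Beta posteriors. This is a hedged sketch under assumptions we have made up (counts, uniform priors); real course examples use JAGS and PROC MCMC/PROC BGLIMM.

```python
# Sketch of a basic Bayesian two-arm RCT analysis: Monte Carlo estimate of
# P(p_treatment > p_control | data) under independent Beta(a, b) priors.
# All trial counts below are hypothetical.
import random

random.seed(1)

def post_prob_superiority(x_t, n_t, x_c, n_c, a=1, b=1, draws=100_000):
    """x_t/n_t: treatment responders/size; x_c/n_c: control responders/size."""
    wins = 0
    for _ in range(draws):
        p_t = random.betavariate(a + x_t, b + n_t - x_t)  # conjugate posterior draw
        p_c = random.betavariate(a + x_c, b + n_c - x_c)
        wins += p_t > p_c
    return wins / draws

# Hypothetical trial: 30/50 responders on treatment vs 20/50 on control.
print(post_prob_superiority(30, 50, 20, 50))
```

The same quantity drives Bayesian sample size determination: one simulates predictive data under design assumptions and checks how often this posterior probability crosses a decision threshold.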
Course Outcomes:
i. Attendees will become familiar with the essential concepts and computational methods of Bayesian analytics
ii. Attendees will learn how to deal with practical data irregularity issues that arise especially in multilevel modeling
iii. Attendees will become comfortable using software to conduct Bayesian inference with machine learning models
iv. Attendees will have access to worked-out examples presented using R and SAS
v. Attendees will likely have limited access to an early edition of a book draft that the instructors are currently co-authoring
Instructors’ background:
Professor Sujit Kumar Ghosh is a Full Professor in the Department of Statistics at North Carolina State University (NCSU). He has over 25 years of experience in conducting, applying, evaluating and documenting statistical analyses of biomedical and environmental data. Prof. Ghosh is actively involved in teaching, supervising and mentoring graduate students at the doctoral and master's levels. He has supervised over 35 doctoral students and recently published a popular book titled "Bayesian Statistical Methods", co-authored with Brian Reich, which is being used as a textbook at several universities. He has delivered multiple webinars sponsored by the ASA Biopharmaceutical Section and has also presented a webinar for the DIA's KOL lecture series organized by the BSWG. He has served as an invited discussant for several sessions at previous ASA Biopharmaceutical industry and regulatory workshops. He has published over 115 refereed articles. He is an elected fellow of the ASA and has also served as the Deputy Director of SAMSI (NC).
Dr. Amy Shi is a senior research statistician developer in the Advanced Statistical Methods Department at SAS Institute Inc. Her main responsibility is developing and enhancing the Bayesian capabilities of SAS software, with a focus on generalized linear mixed models, discrete choice models, and multilevel hierarchical settings. She is the developer of the BGLIMM procedure. She has a PhD in biostatistics from the University of North Carolina at Chapel Hill. Amy has given numerous tutorials and taught short courses at various statistical conferences, such as JSM, CSP, and Bayes-Pharma.
In recent years, the rapid increase in the volume, variety, and accessibility of digitized RWD and RWE has presented unprecedented opportunities for the use of RWD and RWE throughout the drug product lifecycle. We believe RWD and RWE will lead statistical innovation in the healthcare industry and in regulatory decision making for the next decades. In clinical development, RWD and RWE have the potential to improve the planning and execution of clinical trials and to create a virtual control arm for a single-arm study for accelerated approval and label expansion. From the product lifecycle perspective, effective insights gleaned from RWE inform the relative benefits of drugs, comparative effectiveness, price optimization, and new indications. Aiming to present a wide range of RWE applications throughout the lifecycle of drug product development, we have written a book, “Real-World Evidence in Drug Development and Evaluation,” published in February 2021. We are excited to share this comprehensive RWD/RWE knowledge with researchers at the 2021 Regulatory-Industry Statistics Workshop.
The goal of the short course is to serve as a resource for practitioners who wish to apply these modern statistical and analytic methods in drug research and development. The short course will cover the essential statistical methodology for causal inference and recent practical case studies that adopted RWD and RWE in clinical development and evaluation.
Course outline:
Part I: Introduction to RWD and RWE in drug development and evaluation
• Background and introduction
• RWD data sources
• Agency guidance
• Challenges & opportunities
Part II: Analytical tools for RWD and RWE
• Causal inference basics
• Propensity score adjustment for RWD
• Sensitivity analysis for unmeasured confounding
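To make the propensity score item above concrete, one common adjustment is inverse probability weighting (IPW). The following is a minimal sketch assuming the propensity scores e_i = P(A=1 | X_i) have already been estimated elsewhere; the function and data are illustrative, not course code.

```python
# Hedged sketch of the IPW estimate of the average treatment effect (ATE).
# e: propensity scores assumed to be pre-estimated; all inputs are toy values.
def ipw_ate(y, a, e):
    """y: outcomes; a: treatment indicators (0/1); e: propensity scores."""
    n = len(y)
    treated = sum(yi * ai / ei for yi, ai, ei in zip(y, a, e)) / n          # weighted treated mean
    control = sum(yi * (1 - ai) / (1 - ei) for yi, ai, ei in zip(y, a, e)) / n  # weighted control mean
    return treated - control

# Toy data with constant propensity 0.5, as in a 1:1 randomized trial,
# where IPW reduces to the simple difference in arm means.
y = [3, 1, 4, 2]; a = [1, 0, 1, 0]; e = [0.5] * 4
print(ipw_ate(y, a, e))
```

In RWD the scores are not constant, and the weighting deliberately up-weights subjects who received a treatment that was unlikely given their covariates, which is why propensity model misspecification and extreme weights are central sensitivity concerns.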
Part III: Utilizing RWD and RWE in clinical development
• Utilize synthetic control to support single-arm studies
• Utilize natural history studies for rare disease development
• Utilize RWD/historical data for label expansion
• Practical considerations for using RWD/historical data in clinical development
Part IV: Utilizing RWD and RWE in post-marketing drug development
• Patient-reported outcomes and analysis
• Benefit-risk assessment methods
• Real-world evidence in health care decision making
• Statistical methods used for post-marketing surveillance
Recommended Textbook: Yang, H. and Yu, B. (2021). Real-World Data and Evidence in Drug Development and Evaluation. Chapman and Hall/CRC. Boca Raton, FL. https://www.taylorfrancis.com/books/edit/10.1201/9780429398674/real-world-evidence-drug-development-evaluation-harry-yang-binbing-yu
Instructor background: Dr. Binbing Yu is a Director in the Oncology Statistical Innovation group at AstraZeneca. He serves as a statistical expert across the whole spectrum of the drug R&D process, including early-clinical and clinical research, design, operation and manufacturing, clinical pharmacology, oncology medical affairs and post-marketing surveillance. He obtained his PhD in Statistics from the George Washington University. His primary research interests are clinical trial design and analysis, cancer epidemiology, causal inference in observational studies, PK/PD modeling and Bayesian analysis. He was previously the Biometry Section Chief at the National Institute on Aging. He has nearly 80 publications in scientific and statistical journals and has published a book on statistical methods for immunogenicity.
Dr. Bo Lu is a Professor of Biostatistics in the College of Public Health, the Ohio State University. He is an elected fellow of the American Statistical Association. He obtained his PhD in Statistics from the University of Pennsylvania. His primary research interests cover causal inference with observational data; matching/weighting adjustment for complex designs, including multiple treatment arms, time-varying treatment initiation, and complex survey weights; Bayesian nonparametric modeling for heterogeneous causal effects; and statistical methods for survey sampling. He has been PI on both federal and local government-funded research grants on causal inference methodology. He also has extensive collaborations with the pharmaceutical industry on utilizing causal inference methods to leverage RWD in drug discovery.
Dr. Qing Li is a senior manager in the statistical methodology group under the statistics and quantitative science department at Takeda Pharmaceutical Company. His responsibilities include statistical methodology development and consultation for real-world evidence and advanced adaptive designs from proof-of-concept to late-phase studies across multiple therapeutic areas, including oncology, gastroenterology, rare disease, and vaccines. His research interests include propensity score methods, RWE, adaptive designs, immuno-oncology design and surrogate endpoints. He obtained his MS and PhD degrees in biostatistics from the University of Iowa.
Classical clinical trials are conducted in a sequence defined by phases, for example phases 1, 2 and 3, with the last phase aiming to confirm the treatment effect and support regulatory approval. The high attrition rate and increasing cost of drug development demand innovation in clinical trial design. Adaptive seamless designs (ASDs) combine two phases, i.e., phases 2 and 3, with an opportunity to modify the design at the interim analysis and potentially increase the probability of success of the study.
ASDs come in a variety of forms. The most commonly used ASDs include treatment/dose selection and population enrichment, with the flexibility of sample size re-estimation and early stopping. Treatment/dose selection methods have been used in many therapeutic areas, while population enrichment designs are mostly employed in oncology studies. Significant progress and advancement of statistical methodology around ASDs have been achieved in the statistical community. However, there is still confusion, and challenges remain, when it comes to designing a specific trial. For example, when there is great uncertainty about the true effect size, how does the initial assumption about the treatment effect affect the overall study power? How do other factors, such as the interim analysis schedule and the choice of boundaries, affect the operating characteristics? Should all patients from the two stages be used for the final statistical inference? What is the regulatory agency's position on this type of design? Is there a best practice for some scenarios, i.e., when to use what?
In this short course, concepts and theoretical background will first be introduced to facilitate a better understanding of the merits of ASDs. Considerable attention will be devoted to solving practical issues in implementing ASDs, including treatment selection and population enrichment. We propose a structured, comprehensive strategy that can help achieve adequate power and a cost-effective sample size in trial design when there is great uncertainty about the true effect size. Case studies in oncology and other therapeutic areas will be discussed. Demonstrations will be performed using the R/Shiny-based software DACT (Design and Analysis for Clinical Trials). Attendees will be granted access to DACT for training purposes.
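The power questions raised above are typically explored by simulation. The following is a hedged sketch of a two-stage design with a futility stop at interim, in a simplified one-sample setting with known unit variance; it is our illustration, not the DACT software, and it ignores the multiplicity and alpha-spending adjustments a real design would require.

```python
# Illustrative two-stage design simulation: stage 1 of n1 patients, futility
# check at interim, then n2 more patients and a final z-test. Hypothetical
# parameters; real ASDs need proper boundary/alpha-spending calibration.
import random
import statistics

random.seed(2)

def two_stage_power(delta, n1, n2, z_futility=0.0, z_final=1.96, sims=2000):
    successes = 0
    for _ in range(sims):
        s1 = [random.gauss(delta, 1) for _ in range(n1)]
        z1 = statistics.mean(s1) * n1 ** 0.5      # interim z-statistic (sd known = 1)
        if z1 < z_futility:                       # stop early for futility
            continue
        s2 = [random.gauss(delta, 1) for _ in range(n2)]
        z = statistics.mean(s1 + s2) * (n1 + n2) ** 0.5   # final z on all patients
        successes += z > z_final
    return successes / sims

# Hypothetical standardized effect 0.3 with 50 + 50 patients:
print(two_stage_power(0.3, 50, 50))
```

Rerunning this over a grid of assumed effect sizes shows directly how sensitive overall power is to the initial treatment-effect assumption, one of the questions posed above.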
In statistical methodology research and in statistical practice across wide-ranging application areas from medicine to finance, statistical simulations are indispensable for showing the operating characteristics of a given design against alternatives and for comparing models that aim to explain the variability and overall profile of a given outcome variable of interest. Depending on the response variables of interest in such simulations, univariate or multivariate, iterative or non-iterative, simulation designs must be considered very carefully to produce generalizable, repeatable, and reproducible conclusions on any given platform; this task is much more difficult and under-recognized than is commonly imagined. In this short course, we will introduce simple to more complex simulation designs and discuss the importance of simulation size; we will describe potential pitfalls that may not be easily recognizable and suggest what data and metadata should be captured for a clear description of the simulation results. We plan to carry out examples in both SAS and R to show similarities and differences between the two platforms. The course will be provided in four modules:
Module-1: Simulating data for univariate random variables following the Gaussian distribution, Student's t distribution, gamma distribution and its special cases, beta distribution, binomial distribution, Poisson distribution, etc.
Module-2: Simulation designs for one-sample hypothesis testing for continuous, binary, and survival endpoints. In this module, we will also illustrate iterative simulation designs such as phase I dose escalation designs and Simon's two-stage designs.
Module-3: Simulation designs for two- or more-sample hypothesis testing for continuous, binary, and survival endpoints. One of the main focuses here will be empirical power calculations for randomized clinical trials.
Module-4: Simulation designs for multivariate random variables and designs that require iterative processing.
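As a minimal instance of the Module-3 material, the empirical power of a two-sample test is simply the fraction of simulated trials in which the test rejects. A sketch in Python (the course itself uses SAS and R; the sample size, effect size, and known-variance z-test below are illustrative choices):

```python
# Empirical power of a two-sample z-test for a continuous endpoint, estimated
# by simulation. Illustrative parameters; variance is assumed known (= 1).
import random

random.seed(0)

def empirical_power(n, delta, z_crit=1.96, sims=5000):
    """Fraction of simulated trials where |Z| exceeds the critical value."""
    rejections = 0
    for _ in range(sims):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]    # control arm
        y = [random.gauss(delta, 1.0) for _ in range(n)]  # treatment arm
        z = (sum(y) / n - sum(x) / n) / (2.0 / n) ** 0.5  # difference in means / SE
        rejections += abs(z) > z_crit
    return rejections / sims

# Standardized effect 0.5 with n = 64 per arm; theory gives power near 0.80.
print(empirical_power(64, 0.5))
```

The Monte Carlo standard error of this estimate is about sqrt(p(1-p)/sims), which is exactly why the course's emphasis on choosing the simulation size matters: with 5000 replicates the power is pinned down only to within roughly ±0.01.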
Digital health technologies (DHTs) are technologies that use computing platforms, digital connectivity, software, and/or sensors for healthcare and related uses. These technologies span a wide range of applications from general wellness to medical devices. These products are also used as diagnostics, therapeutics, or adjuncts to medical products (devices, drugs, and biologics). They may also be used to develop or study medical products. DHTs include use of electronic technologies such as artificial intelligence (AI), software as a medical device, and mobile medical applications.
DHTs are moving healthcare from the clinic to patients by improving understanding of patient behavior and physiology outside traditional clinical settings and enabling early therapeutic interventions. DHTs, such as sensors and other telehealth tools, provide important opportunities in clinical trials to gather information directly from patients at home (decentralized clinical trials), and to gather frequent or continuous medical data from patients as they go about their lives. DHTs can rely on advanced algorithms that are susceptible to errors, which may lead to malfunction or misinterpretation of health data. Therefore, regulatory science tools and methods, such as simulations to test algorithm performance, need to be developed to protect data integrity and improve the overall reliability of DHTs.
In this course, the first speaker, Susan Murphy, will discuss micro-randomized trials and reinforcement learning for constructing personalized mobile DHTs for behavioral modification, with application to individuals at risk of adverse cardiovascular events. Chad Gwaltney will then discuss case studies where clinical trial endpoints are being developed based on data from continuous remote monitoring of symptoms and physical activity. The third speaker, Andrew Potter, will discuss statistical perspectives on DHTs from FDA/CDER. In the fourth presentation, Berkman Sahiner and Matthew Diamond will present an overview of the CDRH Digital Health Center of Excellence approach to DHTs, with a focus on artificial intelligence/machine learning-based radiological devices.
Under the 21st Century Cures Act, the FDA is directed to develop a program to evaluate how real-world evidence (RWE) can potentially be used to support approval of new indications for approved drugs or to support/satisfy post-approval study requirements. This brings new opportunities to utilize statistical innovations and advances that are critical to assess and address data quality as well as establish causal inference based on real-world data (RWD) for regulatory decision-making. However, designing a valid RWD-based study and generating high-quality RWE face numerous challenges such as confounding, treatment switching, and missing information, including informative censoring. To date, propensity score methods have been predominantly used to design and analyze RWE studies in a regulatory setting, with a focus on confounding control. Although propensity score methods can be a useful tool to address the aforementioned statistical challenges, recent innovations such as targeted maximum likelihood estimation (TMLE), which can utilize an ensemble of machine learning algorithms, can be a much more powerful and efficient tool to address those challenges. TMLE also has a doubly robust property that protects against misspecification of either the propensity score model or the outcome regression model (but not both).
This course is a continuation of a previous short course, “SC4 - Causal Inference for Real-World Evidence: Propensity Score Methods and Case Study”, which was provided at the 2020 ASA Biopharmaceutical Section Statistics Workshop. The previous short course covered the causal inference framework and propensity score-based methods (matching and inverse probability weighting), and discussed how to design and analyze an RWD-based study using these methods under a question of interest coherent with the ICH E9 regulatory definition of an estimand. As a continuation, this proposed short course will consist of two parts. In part 1, Dr. Hana Lee from the FDA will provide a general overview of causal inference and various methods for confounding control, including propensity score methods, outcome regression-based methods (g-computation), and doubly robust methods. Dr. Lee will explain how recent innovations in statistics, such as various machine learning algorithms, can be used to draw causal inference, with a brief introduction to TMLE. Then Dr. Susan Gruber, an expert in TMLE and Founder and Principal of Putnam Data Sciences LLC, will cover topics in targeted learning, machine learning, and TMLE. In addition, Dr. Gruber will discuss statistical considerations in the use of TMLE for an RWE study (e.g., how to select candidate machine learners and associated tuning parameters) and how to write a statistical analysis plan accordingly. The course will wrap up with a case-study example demonstrating a practical use of TMLE for drug safety and efficacy evaluation. Mock R code will be provided during the case-study illustration.
During the training, participants will learn (1) the foundations of the causal inference framework and its necessary assumptions, (2) various classes of causal methods for confounding control, (3) the theory and application of TMLE, including best practices for applying TMLE to an RWE question of interest, and (4) how to write a statistical analysis plan following the targeted learning framework and using TMLE.
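For intuition about the doubly robust property, consider the augmented IPW (AIPW) estimator, a simpler relative of TMLE: it combines outcome-model predictions with propensity-weighted residuals, and is consistent if either model is correct. A hedged sketch (the arrays below stand in for fitted-model outputs; TMLE itself adds a targeting/fluctuation step not shown here):

```python
# Sketch of a doubly robust (AIPW) point estimate of the average treatment
# effect. Inputs are illustrative arrays, not outputs of any real fitted model.
def aipw_ate(y, a, e, m1, m0):
    """y: outcomes; a: treatment (0/1); e: propensity scores P(A=1|X);
    m1, m0: outcome-model predictions under treatment and under control."""
    n = len(y)
    est = 0.0
    for yi, ai, ei, m1i, m0i in zip(y, a, e, m1, m0):
        est += m1i - m0i                                   # g-computation part
        est += ai * (yi - m1i) / ei                        # treated residual, IPW-corrected
        est -= (1 - ai) * (yi - m0i) / (1 - ei)            # control residual, IPW-corrected
    return est / n

# If the outcome model fits exactly (zero residuals), AIPW reduces to the
# g-computation average of m1 - m0; toy numbers:
y = [2.0, 1.0]; a = [1, 0]; e = [0.5, 0.5]
m1 = [2.0, 3.0]; m0 = [0.0, 1.0]
print(aipw_ate(y, a, e, m1, m0))
```

In targeted learning, the nuisance predictions m1, m0, and e would come from a Super Learner ensemble rather than parametric models, which is the setting the course develops.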
Instructors Background: 1. Martin Ho is the Head of Biostatistics at Google. He provides biostatistical leadership across the total product life cycles of clinical and consumer products at Google, from ideation, research and development, and early-phase and confirmatory clinical studies for regulatory approvals to post-market health technology assessments. Prior to Google, Martin was Associate Director at the Office of Biostatistics and Epidemiology at the Center for Biologics Evaluation and Research and Associate Director at the Office of Clinical Evidence and Analysis at the Center for Devices and Radiological Health, U.S. Food and Drug Administration. Before joining the FDA, he worked as a biostatistician at various contract research organizations for 10 years. Martin earned a master's degree in statistics from the University of Wisconsin-Madison.
2. Susan Gruber: Susan Gruber, PhD, MPH, MS, is Founder and Principal of Putnam Data Sciences, LLC, a statistical consulting and data analytics consulting firm. Dr. Gruber received her master's in public health and a doctoral degree in Biostatistics from the University of California at Berkeley, and a master's degree in Computer Science from the University of California at San Diego. Former positions include Senior Director of the IMEDS Methods Research Program for the Reagan-Udall Foundation for the FDA, and Director of the Biostatistics Center in the Department of Population Medicine at Harvard Pilgrim Health Care Research Institute. Dr. Gruber's work focuses on the development and application of data-adaptive methodologies for improving the quality of evidence generated by observational and randomized health care studies. She is a leading expert in Targeted Learning, an efficient double-robust approach to obtaining unbiased estimates of causal parameters. In addition to authoring foundational papers, she wrote the first publicly available software for applying TMLE in point treatment settings, and to estimating the marginal mean outcome of a multiple time-point intervention. She is also an expert in Super Learning, an ensemble machine learning approach to predictive modeling. Dr. Gruber's previous short courses and webinars include: (1) An Introduction to Targeted Maximum Likelihood Estimation of Causal Effects. Putnam Data Sciences Targeted Learning Webinar Series. March, 2020. (2) Targeted Learning for Data Adaptive Causal Inference in Observational and Randomized Studies. Third Seattle Symposium on Healthcare Data Analytics, Seattle, Washington, October, 2018. (3) Beyond Logistic Regression: Machine Learning for Propensity Score Estimation. FDA CDER Office of Biostatistics Division of Biometrics VII, Silver Spring, Maryland, January, 2018. (4) An Introduction to Super Learning for Prediction. Takeda Pharmaceuticals, Inc., Boston, Massachusetts, June, 2017.
Abstract: As the paradigm of drug development shifts to personalized medicine and targeted therapies, the pool of eligible clinical trial patients becomes increasingly small, and there is a need for rapid learning and confirmation of clinically meaningful treatment effects. Master protocols, including umbrella, basket, and platform trials, promote innovation in clinical trials and aim at improving efficiency, avoiding duplication and competition, and accelerating the drug development process. Though master protocols used to be sponsored primarily by nonprofit organizations, academic institutes and government agencies, mostly in oncology, in recent years there has been a growing trend in the pharmaceutical industry of conducting clinical trials under master protocols in oncology and other therapeutic areas. Regulatory agencies across the globe have issued guidance on master protocols. In 2018, the ASA Biopharmaceutical Section Oncology Scientific Working Group (SWG) was chartered to explore innovative statistics in oncology drug development, and a sub-team on master protocols in oncology was formed. In this short course, instructors from the Oncology SWG master protocol sub-team will provide an overview of master protocols, their regulatory landscape, statistical methodologies, special statistical considerations, and challenges and opportunities, both statistical and operational. The last part of this short course will include a case study of a pediatric platform trial, NCI-COG Pediatric MATCH, with detailed illustrations of how to implement the considerations discussed in the earlier part of the short course. See below for the structure of the short course:
Part 1: Overview of master protocol a. Overview of master protocol: Definitions, Examples b. Review of Regulatory Landscape: US and Rest-of-World regulatory guidance c. Practical Considerations
Part 2: Novel statistical methodologies used in master protocol trials, statistical and operational consideration
Part 3: Case Study of Pediatric Platform Trial: NCI-COG Pediatric MATCH
Instructors’ background: The confirmed instructors gave a short course at the Deming Conference 2020 under the same title. Nicole and Cindy have been the co-leads of the Master Protocol sub-team under the ASA Biopharmaceutical Section Oncology Scientific Working Group (SWG) since 2018. Jingjing, Nicole and Cindy are the current co-leads of the Joint DIA-ASA Master Protocol Working Group. The ASA WG manuscript “Practical guide and recommendations for master protocol framework: basket, umbrella and platform trials” is under review.
Relevant presentations and short course by the instructors:
1. Master Protocol and Its Application, Session D Tutorial, Deming Conference, Dec. 2020
2. Type-I Error Considerations in Master Protocols with Common Control in Oncology Clinical Trials, FDA Oncology Center of Excellence & ASA Biopharmaceutical Section’s Virtual Discussion, Oct. 2020
3. Master Protocol in Pediatric Cancer Trials, ASA Biopharmaceutical Regulatory/Industry Statistical Workshop 2020, Sep. 2020
4. Practical Considerations for Master Protocol Framework - Basket, Umbrella and Platform Trials, JSM, Aug. 2020
Target audience:
1. Statisticians working in the pharmaceutical industry
2. Epidemiologists/statisticians working in the field of epidemiology
3. Statisticians working on applications from clinical medicine
4. Researchers who are interested in the applications and methods of modern meta-analysis
Prerequisites for participants: None.
Computer and software requirement: Some familiarity with R programming.
Description: Comparative effectiveness research aims to inform health care decisions concerning the benefits and risks of different prevention strategies, diagnostic instruments and treatment options. A meta-analysis is a statistical method that combines results of multiple independent studies to improve statistical power and reduce certain biases compared to individual studies. Meta-analysis also has the capacity to contrast results from different studies and to identify patterns and sources of disagreement among those results. The increasing number of prevention strategies, assessment instruments and treatment options for a given disease condition has generated a need to simultaneously compare multiple options in clinical practice using rigorous multivariate meta-analysis methods. This short course, co-taught by Drs. Chu and Chen, who have collaborated on this topic for more than a decade, will focus on the most recent developments in multivariate and network meta-analysis methods. The short course will offer a comprehensive overview of new approaches, modeling, and applications of multivariate and network meta-analysis. Specifically, the instructors will discuss contrast-based and arm-based network meta-analysis methods for multiple treatment comparisons; network meta-analysis methods for multiple diagnostic tests; multivariate methods to visualize, detect, and correct for publication bias; and multivariate meta-analysis methods for estimating the complier average causal effect in randomized clinical trials with noncompliance.
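As background for the multivariate and network methods above, the univariate fixed-effect inverse-variance pooling that they generalize fits in a few lines. This sketch is ours, with made-up study estimates:

```python
# Fixed-effect inverse-variance meta-analysis: pool study estimates with
# weights 1/se^2. Study-level numbers below are hypothetical.
def fixed_effect_pool(estimates, ses):
    """Return (pooled estimate, pooled standard error)."""
    weights = [1.0 / se ** 2 for se in ses]                      # precision weights
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5                      # SE of weighted mean
    return pooled, pooled_se

# Three hypothetical studies reporting log odds ratios:
est, se = fixed_effect_pool([0.3, 0.5, 0.2], [0.1, 0.2, 0.1])
print(round(est, 4), round(se, 4))
```

Network meta-analysis extends this idea to a vector of contrasts linked across a graph of treatments, and the arm-based versus contrast-based distinction discussed in the course concerns what quantity plays the role of "estimate" in this pooling.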
This proposed course is a sequel to my BIOP2019 short course on composite endpoints (with 65+ attendees). This follow-up is intended to keep the applied audience up to date with the rapid and exciting methodological developments in the field and to provide more hands-on instruction on their real-life implementation.
Composite outcomes combining death and (possibly recurrent) nonfatal events, such as hospitalization and tumor progression, have long been the endpoint of choice for the primary analysis of many clinical trials. The past couple of years, in particular, have seen tremendous progress in the statistical methodology for such endpoints. Compared with the existing approaches introduced in the 2019 prequel, the new methods are noteworthy for: 1. clear and interpretable definitions of estimands in compliance with the ICH E9(R1) guideline; 2. fuller utilization of recurrent nonfatal events while prioritizing death; 3. versatile modeling strategies for general outcome types. In addition to these methodological advancements, several user-friendly R packages have also emerged to assist in their practical implementation. Given such remarkable developments, this follow-up course is highly necessary and will benefit statisticians and clinicians alike in their research and practice.
Syllabus:
1. Introduction. 1.1 Definition and examples; 1.2 ICH E9(R1) and its implications; 1.3 Traditional methods.
2. Nonparametric analysis. 2.1 Estimands and their inference (curtailed win ratio, restricted proportion/mean time in favor of treatment); 2.2 Sample size calculation; 2.3 Real examples using R packages WR and rmt.
3. Semiparametric regression analysis. 3.1 Overview of regression models; 3.2 Model specification, inference, and sensitivity analysis (win-ratio regression, generalized semiparametric proportional odds model); 3.3 Real examples using R packages WR and GSPO.
4. Prospects for future work (interim analysis, high-dimensional baseline characteristics, etc.).
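To make the win-ratio idea in the nonparametric material concrete, here is a toy computation of the unadjusted win ratio, ignoring censoring (real data require restricting to determinate pairs, which the course's methods formalize). The data, hierarchy, and function name are hypothetical.

```python
# Toy win ratio for a death/nonfatal-event composite, ignoring censoring.
# Each subject is (death_time, nonfatal_event_time); larger times = better.
def win_ratio(trt, ctl):
    """Compare every treatment-control pair: death time first, then the
    nonfatal event breaks ties. Return wins / losses (ties contribute neither)."""
    wins = losses = 0
    for t in trt:
        for c in ctl:
            if t[0] != c[0]:                 # hierarchy level 1: death
                wins += t[0] > c[0]; losses += t[0] < c[0]
            elif t[1] != c[1]:               # hierarchy level 2: nonfatal event
                wins += t[1] > c[1]; losses += t[1] < c[1]
    return wins / losses

# Hypothetical arms of two subjects each:
trt = [(5, 3), (4, 2)]
ctl = [(4, 1), (5, 2)]
print(win_ratio(trt, ctl))
```

The prioritization of death over the nonfatal event is the key feature distinguishing this estimand from a time-to-first-event analysis, which would treat a hospitalization and a death symmetrically.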