All Times EDT
The conversation about estimands has moved into implementation across the industry. Statisticians and their clinical colleagues are sorting through the various disease areas and study designs to create precise definitions of estimands that are clinically meaningful and acceptable to regulatory agencies. This session will feature an enactment of meetings between industry statisticians and clinicians and their regulatory counterparts. A hypothetical Alzheimer’s Disease treatment will be the focus of the discussion. The session will consist of three parts. The first part will portray an End of Phase 2 Meeting between the industry and regulatory participants. In this part of the session, the industry Sponsor will present the rationale for its Phase 3 trial estimands. Regulators will challenge and advise the Sponsor. The second part of the session will take place after the hypothetical trial is complete, when the Sponsor shares results with regulators in a Pre-NDA meeting. There will be challenges and debates. The third part of the session will be a floor discussion and Q&A with the audience.
The hypothetical study and results will be constructed to maximize the opportunity for different perspectives and approaches. Both statistical and clinical viewpoints will be included. Rather than a series of formal presentations and a discussant, ideas and proposals will be presented by the Sponsor, and regulators will share their perspectives in the context of the regulatory interactions. At the end of the session, the audience will be asked to vote on the clinical questions of interest and corresponding estimands considered useful at each stage in the development.
In clinical studies, a result with statistical significance is often misinterpreted as a clinically important result. However, statistically significant results are not necessarily clinically significant, and vice versa. Statistical significance measures the probability that a study's results are due to chance, while clinical significance indicates whether the results of the trial are likely to impact current medical practice. In this talk, we will invite three experts from the FDA, industry, and academia to discuss real examples related to statistical significance and clinical significance and how to balance them in evaluating treatment effects.
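The distinction can be made concrete with a small numerical sketch (illustrative numbers only, not from the talk): in a very large trial, a clinically trivial difference can produce an extremely small p-value, while the same difference in a small trial is not statistically significant. This assumes a two-sample z-test with known, equal standard deviations:

```python
import math

def two_sample_z_pvalue(diff, sd, n_per_arm):
    """Two-sided p-value for a two-sample z-test with known, equal SDs."""
    se = sd * math.sqrt(2.0 / n_per_arm)       # standard error of the difference
    z = diff / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))

# A hypothetical 0.5 mmHg blood-pressure difference (clinically trivial):
print(two_sample_z_pvalue(diff=0.5, sd=10.0, n_per_arm=100_000))  # p < 1e-20
print(two_sample_z_pvalue(diff=0.5, sd=10.0, n_per_arm=50))       # p ~ 0.80
```

The p-value alone says nothing about whether a 0.5 mmHg difference matters to patients; clinical relevance must be judged against a pre-specified minimally important difference.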
The design of time-to-event trials relies on modelling the complex relationship between the number of events, sample size, and patient follow-up. This relationship is further complicated by the fact that survival curves rarely satisfy the assumption of proportional hazards. We present three scenarios in which the usual sample size calculations for event-driven trials, based on the assumption of a constant hazard ratio, can lose considerable power, while more sophisticated modelling approaches, either at the design or interim monitoring stages, can recover the lost power. (1) For immuno-oncology trials characterized by delayed response, one may use "modestly weighted" logrank statistics for the final analysis (Magirr and Burman, 2018) that, unlike the Fleming-Harrington weights, ensure adequate power while also controlling type I error under the strong null hypothesis. (2) It is often difficult to determine at the design stage whether to increase sample size, increase follow-up, or combine the two options to obtain a target number of events. At the interim analysis stage, however, one can apply Bayesian methods to learn from the observed data and select the combination that produces the shortest study duration for a specified predictive power; type I error is controlled by frequentist methods. (3) Maurer and Bretz (2013) have extended the use of graphical methods for multiplicity control to group sequential designs. This application, combined with related alpha-spending approaches to deal with complexities in the evaluation of multiple hypotheses, will be presented. In addition, methods for taking advantage of known correlations to relax group sequential bounds are presented as extensions of designs with a single analysis. Speakers: Pranab Ghosh, Pfizer; Rajat Mukherjee, Cytel; Keaven Anderson, Merck. Discussant: Cyrus Mehta, Cytel.
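The power loss in scenario (1) is easy to reproduce by simulation. The sketch below is illustrative only: it is not the speakers' code, the parameter values are invented, and it uses a plain unweighted logrank test rather than the modestly weighted version. It assumes exponential survival with a delayed onset of the treatment effect:

```python
import numpy as np

rng = np.random.default_rng(1)

def sim_arm(n, hr, delay, base_rate, admin_cens):
    """Simulate survival times with hazard base_rate before `delay` and
    base_rate*hr afterwards (hr=1, delay=0 gives the control arm)."""
    t = -np.log(rng.uniform(size=n)) / base_rate   # as if the hazard never changed
    late = t > delay
    t[late] = delay + (t[late] - delay) / hr       # stretch the tail by 1/hr
    event = t < admin_cens                         # administrative censoring
    return np.minimum(t, admin_cens), event

def logrank_z(time, event, group):
    """Standard (unweighted) logrank Z statistic."""
    o_minus_e, var = 0.0, 0.0
    for s in np.unique(time[event]):               # distinct event times
        at_risk = time >= s
        d = np.sum(event & (time == s))            # events at time s
        n, n1 = at_risk.sum(), (at_risk & group).sum()
        d1 = np.sum(event & (time == s) & group)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def power(delay, n=100, hr=0.6, reps=200):
    """Empirical power of a two-sided 5% logrank test."""
    hits = 0
    for _ in range(reps):
        t0, e0 = sim_arm(n, 1.0, 0.0, base_rate=0.1, admin_cens=24.0)
        t1, e1 = sim_arm(n, hr, delay, base_rate=0.1, admin_cens=24.0)
        time = np.concatenate([t0, t1])
        ev = np.concatenate([e0, e1])
        grp = np.concatenate([np.zeros(n, bool), np.ones(n, bool)])
        hits += abs(logrank_z(time, ev, grp)) > 1.96
    return hits / reps

print("power, no delay:     ", power(delay=0.0))
print("power, 6-month delay:", power(delay=6.0))
```

With these settings the estimated power drops substantially once the effect is delayed by six months, which is the motivation for the weighted alternatives discussed in the session.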
Rare diseases across different biological indications demand alternative strategies for the feasibility of clinical studies. Small patient populations limit study design and implementation. In addition, the genetic basis of, or the co-morbidities associated with, many rare diseases may be confounding factors in the study outcomes. The US Food and Drug Administration has issued guidance on rare diseases: common issues in drug development (January/February 2019), natural history studies (March 2019), and gene therapy for rare diseases (July 2018). Some strategies to consider: • Improved study design (e.g., factorial design, response-adaptive randomization) to minimize the number of participants. • Use of a synthetic control arm, with adjustment for confounding factors either through improved study design or through propensity score matching. • Efficient statistical analysis or incorporation into a larger evidence context (small individual studies contributing to definitive evidence). • Effective statistical outcome measures (e.g., continuous, surrogate, or composite endpoints). • Planning and early collaboration with regulatory authorities to create a pathway to drug development in a timely manner. • Focused recruitment of patients who exhibit a higher probability of the outcome.
Novel statistical methods applicable to orphan drug development include sample size re-estimation, dynamic borrowing through power priors, and fallback tests for co-primary endpoints. Model-based approaches, such as nonlinear mixed effects modeling and Bayesian approaches, could also be applied. In this session, we plan to discuss clinical designs adaptable to rare diseases and the different statistical methodologies.
Endpoints are a cornerstone of scientific research in drug development, and the development of clinically relevant, accurate, reliable, and reproducible endpoints is critical to deploying them successfully in their context of use. This field has been heavily researched and documented, via publications and regulatory guidances, in the context of clinical trials, while it is still nascent in the context of real-world data, which is fast evolving into a formidable data source for evidence generation in scientific and regulatory applications, in large part due to the 21st Century Cures Act and the FDA RWE framework. Given inherent differences in the ascertainment and quality of outcomes in real-world vs clinical trial settings, the development of endpoints using data from real-world sources, such as electronic health records (EHR), claims, and registries, carries special nuances that demand close attention and robust evaluation. It is only recently that consortia facilitated by organizations like the Duke-Margolis Center for Health Policy have started to closely evaluate and specify a scientific framework for the development of real-world endpoints. Understanding how real-world endpoints perform within a pre-specified framework, as well as how they compare with those used in clinical trials, is an important step in assessing their validity and enhancing their utility across use cases, including regulatory decision making. This session will discuss the methodology and key considerations involved in the development of real-world endpoints, including the scientific framework, data considerations, and analytical strategy, highlighting case studies with applications in the regulatory setting. While the main focus will be on endpoints in oncology, particularly tumor response-related endpoints, relevant examples from other therapeutic areas will also be considered.
This session aims to discuss different tools to support R-based submissions in a biopharmaceutical regulatory setting, with special attention given to the validation of the software. Statistical analysis software used for new drug applications in the biopharmaceutical regulatory setting has to fulfill specific requirements. Although the SAS programming language is the current standard, the Food and Drug Administration (FDA) does not require the use of any specific software for statistical analyses. The FDA provides a clear definition of validation (see Glossary of Computer System Software Development Terminology), which can be broken down into three core components: accuracy, reproducibility, and traceability. With these three components in mind, we present ideas and suggestions reflecting the current thinking of the R Validation Hub working group and its collaboration partners; the group is a cross-industry initiative funded by the R Consortium whose mission is to support the adoption of R within a biopharmaceutical regulatory setting. A comprehensive framework is discussed in a white paper (see A risk-based approach for assessing R package accuracy within a validated infrastructure). For the risk assessment, we differentiate two types of R packages: 1) core and recommended packages, which are shipped with the basic installation and whose rigorous software development lifecycle assures minimal risk, and 2) contributed packages, which may vary in their accuracy and development rigor and can be assessed by various metrics. We focus our attention on validating contributed packages. We present some of the tools that provide a workflow to evaluate the quality of a set of R packages: the R package riskmetric and an associated Shiny application to perform risk assessments, and we discuss things to consider when testing R packages. Lastly, we offer some insights into standard tables for submissions and the r2rtf package.
Central to assessing drug safety in pre-market drug development is the identification and evaluation of adverse events (AEs) during clinical trials. Some serious AEs may occur only after the six months of exposure typically seen in a pre-market randomized trial; other AEs may increase in frequency or severity with time. Selection of the number of patients to be followed is based on the estimated probability of detecting a given AE frequency. Patient or product registries with primary data collection can increase the sample size available for making risk-benefit decisions. Common study designs for long-term extension studies include uncontrolled, open-label trials, with patients recruited from multiple studies and with different inclusion/exclusion criteria, prior or concomitant medications, and lengths of follow-up. Other trials have mid-study modifications from a placebo-controlled to an open-label design when it is apparent that the original trial is not enrolling enough participants. Yet other studies, particularly for well-established indications, use a randomized design with active comparators. Emphasis has been placed on designs and data collection that provide efficient and robust assessment of the long-term risks of investigational drugs. Patient or product registries can be used for broader, valid, and resource-efficient assessment of safety. The focus is on practical designs for pre-market safety registries and appropriate statistical methods for risk assessment. Design considerations include strategies to avoid type II errors, inclusion of an active control for contextualization of signals, and operational efficiency for better retention of patients. Analyses of drug safety databases need to consider biases and gaps in the data. A robust discussion of the outlined issues will be presented. Session speakers and panelists have extensive experience in these settings.
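The sample-size consideration mentioned above (the probability of detecting a given AE frequency) follows a standard binomial argument. A minimal sketch, assuming independent patients with a constant per-patient AE risk:

```python
import math

def detection_prob(ae_risk, n):
    """P(observe at least one AE among n patients with per-patient risk ae_risk)."""
    return 1 - (1 - ae_risk) ** n

def n_patients_for_detection(ae_risk, detect_prob=0.95):
    """Smallest n such that detection_prob(ae_risk, n) >= detect_prob."""
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - ae_risk))

# A 1-in-1000 AE: roughly 3/p patients are needed (the familiar "rule of 3")
print(n_patients_for_detection(0.001))        # 2995
print(round(detection_prob(0.001, 3000), 2))  # 0.95
```

This is one reason serious but rare AEs are often observed only in long-term extension studies or registries, whose exposure far exceeds that of the pivotal trials.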
The development and evaluation of COVID-19 vaccines, treatments, and tests continues to evolve to meet new challenges, including the emergence of new SARS-CoV-2 variants. In this session, leading biostatisticians will speak on the latest developments and statistical aspects of vaccine, treatment, and test evaluation. A panel session will follow, in which statistical and regulatory aspects will be discussed, with questions encouraged.
In response to the high interest in and rapid development of statistical methods utilizing historical and real-world evidence (RWE) for clinical trials and observational studies, we propose a session consisting of five experts in this area from academia, regulatory agencies, and industry. The session aims to share and discuss their most recent research on borrowing information from historical data and RWE to improve the design of and decision making in clinical studies. A unique aspect of the session is that it covers this important topic from three angles: 1) academic researchers (Speakers Peter Müller and Brian Hobbs) will share their new methodological developments addressing modeling, inference, and theory; 2) a regulatory expert (Speaker Sue-Jane Wang) will provide the perspective of regulatory advancement on the use of RWE in novel research; and 3) industry leaders (Speakers Sammi Tang and Heinz Schmidli) will share their experiences in implementing and applying these statistical methods. The session is likely to attract a large audience in the space of modern clinical trial methods, especially those who are interested in approaches utilizing real-world and historical data to improve the efficiency of clinical trials and drug development.
The presentation titles of the five speakers are: 1) Brian Hobbs, Bayesian mediation modeling for phase III oncology trial design; 2) Peter Müller, Single arm trials with a synthetic control arm built from RWD; 3) Heinz Schmidli, Robust use of external control information in clinical trials; 4) Rui (Sammi) Tang, A roadmap to using historical controls in clinical trials – by Drug Information Association Adaptive Design Scientific Working Group (DIA-ADSWG); 5) Sue-Jane Wang, Performance of LTMLE in the presence of missing patterns for prospective control-matched longitudinal cohort studies.
The recent advances in cell and gene therapy bring an exciting new era in drug development. While transformative clinical responses to cell and gene therapy have been observed in hematological indications and rare diseases, e.g., B-cell malignancies, multiple myeloma, hemophilia, and sickle cell disease (SCD), there are also many unique challenges in the design and analysis of cell and gene therapy clinical trials. The statistical challenges facing cell and gene therapy development include, but are not limited to: (1) how to identify the optimal dose considering safety and efficacy outcomes simultaneously; (2) appropriate borrowing from historical information; (3) utilization of RWE as an external control for single-arm registration-enabling trials; (4) use of master protocols (e.g., multiple expansion cohorts or platform trials) to optimize operational resources, cost, and timelines for cell and gene therapy early development; (5) how to design efficient confirmatory RCTs that mitigate potential non-proportional hazards patterns, which would otherwise result in power loss; and (6) the estimand framework for single-arm trials and RCTs in cell and gene therapy.
This session will feature speakers from industry, academia, and regulatory agencies who will share their insights and highlight recent statistical innovations to address the challenges of cell and gene therapy trials. List of invited speakers:
• Yuan Ji (University of Chicago): “The Joint i3+3 (Ji3+3) design for phase I/II adoptive cell therapy clinical trials”
• Brian Hobbs (University of Texas at Austin): “Statistical design considerations for trials that study multiple indications”
• Zhenzhen Xu (FDA): “Design for immune-oncology clinical trials involving non-proportional hazards patterns”
• Yang Song (Vertex): “Confirmatory statistical procedures with multiple IAs in a single arm setting”
The ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials has been of great interest to industry and regulatory statisticians and has raised important issues of interpretation and implementation. The Pharmaceutical Industry Working Group on Estimands in Oncology, with over 30 industry statisticians representing over 20 companies plus regulatory, clinical, and academic participants, has been working since 2018 on developing a common understanding of the estimands framework, supporting dialog between industry and regulatory participants, and developing best practices.
This session's presentations are drawn from white papers and journal manuscripts representing the work of three of the Working Group's active subteams: the Treatment Switching Subteam, the Censoring Mechanisms Subteam, and the Hematology Subteam. These subteams have surveyed the field, engaged in extensive discussions across the industry, and developed overviews and recommendations within their respective areas that should be of broad interest to industry statisticians and the clinical trial community. These presentations should be of interest both to people involved in oncology clinical trials and to people involved in implementing and understanding the practical implications of the ICH E9 (R1) estimands guidance.
The COVID-19 pandemic impacted many people, as well as economies worldwide, in 2020. The FDA and pharmaceutical companies have been at the frontline, bringing treatment options and vaccines to protect the public. In the last two months of 2020, Pfizer, Moderna, and AstraZeneca delivered positive news regarding their vaccine trials. Many other companies were working hard as well, hoping to bring more protective vaccines to market.
Vaccine research and development normally takes 10-15 years. However, this process was accelerated significantly for COVID-19 vaccines: from March 2020 to the first study reporting positive results took only 8 months! This acceleration required not only hard work, but also innovation in all aspects of research and development, statistics included. Statisticians played a strategic role in designing clinical trials, defining clinical endpoints, and analyzing the study data. Seamless designs, adaptive designs, Bayesian designs, sequential monitoring, etc., were seen in many trials. The development of these novel, innovative statistical methodologies in earlier years has come to fruition at the right time.
Statisticians from FDA, academia and industry will share the grueling but yet rewarding experiences in developing COVID-19 vaccines. Potential speakers for the session include:
• Dr. Tsai-Lien Lin, CBER, FDA
• Dr. Bo Fu, Sanofi Pasteur
• Dr. Satrajit Roychoudhury, Pfizer
• Dr. Peter Gilbert, Fred Hutch
The Bayesian framework can provide high utility for efficient and innovative drug development. Advantages of Bayesian methods include flexibility in decision making, the ability to incorporate prior information, and a natural accommodation of heterogeneity. The past three decades have witnessed vast development and impact of Bayesian adaptive designs in clinical trials. The following recently published FDA guidance documents mention Bayesian methods explicitly: 1) Adaptive Design Clinical Trials for Drugs and Biologics Guidance for Industry (December 2019); 2) Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products (December 2020); 3) Demonstrating Substantial Evidence of Effectiveness for Human Drug and Biological Products (December 2019; draft). Evolution in approaches and current challenging issues in the use of Bayesian adaptive designs in modern clinical trials will be discussed, including the following: 1) statistical pitfalls in the elicitation of prior distributions for Bayesian adaptive designs, including how to systematically quantify the prior effective sample size and robustness in prior specification; 2) how to handle patient heterogeneity in designing trials, including some recent proposals on how to adaptively identify and test patient heterogeneity and how to efficiently select the promising subgroups as the trial proceeds; 3) statistical issues of potential conflict between concurrent and non-concurrent control data in platform trials, including an in-depth understanding of the use of non-concurrent control arm data and its impact on the type I error rate; 4) new statistical challenges that arise from the development of trials for COVID-19 treatments, including a discussion of a Bayesian integrated sequential trial design to expedite the drug development process for COVID-19 and improve the efficiency of the use of resources. Four experts (3 speakers and 1 discussant) from academia, regulatory agencies, and industry have confirmed to join the session.
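As a small illustration of topic 1): for a binomial endpoint with a Beta(a, b) prior, a commonly used effective-sample-size (ESS) measure is simply a + b, and a power prior discounts historical data before it enters the prior. The sketch below is illustrative only; the function names are hypothetical, and the construction is the standard Beta power prior, not necessarily what the speakers will present.

```python
def beta_ess(a, b):
    """Effective sample size of a Beta(a, b) prior for a binomial endpoint."""
    return a + b

def power_prior_beta(x_hist, n_hist, a0, a_init=1.0, b_init=1.0):
    """Beta power prior: the historical binomial likelihood is raised to the
    power a0 in [0, 1], starting from a Beta(a_init, b_init) initial prior."""
    return a_init + a0 * x_hist, b_init + a0 * (n_hist - x_hist)

# Historical trial: 30 responders out of 100 patients, discounted with a0 = 0.3
a, b = power_prior_beta(x_hist=30, n_hist=100, a0=0.3)
print(a, b)            # 10.0 22.0
print(beta_ess(a, b))  # 32.0 -> the prior is "worth" about 32 patients
```

Sweeping a0 from 0 to 1 moves the prior ESS from 2 (the vague initial prior alone) to 102 (full borrowing), which gives one systematic way to quantify how much historical information a design borrows.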
Platform trials can combine the efficiency of multi-arm trials with the flexibility of classical drug development based on independent studies. Based on an overarching master protocol, treatments can enter and leave the platform trial over time. Operational and statistical efficiency gains can be achieved through a shared trial infrastructure, adaptive design features, and shared (control-arm) data. As the platform trial concept extends the framework of current clinical trials, traditional concepts need to be adjusted to the new paradigm. Current regulatory guidance requires control of the study-wise type I error rate. In contrast, it has been argued that a per-treatment error rate should be controlled in platform trials. However, when multiple treatments are tested in a common framework, it seems reasonable to assess the overall operating characteristics of the platform trial, such as the probability of false positive decisions, the probability of identifying the best treatment, and so on. In this session, we will discuss novel statistical approaches to address multiplicity in platform trials, discuss which error rates to control (and for which family of hypotheses), and consider the role of criteria such as the independence of hypotheses, the sharing of control data, and the phase of the trial. Session structure: 1. Talk, academia; 2. Talk, industry; 3. Discussant, FDA; 4. Discussant, EMA.
Digital data, endpoints, and analyses – regulatory guidance for clinical trials
The 21st Century Cures Act (Cures Act) has ushered in multiple pathways to incorporate the perspectives of patients into the development of drugs, biological products, and devices in FDA’s decision-making process. One important development has been the rapid adoption of digital health technologies (DHTs) in pharmaceutical trials. The capabilities offered by DHTs allow modernization of clinical trial designs, including the use of real-world evidence and clinical outcome assessments, which will speed the development and review of novel medical products. Digital health data collected from DHTs capture patient-reported outcomes that describe physical activities and the broader functional status of patients. Endpoints based on these data vary widely in their contexts of use, definitions, and time frames. Issues relating to within-patient variability in digital measures, validation of endpoints, and methods of analysis are of prime importance to pharmaceutical researchers and regulatory reviewers alike. Consolidated regulatory guidance addressing novel digital endpoints, data collection and standardization strategies, patient privacy concerns, and analysis techniques is expected to be very timely and useful. In this session, we explore aspects of digital health technologies useful for adoption in clinical trials and the applicable regulatory perspective/guidance.