All Times EDT
Statistical innovations are instrumental in constructing a roadmap for deriving Real-World Evidence (RWE) from various sources of Real-World Data (RWD) in order to establish causal interpretations of the evidence for regulatory considerations. To seize this opportunity, a group of regulators, industry professionals, and academic researchers formed the ASA BIOP RWE Scientific Working Group in April 2018 with the purpose of supporting the development of RWE, informing best practices to address statistical considerations, and discussing a path forward for the use of RWD/RWE in regulatory decision-making. As the Working Group enters Phase 2, we propose a reporting session to share interim results on three key statistical considerations for RWE: (1) Estimand - what are we going to estimate? As defined in ICH E9(R1), an estimand is a precise description of the treatment effect reflecting the clinical question posed by the trial objective. The first talk will discuss how to apply this first key step in real-world settings. (2) Data - what are we using to estimate the estimand? As stated in FDA’s RWE Framework, the strength of RWE depends on the reliability and relevance of the underlying data, and data should be selected based on their suitability to address specific regulatory questions. The second talk will discuss this second key step: statistical considerations regarding the use of RWD. (3) Inference - how are we estimating the estimand based on the data? RWE studies pose challenges for establishing causal inference that must be addressed to support effectiveness and safety decisions, and appropriate application of the causal inference roadmap is critical. The third talk will discuss this third step and a path forward for causal inference using RWD. The last key step is the interpretation of findings.
The drug development process remains lengthy and costly. Improving trial designs and methods offers great opportunities but also poses challenges. Regulatory agencies continue to encourage sponsors to apply innovative trial designs and methods by issuing various new guidelines; we are entering a new era of modernized drug development. In this session, industry, regulatory, and academic speakers will share their experiences and thoughts on applying innovative designs and methods to enhance clinical trial flexibility and efficiency. More specifically, in certain disease areas there may be a large placebo effect, and a two-stage sequential parallel comparison design can be utilized to address the issue. With this design, treatment effects can be assessed for different patient subgroups defined by the outcome of stage 1 and then be combined for various purposes. Methods for handling intercurrent events and missing data for this design within the estimand framework stipulated in ICH E9(R1) will be covered in the presentations. In addition, enhanced doubly robust causal estimands will be discussed for trials with incomplete data and imperfect/no randomization. Simulation studies, performed to evaluate the finite sample performance, demonstrate clear superiority of the new method over several commonly used doubly robust procedures, even when the propensity score and outcome models are mis-specified. Examples will be used to illustrate the applications of the novel designs and methods. Speakers: • Hui Quan (confirmed), Sanofi: Novel design for dealing with large placebo effect and the corresponding method for missing data handling, email@example.com • James Hung (confirmed), FDA: Sequential Parallel Comparison Designs in CNS Clinical Trials, firstname.lastname@example.org • Ming Tan (confirmed), Georgetown University: Enhanced Doubly Robust Causal Estimands in Clinical Trials with Incomplete Data and Imperfect/No Randomization, Ming.Tan@georgetown.edu
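For readers unfamiliar with doubly robust estimation, the building block behind such methods is the standard augmented inverse-probability-weighted (AIPW) estimator. The following is a minimal illustrative sketch of that standard estimator, not the enhanced method the talk will present:

```python
def aipw_ate(y, a, ps, mu1, mu0):
    """Standard AIPW (doubly robust) estimate of the average treatment effect.

    y   : observed outcomes
    a   : treatment indicators (1 = treated, 0 = control)
    ps  : estimated propensity scores P(A=1 | X)
    mu1 : outcome-model predictions E[Y | A=1, X]
    mu0 : outcome-model predictions E[Y | A=0, X]
    The estimate is consistent if either the propensity score model or the
    outcome model is correctly specified (hence "doubly robust").
    """
    total = 0.0
    for yi, ai, pi, m1, m0 in zip(y, a, ps, mu1, mu0):
        t1 = m1 + ai * (yi - m1) / pi               # augmented term, treated arm
        t0 = m0 + (1 - ai) * (yi - m0) / (1 - pi)   # augmented term, control arm
        total += t1 - t0
    return total / len(y)
```

When the outcome-model predictions are exact, the inverse-probability-weighted residual corrections vanish and the estimate reduces to the average of the modeled subject-level effects.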
The COVID-19 pandemic has had effects on multiple operational aspects of clinical trials. Three notable areas of impact are an increase in pandemic-related missing data, disruption to patient recruitment, and acceleration of decentralized trials with alternative methods to conduct patient follow-up. In the area of missing data, operational metrics that are leading indicators of potential issues with the primary efficacy endpoints have been helpful. Work will be presented on sophisticated metrics that can flag problems even for complex endpoints such as progression-free survival (PFS). These measures can be used as QTLs and/or in centralized study monitoring at a study level, and can be summarized to quantify risk at a portfolio level. In the area of patient recruitment, recruitment in complex trials is often facilitated by the use of stochastic models. The COVID-19 pandemic brought an additional layer of complexity to recruitment modelling: as countries went into shutdown, sponsors faced the dilemma of re-planning study delivery and re-balancing their portfolios “on the fly”. That translated into the need to adapt the recruitment model to account for the effect of COVID-19. Work will be presented on an approach that blends Poisson-Gamma models, real-time recruitment data updates, and epidemiological modelling of COVID-19 spread. In the area of decentralized trials, the use of alternative mechanisms to conduct patient follow-up, such as telemedicine and remote assessments, in addition to core clinic visits, has been accelerated by the COVID-19 pandemic. Clinical trial sponsors had to react quickly to incorporate different methods for managing the conduct of ongoing clinical trials while assuring the robust collection of safety and efficacy data. Lessons learned will be shared on how the impact of changes in study conduct was managed and the consequences for the planned analyses.
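To give the flavor of such recruitment models, here is a minimal Poisson-Gamma simulation sketch with a crude shutdown window standing in for a COVID-19 disruption. All parameters are hypothetical, and the presented approach additionally blends real-time data updates and epidemiological modelling, which this sketch omits:

```python
import math
import random

def simulate_enrollment(n_sites, days, shape, rate, shutdown=(0, 0), seed=0):
    """One Monte Carlo draw of total enrollment under a Poisson-Gamma model.

    Each site's daily accrual rate lambda_s ~ Gamma(shape, scale=1/rate),
    as in the classical Poisson-Gamma recruitment model; daily counts are
    Poisson(lambda_s), and no accrual occurs during the [start, end)
    shutdown window, a crude stand-in for a pandemic site closure.
    """
    rng = random.Random(seed)
    start, end = shutdown
    total = 0
    for _ in range(n_sites):
        lam = rng.gammavariate(shape, 1.0 / rate)
        threshold = math.exp(-lam)
        for day in range(days):
            if start <= day < end:
                continue  # site closed: no accrual that day
            # Poisson draw via Knuth's multiplication method
            k, p = 0, rng.random()
            while p > threshold:
                k += 1
                p *= rng.random()
            total += k
    return total
```

Repeating such draws yields a predictive distribution of enrollment, which can be re-fit as shutdown dates and site-level accrual data arrive.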
Platform trials combine the advantages of multi-arm trials with the flexibility of classical drug development based on independent studies. Under an overarching master protocol, treatments can enter and leave the platform trial over time. An important factor in the substantial statistical efficiency gain of platform trials is the potential to share data from control arms. In this session we will discuss statistical issues arising from the sharing of controls, focusing especially on challenges associated with the incorporation of non-concurrent controls in treatment-control comparisons. While non-concurrent controls can improve statistical efficiency, they may introduce bias through time trends if these are not appropriately accounted for. Other aspects include the impact of correlated test statistics caused by shared controls; potential information leakage, as control group effect estimates may become known when results from stopped treatment arms are reported; and potential changes to the control group, for example if one of the experimental treatments becomes the new standard of care.
In this session, we discuss Bayesian and frequentist methods to maximize the information gain from shared controls, while maintaining the robustness of clinical trials with respect to type I error rates and bias.
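As a toy illustration of the time-trend issue (hypothetical numbers, not a proposed analysis method): an arm that enters the platform in a later period looks better than it is when compared against a control pool that includes earlier, lower-outcome periods, while a within-period (concurrent-only) comparison recovers the true effect.

```python
def treatment_effect(control_by_period, treated_by_period):
    """Naive vs. concurrent-only estimates of a treatment-control difference.

    Arguments are dicts mapping period -> list of outcomes. The naive
    estimate pools all controls, including non-concurrent ones; the
    stratified estimate compares arms within shared (concurrent) periods
    only, which removes the bias induced by a period effect.
    """
    mean = lambda xs: sum(xs) / len(xs)
    all_c = [y for ys in control_by_period.values() for y in ys]
    all_t = [y for ys in treated_by_period.values() for y in ys]
    naive = mean(all_t) - mean(all_c)
    shared = [p for p in treated_by_period if p in control_by_period]
    stratified = mean([mean(treated_by_period[p]) - mean(control_by_period[p])
                       for p in shared])
    return naive, stratified

# Hypothetical data: outcomes improve by 1.0 from period 1 to period 2
# (a pure time trend); the true treatment effect is 0.5.
controls = {1: [0.0, 0.0], 2: [1.0, 1.0]}
treated = {2: [1.5, 1.5]}  # this arm enters the platform in period 2 only
naive, concurrent = treatment_effect(controls, treated)
```

Here the pooled comparison returns 1.0 (time trend plus treatment effect confounded), whereas the concurrent-only comparison returns the true 0.5; model-based approaches that adjust for period effects aim to regain efficiency from the non-concurrent data without this bias.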
FDA often relies on Advisory Committees to provide independent advice on whether a drug in development is safe and effective. The FDA advisory committee meeting (ACM) is a critical part of the NDA/BLA process when a committee review is called. In this session, we will discuss the objective and process of the ACM and share the experiences of statisticians in different roles from past ACMs.
Speakers from the FDA, industry, advisory committees, and consulting companies may be invited to share their insights: 1) Industry perspective: How does a sponsor prepare for the meeting, and what role does the sponsor's statistical function play? 2) FDA perspective: What is the role of an FDA statistician in an ACM, and how does he/she prepare for the meeting? 3) Consulting company: Share experience and advice on how to make the meetings more effective and productive. 4) Statistical advisor on the AC: What is the unique role of the statistical advisor(s) on the AC? What material and information must be ready before he/she can vote?
Patients with cancer or other recurrent diseases may undergo a series of therapies in the course of their disease. Time-to-event analyses comparing treatments can be complicated when subsequent therapies, such as salvage treatments, are given during follow-up. For example, acute lymphoblastic leukemia patients may receive hematopoietic stem cell transplantation (HSCT) as a subsequent therapy, or a kidney transplant may be given to patients being treated with dialysis. In clinical trials, subsequent therapy may be considered an intercurrent event that complicates the interpretation of the experimental therapy's effect. Additionally, the use of subsequent therapy may be nonrandom: certain subsequent therapies may be administered only to patients who had beneficial responses to the experimental treatment, resulting in an imbalance in the use of subsequent therapy between study arms. An ITT approach that ignores subsequent treatments answers one scientific question of interest, but may not be relevant in clinical contexts where the primary interest is in quantifying a treatment effect without the contribution of a subsequent therapy. Clear specification of the scientific question of interest, study design, and analysis with a causal interpretation are essential for a successful clinical trial.
In this session, speakers will present the perspectives of FDA, academic, and industry experts on novel methods for analyzing the impact of subsequent therapy on time-to-event endpoints.
The ICH E9 (R1) Addendum on Estimands and Sensitivity Analysis in Clinical Trials (Step 4) was finalized in November 2019 and subsequently implemented by many regulatory agencies. Where are we now? How has the ICH E9(R1) Addendum improved drug/biologic development? What are the common challenges and ways to address them? How can we improve interdisciplinary communication? What open questions remain, and what future directions should we consider? These and other questions will be discussed in this session by clinical and statistical representatives from industry and the FDA. Talks will cover experience with using the estimand framework from industry and regulatory perspectives: 1) common challenges and ways to address them; 2) keys to productive interdisciplinary collaboration; 3) the impact of estimand framework use on drug/biologic development; 4) open questions and future directions. Duration: 75 minutes (all speakers and panelists confirmed). 1. Talk 1: Devan Mehrotra (Merck, ICH E9 (R1) Expert Working Group member, statistical) – 20 minutes. 2. Talk 2: Miya Okada Paterniti (FDA, Lead Physician, clinical) – 20 minutes. 3. Panel Discussion – 20 minutes: Panel Member 1: David Lebwohl (Intellia Therapeutics, clinical), Panel Member 2: John Scott (FDA, CBER, statistical), Panel Member 3: Bohdana Ratitch (Bayer, statistical), Panel Member 4: Sylva Collins (FDA, CDER, statistical). 4. Audience Questions and Answers – 10 minutes.
Increased costs and high attrition rates demand innovation in every aspect of drug development. In response to these challenges, FDA launched a Complex Innovative Trial Design (CID) Pilot Meeting Program in 2018 to support the use of complex adaptive, Bayesian, and other novel clinical trial designs. In early 2020, the Experimental Cancer Medicine Centre (ECMC) network in the UK published guidelines for all stages of CID trials in the hope of reducing the time it takes to get innovative treatments to patients with cancer.
Complex innovative designs include, but are not limited to, complex adaptive designs such as multi-arm, multi-stage designs and Bayesian designs. CID designs have been applied in several therapeutic areas across all phases, with many applications in oncology and rare diseases. CID trials present many challenges, including clinical operations, interpretation, and statistical challenges around type I error control and inference. In addition, CID designs require additional simulations and sponsor-regulator review meetings due to the complexity of the methods.
In this session, recent developments in complex adaptive designs and Bayesian methods will be discussed. Experts from both industry and regulatory agencies will share their experiences regarding design implementation, sponsor-regulatory agency interactions, best practices, and lessons learned.
Artificial intelligence (AI) uses algorithms and software to combine large amounts of data, learn automatically from patterns of features in the data, and interpret underlying complex phenomena. AI, currently at the center of the medical horizon, is expected to be used on an ongoing basis to change care pathways by expediting early detection and improving patient access to needed healthcare. The regulatory issues and challenges with endpoint selection, study design, subject selection, and statistical analyses are currently under scrutiny, given the complexity and sophistication with which data are analyzed compared to traditional statistical methods. In this session, we will present and discuss statistical and regulatory challenges for diagnostic medical devices based on artificial intelligence.
In the past 40 years, the field of statistics has witnessed tremendous methodological innovations that have contributed immensely to the development and regulatory evaluation of medical products, including drugs, biologics, and medical devices. Examples of such methodological innovations include those in the areas of adaptive design, covariate adjustment, multiple endpoints, missing data, estimands, subgroup analysis, causal inference, Bayesian techniques, artificial intelligence, machine learning, biomarkers, and diagnostic test evaluation, to name but a few. However, as new technologies and new data sources emerge, more methodological innovations are in demand, for example, for generating real-world evidence from suitable real-world data, utilizing master protocols, and developing precision medicine. In this session, an experienced group of industry and regulatory statisticians will discuss some methodological innovations achieved in the field of statistics in the past 40 years and how these innovations help improve the development and regulatory evaluation of medical products, as well as look ahead to the challenges and opportunities we face. Drs. Greg Campbell of GCStat Consulting, LLC, Paul Gallo of Novartis, John Scott of FDA/CBER, and Lilly Yue of FDA/CDRH have been confirmed to present from industry and FDA perspectives.
The Journal of Biopharmaceutical Statistics (JBS) is preparing a special issue on real-world evidence (RWE). This special issue features innovative statistical methodologies and case studies that incorporate real-world data (RWD) or RWE into medical product development and regulatory decision-making, especially in areas of unmet medical need. For instance, in clinical trials for the treatment of rare diseases, or of rare, genetically targeted subsets of common diseases that need efficacious treatment, randomization may not be feasible due to subject availability and ethical concerns. Using external controls by borrowing information from RWD sources or historical trials can provide relevant evidence (i.e., RWE) to strengthen the evidence from single-arm trials. The use of novel hybrid approaches to forming a control arm in oncology trials will be another highlight of this issue. Further, this issue covers innovative adaptive designs that utilize historical information from different populations in planning pediatric trials. With the advances of technology and increasingly available RWD/RWE, design strategies and data analyses in drug development will become substantially more efficient. This session will focus on recently published special-issue articles with innovative thinking from the perspectives of the pharmaceutical industry and FDA. The objective is to highlight new and diverse uses of RWD and RWE and to stimulate discussion and/or further research and innovation. Speaker 1 from industry. Tentative topic: assessing the safety and effectiveness of a malaria vaccine using real-world data. Speaker 2 from industry or government. Tentative topic: a case study of glaucoma device approval with RWE --- China regulatory experience. Speaker 3 from government. Tentative topic: leveraging multiple real-world data sources in single-arm clinical studies using a propensity score-integrated composite likelihood approach.
Targeted therapies naturally have subgroups. This session discusses current oversights in stratified analyses comparing treatments in randomized controlled trials (RCTs):
1. Using efficacy measures such as the odds ratio (OR) and hazard ratio (HR) can make a prognostic biomarker appear predictive (e.g., the OAK trial results in Gandara et al. 2018), targeting the wrong patients, because the inference is affected by a covert factor even with ignorable treatment assignment in an RCT. As shown analytically and with real immunotherapy patient-level data, OR and HR cannot meet the causal estimand requirement of ICH E9(R1).
2. Subgroup Mixable Estimation (SME) gives causal inference in an RCT by accounting for the prognostic effect, the covert factor hiding in plain sight, when mixing relative response (RR) and the ratio of median survival times (RoM).
3. Current computer packages compound the mistakes by mixing on the logarithmic scale, creating an artificial estimand for the whole population that changes depending on how the population is divided into subgroups.
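A small numeric illustration of the non-collapsibility point (illustrative probabilities, not trial data): two equally sized subgroups can each have OR = 9, yet the whole-population OR computed from the mixed response rates differs, whereas response rates mix coherently and yield a well-defined population RR.

```python
def odds_ratio(p_t, p_c):
    """Odds ratio for treated vs. control response probabilities."""
    return (p_t / (1 - p_t)) / (p_c / (1 - p_c))

def mix(rates, weights):
    """Whole-population response rate as the weighted mixture of subgroup rates."""
    return sum(w * r for w, r in zip(weights, rates))

# Hypothetical response probabilities in two equally sized subgroups.
p_control = [0.1, 0.5]
p_treated = [0.5, 0.9]
w = [0.5, 0.5]

or_subgroups = [odds_ratio(t, c) for t, c in zip(p_treated, p_control)]  # 9 in both
or_population = odds_ratio(mix(p_treated, w), mix(p_control, w))         # not 9
rr_population = mix(p_treated, w) / mix(p_control, w)  # mix rates first, then form the ratio
```

Both subgroup ORs equal 9, yet the population OR is 49/9, so the subgroup value cannot be read as the population value; mixing response rates first (as SME does for RR) gives a population quantity, 7/3 here, that is stable however the population is partitioned.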
In this session, different perspectives from FDA and industry experts will be shared on these causal estimands.
An unprecedented number of new cancer targets are in development, and most are being developed in combination therapies. In 2013, FDA published a guidance for industry on codevelopment of two or more new investigational drugs for use in combination. The guidance emphasizes new investigational drugs and advocates factorial designs. This has been challenging from an industry perspective due to its demand for larger trials and longer development times. In this session, we will show how to design smart trials that adequately demonstrate component contribution in combination therapy.
The rationale for combination therapy is to use drugs that work by different mechanisms, thereby decreasing the likelihood that resistant cancer cells will develop. Among all the potential combinations of immunotherapy, chemotherapy, and targeted therapy, what strategies should be taken for selecting which combinations to develop further, and how should combination-therapy clinical trials be designed from Phase I to Phase III? In this session, we will present a new approach to predicting efficacy endpoints (such as PFS, ORR, DoR, and tumor size) of combination therapies based on efficacy information from each component of the combination, and discuss its statistical implications for Phase II and III trial design and monitoring. We will also look at leveraging historical data in the design and analysis of Phase II randomized controlled combination trials to improve the efficiency of drug development programs, reducing sample size without loss of power. In summary, this session not only celebrates the innovations to date but also looks toward the future of new statistical techniques in facilitating data-driven Phase II/III designs and decision-making in oncology combination therapy development.
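One published approach to predicting combination efficacy from monotherapy data is the independent-drug-action model, in which each patient's combination benefit is the better of the two single-agent outcomes. The sketch below is illustrative of that general idea under simplified assumptions (exponential PFS, Gaussian-copula correlation between drug sensitivities, hypothetical medians), and is not necessarily the method the speakers will present:

```python
import math
import random

def predict_combo_median_pfs(median_a, median_b, rho=0.3, n=20000, seed=0):
    """Predicted median combination PFS under independent drug action.

    Each simulated patient's combination PFS is the maximum of the two
    monotherapy PFS times. Monotherapy PFS is exponential with the given
    medians; patient-level sensitivities to the two drugs are correlated
    via a Gaussian copula with correlation rho. All inputs are hypothetical.
    """
    rng = random.Random(seed)
    la = math.log(2) / median_a
    lb = math.log(2) / median_b
    times = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        # map correlated normals to uniforms, then to exponential PFS times
        u1 = 0.5 * math.erfc(-z1 / math.sqrt(2.0))
        u2 = 0.5 * math.erfc(-z2 / math.sqrt(2.0))
        t1 = -math.log(1.0 - u1) / la
        t2 = -math.log(1.0 - u2) / lb
        times.append(max(t1, t2))
    times.sort()
    return times[n // 2]
```

A predicted combination benchmark of this kind can serve as a reference when judging whether an observed combination effect exceeds what the components alone would explain.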
The session will highlight real-life experiences implementing Bayesian designs in proof-of-concept and pivotal COVID-19 vaccine trials. The speakers and discussant will reflect on key methodological considerations, regulatory interactions, and operational challenges. A Bayesian sequential proof-of-concept (POC) trial investigating the efficacy of Bacillus Calmette-Guérin (BCG) in providing protection against COVID-19 infections through "trained immunity" will be presented. The main design consideration is to provide a framework to rapidly establish POC on the vaccine efficacy of BCG under a constantly evolving incidence rate and in the absence of prior efficacy data. The trial design is based on taking several interim looks and calculating the predictive power with the current cohort at each look; decisions to stop the trial for futility or to stop enrollment for efficacy are made based on the last two predictive power computations. We will also discuss the statistical aspects of the Pfizer Phase III Bayesian adaptive vaccine trial. This trial incorporates multiple interim analyses, each based on achieving a sufficiently high Bayesian posterior probability of vaccine efficacy, and also incorporates early stopping for futility based on Bayesian predictive probabilities. The talk will include key statistical aspects and regulatory challenges of the design. We will then give the regulatory perspective on the use of master protocols to accelerate drug development during the pandemic. The three presentations will be followed by an open discussion led by the Session Chair.
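To illustrate the posterior-probability success criterion in such designs, the sketch below evaluates P(VE > 30%) by Monte Carlo under a beta-binomial model with the Beta(0.700102, 1) prior reported in the public Pfizer protocol; the actual analysis used the exact beta distribution and additional decision rules, so this is a simplified illustration only:

```python
import random

def posterior_prob_ve(cases_vaccine, cases_total, ve_min=0.30,
                      prior_a=0.700102, prior_b=1.0, ndraws=50000, seed=0):
    """Monte Carlo posterior probability that vaccine efficacy exceeds ve_min.

    Under 1:1 randomization, a COVID-19 case falls in the vaccine arm with
    probability theta = (1 - VE) / (2 - VE), so VE > ve_min is equivalent
    to theta < (1 - ve_min) / (2 - ve_min). With a Beta prior on theta and
    a binomial case split, the posterior is Beta(prior_a + x, prior_b + n - x).
    """
    rng = random.Random(seed)
    x, n = cases_vaccine, cases_total
    post_a, post_b = prior_a + x, prior_b + (n - x)
    cut = (1.0 - ve_min) / (2.0 - ve_min)
    hits = sum(rng.betavariate(post_a, post_b) < cut for _ in range(ndraws))
    return hits / ndraws
```

For example, a lopsided split of 8 vaccine cases out of 170 total yields a posterior probability near 1, while an even split yields a probability near 0, which is the quantity compared against the protocol-specified success threshold at each interim analysis.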
In randomized clinical trials, adjustment for baseline covariates at both the design and analysis stages is highly encouraged by regulatory agencies. At the design stage, covariate-adaptive randomization (CAR) is most widely applied, as “Balance of treatment groups with respect to one or more specific prognostic covariates can enhance the credibility of the results of the trial" (EMA, 2015); at the analysis stage, covariates can be used to gain efficiency, as "Incorporating prognostic baseline factors in the primary statistical analysis of clinical trial data can result in a more efficient use of data to demonstrate and quantify the effects of treatment with minimal impact on bias or the Type I error rate" (FDA, 2021).
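A quick simulation sketch of the efficiency point under a hypothetical data-generating model: adjusting for a prognostic baseline covariate leaves the treatment-effect estimate unbiased while shrinking its variance across repeated trials.

```python
import random

def diff_in_means(y, a):
    """Treatment-effect estimate as the difference in group means."""
    yt = [yi for yi, ai in zip(y, a) if ai == 1]
    yc = [yi for yi, ai in zip(y, a) if ai == 0]
    return sum(yt) / len(yt) - sum(yc) / len(yc)

def simulate(n=200, effect=0.5, reps=500, seed=0):
    """Variance of unadjusted vs. covariate-adjusted estimates over many trials.

    Hypothetical model: y = effect * a + x + noise, where x is a prognostic
    baseline covariate. For simplicity the adjustment subtracts x with its
    known slope of 1; in practice the slope is estimated (e.g., by ANCOVA).
    """
    rng = random.Random(seed)
    unadj, adj = [], []
    for _ in range(reps):
        a = [i % 2 for i in range(n)]                # 1:1 allocation
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]  # prognostic covariate
        y = [effect * ai + xi + rng.gauss(0.0, 0.5) for ai, xi in zip(a, x)]
        unadj.append(diff_in_means(y, a))
        adj.append(diff_in_means([yi - xi for yi, xi in zip(y, x)], a))
    def var(e):
        m = sum(e) / len(e)
        return sum((v - m) ** 2 for v in e) / (len(e) - 1)
    return var(unadj), var(adj)
```

Under these settings the adjusted estimator's variance is roughly one fifth of the unadjusted one, since the covariate explains most of the outcome variability; both estimators remain centered on the true effect.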
In response to the FDA draft guidance on covariate adjustment released in May 2021 and the heated discussion around it, this session invites experts from academia, regulatory agencies, and industry to present state-of-the-art methods from a practical perspective and their connections to the FDA draft guidance. A moderated Q&A will be hosted, where the panelists will discuss the broad impact, offer opinions on better practice, and respond to open questions from meeting attendees.