Donna Rivera is the associate director for pharmacoepidemiology at the US Food and Drug Administration Oncology Center of Excellence. She leads the Oncology Real-World Evidence Program, focusing on the use of real-world data and real-world evidence for regulatory purposes, as well as managing the real-world data research portfolio strategy and the development of regulatory policy. Rivera has extensive research experience in the use of real-world data to advance health equity, in observational study designs and methodological approaches, and in appropriate uses of real-world data in medical product development to increase patient access to effective therapies. She is currently a Scientific Executive Committee member for the COVID-19 and Cancer Consortium and leads Project Post COVIDity, a collaborative real-world data effort to assess the longitudinal sequelae of COVID-19 in patients post infection.
Nancy Dreyer is senior vice president and chief scientific officer, emerita, for real-world solutions at IQVIA and adjunct professor of epidemiology at The University of North Carolina at Chapel Hill. She is responsible for driving innovation in medical product development and commercialization using passive and/or active collection of real-world data to generate evidence for regulators, clinicians, patients, and payers. A fellow of both the International Society for Pharmacoepidemiology and DIA, she is well known for her thought leadership. Dreyer has helped advance the use of real-world evidence for regulatory purposes, influencing the content of recent guidelines by regulators in the US, Europe, and China, each of which cites one or more of her publications. Her substantial executive and field experience has helped hone her pragmatic views.
Theodore (Ted) Lystig is senior vice president and chief analytics officer at BridgeBio, where he provides leadership and guidance in the use of robust statistical and research design methods. He also holds the position of adjunct assistant professor within the division of biostatistics at the University of Minnesota. Lystig is an elected fellow of the American Statistical Association and an elected member of the International Statistical Institute. He is a founding officer and past chair for the ASA Section on Medical Devices and Diagnostics and an executive committee member of the Clinical Trials Transformation Initiative. Lystig is an internationally recognized industry leader in statistical methodology, especially in active surveillance for medical devices. He is a frequent speaker at international statistics meetings and has given invited seminars at venues such as Stanford, Harvard, and the US Food and Drug Administration.
Pallavi Mishra-Kalyani is a supervisory mathematical statistician in the Division of Biometrics V, Office of Biostatistics, which supports the Office of Oncology Drugs at the Center for Drug Evaluation and Research. Since joining the FDA in 2015, Mishra-Kalyani has contributed to the efforts to understand and address statistical issues related to the potential use of external controls, real-world data, and real-world evidence for regulatory purposes. Her research interests include statistical methods for observational data, causal inference, and nonrandomized trial design, and she has organized and participated in several statistics and oncology workshops, conferences, and working groups in these areas.
This town hall will be an open discussion with invited panelists who bring a wide variety of experience using digital technologies in the pharmaceutical and medical device space. This open town hall-style session will cover many aspects of how digital technologies have changed the way we do healthcare research. Fitting with the theme of the workshop, we will openly discuss how technologies, along with the effects of the pandemic, war, and access to healthcare, have shaped our approaches to healthcare research. From industry, the statistician who analyzed data from the CHIEF-HF trial will discuss his experiences using technologies that made remote clinical trials possible. Another panelist, from diagnostics, will give a high-level overview of the modeling process, specifically simulation of disease processes (in particular, diabetes mellitus) and the lifestyle and medication models that have been created. From the FDA, panelists will discuss regulatory perspectives on statistical challenges in a remote clinical trial and data collection challenges arising from the use of mobile technologies.
Dr. Hong Lu is an assistant director in the Division of Biostatistics at CDRH of the U.S. FDA. Dr. Lu supervises the statistical team that reviews in vivo diagnostic devices, including a large number of digital health products such as mobile medical apps and software devices. Dr. Lu holds a doctorate in statistics from the University of Michigan, Ann Arbor.
Dr. Andrew Potter is a mathematical statistician in the Division of Biometrics I at CDER of the U.S. FDA, supporting review work in the Division of Psychiatry. He also leads digital health technology initiatives in the Office of Biostatistics in CDER. His research interests include the use of digital health technologies in clinical trials and the analysis of high-frequency outcome data, and he is involved in FDA working groups on these topics. He received his PhD in biostatistics from the University of Pittsburgh and his bachelor's degree in physics from Cornell University.
CV Damaraju has more than 25 years of experience in the pharmaceutical industry across multiple therapeutic areas. He is currently a statistical leader at Janssen R&D, LLC, supporting the Medical Affairs, Cardiovascular & Metabolism franchise. He also serves on the BASS Program Committee and is an elected officer of the ASA New Jersey Chapter. His latest publication, from the CHIEF-HF study, appeared in Nature Medicine. His ongoing work involves the design and analysis of clinical trials integrated with digital health technology and real-world data streams.
Gail Kongable is the senior manager for simulation and modeling in the Abbott Rapid Diagnostics Division. She is a family nurse practitioner with over 30 years of experience in nursing and over 21 years of experience with simulation and modeling research and analytical services. She has over 25 publications in peer-reviewed medical journals within the last five years alone.
Rare diseases are defined differently in each region by its respective regulatory health agency. These definitions, which separate rare from common diseases, are usually based on the population of that specific region. Intuitively, if a condition is regulated as a rare disease, there is a relatively small patient population and limited knowledge with which to properly diagnose and treat the condition. This creates an obvious challenge in drug development, where clinical trials usually rely on central-limit-theorem-based statistics that require a large sample size. Encouraged by many legislative incentives, more and more drug manufacturers are investigating promising orphan products for treating rare diseases, and statisticians have risen to the challenge of tackling the small-sample-size issues associated with orphan drug development. In the FDA draft guidance entitled "Rare Diseases: Common Issues in Drug Development", the natural history study is emphasized as the first component to be considered for the clinical development of an orphan product. A well-designed natural history study can provide background on the rare disease in terms of disease progression, endpoint selection and validation, and treatment duration. The same guidance also discusses the importance of external historical controls to reduce the number of clinical trial subjects and gain efficiency in demonstrating a treatment effect. Real-world data are an intuitive resource for both natural history studies and historical controls. However, there remains a gap in appropriate methodologies for real-world data usage in the rare disease setting due to selection bias and temporal effects. Other common real-world data issues, such as missing data and reproducibility, are often less concerning in this setting. That creates a unique platform for statisticians and researchers to think outside the box and propose novel approaches.
In this session, we will discuss FDA guidance on rare disease clinical programs in detail, with balanced representation from industry and regulators. We will summarize how to better design natural history studies and how to utilize real-world data to serve the needs of clinical development for rare diseases. Details will include statistical considerations, innovative methods, and trial designs, supported by real case examples.
Statistical leadership is vital to the continued improvement of pharmaceutical drug development. It is well documented that the cost and time of drug development continue to grow at rates that are unsustainable. The statistical leader plays an important role in ensuring that drugs are developed more efficiently via innovative designs and analyses, that the right candidates move forward, and that decisions are made with a quantitative framework that properly balances all available data. Additionally, there have been many learnings obtained and statistical leadership demonstrated throughout the COVID pandemic, and now is the time to capitalize on those learnings to ensure they are carried forward. There is tremendous opportunity for statisticians to lead the pharmaceutical industry into the next generation of drug development.
In this session, statistical leaders from across industry and regulatory agencies will discuss the current state of statistical leadership in drug development and opportunities to utilize statistical leadership to drive drug development forward. These leaders will also reflect on historical lessons that showcase the value of statistical leadership and discuss keys to successful leadership. The intent of this session is to identify specific, actionable next steps to partner together to dramatically shift the role of the statistician as a leader in all phases of the drug development process. This session is in partnership with the Biopharmaceutical Statistics Leadership Consortium.
This will be a panel discussion in which panelists have specific roles and provide responses pertinent to their function and experience. The panel will include confirmed participants in the following roles: Industry Statistician, Industry Clinician (EU), FDA Statistical Division Head, FDA Clinical Team Leader, ICH E9(R1) Addendum Expert, and former European Medicines Agency Statistician. In addition, a panelist will respond from the perspective of prescribers and patients. A sampling of questions follows:
• What are relevant questions for different stakeholders (health authorities, prescribers, payers, sponsors, etc.) in a trial? Any examples?
• Are estimand proposals driving innovation in terms of using more focused clinical questions and proposing novel ways to handle intercurrent events? Or are we only seeing analytic approaches similar to the existing pre-estimand approaches?
• How would estimand choices relate to labeling language?
• Now that sponsors are submitting estimand descriptions in their protocols/SAPs, is anything changing in how the primary efficacy endpoint is analyzed? Or has the estimand framework generally been used to describe/support the standard analyses that were already performed?
• The treatment policy strategy is widely recommended by health authorities. It can be implemented only if data can be retrieved after the intercurrent events handled by this strategy. For example, how likely are patients who have discontinued treatment to be followed up at the targeted clinical visit? How much data needs to be retrieved for a meaningful analysis?
• How are treatment discontinuations handled? Does the reason for discontinuation make a difference in how these are handled for the analysis?
• Are there differences in preference for estimand strategies between different health authorities?
• Should limitations in estimators (e.g., stronger assumptions and less desirable operating characteristics) influence the questions of interest in a trial?
Recent approvals of tumor-agnostic indications for pembrolizumab, larotrectinib, and entrectinib have generated tremendous interest in developing targeted medicines using the tumor-agnostic pathway. Despite this excitement, study designs and the regulatory framework for the tumor-agnostic pathway remain incipient. In this session, the ASA BIOP Oncology Methods SWG Master Protocol Sub-team and the DIA Innovative Design Scientific Working Group (IDSWG) oncology sub-team will jointly discuss statistical considerations in study designs with the potential to support the tumor-agnostic approval pathway and the challenges in such designs and strategies, including types of study designs when a pre-specified biomarker is involved, control of type I error, and the number of tumor types included in the study. Considerations on biomarker and/or companion diagnostic development to fulfill regulatory requirements will be shared. Statistical considerations on common challenges and concerns in the approval of such an indication will be reviewed, including, but not limited to, demonstrating homogeneous efficacy in tumor types with small sample sizes and the evidence required for accelerated approval versus post-market requirements. Case studies as well as hypothetical scenarios will be provided during the discussion.
The increasing availability of medical data and advancements in the field of artificial intelligence (AI)/machine learning (ML) have resulted in the rapid growth of AI/ML-based software as a medical device. Fundamentally, ML algorithms analyze large amounts of data to identify useful patterns for making predictions and recommendations. Because of their close dependence on the data, the performance of ML algorithms can be highly sensitive to shifts in the data due to changes in clinical practice patterns, patient case mix, epidemiology, and more. This has led the FDA, along with other regulatory agencies, to publish a document on Good Machine Learning Practice that includes the guiding principle "Deployed models are monitored in real-world use with a focus on maintaining or improving safety and performance." Nevertheless, there are many open statistical questions on how quality assurance and improvement of ML algorithms should be performed, including how to appropriately utilize real-world data streams and how to minimize the risk of introducing deleterious model updates. This session will bring together perspectives from academia, industry, and the FDA to discuss recent advances in statistical methodology for ensuring the long-term safety and effectiveness of AI/ML-based software as a medical device and the many challenges that lie ahead.
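The monitoring principle quoted above can be made concrete with a minimal sketch (our own illustration with hypothetical numbers, not an FDA-endorsed method): flag post-deployment batches whose error rate drifts beyond a simple binomial control limit around the validated baseline.

```python
import math

def drift_alarm(batch_error_rates, baseline, n_per_batch, z=3.0):
    """Return indices of batches whose observed error rate exceeds a z-sigma
    binomial control limit around the validated baseline error rate."""
    limit = baseline + z * math.sqrt(baseline * (1 - baseline) / n_per_batch)
    return [i for i, p in enumerate(batch_error_rates) if p > limit]

# Hypothetical monitoring run: baseline error 10%, 500 cases per batch.
alarms = drift_alarm([0.11, 0.12, 0.18], baseline=0.10, n_per_batch=500)
print(alarms)  # only the batch at 18% breaches the 3-sigma limit
```

A real monitoring program would also track calibration, subgroup performance, and input-distribution shift; this sketch covers only a single aggregate metric.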
Bayesian methods have attracted great attention and have been applied increasingly to clinical trials since the FDA issued the guidance Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products in 2020. With its ability to utilize historical data or real-world data (RWD), Bayesian analysis allows us to shorten development timelines, reduce the number of participants in a clinical trial, or increase the amount of information available for more efficient and robust statistical inference, and thus lower total cost. One key assumption for the application of Bayesian analysis is consistency between the historical and current datasets. Propensity scores are often used to match patients and alleviate the concern of inconsistency, and a power prior can further be applied for dynamic data borrowing based on the observed level of inconsistency. Moreover, information embedded in a surrogate endpoint, or a treatment effect on a surrogate endpoint, is helpful for enhancing the assessment of the probability of success (PoS) of a phase III study. In this session, innovative Bayesian methods will be presented. They include a stratified propensity score-integrated power prior approach to augment a treatment group (in a single-arm study or a two-arm randomized controlled trial) from data sources, such as real-world and historical clinical studies, containing subject-level outcomes and covariates. A more robust bivariate Bayesian method linking the treatment effects on surrogate and phase III endpoints for the assessment of PoS for a phase III study will also be discussed, and other recent novel Bayesian methods will be shared by the speakers. Simulations and examples will be used to compare the performance of the methods and illustrate their applications, to inspire future research.
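As a minimal numeric illustration of dynamic borrowing (a conjugate normal sketch of our own with hypothetical numbers; the propensity-score-integrated methods in this session are considerably richer), a power prior down-weights historical data by a factor a0 in [0, 1]:

```python
def power_prior_posterior(xbar, n, xbar0, n0, a0, sigma2=1.0):
    """Posterior mean/variance for a normal mean with known variance sigma2,
    a flat initial prior, and historical data down-weighted by a0."""
    w_cur, w_hist = n / sigma2, a0 * n0 / sigma2   # precision contributions
    mean = (w_cur * xbar + w_hist * xbar0) / (w_cur + w_hist)
    var = 1.0 / (w_cur + w_hist)
    return mean, var

# Current trial: mean 1.2 on 50 subjects; historical: mean 1.0 on 200 subjects.
no_borrow, _ = power_prior_posterior(1.2, 50, 1.0, 200, a0=0.0)
full_borrow, _ = power_prior_posterior(1.2, 50, 1.0, 200, a0=1.0)
partial, _ = power_prior_posterior(1.2, 50, 1.0, 200, a0=0.3)
print(no_borrow, partial, full_borrow)  # partial borrowing lies between the extremes
```

In dynamic-borrowing schemes, a0 itself is driven by the observed agreement between the two datasets rather than fixed in advance.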
In May 2021, the U.S. Food and Drug Administration (FDA) released a revised draft guidance for industry on “Adjustment for Covariates in Randomized Clinical Trials for Drugs and Biological Products”. This guidance discusses adjustment for covariates in the statistical analysis of randomized clinical trials in drug development programs, focusing specifically on the use of prognostic baseline factors to improve precision for estimating treatment effects. Despite regulators such as the FDA and the European Medicines Agency recommending covariate adjustment, it remains highly underutilized, leading to inefficient trials in many disease areas. This is especially true for binary, ordinal, and time-to-event outcomes, which are quite common in COVID-19 trials and are, moreover, prevalent as primary outcomes in many disease areas (e.g., Alzheimer’s disease, stroke). Research and guidance on this topic could therefore not be more timely. In response to the FDA draft guidance on covariate adjustment, this session invites experts representing a variety of viewpoints from academia, the pharmaceutical industry, and the FDA. The aim of this session is to discuss and address some key obstacles behind the underutilization of covariate adjustment, which were brought up in the comments submitted to the FDA in response to the May 2021 draft guidance.
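The precision gain at stake can be quantified with a textbook approximation (our own illustration with hypothetical numbers): for a continuous outcome analyzed by ANCOVA, adjusting for a baseline covariate with outcome correlation rho shrinks the variance of the treatment effect estimate, and hence the required sample size, by roughly a factor of (1 - rho^2).

```python
def adjusted_sample_size(n_unadjusted, rho):
    """Approximate sample size after ANCOVA adjustment for a baseline
    covariate correlated rho with a continuous outcome."""
    return n_unadjusted * (1 - rho ** 2)

# A trial sized at 400 subjects without adjustment, with a baseline
# covariate (e.g., the baseline value of the outcome) correlated 0.6:
print(adjusted_sample_size(400, 0.6))  # 256.0 subjects, a 36% reduction
```

For binary, ordinal, and time-to-event outcomes the mechanics differ, which is part of why adjustment is underused for exactly those endpoints.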
In 2020, generic drugs accounted for 88% of prescriptions filled in the U.S. With the passage of the Generic Drug User Fee Amendments (GDUFA) in 2012, generic drugs have helped reduce the time and cost of development of therapeutic products without compromising safety and effectiveness. Likewise, the Biologics Price Competition and Innovation Act (BPCI Act) of 2009 created an abbreviated licensure pathway for biosimilar products that are demonstrated to be not clinically different from an FDA-approved biological product. The market share of biosimilar products has spiked in recent years, following the same trajectory as generic drugs.
However, statistical research is still lagging in these areas. For locally acting generic drugs, a three-arm parallel clinical endpoint bioequivalence (BE) study is often used to establish BE between a generic (T) and an innovator drug (R). Clinical endpoint BE studies, however, are much more expensive than traditional pharmacokinetic (PK) BE studies. Due to potential inconsistency between the original NDA study and the ANDA study for generics, some clinical endpoint BE studies may over- or underestimate some study design parameters and end up enrolling more than a sufficient number of subjects or failing to pass BE due to insufficient power. For biosimilar studies, when there are multiple reference products (e.g., an EU-approved product and a US-licensed product), a pharmacokinetic/pharmacodynamic (PK/PD) bridging study is often conducted to bridge the clinical data from the original region (e.g., Europe) to the new region (e.g., the United States) in support of the biosimilar regulatory submission in the new region. The purpose is to avoid duplicative clinical trials for clinical similarity between a proposed biosimilar product and the reference product in the new region, provided that there is no relevant demographic difference between the two regions. Optimizing these study designs for BE and biosimilar studies to improve their effectiveness and efficiency is a pressing task for both regulators and applicants under both the Biosimilar User Fee Amendments (BsUFA II) and the Generic Drug User Fee Amendments (GDUFA II).
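For orientation, the core decision rule in a standard average-bioequivalence analysis (a simplified sketch with hypothetical numbers, not one of the novel designs presented in this session) is the two one-sided tests (TOST) procedure: BE is concluded when the 90% confidence interval for the geometric mean ratio of T to R lies entirely within 80-125%.

```python
import math

def be_tost(log_gmr, se, z=1.6449, lo=math.log(0.80), hi=math.log(1.25)):
    """Average-BE check via TOST: conclude BE when the 90% CI for the log
    geometric mean ratio lies within (log 0.80, log 1.25). A normal quantile
    is used here for simplicity; practice would use a t quantile."""
    lcl, ucl = log_gmr - z * se, log_gmr + z * se
    return lo < lcl and ucl < hi

print(be_tost(math.log(1.05), 0.05))  # GMR 1.05 with a tight CI: BE concluded
print(be_tost(math.log(1.05), 0.20))  # same GMR but a wide CI: BE not concluded
```

Clinical endpoint BE studies apply an analogous interval criterion to a clinical response difference or ratio, which is what makes their sample sizes so sensitive to mis-specified design parameters.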
In this session, speakers from the FDA and academia/industry will propose innovative adaptive and alternative statistical designs for clinical endpoint BE studies and PK/PD biosimilar bridging studies. Statistical models and methods under the proposed designs will be discussed, and power, sample size, and type I error rate will be compared to those of traditional study designs using simulation. An FDA expert in the field will serve as discussant and provide recommendations based on these talks.
This session includes the following two presentations:
Presentation 1: Title: Study Design for Assessment of Biosimilars with Multiple References
Speaker: Shein-Chung Chow, Professor, Duke University, former Associate Director for Biosimilars at OB/FDA
Presentation 2: Title: A Novel Two-Stage Sequential Adaptive Comparative Clinical Endpoint Bioequivalence (BE) Study Design
Speaker: Wanjie Sun, Lead Statistician, DBVIII/OB/CDER/FDA
Discussant: Stella Grosser, Division Director, DBVIII/OB/CDER/FDA
The ICH E9 (R1) guidance on estimands and sensitivity analysis was finalized by the FDA in May 2021. The Pharmaceutical Industry Working Group on Estimands in Oncology has been continuing to review and refine key clinical trials issues in light of the estimands framework. This panel discussion addresses some of the work the Working Group’s task forces have been engaging in to look more closely at issues common in oncology trial design.
Questions we'll discuss include:
How can we encourage consistent analysis and interpretation of Duration of Response and Time to Response in clinical trials?
What is the clinical question of interest if patients receive the option of subsequent therapy?
How does concern about causal estimands impact the way we do time-to-event trials?
What do we mean by follow-up time in a clinical trial?
Statisticians have acquired a comprehensive understanding of the statistical principles that apply to clinical trials. We have abided by those principles in developing procedures and methods for clinical trial design and analysis. However, the COVID-19 pandemic has posed enormous challenges to statisticians, who must adapt to an ultra-fast development pace while holding to those statistical principles.
Based on our experiences developing COVID therapeutics as well as preventive vaccines, it can be challenging to pre-specify the "important details"—as advised in ICH E9—of a clinical trial with regard to design, conduct, or analysis. The disease can change, as new variants of the virus emerge; the treatment can change, as other interventions (e.g., vaccinations) are added as concomitant medications; and the de facto patient population can change, as CDC recommendations on vaccination or personal protective equipment usage change. Moreover, with the continuously evolving pandemic, it becomes almost impossible to pre-specify the treatment effect or even the "background" event rate. As one consequence, the typical interim analysis strategy may fail to work. At the final analysis, cases have been reported in which the pre- and post-interim analysis results differ drastically, which obfuscates the interpretation of the findings even with the most straightforward design.
Meanwhile, the battle against the pandemic also provides opportunities for statisticians to develop and improve complex innovative trial designs (CID). Platform trials have become almost necessary: in one master protocol, multiple therapeutic or vaccine candidates are evaluated in multiple cohorts (e.g., different patient populations based on oxygen use), and treatment arms and cohorts are added to or dropped from ongoing trials based on clinical and statistical findings. Within each group (a specific treatment and patient cohort), phase 1/2/3 seamless and adaptive designs are often applied, and sample size re-estimation becomes essential. Indisputably, the pandemic is expediting the progress of CID development.
In this session, speakers from the FDA, industry, and academia will report the statistical challenges they have experienced when designing, conducting, analyzing, and reviewing COVID-19 therapeutic and vaccine clinical trials. They will also discuss practical and innovative approaches that they have implemented or that can potentially be applied to address these issues. Specifically, the following topics will be presented.
First, given the urgent need, simultaneously testing multiple treatment arms (e.g., different combinations of monoclonal antibodies) and multiple endpoints (e.g., hospitalization or death, and symptom resolution) can be advantageous and even necessary. At the same time, it is also desirable to stop the trial early, when an efficacy signal is detected from a treatment arm(s). This, however, can create multi-fold hurdles for the interim analysis and multiplicity control. The speaker will discuss details of the issues, together with possible solutions based on actual trial development experience.
Second, non-inferiority (NI) trials can be deemed necessary for COVID-19 therapeutic development because of ethical considerations. Nevertheless, identifying a clinically and statistically appropriate NI margin can be an extremely formidable task, since the "background" rates for infection or clinical outcome such as hospitalization or death are changing constantly. In this topic, the speaker will discuss the practical approaches for determining NI margin and for conducting sensitivity analysis to evaluate the possible consequences regarding the pre-specified NI margin.
Third, temporal effects (i.e., the effects of shifts in patients’ characteristics, trial conduct, and other features of a clinical trial) clearly cannot be ignored in COVID-19 trials, as they might be in more typical settings. The speaker will discuss a statistical procedure for accounting for temporal effects in COVID-19 clinical trials in the context of confirmatory decision-making.
Last but not least, the unique statistical challenges for developing a vaccine in an expeditious manner without compromising safety and effectiveness in the context of an outbreak will also be discussed, including determining correlate of protection, modifying ongoing Phase 3 clinical studies to evaluate long-term safety and efficacy when a vaccine becomes available during the study, and designing clinical studies to evaluate safety and efficacy of new candidate vaccines or new variants post vaccine authorization or licensure.
The goal of personalized medicine is to narrow the reference class to yield more patient-specific effect estimates that support more individualized clinical decision-making. Patients in a trial differ from one another in many ways that can affect the outcome of interest and the potential for benefit. Heterogeneity of treatment effects (HTE) is the variation in how individuals respond to a treatment. Treatment benefit often varies among individuals in a trial due to differences in important demographic or disease characteristics, such as gender, race, age, region, and disease subtype. HTE is also observed in the study-level parameters of meta-analyses because of imbalances in important disease characteristics (known as "effect modifiers"). Appropriate assessment of HTE is critical to regulators, pharmaceutical companies, policy makers, researchers, and patients. In current practice, the primary analysis focuses on the treatment benefit observed in the overall population. However, this "average" benefit may not be applicable to all patients in the trial due to possibly heterogeneous benefits in different subgroups. Conventional subgroup analysis is often inadequate to address HTE due to uncontrolled false positive signal detection (of treatment benefit) and high variability caused by limited data in each subgroup. Bayesian methods are particularly useful in this context. The FDA Center for Drug Evaluation and Research (CDER) started an initiative to publish drug trials snapshots (DTS) for the public. Estimated treatment effects from Bayesian hierarchical models are included in several published DTS to provide more precise efficacy results for different groups of patients.
This session will focus on several case studies on the application of Bayesian methods to estimating heterogeneous treatment effects in clinical trials and regulatory decisions. The real-life examples include, but are not limited to, using a Bayesian hierarchical model in preparing DTS, borrowing information from one region to support regulatory approval in another region, and borrowing information from one subtype to support a separate indication for a similar but different subtype of disease. This session will feature speakers from industry and a regulatory agency.
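The shrinkage behavior that makes Bayesian hierarchical models attractive for subgroup estimation can be sketched in a few lines (our own illustration with a normal model, fixed overall mean, and fixed between-subgroup SD, all numbers hypothetical; the models behind DTS are fit fully, e.g., by MCMC):

```python
def shrink(subgroup_means, subgroup_ses, mu, tau):
    """Posterior means under a normal hierarchical model with theta_j ~ N(mu, tau^2)
    and ybar_j ~ N(theta_j, se_j^2): each subgroup estimate is pulled toward the
    overall mean mu, most strongly when its standard error is large relative to tau."""
    out = []
    for y, se in zip(subgroup_means, subgroup_ses):
        b = se ** 2 / (se ** 2 + tau ** 2)  # shrinkage factor in [0, 1]
        out.append(b * mu + (1 - b) * y)
    return out

# Two subgroups: one precisely estimated (se 0.1), one noisy (se 0.4).
print(shrink([0.5, -0.1], [0.1, 0.4], mu=0.3, tau=0.2))
# The noisy subgroup is pulled much closer to the overall mean of 0.3.
```

This partial pooling is exactly what tempers the false positive signals and high variability of conventional subgroup analysis.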
Real-world data (RWD) have emerged as important data sources for deriving real-world evidence (RWE) for medical and healthcare policy research and decision-making. The innovative use of RWD leads to improvements in trial design, reductions in study size, and enhancements in statistical inference. RWE can also be used to answer questions that cannot be addressed using data from randomized clinical trials. Nonetheless, the analysis of RWD poses great challenges. Given the inherent non-interventional nature of the real-world setting, confounders need to be accounted for; therefore, many existing methods for analyzing data from a randomized clinical trial are not directly applicable, and use of the causal inference framework is preferable. Comparability of patients across treatments is a key requirement for a valid assessment of treatment effect, and propensity scores based on individual patient data (IPD) are often used for patient matching to ensure comparability. Moreover, many investigations using RWD involve multi-site datasets, and sharing sensitive IPD is subject to strict regulations and is logistically prohibitive. In such scenarios, propensity scores for individual patients cannot be derived by the user, and alternative methods must be applied to match RWD or to account for heterogeneity across multiple data sources. In this session, speakers from regulatory agencies, industry, and academia will share their current thinking and new methodologies developed to meet the challenges of RWD analysis. They will cover novel methods for leveraging external IPD/RWD and the specification of summary statistics (without the need for IPD) provided by data owners for data matching across multiple RWD sources for comparative analysis. Other technical issues, such as the handling of informative missing data, may also be discussed; for example, healthier patients tend to have more missing covariates because their health information is captured less frequently in a real-world setting.
Over the past decade, drug development in oncology has shifted from cytotoxic agents to drugs with new mechanisms of action, such as cancer immunotherapies, targeted therapeutics, T-cell engagers, and others. A key challenge for these new agents is that the assumption that “more is better” in terms of dosage may no longer hold true. As a result, health authorities, and especially the FDA, are requiring more thorough dose finding and dose optimization prior to the initiation of pivotal trials. Initiatives such as the FDA’s Project Optimus exemplify efforts to develop new guidance for cancer drug makers to test a wider range of doses early in development. These requirements and recommendations shift focus away from determining the maximally tolerated dose, which considers only the toxicities of the drug, toward identifying an optimal biological dose, which takes into account overall efficacy and tolerability. Such optimization requires consideration of complex mechanisms of action, schedule optimization, long-term drug tolerability, and possibly novel pharmacodynamic endpoints. Consequently, thoughtful study designs, translational data, and statistical modeling play an increasingly important role. The paradigm shift will require efforts from multiple parties, including regulators and industry sponsors, which fits perfectly with the theme of the ASA Biopharmaceutical Section Regulatory-Industry Statistics Workshop. This session will feature speakers and discussants from industry, academia, and the FDA (all speakers confirmed) to share real examples and recent statistical innovations in this field, along with their views on these recent developments and their impact on oncology drug development in the pharmaceutical industry.
Are the log-rank and score tests valid* when covariate-adaptive randomization is used in oncology trials? This must have been the underlying concern behind FDA's recent requests for re-randomization tests in studies using Pocock and Simon's minimization, which dynamically balances treatment allocations across a large number of prognostic factors. Aside from minimization, another example of covariate-adaptive randomization is the stratified permuted block design, commonly applied when the number of factors is small. Balancing treatment allocation across multiple prognostic factors via covariate-adaptive randomization is now standard in virtually all oncology trials, which makes the above question relevant to both practitioners and regulators. Until very recently, however, theory on valid inference about the treatment effect with time-to-event endpoints under covariate-adaptive randomization has been lacking. The groundbreaking work of Ye and Shao (2020; https://doi.org/10.1111/rssb.12392) showed that, if the model is misspecified in trials with stratified randomization, the widely applied robust score test [Lin and Wei (1989); https://doi.org/10.1080/01621459.1989.10478874] is conservative. For minimization, however, this result had been supported only by simulations. Interestingly, model misspecification caused by the omission of some factors from the analysis model is more common in trials where minimization is warranted. Recent work by Johnson, Gekhtman and Kuznetsova (draft manuscript, 2021) extends the theory developed by Ye and Shao (2020) to show that both the log-rank and the robust score tests are conservative under minimization if the model is misspecified.
In this session, we propose to raise awareness of these emerging advances on this much-debated problem and to elucidate their implications through streamlined technical talks and a discussion.
*A hypothesis test is valid if its type I error is no larger than a given significance level.
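For intuition, the re-randomization tests requested by FDA can be sketched as follows. This is a schematic under simplifying assumptions: a difference-in-means statistic stands in for the log-rank statistic, and a plain shuffle of the observed allocation stands in for re-running the actual covariate-adaptive procedure (a genuine re-randomization test would regenerate allocations with the same minimization algorithm used in the trial). All data are hypothetical.

```python
import random

def rerandomization_test(outcomes, assignment, rerandomize, n_rep=2000, seed=0):
    """Monte Carlo re-randomization test.
    `rerandomize(rng)` should regenerate a treatment allocation with the
    SAME covariate-adaptive procedure used in the trial (e.g., Pocock-
    Simon minimization); here a simple shuffle is used as a stand-in.
    The test statistic is the absolute difference in mean outcome."""
    def stat(assign):
        t = [y for y, arm in zip(outcomes, assign) if arm == 1]
        c = [y for y, arm in zip(outcomes, assign) if arm == 0]
        return abs(sum(t) / len(t) - sum(c) / len(c))

    rng = random.Random(seed)
    observed = stat(assignment)
    hits = sum(stat(rerandomize(rng)) >= observed for _ in range(n_rep))
    return (hits + 1) / (n_rep + 1)  # Monte Carlo p-value

# Hypothetical trial: 8 patients with a balanced 1:1 allocation.
outcomes = [4.1, 3.8, 5.0, 2.9, 4.4, 3.1, 5.2, 2.7]
assignment = [1, 0, 1, 0, 1, 0, 1, 0]

def rerandomize(rng):
    # Stand-in for re-running minimization: shuffle the observed allocation.
    a = assignment[:]
    rng.shuffle(a)
    return a

p_value = rerandomization_test(outcomes, assignment, rerandomize)
```

Because the reference distribution is generated by the randomization procedure itself, the resulting test controls type I error by construction, regardless of model misspecification.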
Circulating tumor DNA (ctDNA), the genetic material released from tumor cells into the blood, offers great opportunities in the molecular diagnosis and monitoring of cancer, including early detection, patient selection, and the monitoring and prediction of treatment response in the neoadjuvant and adjuvant settings. However, many challenges must still be addressed before ctDNA can be used as an "early endpoint" to predict long-term cancer survival outcomes in early-stage cancers. In this session, we will discuss the challenges and variability in ctDNA detection and assess the clinical and statistical considerations that would allow the use of ctDNA as a meaningful endpoint in regulatory decision-making.
Real-world evidence (RWE) is playing an increasing role in health care decisions, as indicated by the FDA's 2021 draft guidances on RWE. The magnitude and heterogeneity of real-world data (RWD) bring an opportunity to use innovations in machine learning (ML) and causal inference to generate RWE for medical and regulatory decision-making. Causal inference from RWD is challenging due to confounding, treatment switching, and other issues. Propensity score (PS) methods are commonly used but require stringent model assumptions. Doubly robust estimation (DRS), including double score matching (DSM), augmented inverse propensity weighting, and targeted maximum likelihood estimation, has emerged as a flexible approach to causal inference, requiring correct specification of only the PS model or the outcome model. However, incorporating ML in DRS is a challenging and active research topic; approaches include the Super Learner, double ML, and model averaging. Estimating individual treatment effects (ITE) is important in personalized medicine, and different ML techniques provide approaches for estimating the ITE. Most approaches first use ML to predict potential outcomes under each candidate treatment and then derive individualized treatment regimens (ITR). Recent literature shows the advantages of estimating ITR using DRS estimates of ITE, but it remains unclear which ML-based strategies work best for ITR. This session drives toward best practices for using ML in causal inference at both the population and the individual level. We present recent work from industry, academia, and regulatory agencies on this topic. The session will be especially appealing to researchers working on the design and analysis of RWE.
List of invited speakers:
• Shu Yang (North Carolina State University): “A unified framework for DSM: theory, balance measure, and practice”
• Di Zhang (FDA): “Regulatory overview of using machine learning in causal estimation”
• Ilya Lipkovich (Eli Lilly and Company): “Evaluation of different analytic strategies for estimating optimal ITR in Real-world data”
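A minimal sketch of the doubly robust idea discussed above: the augmented inverse propensity weighted (AIPW) estimator of the average treatment effect combines outcome-model predictions with inverse-propensity-weighted residuals, so it is consistent if either the propensity model or the outcome models are correctly specified. All numbers below are hypothetical; in practice `ps`, `mu1`, and `mu0` would come from fitted (possibly ML-based) models, ideally with cross-fitting.

```python
def aipw_ate(y, a, ps, mu1, mu0):
    """AIPW estimate of the average treatment effect.
    y: observed outcomes; a: treatment indicators (0/1);
    ps: estimated propensity scores P(A=1 | X);
    mu1, mu0: outcome-model predictions under treatment and control."""
    n = len(y)
    total = 0.0
    for yi, ai, ei, m1, m0 in zip(y, a, ps, mu1, mu0):
        total += (m1 - m0
                  + ai * (yi - m1) / ei            # treated residual, IPW
                  - (1 - ai) * (yi - m0) / (1 - ei))  # control residual, IPW
    return total / n

# Hypothetical data: the outcome models happen to fit perfectly here,
# so the residual terms vanish and the estimate equals mean(mu1 - mu0).
y   = [5.0, 3.0, 6.0, 2.0]
a   = [1, 0, 1, 0]
ps  = [0.5, 0.5, 0.5, 0.5]
mu1 = [5.0, 4.0, 6.0, 3.0]
mu0 = [3.0, 3.0, 4.0, 2.0]
print(aipw_ate(y, a, ps, mu1, mu0))  # → 1.5
```

When the outcome models are wrong, the weighted residual terms correct the bias as long as the propensity scores are right, and vice versa.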
The 21st Century Cures Act was signed into law nearly 5 years ago, and PDUFA VI (2018-2022) is nearing its end. These two milestones mark the FDA's emphasis on enhancing both the use of real-world evidence (RWE) in regulatory decision-making and the agency's capacity to review complex innovative designs (CID). Moreover, they provided a platform for knowledge sharing. The pharmaceutical industry has increasingly collaborated with multiple health agencies on both fronts since these regulations took effect. As a result, many real cases that either incorporate RWE in the trial or implement CID have surfaced, offering examples to follow or lessons to learn. Statisticians and clinical trial researchers have established several implementation guides for CID, ranging from adaptive designs to master protocols, addressing both operational and statistical issues. There has also been a surge of methodologies in RWE research, including propensity score models and Bayesian frameworks. Recently, master protocols with RWE components have emerged, mostly in oncology and hematology. Combining the two poses unique challenges. For example, if RWE is introduced to augment the control arm in a platform trial in which only concurrent control data are used for each treatment comparison, how the RWE is leveraged in each comparison should be carefully calibrated with appropriate statistical models. In addition, challenges arise when RWE is used to refine parameter estimates for optimizing the design of clinical trials. The DIA Innovative Design Scientific Working Group (IDSWG) Oncology team is tasked with investigating both CID and RWE usage in clinical trials, in order to establish implementation guidelines tailored for oncology trials. This session is organized by this working group to discuss statistical and operational considerations for RWE incorporation within a master protocol setting.
Case studies based on real examples and simulations will be shared to illustrate this type of clinical trial design. Speakers from industry, academia, and regulatory agencies will present guidance and lessons learned from their perspectives.
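One Bayesian approach to calibrating how much RWE enters a control-arm comparison, of the kind discussed above, is a power prior, in which the external-data likelihood is raised to a discount a0 in [0, 1]. The sketch below covers only the simplest case, a normal mean with known variance and a flat initial prior; all quantities are hypothetical, and in practice a0 would be fixed by agreement or modeled dynamically per comparison.

```python
def power_prior_posterior(cur_mean, cur_n, ext_mean, ext_n, a0, sigma2=1.0):
    """Normal-mean power prior with known outcome variance `sigma2`
    and a flat initial prior. External (RWE) data contribute through
    their likelihood raised to a0: a0=0 ignores the RWE, a0=1 pools
    it fully with the concurrent control data.
    Returns (posterior mean, posterior variance)."""
    w_cur = cur_n / sigma2           # precision from concurrent controls
    w_ext = a0 * ext_n / sigma2      # discounted precision from RWE
    post_var = 1.0 / (w_cur + w_ext)
    post_mean = post_var * (w_cur * cur_mean + w_ext * ext_mean)
    return post_mean, post_var

# Hypothetical numbers: concurrent control mean 1.0 (n=50),
# external RWE control mean 0.6 (n=50).
no_borrow = power_prior_posterior(1.0, 50, 0.6, 50, a0=0.0)
full_pool = power_prior_posterior(1.0, 50, 0.6, 50, a0=1.0)
```

The posterior mean moves from 1.0 (no borrowing) toward the pooled value 0.8 (full borrowing) as a0 increases, while the posterior variance shrinks, making the drift-versus-precision trade-off explicit.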
In recent years, the quantitative analysis of medical images has received mounting attention as a means of extracting diagnostic, prognostic, or predictive information for patient diagnosis, treatment, and management. The readers who review and interpret these images play a critical role in this decision-making process. However, variability among readers (radiologists, pathologists, readers at different facilities, etc.) in interpretive performance can be quite extensive, driven by subjectivity, training, and experience. Such reader variation, both intra- and inter-reader, makes the evaluation of medical products very challenging. In this session, speakers representing different stakeholders will investigate multiple aspects of the challenges imposed by reader variation in clinical trials that involve medical imaging. Based on their experience in clinical trial design, analysis, or regulatory review, the speakers will discuss 1) the measurement and assessment of reader variation, 2) study designs and analyses for reducing reader variation (e.g., with the assistance of artificial intelligence), and 3) novel statistical methods for evaluating the safety and effectiveness/efficacy of medical products in the presence of reader variation. The topics presented in this session will provide insights and generate discussion among statisticians on how to handle reader variation in pre-market clinical trials involving medical images.
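As one concrete example of measuring inter-reader variation, a widely used chance-corrected agreement statistic (not named in the abstract) is Cohen's kappa. The sketch below handles two readers with categorical ratings; the ratings shown are hypothetical.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical ratings.
    po = observed agreement; pe = agreement expected by chance
    from each reader's marginal rating frequencies."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(x == y for x, y in zip(r1, r2)) / n
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical binary reads of six images by two readers.
reader1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
reader2 = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(cohens_kappa(reader1, reader2), 3))  # → 0.333
```

Kappa of 1 indicates perfect agreement and 0 indicates chance-level agreement; for more than two readers or ordinal scales, extensions such as Fleiss' kappa or weighted kappa are typically used.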