An unprecedented number of new cancer targets are in development, and most are being developed in combination therapies. The rationale for combination therapy is to combine drugs that work by different mechanisms, thereby improving efficacy and decreasing the likelihood of developing cancer resistance. Key questions in combination therapy development include: how to select combination partners; at what stage of drug development to demonstrate the contribution of components (CoC), and to what degree CoC needs to be demonstrated; and how to better use external data (real-world evidence or historical clinical trials) to design more efficient trials in the combination setting. Back in 2013, FDA published guidance for industry on the co-development of two or more new investigational drugs for use in combination. The guidance emphasizes new investigational drugs and advocates factorial designs, which in general may not be efficient. This session builds on last year's session on a similar topic and expands to design strategies that improve trial efficiency while meeting regulatory requirements. In this session, we will discuss the selection of combination partners using predictive biomarkers by synthesizing information from publications and historical trial data, as well as design considerations for proof-of-concept trials aimed at screening potentially active combinations with limited resources. Statistical methods to evaluate the contribution of components will also be discussed. We will share regulatory perspectives, including examples of combination trials where multi-arm multi-stage (MAMS) designs might be considered.
Following the initial draft guidance for industry on master protocols for oncology trials in October 2018, FDA released the finalized guidance in March 2022, which includes considerations for specific types of designs in a master protocol, i.e., basket, umbrella, and platform trials, that could potentially expedite drug development by increasing statistical and/or operational efficiencies. For oncology basket trials, where a single investigational agent (or combination) is evaluated across different tumor types (baskets), information borrowing is widely utilized as an efficiency strategy and is conveniently handled in the Bayesian framework by most established approaches. However, the underlying hierarchical model assumptions may not hold due to patient-level heterogeneity, and basket homogeneity/exchangeability may alternatively be characterized to reflect potentially different efficacy benchmarks. In addition, the decision-making process for which basket(s) may be terminated early due to futility and which basket(s) are promising for further late-stage development could be improved by utilizing "dual criteria" that account for both the reference and target response rates, rather than current binary rules, which can be arbitrary. This session will feature pharmaceutical industry and regulatory agency speakers sharing their perspectives and insights on these practical considerations of information borrowing and informed decision-making in basket trials.
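As a concrete illustration of information borrowing and the dual-criteria idea, the sketch below applies a simple power-prior-style discount to hypothetical basket data; the counts, discount weight, and probability thresholds are all invented for illustration and are not from the guidance or the session.

```python
# Minimal sketch of information borrowing plus "dual criteria" decision rules
# in a basket trial. Power-prior-style borrowing with a fixed discount weight
# is one simple approach; all numbers below are purely illustrative.
from scipy.stats import beta

# Observed responses x out of n patients per basket (hypothetical data)
x = [6, 2, 9, 4]
n = [20, 20, 25, 18]
w = 0.3                      # discount applied to data borrowed from other baskets
p_ref, p_tgt = 0.15, 0.30    # reference and target response rates
c_ref, c_tgt = 0.95, 0.50    # posterior-probability thresholds (dual criteria)

for k in range(len(x)):
    # Borrow the other baskets' pooled data, downweighted by w
    x_ext = w * (sum(x) - x[k])
    n_ext = w * (sum(n) - n[k])
    a = 1 + x[k] + x_ext                     # Beta(1, 1) prior
    b = 1 + (n[k] - x[k]) + (n_ext - x_ext)
    pr_ref = beta.sf(p_ref, a, b)            # P(response rate > reference | data)
    pr_tgt = beta.sf(p_tgt, a, b)            # P(response rate > target | data)
    go = pr_ref > c_ref and pr_tgt > c_tgt
    print(f"basket {k}: P(>ref)={pr_ref:.3f}, P(>tgt)={pr_tgt:.3f}, go={go}")
```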
The design and analysis of medical device studies pose distinct statistical challenges for both diagnostic and therapeutic (non-diagnostic) medical devices. Examples of diagnostic medical devices include complex equipment that produces images such as x-rays or scans, immunoassays, and blood glucose monitors; therapeutic devices include pacemakers, catheters, and devices that use radiation to treat cancer and tissue defects. Statistical issues for these kinds of devices include the use of sham controls, the inability to perform blinded studies, non-inferiority, repeated measures, and historical controls. Diagnostic devices pose a very diverse set of challenges, especially with artificial intelligence (AI)-enabled medical devices. This session will address experiences with therapeutic and diagnostic devices, covering regulatory and statistical challenges and innovation in study design and methodologies.
Session Description: The FDA's Oncology Center of Excellence recently launched Project Optimus, which will develop new guidance to address issues relating to dose optimization in early clinical trials assessing the safety and efficacy of oncology drugs. This signals a paradigm shift, with new scrutiny and greater emphasis from the FDA on optimizing the dose for novel oncology drugs. The shift has sparked tremendous interest, as well as questions about why, when, and how to optimize the dose. In this session, three confirmed speakers from the FDA, industry, and academia will present. The FDA speaker, one of the leaders of FDA's dose optimization effort, will share her experience, in particular why and when dose optimization is important for drug development. The industry speaker, who has extensive experience and has served as a panelist in a dose optimization forum, will discuss the experience, current practice, and challenges of dose optimization in industry. The speaker from academia will discuss the statistical challenges of dose optimization and provide some design strategies to address them.
FDA speaker Title: Changing the Dosing Paradigm for Oncology Drugs Abstract: In this talk, we will review the historical dosing paradigm for oncology drugs, including why drugs have been dosed at the maximum tolerated dose (MTD). We will identify why this approach is not optimal for the development of non-cytotoxic agents. We will discuss the importance of planning a dose optimization strategy early in clinical development. We will highlight key components of a successful dose optimization strategy, including sufficient characterization of pharmacokinetics, incorporation of pharmacodynamic endpoints, evaluation of dose- and exposure-response relationships, and use of randomized dose trials when appropriate. Case examples will be incorporated to illustrate important concepts. At the end of the talk, participants will understand why dose optimization is an essential component of developing safe and effective oncology drugs and how to design and implement a successful plan for dose optimization.
Industry speaker Title: Considerations for oncology dose optimization designs from industry perspectives Abstract: The recent FDA Project Optimus has had a profound impact on oncology drug development. Across industry, we have seen feedback from the agency requesting more dose-finding studies for oncology drugs. How to balance development speed and rigor in dose optimization has become an important topic for drug companies. Oncology drugs have historically been given a pass because of the belief that "more is better," dating back to the genesis of chemotherapy. However, this is no longer necessarily true for agents with new mechanisms of action, greater efficacy, and different safety profiles than older drugs. For example, chronic, low-grade toxic effects may be more relevant and may interfere with prolonged administration and hinder adherence, and therefore may result in disease progression. Looking at toxicities other than dose-limiting toxicity (DLT), and beyond the short DLT window (typically one cycle), is therefore important. For subsequent dose expansion or phase II studies, more designs are proposed to evaluate multiple dose levels (possibly with a control), and subsequent confirmatory trials will also be adapted depending on the outcomes of earlier trials. In this presentation, we will share our experience in addressing these questions with innovative statistical methodology and its implementation considerations.
Academia speaker Title: Bayesian Adaptive Designs for Dose Optimization Abstract: FDA recently released the Guidance for Benefit-Risk Assessment for New Drug and Biological Products and launched Project Optimus, which will develop new guidance to address issues relating to dose optimization in early clinical trials assessing the safety and efficacy of oncology drugs. This highlights the importance of dose optimization in the era of targeted therapy and immunotherapy, and the need for a paradigm shift from the maximum tolerated dose (MTD) to the optimal biological dose (OBD), with the goal of maximizing the benefit-risk tradeoff for patients. In this talk, from the statistical and trial design viewpoint, I will contrast fundamental differences between the identification of the OBD and the MTD, and highlight challenges of dose optimization. I will discuss adaptive design strategies to address these challenges, including model-based designs and model-assisted designs for dose optimization. Trial examples will be used to illustrate the methodology.
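To make the OBD idea concrete, here is a minimal sketch of one generic utility-based selection rule, not necessarily the speaker's methods: posterior estimates of efficacy and toxicity per dose are combined into a benefit-risk utility, with an admissibility check on toxicity. All counts, the utility weight, and the safety limit are hypothetical.

```python
# Minimal sketch of utility-based optimal biological dose (OBD) selection:
# for each dose, combine posterior estimates of efficacy and toxicity into a
# benefit-risk utility and pick the admissible dose maximizing it.
import numpy as np
from scipy.stats import beta

n = np.array([12, 12, 12])        # patients per dose (hypothetical)
eff = np.array([3, 6, 7])         # responders per dose
tox = np.array([1, 2, 5])         # DLT-like toxicities per dose
w = 0.6                           # penalty per unit of toxicity probability
tox_limit = 0.30                  # maximum acceptable toxicity rate

# Posterior means under Beta(1, 1) priors
p_eff = (1 + eff) / (2 + n)
p_tox = (1 + tox) / (2 + n)

# Admissible if the posterior probability of excessive toxicity is not too high
safe = beta.sf(tox_limit, 1 + tox, 1 + n - tox) < 0.50

utility = p_eff - w * p_tox
utility[~safe] = -np.inf          # exclude overly toxic doses
obd = int(np.argmax(utility))
print("utilities:", np.round(utility, 3), "-> OBD = dose", obd + 1)
```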
In October 2021, FDA issued final guidances for industry (GFIs) that describe pathways for animal drug sponsors to use approaches such as adaptive study designs, real-world evidence, and biomarkers to establish drug effectiveness, along with more detailed recommendations on how to leverage data collected from foreign countries to support approval of their products in the U.S. FDA's recommendations and current thinking were issued in four separate GFIs:
• Use of Data from Foreign Investigational Studies to Support Effectiveness of New Animal Drugs (NADs)
• Use of Real-World Data and Real-World Evidence to Support Effectiveness of NADs
• Biomarkers and Surrogate Endpoints in Clinical Studies to Support Effectiveness of NADs
• Adaptive and other Innovative Designs for Effectiveness Studies of NADs
These GFIs are intended to encourage animal drug sponsors to consider innovative approaches in investigations to support the approval of new animal drugs. The recommendations closely align with those already issued by FDA's other medical product centers. However, animal drug evaluation presents unique challenges for innovative approaches, due in part to small study sizes, difficulties in assessing animal response, and the inherent variability of target populations. The session has three identified speakers. The FDA speaker will provide a broad introduction to the new guidance, and the speaker from industry will offer an industry perspective on opportunities to advance drug development using the innovative approaches outlined in the guidance documents. The speaker from academia will discuss current methods and opportunities for collaboration and research within the framework of the key statistical principles outlined in the guidance. This session will be a timely forum for statisticians and regulators from industry, academia, and FDA to discuss the new guidance and its implementation in veterinary drug approvals, along with lessons learned from human medical product applications.
Adaptive seamless designs have gained popularity for reducing the time and number of patients needed to discover, develop, and demonstrate the benefits of a new drug. Instead of conducting separate trials for successive development phases, seamless designs combine two phases into one trial: Phase 1/2, Phase 2a/2b, or, most commonly, Phase 2/3. The adaptive seamless design is therefore advantageous over conventional designs in reducing the duration of clinical development and achieving greater efficiency by using data from both stages. Although the benefits are appealing, adaptive seamless designs require significant effort in trial planning and implementation, both in statistical methodology and in operational considerations. This session will focus on researchers' recent efforts in innovative adaptive seamless clinical study designs. In particular, Dr Chow will present an overview of statistical methods for the analysis of different types of seamless adaptive designs, considering the same or different study objectives and endpoints at different stages; a case study will also be presented. Dr Jin will present a framework for seamless phase 2/3 designs with multiple endpoints or multiple treatment arms that does not inflate the familywise type I error under a mild assumption; simulations will be presented with illustrative examples. Dr Scott from the FDA will discuss the two speakers' presentations from regulatory and statistical perspectives.
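For readers unfamiliar with how stage-wise data can be combined without inflating the type I error, a minimal sketch of one standard device, the weighted inverse-normal combination test, follows; it is not necessarily the framework Dr Jin will present, and the stage statistics are hypothetical.

```python
# Minimal sketch of the weighted inverse-normal combination test often used
# to control type I error in seamless phase 2/3 designs. Stage weights are
# prespecified (here proportional to planned stage sample sizes); z1 and z2
# are hypothetical stage-wise test statistics.
import numpy as np
from scipy.stats import norm

n1, n2 = 100, 200                   # planned stage sample sizes
w1 = np.sqrt(n1 / (n1 + n2))        # prespecified weights with w1^2 + w2^2 = 1
w2 = np.sqrt(n2 / (n1 + n2))

z1, z2 = 1.20, 2.10                 # stage-wise z-statistics (illustrative)
z_comb = w1 * z1 + w2 * z2          # standard normal under the null

alpha = 0.025
crit = norm.ppf(1 - alpha)
print(f"combined z = {z_comb:.3f}, critical value = {crit:.3f}")
print("reject null:", z_comb > crit)
```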
Safety assessments are an important component of new drug applications in both pre-market and post-market settings. Central to the assessment of drug safety for a pre-market application is assessing the adequacy of the testing for safety and determining the significance of the adverse events (AEs) and their impact on the approvability of the drug (benefit-risk analysis). Safety assessment is also needed to describe the safety issues that should be included in product labeling should the drug be approved, and to determine whether additional safety studies and/or risk-management plans are needed. To flesh out these objectives, many questions are investigated, e.g., the incidence, timing, duration, seriousness, reversibility, and recurrence of AEs; the duration of follow-up; the impact of dose (reduction); the timing and frequency of treatment; and the effects of rescue treatments on AEs. In the absence of more information about the relationship of AEs to treatment duration, for example, selection of a specific number of patients to be followed for 1 year is to a large extent a judgment call based on the probability of detecting a given AE frequency level and practical considerations. For products intended for long-term treatment of non-life-threatening conditions (e.g., continuous treatment for 6 months or more, or recurrent intermittent treatment where cumulative treatment equals or exceeds 6 months), ICH E1 and FDA have generally recommended that 1500 subjects be exposed to the investigational product (with 300 to 600 exposed for 6 months and 100 exposed for 1 year). A large safety database is thus helpful for making benefit-risk decisions about who may or may not benefit from the product. However, even though clinical trials provide important information on a drug's efficacy and safety, it is impossible to have complete information about the safety of a drug at the time of approval: some adverse events may not be observed in clinical trials because of their size and duration, or the frequency of an adverse event may be inadequately characterized. Registries, large safety studies, and existing large electronic health databases, such as electronic health record systems and administrative and insurance claims databases, are therefore used to monitor the safety of approved medical products in near real time. The true picture of a product's safety thus evolves over the months and years that make up the product's lifetime in the marketplace.
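The "probability of detecting a given AE frequency level" mentioned above follows from simple arithmetic; a short sketch, using the ICH E1 exposure numbers cited in the description:

```python
# The probability of observing at least one occurrence of an adverse event
# with true frequency p among n exposed patients is 1 - (1 - p)^n; the
# "rule of three" (3/n) gives the approximate upper 95% confidence bound on
# p when zero events are observed. The sample sizes mirror the ICH E1
# exposure recommendations cited in the session description.
def prob_detect(p, n):
    """P(at least one event) for an AE with frequency p in n patients."""
    return 1 - (1 - p) ** n

for n in (100, 300, 600, 1500):
    print(f"n={n:5d}: P(detect a 1% AE) = {prob_detect(0.01, n):.3f}, "
          f"rule-of-three bound = {3 / n:.4f}")
```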
This session will focus on new statistical techniques for evaluating safety in product development. The speakers will address the questions outlined above through the following topics from their current research:
1. Bayesian Estimate of Risk Ratios and Absolute Risk Differences in Integrated Analysis of Pre-market Safety
2. Efficient Methods for Signal Detection from Correlated Adverse Events in Clinical Trials
3. Advanced Novel Visual Analytics of Drug Safety Data, Tools, and Resources
As the regulatory environment becomes progressively more receptive to real-world evidence, a spectrum of real-world data (RWD) incorporation techniques in trial conduct and analysis has seen increasing interest and adoption at different stages of drug development. One focus of leveraging RWD is to reduce the sample size required for making efficacy claims, so that a sufficient number of patients can be enrolled/included over a reasonable time period to meet the desired statistical power to demonstrate efficacy of an investigational treatment. To this end, approaches based on propensity scores have become popular and are primarily used to create matched groups of patients that are controlled for confounding given a set of baseline covariates. However, these methods have limitations in practice. For instance, iterative balance checking on the measured confounders and tweaks to the exposure models are often inevitable to achieve joint balance, which can be time-consuming and less objective; when the available sample size in the investigational treatment arm is very small, standard matching/weighting techniques may not be feasible due to the limited representation of the target population; moreover, the subtleties of what should be matched, or what is actually being estimated, often lack full consideration when propensity-score-based methods are developed or applied. In this session, proposals to address these issues will be presented by participants from the regulatory agency and industry. Clinical trial examples will be discussed.
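As background for the discussion, here is a minimal sketch of propensity-score weighting with a standardized-mean-difference balance check on simulated data; the data-generating model and the choice of ATT weights are illustrative only.

```python
# Minimal sketch of propensity-score weighting (ATT weights) with a simple
# standardized-mean-difference balance check, on simulated data. Real
# applications would involve the iterative balance checking and model
# tweaking the session description discusses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))                    # baseline covariates
p_trt = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, p_trt)                     # nonrandom treatment assignment

ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
w = np.where(T == 1, 1.0, ps / (1 - ps))       # ATT: controls reweighted

def smd(x, t, w):
    """Weighted standardized mean difference for one covariate."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    return (m1 - m0) / np.sqrt(0.5 * (x[t == 1].var() + x[t == 0].var()))

for j in range(X.shape[1]):
    print(f"covariate {j}: unweighted SMD = {smd(X[:, j], T, np.ones(n)):+.3f}, "
          f"weighted SMD = {smd(X[:, j], T, w):+.3f}")
```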
Novel approaches for flexible and efficient decision-making in drug development rely on innovative trial designs. The patient's voice, translated into a conceptualization of "success" for a research outcome, is a key driver in designing clinical trials and is linked to improvement of the treatment benefit/risk profile over existing standards of care, decreased health care costs, and/or accelerated time to approval. These metrics can in turn be linked to development efficiency. Efficiency can be viewed from the perspectives of multiple stakeholders, not only at the trial level but also at the program and portfolio levels. We begin the session with an overview of the concept of efficiency. The next presentation will provide an example: a trial in the medical device field for a pediatric population with multiple interim analyses and an adaptive time point of assessment. The session concludes with patients' points of view and FDA perspectives on concepts of innovative trial designs.
Presenters:
• Jie Xu, JNJ Vision
• Zoran Antonijevic, Abond CRO
• Gigi McMillan, LMU Bioethics Institute
• Pat Furlong, Parent Project Muscular Dystrophy (PPMD)
• FDA speaker
The topic of this session was inspired by a plenary session from RISW2021 in which the role of 'statistical thinking' in the era of big data was examined, with emphasis on statistical collaboration across multiple quantitative disciplines. As the complexity of pharmaceutical sciences problems grows at a rapid pace, so do the challenges of statistical innovation, and the very definition of "statistical innovation" keeps evolving. It is often unclear to today's professionals what the next direction of statistical innovation will be a few years from now, and how our profession will be shaped by the world of data science and AI. In this session we build upon these questions by examining how different statistical innovation groups approach this challenge. We believe modern innovation is about building a complex combination of skills that goes far beyond the technical abilities of any one individual and leverages the power of collaboration across various quantitative disciplines. We illustrate the concept with a few case studies highlighting this strategy. The first talk will cover industry strategies for building and maintaining cutting-edge future skills to keep pace with rapid pharmaceutical development, including a case study of a large cross-functional project to embed quantitative decision-making in the clinical operations space. The second talk, from an FDA speaker, will provide insight into how the agency foresees the key topics of future innovation in clinical trials and builds its analytical and computational capabilities accordingly, illustrated with an overview of the PDUFA VII initiatives. We will conclude with a panel discussion to tie it all together and provide additional perspectives from academia, technology companies, and various industry representatives. The aim of this session is to help statistical professionals gain a balanced view of possible future paths in statistical innovation.
The document ICH E9 (R1) has brought much attention to the concept of estimands in the clinical trials community. With the release of the FDA guidance on estimands and sensitivity analysis in clinical trials in May 2021, statisticians in the pharmaceutical industry have been revising templates of protocols and statistical analysis plans to incorporate the "five attributes" of an estimand (treatment, population, variable, intercurrent events, and population-level summary); the "five strategies" for addressing intercurrent events (treatment policy, hypothetical, composite variable, while-on-treatment, and principal stratum strategies); and other critical concepts such as sensitivity analysis and supplementary analysis.
Most of the ongoing efforts focus on identifying, understanding, and handling various intercurrent events in a clinical trial within the estimand framework. Nonetheless, there are valuable, even crucial, points made in the ICH E9 (R1) addendum that were not fully fleshed out or well discussed in the literature.
In this session, speakers from the FDA and industry will report several cases and scenarios they have experienced or noticed when implementing the statistical principles in the addendum in their work and draw attention to several facets of estimands besides intercurrent events.
First, in the context of observational studies, we will discuss how weighting schemes can be connected to estimands, more specifically to the population attribute identified in the addendum, within the Rubin Causal Model. Three estimands are examined from both theoretical and practical perspectives, and factors that may be considered in choosing among them are discussed.
Second, we will discuss how to develop a novel estimand by integrating tumor burden in the treatment effect evaluation in oncology clinical trials. Although intercurrent events and missing data are not the major issues in the settings we consider, we will illustrate that the sequential process emphasized in the addendum—trial objective, estimand, main estimator/estimate and sensitivity estimator/estimate—is still valuable for estimand development and implementation.
Third, we observe that hypothetical strategies are somewhat overused, based on our experience. Moreover, hypothetical strategies are often accompanied by mixed-effects model repeated measures (MMRM) approaches for handling unobserved data. We will explore the potential risks associated with hypothetical strategies combined with the MMRM approach through simulation studies and discuss several alternatives to this approach.
Surrogate endpoints have been used in drug development in various therapeutic areas to predict clinical effect. The use of surrogate endpoints can substantially expedite drug development programs, leading to increased interest in developing and analyzing them. However, it is always challenging to evaluate surrogacy in clinical trials, and even more difficult to precisely quantify the relationship between a surrogate endpoint and a clinical endpoint, which can be crucial at the trial planning stage.
In this session, we will present and discuss some recent advances on using surrogate endpoint to predict clinical effect. Dr. Luan Lin from Biogen will present a new modeling framework to evaluate surrogacy of a biomarker on a clinical endpoint. She will demonstrate the application of the method to a rare and progressive neurodegenerative disease trial. Our second speaker, Dr. Ronan Fougeray from Servier, will present a method to incorporate information from a surrogate endpoint for early futility decision making in an adaptive clinical trial. He will demonstrate the application of the method to a confirmatory phase III oncology trial. Dr. Therri Usher from the FDA will lead the discussion for this session.
Artificial Intelligence (AI) and Machine Learning (ML) are making headlines in clinical research, especially in areas such as drug discovery, digital imaging, disease diagnostics, and genetic testing. Although the vision of AI/ML in precision medicine is alluring, there is a need to distinguish genuine potential from hype. In this session, we will invite thought-provoking speakers from FDA, Industry and Academia to talk about the statistical challenges AI/ML are facing in precision medicine and how to overcome their limitations to unleash the great potential of machine learning-powered precision medicine.
Group sequential design (GSD) is one of the greatest statistical innovations in modern statistics. Sequential analysis originated about 100 years ago with Dodge and Romig (Dodge and Romig, 1929) and was further advanced by Wald through sequential hypothesis testing (Wald, 1945). As it stands today, GSD and its variations have become the most popular methods in trial design and monitoring. Many statisticians consider GSD a standard and straightforward approach in a clinical trial. Is this a true statement? Is there any chance we may misuse GSD in clinical trials? Are there any caveats or precautions we need to consider beyond the statistics?
In this session, we will invite renowned experts in this field to address the questions above. The presenters will give a historical review of the development of GSDs and their variations. Most importantly, real-world experience with the application of GSD in trial design and monitoring will be presented. Speakers from regulatory agencies will share their experience interacting with sponsors on the implementation and interpretation of GSDs. Future directions of GSD development will also be discussed.
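As background, here is a minimal sketch of how group sequential boundaries are derived from a Lan-DeMets O'Brien-Fleming-type spending function, for a one-sided test with two equally spaced looks; dedicated software (e.g., gsDesign or rpact) would be used in practice.

```python
# Minimal sketch of group sequential boundaries from a Lan-DeMets
# O'Brien-Fleming-type alpha-spending function, one-sided alpha = 0.025,
# two equally spaced looks. Illustrates the computation only.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha = 0.025
t = [0.5, 1.0]                                  # information fractions

def obf_spend(frac):
    """Cumulative alpha spent by information fraction `frac`."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(frac)))

# First look: boundary determined by the alpha spent so far
a1 = obf_spend(t[0])
c1 = norm.ppf(1 - a1)

# Second look: Z1 and Z2 are jointly normal with corr = sqrt(t1/t2);
# choose c2 so the total crossing probability equals alpha
corr = np.sqrt(t[0] / t[1])
cov = [[1.0, corr], [corr, 1.0]]

def total_cross(c2):
    # P(cross at look 1 or look 2) = 1 - P(Z1 < c1, Z2 < c2)
    return 1 - multivariate_normal.cdf([c1, c2], mean=[0.0, 0.0], cov=cov)

c2 = brentq(lambda c: total_cross(c) - alpha, 1.0, 4.0)
print(f"boundaries: c1 = {c1:.3f}, c2 = {c2:.3f}")   # approx. 2.96 and 1.97
```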
FDA’s ‘Project Optimus’ encourages sponsors to move away from conventional dose-finding for modern cancer therapies to achieve better dose optimization. In oncology and cellular therapy, the dose determination with the optimal benefit-risk profile has always been a critical topic to drive the success of drug development, which requires comprehensive data driven decision making in phase I and II trials before committing to pivotal trial. To protect patients from toxic dose levels while efficiently explore the possible therapeutic dose levels, adaptive dose finding design with considerations more than dose limiting toxicity (DLT) in cycle I is recommended for candidate dose level nominations. Additionally, to further characterize the exposure response relationship, multiple arms with randomization of patients to two or more candidate doses can be utilized to obtain a full spectrum of information and guide decision making. In cellular therapy, the dose optimization is even more challenging due to the complex mechanism of action, variability in dose levels, and manufacturing capability. Thus, to address the unconventional dose-response relationship, umbrella trial with multiple dose levels is extremely important. This calls for careful calibration of the appropriate design and estimation. Therefore, as inspired by Project Optimus, we are proposing a session to invite experts from FDA, industry, and academia to discuss and share their insights on innovative strategies guiding decision making for dose optimization in oncology and cellular therapy phase I and II studies. It will cover but not limited to the below aspects: 1. Novel dose finding designs beyond dose-limiting toxicities and/or from beyond the first treatment cycle; 2. Modeling effort incorporating a full spectrum of information for exposure response characterization; 3. Master protocol, umbrella trial and platform trial design for estimation and decision making; 4. Information borrowing from historical studies to improve the estimation accuracy; 5. Adaptive randomization in umbrella trial (i.e., multiple dose expansion cohorts).
Introduction
A major goal of any clinical development program is to implement the most efficient clinical trials to demonstrate the clinical benefit of a new drug. Traditionally, in oncology drug development, achieving this goal and gaining regulatory approval required sponsors to establish safety and antitumor activity, and then to demonstrate efficacy benefits against an active comparator, usually the standard of care (SOC), with randomized comparative clinical trials as the gold standard. As the dynamics of oncology drug development have changed, with increasing demand for reduced time to approval, demonstration of greater clinical benefit, and a transition from conventional chemotherapy to targeted agents, the traditional drug development paradigm has shifted.
Motivation
Single-arm trials may now be sufficient to support regulatory approvals (conditional/accelerated) of targeted cancer drugs in molecularly selected patient populations, or in situations where conducting a large comparative trial is impractical or unfeasible, e.g., in populations of patients with rare cancers. One of the biggest limitations of single-arm trials is the lack of a reference for comparison.
Problem Statement
Can we use study patients' prior data as a clinically relevant reference for comparison? That is, can each study patient serve as his or her own control, comparing the benefit achieved on the last prior therapy with the benefit of the study drug? Due to the natural history of cancer, progression-free or disease-free time is shorter, and responses usually decline, on subsequent lines of therapy. Spontaneous remissions are possible but unfortunately extremely rare. If a new drug has an anti-tumor effect, it may change the natural history of the disease and provide improved treatment benefit compared to the prior therapy.
Statistical approach (examples)
Response data:
- Best overall response (BOR) observed with the study drug vs. BOR from the last prior therapy: the proportion of patients with equal or better BOR on the study drug; the proportion of patients with better BOR on the study drug
- Formal test to compare paired proportions
- Visualization (e.g., Sankey plot?)
Time-to-event data:
- Growth modulation index (GMI) = time to progression on the study drug / time to progression on the last prior therapy [Von Hoff (1998)]
- GMI of 1 or above as a sign of activity; GMI of 1.33 or above as a marker of meaningful clinical activity of a new treatment
- Visualization (e.g., waterfall plot?)
- Formal test to compare right-censored paired data
- Using time on treatment as a surrogate endpoint for the time patients derive clinical benefit when PFS data are unavailable; visualization (e.g., K-M plot?)
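A minimal sketch of two of the analyses listed above on hypothetical paired data; the censoring that real GMI analyses must handle is ignored here for simplicity.

```python
# Minimal sketch of two intra-patient analyses: (1) the proportion of
# patients with GMI = TTP(study drug) / TTP(last prior therapy) >= 1.33,
# ignoring censoring; (2) an exact McNemar-type test on paired responder
# status (response yes/no on study drug vs. on last prior therapy).
import numpy as np
from scipy.stats import binomtest

# Time to progression (months) on last prior therapy and on study drug
ttp_prior = np.array([4.0, 6.5, 3.0, 8.0, 5.0, 2.5, 7.0, 4.5])
ttp_study = np.array([6.0, 5.5, 5.0, 12.0, 9.0, 2.0, 10.0, 7.5])
gmi = ttp_study / ttp_prior
print(f"P(GMI >= 1.33) = {np.mean(gmi >= 1.33):.2f}")

# Paired responder status: 1 = response, 0 = no response (hypothetical)
resp_prior = np.array([0, 1, 0, 0, 1, 0, 0, 1])
resp_study = np.array([1, 1, 0, 1, 0, 0, 1, 1])
b = int(np.sum((resp_study == 1) & (resp_prior == 0)))  # improved
c = int(np.sum((resp_study == 0) & (resp_prior == 1)))  # worsened
# Exact McNemar test = binomial test on the discordant pairs
print(f"discordant pairs: {b} improved vs {c} worsened, "
      f"p = {binomtest(b, b + c, 0.5).pvalue:.3f}")
```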
Summary
Intra-patient analysis in a single-arm trial is not a replacement for a randomized controlled trial but rather a way to generate supportive evidence (surrogate measures) when an alternative solution is not feasible or would be too time-consuming, or when interpretation of surrogate endpoints is beneficial for hypothesis generation and/or regulatory interactions.
Questions for discussion
Statistical perspective: should we consider prospectively planning intra-patient analyses with precise definitions of endpoints and formal statistical hypotheses? Should we optimize data collection to capture more details on prior therapy in a more standardized way? What is the best statistical methodology to use?
Regulatory perspective: can intra-patient analysis support a regulatory claim or ease the review process?
Payer perspective: can intra-patient analysis also help support payer negotiations?
Covariate adjustment through linear or nonlinear models is often used in the analysis of clinical trial data, as it leads to efficiency gains when the covariates are prognostic for the outcome of interest. Careful consideration is required when adjusting for covariates in nonlinear models such as logistic regression and Cox regression. For these nonlinear models, the inclusion of baseline covariates can change the treatment effect being estimated (the estimand). For example, the conditional treatment effect can differ from the unconditional treatment effect due to non-collapsibility, as described in the draft FDA guidance released in May 2021.
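A small numerical demonstration of this non-collapsibility, with arbitrary coefficients: even under randomization (no confounding), the marginal odds ratio implied by a conditional logistic model is attenuated relative to the conditional odds ratio.

```python
# Non-collapsibility of the odds ratio: with a randomized treatment T and a
# prognostic covariate X in the true logistic model, the marginal
# (unconditional) OR differs from the conditional OR. Coefficients are
# arbitrary; no confounding is present.
import numpy as np
from scipy.special import expit

b0, b_trt, b_x = -1.0, 1.0, 2.0     # true conditional log-odds model
p_x = np.array([0.5, 0.5])          # X ~ Bernoulli(0.5), independent of T
x_vals = np.array([0.0, 1.0])

def marginal_prob(trt):
    """P(Y=1 | T=trt), averaging over the covariate distribution."""
    return np.sum(p_x * expit(b0 + b_trt * trt + b_x * x_vals))

odds = lambda p: p / (1 - p)
or_marginal = odds(marginal_prob(1)) / odds(marginal_prob(0))
print(f"conditional OR = {np.exp(b_trt):.3f}")   # 2.718
print(f"marginal    OR = {or_marginal:.3f}")     # attenuated toward 1
```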
It is important that the statistical analysis aligns with the estimand of interest. For the unconditional treatment effect, covariate-adjusted estimators that are typically robust to misspecification of the working regression models have been developed over the past few years. Despite the extensive literature and FDA recommendations on the statistical theory and properties of these methods, practical experience is limited, and a few open questions require further discussion, especially for time-to-event outcomes. For example:
• What are the considerations in choosing between the conditional and unconditional effect as the primary estimand? Are there specific scenarios in which the unconditional (or conditional) treatment effect is more clinically meaningful? How should conditional and unconditional treatment effects be communicated to all stakeholders involved in a clinical trial?
• How can existing methods for estimating the unconditional treatment effect be implemented in clinical trials, and how can their limitations be addressed?
Evaluations of these methods are ongoing within the statistical community. This session will focus on addressing some of the above-mentioned scientific questions and sharing experience from real-life implementation of covariate adjustment methods. It will feature prominent participants from industry, academia, and the regulatory agency, including one speaker and three panelists from FDA.
The 21st Century Cures Act (Cures Act) of 2016 encouraged sponsors to explore and investigate the use of real-world data (RWD) and real-world evidence (RWE) in facilitating regulatory decision-making, including demonstrating the efficacy and safety of a drug. Since its enactment, more and more real-world databases have been initiated, with higher data quality and often built-in data extraction and analysis tools. In parallel, FDA must fulfill its mandate to issue guidance on the use of RWE to help support an approved drug seeking a new indication or meeting post-approval study requirements. In the last quarter of 2021, the FDA published a series of guidance documents on RWD utilization standards and strategies. Each guidance has its specific purpose, ranging from electronic health records (EHRs) and medical claims ("Real-World Data: Assessing Electronic Health Records and Medical Claims Data To Support Regulatory Decision-Making for Drug and Biological Products") to registries ("Real-World Data: Assessing Registries to Support Regulatory Decision-Making for Drug and Biological Products Guidance for Industry") in support of submissions for drug and biologic approval, including data standards specific to RWD ("Data Standards for Drug and Biological Product Submissions Containing Real-World Data") and other considerations ("Considerations for the Use of Real-World Data and Real-World Evidence To Support Regulatory Decision-Making for Drug and Biological Products") in regulatory decision-making. These draft guidance documents had an open comment period inviting drug developers and external stakeholders to weigh in on strategies for utilizing RWD. Since then, there have been several comments on the public docket and much more discussion in the public domain on this subject. This panel will bring together experts and stakeholders from industry, RWD vendors, and regulatory agencies to discuss different aspects of the guidances and their implications. Panelists from the FDA will speak to the requirements and standards expected of RWD sources and how RWD could expedite their review process. Data vendors will address concerns, difficulties, and the impacts of implementing all or portions of these guidances. Panelists from the pharmaceutical/academic setting will address the challenges faced while using these guidance documents. The panel will identify points of concurrence as well as areas that could be improved with additional guidance or clarity. Possible next steps for furthering the use of RWD in clinical research will also be explored.
AI/ML methods are often mentioned in connection with health data. Current applications fall mainly in the fields of disease diagnosis and event prediction. For an example of the first category in an ICU setting, see https://pubmed.ncbi.nlm.nih.gov/32152583/; for the second category, see Bayer's app MyIUS (https://www.bayoocare.com/en/), developed for a hormonal birth control device. However, it seems that the application of AI/ML methods in the setting of an initial NDA has not yet gained much interest. This session therefore aims to contribute to the application of AI/ML methods in the regulated field of late-stage clinical development. Topics will be as follows:
1. Industry perspective: case studies where AI/ML methods have been applied in the drug development process:
a. Which methods have been applied, and in which settings?
b. Where have these methods provided additional insights?
c. How have these methods been communicated to internal stakeholders?
d. Is there an optimal balance between the prediction performance of an AI/ML model and its interpretability?
2. Regulatory perspective:
a. Is FDA considering the application of these methods?
b. Do these methods have the potential to impact the review process, and are there cases where such methods have been used?
c. Given adequate and validated prediction performance, could AI/ML algorithms trained with submission data subsequently be used for guiding treatment decisions, i.e., be a "digital CDx"?
3. Operational topics:
a. AI/ML methods to increase data quality.
4. Technical perspective:
a. Validation and quality control of AI/ML analysis software coded in open-source languages like R/Python.
b. Life cycle, CI/CD, and continuous quality control of AI/ML algorithms used with patient data (including post-approval scenarios).
We are currently in contact with stakeholders who can cover these aspects. The session will consist of three presentations (industry and regulatory) plus a panel discussion.
Tissue biopsy remains the standard of practice for cancer diagnosis. However, it can be invasive and costly. Liquid biopsy-based tests using circulating tumor DNA/cell-free DNA (ctDNA/cfDNA) are developing rapidly and finding application in precision medicine companion diagnostics (CDx). Recently, there has also been considerable research on the potential of liquid biopsy-based tests for monitoring a patient's response to treatment. In this session, we will discuss challenges in study design and statistical analysis for evaluating such liquid biopsy-based diagnostic tests.
The benefit-risk assessment of a new medical product is complex and involves trade-offs between often conflicting multiple efficacy and safety endpoints, in addition to the different methodologies for benefit and risk assessments. Therefore, succinctly describing the benefit-risk profile and communicating the trade-offs between benefits and risks in a clear and transparent manner, using all available evidence, is critical for regulatory decision-making and individual patient management. Bayesian analysis, in addition to conventional approaches, provides an alternative framework for such quantitative assessments of the benefit-risk trade-off by allowing formal utilization of prior information and integrating different sources of information and uncertainty. With an increased emphasis on improving the benefit-risk assessment process at FDA, there has been an increase in efforts on the sponsors' side for quantitative benefit-risk assessment, often within a Bayesian framework. Innovative Bayesian methods for benefit-risk assessment, along with empirical examples, will be presented in this session. A demonstration of a software package developed by the presenters may also be included. Through such illustrations, presenters will discuss various research experiences with Bayesian benefit-risk methods, including their strengths, limitations, and potential future applications.
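As one concrete form such a quantitative Bayesian assessment can take, the sketch below computes the posterior probability that a new treatment has the better benefit-risk profile under a simple linear utility; the counts, priors, and risk weight are invented for illustration and do not represent the presenters' methods.

```python
# Minimal sketch of a Bayesian benefit-risk assessment: posterior draws of
# response and serious-AE rates per arm are combined into a simple utility
# (benefit minus weighted risk), yielding the posterior probability that
# the new treatment has the better benefit-risk profile.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000
w_risk = 0.5                       # how much one unit of risk offsets benefit

# (responders, n) and (serious AEs, n) per arm -- hypothetical data
resp = {"trt": (45, 100), "ctrl": (30, 100)}
sae = {"trt": (12, 100), "ctrl": (8, 100)}

def posterior_utility(arm):
    r, n = resp[arm]
    s, m = sae[arm]
    p_benefit = rng.beta(1 + r, 1 + n - r, n_draws)   # Beta(1, 1) priors
    p_risk = rng.beta(1 + s, 1 + m - s, n_draws)
    return p_benefit - w_risk * p_risk

u_trt, u_ctrl = posterior_utility("trt"), posterior_utility("ctrl")
print(f"P(treatment has better benefit-risk) = {np.mean(u_trt > u_ctrl):.3f}")
```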
Randomized clinical trials (RCTs) are the gold standard for demonstrating the efficacy and safety of a new drug, device, or intervention, because randomization ensures balance and avoids confounding. However, RCTs can be very expensive and can take many years to complete, especially for rare diseases and small clinical trials, mainly due to the lack of patients. In addition, for life-threatening diseases commonly seen in the oncology and hematology areas, RCTs may not be possible and could be unethical. Therefore, single-arm studies utilizing real-world data (RWD) from electronic health record systems, literature, or registries, serving either as a control or as prior information to assist in trial planning or in efficacy estimation of the new drug, have become attractive. Given considerations of comparability between study data and real-world data in terms of study design components, patient population, and efficacy and safety measurements, the practical use of RWD in efficacy analyses can be challenging. In particular, it is well known that RWE can only correct for biases due to measured confounding, not for unmeasured confounding. Missing data arising from differences in data collection between the ongoing study and the historical data can be another serious concern. Although many different sensitivity analyses, such as Bayesian twin regression, the E-value, and high-dimensional propensity scores, have been proposed to study the robustness of an association to potential unmeasured confounding and missing data, how to make the final decision by incorporating the sensitivity analyses is not yet clear. This session is intended for audiences interested in utilizing RWD/RWE in oncology and hematology clinical trials who want to gain a clear understanding of the challenges and learn how to correctly use sensitivity analyses to study efficacy robustness in the presence of unmeasured confounding. Speakers from regulatory agencies, pharmaceutical companies, and academia will share their latest research, practical trial examples, and potential methods.
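For reference, the E-value mentioned above has a simple closed form (VanderWeele and Ding, 2017); a minimal sketch with illustrative risk ratios:

```python
# Minimal sketch of the E-value: the minimum strength of association, on
# the risk-ratio scale, that an unmeasured confounder would need with both
# treatment and outcome to fully explain away an observed risk ratio.
import math

def e_value(rr):
    """E-value for an observed risk ratio (point estimate)."""
    rr = 1 / rr if rr < 1 else rr          # work on the scale where RR > 1
    return rr + math.sqrt(rr * (rr - 1))

for rr in (1.5, 2.0, 0.6):                 # observed RRs are illustrative
    print(f"observed RR = {rr}: E-value = {e_value(rr):.2f}")
```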
Buyse (2010) extended generalized pairwise comparisons (GPC) to the analysis of multiple outcomes using their hierarchical order of importance. Under GPC, each patient in the treatment group is compared with every patient in the control group, considering first the more important outcomes (e.g., death) and then less important endpoints (e.g., a non-fatal outcome such as disease progression in an oncology study). The "win" as the result of such comparisons is a very attractive idea from Pocock et al. (2012). Over the past decade, the win ratio (ratio of win proportions, Pocock et al. 2012), the net benefit (difference in win proportions, Buyse 2010), and the win odds (odds of win proportions, Dong et al. 2019) have been developed and comprehensively studied. These three win statistics test the same null hypothesis of equal win probabilities in the two groups. As nonparametric methods, they can handle semi-competing-risk situations (i.e., fatal plus non-fatal outcomes) and non-proportional-hazards situations (e.g., the delayed treatment effect typically seen in immuno-oncology). Their flexibility allows a composite of multiple endpoints of any data type (e.g., time-to-event, continuous, ordinal). The win statistics have been used in practice (e.g., design and analysis of Phase III trials) and in support of regulatory approvals (e.g., tafamidis for the treatment of cardiomyopathy per the ATTR-ACT trial). In this session, two speakers and a panel will present and discuss win statistics with respect to (1) theoretical development, (2) regulatory experience, (3) practical considerations, and (4) future perspectives.
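To make the definitions concrete, here is a minimal sketch computing the three win statistics for a two-level outcome hierarchy on fully observed toy data; real analyses must handle censoring and ties more carefully.

```python
# Minimal sketch of the three win statistics for a hierarchy of two
# outcomes, using fully observed toy data. Each treatment patient is
# compared with every control patient: first on survival time (longer
# wins), then, if tied, on a less important continuous endpoint.
# (survival time in months, secondary score) per patient -- hypothetical
trt = [(24, 3.0), (18, 1.5), (30, 2.0), (12, 4.0)]
ctrl = [(20, 2.5), (18, 1.0), (12, 4.0)]

wins = losses = ties = 0
for t_surv, t_score in trt:
    for c_surv, c_score in ctrl:
        if t_surv > c_surv or (t_surv == c_surv and t_score > c_score):
            wins += 1
        elif t_surv < c_surv or (t_surv == c_surv and t_score < c_score):
            losses += 1
        else:
            ties += 1

n_pairs = len(trt) * len(ctrl)
print(f"win ratio   = {wins / losses:.2f}")                # Pocock et al. 2012
print(f"net benefit = {(wins - losses) / n_pairs:.2f}")    # Buyse 2010
print(f"win odds    = {(wins + 0.5 * ties) / (losses + 0.5 * ties):.2f}")  # Dong et al. 2019
```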
References: 1) Buyse M. 2010. Generalized pairwise comparisons of prioritized outcomes in the two-sample problem. Statistics in Medicine 29(30):3245-3257.
2) Pocock SJ, Ariti CA, Collier TJ, Wang D. 2012. The win ratio: a new approach to the analysis of composite endpoints in clinical trials based on clinical priorities. European Heart Journal 33(2):176-182.
3) Dong G, Hoaglin DC, Qiu J, Matsouaka RA, Chang Y, Wang J, Vandemeulebroecke M. 2019. The win ratio: on interpretation and handling of ties. Statistics in Biopharmaceutical Research 12(1):99-106.
4) Maurer MS, Schwartz JH, Gundapaneni B, et al. 2018. ATTR-ACT Study Investigators. Tafamidis Treatment for Patients with Transthyretin Amyloid Cardiomyopathy. New England Journal of Medicine. 379 (11):1007–1016.
5) Label for VYNDAQEL/VYNDAMAX https://www.fda.gov/media/126283/download (e.g., see Page 13).
Diagnostic tests often involve more than a single analyte or biomarker, e.g., a multivariate index assay that combines the values of multiple variables, or complex genomic signatures such as microsatellite instability (MSI) and tumor mutational burden (TMB). Analytical and clinical validation for these complex biomarkers can differ from single-analyte/biomarker validations, creating distinct challenges for study design and statistical analysis. In this session, we will discuss the various challenges associated with analytical and clinical validation studies for evaluating such complex biomarkers.