The FDA recently issued a draft guidance, “Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products.” As stated in the guidance, trial designs that might be considered novel or complex innovative include those that formally borrow external or historical information in the data analysis. Utilizing all available information through data borrowing can save resources, shorten development timelines, and may be more ethical by limiting the number of patients in the placebo control group. This is particularly essential in rare disease areas, where patient recruitment is very challenging. A key assumption underlying the validity of data borrowing is consistency between the historical and current studies. Matching patients’ baseline characteristics, among other approaches, can help ensure consistency of the patient populations. In addition, dynamic borrowing using a power prior in a Bayesian analysis allows the amount of borrowing to depend on the observed homogeneity in the study endpoint between the two data sources. The combination of the two techniques should ease the major concerns about data borrowing. In this session, regulatory, industry, and academic speakers will share their thoughts on historical data borrowing. More specifically, the discussion will cover patient selection/matching methods and the frequentist operating characteristics (power and Type I error probability) of Bayesian hypothesis testing. In addition, non-inferiority and superiority assessments in the Bayesian framework will be considered. Simulation results and examples will be used to illustrate the applications of the methods.
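As a rough illustration of the power-prior idea described above, the sketch below down-weights historical responses in a conjugate beta-binomial model. The discounting rule `dynamic_a0` is a purely hypothetical choice made for illustration; it is not a method prescribed by the guidance, and real dynamic-borrowing rules (e.g., commensurate or empirical-Bayes approaches) are more sophisticated.

```python
# Conjugate beta-binomial power prior: historical responses are down-weighted
# by a0 in [0, 1] before being combined with the current-trial likelihood.
def power_prior_posterior(x_cur, n_cur, x_hist, n_hist, a0, a=1.0, b=1.0):
    """Return the posterior Beta(alpha, beta) parameters for a response rate."""
    alpha = a + x_cur + a0 * x_hist
    beta = b + (n_cur - x_cur) + a0 * (n_hist - x_hist)
    return alpha, beta

# Hypothetical dynamic rule: shrink a0 toward 0 as the observed response
# rates diverge (many alternatives exist; this one is purely illustrative).
def dynamic_a0(x_cur, n_cur, x_hist, n_hist, a0_max=1.0):
    gap = abs(x_cur / n_cur - x_hist / n_hist)
    return a0_max * max(0.0, 1.0 - 2.0 * gap)

# Identical observed rates (12/40 = 60/200 = 0.30) -> full borrowing here.
a0 = dynamic_a0(12, 40, 60, 200)
alpha, beta = power_prior_posterior(12, 40, 60, 200, a0)
```

With `a0 = 0` no historical information is used, and with `a0 = 1` the historical data are pooled at full weight, which makes the frequentist operating characteristics mentioned above (Type I error in particular) the key design consideration.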
Tools and methodologies have been under development to consolidate, organize, and structure real-world data (RWD) to generate research-grade evidence and to ensure that confounding variables are accounted for in analyses. A synthetic control arm (SCA) is a novel clinical trial design element that mimics a placebo arm by utilizing RWD and enables rapid “Go/No Go” decisions. An SCA requires that the disease course be predictable and that the standard of care be well defined and stable. Analytic techniques such as natural language processing and statistical and machine learning methods are needed to extract relevant information from structured and unstructured data. FDA released a framework for its real-world evidence program in December 2018. Like any novel research initiative, the proposed use of historical control data to build an SCA carries some associated risks; selection bias and historical time effects are obvious risk factors. Simulation studies can aid in understanding the bias-variance trade-off and, more generally, the influence of the historical control data. Careful statistical planning and design, along with a thorough understanding of the characteristics of the target population of interest, are therefore required to mitigate some of those risks. In this session, we plan to discuss data mining and machine learning for the generation of SCAs, the adaptation of SCAs to clinical trials, and the associated statistical techniques.
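A minimal simulation in the spirit described above shows how pooling a drifted historical control into the control estimate trades variance for bias. The normal endpoint, sample sizes, and drift value are all hypothetical choices for illustration.

```python
import random
import statistics

def simulate_mse(drift, n_cur=30, n_hist=300, reps=1000, seed=1):
    """MSE of the control-mean estimate with vs. without pooling historical
    controls whose true mean has shifted by `drift` (true current mean is 0)."""
    rng = random.Random(seed)
    pooled_sq_err, current_sq_err = [], []
    for _ in range(reps):
        cur = [rng.gauss(0.0, 1.0) for _ in range(n_cur)]
        hist = [rng.gauss(drift, 1.0) for _ in range(n_hist)]
        pooled_sq_err.append(statistics.fmean(cur + hist) ** 2)
        current_sq_err.append(statistics.fmean(cur) ** 2)
    return statistics.fmean(pooled_sq_err), statistics.fmean(current_sq_err)

# No drift: pooling lowers MSE through variance reduction.
mse_pool_0, mse_cur_0 = simulate_mse(drift=0.0)
# Substantial historical time effect: bias from the pooled data dominates.
mse_pool_d, mse_cur_d = simulate_mse(drift=0.5)
```

Simulations of this kind, run over a grid of plausible drifts, are one way to quantify how much a historical time effect would have to be before an SCA does more harm than good.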
One of the most important considerations in designing clinical trials is the choice of outcome measures. These outcome measures can be clinically meaningful endpoints that directly measure how patients feel, function, and survive. Alternatively, indirect measures, such as biomarkers that include physical signs of disease, laboratory measures, and radiological tests, are often considered as replacement endpoints or potential “surrogates” for clinically meaningful endpoints (Fleming TR 2012). These surrogate endpoints are sometimes employed in clinical trials and, when used judiciously, can accelerate and focus the study of new therapies and can greatly enhance our understanding of their mechanisms of action. This session will feature speakers from industry, academia, and a regulatory agency, who will discuss the challenges of surrogacy analysis, the pros and cons of currently available methods for assessing surrogate endpoints, and the specific statistical considerations associated with their use. Several examples from oncology trials will be provided.
Drug development in oncology and immuno-oncology (I-O) continues to accelerate. The desperate need for new therapies for patients drives pharmaceutical companies to invest tremendous resources. However, the development challenges are substantial, including long timelines and high costs that delay patients’ access to new medicines. Furthermore, complex mechanisms of action and new approvals have a significant impact on the probability of success of programs, with ever-changing benchmarks. There are several key scientific questions in I-O development: dose level, combination agents, and population selection. The traditional clinical development pathway requires multiple phase I and II studies with enormous numbers of patients over long timelines. Considering the extremely high cost and the “to be proved” drug mechanism, plus patient enrollment challenges due to many other ongoing clinical trials at investigational sites, it is critical to have an optimal design that answers the key questions by utilizing patients’ data efficiently and improves the probability of success. Therefore, innovative seamless phase I/II designs incorporating treatment selection and population enrichment are of great interest. This parallel session will review and discuss the recent advances, challenges, and opportunities of using complex trial designs and analyses to expedite drug development and regulatory pathways, including the role of simulation and best practices. Presentations will be given by a group of experts from industry, regulatory agencies, and academia: Yuan Ji from the University of Chicago, “ROBOT: A Robust Bayesian Hypothesis Testing Method for Basket Trials”; Jingjing Ye from FDA, “A working platform trial for rare cancers”; Jianchang Lin from Takeda Pharmaceuticals, “Flexible Semiparametric Meta-Analytic-Predictive Prior for Historical Control Borrowing”; and Brian Hobbs from the Cleveland Clinic, “Designing trials for potentially non-exchangeable patient subpopulations”.
Oncology is a competitive therapeutic area in which the landscape is constantly changing. Some treatments produce remarkable responses, with complete eradication of disease in some cases, but nearly all treatments face drug resistance and ultimately stop working for many patients. It is speculated that combination therapy can overcome resistance and gain greater potency by attacking the cancer at multiple points on a cell signaling pathway or by attacking multiple pathways. Many future combination regimens are expected to have PD-(L)1 as the backbone, and many of the patients will be PD-(L)1 failures in some respect. This poses a unique challenge for the future of oncology drug development. Traditionally, developers built combinations only from drugs that were already approved as monotherapy by the regulatory agencies. In the last decade, companies have increasingly developed dual-novel or novel-novel combinations, in which neither of the component drugs has been approved for use alone. In some situations, one of the component drugs may be from a class in which a drug has been approved, such as PD-(L)1, or the component drugs may not yet be approved for the specific patient population. In response to this trend, FDA issued a guidance on the codevelopment of combination therapies in 2013. Complex study designs that accommodate more trial arms with the accrual of an extensive number of patients, or that consider single-arm combination therapy with external/historical controls, are increasingly required, though they face unprecedented regulatory challenges. Trial sponsors and regulators will need to balance the level of evidence needed for approval against the data that may already be available, to ensure equipoise and expedite development. This session will feature speakers from both regulatory agencies and industry to discuss the challenges and strategies in novel-novel combination therapy development, dose finding, and confirmatory evidence generation.
The selection of an external control using real-world data (RWD) is an extremely important part of the study design for rare disease studies or single-arm studies. Real-world data and evidence have been increasingly used in regulatory and healthcare decision-making since the passage of the 21st Century Cures Act on December 9, 2016, which has significantly broadened the potential of external controls. The source data for an external control can come from RWD, including electronic health records (EHR), patient registries, and laboratory databases, or from other randomized clinical trials (RCTs). The external control can be a group of patients treated in an earlier timeframe, or treated during the same time period but in another trial or registry. The control group is chosen such that detailed information is available, including pertinent individual subject data on demographics, baseline status, concomitant therapy, and course on study. The most common way to select an external control group has been the propensity score (PS) matching method. The PS is typically estimated using a logistic regression model that incorporates all variables that may be related to the outcome and/or the treatment decision. An expansion of the PS model was proposed by Tan (2019): a causal inference method that not only has the double robustness property but also extends the propensity score model and the regression model to semiparametric models with monotone constraints on the nonparametric parts. Another method is to borrow information through the use of a prior and then apply Bayesian methodology. Our presentation will explore these and other models for selecting the external control group and for the subsequent trial analysis. The presenters will also discuss how their methods were applied in clinical trials.
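As a toy sketch of the standard PS workflow described above (a logistic regression for the propensity score, followed by greedy 1:1 nearest-neighbor matching of external controls to trial patients), the code below uses a single hypothetical covariate and plain gradient descent; real applications use many covariates and dedicated software.

```python
import math
import random

def fit_logistic(X, z, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression of treatment (z) on covariates."""
    w = [0.0] * (len(X[0]) + 1)                      # intercept + coefficients
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, zi in zip(X, z):
            eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-eta))
            grad[0] += (p - zi)
            for j, xj in enumerate(xi):
                grad[j + 1] += (p - zi) * xj
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

def propensity(w, xi):
    """Estimated probability of being a trial (treated) patient."""
    eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-eta))

def match_controls(X, z, w):
    """Greedy 1:1 nearest-neighbor match of external controls to treated subjects."""
    treated = [i for i, zi in enumerate(z) if zi == 1]
    controls = [i for i, zi in enumerate(z) if zi == 0]
    pairs, used = [], set()
    for i in treated:
        best = min((c for c in controls if c not in used),
                   key=lambda c: abs(propensity(w, X[c]) - propensity(w, X[i])))
        used.add(best)
        pairs.append((i, best))
    return pairs

# Hypothetical data: 10 trial patients, 30 external-control candidates,
# one covariate whose distribution differs between the two sources.
rng = random.Random(0)
X = [[rng.gauss(1.0, 1.0)] for _ in range(10)] + [[rng.gauss(0.0, 1.0)] for _ in range(30)]
z = [1] * 10 + [0] * 30
w = fit_logistic(X, z)
pairs = match_controls(X, z, w)
```

Greedy matching is only one of several schemes (optimal matching, caliper matching, and weighting are common alternatives), but it conveys the basic idea of selecting the external controls closest in propensity to the trial patients.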
Treatment effect heterogeneity (different patients respond differently to treatments) plays an increasingly essential role in evaluating the efficacy of treatments. With technology advancements, beyond the traditional baseline demographics and disease endpoints, more patient characteristics data (e.g., -omics data) are now being collected at an unprecedented scale. In parallel, many advanced statistical learning methods have recently been developed to handle high-dimensional data. High-dimensional data from multiple sources, in combination with advanced analytical tools, offer great promise for addressing heterogeneity of treatment effects. The complex nature of disease response to treatment, however, still poses great challenges to statisticians. For example, can we identify an optimal treatment for a given patient rather than finding the right patient for a given treatment? Instead of being entirely data driven, can we incorporate domain knowledge into statistical learning to obtain biologically more interpretable and meaningful results?
This session invites industry, academic, and regulatory speakers to discuss the challenges and recent developments in addressing heterogeneity of treatment effects. Dr. Pingye (Eric) Zhang from Merck & Co., Inc. will present a method for value-function-guided subgroup identification. Dr. Qi Long from the University of Pennsylvania will talk about knowledge-guided statistical learning methods in precision medicine. The discussant for the session will be from FDA.
The COVID-19 pandemic has had a significant impact on clinical trials around the world. Recent studies have shown that approximately 5,000 studies in different therapeutic areas are currently affected by the pandemic. The areas of impact are vast, both operational (e.g., site initiation and visits, site audits, drug supply) and scientific (e.g., changes in population, data collection). The impact of COVID-19 also poses new risks to the interpretation of trial results and their broad applicability for future clinical practice. The pandemic may cause significant loss to follow-up or missed planned visits, eventually resulting in a large amount of missing data. The potential consequences include the inability to acquire the data needed to meet trial objectives or to implement pre-specified analysis plans, with obvious impacts on clinical development programs. Other consequences involve changes in the target population, informative drop-outs, and the inability to assess the primary estimand of interest. While some of these issues may be addressed through operational excellence, the study design and planned statistical analyses will need to be adjusted to address them without compromising the integrity of the trial. The impact of the COVID-19 outbreak on the conduct and reporting of clinical trials is also recognized by regulatory agencies, and important guidelines have been issued and updated (FDA and EMA).
This session will focus on the necessary statistical considerations for the design and analysis of clinical trials conducted during the pandemic. Both scientific and regulatory impacts will be discussed. The session consists of eminent speakers and panelists from industry and different regulatory agencies.
Bayesian methods have emerged as particularly helpful for combining disparate sources of information while maintaining reasonable traditional frequentist characteristics. They make it possible to borrow information strategically across heterogeneous sources (e.g., different disease subgroups, different regions, different studies, etc.). The combined information may provide enough justification for smaller or shorter clinical studies without sacrificing the goal of evidence-based medicine. Modern computing power and algorithms now make it possible to take advantage of Bayesian continuous knowledge building. Moreover, recent FDA guidance (Adaptive Design Clinical Trials, 2019; Interacting with the FDA on Complex Innovative Trial Designs, 2019 (draft)) recognizes Bayesian methods as a scientifically rigorous and safe experimental approach to clinical trials, as well as a statistically sound way of incorporating prior knowledge to make better decisions.
This session will focus on various applications of Bayesian statistics in clinical trials, including designing non-inferiority trials, borrowing trial-external information for smaller and more efficient designs in rare disease and pediatric populations, and detecting safety signals for a drug from real-world data. The proposed session consists of eminent speakers and a discussant from FDA, industry, and academia.
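One common Bayesian ingredient in non-inferiority designs like those mentioned above is the posterior probability that the treatment response rate is no worse than control by more than a margin. A minimal Monte Carlo sketch with independent Beta(1, 1) priors follows; the margin, counts, and decision use are hypothetical, and no external borrowing is included.

```python
import random

def prob_noninferior(x_trt, n_trt, x_ctl, n_ctl, margin=0.10,
                     draws=20000, seed=7):
    """Monte Carlo estimate of Pr(p_trt > p_ctl - margin | data) under
    independent Beta(1, 1) priors on the two response rates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        p_trt = rng.betavariate(1 + x_trt, 1 + n_trt - x_trt)
        p_ctl = rng.betavariate(1 + x_ctl, 1 + n_ctl - x_ctl)
        hits += p_trt > p_ctl - margin   # count draws satisfying non-inferiority
    return hits / draws

# Hypothetical example: 55/100 responders on treatment vs. 60/100 on control,
# with a 10-percentage-point non-inferiority margin.
posterior_prob = prob_noninferior(55, 100, 60, 100)
```

In practice the decision threshold on this posterior probability would be calibrated by simulation so that the design's frequentist operating characteristics remain acceptable.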
People love to hear stories, but we scientists are generally not good storytellers. How can we harness the power of storytelling to effectively communicate clinical data and artificial intelligence? The discovery and development process for medical products generates large amounts of data, from genomic data and patient-level data to large health care networks. How to better extract and exploit knowledge from data is a question equally challenging for industry, academia, and regulatory agencies. In this session, we will bring in three experts to share their invaluable insights on how to use storytelling and advanced analytics to bridge the gap between clinical data, machine learning, statistics, and the mind of the audience.
The final ICH E9(R1) Addendum on “Estimands and Sensitivity Analysis in Clinical Trials” (2019) recommends a change in mindset in how clinical trials are planned. According to this guidance, trial protocols should include precise descriptions of the treatment effect reflecting the clinical question(s) posed by the trial objectives, accomplished by defining trial-relevant estimands. Main and sensitivity estimators then need to be pre-specified for each estimand.
Implementation of this framework has been initiated across companies in the pharmaceutical industry. Regulatory discussions on estimands have also been conducted across projects from different disease areas. Different working groups on estimands have been formed and there has been an increasing number of publications on this topic.
This townhall will discuss the current impact of the implementation of the estimand trial planning framework. The townhall panelists will include key clinical and statistical leaders from both FDA and Industry, including a senior FDA clinical policy maker, members of the ICH E9(R1) Expert Working Group from FDA and Industry, as well as influential experts on this topic from Industry and Academia. Important questions will be asked, including:
• How does this guidance shape regulatory interactions, including those for submissions and labeling?
• How are both clinicians and statisticians adopting this framework in project teams across disease areas?
• How can we translate clinical questions into clear trial objectives and scientific questions of interest?
• What are the key steps to implement this framework, including using protocol and SAP templates?
Each member of the panel will have a short presentation on a pre-determined question and the audience will have the opportunity to share additional points of view and engage with more questions.
Confirmed participants: Robert Temple (FDA), John Scott (FDA), Frank Bretz (Novartis), Craig Mallinckrodt (Biogen), Scott Emerson (University of Washington)
Virtual (or decentralized) clinical trials (VCTs) have been introduced into medical research and drug development. These trials take advantage of online technology (apps, social media, monitoring devices, etc.) and digital inclusion platforms (recruitment, endpoint measurement, adverse reactions), and they order and send medications or devices directly to the subject’s residence, allowing subjects to be home-based at every stage of the clinical trial. VCTs can provide subjects with access to clinical research even if they reside far from a traditional medical center. This digital transformation has also changed the mindset of both subjects entering clinical trials and researchers conducting them. VCTs have already been conducted in overactive bladder and osteoarthritis trials. The characteristics of a VCT are: 1) potential subjects are identified based on their online searches and medical activities; 2) qualification for the trial is determined based on EMR, laboratory or imaging visits, or self-reported information; 3) consent is obtained via the internet; 4) subjects receive treatment via home delivery; and 5) subjects report information via electronic monitoring or self-report. Issues to be considered in designing and implementing these clinical trials include: 1) the possible diversity of the study population (including equitable selection and selection bias considerations); 2) the impact on the inclusion and exclusion criteria for the clinical trial; 3) the impact on randomization and masking of study treatment; 4) the impact on data collection and verification; 5) the statistical methods used to analyze such data; and 6) the impact on regulatory submissions of the clinical trial data. Our session will explore these topics. Each presenter will discuss his/her views on virtual clinical trials and how they can be integrated into our medical research programs.
The recent advances in chimeric antigen receptor (CAR) T-cell therapy herald an exciting new era in cancer treatment. While dramatic clinical responses to CAR-T treatment in B-cell malignancies and multiple myeloma generate enthusiasm, the unique features of such therapy pose new challenges for the study design and statistical analysis of clinical trials. The major challenges facing CAR-T trials include: (1) Because the development of autologous CAR-T therapy may be subject to manufacturing failure, should randomization occur before or after the product has been successfully manufactured? (2) Eligible subjects may receive bridging therapy during the manufacturing period, prior to the administration of the CAR-T product. How should the treatment effect of interest be properly defined given the joint effect of the bridging therapy and the CAR-T product? (3) Various non-proportional hazards patterns manifested in CAR-T trials violate the proportional hazards assumption, resulting in underpowered studies when conducted in the conventional fashion. How can a novel study design salvage the power loss? (4) How can the manufacturing process and trial eligibility criteria be improved via artificial intelligence and machine learning? This session will highlight recent innovations in statistical methodologies addressing the unique challenges of CAR-T trials and will feature speakers from industry, academia, and a regulatory agency sharing their insights.
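One alternative often discussed for non-proportional hazards settings like challenge (3) is the restricted mean survival time (RMST), the area under the survival curve up to a horizon tau, which does not rely on the proportional hazards assumption. A simplified Kaplan-Meier-based RMST sketch follows (ties are processed one subject at a time, and the inputs are hypothetical):

```python
def km_rmst(times, events, tau):
    """Restricted mean survival time up to tau, i.e. the area under a
    Kaplan-Meier curve (events coded 1, censorings 0; this simplified
    version steps the curve one subject at a time rather than grouping ties)."""
    data = sorted(zip(times, events))
    n_risk = len(data)
    surv, last_t, area = 1.0, 0.0, 0.0
    for t, d in data:
        if t > tau:
            break
        area += surv * (t - last_t)      # accumulate area before this time
        last_t = t
        if d:                            # event: step the survival curve down
            surv *= (n_risk - 1) / n_risk
        n_risk -= 1
    area += surv * (tau - last_t)        # tail from the last time up to tau
    return area

# With no censoring, the RMST equals the mean of the tau-truncated times.
rmst = km_rmst([1, 2, 3, 4], [1, 1, 1, 1], tau=4.0)   # -> 2.5
```

Comparing RMST between arms preserves interpretability and power under delayed-onset or crossing-hazards patterns, although the choice of tau must be pre-specified and clinically justified.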
This session proposal is submitted on behalf of the ASA Biopharm Section Real-World Evidence Scientific Working Group (WG).
Data from “real-world” clinical practice and medical product utilization – outside of clinical trials – are regarded as an increasingly important source of evidence generation that holds high potential to increase efficiency and to improve clinical development and life-cycle management of medical products. US and EU regulatory agencies, public-private partnerships, and health technology assessment organizations have launched major initiatives to address the concerns and considerations in the use of real-world evidence (RWE) to inform regulatory decision-making. In August 2017, FDA issued a final guidance document on the use of RWE to support regulatory decision-making for medical devices. Additionally, FDA has committed to exploring the use of RWE in drug regulatory decision-making through stakeholder engagement, demonstration projects, and guidance development.
This WG comprises a range of statisticians from regulatory agencies, industry, and academia, working in a rapidly developing and promising area in which statisticians need to play a central role. It is organized into two workstreams. The first workstream focuses on the use of RWE for label expansion; the second focuses on the use of RWE to inform study design and the use of external controls. Both workstreams have conducted extensive landscape and gap assessments, the results of which were presented at the 2019 BIOP RISW. The WG is in its second year and is moving from the landscape and gap assessment phase to a research phase. The research topics identified in the initial phase include estimands in real-world settings, outcome ascertainment with the use of machine learning, causal frameworks, computable phenotypes, and complex exposure patterns. These research topics and their details will be covered at the 2020 RISW.
The United States is in the throes of an opioid epidemic. More than 130 people die every day from opioid-related drug overdoses, and an estimated 40% of opioid overdose deaths involve a prescription opioid [1]. The misuse, abuse, and diversion of prescription drugs is a significant public health problem. As a consequence of the increasing rate of prescription drug abuse, regulatory requirements for the assessment of the abuse potential of CNS-acting drugs have become increasingly stringent, particularly in the United States. A large part of the abuse potential evaluation of drug products is the human evaluation of pharmacological effects, both pharmacokinetic and pharmacodynamic (PD), typically assessed by conducting a human abuse potential (HAP) study. There are two types of HAP studies: those designed to evaluate new molecular entities (NMEs) and those that assess abuse-deterrent opioid formulations (ADFs). Final FDA guidances for the two types of studies were published in 2017 and 2015, respectively, each with detailed recommendations for statistical analysis. However, many issues have emerged in actual practice that make it difficult to comply with the guidance recommendations, such as hypothesis testing margins, study variabilities, selection of endpoints, and the best way to present the totality of evidence. This session will present the perspectives of the key players involved in the design and analysis of HAP studies, including statisticians, pharmacologists, and clinicians from FDA and industry. The goal is to discuss the challenges and present potential solutions for these emerging issues. The discussion will focus on areas where improvement can be made, including the evaluation of next-generation ADFs, statistical modeling assumptions, and approaches within the study design to reduce the variability of primary and secondary outcome measures.
Reference: [1] https://www.cdc.gov/drugoverdose/epidemic/index.html
In 2020, executive-level statistics leaders of leading biopharmaceutical companies formalized a consortium to facilitate communication and non-competitive collaboration: to help industry be optimally positioned to face the challenges and seize the opportunities in drug development that are emerging on many fronts; to support the visibility, health, and advancement of the statistics profession; and, where appropriate, to provide a unified voice on key topics or issues. One recent example is the consortium-sponsored industry proposal around COVID-19, published in 2020 with broad representation across companies. This session will introduce the consortium further and allow company leaders across industry to share their perspectives on how the consortium can help, or be utilized by, the broader community on the wide variety of topics their organizations are engaged with, including novel designs, innovation, leadership, technology, and the ever-changing clinical trial landscape. Time will be given for open panel Q&A as well.