We live in an age of data abundance: traditional randomized controlled clinical trials, observational studies, patient registries, and expert opinion, to name a few sources. Recent advances in technology have made access to these data easier (e.g., the TransCelerate initiative). Not surprisingly, the desire to use all of this available data in drug development decisions, to the benefit of patients and the public, has increased as well. Having access to large amounts of data can be overwhelming, and a common reaction is to engage with information selectively. A Bayesian decision-making framework helps avoid this pitfall by providing a structured approach that incorporates prior information based on relevant data into study design and decision-making. Both PDUFA VI and the 21st Century Cures Act call for expanded use of complex innovative designs, which include trial designs with formal incorporation of prior knowledge and use of external or historical control data. Several PhRMA member companies have pioneered these approaches in their development decisions and will share their experiences in this session. The two talks from industry statistical methodology experts share a common element, Bayesian decision making as a key driving force behind development decisions, but will also highlight nuances reflecting each company's approach. The panel discussion following the talks will add views from other industry experts and regulators. The talks will cover prior elicitation, synthesis of data (with case studies), application of an award-winning Bayesian quantitative decision-making framework, and the use of conditional assurance to de-risk studies.
Proposed speakers:
1. K. Price or M. Thomann (Eli Lilly)
2. T. Montague or I. Perevozskaya (GSK)
3. Panel: speakers + S. Roychoudhury (Pfizer) + J. Davis (FDA) + R. Beckman (Georgetown Univ.)
It is well known that estimates of treatment effect from real-world studies are subject to systematic biases, including selection bias, information bias, and confounding bias. While the first two can be addressed through study design, confounding bias can be very challenging to handle because of the complex relationships among potentially confounding variables, whether measured or unmeasured and time-independent or time-dependent, which makes it difficult to estimate the causal effect of an intervention. This session includes four presentations discussing recent developments in causal inference methodologies for estimating treatment effects in real-world studies, including pragmatic clinical trials.
Speakers:
Yaru Shi, PhD Merck Research Laboratories, Merck & Co., Inc. Presentation Title: Causal inference for self-controlled case-series studies
Ying Wu, PhD Southern Medical University Email: wuying19890321@gmail.com Presentation Title: Targeted estimation of stochastic treatment effects
Rachael Phillips University of California at Berkeley Email: rachaelvphillips@berkeley.edu Presentation Title: Targeted Machine Learning for Causal Inference with Real-World Data
Fang Liu, PhD Merck Research Laboratories, Merck & Co., Inc. Email: fang.liu11@merck.com Presentation Title: Mediation analysis in pragmatic clinical trials
The final guidance on estimands and sensitivity analyses has just been posted, and there is a clear need to discuss what comes next, namely what analyses ought to follow. Repeated measures designs are very common in many indications, but relying on large-sample theory is a challenge in a rare disease setting, where studies are smaller. When the outcomes are quantitative, two sets of options often exist: a slope-based outcome and a change-from-baseline outcome. Both can be analyzed using a mixed model. For a change-from-baseline outcome, the analysis is generally referred to as a mixed model repeated measures (MMRM) approach, while a longitudinal mixed model is often used for the slope-based outcome. With either approach, an analysis that assumes all missing data are missing at random can be misleading.
Any mixed model requires that one specify a covariance model, and an obvious question is how flexible the choices are. For example, is an unstructured covariance matrix required, even though it does not take advantage of the time-series nature of the data? In rare diseases, a loss of efficiency can mean the difference between success and failure. Will the newer approaches to defining an estimand, and to not assuming incomplete data are missing at random, affect these choices? The longitudinal mixed model is a slope-based approach in which the outcome is the measured value at each time point and time is treated as a continuous variable, with time, treatment group, a time-by-treatment-group interaction, and stratification factors as covariates. One alternative is to estimate each individual's slope and then perform an ANCOVA to test for differences between groups. Again, one would need to consider whether, and how, intercurrent events affect these decisions.
The purpose of this session is to help clarify the pros and cons of these two broad approaches to repeated measures, with a focus on rare diseases.
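As a rough illustration of the slope-based alternative described above, the sketch below fits a per-subject ordinary least squares slope and then compares mean slopes between arms. The simulated data, effect sizes, and the reduction of the ANCOVA to a two-sample t-test (no baseline covariates are simulated) are all illustrative assumptions, not a recommended analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm = 20
times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])  # visit times in years

def simulate_subject(true_slope):
    # outcome = baseline + slope * time + measurement noise
    return 10 + true_slope * times + rng.normal(0, 0.5, times.size)

# Hypothetical scenario: treatment slows decline (-0.4/yr vs -1.0/yr)
placebo = [simulate_subject(-1.0) for _ in range(n_per_arm)]
treated = [simulate_subject(-0.4) for _ in range(n_per_arm)]

def fitted_slope(y):
    # Stage 1: per-subject OLS slope of outcome on time
    return np.polyfit(times, y, 1)[0]

slopes_p = np.array([fitted_slope(y) for y in placebo])
slopes_t = np.array([fitted_slope(y) for y in treated])

# Stage 2: compare mean slopes between arms; with no covariates the
# ANCOVA simplifies to a two-sample t-test on the estimated slopes
t_stat, p_value = stats.ttest_ind(slopes_t, slopes_p)
print(f"mean slope placebo={slopes_p.mean():.2f}, "
      f"treated={slopes_t.mean():.2f}, p={p_value:.4g}")
```

Note that this two-stage approach ignores the uncertainty in the stage-1 slope estimates; the longitudinal mixed model handles that within a single fit, which is part of the trade-off the session will discuss.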
Interim analyses are widely planned and implemented in modern clinical trials. An interim analysis is an analysis of data performed prior to the formal completion of a trial to allow early stopping (for futility or efficacy) or other trial adaptations (e.g., sample size adjustment). Various methodologies have been developed in these areas. However, recent experiences in clinical trials with interim analyses highlight the importance of scrutinizing such designs and their implementation. In this session, new research on innovative methods for interim analysis will be presented. Statistical controversies, potential pitfalls, and lessons learned in interim analysis will be illustrated. Practical challenges in the planning and implementation of interim analyses will also be covered. Three industry speakers will first present their research and experiences in the planning and/or implementation of interim analyses in clinical trials. An FDA speaker will then give a regulatory perspective on the pros and cons of planning interim analyses in clinical trials, the associated procedures, and other important considerations in their design and implementation.
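One concrete ingredient of many interim-analysis plans is an alpha-spending function. As a minimal sketch, assuming a one-sided 0.025 significance level and the Lan-DeMets O'Brien-Fleming-type spending function (one common choice among many), the cumulative type I error spent at each information fraction can be computed as:

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.025):
    # Lan-DeMets O'Brien-Fleming-type spending function (one-sided):
    # alpha*(t) = 2 * (1 - Phi( z_{alpha/2} / sqrt(t) ))
    t = np.asarray(t, dtype=float)
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

fractions = np.array([0.25, 0.5, 0.75, 1.0])
spent = obf_spending(fractions)
for f, a in zip(fractions, spent):
    print(f"information fraction {f:.2f}: cumulative alpha spent {a:.5f}")
```

The function spends very little alpha early and converges to the full 0.025 at the final analysis, which is why O'Brien-Fleming-type boundaries are conservative at early looks; converting the spent alpha into stopping boundaries requires the usual recursive numerical integration, omitted here.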
The industry needs rational guideposts for the decision to invest the next $40-50 million in a late-phase clinical trial. A good starting point is Lalonde's dual-criterion approach, which injects probabilistic considerations into pre-defined decision criteria anchored in the concepts of a Lower Reference Value (LRV) and a Target Value (TV).
In this session, we will embed this approach in the context of good principles of interdisciplinary decision making and demonstrate its virtues and limitations. We will discuss alternative methods, such as decisions based on Bayesian statistics that allow the incorporation of external information, and assurance-based decisions.
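As a minimal sketch of a Bayesian formulation of the dual-criterion idea, the code below computes posterior probabilities of exceeding the LRV and the TV under an assumed normal posterior for the treatment effect; the confidence thresholds (0.90 and 0.50) and all numeric inputs are illustrative assumptions, not Lalonde's published calibration.

```python
from scipy.stats import norm

def go_nogo(post_mean, post_sd, lrv, tv, conf_lrv=0.90, conf_tv=0.50):
    # Hypothetical rule: "Go" needs high confidence of beating the Lower
    # Reference Value AND at least even odds of beating the Target Value;
    # failing both gives "No-Go"; anything in between is "Consider".
    p_beat_lrv = 1 - norm.cdf(lrv, post_mean, post_sd)
    p_beat_tv = 1 - norm.cdf(tv, post_mean, post_sd)
    if p_beat_lrv >= conf_lrv and p_beat_tv >= conf_tv:
        decision = "Go"
    elif p_beat_lrv < conf_lrv and p_beat_tv < conf_tv:
        decision = "No-Go"
    else:
        decision = "Consider"
    return decision, p_beat_lrv, p_beat_tv

# Illustrative posterior: effect estimate 3.0 with posterior SD 1.0
decision, p_lrv, p_tv = go_nogo(post_mean=3.0, post_sd=1.0, lrv=1.0, tv=2.5)
print(decision, round(p_lrv, 3), round(p_tv, 3))
```

The three-zone output (Go / Consider / No-Go) is what makes the dual criterion useful for interdisciplinary discussion: it separates "the drug works at all" (LRV) from "the drug is commercially and clinically worthwhile" (TV).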
Two speakers will cover the industry perspective: one Ph.D. statistician from a mid-sized organization and one M.D. from a major pharmaceutical company. A discussant from FDA will bring the agency's perspective.
The NIH reports that there are approximately 7,000 rare diseases affecting more than 25 million Americans. Approximately 80% of these diseases are caused by a single-gene defect, and about half of them affect children. Because many of these diseases are serious and/or life-threatening conditions and most have no approved therapies, there remains a significant unmet medical need for effective treatments. Product development for rare diseases poses numerous challenges, including: (1) scarce, heterogeneous, and dispersed patient populations, which make it difficult to identify suitable patients for clinical trials; (2) poor understanding of the diseases, which makes it difficult to set clinical endpoints, outcome measures, or surrogate (biomarker) endpoints in clinical trials; and (3) sensitive subpopulations, such as neonates or pediatric patients, which raise additional ethical considerations when conducting clinical trials. Beyond these challenges, there are further considerations for new classes of treatments that aim to cure rare diseases, for example gene therapies. By targeting a single-gene defect, a gene therapy often does not require repeated dosing; a single dose can have an effect that lasts a lifetime. Novel trial designs and analyses have been proposed to overcome these challenges. This session will invite speakers from FDA, industry, and academia to discuss trial design and analysis considerations that are important to the development and approval of products for rare diseases, including frequentist approaches and the potential use of Bayesian approaches with information borrowing, adaptive designs, and the assessment of safety and efficacy for small molecules, gene therapies, and other novel treatments. Examples will be provided.
The opportunities for statisticians to have leadership impact in the pharmaceutical industry and regulatory world are considerable at all levels. The Leadership-in-Practice Committee (LiPCOM), a newly formed committee within the Biopharmaceutical Section, is organizing this session to create a forum for gaining insights and triggering discussion on how statisticians in the pharmaceutical industry and regulatory world can uniquely influence decision-making and therefore, raise visibility around their leadership impact. This session will feature presentations from two statistical leaders who will share their experiences on how they evolved into influential leaders during their careers. They will offer thoughts on what types of leadership skills and competencies are required for statisticians, especially given the changing environment (e.g., real-world evidence, big data) in the pharmaceutical/regulatory space, as well as explore the potential for statisticians to broaden their leadership impact and advance their career outside statistics. A panel discussion will follow the presentations. The panelists will discuss the topics and questions from the presentations based on their own experience.
Bayesian trial designs are attractive and recognized as an important tool for improving design efficiency, since they enable the incorporation of earlier trial results or other knowledge through the specification of prior distributions, potentially reducing time, cost, and unethical exposure of subjects to inferior treatments. The incorporation of external data into a current trial is not without controversy, however. The Bayesian framework combines prior data and information with current trial results, through various approaches, to support informed decisions in a timely manner. This is thought to be a critical step in bringing innovation into the development and approval of medical products. There are numerous uses of Bayesian methods in medical product development in the literature: for example, using prior adult trial results to augment and plan the design of a pediatric study, Bayesian models in subgroup analysis, Bayesian methods to aid decision making in platform trials, and historical data to design a trial for a rare disease indication. Many researchers have expressed great interest in understanding regulatory perspectives and experiences on the utility of Bayesian approaches in design and analysis for the approval of medical products. In this session, we invite speakers from FDA Centers (CBER, CDER, and CDRH) to present their experiences and perspectives on the use of Bayesian approaches in designs and analyses. Examples and case studies will be provided.
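One simple mechanism for incorporating historical data, shown here purely as an illustration, is a power prior in a conjugate beta-binomial setting: the historical likelihood is raised to a discount factor a0 in [0, 1] before being combined with the current trial data. All counts below are hypothetical.

```python
from scipy.stats import beta

# Hypothetical response counts: historical trial 45/100, current trial 12/30
hist_x, hist_n = 45, 100
cur_x, cur_n = 12, 30

def posterior_with_power_prior(a0, a=1.0, b=1.0):
    # Beta(a, b) initial prior; the historical binomial likelihood is
    # down-weighted by a0, so a0=0 ignores history and a0=1 pools fully.
    post_a = a + a0 * hist_x + cur_x
    post_b = b + a0 * (hist_n - hist_x) + (cur_n - cur_x)
    return beta(post_a, post_b)

for a0 in (0.0, 0.5, 1.0):
    post = posterior_with_power_prior(a0)
    print(f"a0={a0}: posterior mean {post.mean():.3f}")
```

Because the historical response rate (0.45) exceeds the current one (0.40), the posterior mean is pulled upward as a0 grows, which is exactly the behavior, helpful or controversial, that makes the choice and justification of the discounting central to regulatory discussions.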
For slowly progressing diseases, such as nonalcoholic fatty liver disease (NAFLD), including nonalcoholic steatohepatitis (NASH), and sickle cell disease (SCD), it may take years or even decades to observe disease-related morbidity and mortality. For these chronic diseases, drug developers usually cannot afford to wait a prolonged period to examine and market drugs solely on the basis of hard, objective clinical endpoints. To reduce the burden of disease, regulatory agencies now offer an accelerated approval pathway that allows registration trials for some chronic diseases to be conducted using good biomarkers as surrogate endpoints. Nevertheless, what makes a surrogate endpoint good enough to be reasonably likely to predict outcome benefit still needs to be carefully examined and validated with strong statistical evidence. In addition, before such trials can be launched, the major clinical and statistical components, including the study design, the exact study population, and the selection of biomarker and surrogate endpoints, should be prospectively defined. In this session, we will bring together experts from FDA, academia, and the pharmaceutical industry to share their insights, experiences, and recommendations on how to overcome the challenges and issues in clinical development when using a biomarker endpoint for accelerated approval.
Speakers (2): Dr. Weiya Zhang, FDA, Weiya.Zhang@fda.hhs.gov; Dr. Gene Pennello, FDA, Gene.Pennello@fda.hhs.gov. Panelists (2): Dr. Peter Mesenbrink, Novartis, peter.mesenbrink@novartis.com; Dr. Shein-Chung Chow, Duke University, sheinchung.chow@duke.edu.
ICH released the final version of the ICH E9(R1) "Addendum on Estimands and Sensitivity Analysis in Clinical Trials" on Nov 20, 2019. During the two years prior to its finalization, the estimand became a popular topic, and many stakeholders actively assessed its impact on clinical trials from both methodological and operational perspectives. With the finalization of the addendum, it is time to evaluate how the estimand framework is used in actual practice, including in designing clinical trials, planning statistical analyses, answering clinical questions of interest, and interacting with health authorities. In this session, we will invite speakers from both a regulatory agency and industry to share their experience applying the estimand framework in their organizations and their perspectives on next steps.
The Food and Drug Administration recently launched the Complex Innovative Trial Design (CID) Pilot Program as a deliverable under the sixth iteration of the Prescription Drug User Fee Amendments. The goal of the program is to facilitate the advancement and use of complex adaptive, Bayesian, or other innovative clinical trial designs requiring simulations to estimate operating characteristics. One avenue for achieving this goal is through the sharing of regulatory case examples. The ability of the FDA to publicly share examples prior to approval of a therapeutic agent is a unique aspect of the Pilot Program. This session will include examples from the CID Pilot Program with the goal of advancing the use of appropriate innovative trial designs.
This session will feature three speakers, two industry representatives and an FDA representative: JonDavid Sparks, Eli Lilly and Company; Stephen Lake, Wave Life Sciences; Gregory Levin, Food and Drug Administration.
When hazards are non-proportional, conventional log-rank tests and the Cox model are suboptimal or even invalid, and discussions of clinical trial design and analysis have mainly focused on the restricted mean survival time. In this session, we present four novel methods for non-proportional hazards (NPH) settings that have become readily applicable, and illustrate them with real clinical and regulatory examples.
1) Motivated by the delayed onset of treatment effects, an NPH situation typically observed in immuno-oncology, we present the MaxCombo test as part of the Cross-Pharma Non-Proportional Hazards Working Group. It provides robust inference when the treatment effect varies over time.
2) Cancer immunotherapy often shows a mixture of short-term risk reduction and long-term survival benefit. We propose a change-sign weighted log-rank test for the setting with crossing hazards. The appealing features of this design are increased study efficiency and detection of both short-term risk reduction and long-term survival benefit.
3) When there are multiple time-to-event outcomes, competing risks may occur; e.g., death precludes the occurrence of progression in oncology trials. Traditional competing-risks methods assume independent censoring. We derive a new consistent estimator of the cumulative incidence functions in the presence of independent and/or dependent censoring.
4) When prioritized multiple time-to-event endpoints are of interest, conventional time-to-first-event analyses treat all outcomes as equally important. Instead, the win ratio considers more important outcomes first (e.g., death is considered first, then progression). The win ratio has gained popularity (e.g., the tafamidis approval in 2019) but has so far been applied only at the population level. We propose a covariate-adjusted win ratio. All of our speakers and discussant are confirmed: Satrajit Roychoudhury (Pfizer), Jianrong Wu (Univ. of Kentucky), Judith Lok (Boston Univ.), Gaohong Dong (iStats Inc.), and James Huang (FDA).
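As a minimal sketch of the unadjusted (population-level) win ratio for two prioritized time-to-event endpoints, the toy example below compares every treatment-control pair on death time first, then on progression time. Censoring, tie handling, and the covariate adjustment proposed in the talk are deliberately omitted; the data are hypothetical.

```python
def win_ratio(trt, ctl):
    """Unadjusted win ratio for (death_time, progression_time) pairs.

    The more important outcome (death) is compared first; only if it is
    tied does the comparison fall back to progression. This sketch
    assumes fully observed event times (no censoring).
    """
    wins = losses = 0
    for dt, pt in trt:
        for dc, pc in ctl:
            if dt != dc:             # decide on death first
                wins += dt > dc      # later death = win for treatment
                losses += dt < dc
            elif pt != pc:           # then on progression
                wins += pt > pc
                losses += pt < pc
    return wins / losses

# Hypothetical (death_time, progression_time) per patient, in years
trt = [(5.0, 2.0), (7.0, 4.0), (6.0, 3.0)]
ctl = [(4.0, 1.5), (6.0, 3.0), (5.5, 2.5)]
print("win ratio:", win_ratio(trt, ctl))  # 6 wins / 2 losses = 3.0
```

A win ratio above 1 favors treatment; pairs tied on both endpoints contribute to neither count, which is one reason real implementations must also report win and tie proportions.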
There is growing interest in using real-world data and evidence (RWD/RWE) in clinical trial design and analysis and in regulatory decision-making. For example, RWD/RWE can be used as an external control in single-arm trials or to augment the concurrent control group of a randomized controlled trial. The most challenging issue in using external data is the comparability of the external controls with the trial patients. In addition, as stated in the Framework for FDA's Real-World Evidence Program, multiple data sources may be used in deriving RWE in drug development and regulatory settings, which raises another challenge: how to combine different sources of RWD. RWE can also be enhanced by modeling the inclusion and exclusion criteria of a clinical trial. This session includes four presentations: two on novel statistical and machine learning approaches to leveraging RWD, one on combining multiple RWD sources, and one on modeling inclusion and exclusion criteria to enhance RWE.
Speakers:
Lilly Yue, PhD, Deputy Director Division of Biostatistics, Center for Devices and Radiological Health U.S. Food and Drug Administration Email: lilly.yue@fda.hhs.gov Presentation Title: Novel Statistical methods for Leveraging Real-World Data in the Regulatory Settings
Zhiwei Zhang, PhD NIH/NCI Email: zhiwei.zhang@nih.gov Presentation Title: Adjusting for population differences using machine learning methods
Gang Li, PhD Janssen Pharmaceuticals Email: GLi@its.jnj.com Presentation Title: Prospective Plan for Studies Combining Real World Data Sources
Thomas Jemielita, PhD Merck Research Laboratories, Merck & Co., Inc. Email: thomas.jemielita@merck.com Presentation Title: Enhancing Real World Evidence by Modeling the Clinical Inclusion/Exclusion Criteria
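To illustrate the comparability issue in the simplest possible terms, the sketch below greedily matches external-control candidates to trial patients on a single hypothetical covariate (age). A real analysis would match or weight on a propensity score built from many covariates; everything here, including the simulated distributions, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical baseline covariate: trial patients skew younger than the
# external-control pool, so a naive comparison would be confounded.
trial_age = rng.normal(55, 8, 60)       # trial patients
external_age = rng.normal(62, 10, 300)  # external-control candidates

def nearest_neighbor_match(targets, pool):
    """Greedy 1:1 nearest-neighbor matching without replacement."""
    available = list(range(len(pool)))
    matched = []
    for t in targets:
        idx = min(available, key=lambda i: abs(pool[i] - t))
        matched.append(idx)
        available.remove(idx)
    return np.array(matched)

matched_idx = nearest_neighbor_match(trial_age, external_age)
print("mean age: trial %.1f, external pool %.1f, matched controls %.1f" % (
    trial_age.mean(), external_age.mean(), external_age[matched_idx].mean()))
```

After matching, the selected controls resemble the trial population on the matched covariate; balance on unmeasured covariates, of course, is not guaranteed, which is the core limitation of any external-control design.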
The totality-of-evidence approach has been applied to various FDA decision processes, from therapeutic product approvals to food nutrition labeling. In this session, we provide an overview of the totality of evidence, which can encompass data collected from multiple investigational studies, including pre-clinical studies and clinical trials, as well as real-world data from various sources. To meet the statutory requirement of substantial evidence, careful evaluation of biochemistry, toxicology, clinical pharmacology, and clinical/PRO efficacy and safety data on multiple therapeutic options (e.g., doses) forms the regulatory foundation on which the decision for product approval can be made. Following the overview, we focus on recent methodological advances in the evaluation of effectiveness that incorporate data from multiple clinical trials, multiple doses, and multiple endpoints, as well as from both clinical trials and natural history studies providing real-world evidence. A prospective meta-analytical approach is presented to address the challenges of comparability and completeness when incorporating data from natural history studies. In this session, we will invite statisticians and clinicians from industry, academia, and FDA to share their experience with and vision of the application of totality-of-evidence approaches.
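As a hedged sketch of the kind of meta-analytic pooling mentioned above, the following implements the standard DerSimonian-Laird random-effects estimator for a handful of hypothetical trial-level effect estimates; a prospective meta-analysis combining trials with natural history data would involve considerably more structure than this.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects meta-analysis (DerSimonian-Laird method of moments)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1 / ses**2                        # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed)**2)  # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)         # between-trial variance estimate
    w_re = 1 / (ses**2 + tau2)            # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se_pooled = np.sqrt(1 / np.sum(w_re))
    return pooled, se_pooled, tau2

# Hypothetical treatment-effect estimates (and SEs) from three trials
pooled, se, tau2 = dersimonian_laird([0.30, 0.45, 0.20], [0.10, 0.12, 0.15])
print(f"pooled effect {pooled:.3f} (SE {se:.3f}), tau^2 {tau2:.4f}")
```

Pre-specifying such a pooling model prospectively, before the constituent studies read out, is what distinguishes a prospective meta-analytical approach from an after-the-fact synthesis.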