Drs. Janet Woodcock and Lisa LaVange of the FDA published a review article on master protocols in NEJM in July 2017, comprehensively discussing the uses, examples, and benefits of umbrella, basket, and platform trials. Recent FDA approvals, such as that of pembrolizumab to treat a variety of tumors sharing a common genetic signature, further demonstrate the regulatory agency's encouragement of innovative designs. With these efforts from the regulatory agency, and with the targets for new drugs becoming ever more precise, we expect to see increasing use of such innovative designs in future drug development. In this session, renowned speakers and panelists will share and discuss their experience with these designs, especially basket trials, as they emerge in drug development.
Real-world evidence (RWE) can be synthesized from real-world data (RWD), and it has the potential to reduce the cost and duration of clinical trials in evaluating the effectiveness and safety of medical products. With various RWD sources now widely available, and with public and regulatory encouragement, there has been great interest in using RWE in regulatory decision making.
The interest in synthesizing RWE is even stronger for medical device clinical studies. FDA allows sponsors to employ single-group studies for the pre-market evaluation of a medical device when "the device technology is well developed and the disease of interest is well understood." These single-group studies may compare the investigational device either to a non-concurrent control group or to a performance goal (i.e., a numerical target value pertaining to an effectiveness or safety endpoint) derived from non-concurrent information. The abundance of RWD provides a pool of subjects that may be large enough to apply sophisticated methodologies for gathering non-concurrent controls or determining performance goals. With these advanced methodologies, the common issues of comparability and temporal bias in selecting a non-concurrent control or determining a performance goal may be alleviated.
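As a minimal sketch of the performance-goal comparison described above, the following hypothetical single-group evaluation tests whether an observed success rate exceeds a numerical performance goal; the goal, sample size, and counts are illustrative assumptions, not from any actual submission.

```python
# Hypothetical single-group device study compared to a performance goal (PG).
# Suppose the PG for an effectiveness endpoint is an 85% success rate; the
# study succeeds if a one-sided exact binomial test rejects H0: p <= 0.85.
from math import comb

def exact_binomial_pvalue(successes, n, p0):
    """One-sided exact p-value for H0: p <= p0 versus H1: p > p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

p = exact_binomial_pvalue(successes=184, n=200, p0=0.85)  # observed rate 92%
study_success = p < 0.025
```

The comparability and temporal-bias concerns noted above arise because the performance goal itself is derived from non-concurrent data; the test assumes the goal remains a fair benchmark for the current population.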
However, challenges arise concerning how to use RWE for regulatory decision making. These challenges include determining the relevance, reliability, and comprehensiveness of RWE, as well as conducting statistical design and analysis and drawing robust inferences from RWE for medical device clinical studies.
In this session, speakers will present the perspectives of FDA, academic, and industry experts on the use of RWE in both pre- and post-market settings, and share examples and proposals for using existing RWD sources.
Medical devices provide a rich setting in which to increase efficiency of clinical studies by incorporating historical data. Medical devices have physical and local effects, which, when modifications to the device are minor, may be predictable from previous device generations. In addition, overseas studies may yield information on the new device, and historical studies provide information on responses to comparator treatments. Bayesian methods offer a means of combining such prior information with current information to make inference on treatment effects. While the mathematical models for using historical data are relatively well established, improvements are needed in methods for constructing informative priors and setting performance benchmarks using historical data. This session will focus on improved methods for leveraging historical data.
The first talk will examine the process of constructing an objective performance criterion (OPC), a benchmark derived from historical data to serve as a comparator for an experimental treatment in a one-arm trial of a novel device. Laura Hatfield will discuss the construction and application of such a measure in trans-catheter aortic valve replacement devices. The second talk will address recent advances in the development of the discount prior Bayesian approach, which employs dynamic borrowing of historical data based on the similarity between historical and current information. Donald Musgrove, a member of the Medical Device Innovation Consortium (MDIC) working group, will present these techniques. The final talk, from the perspective of an FDA reviewer, will discuss what to borrow, how much to borrow, and keys to effective communication with FDA on issues related to borrowing. Laura Lu will present this talk, with specific examples illustrating the challenges with Q/IDE submissions incorporating Bayesian designs. (All three speakers are confirmed.) The session will conclude with a floor discussion.
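To make the borrowing idea concrete, here is a hedged sketch of fixed-weight borrowing in a Beta-binomial model. The discount-prior approach presented in this session instead chooses the weight dynamically from the agreement between historical and current data; here the weight is fixed at 0.5 for illustration, and all counts are hypothetical.

```python
# Power-prior-style borrowing: historical successes/failures enter the Beta
# posterior downweighted by a0 in [0, 1]; a0 = 0 ignores the historical data,
# a0 = 1 pools it fully.
def power_prior_posterior(x_cur, n_cur, x_hist, n_hist, a0, a=1.0, b=1.0):
    """Beta(alpha, beta) posterior when historical data enter with weight a0."""
    alpha = a + x_cur + a0 * x_hist
    beta = b + (n_cur - x_cur) + a0 * (n_hist - x_hist)
    return alpha, beta

alpha, beta = power_prior_posterior(x_cur=30, n_cur=50,
                                    x_hist=120, n_hist=200, a0=0.5)
post_mean = alpha / (alpha + beta)  # pulled toward the historical rate 120/200
```

A discount prior would replace the fixed `a0` with a function of a prior-data conflict measure, which is precisely the methodological refinement the second talk addresses.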
The utilization of modeling and simulation approaches across a broad range of applications, including dose selection, continues to grow. A number of model-based approaches have been developed that show potential for improvement over simple pairwise comparisons. These include MCP-Mod [1], model averaging [2], and empirically selected dose-response models [3]. A fundamental challenge for clinical dose-response estimation, and subsequent dose selection, is the relatively low signal-to-noise ratio compared to many common applications of dose-response methodology with pre-clinical data. Another challenge is that the dosing ranges studied are often too narrow to display the full shape of the changes in response as dose increases. These conditions can produce poor performance from many statistical methods that are in common use precisely because of their excellent asymptotic properties under more favorable conditions. The use of modeling and simulation to influence decisions about the experimental drug doses manufactured, permitting improved dosing designs, will be discussed. Analysis methods, including Bayesian approaches, will be emphasized that integrate multiple data sources (e.g., pharmacokinetic data, longitudinal repeated measurements, historical data) to increase the data available beyond a single primary outcome variable in a single dose-ranging study. The designs and analyses considered will cover most therapeutic areas other than oncology, which has differing objectives and resulting methods for dose-response estimation.
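A hedged sketch of model-based dose-response estimation with an Emax model, one of the candidate shapes typically entertained in MCP-Mod, follows. The data are simulated with deliberately low signal-to-noise; in practice the responses would come from a dose-ranging study, and a full MCP-Mod analysis would first test multiple candidate shapes.

```python
# Fit a three-parameter Emax dose-response model to noisy simulated data.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    # e0 = placebo response, emax = maximal effect, ed50 = dose at half-maximal effect
    return e0 + emax * dose / (ed50 + dose)

rng = np.random.default_rng(0)
doses = np.repeat([0.0, 25.0, 50.0, 100.0, 200.0], 20)   # 5 arms, 20 subjects each
y = emax_model(doses, 1.0, 3.0, 40.0) + rng.normal(0.0, 1.5, size=doses.size)

params, _ = curve_fit(emax_model, doses, y, p0=[0.0, 2.0, 50.0], maxfev=10000)
e0_hat, emax_hat, ed50_hat = params
```

Note how a narrow dose range (e.g., stopping at 50) would leave the plateau of the curve, and hence `ed50`, poorly identified, which is the narrow-range problem described above.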
Rare disease and pediatric clinical trials are often challenging to conduct. One main reason is the relatively small number of patients available for recruitment. Additionally, there may be uncertainty in the selection of the primary endpoint due to many factors, including significant clinical heterogeneity among patients or an incomplete understanding of the disease. Third, the use of a concurrent placebo control may be impractical.
Based on prior research and successful clinical trials, various approaches have been proposed to tackle the challenges of conducting and analyzing these types of trials. Nevertheless, there is no consensus on acceptable methods. Regardless of the criteria used, the ultimate goal is to allow flexibility while maintaining the standards for evaluating drug safety and efficacy.
In this session, three speakers are invited to share their research on innovative designs and advanced statistical methodologies, including meta-analyses and Bayesian approaches to utilizing historical data for confirmatory trial design and analysis. They will share their experiences and provide practical examples to facilitate discussion.
With much fanfare in the popular media, artificial intelligence (AI) and modern statistical or machine learning are poised to bring fundamental shifts to entire industries. The pharmaceutical industry, with its heavily data-driven drug discovery and development process, also sees many exciting opportunities in this new era. In this session, key opinion leaders from industry, academia, and a regulatory agency will discuss the latest efforts to bring advanced analytic tools such as deep neural networks, iFusion learning, and random forests into drug discovery, development, and review. The organizer of this session aims to stimulate informed discussions and dispel myths about AI and statistical learning through concrete examples of real-world applications in drug development. The tentative speakers and topics are listed below:
1. Minge Xie, Distinguished Professor, Rutgers University. The speaker will introduce the concept of individualized fusion learning (iFusion), which enhances inference for an individual via adaptive combination of confidence distributions obtained from its clique (i.e., peers of similar individuals).
2. Haoda Fu, Ph.D., Eli Lilly. The speaker leads the enterprise-wide ML/AI group at Eli Lilly and Company. He will provide an overview of applications and opportunities for using novel technology to improve the pharmaceutical industry, with examples covering drug discovery, manufacturing, commercialization, and connected care devices for mobile care.
3. Richard Baumgartner, Ph.D., Merck & Co. The speaker will review examples of statistical learning, including variable screening in high-throughput biomarker data, identification of adverse events in safety statistics, and determination of risk factors in late-stage clinical development.
Traditionally, drug development has followed an orderly sequence of Phase 1, Phase 2, and Phase 3 studies. In recent years, seamless designs have gained popularity due to the desire for rapid development and approval of new drug products to ensure timely patient access to safe and effective drugs. Under the seamless design framework, for example, a Phase 2 study and a Phase 3 study can be combined into one continuous study. As a result, the study sample size and time can be reduced. However, the logistics and statistical inferences of a seamless study may be challenging.
The goal of this session is to discuss the issues and challenges in seamless designs. The session will include two speakers: Dr. Shiowjen Lee from FDA/CBER will present a regulatory view on seamless designs and case examples, and Dr. Zhaoyang Teng from Takeda will present a seamless Phase 2/3 study design and discuss how to make go/no-go decisions and plan the sample size. Dr. Keaven Anderson from Merck will be the discussant.
Establishing an acceptable safety profile of a candidate drug is an important requirement for progressing the drug into a marketed medicine. Key to safety assessment is the development and validation of robust assays. Although several regulatory guidelines have been issued, the associated statistical methodologies continue to evolve. For example, the recently issued FDA Draft Guidance for Industry, Assay Development and Validation for Immunogenicity Testing of Therapeutic Protein Products, advocates a more stringent approach to cut point setting. In this session, statistical challenges and opportunities concerning pre-clinical/clinical safety evaluation will be highlighted. In particular, the thought process that led to the FDA recommendation of a new statistical method for the immunogenicity assay cut point will be discussed, and an alternative method will be presented. The session consists of four presentations: two expert statisticians representing an industry perspective and two representing a regulatory perspective will together provide fresh insight on the application of, and issues surrounding, statistical approaches to pre-clinical/clinical safety evaluation.
Speaker 1: Susan Kirshner, FDA
Speaker 2: Jincao Wu, CDRH, FDA
Speaker 3: Jochen Brumm, Genentech
Speaker 4: Jianchun Zhang, MedImmune
The recently released ICH E9(R1) Addendum (August 2017) highlighted the importance of intercurrent events, e.g., “choosing and defining efficacy and safety variables as well as standards for data collection and methods for statistical analysis without first addressing the occurrence of intercurrent events will lead to ambiguity about the treatment effect to be estimated and potentially misalignment with trial objectives.” The ICH E9(R1) working group recommended five alternative strategies for estimating the treatment effect in clinical trials with intercurrent events: the treatment policy (i.e., the true ITT estimand), composite, hypothetical, principal stratification, and while-on-treatment strategies. However, the Addendum provided only general guidance and considerations for these strategies; much work remains to resolve the challenges of handling different intercurrent events in different clinical trials for drug development. In this session, speakers from the FDA and industry will discuss novel statistical methods and practical approaches for these alternative strategies, with application to case studies, followed by discussion from the chair of the ICH E9(R1) working group. Statisticians from the regulatory agency and industry will bring varied and valuable perspectives to this rapidly evolving statistical topic.
Discussant: Tom Permutt, FDA
The ICH E14 Q&A was revised in December 2015 and now enables pharmaceutical companies to use concentration-QTc (C-QTc) modeling as the primary analysis for assessing the QTc prolongation risk of new drugs. Because the C-QTc modeling approach uses all data from varying dose levels and time points, a reliable assessment of QTc prolongation can be based on smaller-than-usual thorough QT (TQT) trials, or on single- and/or multiple-ascending-dose (SAD/MAD) studies during early-phase clinical development, to meet the regulatory requirements of the ICH E14 guideline.
Following the E14 Q&A (R3) document, C-QTc modeling is increasingly being used as the primary analysis for assessing the QTc-prolonging potential of a new drug. Statistics on the report and protocol reviews received by the FDA QT interdisciplinary review team since the revision will be presented. We will share common regulatory review issues and recommend good practices for using C-QTc modeling in the regulatory submission setting. Recent research developments in C-QTc modeling may be discussed if time permits.
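A hedged sketch of the core C-QTc computation follows: regress the baseline-corrected QTc change on plasma concentration, then check whether the upper bound of the two-sided 90% CI for the model-predicted effect at the geometric mean Cmax is below the 10 ms threshold of regulatory concern. A real primary analysis would use a linear mixed-effects model with placebo correction and time effects; this ordinary-least-squares version, with hypothetical data, is illustrative only.

```python
import numpy as np

def cqtc_upper_bound(conc, dqtc, cmax_gm):
    """Upper 90% CI bound for predicted delta-QTc at concentration cmax_gm."""
    X = np.column_stack([np.ones_like(conc), conc])   # intercept + slope model
    beta, *_ = np.linalg.lstsq(X, dqtc, rcond=None)
    resid = dqtc - X @ beta
    n, p = X.shape
    s2 = resid @ resid / (n - p)                      # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)                 # covariance of the estimates
    x0 = np.array([1.0, cmax_gm])
    pred = x0 @ beta                                  # predicted effect at Cmax
    se = np.sqrt(x0 @ cov @ x0)
    return pred + 1.6449 * se                         # normal approx. to the t quantile

# Hypothetical data: a 0.01 ms per unit-concentration slope gives ~3 ms at Cmax = 200
conc = np.arange(1.0, 101.0)
ub = cqtc_upper_bound(conc, 1.0 + 0.01 * conc, cmax_gm=200.0)
no_regulatory_concern = ub < 10.0
```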
Geriatric patients are underrepresented in clinical trials despite contributing a disproportionately large disease burden and a major share of prescription drug and therapy consumption. Geriatric patients carry approximately 60% of the disease burden but represent only 32% of patients in Phase II/III clinical trials (Herrera et al., 2010). Their trial participation has been low across all therapeutic areas (e.g., oncology, CNS, arthritis, and cardiovascular disease). These failings may limit generalizability and provide insufficient data about the positive or negative effects of treatment in this population. Elderly patients often present with comorbidities, multiple drugs for multiple conditions, various disabilities and quality-of-life issues, differences in organ function, and frailty. Moreover, the high prevalence of polypharmacy with aging may lead to an increased risk of inappropriate drug use, under-use of effective treatments, medication errors, poor adherence, drug-drug and drug-disease interactions, and, most importantly, adverse drug reactions. Another consideration is that chronologic age alone is inadequate; use of a standard geriatric assessment would be helpful and would provide groups with different age-related conditions (fit, vulnerable, frail) for analysis. The FDA has issued guidance for industry to encourage the fair representation of elderly patients in clinical trials, with suggestions to 1) modify inclusion/exclusion criteria, 2) develop outcomes that are relevant, and 3) adjust measurements and account for comorbidity. These considerations may lead to a decision to treat the elderly population as a subgroup of a single trial, or to run a separate trial, in order to provide the information needed for submission. The presenters will review the current status of reporting geriatric patient results in clinical trials, how geriatric results are currently summarized, how many geriatric patients are needed in a clinical submission for drug approval, and best practices for planning for a geriatric population in clinical trials.
Discussant: Rick Chappell, Univ of Wisconsin at Madison
This session will be tied to the 2015 FDA draft (or final, if available) guidance on interdisciplinary, aggregate assessment for IND safety reporting and will serve as a forum to discuss key topics in the guidance. Many companies have been trying to establish internal processes and teams to implement this guidance, and practical challenges and concerns have arisen during implementation. For example, should the unblinded aggregated SAE analysis involving ongoing pivotal studies be performed internally or through an external group? What is an appropriate threshold to trigger a Safety Assessment Committee to further investigate the causal association of an SAE with the investigational drug? How do we determine anticipated SAE rates, especially for studies without a control arm? In addition, controversy over the merits of blinded vs. unblinded monitoring still exists. In this session, experience and thoughts on handling these practical issues will be shared and discussed.
Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on data accrued in an ongoing trial. These designs have potential to provide more information about which subpopulations benefit from a treatment; however, there are tradeoffs (e.g., in sample size and trial duration) in using these designs compared to standard designs. The session includes clinical applications and new methodological research involving these adaptive designs.
Our speakers are from diverse backgrounds: industry, the FDA, and academia. Dr. Scott Berry, President and Senior Statistical Scientist at Berry Consultants, LLC, will present the design and analysis for a recently completed Bayesian adaptive enrichment trial for treating severe stroke that demonstrated successful results. Dr. Min (Annie) Lin from the FDA will present case studies and discuss the regulatory challenges associated with adaptive enrichment designs. Dr. Michael Rosenblum from Johns Hopkins University will present new statistical methods for improving power in adaptive enrichment designs, and for computing optimal tradeoffs compared to standard designs.
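A toy sketch of a preplanned enrichment rule of the kind discussed above: at an interim look, enrollment continues only in subpopulations whose observed z-statistic clears a futility threshold. The threshold and z-values are hypothetical; real designs calibrate such rules by simulation to control Type I error and quantify the sample-size and duration tradeoffs mentioned earlier.

```python
def enrich(interim_z, futility_z=0.0):
    """Return the subpopulations kept open for enrollment after the interim look."""
    return [name for name, z in interim_z.items() if z > futility_z]

# Hypothetical interim results for two preplanned subpopulations
kept = enrich({"biomarker_positive": 1.8, "biomarker_negative": -0.4})
```

Restricting enrollment this way concentrates the remaining sample size in the subpopulations that appear to benefit, which is exactly the tradeoff against standard all-comers designs that the methodological talks will quantify.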
With the fast-emerging field of safety assessment during clinical development, safety has become the “new” efficacy. However, safety events of interest are often rare, unpredictable, and without clear underlying causes, requiring further clinical evaluation. This is where a new era of innovative methodology meets the needs of clinical safety assessment and adapts to existing practice. In this session, we will focus on recent innovative methodologies for clinical safety assessment. The potential advantages of the Bayesian approach relative to classical frequentist methods include the flexibility to incorporate current knowledge of the safety profile, originating from multiple sources, into the decision-making process. The first presentation will cover Bayesian thinking in safety assessment, from the Bayesian methodology subteam of the ASA safety working group. Individualized medical decision making is often complex due to heterogeneity in patient treatment response: a pharmacotherapy may exhibit distinct efficacy and safety profiles in different patient populations, and an “optimal” treatment that maximizes clinical benefit for a patient may also raise safety concerns due to a high risk of adverse events. Thus, to guide individualized clinical decision making and deliver optimal tailored treatments, maximizing clinical benefit should be considered in the context of controlling for potential risk. The second presentation will discuss how to identify a personalized optimal treatment strategy that maximizes clinical benefit under a constraint on average risk, using machine learning and artificial intelligence algorithms. The third presentation, from the regulatory agency, will discuss recent innovative methodologies for safety assessment from the regulatory perspective.
Discussant: Qi Jiang, Amgen
Developments in precision diagnostic devices can provide patients with more accurate results and better care at lower cost. Different technologies and platforms, such as immunohistochemistry (IHC), fluorescence in situ hybridization (FISH), and high-throughput sequencing, have been used to develop companion or complementary diagnostic devices for different therapeutic products. The application of these advanced or emerging technologies to diagnostic devices has brought many challenges to study design and data analysis. In this session, we will discuss the statistical issues and considerations associated with emerging diagnostic technologies, including liquid biopsy, IHC assays, next-generation sequencing (NGS) onco-panels, and biodosimeters for rapid estimation of radiation exposure.
Multi-regional clinical trials (MRCTs) have been increasingly utilized to support the approval of drugs in multiple regions and to expedite the development of drug products in the global market. However, when differences in response to a drug arise across regions, interpreting the trial results can become challenging. In this session, two speakers will address heterogeneity in MRCTs and suggest ways to handle it: Dr. Janet Wittes from Statistics Collaborative will discuss heterogeneity observed in data collection and reporting, and Dr. Mark Rothmann from the FDA will illustrate the use of Bayesian shrinkage estimation in analyzing regional treatment effects. We will also be joined by panelists from both industry and the FDA, who will share their experiences and insights on how to deal with regional heterogeneity in MRCTs.
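A minimal sketch of Bayesian shrinkage estimation for regional treatment effects in an MRCT follows, assuming a normal hierarchical model with known within-region variances and a fixed between-region variance tau2; in practice tau2 would itself be estimated or given a prior, and all numbers below are hypothetical.

```python
def shrink_regional_effects(est, var, tau2):
    """Posterior means of regional effects under a normal-normal hierarchy."""
    # precision-weighted overall mean across regions
    w = [1.0 / (v + tau2) for v in var]
    mu = sum(wi * e for wi, e in zip(w, est)) / sum(w)
    # each regional estimate is pulled toward mu according to its precision
    return [(e / v + mu / tau2) / (1.0 / v + 1.0 / tau2)
            for e, v in zip(est, var)]

# Two hypothetical regions with raw effects 0.2 and 0.5 and equal variances
shrunk = shrink_regional_effects(est=[0.2, 0.5], var=[0.04, 0.04], tau2=0.01)
```

The shrunken estimates lie between each region's raw estimate and the overall mean, which stabilizes apparent regional differences that may be driven largely by sampling noise.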
Ever since the 21st Century Cures Act was signed into law on December 13, 2016, there has been a surge of interest in innovative trial designs and data analyses. The new PDUFA VI also initiated a pilot program “for highly innovative trial designs.” These proposals mostly aim at life-threatening conditions and/or rare diseases, the latter defined as conditions that affect fewer than 200,000 people in the US. This session will focus on rare disease clinical programs, which inherit the usual challenges of small-sample trials but also pose some unique hurdles. In particular, besides the study power issue, rare conditions are often difficult to diagnose (which translates into difficulty identifying the study population), lack a readily accepted efficacy endpoint, and lack historical information. A more creative mind is required to design a practical clinical trial that can render robust evidence. Naturally, statisticians are consulted for efficient trial designs that systematically incorporate data from sources outside clinical trial settings, and for novel analysis methods to better understand the data collected and inform future trial designs. This session will discuss regulatory experience with rare disease programs and present alternative models and trial designs that can potentially be utilized for rare disease clinical programs. Both regulatory and industry perspectives will be represented.
Discussant: Forrest C. Williamson, Eli Lilly and Company
It has been more than five years since the National Research Council's Panel on Handling Missing Data in Clinical Trials released its report on the Prevention and Treatment of Missing Data in Clinical Trials. The Report has had an appreciable impact on the design, conduct, and analysis of clinical trials. Recently, ICH E9 was amended to clarify issues related to estimands and sensitivity analyses, two key issues discussed in the Report. In this proposed panel discussion, we would like to bring together FDA, academic, and industry statisticians to assess where we stand on the prevention and treatment of missing data in clinical trials, focusing on three areas: the quantity of missing data, the precision of estimand definitions, and the role of sensitivity analysis. Our objective is to assess the overall state of our science with regard to: the impact on study design and conduct (the informed consent process, and inclusion of subjects in a study even after they have discontinued study medication); how data collected from individuals who stop study-specific treatment but complete study assessments for the entire trial have been used and interpreted; whether there has been a reduction in missing data as a result of using the strategies recommended in the Report; and which statistical models and software for designing studies and analyzing data in the presence of missing data and sensitivity issues have been found useful.
In December 2016, the 21st Century Cures Act was signed into law, marking a strong commitment to modernize clinical trials. The Prescription Drug User Fee Act VI, passed in August 2017, is complementary legislation that includes many of the themes outlined in the Cures Act. One area of great interest to statisticians is the advancement and use of complex adaptive, Bayesian, and other novel clinical trial designs. Under PDUFA VI, the FDA will launch a Complex Innovative Designs (CID) pilot program; CIDs are designs that require simulation to understand their operating characteristics, statistical properties, and operational features. This session aims to illustrate examples of innovative designs. One example will briefly describe a Bayesian non-inferiority study in a confirmatory setting in which the active control arm is augmented with historical control efficacy data from the literature. Another example will describe a basket program investigating one drug in multiple indications, again using historical control information to reduce the size of the placebo arm. The philosophy of learning from the current trial, borrowing information from historical data, or combining direct and indirect evidence from multiple trials, and updating our beliefs, may provide scientific justification for more cost-effective and efficient clinical studies without sacrificing the goal of evidence-based medicine. The session will also provide an overview and the vision of the CID pilot program and Bayesian innovative designs from the regulatory perspective.
Data from “real world” practice and utilization, outside of clinical trials, are regarded as a pragmatic source of evidence with high potential to support clinical development and life cycle management of medical products. US and EU regulatory agencies, public-private partnerships, and health technology assessment organizations have all launched major initiatives to address the concerns and considerations in the use of real-world evidence (RWE) to inform regulatory decision making. In PDUFA VI, the FDA committed to explore enhancing the use of RWE in regulatory decision making, to hold public workshops, and to publish draft guidance documents between 2018 and 2021. The EMA has also published multiple guidance documents on the use of RWE (e.g., the 2017 EMA Patient Registry Initiative, which supports research on the natural history of disease and the characterization of product effectiveness and safety). Most notably, there has been increasing public-private partnership in RWE research: GetReal, for example, was a three-year project (October 2013 to October 2016) funded by the Innovative Medicines Initiative, a two-billion-euro public-private consortium in Europe.
Robust RWE will not only leverage increasing volumes of data but also weave together different sources, such as clinical data, registries, and electronic health records, to bridge the gaps between efficacy and effectiveness. Although many challenges and limitations remain in the use of RWE, there have also been many successful case studies. In this session, invited speakers will discuss the roles of RWE in the regulatory environment, provide an overview of utilizing RWE to augment clinical development and life cycle management, share successful RWE studies along with their challenges, and elaborate on developing RWE strategies in clinical development and life cycle management. Potential speakers and discussants include experts from regulatory agencies, academia, and industry.
Bioequivalence (BE) studies evaluate whether generic drugs are as bioavailable as their pioneer counterparts in the rate and extent to which the active ingredients are absorbed and become available at the site of drug action. Typical BE studies are conducted at the blood level with a two-sequence, two-period, crossover design. Veterinary drugs share many of the same challenges as human drugs. For example, highly variable drugs, also seen in veterinary drug applications, have different acceptance criteria and statistical analysis methods from regular drugs. Additionally, blood-level BE studies might not be feasible for some drugs; instead, clinical endpoint studies might be more appropriate. Evaluation of BE in animals may be complicated because veterinary drugs are delivered to groups of animals rather than to individual animals, and because all animals are typically enrolled in a study at the same time. Speakers from the FDA, industry, and academia will tackle these challenges and propose innovative solutions for designing and analyzing BE studies for animal drugs.
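The standard average-bioequivalence check can be sketched as follows, with the usual 80%-125% limits on the log scale: BE is concluded if the 90% CI for the test/reference geometric mean ratio lies entirely within the limits. A real 2x2 crossover analysis would fit sequence, period, and subject effects and use a t quantile; the inputs below are hypothetical.

```python
import math

def average_be(log_gmr, se):
    """90% CI for the geometric mean ratio; BE if the CI lies within 0.80-1.25."""
    z = 1.6449  # normal approximation to the 95th percentile
    lo = math.exp(log_gmr - z * se)
    hi = math.exp(log_gmr + z * se)
    return lo, hi, (lo >= 0.80 and hi <= 1.25)

# Hypothetical result: estimated GMR of 0.95 with a small standard error passes
lo, hi, be = average_be(log_gmr=math.log(0.95), se=0.05)
```

For highly variable drugs, the point alluded to above, the standard error grows with within-subject variability, and fixed 80%-125% limits become hard to meet, which motivates the widened (scaled) acceptance criteria used for those products.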
In many clinical applications of survival analysis with covariates, the commonly used semiparametric modeling assumptions, e.g., proportional hazards (PH), proportional odds (PO), accelerated failure time (AFT), and similar models, may turn out to be stringent and unrealistic, particularly when there is scientific background to believe that survival curves under different covariate combinations (e.g., comparing two treatments) will cross during the study period. This session brings together speakers from industry, regulatory, and academic institutions to explore practical scenarios in which such assumptions are violated. The speakers will present results and examples for estimating conditional hazards, and for testing, when the underlying model assumptions are likely to be violated. Empirical results based on several simulated data scenarios under various modeling assumptions will be presented, and the regulatory perspective on the consequences of using wrong assumptions will also be discussed. The session includes confirmed speakers from academia, industry, and regulatory agencies, making it an ideal forum to bring together experience and knowledge relevant to modeling and simulation methods.
In 2016, generic drugs accounted for 86% of the market share in the US, and generic drugs have saved the U.S. health care system $1.67 trillion over the last decade. For certain drugs, such as locally acting drugs, three-arm clinical endpoint BE studies are often used to establish BE between a generic drug and an innovator drug. Clinical endpoint BE studies, however, are much more costly than the pharmacokinetic studies used for systemic drugs. Optimizing study design and improving the effectiveness and efficiency of clinical endpoint BE studies is a pressing task under the Generic Drug User Fee Amendments II (GDUFA II), instituted in 2017. In this session, speakers from the FDA and industry will discuss novel statistical methods and results from different perspectives, including how to design effective endpoints for an Abbreviated New Drug Application (ANDA) based on the NDA study design; how to optimize the sample size allocation ratio among the generic drug (TEST), innovator drug (REF), and placebo (PLB) arms under different scenarios; and how to incorporate adaptive design into clinical endpoint studies. All of these strategies will help cut costs for sponsors and ultimately promote the availability of affordable generic drugs to the American public. The session will include three presentations followed by a discussion; statisticians from the agency and industry will discuss issues and novel statistical approaches in bioequivalence.
Discussant: Stella Grosser, FDA
Across all areas of drug development, opportunities for statisticians to exert influence abound and are increasing. The thousands of statisticians working across the spectrum of drug development underscore the relevance of the statistical skillset in the pharmaceutical industry. Furthermore, statisticians are an integral part of regulatory teams responsible for making decisions on the safety and efficacy of pharmaceutical products. Beyond technical competence, specific competencies are required for statisticians to influence decision making effectively in a multidisciplinary drug-development-team environment. The key leadership competencies for statisticians (e.g., listening, networking, and communication) will be reviewed and discussed. This session will feature presentations from Gary Sullivan and John Scott on “Growing Your Influence in a Multidisciplinary Drug Development Team.” Gary will speak from an industry perspective, while John will approach the topic from a regulatory perspective. Both presentations will highlight the importance, requirements, and impact of statistical leadership. A panel discussion featuring prominent statisticians from regulatory, industry, and academic settings will follow the presentations. The panelists will discuss topics and questions raised by the presentations, drawing on their own experiences.