The clinical trial community has long understood the utility of historical information in the context of clinical trial design. From just "being aware of the literature", through to formal meta-analytic approaches, statisticians appreciate the role of evidence synthesis when tasked with designing a new study.
But why should that historical information not also play a role in the analysis of the new study, and contribute to inference about the parameter(s) of interest?
Straightforward application of Bayesian methods is one obvious way to bring the historical data (as informative prior) into that inference. But there is understandable scepticism about such a naïve approach (patients in new trials are unlikely to be exactly “the same” as patients in the past) combined with regulatory concerns over the impact on decision-making risks. Yet the feeling remains that to ignore that information is to render sub-optimal the learning process about parameters of interest.
What clinical trial practitioners would like is to be able to use an informative prior distribution that “looks like” the historical data, if the historical data are relevant – suitably modulated to incorporate the knowledge that the future is unlikely to be an exact extrapolation of the past – and which appropriately down-weights the impact of historical data should it not be commensurate with the new data.
This workshop will introduce several methods that have been discussed in the literature, such as power, commensurate and robust mixture priors, with a focus on re-use of control arm data - although other applications will also be highlighted. It is based on a successful course developed by GSK and the UK MRC Biostatistics Unit.
Realistic examples for a pharmaceutical R&D audience will be used, and as each method is introduced, participants will learn how to apply it to real data and to investigate its impact on the trial’s operating characteristics and the posterior inference they would make.
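As a concrete flavor of the robust mixture idea, the following base R sketch (not the course materials; all numbers are hypothetical) applies a two-component beta mixture prior, an informative component summarizing historical controls plus a vague component, to a binary control-arm endpoint and shows how the informative weight is retained or discarded depending on prior-data conflict.

```r
# Minimal sketch: robust mixture prior for a binary control-arm endpoint.
# The informative Beta(a1, b1) component is assumed to summarize historical
# control data; the vague Beta(1, 1) component carries the remaining weight.

post_mixture <- function(y, n, w = 0.8, a1 = 35, b1 = 65) {
  # Marginal (prior-predictive) likelihood of the new data under each component
  marg <- function(a, b) beta(a + y, b + n - y) / beta(a, b)
  m1 <- marg(a1, b1)          # informative component
  m0 <- marg(1, 1)            # vague ("robust") component
  w_post <- w * m1 / (w * m1 + (1 - w) * m0)    # updated mixture weight
  list(
    weights = c(informative = w_post, vague = 1 - w_post),
    components = rbind(informative = c(a1 + y, b1 + n - y),
                       vague       = c(1 + y, 1 + n - y)),
    post_mean = w_post * (a1 + y) / (a1 + b1 + n) +
      (1 - w_post) * (1 + y) / (2 + n)
  )
}

# New-trial control data consistent with history: borrowing is retained
post_mixture(y = 18, n = 50)
# Conflicting data: the weight shifts to the vague component (down-weighting)
post_mixture(y = 35, n = 50)
```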
In the era of precision medicine, drug development has become increasingly complex when a companion diagnostic is developed in parallel. Drug/device developers and regulatory agencies have been facing challenges in syncing up the timelines of the two development processes, as well as the associated statistical issues. In July 2016, FDA released the draft guidance "Principles for Codevelopment of an In Vitro Companion Diagnostic Device with a Therapeutic Product", which laid out principles, recommendations, and methods for co-developing a drug and a companion diagnostic device. In addition, numerous statistical methods, especially in the field of trial design, have been proposed to maximize the probability of success of a clinical development program. This short course consists of three parts. In Part 1, we will provide an overview of the FDA guidance and general considerations. In Part 2, statistical methods for addressing key issues such as biomarker threshold determination, bridging studies, and prescreening will be discussed. In Part 3, we will review innovative trial designs proposed to include biomarker hypothesis testing that may require a companion diagnostic device. These include platform, basket, and umbrella designs for learning-phase trials, as well as various statistical testing strategies in biomarker-stratified designs and adaptive biomarker designs for confirmatory trials. Basic knowledge of clinical trial statistics is required.
Multiple comparisons are about making multiple decisions. The Partitioning Principle is a fundamental approach to multiple comparisons. This principle guides the formulation of null hypotheses such that controlling the familywise error rate controls the probability of making incorrect decisions. When there is a pre-specified decision path (e.g., testing for efficacy in the low dose only after efficacy in the high dose has been shown), the Partitioning Principle formulates the null hypotheses in a way that guarantees decision-making will follow the pre-specified path, and it also shows that no multiplicity adjustment is needed when testing follows a single path. In the absence of pre-specified decision paths, the Principle covers common step-down methods (e.g., Holm's) and step-up methods (e.g., Hochberg's) as special cases.
In settings with multiple paths (e.g., testing involves multiple doses and multiple endpoints), the Partitioning Principle stays on the pre-specified paths more straightforwardly than closed testing. In addition, the Principle proves the crucial fact that, if all the null hypotheses along a path are rejected, the significance level allocated to that path can be re-allocated to the other paths. This helps build the connection to the recently popularized graphical approaches to multiple comparisons. Using graphical approaches, one can easily construct and explore different test strategies and thus tailor the test procedure to the given study objectives. The resulting multiple test procedures are represented by directed, weighted graphs, where each node corresponds to an elementary hypothesis, together with a simple algorithm that updates the graph while sequentially testing the individual hypotheses.
Examples involving testing multiple doses and multiple endpoints in combination, as well as testing targeted therapies with subgroups, will be given to illustrate the above approaches and principles.
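To make the graphical machinery concrete, here is a minimal base R sketch of the sequentially rejective algorithm of Bretz et al. (2009); the two-hypothesis dose example and all weights, transitions, and p-values are hypothetical.

```r
# Graphical multiple testing: w = initial weights, G = transition matrix,
# p = raw p-values.  Reject any Hj with p_j <= w_j * alpha, pass its weight
# along the outgoing edges, update the graph, and repeat.
graph_test <- function(p, w, G, alpha = 0.025) {
  m <- length(p)
  rejected <- rep(FALSE, m)
  repeat {
    idx <- which(!rejected & p <= w * alpha)
    if (length(idx) == 0) break
    j <- idx[1]                       # reject one hypothesis at a time
    rejected[j] <- TRUE
    for (l in seq_len(m)) {           # pass its weight along outgoing edges
      if (l != j && !rejected[l]) w[l] <- w[l] + w[j] * G[j, l]
    }
    Gnew <- G
    for (l in seq_len(m)) for (k in seq_len(m)) {
      if (rejected[l] || rejected[k] || l == k) { Gnew[l, k] <- 0; next }
      denom <- 1 - G[l, j] * G[j, l]
      Gnew[l, k] <- if (denom > 0) (G[l, k] + G[l, j] * G[j, k]) / denom else 0
    }
    G <- Gnew
    w[j] <- 0
  }
  rejected
}

# Example: high dose (H1) and low dose (H2), each tested at alpha/2,
# with the full level passed to the other hypothesis upon rejection.
w <- c(0.5, 0.5)
G <- rbind(c(0, 1), c(1, 0))
graph_test(p = c(0.011, 0.020), w = w, G = G)   # both hypotheses rejected
```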
In statistical methodology research, simulations are among the main ways to compare the operating characteristics of a proposed method against existing methods. Depending on the response variables of interest (univariate or multivariate) and on whether the methods are iterative or non-iterative, simulation designs must be considered very carefully to produce generalizable and repeatable conclusions on any given platform, and this task is much more difficult and under-recognized than it is supposed to be. In this short course, we will introduce simple to more complex simulation designs and the importance of simulation size; we will describe potential pitfalls that may not be easily recognizable and suggest what metadata should be captured for a clear description of the simulation results. We plan to carry out examples in both SAS and R to show similarities and differences between the two platforms. In doing so, we will also utilize graphical analytics techniques, which are indispensable components of statistical learning and practice and must be made part of any simulation plan as well. We will introduce graphical approaches for simulation diagnostics and descriptive graphics to summarize the simulations.
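As a simple illustration of the points above, the following R fragment (an assumed example, not taken from the course) estimates the type I error of a two-sample t-test and reports the Monte Carlo standard error, one way to judge whether the simulation size is adequate.

```r
set.seed(20180925)            # record the seed as part of the simulation metadata
nsim  <- 10000                # simulation size
n     <- 30                   # per-arm sample size
alpha <- 0.05

reject <- replicate(nsim, {
  x <- rnorm(n, mean = 0, sd = 1)     # both arms simulated under the null
  y <- rnorm(n, mean = 0, sd = 1)
  t.test(x, y)$p.value < alpha
})

est   <- mean(reject)                         # empirical type I error
mc_se <- sqrt(est * (1 - est) / nsim)         # Monte Carlo standard error
c(type1 = est, mc_se = mc_se,
  lower = est - 1.96 * mc_se, upper = est + 1.96 * mc_se)
```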
Oncology immunotherapies have the potential to be effective in more tumor indications than a non-immunotherapy, as demonstrated by PD-1 (or PD-L1) immune checkpoint inhibitors such as pembrolizumab and nivolumab in recent years. Following the success of PD-1 (or PD-L1) inhibitors, a flood of next generation immunotherapies with different mechanisms of action (e.g., LAG3, CD40, ICOS, TIM-3, IDO1, GITR, STING, OX40, TIGIT) are being developed. While the expectation is high for these new immunotherapies, it is unrealistic to expect all of them to have the same success as the immune checkpoint inhibitors. In reality, they may be effective in a wide range of tumor indications, may be effective only in a few tumor indications, or may not be effective at all. Even if they are indeed as effective as the immune checkpoint inhibitors, it will be challenging to demonstrate their clinical benefit given the improved standard-of-care. It is imperative to apply innovative and cost-effective statistical strategies to the development of these new immunotherapies.
This tutorial will present the lessons and experiences learned from the development of the checkpoint inhibitors. It will also present statistical strategies on efficacy screening, design adaptation between Phase 2 and Phase 3, and adaptive biomarker enrichment and platform/basket designs for Phase 3 trials. Familiarity with the multivariate normal distribution and an open mind are the only prerequisites for this tutorial.
Properly addressing missing data in clinical trial analyses remains complex and challenging, despite the great amount of research that has been devoted to this topic. Conventionally, under the missing at random (MAR) assumption, we often use maximum likelihood or multiple-imputation-based methods for inference. However, the MAR assumption is unverifiable from the data. More critically, the estimand under MAR is hypothetical, as indicated in the recent ICH E9 addendum, and has been considered overly simplistic and unrealistic. Both regulatory agencies and industry sponsors have been seeking alternative approaches to handle missing data in clinical trials under the missing not at random (MNAR) assumption.
This half-day tutorial is intended to cover various methods that have been advocated for dealing with missing data and to illustrate how to carry out the analyses using SAS software. The tutorial begins with an overview of conventional missing data handling methods such as maximum likelihood methods, multiple imputation, generalized estimating equation approaches, and Bayesian methods. The rest of the course is devoted to more recently developed methods, such as sensitivity analyses to assess robustness, control-based imputation, and tipping point analysis. Real clinical trial examples will be presented for illustration, with implementation of the analyses using SAS/STAT software, including the MIXED, MI, MIANALYZE, GEE, and MCMC procedures.
Outline:
1. Review of missing data and conventional methods
* Restricted maximum likelihood (REML)
* Mixed-effects model for repeated measures (MMRM)
* Constrained longitudinal data analysis (cLDA)
* GEE and weighted GEE (wGEE)
* Multiple imputation (MI)
* Bayesian approach
2. Recently developed methods for missing data
* Review of common estimands in clinical trials
* Alternative and sensitivity analysis models
* Control-based imputation
* Tipping point analysis
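The course implements these analyses with SAS/STAT procedures; purely as a conceptual illustration, the following R sketch shows the logic of a tipping point analysis for a continuous endpoint using a deliberately simple single imputation (a real analysis would use multiple imputation, e.g., PROC MI and MIANALYZE). All data are simulated.

```r
set.seed(1)
n <- 100
arm <- rep(c("ctrl", "trt"), each = n)
y <- c(rnorm(n, 0, 2), rnorm(n, 0.8, 2))          # true treatment effect 0.8
miss <- arm == "trt" & runif(2 * n) < 0.25        # 25% dropout on treatment
y_obs <- ifelse(miss, NA, y)

p_at_delta <- function(delta) {
  y_imp <- y_obs
  # MAR-style imputation by the observed treatment-arm mean, then shifted
  # downward by delta (delta = 0 corresponds to the MAR analysis)
  y_imp[miss] <- mean(y_obs[arm == "trt"], na.rm = TRUE) - delta
  t.test(y_imp[arm == "trt"], y_imp[arm == "ctrl"])$p.value
}

deltas <- seq(0, 3, by = 0.25)
data.frame(delta = deltas, p = sapply(deltas, p_at_delta))
# The "tipping point" is the smallest delta at which p exceeds 0.05,
# i.e., the amount of penalization needed to overturn the conclusion.
```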
Believe it or not, we are in the era of the fourth industrial revolution. It is critical for statisticians to understand or even master the centerpiece of this revolution, artificial intelligence (AI). Google, Amazon, Facebook, IBM, and many other companies have broken new ground in AI, including self-driving cars, cashier-free convenience stores, smart hospital care, personal assistants, and precision medicine. The course will cover the concept of AI; breakthroughs of AI in drug development, including drug discovery, patient recruitment, patient compliance, and prediction of patient and clinical trial outcomes; and a tutorial on deep learning methods, a set of novel tools for generating artificial intelligence that were developed from a class of almost forgotten algorithms, neural networks, which have revived to become mainstream in big data analytics thanks to advances in big data and computer processing power. The utility of deep learning in analyzing electronic medical records will be illustrated in terms of predicting patient and clinical trial outcomes for clinical trial optimization. Lastly, an overview will be given of Python, software commonly used for deep learning, and of the cloud computing environment it requires.
Syllabus: 1. AI: the fourth industrial revolution, AI's central role, its impact on required working skills, and why statisticians need to understand or even master AI. Impact of AI on clinical development. What is AI and how is AI built?
2. Deep Learning: Why deep learning? (i) successful examples of deep learning; (ii) the power of deep learning in analyzing big data. Why does deep learning work? What is deep learning? (i) neural networks and deep learning; (ii) big data and deep learning. How to train, validate, and interpret a neural network?
3. EMR: Why EMR? (i) real-world evidence; (ii) competitive intelligence; (iii) clinical trial results prediction. What are EMRs? How to analyze EMRs? Case study.
4. Python and cloud computing
This short course focuses on adaptive enrichment designs, that is, designs with preplanned rules for modifying enrollment criteria based on data accrued in an ongoing trial. For example, enrollment of a subpopulation where there is sufficient evidence of treatment efficacy, futility, or harm could be stopped, while enrollment for the complementary subpopulation is continued. Such designs may be useful when it’s suspected that a subpopulation may benefit more than the overall population. The subpopulation could be defined by a risk score or biomarker measured at baseline. Adaptive enrichment designs have potential to provide stronger evidence than standard designs about treatment benefits for the subpopulation, its complement, and the combined population. However, there are tradeoffs in using such designs, which typically require greater sample size than designs that focus only on the combined population.
We present new statistical methods for adaptive enrichment designs (part 1 of the course), simulation-based case studies in Stroke, Cardiac Resynchronization Therapy, Alzheimer’s Disease, and HIV (part 2 of the course), and open-source trial optimization software (part 3 of the course). The tradeoffs involved in using adaptive enrichment designs, compared to standard designs, will be presented. Our software searches over hundreds of candidate adaptive designs with the aim of finding one that satisfies the user’s requirements for power and Type I error at the minimum expected sample size, which is then compared to simpler designs in terms of sample size, duration, power, Type I error, and bias in an automatically generated report.
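The sketch below (not the course software) simulates a toy two-stage adaptive enrichment design in which enrollment of subpopulation 2 may stop at an interim futility look; effect sizes, sample sizes, and thresholds are hypothetical, and no multiplicity adjustment is applied.

```r
set.seed(42)
# n1_stage, n2_stage: per-arm, per-stage sample sizes in subpopulations 1 and 2
sim_trial <- function(delta1, delta2, n1_stage = 50, n2_stage = 50,
                      z_futility = 0, z_final = 1.96) {
  # Stage-1 z-statistics for the treatment vs control comparison in each
  # subpopulation (standardized effects delta1, delta2; sd = 1)
  z1_s1 <- rnorm(1, delta1 * sqrt(n1_stage / 2), 1)
  z2_s1 <- rnorm(1, delta2 * sqrt(n2_stage / 2), 1)
  continue2 <- z2_s1 > z_futility          # enrichment decision at the interim
  # Stage 2: subpopulation 1 always continues; subpopulation 2 only if kept.
  # Stage-wise statistics are combined with equal weights (inverse-normal).
  z1 <- (z1_s1 + rnorm(1, delta1 * sqrt(n1_stage / 2), 1)) / sqrt(2)
  z2 <- if (continue2)
    (z2_s1 + rnorm(1, delta2 * sqrt(n2_stage / 2), 1)) / sqrt(2) else NA
  c(keep2 = continue2, reject1 = z1 > z_final,
    reject2 = !is.na(z2) && z2 > z_final)
}

# Scenario: only subpopulation 1 benefits (standardized effects 0.4 vs 0.0)
res <- t(replicate(10000, sim_trial(delta1 = 0.4, delta2 = 0.0)))
colMeans(res)   # proportion of trials retaining subpop 2, and rejection rates
# A real design, like the course software, would adjust the final critical
# values for the adaptation and the multiple populations tested.
```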
Target Audience: statisticians at master's level or beyond.
Currently, the pharmaceutical industry faces several challenges, including a doubling of the cost of research and development alongside decreasing productivity (Woodcock, 2005). The industry is under pressure to provide clinical trials that answer more questions and are more efficient. The scientific community is in great need of innovative trial designs that can incorporate multiple treatments (Tx) and multiple diseases (Dx) and provide results for multiple complex hypotheses. These trials are, in general, more efficient and less costly than performing the corresponding subset of trials individually. The master protocol trial design (Woodcock and LaVange 2017) is the centerpiece of innovation that incorporates the above goals into the clinical trial. A master protocol is defined as one overarching protocol that answers multiple questions or hypotheses. Master protocols are designed for one Tx across multiple Dx or a single Dx across multiple Tx. These clinical trials are adaptive, where group sequential design, sample size adjustment, or dropping an arm may be part of the design. The designs include: the umbrella design, which studies multiple targeted Tx in a single Dx; the basket design, which studies a single targeted Tx in multiple Dx or disease subtypes; and the platform design, which studies multiple Tx in a single Dx, where Tx are added to or leave the platform on the basis of decision algorithms. The master protocol may involve the direct comparison of competing Tx or the parallel evaluation of different Tx relative to their respective controls. After an interim analysis in a basket design, a decision may be made to combine baskets into larger Dx groups, for example when using Simon's two-stage design for oncology trials. During this session the presenters will discuss the current usage and application of master protocols, what statistical methodology needs to be considered when designing and analyzing these trials, and considerations when developing a clinical program that includes a master protocol.
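As one small, hypothetical illustration of the designs mentioned above, the following R snippet computes the exact operating characteristics of a Simon two-stage rule applied independently within each basket of a basket trial; the design parameters and response rates are made up for illustration.

```r
# Stop a basket for futility after stage 1 if responses <= r1;
# declare activity at the end of stage 2 if total responses > r.
simon_oc <- function(p, n1 = 15, r1 = 2, n = 25, r = 6) {
  x1 <- (r1 + 1):n1
  reject <- sum(dbinom(x1, n1, p) * (1 - pbinom(r - x1, n - n1, p)))
  c(prob_reject = reject,                 # prob. of declaring the drug active
    PET = pbinom(r1, n1, p))              # prob. of early termination
}

simon_oc(p = 0.10)   # null response rate: prob_reject = per-basket type I error
simon_oc(p = 0.30)   # alternative response rate: prob_reject = per-basket power
# With several independent baskets, the chance of at least one false positive
# grows, e.g., 1 - (1 - alpha_basket)^K across K inactive baskets.
```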
Safety, albeit critical to a drug approval, is often difficult to assess during pre-marketing clinical programs. Regulatory agencies require pre- or post-marketing clinical trials to further investigate specific severe adverse events with pharmacological rationale. These trials usually engage over a thousand subjects for multiple years. The results from these trials, however, are still inconclusive most of the time due to the expectedly low event rates and other confounding factors caused by the sheer magnitude and length of these trials. Recently, efforts have been put into searching for innovative safety trial designs and novel statistical approaches that can incorporate both historical and real-world data that may not be collected from pragmatic clinical trials. The foreseeable obstacles include the validity of data sources and the justification of proper approaches to be used. However, should such safety evaluation be streamlined to consolidate all safety information in hand, it could greatly improve the regulatory agency's decision-making process to either reasonably rule out a safety signal or more definitively identify a potential risk of a drug. This session will discuss innovative safety trial designs that take external data into consideration. Both regulatory and industry perspectives will be presented to illustrate lessons learned from past experience and to advance the proper usage of these new approaches in safety evaluation.
Binary endpoints are frequently used in equivalence or non-inferiority studies in many areas of the pharmaceutical industry, such as generic drugs, biosimilar products, and vaccines. Even though the methodologies for equivalence and non-inferiority testing have been well established, in certain situations the commonly used statistical methods may not be the most appropriate. Therefore, researchers from the FDA, industry, and academia have continued to develop novel statistical methods for equivalence/non-inferiority evaluation of binary endpoints.
In this session, speakers from both the FDA and industry will present their recent research in equivalence/non-inferiority testing for binary endpoints. Dr. Luan will explore an alternative statistical method for bioequivalence studies with a binary clinical endpoint in Abbreviated New Drug Applications (ANDAs), because the current method is not equally sensitive to detecting the difference between the test product and the reference product across the response range, particularly when the success rate for the reference product is expected to be low. Dr. Patterson will present recent developments in equivalence/non-inferiority testing of binary endpoints for vaccines. He will describe statistical analysis methods used to analyze clinical immunogenicity data, summarize the commonly applied methods used to assess efficacy, illustrate these methods using anonymized clinical data, and share his most recent research in this area. As the discussant, Dr. Chow will summarize the advantages, disadvantages, and challenges of these newly developed testing methods. This session will provide the audience with updated methodology and best practices in equivalence/non-inferiority testing for binary endpoints and inspire innovative ideas on this topic.
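For orientation, a generic Wald-type non-inferiority test for the difference of two proportions is sketched below (a textbook construction, not the speakers' methods; a restricted-MLE, Farrington-Manning-type standard error is a common alternative). All counts are hypothetical.

```r
# Reject inferiority if the lower confidence bound for p_T - p_R exceeds -M.
ni_test <- function(x_t, n_t, x_r, n_r, margin = 0.10, alpha = 0.025) {
  p_t <- x_t / n_t; p_r <- x_r / n_r
  diff <- p_t - p_r
  se <- sqrt(p_t * (1 - p_t) / n_t + p_r * (1 - p_r) / n_r)
  lower <- diff - qnorm(1 - alpha) * se
  z <- (diff + margin) / se                     # test of H0: p_T - p_R <= -M
  c(diff = diff, lower_bound = lower,
    p_value = 1 - pnorm(z),
    non_inferior = lower > -margin)
}

ni_test(x_t = 172, n_t = 200, x_r = 178, n_r = 200)
```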
Discussant: Shein-Chung Chow, FDA
The presence of a non-proportional-hazards (non-PH) treatment effect has been well documented in the context of immuno-oncology (IO) studies, where a delayed separation of the Kaplan-Meier curves has often been observed. However, the occurrence of non-PH is not limited to IO. It can manifest itself in many other clinical settings and can take a variety of other forms, such as a diminishing effect (KM curves join together after sufficient follow-up) or crossing KM curves. In such cases, summarizing the treatment effect by the hazard ratio alone may not be informative. Hypothesis testing with the log-rank test, and clinical trials designed based on the proportional hazards assumption, are likely to be underpowered to detect treatment differences. A cooperative effort with the pharmaceutical industry was initiated by the FDA in late 2016 to take a holistic approach to understanding the impact of non-PH on the design, analysis, and interpretation of clinical trials. Both simulations and actual case studies were used to evaluate the pros and cons of different statistical approaches. The Non-Proportional Hazards Working Group has now made recommendations on potential updates to regulatory expectations for studies with a non-constant treatment effect over time. This session will present recommendations for the design and analysis of studies in this context, and the recommendations will be discussed by regulatory, industry, and academic representatives.
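A small simulation sketch of the phenomenon, with assumed parameters, is given below: a delayed treatment effect (no benefit for the first three months, hazard ratio 0.6 thereafter) visibly reduces the power of the standard log-rank test relative to an immediate effect of the same magnitude.

```r
library(survival)
set.seed(7)

sim_arm <- function(n, lambda, hr = 1, t_delay = 0, cens = 24) {
  t0 <- rexp(n, lambda)                               # pre-delay hazard
  t  <- ifelse(t0 <= t_delay, t0,
               t_delay + rexp(n, lambda * hr))        # post-delay hazard
  data.frame(time = pmin(t, cens), status = as.integer(t <= cens))
}

logrank_reject <- function(t_delay, n = 150, hr = 0.6) {
  d <- rbind(cbind(sim_arm(n, log(2) / 12), arm = 0),               # control
             cbind(sim_arm(n, log(2) / 12, hr = hr, t_delay = t_delay),
                   arm = 1))                                        # treated
  lr <- survdiff(Surv(time, status) ~ arm, data = d)
  1 - pchisq(lr$chisq, df = 1) < 0.05
}

c(power_immediate = mean(replicate(500, logrank_reject(t_delay = 0))),
  power_delayed   = mean(replicate(500, logrank_reject(t_delay = 3))))
```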
Kun He from FDA will be the discussant for this session.
Rare diseases pose several challenges for medical product development. Often not much is understood about the natural history of a disease, patients are hard to come by so trials are apt to be small, and it is often hard to convince patients to be randomized to a placebo arm. We therefore highlight several aspects of rare disease product development by focusing on rare neurological diseases, because they are especially hard: patients often do not present symptoms at birth, are quite variable within a “disease”, and are hard to diagnose. FDA wants clinically meaningful endpoints in evaluating clinical trials for any disease, but identifying ones that have sufficient power in small trials can be difficult. We have invited three speakers to talk in this session. David Schoenfeld is a prominent statistician at Massachusetts General and has published extensively on statistical issues in ALS (Lou Gehrig’s disease); he will talk about when a historical control is a good thing to use and when it is not. His work has been cited in many FDA submissions. Tristan Massie has been an FDA statistical reviewer involved in the evaluation of many rare neurological disease product submissions in CDER, and he will comment on what is the same and what is different about rare disease submissions at the FDA. Susan Ward heads an industry-funded organization that operates in a precompetitive space to better understand Duchenne muscular dystrophy (DMD). They make use of data outside of clinical trials (“real world evidence”) to support drug development for rare diseases, and their group is modeling patterns of progression in DMD. This work ought to improve subsequent trials for DMD. These kinds of efforts have been encouraged by FDA. DMD and ALS differ a lot but can highlight why understanding the disease is so critical to getting the right clinical trials.
In recent years, there has been a proliferation of regulatory and industry-wide initiatives on structured benefit-risk (BR) assessment. Examples of structured BR frameworks include PrOACT-URL (Problem formulation, Objectives, Alternatives, Consequences, Trade-Offs, Uncertainties, Risk Attitude and Linked Decisions), the Benefit Risk Action Team (BRAT) framework, and the FDA Implementation Plan on a Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. Also, in June 2016 the ICH Expert Working Group finalized the Common Technical Document Section 2.5.6 on Benefit-Risk Evaluations. As a result of these efforts, the uptake and utilization of structured BR assessments have been increasing. However, the aforementioned frameworks are mostly qualitative in nature, and the utility of quantitative BR approaches has not been systematically explored, creating uncertainty about settings in which quantitative BR assessment could be optimally applied. In this session, speakers will discuss the common understanding and principles of quantitative BR methods, delineate situations where such methods could be utilized via case studies, and give recommendations on the settings for such application.
Graphical approaches and gatekeeping procedures in clinical trials are flexible and interpretable multiplicity adjustment methods that maintain strong control of the family-wise error rate (FWER). These approaches have gained increasing popularity in confirmatory clinical trials (CCTs) since their introduction. Several refined graphical approaches and gatekeeping procedures have been proposed to handle situations in which the original assumptions may not hold in CCTs with complex designs, such as when there are:
• Multiple primary and secondary hypotheses
• Multiple arms
• Multiple interim looks
• Multiple tests at both the subgroup and overall population level
Three speakers will share their experience and ideas on recent developments and applications of multiplicity methods in CCTs.
• The first speaker will show how to build the correlation between the biomarker subpopulation and the overall population into the procedure in oncology trials; this approach boosts study power while maintaining strong control of the FWER (a simple illustration of the idea follows below).
• The second speaker will discuss an enhanced mixture method to construct more powerful gatekeeping procedures when serial logical restrictions satisfy a transitivity condition.
• The third speaker will provide case studies of applying multiple testing procedures in neuroscience late-phase clinical trials.
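A simplified illustration of the first idea (my own sketch, not the speaker's procedure): if the subgroup is nested in the overall population, the correlation between the two test statistics is roughly the square root of the subgroup fraction, and accounting for it yields a less conservative common critical value than Bonferroni.

```r
library(mvtnorm)

crit_corr <- function(f = 0.5, alpha = 0.025) {
  rho <- sqrt(f)                          # assumed corr(Z_sub, Z_all)
  sig <- matrix(c(1, rho, rho, 1), 2)
  fwer <- function(cv)
    as.numeric(1 - pmvnorm(upper = c(cv, cv), sigma = sig)) - alpha
  uniroot(fwer, c(1.5, 4))$root           # common critical value with FWER = alpha
}

c(bonferroni = qnorm(1 - 0.025 / 2),      # ignores the correlation
  correlated = crit_corr(f = 0.5))        # smaller critical value, hence more power
```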
In vitro studies are an essential part of pharmaceutical development, from the discovery of new molecular entities to the approval and marketing of a new drug. In vitro dissolution in particular is critical to control product quality and to demonstrate bioequivalence. This session consists of four presentations. The first presentation will discuss the regulatory perspective on the importance of in vitro bioequivalence, while the second and third presentations, given by expert statisticians from industry, will discuss traditional and Bayesian methods to assess dissolution profile similarity. Lastly, industry experts will discuss the impact of particle size on the dissolution of suspension products and modeling approaches to predict dissolution for a suspension product based on the particle size distribution.
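For reference, the conventional f2 similarity factor, the "traditional" profile-similarity metric alluded to above, is easy to compute; the profiles below are hypothetical, and f2 of at least 50 is commonly interpreted as similarity.

```r
# f2 = 50 * log10( 100 / sqrt(1 + mean squared difference of % dissolved) )
f2_factor <- function(ref, test) {
  stopifnot(length(ref) == length(test))
  50 * log10(100 / sqrt(1 + mean((ref - test)^2)))
}

ref  <- c(18, 36, 59, 78, 90, 95)   # % dissolved at 5, 10, 15, 20, 30, 45 min
test <- c(15, 32, 55, 74, 88, 94)
f2_factor(ref, test)
```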
The challenge of a clinical trial is to estimate a clinically and regulatory relevant treatment effect in the presence of intercurrent events, such as treatment or study discontinuation, use of rescue medication, or death. The draft ICH E9(R1) Addendum on “Estimands and Sensitivity Analysis in Clinical Trials” describes a structured framework that includes the specification of an estimand (i.e., the “treatment effect to be estimated”), the method of estimation (estimator) in the presence of informative and treatment-related events that occur after randomization, and sensitivity analyses to explore the robustness of inferences from the main estimator to deviations from its underlying assumptions. There are at least five strategies for addressing intercurrent events listed in the draft addendum that lead to different types of estimands. However, the selection of appropriate estimators for each type of estimand is an open problem that needs further discussion and investigation. This session will share thoughts from both industry and regulatory perspectives on the selection of estimators for various types of estimands. Case studies will be considered. Current practice in clinical trials will also be discussed, together with the next steps that could be taken to broaden it and make it more efficient.
Discussant: David Petullo (CDER, FDA)
Most medical products are designed and developed based on a one-size-fits-all approach. However, it is recognized that not all patients with the same disease will benefit from a specific therapy. Aiming to target the right treatments to the right patients at the right dose, the Precision Medicine Initiative calls for the development of treatment and prevention strategies that take into consideration individuals’ differences in genetic makeup, environment, and lifestyle. FDA regulates therapeutic treatments including biologics, devices, and drugs. Among them, products with precision medicine implications include autologous cancer vaccines, autologous cell therapies (or regenerative medicine), and companion diagnostic products. Because of the unique features embedded in these products, there have been challenges and issues in product development. For autologous therapies, as the main material for product manufacture comes from each patient’s tissue/cells, cell composition varies from patient to patient, which results in large variability in the total cell count of the final product and influences patients’ study outcomes. Additionally, the time to completion of product manufacture and the manufacturing success rate can affect the study design and interpretation of study results. For example, products can take several months to be manufactured, and patients may become ineligible or die during that time. On the other hand, companion diagnostic products rely on validated testing tools or assays to select the “right” patients for treatments. Issues such as the biological basis for selected biomarkers and the biomarker validation process during ongoing Phase 3 trials could potentially affect product development and approval.
Dr. Shiowjen Lee from FDA will be the discussant and conclude the session.
The integrity of clinical trial data is critical to the public health mission of the FDA. Fraudulent and poorly collected data jeopardize the ability of FDA scientists to perform the analyses that ensure safe and effective drugs are approved for the market. Data anomalies indicative of extreme sloppiness and/or potential fabrication of data have been found both in New Drug Applications (NDAs) and in Abbreviated New Drug Applications (ANDAs). For NDAs, suspect data can often be attributed to problematic clinical sites, and therefore the detection of such sites is of primary interest. In the case of ANDAs, data anomalies have been detected for both pharmacokinetic (PK) and clinical endpoint studies. Various efforts have been initiated to address this problem. Some of these efforts aim to improve the efficiency and effectiveness of existing data anomaly detection tools, whereas others aim to develop novel methodology. Data anomaly detection methodology comprises a variety of tools, including but not limited to the application of supervised and unsupervised statistical learning methods to clinical trial data where the response is defined as a clinical inspection outcome, the development of computational tools such as R Shiny applications, and several other data anomaly detection approaches. The objective of this session is to present and discuss different such efforts initiated from inside and outside the FDA.
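One simple unsupervised screen of the kind referred to above (my own illustration, not one of the FDA tools) is a terminal-digit preference check applied site by site; sites whose recorded values show strongly non-uniform terminal digits are flagged for follow-up. Data below are simulated.

```r
set.seed(3)
site <- rep(paste0("S", 1:5), each = 200)
sbp  <- round(rnorm(1000, 130, 12))                 # recorded systolic BP
sbp[site == "S5"] <- round(sbp[site == "S5"], -1)   # site 5 rounds to tens

digit_pref <- function(x) {
  obs <- table(factor(x %% 10, levels = 0:9))       # terminal-digit counts
  chisq.test(obs, p = rep(0.1, 10))$p.value         # uniform digits under H0
}

sort(tapply(sbp, site, digit_pref))   # a very small p-value flags site S5
```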
The growing use of real world data for medical research and the passing of the 21st Century Cures Act and PDUFA VI have accelerated the debate about the appropriate use of real world evidence (RWE) for regulatory decision making. Recent publications from regulatory leadership, draft guidance for the use of RWE for devices, and many medical and regulatory conference discussions and white papers, such as those from the Duke-Margolis conference and the NAM (National Academy of Medicine), have put forth the challenges and presented potential scenarios where RWE may be considered as part of the evidence package in support of regulatory decisions. However, less discussion has taken place on specific statistical issues or methodological solutions. Previous research from OMOP (Observational Medical Outcomes Partnership) raised challenging questions regarding the ability of observational research to accurately detect causal effects and avoid false findings. To improve the operating characteristics of such research, and to build confidence in decision making based on it, improvements are needed in 1) the quality and linking of data sources; 2) research designs; and 3) statistical methodology. In this session, speakers from regulatory, industry, and academia will first discuss the statistical issues behind these challenges and ways to improve observational research. Second, these speakers will serve on a moderated panel and will discuss the statistical issues and potential pathways forward across several presented scenarios for regulatory decisions, such as expanding label claims for an approved medication based on real world evidence.
Moderated Panel Discussion: Topics include challenges in using RWE for label expansion, improving quality of RWE to regulatory grade, pragmatic and other innovative designs, and analytics to improve the operating characteristics of RWE.
Composite endpoints have been widely used in clinical trials and drug development. When designed and analyzed appropriately, composite endpoints can have many advantages, such as detecting a weak treatment-effect signal that might not be picked up otherwise, or achieving study goals with smaller trials and improved efficiency. On the other hand, misconstruction of, or missteps in deriving, composite endpoints may weaken the ability to reach reliable conclusions and make it difficult to analyze and interpret the endpoint.
Composite endpoints are also an important topic in the recently released draft FDA guidance “Multiple Endpoints in Clinical Trials: Guidance for Industry”. The guidance discusses designating individual elements of a composite endpoint as secondary endpoints, with appropriate multiplicity control, to reach proper conclusions. However, one situation remains unsettled: the case of a non-significant primary composite endpoint with a hard-to-ignore favorable effect from a component endpoint. The guidance also discusses how to appropriately conduct and interpret the decomposition analysis of individual elements and influencing factors.
It is worth noting that the wide adoption of composite endpoints across different therapeutic areas, data settings (e.g., randomized trials, real-world data), and statistical analysis frameworks (e.g., longitudinal data analysis, time-to-event analysis) requires careful consideration of capability, feasibility, and specialty. Customized strategies and methodologies can be curated around the implementation of composite endpoints so that they are fit for purpose.
We propose to invite experts to discuss the construction and analyses of composite endpoints across different scenarios. It is also expected to generate insightful discussions on the practical implications of the aforementioned draft guidance in drug development and to shed light on some of the unresolved issues indicated in the guidance.
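As a small illustration of the construction and decomposition issues discussed above, the following sketch (hypothetical data) builds a time-to-first-event composite from two components and tabulates which component drives the first events in each arm.

```r
set.seed(11)
n <- 200
arm <- rep(c("control", "treatment"), each = n)
t_death <- rexp(2 * n, rate = ifelse(arm == "treatment", 0.08, 0.09))
t_mi    <- rexp(2 * n, rate = ifelse(arm == "treatment", 0.10, 0.18))
t_cens  <- rep(5, 2 * n)                      # administrative censoring at 5 years

t_first <- pmin(t_death, t_mi, t_cens)        # composite: time to first event
event   <- t_first < t_cens
first_type <- ifelse(!event, "censored",
                     ifelse(t_death <= t_mi, "death", "MI"))

table(arm, first_type)         # decomposition: which component occurs first
tapply(event, arm, mean)       # crude composite event rates by arm
```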
Discussant: Bruce Binkowitz, Ph.D., Vice President, Shionogi, Inc.
The speakers in this session discuss statistical and related considerations for effectively implementing the 2009 FDA Guidance for Industry on Patient-Reported Outcome (PRO) Measures: Use in Medical Product Development to Support Labeling Claims. Recent reviews (Cappelleri 2017) have summarized the number of drugs approved with a PRO as a primary or secondary endpoint appearing in the package inserts.
FDA’s PRO guidance has been described as a "roadmap" for developing a PRO. PROs developed using the guidance, with FDA regulatory and statistical input, are better positioned to inform a primary endpoint in trials that may lead to drug approval and marketing authorization.
The PRO guidance presents specific steps required for developing a PRO de novo and the use of existing PROs to inform a primary endpoint.
At a high level, these steps include, but are not limited to, defining a target population, developing labelling statements as part of the target product profile, eliciting information about the disease and endpoints of interest from patients or caregivers (for example, for a pediatric indication), establishing psychometric properties of the instrument, potential validation steps, logistics related to inclusion in clinical trials, data collection, transmission, and management, and the analysis and interpretation of the data.
Discussant: Dennis Revicki, Evidera
The use of tobacco products is the leading cause of preventable disease and death worldwide. In the United States, it accounts for more than 480,000 deaths every year. The Center for Tobacco Products (CTP) at the FDA is authorized to implement the Family Smoking Prevention and Tobacco Control Act (2009). Under this law, CTP regulates the manufacturing, distribution, and marketing of tobacco products. Mathematical and statistical modeling is critical to estimate the impact of tobacco product use on the population as a whole and to chart new frontiers in tobacco regulatory science.
This session will present novel mathematical and computational models to study the impact of tobacco product use on the health of the US population, including users and non-users of tobacco products. Presenters are experts in regulatory science and modeling. They were selected from industry, academia, and government to bring a broad range of expertise and experience in tobacco regulatory science.
Arming the immune system against cancer has emerged as a powerful tool in oncology in recent years. Instead of poisoning a tumor or destroying it with radiation, immunotherapy unleashes the immune system to destroy the cancer. This unique mechanism of action poses new challenges in the study design and statistical analysis of immunotherapy trials. The major challenges facing such trials include: (1) delayed onset of clinical effects violates the proportional hazards assumption, and conventional testing procedures would lead to a loss of power; additionally, the duration of the delay may vary from subject to subject or from study to study; (2) the development of autologous cancer vaccines may be subject to manufacturing failure, and not all eligible subjects randomized to the treatment will have a “successful” product developed. More innovative statistical methodologies are needed to address the unique characteristics of immunotherapy trials.
This session will feature speakers from industry, academia, and a regulatory agency who will discuss their approaches to meeting these challenges. The first speaker will give an overview of immunotherapy development from a regulatory perspective; the second speaker will address how to account for a delayed effect in the design of an immunotherapy trial using an innovative statistical approach; the third speaker will discuss practical statistical issues encountered in running immunotherapy trials.
ICH E11, Clinical Investigation of Medicinal Products in the Pediatric Population, was finalized in early 2001. Although the basic principles in the guidance still apply, many concepts and approaches are due for a refresh and modification. The recent addendum supplements the current E11 guidance in several areas, including extrapolation, modeling and simulation, and trial methodology. Traditionally, extrapolation of efficacy has relied heavily on clinical evidence collected from pharmacokinetic (PK) and pharmacodynamic (PD) studies, and modeling and simulation have often referred to pharmacologic approaches. More than a decade after the guidance publication, scientists have started to revisit the basic concept of extrapolation. This is reflected in the recent addendum, where modeling and simulation are expanded to include mathematical and statistical models, with additional attention to model assumptions and validity. The EMA reflection paper published in October 2017 focused on extrapolation and provided more detailed guidance on methodologies. As expected, statisticians have been brought into the discussion for a better extrapolation schema and for more efficient and practical trial designs that maximize the utilization of existing data sources in a systematic manner. This session will discuss alternative models and trial designs for pediatric clinical programs, with case examples from both regulatory and industry perspectives based on their respective experiences.
There are three major domains in health care: diagnosis, treatment, and prevention of diseases. Vaccines play a critical role in the management of public health through the prevention of diseases. In fact, the CDC’s report ranks vaccination as one of the ten greatest public health achievements of the 20th century. Therefore vaccine development has been, is, and will be important. There are unique aspects of vaccine development that separate it from drug development: multiple endpoints and/or multivalent vaccines create multiplicity challenges; dosing schedules and/or co-administration with other current vaccines present challenges in the design and conduct of clinical trials; regulatory agencies require lot consistency studies; and safety assessment/monitoring is very different in vaccine development since vaccines are mostly administered to healthy subjects. All these unique aspects require different statistical strategies. In this session, speakers from industry and a regulatory agency will share some of these challenges and discuss innovative study designs and statistical methods encountered in vaccine development.
Drs. Janet Woodcock and Lisa LaVange from the FDA published a review article on master protocols in NEJM in July 2017, in which they comprehensively discussed the use, examples, and benefits of umbrella, basket, and platform trials. Also, recent FDA approvals, such as pembrolizumab to treat a variety of tumors sharing a common genetic signature, demonstrate the agency’s encouragement of innovative designs. With all these efforts from the regulatory agency, and with the targets for new drugs becoming more and more precise, we expect to see increasing use of such innovative designs in future drug development. In this session, renowned speakers and panelists will share and discuss their experience with these designs, especially basket trials, emerging in drug development.
Real-world evidence (RWE) can be synthesized from real-world data (RWD). Potentially, RWE can reduce the cost and duration of clinical trials in the process of evaluating the effectiveness and safety of medical products. Therefore, there has been a great level of interest in using RWE in regulatory decision making, given the wide availability of various RWD and public and regulatory encouragement.
The interest in synthesizing RWE is even stronger for medical device clinical studies. FDA allows sponsors to employ single-group studies for the pre-market evaluation of a medical device when "the device technology is well developed and the disease of interest is well understood." These single-group studies may compare the investigational device either to a non-concurrent control group or to a performance goal (i.e., a numerical target value pertaining to an effectiveness or safety endpoint) derived from non-concurrent information. The abundance of RWD provides a pool of subjects that may be large enough to apply sophisticated methodologies to gather non-concurrent controls or to determine performance goals. With these advanced methodologies, the common issues of comparability and temporal bias in selecting a non-concurrent control or determining a performance goal may be alleviated.
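In the simplest case, the comparison to a performance goal described above reduces to a one-sample test; the sketch below uses an exact binomial test with a hypothetical goal and hypothetical counts.

```r
pg <- 0.85                                  # performance goal for the success rate
binom.test(x = 185, n = 200, p = pg, alternative = "greater")
# The observed rate of 92.5% is compared with the 85% goal; here the one-sided
# p-value is small (well below 0.025), so the device would meet the goal.
```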
However, challenges arise concerning how to use RWE for regulatory decision making. These challenges include issues in determining the relevance, reliability, and comprehensiveness of RWE, as well as issues in conducting statistical design and analysis and drawing robust inference using RWE for medical device clinical studies.
In this session, speakers will present perspectives of FDA, academic, and industry experts on the use of RWE in both pre- and post-market settings, and share examples and proposals for using existing RWD sources.
Medical devices provide a rich setting in which to increase efficiency of clinical studies by incorporating historical data. Medical devices have physical and local effects, which, when modifications to the device are minor, may be predictable from previous device generations. In addition, overseas studies may yield information on the new device, and historical studies provide information on responses to comparator treatments. Bayesian methods offer a means of combining such prior information with current information to make inference on treatment effects. While the mathematical models for using historical data are relatively well established, improvements are needed in methods for constructing informative priors and setting performance benchmarks using historical data. This session will focus on improved methods for leveraging historical data.
The first talk will examine the process of constructing an objective performance criterion (OPC), a benchmark derived from historical data to serve as a comparator for an experimental treatment in a one-arm trial of a novel device. Laura Hatfield will discuss the construction and application of such a measure in trans-catheter aortic valve replacement devices. The second talk will address recent advances in the development of the discount prior Bayesian approach, which employs dynamic borrowing of historical data based on the similarity between historical and current information. Donald Musgrove, a member of the Medical Device Innovation Consortium (MDIC) working group, will present these techniques. The final talk, from the perspective of an FDA reviewer, will discuss what to borrow, how much to borrow, and keys to effective communication with FDA on issues related to borrowing. Laura Lu will present this talk, with specific examples illustrating the challenges with Q/IDE submissions incorporating Bayesian designs. (All three speakers are confirmed.) The session will conclude with a floor discussion.
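For intuition about dynamic borrowing, the sketch below uses a simplified fixed-weight power prior for a binary endpoint (this is not the MDIC discount-prior method itself, which chooses the weight adaptively from a measure of agreement between historical and current data); all counts are hypothetical.

```r
# Historical data enter through a likelihood raised to a weight a0 in [0, 1]:
# a0 = 0 ignores history, a0 = 1 pools it fully.  With a Beta(1, 1) initial
# prior this gives a Beta(1 + a0*y0, 1 + a0*(n0 - y0)) prior for the new trial.
power_prior_post <- function(y, n, y0, n0, a0) {
  a <- 1 + a0 * y0 + y
  b <- 1 + a0 * (n0 - y0) + (n - y)
  c(post_mean = a / (a + b),
    lower95 = qbeta(0.025, a, b), upper95 = qbeta(0.975, a, b))
}

# Historical: 120/400 events; current trial control arm: 20/60 events
rbind(no_borrowing = power_prior_post(20, 60, 120, 400, a0 = 0),
      half_weight  = power_prior_post(20, 60, 120, 400, a0 = 0.5),
      full_pooling = power_prior_post(20, 60, 120, 400, a0 = 1))
```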
The utilization of modeling and simulation approaches across a broad range of applications, including dose selection, continues to grow. A number of model-based approaches have been developed that show potential for improvement over simple pairwise comparisons. These include MCP-MOD [1], model averaging [2], and empirically selected dose response models [3]. A fundamental challenge for clinical dose response estimation and subsequent dose selection is the relatively low signal-to-noise ratio compared to many common applications of dose response methodology with pre-clinical data. Another challenge is that the dosing ranges studied are often too narrow to display the full shape of the change in response as dose increases. These conditions can produce poor performance from many statistical methods in common use, despite their excellent asymptotic properties under more favorable conditions. The use of modeling and simulation to influence decisions about the experimental drug doses manufactured, so as to permit improved dosing designs, will be discussed. Analysis methods, including Bayesian approaches, will be emphasized that integrate multiple data sources (e.g., pharmacokinetic data, longitudinal repeated measurements, historical data) to supplement the data available from a single primary outcome variable in a single dose ranging study. The designs and analyses considered will cover most therapeutic areas other than oncology, which has differing objectives and resulting methods for dose response estimation.
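As a small, assumed example of the model-based approaches mentioned above, the following R fragment fits a three-parameter Emax model, a typical candidate model in MCP-MOD and model-averaging frameworks, to simulated dose response data.

```r
set.seed(123)
dose <- rep(c(0, 10, 25, 50, 100, 200), each = 20)
e0_true <- 1; emax_true <- 4; ed50_true <- 40
y <- e0_true + emax_true * dose / (ed50_true + dose) + rnorm(length(dose), sd = 2)

# Nonlinear least-squares fit of the Emax model E(y) = e0 + emax*d/(ed50 + d)
fit <- nls(y ~ e0 + emax * dose / (ed50 + dose),
           start = list(e0 = 0, emax = 3, ed50 = 30))
summary(fit)$coefficients
predict(fit, newdata = data.frame(dose = c(25, 50, 100)))   # fitted dose response
```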
Rare disease and pediatric clinical trials are often challenging to conduct. One main reason is the relatively small number of patients available for recruitment. Additionally, there may be uncertainty in the selection of the primary endpoint due to many factors, including significant clinical heterogeneity among patients or because these diseases are still not fully understood. Third, the use of a concurrent placebo control may be impractical.
Based on prior research and successful clinical trials, various approaches have been proposed to tackle the challenges associated with the conduct and analysis of these types of trials. Nevertheless, there is no consensus on acceptable methods. Regardless of the criteria used, the ultimate goal is to allow flexibility and ensure sufficient ability to maintain the standards for evaluating drug safety and efficacy.
In this session, three speakers are invited to share their research on innovative designs and advanced statistical methodologies, including meta-analyses and Bayesian approaches to utilizing historical data for confirmatory trial design and analysis. They will share their experiences and provide practical examples to facilitate discussion.
With much fanfare in the popular media, artificial intelligence (AI) and modern statistical or machine learning are poised to bring fundamental shifts to entire industries. The pharmaceutical industry, with its heavily data-driven drug discovery and development process, also sees many exciting opportunities in this new era. In this session, key opinion leaders from industry, academia, and a regulatory agency will discuss the latest efforts to bring advanced analytic tools such as deep neural networks, iFusion learning, and random forests into drug discovery and development/review. The organizer of this session is looking to stimulate informed discussion and dispel the myths about AI and statistical learning through concrete examples of real-world applications in drug development. The tentative speakers and topics are listed below:
1. Minge Xie, Distinguished Professor, Rutgers University. The speaker will introduce the concept of individualized fusion learning (iFusion), which enhances inference for an individual via adaptive combination of confidence distributions obtained from its clique (i.e., peers of similar individuals).
2. Haoda Fu, Ph.D., Eli Lilly. The speaker leads the enterprise-wide ML/AI group at Eli Lilly and Company. He will provide an overview of applications and opportunities for using novel technology to improve the pharmaceutical industry. Examples will cover drug discovery, manufacturing, commercialization, and connected care devices for mobile care.
3. Richard Baumgartner, Ph.D., Merck & Co. The speaker will review examples of statistical learning, including variable screening in high-throughput biomarker data, identification of adverse events in safety statistics, and determination of risk factors in late-stage clinical development.
Traditionally, drug development has followed an orderly sequence of Phase 1, Phase 2, and Phase 3 studies. In recent years, seamless designs have gained popularity due to the desire for rapid development and approval of new drug products to ensure timely patient access to safe and effective drugs. Under the seamless design framework, for example, a Phase 2 study and a Phase 3 study can be combined into one continuous study. As a result, the study sample size and time can be reduced. However, the logistics and statistical inferences of a seamless study may be challenging.
The goal of this session is to discuss the issues and challenges in seamless designs. The session will include two speakers: Dr. Shiowjen Lee from FDA/CBER will present a regulatory view on seamless designs along with case examples; Dr. Zhaoyang Teng from Takeda will present a seamless Phase 2/3 study design and discuss how to make go/no-go decisions and plan the sample size. Dr. Keaven Anderson from Merck will be the discussant.
Establishing an acceptable safety profile of a candidate drug is an important requirement for progressing the drug into a marketed medicine. Key to safety assessment is the development and validation of robust assays. Although several regulatory guidelines have been issued, the associated statistical methodologies continue to evolve. For example, in the recently issued FDA Draft Guidance for Industry, Assay Development and Validation for Immunogenicity Testing of Therapeutic Protein Products, a more stringent approach to cut point setting is advocated. In this session, statistical challenges and opportunities concerning pre-clinical/clinical safety evaluation will be highlighted. In particular, the thought process that led to the FDA recommendation of a new statistical method for the immunogenicity assay cut point will be discussed, and an alternative method will also be presented. This session consists of four presentations: two expert statisticians representing an industry perspective and two representing a regulatory perspective will together provide fresh insight on the application of, and issues surrounding, statistical approaches to pre-clinical/clinical safety evaluation.
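As a rough illustration of the cut point problem discussed above (not the method recommended in the draft guidance), a screening cut point is often set so that a target proportion of drug-naive samples, commonly 5%, screens positive; the sketch below uses a nonparametric 95th percentile after a simple outlier exclusion, with simulated data.

```r
set.seed(2018)
signal <- exp(rnorm(200, mean = log(100), sd = 0.25))   # drug-naive ADA signals
x <- log(signal)

# Exclude extreme values (e.g., pre-existing antibodies) via Tukey's fences
q <- quantile(x, c(0.25, 0.75))
keep <- x > q[1] - 1.5 * diff(q) & x < q[2] + 1.5 * diff(q)

cut_point <- exp(quantile(x[keep], 0.95))   # back-transform to the assay scale
cut_point
mean(signal > cut_point)                    # observed screen-positive rate
```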
Speaker 1: Susan Kirshner, FDA
Speaker 2: Jincao Wu, CDRH, FDA
Speaker 3: Jochen Brumm, Genentech
Speaker 4: Jianchun Zhang, MedImmune
The recently released ICH E9(R1) Addendum (Aug 2017) highlighted the importance of intercurrent events, e.g., “choosing and defining efficacy and safety variables as well as standards for data collection and methods for statistical analysis without first addressing the occurrence of intercurrent events will lead to ambiguity about the treatment effect to be estimated and potentially misalignment with trial objectives.” The ICH E9(R1) working group recommended five alternative strategies for estimating the treatment effect in clinical trials with intercurrent events, namely, the treatment policy strategy (i.e., the true ITT estimand), composite strategy, hypothetical strategy, principal stratification strategy, and while-on-treatment strategy. However, the Addendum only provided general guidance and considerations for these alternative strategies. Much work needs to be done to resolve the remaining challenges in handling different intercurrent events in different clinical trials for drug development. In this session, speakers from the FDA and industry will discuss novel statistical methods and practical approaches for these alternative strategies, with application to case studies, followed by discussion from the chair of the ICH E9(R1) working group. Statisticians from the regulatory agency and industry will bring various and valuable perspectives to this rapidly evolving statistical topic.
Discussant: Tom Permutt, FDA
The ICH E14 Q&A was revised in December 2015 and now enables pharmaceutical companies to use concentration-QTc (C-QTc) modeling as the primary analysis for assessing QTc prolongation risk of new drugs. Because the C-QTc modeling approach is based on using all data from varying dose levels and time points, a reliable assessment of QTc prolongation can be based on smaller-than-usual TQT trials or on single- and/or multiple-dose escalation (SAD/MAD) studies during early-phase clinical development in order to meet the regulatory requirements of the ICH E14 guideline.
Following the Q&A document R3 on ICH E14, C-QTc modeling is increasingly being used as the primary analysis for the assessment of the QTc-prolonging potential of a new drug. Statistics on report and protocol reviews received by the FDA QT interdisciplinary review team since the revision will be presented. We will share some common regulatory review issues and recommend good practices for using C-QTc modeling in the regulatory submission setting. Recent research developments in C-QTc modeling may be discussed if time permits.
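A highly simplified sketch of the core C-QTc analysis is shown below, using simulated data and a single concentration-QTc pair per subject; actual analyses use linear mixed-effects models across time points. The question is whether the upper bound of the two-sided 90% confidence interval for the model-predicted QTc effect at the observed geometric mean Cmax stays below the 10 ms threshold of regulatory concern.

```r
set.seed(99)
n <- 60
conc <- rlnorm(n, meanlog = log(500), sdlog = 0.4)      # drug concentration, ng/mL
dQTc <- 1 + 0.004 * conc + rnorm(n, sd = 6)             # placebo-corrected dQTc, ms

fit <- lm(dQTc ~ conc)                                  # linear C-QTc model
cmax <- exp(mean(log(conc)))                            # geometric mean Cmax
predict(fit, newdata = data.frame(conc = cmax),
        interval = "confidence", level = 0.90)          # two-sided 90% CI
# An "upr" value below 10 ms would support a negative QT assessment.
```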
Geriatric patients are underrepresented in clinical trials (CTs) in spite of contributing a disproportionately large disease burden and a major share of prescription drug and therapy consumption. Geriatric patients carry approximately 60% of the disease burden but represent only 32% of patients in Phase II/III CTs (Herrera et al., 2010). CT participation has been low in research across all therapeutic areas (e.g., oncology, CNS, arthritis, and cardiovascular disease). These failings may limit generalizability and provide insufficient data about positive or negative effects of treatment in this population. Elderly patients often have comorbidities, multiple drugs for multiple conditions, various disabilities and quality of life issues, organ function differences, and frailty. Also, the high prevalence of polypharmacy with aging may lead to an increased risk of inappropriate drug use, under-use of effective treatments, medication errors, poor adherence, drug-drug and drug-disease interactions, and, most importantly, adverse drug reactions. Another consideration is that chronologic age alone is inadequate; use of a standard geriatric assessment would be helpful and would provide groups with different age-related conditions (fit, vulnerable, frail) for analysis. The FDA issued a guidance for industry to encourage the fair representation of elderly patients in CTs and made suggestions to: 1) modify inclusion/exclusion criteria; 2) develop outcomes that are relevant; and 3) make adjustments to measurements and adjust for comorbidity. These considerations may lead to a decision for the elderly population to be a subgroup of a single CT, or for a separate CT, in order to provide the information needed for submission. The presenters will review the current status of reporting of geriatric patient results in CTs, how geriatric results are currently summarized, how many geriatric patients are needed for a clinical submission for drug approval, and best practices for planning for a geriatric population in CTs.
Discussant: Rick Chappell, Univ of Wisconsin at Madison
This session will be tied to the 2015 FDA draft (or final, if available) guidance on inter-disciplinary, aggregate assessment for IND safety reporting and will serve as a forum to discuss key topics in the guidance. Many companies have been trying to establish internal processes and teams to implement this guidance. Practical challenges and concerns have arisen during implementation. For example, should the unblinded aggregated SAE analysis involving ongoing pivotal studies be performed internally or through an external group? What is an appropriate threshold to trigger a Safety Assessment Committee to further investigate the causal association of an SAE with the investigational drug? How do we determine anticipated SAE rates, especially for studies without a control arm? In addition, controversy over the merits of blinded vs unblinded monitoring still exists. In this session, experience and thoughts on handling these practical issues will be shared and discussed.
Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on data accrued in an ongoing trial. These designs have the potential to provide more information about which subpopulations benefit from a treatment; however, there are tradeoffs (e.g., in sample size and trial duration) compared with standard designs. The session includes clinical applications and new methodological research involving these adaptive designs.
Our speakers are from diverse backgrounds: industry, the FDA, and academia. Dr. Scott Berry, President and Senior Statistical Scientist at Berry Consultants, LLC, will present the design and analysis for a recently completed Bayesian adaptive enrichment trial for treating severe stroke that demonstrated successful results. Dr. Min (Annie) Lin from the FDA will present case studies and discuss the regulatory challenges associated with adaptive enrichment designs. Dr. Michael Rosenblum from Johns Hopkins University will present new statistical methods for improving power in adaptive enrichment designs, and for computing optimal tradeoffs compared to standard designs.
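To give a flavor of the operating-characteristic tradeoffs mentioned above, here is a hypothetical simulation sketch (not any speaker's actual design) of a simple two-stage enrichment rule: at an interim analysis, enrollment in the biomarker-negative subpopulation continues only if its interim z-statistic clears a futility bound. Effect sizes, sample sizes, and the futility bound are all assumptions for illustration.

```python
# Hypothetical two-stage adaptive enrichment simulation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(effect_pos, effect_neg, n_stage=100, futility_z=0.5):
    """Final z-statistics for biomarker-positive and -negative subpopulations."""
    def stage_z(effect, n):
        # z-statistic for treatment vs control with unit-variance outcomes, n per arm
        trt = rng.normal(effect, 1.0, n)
        ctl = rng.normal(0.0, 1.0, n)
        return (trt.mean() - ctl.mean()) / np.sqrt(2.0 / n)

    z_pos_1 = stage_z(effect_pos, n_stage)
    z_neg_1 = stage_z(effect_neg, n_stage)

    # Enrichment rule: keep enrolling the biomarker-negative group only if promising.
    keep_neg = z_neg_1 >= futility_z

    z_pos = (z_pos_1 + stage_z(effect_pos, n_stage)) / np.sqrt(2.0)  # inverse-normal combination
    z_neg = (z_neg_1 + stage_z(effect_neg, n_stage)) / np.sqrt(2.0) if keep_neg else None
    return z_pos, z_neg

# Operating characteristics (power, expected sample size) can be estimated by calling
# simulate_trial() many times under different assumed effects and comparing against a
# fixed, non-adaptive design.
```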
With the fast-emerging field of safety assessment during clinical development, safety has become the "new" efficacy. However, safety events of interest are often rare and unpredictable, with no clear underlying causes, and require further clinical evaluation. This is where a new era of innovative methodology meets the needs of clinical safety assessment and adapts to existing practice. In this session, we will focus on recent innovative methodologies for clinical safety assessment. The potential advantages of the Bayesian approach relative to classical frequentist methods include the flexibility to incorporate current knowledge of the safety profile, originating from multiple sources, into the decision-making process. The first presentation will be on Bayesian thinking in safety assessment from the ASA Safety Working Group's Bayesian methodology subteam. Individualized medical decision making is often complex due to heterogeneity in patient treatment response, and pharmacotherapy may exhibit distinct efficacy and safety profiles in different patient populations. An "optimal" treatment that maximizes clinical benefit for a patient may also raise safety concerns due to a high risk of adverse events. Thus, to guide individualized clinical decision making and deliver optimally tailored treatments, maximizing clinical benefit should be considered in the context of controlling potential risk. The second presentation will discuss how to identify a personalized optimal treatment strategy that maximizes clinical benefit under a constraint on the average risk, using machine learning and artificial intelligence algorithms. The third presentation, from the regulatory agency, will discuss recent innovative methodologies for safety assessment from the regulatory perspective.
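To show the constrained-optimization idea behind the second presentation in its simplest possible form, here is a deliberately non-ML toy sketch: given per-patient predicted benefit and predicted risk under treatment (both assumed to come from some upstream model), it searches benefit thresholds for a treat/do-not-treat rule that maximizes average benefit while respecting a risk budget. This is an assumption-laden illustration, not the speaker's method.

```python
# Toy risk-constrained treatment rule (illustrative only; inputs are hypothetical
# per-patient predictions of benefit and adverse-event risk under treatment).
import numpy as np

def constrained_rule(benefit, risk, risk_budget):
    """Benefit threshold maximizing average benefit subject to an average-risk budget."""
    benefit, risk = np.asarray(benefit, float), np.asarray(risk, float)
    best_c, best_value = None, -np.inf
    for c in np.unique(benefit):                      # candidate thresholds: treat if benefit >= c
        treat = benefit >= c
        avg_risk = np.mean(np.where(treat, risk, 0.0))    # risk incurred only by treated patients
        value = np.mean(np.where(treat, benefit, 0.0))    # average realized benefit
        if avg_risk <= risk_budget and value > best_value:
            best_c, best_value = c, value
    return best_c, best_value                         # (None, -inf) if no threshold is feasible

# Example: constrained_rule(benefit=[0.2, 0.5, 0.8], risk=[0.05, 0.2, 0.4], risk_budget=0.1)
```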
Discussant: Qi Jiang, Amgen
Developments in precision-based diagnostic devices can provide patients with more accurate results and better care, as well as lower costs. Different technologies and platforms, such as immunohistochemistry (IHC), fluorescence in situ hybridization (FISH), and high-throughput sequencing, have been used to develop companion or complementary diagnostic devices for different therapeutic products. The application of these advanced or emerging technologies to diagnostic devices has brought many challenges to study design and data analysis. In this session, we will discuss the statistical issues and considerations associated with emerging diagnostic technologies, including liquid biopsy, IHC assays, Next-Generation Sequencing (NGS) onco-panels, and biodosimeters for rapid estimation of radiation exposure.
Multi-regional clinical trials (MRCTs) have been increasingly used to support the approval of drugs in multiple regions and to expedite the development of drug products for the global market. However, when differences in response to the drug arise across regions, the trial results may become challenging to interpret. In this session, two speakers will address heterogeneity in MRCTs and their suggestions for handling it: Dr. Janet Wittes from Statistics Collaborative will discuss heterogeneity observed in data collection and reporting, and Dr. Mark Rothmann from the FDA will illustrate the use of Bayesian shrinkage estimation in analyzing regional treatment effects. We will also be joined by panelists from both industry and the FDA, who will share their experiences and insights on how to deal with regional heterogeneity in MRCTs.
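As a concrete picture of the kind of shrinkage estimation mentioned above, here is a toy normal-normal sketch (not Dr. Rothmann's actual method): regional treatment-effect estimates are pulled toward a precision-weighted overall mean, with noisier regions shrunk more. The between-region standard deviation tau is treated as given for simplicity; in practice it would be estimated or assigned a prior.

```python
# Toy normal-normal shrinkage of regional treatment effects (illustrative only).
# Region i has observed effect estimate y[i] with standard error se[i]; the true
# regional effects are modeled as draws from N(mu, tau^2).
import numpy as np

def shrink_regional_effects(y, se, tau):
    """Shrunken (posterior-mean style) estimates of region-specific effects for a given tau."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    w = 1.0 / (se**2 + tau**2)
    mu = np.sum(w * y) / np.sum(w)          # precision-weighted overall mean
    b = tau**2 / (tau**2 + se**2)           # per-region shrinkage factor in [0, 1]
    return b * y + (1.0 - b) * mu           # noisier regions move further toward mu

# Example: three regions with apparently heterogeneous effects.
# shrink_regional_effects(y=[0.1, 0.4, 0.7], se=[0.15, 0.10, 0.25], tau=0.2)
```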
Ever since the 21st Century Cures Act was signed into law on December 13, 2016, there has been a surge of interest in innovative trial designs and data analyses. The new PDUFA VI also initiated a pilot program "for highly innovative trial designs". These proposals mostly target life-threatening conditions and/or rare diseases, the latter defined as conditions that affect fewer than 200,000 people in the US. This session will focus on rare disease clinical programs, which inherit the usual challenges of small-sample trials but also face some unique hurdles. In particular, beyond the study power issue, rare conditions are often difficult to diagnose (which translates into difficulty identifying the study population), may lack a readily accepted efficacy endpoint, and typically have little historical information. A more creative mind is required to design a practical clinical trial that can render robust evidence. Naturally, statisticians are consulted for efficient trial designs that systematically incorporate data from sources outside the clinical trial setting, and for novel analysis methods to better understand the data collected and inform future trial designs. This session will discuss regulatory experience with rare disease programs and present alternative models and trial designs that can potentially be utilized for rare disease clinical programs. Both regulatory and industry perspectives will be represented.
Discussant: Forrest C. Williamson, Eli Lilly and Company
It has been more than 5 years since the National Research Council's Panel on Handling Missing Data in Clinical Trials released its report on the Prevention and Treatment of Missing Data in Clinical Trials. This Report has had an appreciable impact on the design, conduct, and analysis of clinical trials. Recently, ICH E9 was amended to clarify issues related to estimands and sensitivity analyses, two key issues discussed in the Report. In this proposed panel discussion, we would like to bring together FDA, academia, and industry statisticians to assess where we are with the prevention and treatment of missing data in clinical trials. We would like to focus the discussion on three areas: the quantity of missing data, the precision of estimand definitions, and the role of sensitivity analysis. Our objective is to assess the overall state of our science with regard to: the impact on study design and conduct (e.g., the informed consent process, and inclusion of subjects in a study even after they have discontinued study medication); how data collected from individuals who stop study-specific treatment but complete study assessments for the entire trial have been used and interpreted; whether there has been a reduction in missing data as a result of using the strategies recommended in the Report; and which statistical models and software for designing and analyzing studies to address missing data and sensitivity issues have been found useful.
In December 2016, the "21st Century Cures Act" was signed into law, marking a strong commitment to modernize clinical trials. The Prescription Drug User Fee Act VI, passed in August 2017, is complementary legislation that includes many of the themes outlined in the 21st Century Cures Act. One area of great interest to statisticians is the advancement and use of complex adaptive, Bayesian, and other novel clinical trial designs. Under PDUFA VI, the FDA will launch a Complex Innovative Designs (CID) pilot program. CIDs are designs that require simulations to understand their operating characteristics, statistical properties, and operational features. This session aims to illustrate examples of innovative designs. One example will briefly describe a Bayesian non-inferiority study in a confirmatory setting that augments the active control arm with historical control efficacy data from the literature. Another example will describe a basket program investigating one drug in multiple indications, as well as using historical control information to reduce the size of the placebo arm. The philosophy of learning from the current trial, borrowing information from historical data or combining direct and indirect evidence from multiple trials, and updating our beliefs may provide scientific justification for more cost-effective and efficient clinical studies without sacrificing the goal of evidence-based medicine. The session will also aim to provide an overview and vision of the CID pilot program and Bayesian innovative designs from the regulatory perspective.
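For the historical-control-borrowing examples mentioned above, a minimal beta-binomial sketch (purely illustrative; the designs discussed in the session will be more elaborate) shows the basic mechanics: historical control responses enter the prior with a fixed discount weight, which can shrink the concurrent control arm needed. The discount factor, counts, and function name are assumptions for illustration.

```python
# Minimal beta-binomial borrowing of historical control data (illustrative sketch).
from scipy import stats

def posterior_control_rate(hist_events, hist_n, curr_events, curr_n, discount=0.5):
    """Posterior Beta distribution for the control response rate with down-weighted history."""
    a0, b0 = 1.0, 1.0                                      # vague Beta(1, 1) baseline prior
    a = a0 + discount * hist_events + curr_events          # responders, history discounted
    b = b0 + discount * (hist_n - hist_events) + (curr_n - curr_events)
    return stats.beta(a, b)

# Example: 40/200 historical responders, 8/50 concurrent responders, half-weighted history;
# a 95% credible interval for the control response rate:
post = posterior_control_rate(40, 200, 8, 50, discount=0.5)
print(post.interval(0.95))
```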
Data from "real world" practice and utilization, outside of clinical trials, are regarded as a pragmatic source of evidence with high potential to support clinical development and life-cycle management of medical products. US and EU regulatory agencies, public-private partnerships, and health technology assessment organizations have launched major initiatives to address the concerns and considerations in using real world evidence (RWE) to inform regulatory decision making. In PDUFA VI, the FDA has committed to exploring enhanced use of RWE in regulatory decision making, holding public workshops, and publishing draft guidance documents between 2018 and 2021. The EMA has also published multiple guidance documents on the use of RWE (e.g., the 2017 EMA Patient Registry Initiative to support research in understanding the natural history of disease and characterizing the effectiveness and safety of products). Most notably, there has been increasing public-private partnership in RWE research; GetReal, for example, was a three-year project (October 2013 to October 2016) funded by the Innovative Medicines Initiative, a two-billion-euro public-private consortium in Europe.
Robust RWE will not only leverage increasing volumes of data, but also weave together different sources of data, such as clinical data, registries, and electronic health records, to bridge the gaps between efficacy and effectiveness. Although many challenges and limitations remain with the use of RWE, there have also been many successful case studies. In this session, invited speakers will discuss the roles of RWE in the regulatory environment, provide an overview of using RWE to augment clinical development and life-cycle management, share successful RWE studies as well as their challenges, and elaborate on developing RWE strategies for clinical development and life-cycle management. Potential speakers and discussants include experts from regulatory agencies, academia, and industry.
Bioequivalence (BE) studies are used to evaluate whether generic drugs are as bioavailable as their pioneer counterparts in the rate and extent to which the active ingredients are absorbed and become available at the site of drug action. Typical BE studies are conducted at the blood level with a two-sequence, two-period, crossover design. Veterinary drugs share many of the same challenges as human drugs. For example, highly variable drugs, also seen in veterinary drug applications, have different acceptance criteria and statistical analysis methods than regular drugs. Additionally, blood-level BE studies might not be feasible for some drugs; instead, clinical endpoint studies might be more appropriate. Evaluation of BE in animals may also be complicated because veterinary drugs are delivered to groups of animals rather than to individual animals, and because all animals are typically enrolled in a study at the same time. Speakers from the FDA, industry, and academia will tackle these challenges and propose innovative solutions for designing and analyzing BE studies for animal drugs.
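As background for that discussion, the sketch below gives a bare-bones two one-sided tests (TOST) assessment of average bioequivalence on log-transformed PK data. It deliberately ignores the sequence, period, and subject effects a real crossover analysis would model, and it uses the conventional 80-125% limits rather than the widened criteria applied to highly variable drugs; data and function names are hypothetical.

```python
# Bare-bones TOST for average bioequivalence (illustrative only; a real crossover
# analysis would fit sequence, period, and within-subject effects).
import numpy as np
from scipy import stats

def tost_be(log_test, log_ref, lower=0.80, upper=1.25, alpha=0.05):
    """90% CI for the Test/Reference geometric mean ratio and a pass/fail BE verdict."""
    d = np.asarray(log_test, float) - np.asarray(log_ref, float)   # within-subject log differences
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1.0 - alpha, df=n - 1)        # 90% CI = two 95% one-sided bounds
    ci = np.exp([d.mean() - t_crit * se, d.mean() + t_crit * se])
    return ci, bool(lower <= ci[0] and ci[1] <= upper)

# Example with simulated paired log-AUC values for 24 subjects:
# rng = np.random.default_rng(1)
# log_ref = rng.normal(np.log(100), 0.2, 24)
# log_test = log_ref + rng.normal(0.02, 0.1, 24)
# tost_be(log_test, log_ref)
```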
In many clinical applications of survival analysis with covariates, the commonly used semiparametric models, e.g., proportional hazards (PH), proportional odds (PO), accelerated failure time (AFT), and similar models, may still rest on assumptions that turn out to be stringent and unrealistic, particularly when there is scientific background to believe that survival curves under different covariate combinations (e.g., comparing two treatments) will cross during the study period. This session brings together speakers from industry, regulatory, and academic institutions to explore practical scenarios where such assumptions are violated. The speakers will present results and examples for estimating conditional hazards, and for testing, when the underlying model assumptions are likely to be violated. Empirical results based on several simulated data scenarios under various modeling assumptions will be presented, and a regulatory perspective on the consequences of using the wrong assumptions will also be discussed. The session includes confirmed speakers from academia, industry, and regulatory agencies, making it an ideal forum to bring together experience and knowledge relevant to the topic of modeling and simulation methods.
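To connect this to everyday practice, here is a hypothetical workflow sketch (assuming the `lifelines` package and an illustrative data frame with time, event, and treatment columns): first diagnose a possible PH violation, then fall back on a restricted mean survival time comparison, which stays interpretable even when the survival curves cross. It is not any speaker's method, only a common-sense illustration.

```python
# Hypothetical workflow (illustrative; assumes `lifelines` and a data frame `df`
# with columns time, event, treatment): diagnose PH, then summarize via RMST.
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import proportional_hazard_test
from lifelines.utils import restricted_mean_survival_time

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
ph_test = proportional_hazard_test(cph, df, time_transform="rank")
ph_test.print_summary()                      # small p-values flag a PH violation

# Model-light alternative when curves cross: restricted mean survival time up to tau.
tau = 24.0                                   # clinically motivated horizon (e.g., months)
rmst = {}
for arm, sub in df.groupby("treatment"):
    km = KaplanMeierFitter().fit(sub["time"], sub["event"])
    rmst[arm] = restricted_mean_survival_time(km, t=tau)
print(rmst)                                  # mean survival time per arm, truncated at tau
```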
In 2016, generic drugs accounted for 86% of the market share in the US, and they have saved the U.S. health care system $1.67 trillion over the last decade. For certain drugs, such as locally acting drugs, three-arm clinical endpoint BE studies are often used to establish BE between a generic drug and an innovator drug. Clinical endpoint BE studies, however, are much more costly than the pharmacokinetic studies used for systemic drugs. How to optimize study design and improve the effectiveness and efficiency of clinical endpoint BE studies is a pressing task under the Generic Drug User Fee Amendments II (GDUFA II) instituted in 2017. In this session, speakers from the FDA and industry will discuss novel statistical methods and results from different perspectives, including how to design effective endpoints for an Abbreviated New Drug Application (ANDA) based on the NDA study design, how to optimize the allocation of sample size among the generic drug (TEST), innovator drug (REF), and placebo (PLB) arms under different scenarios, and how to incorporate adaptive design in clinical endpoint studies. All of these strategies will help cut costs for sponsors and ultimately promote the availability of affordable generic drugs to the American public. The session will include three presentations followed by a discussion, with statisticians from the agency and industry discussing issues and novel statistical approaches in bioequivalence.
Discussant: Stella Grosser, FDA
Across all areas of drug development, opportunities for statisticians to exert influence abound and are increasing. The thousands of statisticians working across the spectrum of drug development underscore the relevance of the statistical skill set in the pharmaceutical industry. Furthermore, statisticians are an integral part of regulatory teams tasked with making decisions on the safety and efficacy of pharmaceutical products. Beyond technical competence, specific competencies are required for statisticians to influence decision-making effectively in a multidisciplinary drug-development-team environment. The key leadership competencies for statisticians (e.g., listening, networking, and communication) will be reviewed and discussed. This session will feature presentations from Gary Sullivan and John Scott on "Growing your Influence in a Multidisciplinary Drug Development Team". Gary will speak from an industry perspective, while John will approach the topic from a regulatory perspective. Both presentations will highlight the importance of statistical leadership, the requirements for statistical leadership, and the impact of statistical leadership. A panel discussion featuring prominent statisticians from regulatory, industry, and academia will follow the presentations. The panelists will discuss topics and questions arising from the presentations, drawing on their own experiences.