The pharmaceutical industry currently faces several challenges, including a doubling of research and development costs alongside decreasing productivity (Woodcock, 2005). The industry is under pressure to run clinical trials that answer more questions more efficiently. The scientific community needs innovative trial designs that can incorporate multiple treatments (Tx) and multiple diseases (Dx) and provide results for multiple complex hypotheses. Such trials are, in general, more efficient and less costly than performing the corresponding subset of trials individually. The master protocol trial design (Woodcock and LaVange, 2017) is the centerpiece of innovation that incorporates these goals into a clinical trial. A master protocol is defined as one overarching protocol designed to answer multiple questions or hypotheses. Master protocols are designed for one Tx across multiple Dx or a single Dx across multiple Tx. These clinical trials are adaptive: group sequential design, sample size adjustment, or dropping an arm may be part of the design. The designs include the umbrella design, which studies multiple targeted Tx in a single Dx; the basket design, which studies a single targeted Tx in multiple Dx or disease subtypes; and the platform design, which studies multiple Tx in a single Dx, with Tx added to or leaving the platform on the basis of decision algorithms. A master protocol may involve the direct comparison of competing Tx, or the parallel evaluation of different Tx relative to their respective controls. After an interim analysis in a basket design, a decision may be made to combine subgroups into larger Dx groups, as when using Simon's two-stage design for oncology trials. During this session the presenters will discuss the current usage and application of master protocols, the statistical methodology that needs to be considered when designing and analyzing these trials, and considerations when developing a clinical program that includes a master protocol.
Safety evaluation, albeit critical to drug approval, is often difficult to perform during pre-marketing clinical programs. Regulatory agencies require pre- or post-marketing clinical trials to further investigate specific severe adverse events with pharmacological rationale. These trials usually engage over a thousand subjects for multiple years. The results from these trials, however, are still inconclusive most of the time because of the expected low event rates and other confounding factors introduced by the sheer magnitude and length of the trials. Recently, effort has been devoted to the search for innovative safety trial designs and novel statistical approaches that can incorporate both historical and real-world data that may not be collected from pragmatic clinical trials. The foreseeable obstacles include the validity of the data sources and the justification of the proper approaches to be used. However, should such safety evaluation be streamlined to consolidate all safety information in hand, it could greatly improve a regulatory agency's decision-making process, either to reasonably rule out a safety signal or to more definitively identify a potential risk of a drug. This session will discuss innovative safety trial designs that take external data into consideration. Both regulatory and industry perspectives will be presented to illustrate lessons learned from past experience and to advance the proper usage of these new approaches in safety evaluation.
Binary endpoints are frequently used in equivalence or non-inferiority studies in many areas of the pharmaceutical industry, such as generic drugs, biosimilar products, and vaccines. Even though the methodologies for equivalence and non-inferiority testing are well established, in certain situations the commonly used statistical methods may not be the most appropriate. Therefore, researchers from the FDA, industry, and academia have continued to develop novel statistical methods for equivalence/non-inferiority evaluation of binary endpoints.
In this session, speakers from both the FDA and industry will present their recent research on equivalence/non-inferiority testing for binary endpoints. Dr. Luan will explore an alternative statistical method for bioequivalence studies with a binary clinical endpoint in Abbreviated New Drug Applications (ANDAs), because the current method is not equally sensitive to differences between the test product and the reference product across the response range, particularly when the success rate for the reference product is expected to be low. Dr. Patterson will present recent developments in equivalence/non-inferiority testing of binary endpoints in vaccines. He will describe statistical analysis methods used to analyze clinical immunogenicity data, summarize the commonly applied methods used to assess efficacy, illustrate these methods using anonymized clinical data, and share his most recent research in this area. As discussant, Dr. Chow will summarize the advantages, disadvantages, and challenges of these newly developed testing methods. This session will provide the audience with updated methodology and best practices in equivalence/non-inferiority testing for binary endpoints and inspire innovative ideas on this topic.
Discussant: Shein-Chung Chow, FDA
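As a minimal illustration of the kind of binary-endpoint testing discussed above, the sketch below applies a Wald confidence-interval approach to a non-inferiority comparison of two response rates. The margin, counts, and critical value are hypothetical choices for illustration; this is not the specific method any speaker will present.

```python
# Sketch: non-inferiority test for a binary endpoint via a Wald
# confidence interval on the risk difference. All numbers are
# hypothetical and chosen only to illustrate the mechanics.
from math import sqrt

def ni_risk_difference(x_t, n_t, x_c, n_c, margin, z=1.96):
    """Declare non-inferiority if the lower two-sided 95% CI bound
    of (test rate - control rate) exceeds -margin."""
    p_t, p_c = x_t / n_t, x_c / n_c
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    diff = p_t - p_c
    lower = diff - z * se
    return diff, lower, lower > -margin

# Hypothetical data: 162/200 responders on test, 166/200 on control,
# with a 10-percentage-point non-inferiority margin.
diff, lower, ni = ni_risk_difference(162, 200, 166, 200, margin=0.10)
```

More refined methods (e.g., restricted maximum-likelihood variance estimates as in Farrington–Manning) replace the simple Wald standard error above but follow the same confidence-interval logic.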
The presence of a non-proportional-hazards (non-PH) treatment effect has been well documented in the context of immuno-oncology (IO) studies, where a delayed separation of the Kaplan-Meier (KM) curves has often been observed. However, the occurrence of non-PH is not limited to IO. It can manifest itself in many other clinical settings and can take a variety of other forms, such as a diminishing effect (KM curves join together after sufficient follow-up) or crossing KM curves. In such cases, summarizing the treatment effect by the hazard ratio alone may not be informative. Hypothesis testing with the log-rank test, and clinical trials designed under proportional-hazards assumptions, are likely to be underpowered to detect treatment differences. A cooperative effort with the pharmaceutical industry was initiated by the FDA in late 2016 to take a holistic approach to understanding the impact of non-PH on the design, analysis, and interpretation of clinical trials. Both simulations and actual case studies were used to evaluate the pros and cons of different statistical approaches. The Non-Proportional Hazards Working Group has now made recommendations on potential updates to regulatory expectations for studies with a non-constant treatment effect over time. This session will present recommendations for the design and analysis of studies in this context, and the recommendations will be discussed by regulatory, industry, and academic representatives.
Kun He from FDA will be the discussant for this session.
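To make the power issue concrete, the sketch below implements a weighted log-rank statistic with Fleming-Harrington G(rho, gamma) weights, one commonly discussed remedy that up-weights late events under a delayed treatment effect. This is an illustrative implementation of the standard textbook formula, not the working group's recommended procedure, and the example data are hypothetical.

```python
# Sketch: Fleming-Harrington weighted log-rank statistic.
# rho=0, gamma=1 up-weights late events (useful for delayed effects);
# rho=0, gamma=0 recovers the standard log-rank test.
from math import sqrt

def fh_logrank(times, events, groups, rho=0.0, gamma=1.0):
    """Standardized weighted log-rank Z statistic.
    times: event/censoring times; events: 1=event, 0=censored;
    groups: 0=control, 1=treatment."""
    data = sorted(zip(times, events, groups))
    n = len(data)
    s_prev = 1.0          # left-continuous pooled Kaplan-Meier estimate
    num, var = 0.0, 0.0
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i
        at_risk1 = sum(g for _, _, g in data[i:])   # group-1 subjects at risk
        d = d1 = 0
        j = i
        while j < n and data[j][0] == t:            # events tied at time t
            if data[j][1] == 1:
                d += 1
                d1 += data[j][2]
            j += 1
        if d > 0:
            w = (s_prev ** rho) * ((1 - s_prev) ** gamma)
            num += w * (d1 - d * at_risk1 / at_risk)
            if at_risk > 1:
                var += (w ** 2) * d * (at_risk1 / at_risk) \
                       * (1 - at_risk1 / at_risk) * (at_risk - d) / (at_risk - 1)
            s_prev *= 1 - d / at_risk
        i = j
    return num / sqrt(var) if var > 0 else 0.0

# Hypothetical data illustrating the call signature.
times  = [2, 3, 4, 5, 6, 7, 8, 9, 10, 12]
events = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
groups = [0, 0, 1, 0, 1, 0, 1, 1, 0, 1]
z_fh = fh_logrank(times, events, groups)             # late-event weighted
z_lr = fh_logrank(times, events, groups, gamma=0.0)  # standard log-rank
```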
Rare diseases pose several challenges for medical product development. Often little is understood about the natural history of a disease, patients are hard to come by so trials are apt to be small, and it is often hard to convince patients to be randomized to a placebo arm. This session highlights several aspects of rare disease product development by focusing on rare neurological diseases, which are especially challenging: patients often do not present symptoms at birth, vary considerably within a "disease," and are hard to diagnose. FDA wants clinically meaningful endpoints in evaluating clinical trials for any disease, but identifying endpoints that provide sufficient power in small trials can be difficult. We have invited three speakers. David Schoenfeld is a prominent statistician at Massachusetts General who has published extensively on statistical issues in ALS (Lou Gehrig's disease); he will talk about when a historical control is appropriate and when it is not. His work has been cited in many FDA submissions. Tristan Massie is an FDA statistical reviewer who has evaluated many rare neurological disease product submissions in CDER; he will comment on what is the same and what is different about rare disease submissions at the FDA. Susan Ward heads an industry-funded organization that operates in a precompetitive space to better understand Duchenne muscular dystrophy (DMD). They make use of data outside of clinical trials ("real-world evidence") to support drug development for rare diseases, and their group is modeling patterns of progression in DMD. This work ought to improve subsequent trials for DMD, and such efforts have been encouraged by FDA. DMD and ALS differ greatly, but together they highlight why understanding the disease is so critical to getting the right clinical trials.
In recent years, there has been a proliferation of regulatory and industry-wide initiatives on structured benefit-risk (BR) assessment. Examples of structured BR frameworks include PrOACT-URL (Problem formulation, Objectives, Alternatives, Consequences, Trade-Offs, Uncertainties, Risk Attitude, and Linked Decisions), the Benefit Risk Action Team (BRAT) framework, and the FDA Implementation Plan on a Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. In June 2016, the ICH Expert Working Group also finalized Common Technical Document Section 2.5.6 on Benefit-Risk Evaluations. As a result of these efforts, the uptake and utilization of structured BR assessments have been increasing. However, the aforementioned frameworks are mostly qualitative in nature, and the utility of quantitative BR approaches has not been systematically explored, creating uncertainty about the settings in which quantitative BR assessment could be optimally applied. In this session, speakers will discuss common understanding and principles of quantitative BR methods, delineate situations where such methods could be utilized via case studies, and give recommendations on the settings for their application.
Graphical approaches and gatekeeping procedures in clinical trials are flexible and interpretable multiplicity adjustment methods that maintain strong control of the family-wise error rate (FWER). These approaches have gained increasing popularity in confirmatory clinical trials (CCTs) since their introduction. Several refined graphical approaches and gatekeeping procedures have been proposed to handle situations in which the original assumptions may not hold in CCTs with complex designs, for example when there are:
• Multiple primary and secondary hypotheses
• Multiple arms
• Multiple interim looks
• Multiple tests at both the subgroup and overall population level
Three speakers will share their experience and ideas on recent developments and applications of multiplicity methods in CCTs.
• The first speaker will show how to build the correlation between the biomarker subpopulation and the overall population into the procedure in oncology trials. This approach boosts study power while maintaining strong control of the FWER.
• The second speaker will discuss an enhanced mixture method for constructing more powerful gatekeeping procedures when serial logical restrictions satisfy a transitivity condition.
• The third speaker will provide case studies of applying multiple testing procedures in late-phase neuroscience clinical trials.
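As background for the session, the Bonferroni-based graphical approach can be sketched as an alpha-propagation algorithm over a transition matrix (in the style of Bretz et al. 2009): when a hypothesis is rejected, its alpha weight is passed to the remaining hypotheses. The weights, transition matrix, and p-values below are hypothetical.

```python
# Sketch of the Bonferroni-based graphical multiple testing procedure.
# Rejecting H_i passes its alpha weight along the transition matrix G;
# the loop repeats until no further hypothesis can be rejected.
def graphical_test(p, alpha_w, G, alpha=0.025):
    """p: p-values; alpha_w: initial alpha weights (summing to <= 1);
    G: transition matrix, row i giving the fraction of H_i's weight
    passed to each other hypothesis when H_i is rejected."""
    m = len(p)
    active = set(range(m))
    w = list(alpha_w)
    g = [row[:] for row in G]
    rejected = set()
    while True:
        cand = [i for i in active if p[i] <= w[i] * alpha + 1e-15]
        if not cand:
            break
        i = min(cand, key=lambda k: p[k])   # reject any eligible hypothesis
        rejected.add(i)
        active.discard(i)
        # Update weights and transition matrix per the graphical algorithm.
        new_w = [0.0] * m
        new_g = [[0.0] * m for _ in range(m)]
        for j in active:
            new_w[j] = w[j] + w[i] * g[i][j]
            for k in active:
                if j == k:
                    continue
                denom = 1 - g[j][i] * g[i][j]
                if denom > 0:
                    new_g[j][k] = (g[j][k] + g[j][i] * g[i][k]) / denom
        w, g = new_w, new_g
    return rejected

# With equal weights and full transfer between two hypotheses, the graph
# reduces to Holm's procedure (hypothetical p-values).
holm = graphical_test([0.01, 0.02], [0.5, 0.5],
                      [[0.0, 1.0], [1.0, 0.0]], alpha=0.05)
```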
In vitro studies are an essential part of pharmaceutical development, from the discovery of new molecular entities to the approval and marketing of a new drug. In vitro dissolution in particular is critical for controlling product quality and demonstrating bioequivalence. This session consists of four presentations. The first will discuss the regulatory perspective on the importance of in vitro bioequivalence, while the second and third will be given by expert statisticians from industry on traditional and Bayesian methods for assessing dissolution profile similarity. Lastly, industry experts will discuss the impact of particle size on the dissolution of suspension products and modeling approaches to predict dissolution for a suspension product based on its particle size distribution.
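For readers unfamiliar with dissolution profile comparison, the conventional f2 similarity factor (with f2 >= 50 usually read as indicating similarity) can be computed as follows; the dissolution profiles are hypothetical.

```python
# Sketch: the f2 similarity factor used to compare two dissolution
# profiles measured at the same time points. Profiles are hypothetical.
from math import log10, sqrt

def f2_similarity(ref, test):
    """ref/test: mean % dissolved at matched time points.
    Returns f2; identical profiles give f2 = 100."""
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n  # mean squared difference
    return 50 * log10(100 / sqrt(1 + msd))

# Hypothetical reference and test profiles (% dissolved at 4 time points).
f2 = f2_similarity([35, 55, 75, 90], [38, 57, 77, 88])
```

Bayesian alternatives discussed in the session address the fact that f2 as computed on mean profiles ignores batch-to-batch and tablet-to-tablet variability.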
The challenge of a clinical trial is to estimate a treatment effect that is relevant both clinically and from a regulatory standpoint in the presence of intercurrent events, such as treatment or study discontinuation, use of rescue medication, or death. The draft ICH E9(R1) Addendum "Estimands and Sensitivity Analysis in Clinical Trials" describes a structured framework that includes the specification of an estimand (the "treatment effect to be estimated"), a method of estimation (estimator) in the presence of informative and treatment-related events that occur after randomization, and sensitivity analyses to explore the robustness of inferences from the main estimator to deviations from its underlying assumptions. The draft addendum lists at least five strategies for addressing intercurrent events, leading to different types of estimands. However, the selection of appropriate estimators for each type of estimand is an open problem that needs further discussion and investigation. This session will share thoughts from both industry and regulatory perspectives on the selection of estimators for various types of estimands, with case studies. Current practice in clinical trials will also be discussed, together with next steps that could broaden that practice and make it more efficient.
Discussant: David Petullo (CDER, FDA)
Most medical products are designed and developed using a one-size-fits-all approach. However, it is recognized that not all patients with the same disease will benefit from a specific therapy. Aiming to target the right treatments to the right patients at the right dose, the Precision Medicine Initiative calls for the development of treatment and prevention strategies that take into account individual differences in genetic makeup, environment, and lifestyle. FDA regulates therapeutic products including biologics, devices, and drugs. Among these, products with precision medicine implications include autologous cancer vaccines, autologous cell therapies (regenerative medicine), and companion diagnostic products. Because of the unique features of these products, their development has raised challenges and issues. For autologous therapies, the main material for product manufacture comes from each patient's own tissue or cells, so cell composition varies from patient to patient, resulting in large variability in the total cell count of the final product; this variability can influence patients' study outcomes. Additionally, the time to complete product manufacture and the manufacturing success rate can affect study design and the interpretation of study results: products can take several months to manufacture, and patients may become ineligible or die during that time. On the other hand, companion diagnostic products rely on validated testing tools or assays to select the "right" patients for treatment. Issues such as the biological basis for selected biomarkers and the biomarker validation process during ongoing Phase 3 trials can affect product development and approval.
Dr. Shiowjen Lee from FDA will be the discussant and conclude the session.
The integrity of clinical trial data is critical to the public health mission of the FDA. Fraudulent and poorly collected data jeopardize the ability of FDA scientists to perform the analyses that ensure safe and effective drugs are approved for the market. Data anomalies indicative of extreme sloppiness and/or potential fabrication have been found in both New Drug Applications (NDAs) and Abbreviated New Drug Applications (ANDAs). For NDAs, suspect data can often be attributed to problematic clinical sites, so the detection of such sites is of primary interest. For ANDAs, data anomalies have been detected in both pharmacokinetic (PK) and clinical endpoint studies. Various efforts have been initiated to address this problem. Some aim to improve the efficiency and effectiveness of existing data anomaly detection tools, whereas others aim to develop novel methodology. Data anomaly detection methodology comprises a variety of tools, including but not limited to the application of supervised and unsupervised statistical learning methods to clinical trial data, with the response defined as a clinical inspection outcome, and the development of computational tools such as R Shiny applications. The objective of this session is to present and discuss such efforts initiated from inside and outside the FDA.
The growing use of real-world data for medical research and the passage of the 21st Century Cures Act and PDUFA VI have accelerated the debate about the appropriate use of real-world evidence (RWE) for regulatory decision making. Recent publications from regulatory leadership, draft guidance on the use of RWE for devices, and many medical and regulatory conference discussions and white papers (such as those from the Duke-Margolis conference and the National Academy of Medicine, NAM) have put forth the challenges and presented potential scenarios where RWE may be considered as part of the evidence package supporting regulatory decisions. However, less discussion has taken place on specific statistical issues or methodological solutions. Previous research from the Observational Medical Outcomes Partnership (OMOP) raised challenging questions regarding the ability of observational research to accurately detect causal effects and avoid false findings. To improve the operating characteristics of such research, and build confidence in decisions based on it, improvements are needed in (1) the quality and linking of data sources, (2) research designs, and (3) statistical methodology. In this session, speakers from regulatory agencies, industry, and academia will first discuss the statistical issues behind these challenges and ways to improve observational research. They will then serve on a moderated panel to discuss the statistical issues and potential pathways forward across several presented scenarios for regulatory decisions, such as expanding label claims for an approved medication based on real-world evidence.
Moderated Panel Discussion: Topics include challenges in using RWE for label expansion, improving the quality of RWE to regulatory grade, pragmatic and other innovative designs, and analytics to improve the operating characteristics of RWE.
Composite endpoints are widely used in clinical trials and drug development. When designed and analyzed appropriately, they offer advantages such as detecting a weak treatment-effect signal that might otherwise be missed, or achieving study goals with smaller, more efficient trials. On the other hand, misconstruction of, or missteps in deriving, a composite endpoint may weaken the ability to reach reliable conclusions and make the endpoint difficult to analyze and interpret.
Composite endpoints are also an important topic in the recently released draft FDA guidance "Multiple Endpoints in Clinical Trials: Guidance for Industry." The guidance discusses designating individual components of a composite endpoint as secondary endpoints, with appropriate multiplicity control, to reach proper conclusions. However, one situation remains unsettled: a non-significant primary composite endpoint accompanied by a hard-to-ignore favorable effect on a component endpoint. The guidance also discusses how to appropriately conduct and interpret the decomposition analysis of individual components and influencing factors.
It is worth noting that the wide adoption of composite endpoints across different therapeutic areas, data settings (e.g., randomized trials, real-world data), and statistical analysis frameworks (e.g., longitudinal data analysis, time-to-event analysis) requires careful consideration of their capabilities, feasibility, and area-specific features, so that the strategy and methodology built around a composite endpoint are fit for purpose.
We propose to invite experts to discuss the construction and analysis of composite endpoints across different scenarios. The session is also expected to generate insightful discussion on the practical implications of the aforementioned draft guidance for drug development and to shed light on some of the unresolved issues the guidance identifies.
Discussant: Bruce Binkowitz, Ph.D., Vice President, Shionogi, Inc.
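As a small illustration of composite-endpoint construction, the sketch below derives a time-to-first-event composite from component events, as is commonly done for, e.g., MACE-type cardiovascular composites. The component names and data are hypothetical.

```python
# Sketch: deriving a time-to-first-event composite endpoint from its
# component events. Component names and values are hypothetical.
def first_event(components):
    """components: dict mapping event name -> (time, occurred flag).
    Returns (time, event flag, component name) for the composite:
    the first occurrence of any component, else censoring at the
    shortest component follow-up time."""
    occurred = [(t, name) for name, (t, e) in components.items() if e]
    if occurred:
        t, name = min(occurred)        # earliest component event
        return t, 1, name
    # No component occurred: censor when the subject can no longer
    # be confirmed event-free on every component.
    return min(t for t, e in components.values()), 0, None
```

Decomposition analyses, as discussed in the draft guidance, would then examine which components drive the composite result.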
The speakers in this session discuss statistical and related considerations for effectively implementing the 2009 FDA Guidance for Industry on Patient-Reported Outcome (PRO) Measures: Use in Medical Product Development to Support Labeling Claims. Recent reviews (Cappelleri, 2017) have summarized the number of approved drugs whose package inserts include a PRO as a primary or secondary endpoint.
FDA’s PRO guidance has been described as a "roadmap" for developing a PRO. PROs developed using the guidance, with FDA regulatory and statistical input, are better positioned to inform a primary endpoint in trials that may lead to drug approval and marketing authorization.
The PRO guidance presents specific steps required for developing a PRO de novo and the use of existing PROs to inform a primary endpoint.
At a high level, these steps include but are not limited to:
• defining a target population
• developing labeling statements as part of the target product profile
• eliciting information about the disease and endpoints of interest from patients or caregivers (for example, for a pediatric indication)
• establishing the psychometric properties of the instrument
• potential validation steps
• logistics related to inclusion in clinical trials, data collection, transmission, and management
• analysis and interpretation of the data
Discussant: Dennis Revicki, Evidera
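One of the psychometric properties mentioned above, internal consistency, is often summarized by Cronbach's alpha. A minimal sketch with hypothetical item scores follows; this is an illustration of a standard formula, not a step the guidance prescribes in this form.

```python
# Sketch: Cronbach's alpha, a standard internal-consistency measure
# used when establishing psychometric properties of a multi-item PRO
# instrument. Item scores below are hypothetical.
def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same subjects.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical 3-item instrument scored on 5 subjects.
alpha = cronbach_alpha([[3, 4, 2, 5, 4], [2, 4, 3, 5, 3], [3, 5, 2, 4, 4]])
```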
The use of tobacco products is the leading cause of preventable disease and death worldwide. In the United States, it accounts for more than 480,000 deaths every year. The Center for Tobacco Products (CTP) at the FDA is authorized to implement the Family Smoking Prevention and Tobacco Control Act (2009). Under this law, CTP regulates the manufacturing, distribution, and marketing of tobacco products. Mathematical and statistical modeling is critical to estimating the impact of tobacco product use on the population as a whole and to charting new frontiers in tobacco regulatory science.
This session will present novel mathematical and computational models for studying the impact of tobacco product use on the health of the US population, including both users and non-users of tobacco products. The presenters are experts in regulatory science and modeling, selected from industry, academia, and government to bring a broad range of expertise and experience in tobacco regulatory science.
Arming the immune system against cancer has emerged as a powerful tool in oncology in recent years. Instead of poisoning a tumor or destroying it with radiation, immunotherapy unleashes the immune system to destroy the cancer. This unique mechanism of action poses new challenges in the study design and statistical analysis of immunotherapy trials, including: (1) a delayed onset of clinical effect, which violates the proportional hazards assumption so that conventional testing procedures lose power; moreover, the duration of the delay may vary from subject to subject or from study to study; and (2) the development of autologous cancer vaccines may be subject to manufacturing failure, so not all eligible subjects randomized to treatment will have a "successful" product developed. More innovative statistical methodologies are needed to address these unique characteristics of immunotherapy trials.
This session will feature speakers from industry, academia, and a regulatory agency who will discuss their approaches to meeting these challenges. The first speaker will give an overview of immunotherapy development from a regulatory perspective; the second will address how to account for a delayed effect in the design of immunotherapy trials using an innovative statistical approach; the third will discuss practical statistical issues encountered in running immunotherapy trials.
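The delayed-effect challenge described above can be explored by simulation. The sketch below generates treated-arm survival times from a piecewise-exponential model in which the hazard ratio applies only after a delay; all parameters are hypothetical, and this is not any speaker's specific design approach.

```python
# Sketch: simulate survival times under a delayed treatment effect
# using a piecewise-exponential model (hypothetical parameters).
import random
from math import log

def piecewise_exp(lam, delay, hr, rng):
    """Treated-arm time: hazard is lam before `delay` and lam*hr after
    (hr=1 recovers a plain exponential). Inverts the cumulative hazard."""
    t0 = -log(rng.random()) / lam      # exponential draw with rate lam
    if t0 <= delay:
        return t0                      # event occurs before the effect starts
    return delay + (t0 - delay) / hr   # stretched tail once the effect kicks in

# Paired draws from the same random stream, with and without a
# delayed benefit (hr = 0.5 starting at t = 5).
rng = random.Random(0)
no_effect = [piecewise_exp(0.1, 5.0, 1.0, rng) for _ in range(20000)]
rng = random.Random(0)
delayed = [piecewise_exp(0.1, 5.0, 0.5, rng) for _ in range(20000)]
```

Feeding such simulated arms into standard and weighted log-rank tests is one way to quantify the power loss caused by the delay.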
ICH E11, Clinical Investigation of Medicinal Products in the Pediatric Population, was finalized in early 2001. Although its basic principles still apply, many concepts and approaches are due for updating. The recent addendum supplements the current E11 guidance in several areas, including extrapolation, modeling and simulation, and trial methodology. Traditionally, extrapolation of efficacy has relied heavily on clinical evidence collected from pharmacokinetic (PK) and pharmacodynamic (PD) studies, and modeling and simulation have usually referred to pharmacologic approaches. More than a decade after the guidance was published, scientists have begun to revisit the basic concept of extrapolation. This is reflected in the recent addendum, where modeling and simulation are expanded to include mathematical and statistical models, with additional attention to model assumptions and validity. The EMA reflection paper published in October 2017 focused on extrapolation and provided more detailed methodological guidance. As expected, statisticians have been brought into the discussion to develop a better extrapolation schema and more efficient, practical trial designs that maximize the utilization of existing data sources in a systematic manner. This session will discuss alternative models and trial designs for pediatric clinical programs, with case examples from both regulatory and industry perspectives based on their respective experiences.
There are three major domains in health care: diagnosis, treatment, and prevention of disease. Vaccines play a critical role in the management of public health through the prevention of disease. In fact, the CDC ranks vaccination as one of the ten greatest public health achievements of the 20th century. Therefore, vaccine development has been, is, and will remain important. Unique aspects of vaccine development separate it from drug development: multiple endpoints and/or multivalent vaccines create multiplicity challenges; dosing schedules and/or co-administration with other current vaccines present challenges in the design and conduct of clinical trials; regulatory agencies require lot-consistency studies; and safety assessment and monitoring are very different in vaccine development, since vaccines are mostly administered to healthy subjects. All of these unique aspects require different statistical strategies. In this session, speakers from industry and a regulatory agency will share some of these challenges and discuss innovative study designs and statistical methods encountered in vaccine development.