Recurrent events are repeated occurrences of the same type of event. Endpoints capturing recurrent event information can lead to interpretable measures of treatment effect that better reflect disease burden, and they are more efficient than traditional time-to-first-event endpoints in that they use the available information beyond the first event.
Recurrent event endpoints are well established in indications where recurrent events are clinically meaningful, treatments are expected to impact the first as well as subsequent events, and the rate of terminal events such as death is very low. Examples include seizures in epilepsy, relapses in multiple sclerosis, and exacerbations in pulmonary diseases such as chronic obstructive pulmonary disease. More recently, recurrent event endpoints have also been proposed in indications where the rate of terminal events is high, e.g. chronic heart failure, but experience in this setting is limited.
In trials using recurrent event endpoints, interest usually lies in understanding the underlying recurrent event process and how it is affected by explanatory variables such as treatment. In this context, different endpoints and measures of treatment effect, that is, different estimands, can be considered. Depending on the specific setting, some estimands may be more appropriate than others. For example, accounting for the interplay between the recurrent event process and the terminal event process is important in indications where the rate of terminal events is high. The choice of estimand has a direct impact on trial design, conduct, and statistical analysis.
Sample size re-estimation (SSR) has been the most frequently used adaptive design method in confirmatory trials after classical group sequential designs. The usefulness of interim SSR is driven mainly by uncertainty about the true effect size at the planning stage of the study. In general, SSR based on blinded interim analyses of aggregate/overall data is considered unproblematic, as such approaches have very limited potential to introduce bias or impair the validity and interpretability of study results. SSR based on unblinded knowledge of interim treatment effects, on the other hand, can raise issues of type I error inflation and/or operational bias, and has to be approached with greater caution. In planning any unblinded SSR in regulatory applications, clear analytical derivations and/or statistical justifications demonstrating control of the type I error rate are expected, as well as strategies to mitigate operational bias. SSR raises additional issues. For example, it can lower the minimum effect size detectable with statistical significance to the point where a significant result is no longer clinically meaningful for the study indication. The timing of SSR is also critical: an early SSR may yield an unreliable new sample size because of limited accrued data, while a late SSR may be pointless because planned accrual may already be complete by that time.
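As a concrete illustration of the blinded case, the nuisance-parameter update can be sketched as follows. This is a minimal sketch assuming a two-arm trial with a normally distributed endpoint; the planned effect size delta stays fixed and only the common SD is re-estimated from pooled, blinded interim data. All numbers are invented for illustration.

```python
# Minimal sketch of blinded sample-size re-estimation (SSR) for a two-arm
# trial with a normally distributed endpoint.  The planned effect size
# delta stays fixed; only the nuisance parameter (the common SD) is
# re-estimated from pooled, blinded interim data.  All numbers invented.
from math import ceil
from statistics import NormalDist, stdev

def n_per_arm(delta, sd, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided two-sample z-test."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

planned = n_per_arm(delta=1.0, sd=2.0)   # design stage: assumed SD = 2
# Blinded interim look: pool both arms and re-estimate the SD.  (The pooled
# SD is slightly inflated by any true treatment effect; corrections exist.)
pooled = [3.1, 0.4, 2.2, 5.0, 1.8, 4.3, 2.9, 0.7, 3.6, 2.5]
reestimated = n_per_arm(delta=1.0, sd=stdev(pooled))
```

Because only pooled, treatment-blind data enter the calculation, this kind of update has very limited potential to bias the final comparison, which is why it is generally viewed as unproblematic.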
Disease interception is an emerging field that represents a paradigm shift in disease management: treating a pathogenic disease process before it is clinically detectable. For example, subjects at high risk of developing Alzheimer's disease may be given a treatment designed to prevent the disease or delay its onset. As another example, early detection of disease in a preclinical phase may expand and/or improve treatment options. Evaluating a diagnostic test for use in a screening program designed for disease interception depends on many factors, including screening frequency, test accuracy, the clinical benefits of an accurate test result, the clinical consequences of an inaccurate test result, the effectiveness of available treatments, and competing risks. Even an accurate positive test result, whether for slowly progressing asymptomatic disease or for high risk of future disease, could have the clinical consequence of a long treatment duration with potential for adverse events during a patient's healthy years. Whether, or to what extent, these consequences are acceptable to patients or regulators when intercepting disease that may or may not be present in occult form, or may or may not occur in the future, is unclear. Increasingly popular methods for assessing benefit-risk tradeoffs, such as decision analysis with patient preferences and clinical utility measures, could prove particularly valuable for jointly assessing medical tests and therapies in proposed disease interception programs. In this session, patient, academic, and regulatory perspectives on disease interception programs will be provided and illustrated with case studies. Recent advances in clinical utility measures as well as study designs and statistical evaluations will be discussed.
Most of the commonly used methods for analyzing time-to-event data are built on the assumption of proportional hazards. The results of traditional analyses such as the Cox proportional hazards model and the log-rank test may be difficult to interpret if the hazard rates of different groups change over time or cross at certain time points. Even though the Cox model with a time-dependent covariate has been used when hazards cross, interpreting treatment effects as covariate-adjusted hazard ratios is challenging. Methods based on the log-rank test can still be used to test the significance of the treatment effect but may not provide the best estimate of it. The goal is to provide a parsimonious method with easily interpretable treatment effect estimates. In this session, the speakers will present different strategies for handling non-proportional hazards in different clinical settings.
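One commonly discussed model-free alternative when hazards are non-proportional (not necessarily the method any speaker will present) is the restricted mean survival time (RMST): the area under the Kaplan-Meier curve up to a fixed horizon tau, whose between-group difference is interpretable directly in time units. A minimal sketch with invented data:

```python
# Minimal sketch: Kaplan-Meier estimate and the restricted mean survival
# time (RMST) up to a fixed horizon tau.  Data are invented; events is 1
# for an observed event, 0 for censoring.

def km_rmst(times, events, tau):
    """Area under the Kaplan-Meier curve on [0, tau]."""
    surv, rmst, last_t = 1.0, 0.0, 0.0
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if t > tau:
            break
        rmst += surv * (t - last_t)   # survival is constant on [last_t, t)
        if e:                         # event: step the KM curve down
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1                  # censored subjects leave the risk set
        last_t = t
    return rmst + surv * (tau - last_t)
```

The difference in RMST between two arms, e.g. `km_rmst(t1, e1, tau) - km_rmst(t0, e0, tau)`, is a treatment effect expressed as event-free time gained, which remains interpretable even when hazards cross.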
Discussants: Lee-Jen Wei, Harvard University; Jim Hung, FDA/CDER
This session brings together experts in CMC issues, each working on a distinct class of products, ranging from prescription forms of infant formula to biologics to components of in vitro diagnostic products. The session is designed to expose the audience to the key scientific considerations behind a fundamental question in manufacturing: how best to establish product shelf-life.
The 1984 Drug Price Competition and Patent Term Restoration Act, known as the "Hatch-Waxman Act," established the modern system of generic drugs in the United States. According to the 2016 Generic Drug Savings & Access in the United States report published by the Generic Pharmaceutical Association, generic drugs made up 89% of prescriptions dispensed in 2015 but only 27% of total medicine spending. With the passage of the Generic Drug User Fee Amendments of 2012, there is an increased emphasis on regulatory science research for generic drugs, including the statistical evaluation of generic transdermal delivery systems and topical patches (hereafter referred to as TDS products). The Office of Generic Drugs in FDA CDER recommends that three studies be submitted in support of Abbreviated New Drug Applications (ANDAs) for TDS products: a bioequivalence study with pharmacokinetic (PK) endpoints, an adhesion study evaluating the adhesion of the TDS to skin, and an irritation/sensitization study evaluating the skin irritation and sensitization potential of the TDS. To support regulatory approval, in addition to bioequivalence on PK endpoints, the test product must adhere at least as well as the reference, be no more irritating than the reference, and be no more sensitizing than the reference. This session will discuss statistical issues, challenges, and approaches in the adhesion study and the irritation/sensitization study for generic TDS products.
Successful drug development relies on close collaboration between industry and regulatory agencies. Many interactions take place during the development process to discuss various topics, some of which are statistical; face-to-face meetings, teleconferences, and written communications all occur. With the US FDA, these interactions can include the pre-IND meeting, end-of-phase-II meeting, pre-submission meeting, mid-cycle review meetings, and type A or type B meetings. Cross-functional face-to-face meetings often have limited time allocated to relevant statistical issues, and decisions made at these meetings are formally documented and generally binding on the clinical development program. In addition, type C meetings may be held to discuss various other topics, such as protocol development, analysis plans, protocol amendments, submission orientation, or trial-specific questions. When a sponsor requests an interaction with a regulatory body, a few months may pass before an actual meeting can occur. As new and innovative products are developed and made available to patients, more innovative and non-conventional trial designs and statistical analyses are being implemented, such as meta-analysis methodologies, Bayesian designs in drug development, adaptive designs in rare disease settings, pragmatic trials, PK/PD modeling, and the inclusion of patients' perspectives in drug development and decision making. Enhanced communication between sponsors and regulators is critical to ensure successful development programs. In July 2016 FDA released its PDUFA VI goals letter, which set out specific goals including enhancing FDA-sponsor communications, in part through a team of CDER/CBER staff dedicated to working more frequently with sponsors. This town hall session will include a panel and open audience engagement to explore strategies for more efficient interactions between industry and regulators in light of the PDUFA VI goals.
Randomization is one of the cornerstones of experimental design in clinical trials: it reduces bias and underpins valid comparisons of outcomes between treatment groups. A variety of randomization designs have been proposed in the literature for both cluster randomized trials and individually randomized trials.
In individually randomized trials, adaptive randomization designs have become increasingly popular. Adaptations to the randomization scheme respond to ethical considerations and the need to identify effective therapies as early as possible in the drug development process. The adaptive design framework offers an opportunity to make the "right decision early" by learning from the accrued data during the study and adapting the randomization probabilities to assign more patients to the more promising treatment arms. Practical considerations and challenges in implementing such dynamic designs will be addressed to show how to accelerate quantitative decision-making in drug development.
Cluster randomized trials with relatively few clusters have also been widely used in recent years for the evaluation of health-care strategies. On average, randomized treatment assignment balances both known and unknown confounding factors between treatment groups; in practice, however, investigators can introduce only a limited amount of stratification and cannot balance all the important variables simultaneously. Innovative randomization designs are therefore needed to meet this challenge.
In this session, a presentation will address an innovative randomization design for cluster randomized trials from a regulatory perspective. In addition, the session will illustrate different response-adaptive randomization procedures in light of accurate benefit-risk assessment and regulatory considerations.
Decisions about treatments are complex and often involve trade-offs among multiple, often conflicting, assessments of benefit and risk. Decision makers or stakeholders choose between alternatives and are therefore the source of preference scores and weights. Bayesian analysis allows the formal use of prior information, with repeated updates of knowledge as new data accrue, and is thus a natural choice to support such benefit-risk (BR) trade-offs and decision making. Bayesian BR approaches allow one to explore the variability of BR scores and weights under uncertainty in a sequential manner as information accrues. Perspectives from industry and FDA on the emerging challenges and the importance of assessing BR preferences will be presented; speakers and panelists will discuss various research experiences with Bayesian BR methods, offer possible solutions, and share the lessons they have learned.
The high failure rate of clinical trials remains one of the key causes of rising R&D costs in industry. It is therefore essential that sponsors make well-informed Go/No-Go (GNG) decisions to move promising treatments forward, or halt ineffective ones, at several time points during drug development. Utilizing appropriate statistical methods, such as Bayesian methods, is key to this decision-making process. This session will showcase recent advances in statistical methods and tools for GNG decision-making. Richard Simon from NCI will present a novel method for assessing the strength of evidence for GNG decisions using Bayesian posterior probabilities. Pat Mitchell from AstraZeneca will present a dual-target decision-making framework and a specialized software tool that can accommodate both frequentist and Bayesian approaches. Jim Bolognese from Cytel will present a case study comparing a Bayesian design, with and without an informative prior, to traditional group sequential options.
A Transdermal Delivery System (TDS) is designed to deliver the active substance(s) slowly through intact skin. To ensure the safe and effective use of transdermal systems, the active substance(s) should be delivered through the skin at an adequate rate that is maintained for an appropriate time during system application, and the system should not irritate the skin. Regulatory guidances [1, 2] are available for the development of generic applications, and the United States Pharmacopeia has a general chapter on product quality tests for topical and transdermal drug products [3]; however, there is no regulatory guidance on evaluating adhesion in new drug development. In this session, speakers from both industry and a regulatory agency will present and discuss the latest knowledge and experience in evaluating TDS performance in new drug development. In addition, we may also present and discuss USP Chapter <3> on topical and transdermal drug products and the EMA guideline on the quality of transdermal patches. References: 1. FDA draft guidance for industry, Assessing Adhesion with Transdermal Delivery Systems and Topical Patches for ANDAs, 2016. 2. EMA, Guideline on quality of transdermal patches, 2014. 3. United States Pharmacopeia, Chapter <3>: Topical and Transdermal Drug Products, Product Quality Tests, 2016.
In addition to analytical and nonclinical studies, clinical PK/PD studies and comparative clinical efficacy studies may be conducted to assess whether there are clinically meaningful differences between a biosimilar product and the reference product from the PK/PD, efficacy, and immunogenicity perspectives. Bioequivalence margins and estimates for outcome variables, derived from the reference product labeling and the literature, are used in designing these biosimilar studies. However, because of the limited information available, the reference point estimate and the corresponding variation in the outcome variables may not be reliably estimated; for example, from an immunogenicity perspective, the sponsor may not have a reliable estimate of the anti-drug antibody rate for the reference product. Studies designed using such unreliably estimated parameters may be over- or under-powered, and adaptive designs are often proposed to address these limitations. In this session, presenters from FDA and industry will discuss their experience in using adaptive designs in biosimilar product development.
This session will be tied to the release of FDA's draft guidance on multiple endpoints in clinical trials and will serve as a forum to discuss key topics in the guidance document, including the analysis of multiple endpoints, composite endpoints, subgroup analyses, gatekeeping strategies, etc. A summary of the guidance document will be given by Dr. Lisa LaVange (FDA), and the guidance will be discussed by multiplicity experts from academia and industry. This will include an overview of general guidelines for multiplicity adjustment strategies in confirmatory trials with multiple clinical objectives. The session is aimed at a broad audience of statisticians involved in the design and analysis of confirmatory clinical trials. Speakers: Lisa LaVange, FDA; Ralph D'Agostino, Boston University; Alex Dmitrienko, Mediana Inc.
A 'promising zone' design allows a study's sample size to be increased when the unblinded interim estimate of the treatment effect looks promising. Despite many years of research and discussion in the regulatory and statistical literature, the general usefulness of the 'promising zone' design and the relative merits of different sample size re-assessment rules remain open to debate. The goal of the session is to discuss recent methodological work in this area. We will compare the relative attractiveness of different sample size re-estimation rules in different settings.
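Promising-zone rules are typically driven by conditional power evaluated at the interim look. A minimal sketch of this classification, assuming a normally distributed test statistic; the zone boundaries and all numbers are illustrative, not the values from any particular rule in the literature:

```python
# Illustrative sketch of a promising-zone classification based on
# conditional power under the "current trend" assumption.  z1 is the
# interim z-statistic at information fraction t; boundaries are invented.
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, t, z_crit=1.959964):
    """Conditional power at information fraction t, assuming the drift
    estimated from the interim z-statistic z1 continues."""
    # Under the current-trend assumption the expected final z-statistic
    # is z1 / sqrt(t); the remaining variance is (1 - t).
    return NormalDist().cdf((z1 / sqrt(t) - z_crit) / sqrt(1 - t))

def promising_zone(z1, t, lo=0.36, hi=0.80):
    """Classify the interim result; the zone bounds lo/hi are illustrative."""
    cp = conditional_power(z1, t)
    if cp >= hi:
        return "favorable"
    if cp >= lo:
        return "promising: increase sample size"
    return "unfavorable"
```

Only an interim result in the middle "promising" zone triggers a sample size increase; different published rules differ mainly in how the zone bounds and the new sample size are chosen, which is one source of the ongoing debate.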
The Bayesian approach is becoming increasingly powerful and popular in clinical trial design, monitoring, and analysis, and FDA has issued guidance on Bayesian statistical methods in the design and analysis of medical device clinical trials. To ensure that Bayesian methods are well understood and broadly utilized for design and analysis throughout the medical product development process, and to improve industrial, regulatory, and economic decision making, the Bayesian Scientific Working Group (BSWG) of the DIA was formed in 2011. Through the voluntary contributions of pharmaceutical company, contract research organization (CRO), academic, and federal agency statisticians, who provide their perspectives on more efficient drug development methods that maintain statistical integrity, the BSWG hopes to offer clearer solutions to commonly occurring obstacles that have limited industry confidence in this area.
A composite endpoint combining several outcomes of clinical interest is frequently used as the primary endpoint in clinical trials. A main advantage of such an endpoint is that its event rate is higher than that of any of its components alone, resulting in a smaller sample size. However, conventional statistical methods for composite endpoints suffer from two major limitations: first, all components are treated as equally important; second, in time-to-event analyses, the first event analyzed may not be the most important component. To address these limitations, statistical methods have recently been developed to construct and analyze composite endpoints that respect the order of clinical importance among the outcomes, such as (1) the net chance of a better outcome (or proportion in favor of treatment), (2) the weighted composite endpoint, and (3) the win ratio. This session will focus on the third approach, the win ratio.
The win ratio approach was introduced by Pocock et al. in 2012. It compares each patient in the treatment group with every patient in the control group to determine the winner, loser, or tie within each pair. Each pairwise comparison starts with the most important outcome, with lower-priority outcomes used only if higher-priority outcomes are missing or result in a tie. The win ratio is the ratio of the number of wins in the treatment group to the number of wins in the control group.
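The pairwise comparison described above can be sketched as follows. Outcome names and data are hypothetical; for simplicity, larger values are treated as better and censoring is ignored (a real time-to-event application must handle censored follow-up when deciding wins and ties).

```python
# Hypothetical sketch of the unmatched win ratio for two prioritized
# outcomes.  Outcome names and data are invented; larger values are
# treated as better and censoring is ignored for simplicity.

def compare(pat_t, pat_c, outcomes):
    """Return 1 if the treatment patient wins, -1 if loses, 0 if tied."""
    for key in outcomes:          # listed from most to least important
        if pat_t[key] > pat_c[key]:
            return 1
        if pat_t[key] < pat_c[key]:
            return -1
    return 0                      # tied on every outcome

def win_ratio(treated, control, outcomes):
    """Ratio of treatment wins to control wins over all pairs."""
    wins = losses = 0
    for t in treated:
        for c in control:
            r = compare(t, c, outcomes)
            wins += (r == 1)
            losses += (r == -1)
    return wins / losses

# Toy data: event-free days for the primary (survival) and secondary
# (hospitalization-free) outcomes.
treated = [{"survival": 30, "hosp_free": 20},
           {"survival": 25, "hosp_free": 15}]
control = [{"survival": 25, "hosp_free": 25},
           {"survival": 20, "hosp_free": 10}]
```

In this toy example the second treated patient ties the first control patient on survival, so the comparison falls through to the hospitalization-free outcome, illustrating how lower-priority outcomes are used only to break ties.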
During the past few years, the win ratio has been applied in the design and analysis of some clinical trials, and there have also been new methodological developments. This session will focus on (1) the generalized approach published by Dong et al. in 2016, which enables users to define winners, losers, and ties based on their specific study settings; (2) the stratified win ratio for analyzing clinical trials with stratified randomization (to be published), compared with the unweighted win ratio and an inverse-variance-weighted win ratio; and (3) statistical issues in using the win ratio in clinical trial design and analysis, as well as the interpretation of win ratio results.
We will illustrate and discuss these recent methodological developments and our practical experiences with clinical trial examples from the perspectives of academia, sponsors, and regulatory agencies.
Discussants: Junshan Qiu, FDA; James Hung, FDA
The development of biosimilars is an emerging area. Although several regulatory guidelines have been issued, the associated statistical methodologies continue to evolve; for example, statistical accommodations for the limited availability of biological materials and lots have been developed. In this session, statistical challenges and opportunities in biosimilar development will be highlighted. In particular, the thought process that led to the FDA recommendation of the tiered approach to analytical similarity testing and equivalence margin setting, and current EMA considerations of statistical methods for comparative assessment of quality attributes, will be discussed. The session consists of two presentations, one by an expert statistician representing an industry perspective and one by a statistician representing a regulatory perspective; together they will provide fresh insight into the application of, and issues surrounding, statistical approaches to analytical similarity.
Combination therapies have shown great success in recent oncology drug development programs, including combinations of immunotherapies in NSCLC and combinations of PIs/IMiDs with other drugs in multiple myeloma. Yet it is challenging to find the optimal combination of doses with an acceptable toxicity profile: combinatorial choices exist with respect to escalating or de-escalating the doses of one or more drugs. When multiple drugs are investigational, observed safety signals are difficult to attribute to individual drugs, which in turn makes escalation decisions difficult.
Another challenge arises with master protocol designs: within the traditional randomized clinical trial paradigm, hundreds of trials would be needed to test all possible combination therapies for different cancer types.
In this session, practical considerations for drug combination studies will be presented. A panel will discuss their perspectives on the following questions: 1. What are the panelists' experiences with, and recommendations about, rule-based and model-based designs in dose finding for combination therapies? 2. Are there any specifics to designing studies for molecularly targeted agents, in particular the validity of the assumption of a monotonically increasing relationship between dose and efficacy? 3. What are the special considerations for combination studies in immuno-oncology? 4. What are the regulatory issues when using novel trial designs (e.g., basket or umbrella) for combination therapies?
In this late-breaking session, Drs. Lisa LaVange (FDA CDER) and Deborah Ashby (Imperial College London) will discuss their journeys to becoming recognized leaders in statistics, a male-dominated scientific discipline. They will share the lessons they have learned about scientific research, career development, and effective leadership. Drs. Weili He (AbbVie) and Telba Irony (FDA CBER) will moderate this inspiring event.