All Times EDT
The ICH E9 (R1) addendum introduces the estimand, a precise description of the quantity to be estimated for a specific study objective in a clinical trial. The estimand framework requires alignment of the study objectives, study design, study conduct, data analysis, and interpretation of results, and a clearly defined estimand provides additional benefits in trial design, analysis, and interpretation. After the trial objective for a clinical study is identified, an estimand is defined through the specification of five attributes. One important attribute in the construction of an estimand is the handling of intercurrent events (IEs). Appropriate statistical methods, which may require certain assumptions about the IEs, need to be planned as the primary analyses to evaluate the estimand, and sensitivity analyses are then needed to assess those assumptions.
In this session, we will review the estimand framework and present examples of estimands and advanced statistical methods for handling IEs from different aspects and stages of clinical trials. Speakers from the pharmaceutical industry and a regulatory agency who are highly engaged in this area will share their recent research and experiences.
With an increasingly competitive and dynamic oncology space, advancing successful therapies and terminating unsuccessful therapies earlier is crucial. Additionally, traditional oncology clinical trial endpoints such as objective response rate and progression-free survival may not fully characterize the clinical benefit of innovative oncology products because of the atypical patterns of response (e.g., delayed response, durable response) and progression (e.g., pseudo-progression) observed with immune checkpoint inhibitors. These difficulties have generated enormous interest and research in novel early endpoints for oncology clinical trials. With an efficacy endpoint that can be measured early yet still fully characterize clinical benefit, the pharmaceutical industry and regulatory agencies hope to optimize compound development with go/no-go decisions, minimize the exposure of cancer patients to ineffective therapies, and expedite the development of oncology drug and biologic products. A great deal of research has been done in this area, including (1) developing alternative metrics that incorporate the atypical response and progression dynamics, response duration, and optimal thresholds to fully characterize the clinical benefit of immune checkpoint inhibitors; (2) validating the surrogacy of such metrics against the clinical endpoint of overall survival; and (3) evaluating minimal residual disease as an early endpoint to help the pharmaceutical industry plan its use as a biomarker in clinical trials or to support marketing approval in hematologic disease. In this session, two speakers from a regulatory agency and industry with extensive experience in oncology development will share their insights on new statistical developments and related applications in evaluating the efficacy of cancer therapies in the areas discussed above, followed by a panel discussion.
Reference intervals (RIs) represent estimated distributions of reference values from healthy populations of comparable individuals and are an integral component of laboratory diagnostic testing and clinical decision-making in human and veterinary medicine. Establishing RIs involves intricate study design and statistical issues, including how to select individuals for a reference population and how to report results from a reference database (e.g., linear regression, quantile regression, nonparametric estimation of stratified data). Recommendations for the establishment and use of RIs exist from the Clinical and Laboratory Standards Institute (CLSI) and the American Society for Veterinary Clinical Pathology (ASVCP), but challenges remain. This session is designed to present current practices and to discuss clinical and statistical challenges in the establishment and use of RIs.
This session will begin with an introductory presentation by a speaker from the University of Wisconsin-Madison highlighting the procedural steps for establishing de novo population-based RIs and bringing to light alternatives applicable to specific situations. A speaker from FDA's Center for Veterinary Medicine (CVM) will then share her experience using RIs in the review of animal drug applications; topics include the challenges and opportunities associated with the use of RIs in interpreting clinical pathology data from target animal safety studies in veterinary species. Finally, a speaker from FDA's Center for Devices and Radiological Health (CDRH) will discuss study design and statistical analysis issues for medical tests with RIs and summarize current developments and challenges.
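The simplest of the reporting options mentioned above is the nonparametric (rank-based) estimate of a central 95% reference interval, i.e., the 2.5th and 97.5th percentiles of the reference values. The sketch below illustrates only that core calculation; it is a hypothetical, minimal example that omits the outlier screening, partitioning, and confidence limits that CLSI/ASVCP recommendations also cover.

```python
import numpy as np

def nonparametric_reference_interval(values, coverage=0.95):
    """Rank-based estimate of a central reference interval from a
    sample of healthy-reference values (minimal sketch, not a
    validated implementation)."""
    values = np.asarray(values, dtype=float)
    alpha = (1.0 - coverage) / 2.0
    lower = np.quantile(values, alpha)          # e.g., 2.5th percentile
    upper = np.quantile(values, 1.0 - alpha)    # e.g., 97.5th percentile
    return lower, upper

# Simulated "healthy" analyte values for illustration
rng = np.random.default_rng(42)
ref = rng.normal(loc=100.0, scale=10.0, size=400)
lo, hi = nonparametric_reference_interval(ref)
```

Nonparametric estimation requires a reasonably large reference sample (CLSI suggests at least 120 individuals for percentile estimates of this kind), which is one reason the alternatives discussed in the session matter for small or stratified populations.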
A clinical trial with an extensive amount of missing data may not provide reliable evidence to support the efficacy or safety of an intervention. Although strategies and statistical methods for dealing with missing data are available and commonly applied, the need for novel statistical methods in innovatively designed small trials (e.g., in rare diseases and pediatric patient populations) is attracting increasing attention. For example, in evaluating a drug for a rare disease, a single-arm study may be considered with an external control, which might be formulated from historical trials or real-world data (RWD). How to handle the missing data in both the single-arm trial and the external control, and how to analyze the trial in their presence, is still debated among statisticians. Another challenging example is the handling of missing data in patient-reported outcomes (PROs). When a historical control is used, the versions of the PRO instruments may differ; because missing data can have a different impact under each version, the assessment of the treatment effect becomes more complex.
In this session, we will invite experts from regulatory agencies, academia, and the pharmaceutical industry to share their unique experiences and methods for trials with a significant amount of missing data. A panel discussion will follow to share experiences and explore potential solutions.
The COVID-19 public health emergency has impacted all clinical trials, including ANDA bioequivalence studies. Challenges arose from quarantines, site closures, travel limitations, interruptions to the supply chain for the investigational product, and more. These challenges led to difficulties in recruitment; in meeting protocol-specified procedures, including administering or using the investigational product and adhering to protocol-mandated visits and laboratory/diagnostic testing; and in the statistical analysis and interpretation of study results. Minimizing the impact of COVID-19 on the study conclusions while assuring the safety of trial participants, maintaining compliance with good clinical practice, and minimizing risks to trial integrity is an imposing task for everyone involved. In this session, statisticians from industry and the agency will discuss statistical considerations in the modification of study design, trial conduct, and statistical analysis in ANDA bioequivalence studies in the presence of an unprecedented public health emergency. The session will include two presentations followed by a discussion. The first speaker (Pina D'Angelo, VP from Innovaderm) will discuss how to handle missing data caused by the pandemic in efficacy and equivalence studies; various imputation methods will be compared, and case studies will illustrate the impact of different imputation methods on efficacy and bioequivalence results. The second speaker (Wanjie Sun, Lead Mathematical Statistician from FDA) will discuss the challenges COVID-19 poses for bioequivalence studies and offer general recommendations, from study design to statistical analysis, for PK and clinical bioequivalence studies; in particular, Dr. Sun will focus on adaptive designs for clinical-endpoint BE studies to cope with the impact of the pandemic. Dr. Stella Grosser (Division Director of OB/DBVIII, FDA) will discuss the two presentations from regulatory and statistical perspectives.
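To make concrete why the choice of imputation method matters for a bioequivalence conclusion, the sketch below simulates a hypothetical crossover-style PK study in which pandemic-related dropout is related to exposure, and compares a complete-case geometric mean ratio (GMR) with a naive mean-imputation GMR. The data, missingness mechanism, and methods are illustrative assumptions, not the speakers' actual case studies.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
log_ref = rng.normal(4.0, 0.25, n)                           # log AUC, reference product
log_test = log_ref + np.log(0.95) + rng.normal(0, 0.10, n)   # true GMR about 0.95

# Assumed missingness: subjects with high reference exposure are more
# likely to miss the test period (a missing-not-at-random scenario).
miss = rng.random(n) < np.where(log_ref > 4.0, 0.4, 0.05)
log_test_obs = np.where(miss, np.nan, log_test)

# Method 1: complete-case analysis (keeps the within-subject pairing)
cc = ~np.isnan(log_test_obs)
gmr_cc = np.exp(np.mean(log_test_obs[cc] - log_ref[cc]))

# Method 2: naive mean imputation of missing test-period values
# (breaks the pairing for the imputed subjects)
imputed = np.where(np.isnan(log_test_obs),
                   np.nanmean(log_test_obs), log_test_obs)
gmr_mean = np.exp(np.mean(imputed - log_ref))
```

Under this assumed mechanism the two estimates diverge, which is exactly the kind of sensitivity the case-study comparisons in the session are designed to expose.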
Adaptive clinical trial designs have been employed in clinical development for a few decades. In 2010, FDA issued its first draft guidance for industry on adaptive designs, signaling both the growth of and the demand for this innovation. While the drug development community has benefited, the complexity of adaptive designs also poses great challenges; in response, FDA released two new guidance documents, on adaptive designs and on complex innovative designs, in 2019. In a review of EMA and FDA advice letters over more than five years regarding proposed adaptive phase II or phase II/III trials, 20% of the 59 studies were not accepted. Concerns that led to rejection by the agencies included insufficient justification for the adaptation, inadequate Type I error rate control, and study bias (Van Norman, 2019; Elsäßer et al., 2014). The time required for reviews of adaptive trials by the FDA and EMA was a median of 12.2 and 14 months, respectively, 6 to 7 weeks longer than the review times for non-adaptive studies. Frequently cited issues in reviews of adaptive designs were inadequate statistical power, risk of ineffectively evaluating doses, Type I error concerns, and inadequate blinding (Van Norman, 2019).
In this session, well-known experts in adaptive designs from both industry and regulatory agencies will share their real-world experiences with design implementation and sponsor-regulator interactions. Participants will benefit from the success stories as well as the lessons learned from each presenter. A comprehensive review of adaptive designs and case studies in a variety of therapeutic areas will be presented, and future directions for adaptive design will also be discussed.
Drug development involving small populations, such as orphan diseases and pediatric conditions, presents unique challenges and opportunities. Although several incentive programs exist to expedite the approval process for such development, the same regulatory (FDA) approval standard applies as for common conditions, i.e., substantial evidence of the effectiveness and safety of the drug. However, recruiting pediatric patients with rare conditions is difficult because of the limited number of patients and other ethical concerns. Given the requirement for substantial evidence coupled with limited access to patients in small populations, innovative designs and analyses are much needed to leverage historical data when appropriate, optimize the value of patient data, and minimize patient risk. For some rare diseases, patients experience recurrent episodes of the outcome or event of interest, and drug developers occasionally consider a design that incorporates multiple episodes; however, within-patient correlation and its impact on the efficacy evaluation are often not considered or are underestimated. We will discuss several innovative methods for borrowing historical data and for using Bayesian sequential methods in small populations, and highlight applications. One presentation will report a simulation study exploring the operating characteristics of designs and analyses that include recurrent episodes; various correlation structures and recurrence rates were considered and their impacts assessed, and both the gains and the concerns of such designs will be discussed. An academic presenter will discuss a Bayesian sequentially monitored trial that allows early stopping for efficacy or futility. Another presenter will share research and experience on using a Bayesian two-stage adaptive design that enriches trial decision-making by borrowing information from an adult trial, through a case example.
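One widely used device for the kind of historical borrowing described above is the power prior, in which the historical likelihood is raised to a discount power a0 between 0 (ignore the historical trial) and 1 (pool it fully). The sketch below shows the conjugate Beta-Binomial case with hypothetical counts; it illustrates the mechanics only and is not any presenter's actual design.

```python
def power_prior_posterior(x_cur, n_cur, x_hist, n_hist, a0, a=1.0, b=1.0):
    """Posterior Beta(alpha, beta) for a response rate, discounting
    historical data by power a0 on top of a Beta(a, b) initial prior."""
    alpha = a + a0 * x_hist + x_cur
    beta = b + a0 * (n_hist - x_hist) + (n_cur - x_cur)
    return alpha, beta

def post_mean(ab):
    alpha, beta = ab
    return alpha / (alpha + beta)

# Small current trial: 8/20 responders; historical control: 30/100.
no_borrow = power_prior_posterior(8, 20, 30, 100, a0=0.0)
half_borrow = power_prior_posterior(8, 20, 30, 100, a0=0.5)
full_pool = power_prior_posterior(8, 20, 30, 100, a0=1.0)
```

As a0 increases, the posterior mean is pulled from the current-trial rate (8/20 = 0.40) toward the historical rate (30/100 = 0.30), and the posterior sharpens, which is precisely the efficiency-versus-bias trade-off these small-population designs must justify.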
As new statisticians start their careers, or as statistical leaders progress to new and more senior levels, there is a need to learn and adapt leadership skills. Often, the skills that enabled a statistician's success in his or her current role will not make him or her fully effective at a level with greater scope and responsibility without some adjustments to the skill set and the way those skills are applied. Statistical thought leaders from academia, industry, FDA, and consulting services will share their experience and perspectives, examining key characteristics that leaders need across different leadership levels and discussing how those characteristics evolve to enhance career development, including the balance and blending of technical and "softer" skills. This town hall session will include a panel and open audience engagement to raise awareness of the importance of soft skills for career advancement and to explore practical guidance on how to lead in one's current role while looking ahead for opportunities to develop the leadership skills needed for the next role at different stages of one's career.
Questions to be discussed: 1. What skills are foundational to leadership as a statistician? In other words, what skills developed early in one's career will continue to be important as the statistician progresses to more senior levels? 2. How does one's communication style need to change when progressing to more senior levels? 3. Which leadership transition is the most difficult? 4. What leadership skills do senior leaders need that leaders at earlier levels did not?
Nonalcoholic steatohepatitis (NASH) is associated with an increased risk of liver-related mortality and cardiovascular disease. Currently, a liver biopsy is the only generally accepted method for diagnosing NASH and assessing its progression toward cirrhosis, and no drug treatments are currently approved for NASH. A therapeutic agent therefore needs to be developed as quickly as possible, using expedited regulatory pathways and innovative study designs. Many challenges exist in the clinical development of a therapeutic for NASH: a majority of patients are undiagnosed, the assessment of disease progression is uncertain, and sequential liver biopsies are required. In a published paper (Filozof et al., 2017), the authors present an approach to the clinical development of a therapeutic for NASH involving adaptive seamless clinical trial designs, consideration of surrogate endpoints that are "reasonably likely" to be associated with outcomes, and the accelerated approval pathway. Three adaptive seamless design approaches have been proposed: Phase 2/3, Phase 3/4, and Phase 2/3/4. These designs may include interim analyses and/or sample size re-estimation. A seamless design may reduce the overall sample size of the trial while allowing patients to continue after each phase, taking advantage of the limited number of patients willing to undergo multiple liver biopsies. An adaptive seamless design would need to include a process to control the overall Type I error rate across the multiple stages. The procedure for independent review of biopsies will also need to be addressed, specifically the reading criteria, how to handle discrepancies, and their impact on the analysis. In this session, the presenters will review study design options for NASH, including interim analyses, clinical endpoints (including surrogate endpoints), and sample size considerations at each phase of an adaptive seamless design.
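One standard way a two-stage seamless design can control the overall Type I error rate, despite an interim adaptation, is to combine independent stage-wise p-values with pre-specified weights via the inverse-normal combination test. The sketch below shows the basic calculation with illustrative weights and p-values; it is a generic device, not the specific procedure proposed for any NASH trial.

```python
from statistics import NormalDist

_nd = NormalDist()

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine two independent stage-wise one-sided p-values.
    Weights w1 + w2 = 1 must be fixed before the interim look;
    returns the combined z statistic and combined p-value."""
    z = (w1 ** 0.5) * _nd.inv_cdf(1 - p1) + (w2 ** 0.5) * _nd.inv_cdf(1 - p2)
    return z, 1 - _nd.cdf(z)

# Illustrative stage-wise results: p1 from Phase 2 portion, p2 from Phase 3
z, p_combined = inverse_normal_combination(0.04, 0.03)
```

Because the combination rule and weights are fixed in advance, the combined test keeps its nominal level even if the second stage is adapted (e.g., the sample size is re-estimated) based on the first-stage data.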
Over its long history, the Biopharmaceutical (BIOP) Section of the American Statistical Association (ASA) has fostered community and created a shared sense of purpose among statisticians in the medical product industry. BIOP was founded as the Pharmaceutical Subsection of the Biometrics Section in 1968. In response to the subsection's tremendous growth in membership over the succeeding 12 years, it was granted full section status in February 1980. This approval came with one condition: that at least 500 ASA members join the new Section. Once that membership milestone was reached, the Biopharmaceutical Section formally came into existence on January 1, 1981. Since its inception, the Section has been one of the largest and most active within the ASA. BIOP sponsors the annual Regulatory-Industry Statistics Workshop and the biennial Nonclinical Biostatistics Conference, both of which have been very successful at creating community among statisticians across government, industry, and academia. BIOP members receive valuable benefits such as web-based training courses; paper, poster, and scholarship awards; membership in scientific working groups; leadership training; and mentoring services. This panel discussion will celebrate the 40th anniversary of the Biopharmaceutical Section and discuss how the Section will continue to lead the way in addressing the challenges of medical product development in the 21st century. Six past chairs will participate in the panel.
Heterogeneous diseases are medical conditions that have several etiologies or phenotypes or that present high variability in clinical manifestation. Heterogeneity adds complexity for clinical researchers conducting clinical studies, particularly in the choice of endpoints. In COVID-19, for example, severely ill hospitalized patients may have difficulty recovering within 14 days of receiving treatment but may still benefit by not progressing to ventilator use or death. In general, patients with a specific phenotype may do well on one specific endpoint, but that endpoint may show little effect for patients in the overall population exhibiting other phenotypes. If a single endpoint is used, the benefit gained on the other individual outcomes is not counted and is diminished. The standard approach in this case is to define a composite outcome based on the individual outcomes; for COVID-19, the composite outcome might be time to either ventilator use or death. However, such an approach treats all individual outcomes equally, ignoring the differences in severity between them (e.g., death is worse than ventilator use), and a less important outcome may "mask" a more important one. In contrast, a win ratio analysis accounts for the relative priorities of the components. More precisely, the win ratio is the ratio of the number of times patients on the investigational arm fared better versus worse (wins versus losses) in pairwise comparison with the control arm over a set of individual outcomes ranked by clinical importance (i.e., the odds of "winning"). In this session, we will focus the discussion on application examples and research angles on the win ratio and its use in heterogeneous diseases, or in diseases where benefit is derived in multiple domains. We will also discuss novel and open statistical issues in this area, including confidence intervals, the impact of censoring, and competing risks.
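The pairwise definition above can be sketched in a few lines. The toy example below uses two prioritized binary outcomes (death, then ventilator use) with hypothetical patients: each treated patient is compared with each control patient, ties on the top-priority outcome are broken by the next outcome, and pairs tied on both are ignored in the ratio. Real win ratio analyses additionally handle censored time-to-event outcomes, which this sketch omits.

```python
def win_ratio(trt, ctl):
    """trt, ctl: lists of (death, vent) indicators (1 = event occurred).
    Fewer events is better, and death outranks ventilator use."""
    wins = losses = 0
    for t_death, t_vent in trt:
        for c_death, c_vent in ctl:
            if t_death != c_death:        # decide on the top-priority outcome
                wins += t_death < c_death
                losses += t_death > c_death
            elif t_vent != c_vent:        # tie on death: use ventilator use
                wins += t_vent < c_vent
                losses += t_vent > c_vent
    return wins / losses                  # wins-to-losses ratio

# Hypothetical data: (death, ventilator use) per patient
trt = [(0, 0), (0, 1), (0, 0), (1, 1)]
ctl = [(0, 0), (1, 1), (0, 1), (1, 1)]
wr = win_ratio(trt, ctl)
```

Here the treated arm wins 8 of the 16 pairwise comparisons and loses 3 (the rest are ties), giving a win ratio of 8/3, with death always settled before ventilator use rather than the two being pooled as an undifferentiated composite.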
Since 2014, the use of synthetic controls, defined as a statistical method that evaluates the comparative effectiveness of an intervention using a weighted combination of external controls, has been growing in successful regulatory submissions in rare diseases and oncology. In the next few years, the utilization of synthetic controls is likely to increase further within the regulatory space, owing to co-occurring improvements in medical record collection, statistical methodologies, and sheer demand.
As fully powered randomized controlled trials are generally infeasible in these settings, novel and efficient analytic strategies are sought with every challenge these applications encounter. Some existing methods and applications revolve around using synthetic control patients from the control arms of earlier-phase trials or from patient registries, incorporating prior information from early-phase trials, computing shrinkage estimators of effects in rare disease subsets, sharing control groups across protocols, and so on. In this session, we will focus on new, defensible strategies arising from existing or novel applications of synthetic controls, emphasizing practical relevance and real applications rather than the theoretical elegance of the methods. Examples of these strategies include dynamic adaptive designs that calibrate the use of synthetic controls at the interim analysis; statistical methods for multiple controls, such as multiple-treatment propensity scores and matching within quantiles of serial propensity scores; composite likelihoods with propensity score weights; and enhanced doubly robust methods for propensity score weighting of synthetic controls. The speakers will also discuss examples seen in the regulatory setting and offer perspective on the pitfalls of these methodologies.
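The propensity score weighting idea underlying several of these strategies can be sketched simply: external controls are reweighted by e/(1 - e), where e is the estimated probability of being in the trial given covariates, so that the weighted control population resembles the trial arm. In the hypothetical simulation below the propensity scores are taken as given and the outcome depends on them; a real application would estimate e from covariates (e.g., by logistic regression) and add overlap and balance diagnostics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ctl = 500

# Assumed known propensity scores e = P(trial membership | covariates)
e = rng.uniform(0.1, 0.9, n_ctl)

# Simulated external-control outcomes that depend on the same covariates
outcome = 1.0 + 2.0 * e + rng.normal(0, 0.2, n_ctl)

# ATT-style weights: upweight controls that resemble trial patients
w = e / (1.0 - e)

naive_mean = outcome.mean()                     # unadjusted external-control mean
weighted_mean = np.average(outcome, weights=w)  # reweighted toward the trial profile
```

The gap between the naive and weighted control means shows the confounding that reweighting is meant to remove; the doubly robust variants mentioned above add an outcome model so the estimate stays consistent if either model (propensity or outcome) is correct.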
In precision medicine, which targets the right treatments to the right patients at the right time, a tri-center draft guidance by CDRH, CDER, and CBER recommends that therapeutic products be co-developed with their in vitro diagnostic (IVD) companion diagnostics (CDx). Because a drug and its CDx are often developed by different companies in different phases, a lag may occur in which a market-ready CDx test is unavailable for enrollment in the drug's pivotal trial, which may ultimately affect drug approval for lack of a cleared CDx. In such a situation, a clinical trial assay (CTA, a prototype of the CDx) is used to enroll patients while the proposed CDx assay undergoes analytical validation. Hence there is a need to design an IVD bridging study between the CTA and the CDx to demonstrate that the performance characteristics of the proposed CDx are similar to those of the CTA, particularly with respect to analytical concordance and clinical utility. The validation steps should reflect an unbiased intent-to-treat population as defined by CDx results, and in pivotal clinical trials the original trial samples can serve as a sample pool for the validation. Challenges arise in estimating efficacy for the CDx-defined intent-to-treat population from the results of a pivotal trial enrolled by CTA status and a bridging study: because of the discrepancy between CTA and CDx, the efficacy data from the pivotal trial can be missing or biased for the CDx-defined population and need to be adjusted for CDx status. Such an adjustment depends on many factors, e.g., the prevalence of the biomarker status (e.g., CDx+), the level of agreement between CDx and CTA, and the availability of trial samples for retesting. These challenges call for special attention to trial design when concurrently validating a drug and its CDx. In this session, we will present an overview of IVD CDx bridging study designs and address the challenges and possible solutions. The speakers are from FDA, pharmaceutical companies, and device companies.
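The core of the adjustment problem can be made concrete with a simplified, hypothetical sketch. Because the pivotal trial enrolled by CTA status, CTA-/CDx+ patients were never observed, so the CDx+ intended-use response rate must be expressed as a mixture of the observed CTA+/CDx+ rate and an assumed rate for the unobserved group; all counts and fractions below are illustrative assumptions, and this is exactly the kind of assumption that calls for sensitivity analysis.

```python
def adjusted_cdx_pos_rate(rate_cta_pos_cdx_pos,
                          assumed_rate_cta_neg_cdx_pos,
                          frac_cta_neg_given_cdx_pos):
    """Response rate in the full CDx+ population as a mixture of the
    observed CTA+/CDx+ rate and an assumed rate for the unobserved
    CTA-/CDx+ patients."""
    f = frac_cta_neg_given_cdx_pos
    return (1 - f) * rate_cta_pos_cdx_pos + f * assumed_rate_cta_neg_cdx_pos

# Observed: 90/150 responders among retested CTA+/CDx+ trial patients
rate_obs = 90 / 150
# Suppose the bridging study suggests ~10% of CDx+ patients would screen CTA-
optimistic = adjusted_cdx_pos_rate(rate_obs, rate_obs, 0.10)  # same-rate assumption
pessimistic = adjusted_cdx_pos_rate(rate_obs, 0.0, 0.10)      # no-response assumption
```

Scanning the assumed CTA-/CDx+ response rate between these extremes gives a simple tipping-point analysis: the adjusted estimate moves from 0.60 down to 0.54, quantifying how much the drug-CDx efficacy claim depends on the CTA/CDx discordance and the biomarker prevalence.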
Real-world evidence (RWE) has been highlighted by regulatory decision-makers as an important source to complement clinical trials and to bring new insights for drug development and postmarket research. Recently, the underlying data supporting the generation of RWE have been rapidly expanding from traditional clinical care and electronic health record (EHR) sources to mobile devices, biosensors, and other wearables. The breadth and heterogeneity of real-world data, and its position within the big data landscape, unlock an important opportunity to consider new analytic capabilities that assist drug development, particularly machine learning (ML) and its role in the generation of RWE. In 2019, the FDA published a discussion paper highlighting its approach to premarket review for ML-driven analytics. The paper envisions a regulatory framework for evaluating future contributions of ML and, in particular, focuses on ML's unique property of adapting to data dynamics over time in ways that traditional statistical approaches fail to capture. Recent discussions between the FDA and the Patient Engagement Advisory Committee (PEAC) surfaced a need to concretely define standards for model validation, for evaluating the training data population, and for the level of communication and transparency necessary in filings that leverage ML algorithms. This session seeks to discuss the validation and best practices critical to the assessment of ML methods for regulatory use, while also surfacing ML methods that have successfully assisted regulatory decision-making in the past, including methods for increasing sample sizes and the efficiency of cohort selection, assisting novel variable development and the handling of missing data, and ensemble approaches to improving comparative effectiveness estimates.
Randomized clinical trials are traditionally conducted to demonstrate the efficacy and safety of medical products. For many disease indications, a high placebo response in clinical trials can hamper the detection of a treatment effect. Enrichment designs and other innovative designs have been proposed to address this issue and concerns over the inadequate recruitment of patients for pediatric and rare disease trials.
Enrichment designs, such as sequential parallel comparison designs (SPCD) and sequential multiple assignment randomized trials (SMART), which operate by means of re-randomization, have recently been implemented in clinical trials and have shown success and efficiency. However, the advantages of re-randomization in enrichment designs and its practical use still need to be explored further.
This session is intended for audiences who are interested in clinical trials that implement re-randomization and who want to gain a clear understanding of the utility and technical challenges. Speakers from regulatory agencies, pharmaceutical companies, and academia will share their latest research, practical trial examples, and potential solutions.