Online Program

Wed, Sep 28

SC1 Short Course 1: Writing Clinical Trial Simulators

09/28/16
8:30 AM - 12:00 PM

Organizer(s): Tom Parke, Berry Consultants

Instructor(s): Scott Berry, Berry Consultants; Anna McGlothlin, Berry Consultants

The use of simulation is growing in clinical trial design as an augmentation of, or replacement for, sample size calculations. It is usually employed in order to estimate operating characteristics of the planned trial beyond simple power, to estimate actual type-1 error and power while taking into account factors such as the effect of dropouts, and to enable trial designs, such as adaptive designs, that are too complex for sample size calculation. This course will introduce statisticians to the software-engineering skills required to write and use a trial simulator. It is intended for statisticians with some experience of programming in R, and will introduce them to the design considerations of writing a simulator in R and the testing skills necessary to show that the simulator is correct. It will also introduce statisticians with some experience of trial design to thinking like an engineer and using simulations to test and guide design decisions: how important it is to be able to present single simulations, and how to present operating characteristics from many simulations.
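As a sketch of the kind of simulator the abstract describes (written here in Python rather than the course's R; function names and the simple normal-outcome model are illustrative assumptions, not the course's code), one can estimate type-1 error and power, including the effect of dropouts, by repeatedly simulating a two-arm trial:

```python
import numpy as np

def simulate_trial(n_per_arm, effect, sd=1.0, dropout=0.0, rng=None):
    """Simulate one two-arm trial; return True if a one-sided z-test
    (alpha = 0.025) rejects. Dropouts are removed before analysis."""
    rng = rng if rng is not None else np.random.default_rng()
    n_ctl = n_per_arm - rng.binomial(n_per_arm, dropout) if dropout > 0 else n_per_arm
    n_trt = n_per_arm - rng.binomial(n_per_arm, dropout) if dropout > 0 else n_per_arm
    control = rng.normal(0.0, sd, n_ctl)
    treated = rng.normal(effect, sd, n_trt)
    se = np.sqrt(control.var(ddof=1) / n_ctl + treated.var(ddof=1) / n_trt)
    z = (treated.mean() - control.mean()) / se
    return z > 1.96  # critical value for one-sided alpha = 0.025

def operating_characteristics(n_sims, n_per_arm, effect, dropout=0.0, seed=1):
    """Monte Carlo rejection rate: type-1 error when effect == 0, power otherwise."""
    rng = np.random.default_rng(seed)
    hits = sum(simulate_trial(n_per_arm, effect, dropout=dropout, rng=rng)
               for _ in range(n_sims))
    return hits / n_sims

print(operating_characteristics(10_000, 100, effect=0.0))              # type-1 error
print(operating_characteristics(10_000, 100, effect=0.5, dropout=0.1)) # power with dropouts
```

Presenting a single simulated trial (one run of `simulate_trial` with its data retained) helps check that the simulator is behaving sensibly before summarizing many runs into operating characteristics.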


SC2 Short Course 2: Bayesian Biopharmaceutical Applications Using SAS®

09/28/16
8:30 AM - 12:00 PM

Organizer(s): Fang Chen, SAS Institute Inc.

Instructor(s): Fang Chen, SAS Institute Inc.; Guanghan Frank Liu, Merck & Co. Inc.

This half-day tutorial is intended for statisticians who are interested in Bayesian computation. We first introduce the general-purpose MCMC simulation procedure in SAS (SAS/STAT® 14.1), then present a number of pharma-related data analysis examples and case studies in the second part. The objective is to equip attendees with useful Bayesian computational tools through a series of worked-out examples drawn from situations often encountered in the pharmaceutical industry. The MCMC procedure is a general-purpose simulation tool designed to fit a wide range of Bayesian models, including linear and nonlinear models, multilevel hierarchical models, models with nonstandard likelihood functions or prior distributions, and missing data problems. The first part of the tutorial briefly introduces PROC MCMC and demonstrates its use with simple applications, such as Monte Carlo simulation, regression models, and random-effects models. The second part takes a topic-driven approach to explore a number of case studies in the pharmaceutical field. Topics include posterior predictions, use of historical information, survival analysis, analysis of missing data, and Bayesian design and simulation. Attendees should have a basic understanding of Bayesian methods and experience using the SAS language. The tutorial does not cover basic concepts of Bayesian inference.

Outline

Part I: Introduction to the MCMC Procedure (1–1.5 hours)
A. Background and How the Procedure Works: discusses the ins and outs of the procedure.
B. Basic Tools: explains the syntax and distributions that form the basic building blocks of PROC MCMC.
C. Improving Markov Chain Mixing: discusses convergence and some tricks to improve MCMC convergence.
D. Some How-To Applications Using PROC MCMC: covers examples and how-tos in Monte Carlo simulation, regression models, multilevel random-effects models, etc.

Part II: Applications and Case Studies (2–2.5 hours)
A. Evaluating Biomarker Cutoffs: shows how to use Bayesian simulation to evaluate putative cutoffs for a biomarker in predicting a clinical response.
B. Bayesian Predictive Probability Designs for Phase IIA Trials: illustrates how to construct a Bayesian predictive probability design for Phase IIA clinical trials.
C. Posterior Predictions: introduces the concept of Bayesian posterior predictions, and illustrates model checking and prediction with new covariates using PROC MCMC.
D. Using Historical Information: discusses various ways of borrowing information from historical data and constructing appropriate priors in the current analysis. Approaches include parametric and nonparametric approximation, the meta-analytic prior, the power prior with fixed weight, and the normalized power prior.
E. Evaluating a Basket Clinical Trial Design: explores a basket adaptive design that evaluates the drug response in a multi-arm study and selects cohorts for further study.
F. Repeated Measurement Models in Drug Safety Evaluation: presents a case study of a three-level hierarchical model that examines drug safety evaluation.
G. Case Studies of Missing Data Analysis: reviews case studies of fitting selection models and pattern-mixture models, and presents an analysis with a control-based imputation method using PROC MCMC.
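PROC MCMC itself is SAS-specific, but the sampler underneath is generic. As a language-agnostic illustration (a Python sketch, not the tutorial's code), a minimal random-walk Metropolis sampler for a normal mean with a weak normal prior can be checked against the known conjugate answer:

```python
import numpy as np

def log_posterior(theta, data, prior_mean=0.0, prior_var=100.0, sigma2=1.0):
    # log prior + log likelihood, up to an additive constant
    lp = -0.5 * (theta - prior_mean) ** 2 / prior_var
    ll = -0.5 * np.sum((data - theta) ** 2) / sigma2
    return lp + ll

def metropolis(data, n_iter=20_000, step=0.5, seed=1):
    """Random-walk Metropolis; returns the second half of the chain (burn-in discarded)."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < log_posterior(prop, data) - log_posterior(theta, data):
            theta = prop
        chain[i] = theta
    return chain[n_iter // 2:]

data = np.random.default_rng(0).normal(1.2, 1.0, 50)
chain = metropolis(data)
print(chain.mean())  # close to the conjugate posterior mean (essentially the sample mean here)
```

With a prior variance of 100 the prior is nearly flat, so the posterior mean should sit almost exactly on the sample mean, which is the kind of check the tutorial's convergence material formalizes with diagnostics.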


SC3 Short Course 3: An Overview of Methods to Assess Data Integrity in Clinical Trials

09/28/16
8:30 AM - 12:00 PM

Organizer(s): Marc Buyse, CluePoints; Jeffrey L Joseph, Chiltern International; Richard C Zink, JMP Life Sciences, SAS Institute

Instructor(s): Marc Buyse, CluePoints; Paul Schuette, FDA; Richard C Zink, JMP Life Sciences, SAS Institute

The quality of data from clinical trials has received a great deal of attention in recent years. Of central importance is the need to protect the well-being of study participants and maintain the integrity of final analysis results. However, traditional approaches to assess data quality have come under increased scrutiny as providing little benefit for the substantial cost. Numerous regulatory guidance documents and industry position papers have described risk-based approaches to identify quality and safety issues. An emphasis on risk-based approaches forces the sponsor to take a more proactive approach to quality through a well-defined protocol and sufficient training and communication, and by highlighting those data most important to patient safety and the integrity of the final study results. Identifying problems early allows sponsors to refine procedures to address shortcomings while the trial is ongoing. The instructors of this short course will provide an overview of recent regulatory and industry guidance on data quality, and explore issues involving data standards and integration, sampling schemes for source data verification, risk indicators and their corresponding thresholds, and analyses to enrich sponsor insight and site intervention. In addition, statistical and graphical algorithms used to identify patient- and investigator-level trial misconduct and other data quality issues will be presented, and corresponding multiplicity considerations will be described. To supplement concepts, this course will provide numerous practical illustrations and describe examples from the literature. The role of statisticians in assessing data quality will be discussed.

1. Regulatory Landscape (Schuette)
2. Background (Zink)
   a. Recent history
   b. TransCelerate
   c. Classification of risk-based approaches
   d. Definitions
   e. Data sources and data standards
   f. Prospective approaches to quality
   g. Role of the statistician and why we should care
   h. Sampling approaches for source data verification
3. Supervised Methods (TransCelerate) (Zink)
   a. Risk indicators and thresholds
   b. Graphical approaches
   c. Advanced analyses
4. Unsupervised Methods of Statistical Monitoring (Zink)
   a. Patient- and site-level analyses
   b. Graphical approaches
   c. Multiplicity, power, and sample size considerations
5. Unsupervised Methods Using All Data (Buyse)
   a. Patient-, site-, and country-level analyses
   b. Graphical approaches
   c. Multiplicity and power considerations
   d. Scoring and prioritizing centers for audits
   e. Experience with these methods
6. Conclusions (Zink)
   a. Review cycle
   b. Models for centralized review
7. References


SC4 Short Course 4: Statistical methods and software for multivariate meta-analysis

09/28/16
8:30 AM - 12:00 PM

Organizer(s): Yong Chen, University of Pennsylvania

Instructor(s): Yong Chen, University of Pennsylvania; Haitao Chu, University of Minnesota

Outline/Description: Comparative effectiveness research aims to inform health care decisions concerning the benefits and risks of different diagnosis and treatment options. The growing number of assessment instruments and treatment options for a given condition, as well as the rapid escalation in their costs, has generated an increasing need for scientifically rigorous comparisons of diagnostic tests and multiple treatments in clinical practice via multivariate meta-analysis. The overall goal of this short course is to give an overview of cutting-edge and robust multivariate meta-analysis methods that enhance the consistency, applicability, and generalizability of meta-analyses of diagnostic tests and multiple treatment comparisons. A number of case studies with detailed annotated SAS and R code will be presented. The outline is as follows:

A. Methods and Software for Meta-analysis of Diagnostic Tests (2 hours)
   (1) When the reference test can be considered a gold standard (1 hour)
       • Bivariate general linear mixed and generalized linear mixed models
       • Trivariate model that accounts for disease prevalence
       • Alternative parameterizations
   (2) When the reference test cannot be considered a gold standard (1 hour)
       • Random effects models
       • Bayesian HSROC models
       • Unification of the previous two models
B. Methods and Software for Meta-analysis of Multiple Treatment Comparisons (2 hours)
   (1) Contrast-based network meta-analysis (1 hour)
   (2) Arm-based network meta-analysis (1 hour)


SC5 Short Course 5: Introduction to clinical trial optimization to enable better decision making

09/28/16
1:30 PM - 5:00 PM

Organizer(s): Alex Dmitrienko, Mediana Inc.

Instructor(s): Alex Dmitrienko, Mediana Inc.

This half-day course focuses on a broad class of statistical problems related to optimizing the design and analysis of Phase II and III trials (Dmitrienko and Pulkstenis, 2016). This general topic has attracted much attention across the clinical trial community due to increasing pressure to reduce implementation costs and shorten timelines. The Clinical Scenario Evaluation (CSE) framework (Benda et al., 2010) will be described in this short course to formulate a general approach to clinical trial optimization and decision making. The CSE framework facilitates the comparison of competing options for clinical development programs and clinical trial designs/analyses. The concept includes three different elements, namely the set of underlying assumptions (data models), the options to be assessed (analysis models), and the metrics used for the assessment (evaluation models). Using the CSE approach, the main objectives of clinical trial optimization will be formulated, including selection of clinically relevant optimization criteria, identification of sets of optimal and nearly optimal values of the parameters of interest, and sensitivity assessments. Key principles of clinical trial optimization will be illustrated using a number of problems that often arise in Phase II and III clinical trials. These include optimal selection of analysis strategies that involve multiplicity adjustments (Dmitrienko et al., 2009; Dmitrienko, D’Agostino and Huque, 2013), optimal selection of design elements in adaptive clinical trials (Dmitrienko et al., 2016), and selection of patient subgroups in enrichment designs. The short course will focus on a frequentist perspective but will also introduce the Bayesian approach to clinical trial optimization (including probability of success, or assurance, calculations). Clinical trial optimization methods make heavy use of clinical trial simulations. Simulation-based approaches to evaluating trial designs and analysis methods will be discussed. Practical solutions that the participants can quickly apply to address real-life challenges in clinical trials will be emphasized throughout this short course. Multiple case studies based on real Phase II and III trials will be used, e.g., clinical trials with multiple endpoints and dose-placebo comparisons, and trials with adaptive and enrichment designs. Software tools for applying optimization methods will be presented, including R software (the Mediana package) and a Windows application with a graphical user interface (the MedianaPro application).

References

Benda, N., Branson, M., Maurer, W., Friede, T. Aspects of modernizing drug development using clinical scenario planning and evaluation. Drug Information Journal. 44, 299-315, 2010.
Dmitrienko, A., Tamhane, A.C., Bretz, F. (editors). Multiple Testing Problems in Pharmaceutical Statistics. Chapman and Hall/CRC Press, New York, 2009.
Dmitrienko, A., D’Agostino, R.B., Huque, M.F. Key multiplicity issues in clinical drug development. Statistics in Medicine. 32, 1079-1111, 2013.
Dmitrienko, A., Paux, G., Pulkstenis, E., Zhang, J. Tradeoff-based optimization criteria in clinical trials with multiple objectives and adaptive designs. Journal of Biopharmaceutical Statistics. 2016. To appear.
Dmitrienko, A., Pulkstenis, E. (editors). Clinical Trial Optimization Using R. Chapman and Hall/CRC Press, New York (expected to be published in 2016).
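The probability-of-success (assurance) calculation mentioned in the abstract averages conditional power over a prior on the treatment effect. A minimal Monte Carlo sketch in Python (illustrative function names and a normal prior assumed; the course's own tools are the Mediana R package):

```python
import math
import random

def power_one_sided(effect, n_per_arm, sd=1.0, z_alpha=1.96):
    """Analytic power of a one-sided z-test (alpha = 0.025) for a two-arm trial."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    z = effect / se - z_alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

def assurance(n_per_arm, prior_mean, prior_sd, n_draws=100_000, seed=1):
    """Assurance: power averaged over a normal prior on the treatment effect."""
    rng = random.Random(seed)
    total = sum(power_one_sided(rng.gauss(prior_mean, prior_sd), n_per_arm)
                for _ in range(n_draws))
    return total / n_draws

print(power_one_sided(0.5, 100))  # conditional power at a point estimate of the effect
print(assurance(100, 0.5, 0.3))   # assurance, typically lower once effect uncertainty is included
```

The gap between the two printed numbers is the point of the calculation: conditional power overstates the probability of a successful trial when the effect size itself is uncertain.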


SC6 Short Course 6: Use of Biomarkers for Surrogacy and Personalized Treatment Selection

09/28/16
1:30 PM - 5:00 PM

Organizer(s): Haiwen Shi, FDA/CDRH; Changhong Song, FDA

Chair(s): Changhong Song, FDA

Instructor(s): Tianxi Cai, Harvard T.H. Chan School of Public Health; Layla Parast, RAND Corporation

Novel biomarkers have great potential to dramatically change the decision-making process of modern medicine. Recently there has been increased interest in using markers as surrogates for assessing a treatment effect, for diagnosing disease, or for predicting the risk of future clinical events. In the first part of the short course, we will introduce robust measures to assess the value of potential surrogate markers by quantifying the proportion of the treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. We will discuss the estimation procedure, inference, advantages over previously proposed model-dependent approaches, and the use of these measures to identify valid surrogate markers in a time-to-event outcome setting. In recent years, biomarkers have also been actively studied as tools for optimizing treatment selection. While standard clinical trials that evaluate treatment benefit focus primarily on estimating the average benefit for the entire patient population, it is now widely accepted that a treatment reported to be effective may not be equally beneficial to all patients. For example, the benefit of giving chemotherapy prior to hormone therapy with tamoxifen in the adjuvant treatment of postmenopausal women with lymph-node-negative breast cancer depends on estrogen receptor status. Due to the associated cost and toxicity, it is crucial to identify patients who will and will not benefit from chemotherapy. This gives rise to the need to accurately predict treatment benefit based on important patient characteristics. In the second half of the short course, we will discuss systematic procedures using information from multiple biomarkers to identify subgroups that may or may not benefit from a new treatment. We will emphasize the statistical methods for model estimation, selection, calibration, and validation, with the ultimate objective of developing a standard operating procedure (SOP) for establishing effective personalized medicine procedures.
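For intuition about "proportion of treatment effect explained," the classical model-based estimator of Freedman compares the treatment coefficient with and without the surrogate in the outcome model; the course's robust measures are designed to improve on exactly this kind of model-dependent approach. A Python sketch on simulated data (the simulation setup and function name are illustrative):

```python
import numpy as np

def freedman_pte(y, s, treat):
    """Model-based proportion of treatment effect explained:
    1 - beta_adjusted / beta_unadjusted from linear models for the outcome
    with and without the surrogate. Known to be unstable when the
    unadjusted treatment effect is small."""
    X1 = np.column_stack([np.ones_like(treat), treat])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0][1]           # unadjusted effect
    X2 = np.column_stack([np.ones_like(treat), treat, s])
    beta_s = np.linalg.lstsq(X2, y, rcond=None)[0][1]         # effect adjusted for surrogate
    return 1.0 - beta_s / beta

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n).astype(float)
s = 1.0 * treat + rng.normal(0, 1, n)            # surrogate affected by treatment
y = 1.0 * s + 0.2 * treat + rng.normal(0, 1, n)  # outcome mostly mediated by the surrogate
print(freedman_pte(y, s, treat))  # large fraction of the effect explained by s
```

Here the data are generated so that most of the treatment effect flows through the surrogate, so the estimated proportion should be large; robust measures target the same estimand without relying on these linear model assumptions.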


SC7 Short Course 7: Structured Benefit-Risk Evaluation and Emergent Issues

09/28/16
1:30 PM - 5:00 PM

Organizer(s): Weili He, Merck & Co. Inc.; Qi Jiang, Amgen; John Scott, CBER FDA

Instructor(s): Weili He, Merck & Co. Inc.; Qi Jiang, Amgen; John Scott, CBER FDA

In recent years, pharmaceutical companies have increasingly used structured benefit-risk (BR) assessments as part of their internal decision-making processes. On the regulatory side, partly in response to a 2006 Institute of Medicine report on drug safety, the 2012 Food and Drug Administration Safety and Innovation Act (FDASIA) called on the FDA to develop a structured approach to benefit-risk assessment in regulatory decision-making. FDA has begun to develop such approaches, both for drugs/biologics and for medical devices. BR assessment is often multi-faceted and complex, and the BR landscape is still evolving. These developments have spurred increased interest and effort from companies, regulatory agencies, and other interest groups to further enhance structured BR assessments. The Quantitative Sciences in the Pharmaceutical Industry Benefit-Risk Working Group (QSPI BRWG) has done a great deal of work in this area in recent years, summarizing existing BR methods, recommending general principles and specific approaches, and researching important emerging topics. This BR short course will differ from the BR short course given at the 2014 workshop and will contain material from a BR book to be published by CRC Press in summer 2016. We will begin with a refresher on the basic concepts of BR evaluation and the evolving BR landscape. Then we will go into more detail on a few important emerging BR topics and key regulatory considerations. Lastly, we will illustrate the key BR considerations via a few case studies. We summarize the key learnings below:

1. The current status of BR assessment, including an overview of key approaches and methods, and a global look at the regulatory environment for incorporating benefit-risk assessments into decision-making.
2. Emerging topics in benefit-risk assessment, including approaches for endpoint selection, weight selection, quantifying uncertainty, visualization, subgroup analysis, and the use of different data sources.
3. The adoption and utilization of a BR framework, which is best illustrated through case studies. We will describe a few case studies in detail, focusing on the key considerations in BR assessment.

The instructors for this course will include benefit-risk experts from industry and FDA.


SC8 Short Course 8: Design and Statistical Analysis of Biosimilars

09/28/16
1:30 PM - 5:00 PM

Organizer(s): Min (Annie) Lin, FDA CBER; Feng Liu, GlaxoSmithKline Inc

Instructor(s): Shein-Chung Chow, Duke University

The design and analysis of biosimilar products is of increasing importance to the pharmaceutical/biotech industry: many biological products are facing patent expiration, and the industry needs a regulatory pathway for approval of follow-on versions of biologic products, also referred to as biosimilars. There are many scientific challenges due to the complexity of both the manufacturing process and the structures of biosimilar products. FDA requires biosimilar products, or interchangeable biosimilar products, to meet its rigorous standards of safety and efficacy (FDA guidance, 2015). This short course will discuss the study design and analysis of biosimilar development and cover most of the statistical questions encountered in various study designs at different stages of research and development of biological products, including regulatory requirements for assessing follow-on biologics, statistical methods for assessing average biosimilarity, sample size and statistical tests for biosimilarity in variability, and the impact of variability on biosimilarity limits for assessing follow-on biologics; lastly, the CMC requirements and tests for comparability of biological products are also discussed. The short course is based on a recently published book, Biosimilars: Design and Analysis of Follow-on Biologics, the first book entirely devoted to the statistical design and analysis of biosimilarity and interchangeability of biosimilar products. The presenter is Dr. Shein-Chung Chow from Duke University.

References

FDA Guidance, 2015, Scientific Considerations in Demonstrating Biosimilarity to a Reference Product.
Shein-Chung Chow, 2014, Biosimilars: Design and Analysis of Follow-on Biologics, CRC Press.


Thu, Sep 29

Welcome Address from Biopharmaceutical Section and Workshop Co-Chairs

09/29/16
8:00 AM - 8:15 AM

Chair(s): Freda Cooner, FDA/CDER; Ed Luo, PTC Therapeutics

 

Plenary Session 1

09/29/16
8:15 AM - 9:45 AM
Thurgood Marshall Ballroom

Chair(s): Freda Cooner, FDA/CDER; Ed Luo, PTC Therapeutics

Plenary Presentation 1 - Is There a Future for Clinical Trials: An NIH Perspective
Michael S Lauer, NIH Deputy Director for Extramural Research

Plenary Presentation 2
Dr. Robert M. Califf, Commissioner of Food and Drugs

 

Plenary Session 2

09/29/16
10:00 AM - 11:30 AM

Chair(s): Freda Cooner, FDA/CDER; Ed Luo, PTC Therapeutics

Statistical Innovation: Better Decisions through Better Methods
Mike Krams, Janssen Research & Development, LLC

Panel Discussion - Statistical Innovation: Better Decisions Through Better Methods
Brenda Gaydos, Eli Lilly; Frank Harrell, Vanderbilt University; Lisa LaVange, FDA; John Scott, CBER FDA; Steven Snapinn, Amgen; Ram Tiwari, FDA/CDRH

 

Roundtable Discussions

09/29/16
11:45 AM - 1:00 PM

TL01: Multi-Arm Multi-Stage (MAMS) Designs in Clinical Drug Development
Cyrus Mehta, Cytel Inc.

TL02: Sequential parallel comparison design (SPCD) for trials with high placebo response
Anastasia Ivanova, University of North Carolina at Chapel Hill

TL03: Using expert elicitation to support decision making in drug development – are you sceptical or enthusiastic?
Timothy H Montague, GlaxoSmithKline

TL04: Challenges Facing Observational Studies Based on Rare Disease Registry Data
Mohammad Bsharat, Vertex Pharmaceuticals

TL05: Utilizing Real World Data (RWD) to produce Real World Evidence in Support of Regulatory Decisions
Coen Bernaards, Genentech, Inc.

TL06: Statistical Considerations in Trial Design and Sample Size Estimation for Assessing Biosimilarity and Interchangeability
Shuhong Zhao, Inventiv Health

TL07: Clinical Development of Predictive Biomarkers
Glen Laird, Sanofi Pharmaceuticals

TL08: Multistage adaptive biomarker-directed design for randomized clinical trials
Zhong Gao, CBER/FDA

TL09: Combination Products as Medical Tests
Bipasa Biswas, FDA

TL10: Design of drug combination studies in oncology
Sergei Leonov, ICON Clinical Research

TL11: Successive Binomial Probability Computation Function
Sunday Popoola Oyelowo, Waziri Umaru Federal Polytechnic, Birnin Kebbi

TL12: Investigator Assessment vs. Blinded Adjudication of Clinical Endpoints
Andrei Breazna, Pfizer Inc.

TL13: Understanding the Dose-Response Relationship in Practice
Melanie Lai-Shan Chan, Eli Lilly Inc.

TL14: Preparing Effective Interim Reports for a Data Monitoring Committee
Melissa K Schultz, University of Wisconsin

TL15: Two Commonly Used Study Designs in a Phase I Oncology Study: Modified Continual Reassessment Method (mCRM) vs. Accelerated Titration Design
Kyounghwa Bae, Janssen R&D

TL16: Comparing efficacy and survivals of initial treatments for elderly patients with newly diagnosed multiple myeloma: A Bayesian network meta-analysis of randomized controlled trials
Colin K He, Orient Health Care

TL17: Implementations of tipping point analysis in assessing impact of missing data
Susan Wang, Boehringer-Ingelheim

TL18: On estimands of sensitivity analysis models for longitudinal clinical trials with missing data
Guanghan Frank Liu, Merck & Co. Inc.

TL19: Analytic Approaches to Handling Missing Data in Observational Studies
William Hawkes, Quintiles RWLPR

TL20: Missing Values: Is There a Difference Between "Do Not Know" or "Choose Not to Answer" and Responses Left Missing, and Should These Responses Be Treated Differently?
Tammy Massie, National Institutes of Health

TL21: Missing Data: Can More Be Done During the Conduct of a Clinical Trial to Limit Missing Data?
Rosanne Lane, Janssen Research & Development, LLC

TL22: Bias Correction Method for A Misclassified Binary Outcome in the Presence of A Gold Standard
Dewi Gabriela Rahardja, DoD/WHS

TL23: Prediction of Medication Adherence Using Different Predictors (Medical & Rx claim-based Attributes, Socioeconomic Attributes, etc.)
Ogi Konstantinov Asparouhov, LexisNexis Risk Solutions Health Care

TL24: Applications of Multidimensional Time Model for Probability Cumulative Function to Biopharmaceutical Industry
Michael Fundator, National Academies, DBASSE

TL25: Setting a Priori Phase 2 to 3 Go/No Go Decision Criteria
Ih Chang, Biogen

TL26: The closure principle revisited
Dror Rom, Prosoft Clinical

TL27: Challenges on Design and Analysis of Multi-regional Clinical Trials
Weining Z Robieson, AbbVie

TL28: A novel cluster randomized pragmatic research study design for evaluating interventions
U Vijapurkar, Janssen

TL29: Propensity Score Model Development – Please Share Your Experience and Lessons Learned
Jie (Jack) Zhou, FDA CDRH

TL30: Endpoints in Oncology: Overall Survival (OS), PFS and Overall Response Rate (ORR)
Yanqiong Zhang, Vertex; Helen Zhou, GSK

TL31: Statistical Challenges in Immuno-oncology Drug Development
Feng Xiao, Medimmune

TL32: Oncology: Imaging Endpoints in Clinical Trials
Grace-Hyun Jung Kim, UCLA

TL33: Challenges and Potential Solutions in Study Design and Analysis of Pediatric Studies
Ying Grace Li, Eli Lilly and Company

TL34: Crossover and the Accelerated Approval Process
Daniel Sargent, Mayo Clinic; Jonathan Siegel, Bayer HealthCare Pharmaceuticals Inc.

TL35: Frailty models in analyzing recurrent events in the presence of terminal event
Chul H Ahn, FDA-CDRH

TL36: Statistical Issues in n-of-1 Trials
Emelita M de Leon-Wong, Novella Clinical, a Quintiles Company

TL37: Writing Contracts
Philip T Lavin, Lavin Consulting LLC

TL38: Blinded Safety Assessment
Sammy Yuan, Merck

TL39: CDRH patient-reported outcomes (PRO) working group
Pablo E. Bonangelino, FDA/CDRH/OSB

TL40: Protocol Deviation in the Clinical Trial
Zhiheng Xu, FDA/CDRH

TL41: Cognitive Function Assessment: Challenges in analysis and interpretation of outcomes
Kim Cooper, Janssen, R&D

TL42: Composite Endpoints in Randomized Trials
Cynthia M DeSouza, Vertex Pharmaceuticals; Chenkun Wang, Vertex Pharmaceuticals

TL43: Psychometric Methods for the Development, Evaluation, and Interpretation of Clinical Outcome Assessments
Cheryl Coon, Outcometrix; Stacie Hudgens, Clinical Outcomes Solutions

TL44: Targeted Subgroup Identification in Clinical Trials
Isaac Nuamah, Janssen Research & Development

TL45: Radiological Progression in Rheumatoid Arthritis: Design and Analysis of Clinical Trials
Bei Zhou, Janssen R&D

TL46: Use of propensity score stratification in non-randomized studies
Vandana Mukhi, FDA/CDRH/OSB

TL47: Organizing a Therapeutic Area Scientific Working group for Alzheimer's
Hong Liu-Seifert, Lilly

TL48: Trial Designs for Outbreaks of Emerging Infectious Diseases
Amelia Dale Horne, FDA

 

PS1a Parallel Session: Using Historical Information in Clinical Trials: How Much Can We Gain?

09/29/16
1:15 PM - 2:30 PM
Thurgood Marshall North

Organizer(s): Manuela Buzoianu, FDA/CDRH; Margaret Gamalo, Eli Lilly and Company; Junshan Qiu, FDA CDER; Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

Chair(s): Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

The planning of clinical trials to test new drugs involves substantial resources. Enriching the control arm of a new trial with relevant trial-external information holds the promise of a more efficient trial design on many occasions. This information is often known as “historical data” or “co-data”. Use of historical data allows effective trial design, including a smaller sample size or unequal randomization (placing more subjects on the treatment arm of a study). In addition, it enriches the amount of information on both the efficacy and safety of the current novel treatment, including important secondary endpoints. One appeal of the Bayesian approach is the incorporation of historical data into the statistical analysis in the form of a “prior”, yielding a more powerful design. The advantages and disadvantages include increased power, decreased sample size, and effects on type I error. There are many ways of borrowing from historical data. Generally, all these methods act to “pull” or “shrink” estimates from the current control arm toward point estimates from the historical study (or studies). These methods have parameters governing the borrowing, which can be set by the user to borrow either extensively or minimally. But a fundamental question is the amount of information contained in the prior. Understanding the strength of a prior distribution relative to the likelihood is a fundamental issue when applying Bayesian methods. This issue may be addressed directly by quantifying the prior information in terms of a number of hypothetical observations, known as the prior effective sample size (ESS). A prior ESS allows one to judge the relative contributions of the prior and the data to the final conclusions. For many commonly used models, the calculation of ESS is straightforward. However, for complex models (e.g., logistic regression) ESS calculation requires further investigation. Notable work in this area includes Morita et al. (2008, 2010) and Neuenschwander et al. (2010). In practice, however, these methods for quantifying a historical data prior are not well understood in terms of benefits, effects, and regulatory ramifications. This session will focus on different approaches to quantifying historical information in a historical data prior, along with their advantages and disadvantages. It will feature four prominent participants (three speakers and one discussant) from industry, academia, and a regulatory agency, discussing borrowing of information in different frameworks.
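For a binary control endpoint the prior ESS idea can be made concrete with a conjugate beta-binomial model. A minimal Python sketch (function names and the fixed-weight, power-prior-style discount are illustrative assumptions; the convention that a Beta(a, b) prior carries roughly a + b hypothetical observations follows the literature cited above):

```python
def beta_prior_from_historical(events, n, discount=0.5):
    """Build a Beta prior from historical control data, down-weighted by a
    power-prior-style discount in (0, 1]; discount=1 borrows at face value."""
    a = 1.0 + discount * events
    b = 1.0 + discount * (n - events)
    return a, b

def effective_sample_size(a, b):
    """One common ESS convention for a Beta(a, b) prior: a + b hypothetical observations."""
    return a + b

def posterior_mean(a, b, events, n):
    """Conjugate beta-binomial update with the current trial's control data."""
    return (a + events) / (a + b + n)

# Historical control: 30/100 responders, borrowed at half weight
a, b = beta_prior_from_historical(30, 100, discount=0.5)
print(effective_sample_size(a, b))   # 52.0 hypothetical observations
print(posterior_mean(a, b, 10, 25))  # current 10/25 (0.40) shrunk toward the historical 0.30
```

With 25 current control subjects against an ESS of 52, the prior dominates the posterior; this is exactly the relative-contribution judgment that a prior ESS is meant to make visible.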

Incorporating historical data in Bayesian phase I trial design: Analyzing differences between patient populations
Satoshi Morita, Kyoto University Graduate School of Medicine

Design Considerations for Bayesian Clinical Studies: Prior Effective Sample Size and Type 1 Error Level
Gene Anthony Pennello, FDA/CDRH

Predictive evidential threshold scaling (PETS): does the actual evidence meet confirmatory standards?
Beat Neuenschwander, Novartis Pharma AG

 

PS1b Panel Session: An Adaptive Design Case Study --- Detailed Review and Discussion

09/29/16
1:15 PM - 2:30 PM
Thurgood Marshall South

Organizer(s): Greg Cicconetti, AbbVie Inc.; Inna Perevozskaya, Pfizer; Xiting (Cindy) Yang, FDA/CDRH; Jie (Jack) Zhou, FDA CDRH

Chair(s): Xiting (Cindy) Yang, FDA/CDRH

Panelist(s): Gerry Gray, FDA; Weili He, Merck & Co. Inc.; Lisa LaVange, FDA; Min (Annie) Lin, FDA CBER; Inna Perevozskaya, Pfizer

In addition to careful planning, logistical issues and appropriate implementation of the study also play important roles in the success of an adaptive design. In this session, a complex adaptive design for infantile hemangioma will be reviewed with respect to study design and implementation. What will be shared includes what was learned at the interim and final analyses, changes to the study design, and interactions with the EMA and FDA. This study succeeded in efficiently bringing a drug to market, but there were important lessons learned. A panel discussion will follow.

A Case-Study of A Confirmatory Adaptive Trial for Infantile Hemangioma
Eva Miller, Independent Biostatistical Consultant

 

PS1c Panel Session: Standards for evidence of effectiveness: evaluating compelling single-trial evidence versus benefits of replication

09/29/16
1:15 PM - 2:30 PM
Thurgood Marshall East

Organizer(s): Somesh Chattopaehyay, FDA; Yeh-Fong Chen, US FDA; Paul Gallo, Novartis; Jonathan Siegel, Bayer HealthCare Pharmaceuticals Inc.

Chair(s): Paul Gallo, Novartis

Panelist(s): Eric Gibson, Novartis; Hsien-Ming James Hung, FDA; Daniel Sargent, Mayo Clinic; Bob Temple, FDA; Janet Wittes, Statistics Collaborative

A 1998 FDA guidance document described standards for providing substantial evidence to establish a drug’s effectiveness. It presented the legal, historical, and scientific bases for the common practice of requiring at least two adequate and well-controlled studies to justify approval. It also described scenarios in which FDA could grant approval based on a single adequate and well-controlled study, and presented attributes that such a trial would be expected to possess. The ethical challenges of conducting a replicate study when an effect on a serious endpoint has already been demonstrated are acknowledged; such a single study would presumably be very large, show extremely compelling and statistically persuasive results, and exhibit a high degree of internal consistency. There are certainly numerous examples of approvals that have been granted in single-pivotal-trial settings. The statistical literature contains a number of discussions of single-trial strength-of-evidence standards which, for example, might quantify a program-wise false positive rate relative to that of a conventional two-trial paradigm. Some of these have quantified the added statistical efficiency of a single large trial analysis or a pooled analysis, using a much smaller significance level, relative to conventional practice. But the issue is clearly much more complex and multi-faceted; in addition to statistical quantifications of evidence, there will be clinical, ethical, and strategic factors involved.
This session will address the views of the expert panelists regarding when a single pivotal trial program can suffice for regulatory approval, including descriptions of the settings where this is most appropriate and the attributes the trial and treatment should possess. This will be contrasted with situations where replication (or at least relevant supportive information, not necessarily an exact replicate) should be viewed as necessary for approval or would provide an important, more accurate characterization of a treatment’s benefits. Statistical and clinical perspectives will be described, and examples will be cited where relevant.

 

PS1d Parallel Session: Statistical Issues in Clinical Endpoint Studies of Bioequivalence

09/29/16
1:15 PM - 2:30 PM
Thurgood Marshall West

Organizer(s): Charles Bon, Biostudy Solutions, LLC; Mark Shiyao Liu, Mylan Inc; Julia Jingyu Luan, FDA/CDER; Mengdie Yuan, FDA

Chair(s): Mengdie Yuan, FDA

Generic drugs account for approximately 86% of the market share in the U.S. The 1984 Drug Price Competition and Patent Term Restoration Act, known as the “Hatch-Waxman Act,” established the modern system of generic drugs in the United States. Under this system, generic drug products are considered therapeutically equivalent to the reference listed drug (RLD) product if they meet regulatory criteria of pharmaceutical equivalence and bioequivalence. With the passage of the Generic Drug User Fee Act (GDUFA), there is increased emphasis on regulatory research for generic drugs. In particular, statistical research in this area needs more attention; for example, how to analyze correlated ordinal outcomes and how to evaluate the robustness of commonly used methods. This session will focus on clinical endpoint studies of bioequivalence. It will include two presentations followed by a discussion. Confirmed speakers and the discussant from the agency and industry will present and discuss issues and statistical approaches in this area.

Statistical Issues in Clinical Endpoint Studies of Bioequivalence
View Presentation Julia Jingyu Luan, FDA/CDER

Statistical Analyses and Issues in the Testing of Means and Proportions in Clinical Endpoint Studies for Evaluation of Generic Products
View Presentation Pina D'Angelo, Novum Pharmaceutical Research Services

Discussant(s): Stella Grosser, FDA

 

PS1e Parallel Session: Emerging topics in Benefit-risk assessment

09/29/16
1:15 PM - 2:30 PM
Lincoln 5

Organizer(s): Weili He, Merck & Co. Inc.; Qi Jiang, Amgen; Xuefeng Li, FDA CDRH; John Scott, CBER FDA

Chair(s): Weili He, Merck & Co. Inc.; Xuefeng Li, FDA CDRH

Panelist(s): Thomas E Gwise, FDA, CDER; Joe Heyse, Merck & Co., Inc.; Martin Ho, Center for Devices and Radiological Health, FDA; Chunlei Ke, Amgen; Janet Turk Wittes, Statistics Collaborative

Since its formation in early 2013, the Quantitative Sciences in the Pharmaceutical Industry (QSPI) Benefit-Risk Working Group (BRWG) has been actively pursuing several important emerging topics in BR assessment, including identification and evaluation of uncertainties, commonly used graphical displays in clinical development, identification of different data sources, and issues to consider for BR assessment in subgroups. The work by BRWG members has resulted in several manuscripts to be included in a BR book from CRC Press, to appear in June 2016. The BRWG is now continuing its work in additional areas of interest, including study design considerations, BR analysis methods for different data sources, BR metrics and methods, and subgroup identification. This session will present the most current work from this working group.

Comparing Apples to Oranges - or Thoughts on Benefit-Risk Assessment
View Presentation Janet Turk Wittes, Statistics Collaborative

Some Challenges in Structured Benefit-risk Assessment Across the Lifecycle of Products
View Presentation Chunlei Ke, Amgen

 

PS1f Parallel Session: Statistics Methodology for Safety Monitoring and Confirmatory Safety in Clinical Development

09/29/16
1:15 PM - 2:30 PM
Lincoln 6

Organizer(s): Qi Jiang, Amgen; Judy X Li, FDA; Estelle Russek-Cohen, CBER FDA ; Bill Wang, Merck

Chair(s): Judy X Li, FDA

Safety concerns have often been the primary driver in stopping development of potential drug candidates. Thus, identifying potential safety concerns early in the clinical development program is an important consideration in characterizing the safety profile of a drug. The ASA Biopharmaceutical Section has embarked on initiatives on the assessment of safety in clinical trials, and a safety working group has been formed with two subgroups: one on cardiovascular (CV) safety and the other on safety monitoring. In this session, we will have one presentation from each of the safety subgroups, followed by a panel discussion. The first presentation will discuss safety monitoring in clinical development, which, in a broad sense, spans from the individual trial level to the program level, from unblinded to blinded analyses, from expected to unexpected adverse events, and from patient profiles to aggregate safety monitoring tables and figures, supporting DMCs and benefit-risk assessments. It also lays the foundation for Integrated Safety Summary preparation and for benefit-risk analysis in the Clinical Overview and possible Advisory Committee meetings. Various statistical methodological approaches for safety monitoring in clinical development will be discussed, including methods for blinded versus unblinded analyses, static versus dynamic approaches, frequentist versus Bayesian methods, and various post-marketing methods that can be applied in the premarketing setting. The presentation will also include an overview of current industry practice in the premarketing setting and various graphical methods and tools for safety monitoring. The second presentation will discuss statistical challenges encountered at the design and analysis stages of cardiovascular outcome trials (CVOTs) and will share some solutions to address these challenges.
In particular, the speaker will discuss statistical challenges and strategies for testing multiple endpoints, populations, and doses; performing various types of analyses; addressing pre-marketing and post-marketing requirements efficiently; designing a CVOT for non-inferiority and superiority testing; assessing effects in different subgroups; and addressing patient retention and missing-data challenges.

Statistical Considerations for Cardiovascular Outcome Trials in Patients with T2DM
View Presentation Olga V Marchenko, Quintiles

Statistical Methodology in Safety Monitoring
View Presentation Melvin Munsaka, Takeda

Discussant(s): Qi Jiang, Amgen; Mark Levenson, FDA; John Scott, CBER FDA

 

PS2a Parallel Session: Bayesian approaches in quantitative-based decision making for drug and devices development

09/29/16
2:45 PM - 4:00 PM
Thurgood Marshall North

Organizer(s): Cassie Dong, FDA; Cristiana Mayer, Johnson & Johnson; Min Min, FDA; Timothy H Montague, GlaxoSmithKline

Chair(s): Cristiana Mayer, Johnson & Johnson

Bayesian statistics has gained a more prominent role in drug development in recent years. Quantitative decision making often relies on powerful approaches for combining data and other knowledge from previous studies with data collected in a current trial. The philosophy of learning from the current trial, borrowing information from historical data, or combining direct and indirect evidence from multiple trials to update our beliefs may provide scientific justification for more cost-effective and efficient development programs without sacrificing the goal of evidence-based medicine. In addition, go/no-go decisions can be constructed to incorporate the variability of treatment effect estimates, moving away from the limited approach based solely on p-values. The planning of next-stage studies can be expressed in terms of power, probability of success, probability of program success, and sample size. Another area of increased interest is Bayesian network meta-analysis, which incorporates findings from multiple studies to address the comparative effectiveness and safety of interventions. In this session, we will focus on the use of Bayesian methods and modeling in drug development, via a wide set of applications illustrating challenging issues from industry and regulatory perspectives.
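As a toy illustration of the posterior-probability go/no-go idea described above, here is a minimal sketch. The conjugate normal model, target value, and decision thresholds are all hypothetical choices for illustration, not taken from any of the talks:

```python
from statistics import NormalDist

def go_no_go(delta_hat, se, prior_mean=0.0, prior_sd=0.5,
             tv=0.10, go_prob=0.60, nogo_prob=0.20):
    """Normal-normal conjugate update, then a posterior-probability rule.

    delta_hat : observed treatment-effect estimate
    se        : its standard error
    tv        : target value the true effect should exceed (hypothetical)
    """
    # Precision-weighted posterior from the conjugate normal-normal model
    w = (1 / prior_sd**2) + (1 / se**2)
    post_mean = ((prior_mean / prior_sd**2) + (delta_hat / se**2)) / w
    post_sd = w ** -0.5
    # Posterior probability that the true effect exceeds the target value
    p_exceed = 1 - NormalDist(post_mean, post_sd).cdf(tv)
    if p_exceed >= go_prob:
        return "Go", p_exceed
    if p_exceed < nogo_prob:
        return "No-Go", p_exceed
    return "Indeterminate", p_exceed

decision, prob = go_no_go(delta_hat=0.18, se=0.07)
```

Unlike a bare p-value rule, the decision here depends on both the size of the estimated effect and its uncertainty, which is exactly the shift the abstract describes.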

Bayesian Adaptive Clinical Trial Methods for Incorporating Auxiliary Data and Identifying Interesting Subgroups
View Presentation Brad Carlin, School of Public Health University of Minnesota

Bayesian and Decision Analysis Approaches for Regulation of Medical Products
View Presentation Telba Irony, FDA

A Bayesian approach in Proof-of-Concept study design: a case study
View Presentation Michael Lee, Janssen R&D

Discussant(s): David Ohlssen, Novartis

 

PS2b Parallel Session: Futility Assessments in Late Phase Drug Development: Statistical, Operational and Regulatory Perspectives

09/29/16
2:45 PM - 4:00 PM
Thurgood Marshall South

Organizer(s): Thomas Birkner, FDA; Richard Davies, GlaxoSmithKline; George Kordzakhia, CDER, FDA; Susan Wang, Boehringer-Ingelheim

Chair(s): Richard Davies, GlaxoSmithKline; Susan Wang, Boehringer-Ingelheim

There are many challenges in the clinical research of new drug development, perhaps the biggest of which is that research and development costs keep increasing while discovering reimbursable medicines has become more elusive. Recent studies have shown that most clinical trials fail to meet their primary objective and most compounds fail to reach the next stage of drug development, with the primary reason being insufficient efficacy [1]. Indeed, at the all-indication level, the industry-wide success rate for compounds entering Phase 2 that go on to become approved medicines was estimated at just 16%. Statisticians can play a leading role in helping to manage finite R&D investment more wisely, in particular through routine evaluation of futility interim analysis options. Performing a futility test at interim analyses can potentially stop a failing drug faster, enabling resources to be diverted to more promising projects and preventing continued exposure of patients to compounds that have little chance of becoming medicines. These objectives are in line with the FDA’s Critical Path Initiative, which advocates using innovative design to improve drug development productivity. However, in addition to considerations related to trial integrity and advancing scientific understanding, futility interims need careful design to avoid losing substantial power while also providing an opportunity to limit the investment of sponsors, investigative sites, and patients in a failing study. This session will present case studies of futility frameworks used in recently designed Phase 3 clinical trials, highlighting how challenges were addressed in terms of the trade-offs among power, probability of continuing to failure, and managing committed investment before the first opportunity to stop. Included in the session will be a discussion of the novel two-stage LATITUDE-TIMI 60 trial [2], which successfully adopted an unconventional approach.
Further discussion will focus on statistical methods to guide the selection of appropriate futility rules, within a framework of optimizing the cost and revenue of the program. The factors that could influence whether or not to include a futility analysis, as well as the timing and boundaries of futility analyses, will also be discussed. Speakers: Adam Crisp, GSK; Qiqi Deng, Boehringer-Ingelheim. Discussants: Dr. Jim Hung, FDA CDER; Dr. Norman Stockbridge, FDA CDER. [1] Hay, Michael, et al. ‘Clinical development success rates for investigational drugs.’ Nature Biotechnology, January 2014. [2] O’Donoghue ML, Glaser R, Aylward PE, Cavender MA, Crisp A, Fox KAA, Laws L, Lopez-Sendon JL, Steg PG, Theroux P, Sabatine MS, Morrow DA. Rationale and design of the LosmApimod To Inhibit p38 MAP kinase as a TherapeUtic target and modify outcomes after an acute coronary syndrome trial. American Heart Journal 2015; 169(5): 622–630.
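To make the power-versus-early-stopping trade-off concrete, here is a hedged Monte Carlo sketch: a two-arm trial with unit-variance normal outcomes, one interim look at 50% information, and a conditional-power futility threshold. All design numbers are invented for illustration and are unrelated to LATITUDE-TIMI 60:

```python
import random
from statistics import NormalDist

def simulate(n_per_arm, delta, futility_cp=0.20, n_sims=2000, seed=1):
    """Estimate rejection and futility-stop rates for a two-arm trial with
    one interim futility look at 50% information, stopping when conditional
    power under the current trend falls below `futility_cp`."""
    rng = random.Random(seed)
    z_alpha = NormalDist().inv_cdf(0.975)   # one-sided 0.025 final test
    half, t = n_per_arm // 2, 0.5           # interim information fraction
    rejections = stops = 0
    for _ in range(n_sims):
        trt = [rng.gauss(delta, 1.0) for _ in range(n_per_arm)]
        ctl = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        # Interim z-statistic from the first half of each arm
        d_int = (sum(trt[:half]) - sum(ctl[:half])) / half
        z_int = d_int / (2.0 / half) ** 0.5
        # Conditional power, assuming the interim trend continues
        b_t = z_int * t ** 0.5        # Brownian-motion value at time t
        theta = z_int / t ** 0.5      # drift estimated from the interim data
        cp = 1 - NormalDist().cdf(
            (z_alpha - b_t - theta * (1 - t)) / (1 - t) ** 0.5)
        if cp < futility_cp:
            stops += 1
            continue
        d_fin = (sum(trt) - sum(ctl)) / n_per_arm
        z_fin = d_fin / (2.0 / n_per_arm) ** 0.5
        rejections += z_fin > z_alpha
    return rejections / n_sims, stops / n_sims

power, _ = simulate(100, delta=0.4)       # trial with a real effect
_, stop_rate = simulate(100, delta=0.0)   # truly ineffective drug
```

Comparing the two calls shows the trade-off the abstract describes: a modest power loss under the alternative in exchange for stopping a large fraction of truly ineffective programs at the halfway point.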

Futility assessments in late-phase drug development: a novel two-stage outcomes trial in acute coronary syndrome
View Presentation Adam Crisp, GlaxoSmithKline R&D

Choosing timing and boundary for futility analysis based on cost-effectiveness assessment
View Presentation Qiqi Deng, Boehringer-Ingelheim

Discussant(s): Paul Gallo, Novartis; Jim Hung, FDA CDER; Martin Rose, FDA CDER

 

PS2c Panel Session: Best practice in modeling and simulation – what should it be and how will it change what we do?

09/29/16
2:45 PM - 4:00 PM
Thurgood Marshall East

Organizer(s): Jonathan D Norton, MedImmune; Michael O'Kelly, Quintiles; Dionne L. Price, CDER, FDA; John Scott, CBER FDA

Chair(s): Michael O'Kelly, Quintiles; Dionne L. Price, CDER, FDA

In the last decade, there have been a number of initiatives to define best practices for projects involving modeling and simulation, but no proposal for best practices in modeling and simulation has been generally and consciously adopted in the pharmaceutical world. In this panel session, recent proposals for best practices in the U.S. and the European Union will be described. Regulatory participants will discuss general considerations for the use of modeling and simulation in the drug development and approval process. Industry participants will propose elements that are essential for effective modeling and simulation and explore the changes that would result if an agreed best practice were required for modeling and simulation work. The panel will further discuss good and bad practices in modeling and simulation, the role of modeling and simulation in the approval process, how best practice fits with the inherently iterative nature of modeling and simulation projects, whether a single best practice could apply to the wide range of projects in pharmaceutical research that use modeling and simulation, and the benefits that could accrue from wider use of best practices in modeling and simulation.

Good Practices in Model-Informed Drug Discovery and Development: Practice, Application, and Documentation
Sandra Visser, Merck & Co

Considerations of clinical trial design simulation
View Presentation Boguang Zhen, FDA CBER

 

PS2d Parallel Session: Statistical Issues in Establishing Therapeutic Clinical Bioequivalence or Biosimilarity

09/29/16
2:45 PM - 4:00 PM
Thurgood Marshall West

Organizer(s): Eric M. Chi, Amgen Inc.; Jeffrey L Joseph, Chiltern International; Fairouz Makhlouf, FDA/CDER; Wanjie Sun, FDA

Chair(s): Wanjie Sun, FDA

The design and analysis of therapeutic equivalence trials are challenging when the clinical endpoint is nonparametric or semiparametric (such as time to event or a proportion) or is highly variable. Therapeutic bioequivalence trials in oncology, respiratory, neurology, and psychiatry face these issues. For an analysis of therapeutic clinical equivalence, the following are needed: 1) tests of the efficacy of the Test and Reference treatments against placebo separately (the test of the Reference treatment is needed to verify the quality of the trial), each significant at the one-sided 0.025 level; and 2) an equivalence test (two one-sided tests, each at the 0.05 level) of the ratio of Test to Reference treatment, with the 90% confidence interval lying between 0.80 and 1.25. In this session, we will discuss statistical considerations in the development of biosimilars as well as generic drugs, including endpoint selection and statistical analysis. We will also discuss statistical methods for establishing equivalence margins for highly variable therapeutic products and the development of test statistics for equivalence testing in these situations. References: Draft Guidance on Ciprofloxacin Hydrochloride, June 2012. Guidance for Industry: Scientific Considerations in Demonstrating Biosimilarity to a Reference Product, April 2015. Wellek, S. A log-rank test for equivalence of two survivor functions. Biometrics 1993; 49: 877-881. Su, J.Q. and Wei, L.J. Nonparametric estimation for the difference or ratio of median failure times. Biometrics 1993; 49: 603-607.
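The equivalence criterion in item 2) above, a 90% confidence interval for the Test/Reference ratio lying within (0.80, 1.25), which corresponds to two one-sided tests each at the 0.05 level, can be sketched on the log scale as follows (the input estimate and standard error are hypothetical):

```python
from math import exp, log
from statistics import NormalDist

def tost_ratio(log_ratio_hat, se, lower=0.80, upper=1.25, alpha=0.05):
    """Two one-sided tests (TOST) for a ratio endpoint on the log scale.

    Equivalence is concluded when the 90% CI for the Test/Reference ratio
    (equivalent to two one-sided tests, each at alpha = 0.05) lies
    entirely within (lower, upper).
    """
    z = NormalDist().inv_cdf(1 - alpha)  # 1.645 for alpha = 0.05
    ci = (exp(log_ratio_hat - z * se), exp(log_ratio_hat + z * se))
    equivalent = lower < ci[0] and ci[1] < upper
    return equivalent, ci

# Hypothetical estimated ratio of 1.05 with SE 0.06 on the log scale
ok, (lo, hi) = tost_ratio(log_ratio_hat=log(1.05), se=0.06)
```

Working on the log scale makes the asymmetric (0.80, 1.25) margins symmetric, which is the usual motivation for log-transforming ratio endpoints.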

Statistical Considerations for Clinical Studies Supporting Biosimilar Applications
View Presentation Thomas E Gwise, FDA, CDER

Reinforcement of the Biosimilarity Evaluation in the Phase 3 Study by Incorporating the Phase 1 PK Similarity Evidence - a Bayesian Approach
View Presentation Nan Zhang, Amgen

Some Thoughts on Drug Interchangeability
View Presentation Shein-Chung Chow, Duke University

 

PS2e Parallel Session: Current practice and challenges in utilizing existing data in pre-market evaluation of medical devices

09/29/16
2:45 PM - 4:00 PM
Lincoln 5

Organizer(s): Theodore Lystig, Medtronic, Inc.; Ying Yan, Incyte; Ying Yang, FDA/CDRH; Yu Zhao, FDA/CDRH

Chair(s): Yunling Xu, FDA/CDRH

In the medical device field, it is sometimes impractical and/or unethical to conduct large-scale randomized, well-controlled clinical trials. Recently, there has been growing interest in utilizing existing data, such as historical data collected in previous medical device trials and high-quality registry data, to facilitate the pre-market evaluation of new medical devices. In this session, we will present, with examples, current practice in prospectively designing, conducting, and reporting medical device clinical studies that utilize existing data in the pre-market setting. We will also discuss the challenges and current considerations in minimizing bias and making reliable statistical inference in such clinical studies.

Utilizing Existing Data for Pre-market Medical Device Clinical Studies
View Presentation Lilly Q Yue, FDA/CDRH

Real World Data: Generation, Analysis, and Interpretation
View Presentation Theodore Lystig, Medtronic, Inc.

The Use of Real World Evidence in the Premarket Regulatory Environment
View Presentation Greg Campbell, GCStat Consulting

 

PS2f Parallel Session: Robust Decision Making In Early Stage Clinical Development

09/29/16
2:45 PM - 4:00 PM
Lincoln 6

Organizer(s): Huanyu (Jade) Chen, FDA; Dalong Huang, FDA/CDER; Xiaobai Li, MedImmune/AstraZeneca; Yanli Zhao, MedImmune/AstraZeneca

Chair(s): Yanli Zhao, MedImmune/AstraZeneca

It is well established that the Phase 3 clinical trial failure rate remains very high in many therapeutic areas. As a result, financial and intellectual resources that could have been applied to other therapies or experiments often yield minimal return. One driver of this paradigm is that major investment and strategy decisions are often made on less robust early- and mid-stage data containing considerable uncertainty, while trying to address multiple important research questions. Statisticians play an important role in helping to quantify decision making and risk from either a development or a regulatory perspective. In this session, industry, academic, and FDA researchers will share perspectives and examples related to decision making in early-stage development. Topics covered will include: (1) a novel method for optimizing Phase 2/3 seamless designs based on a user-specified flexible utility function, where the decision made after the Phase 2 component is which of two subpopulations (e.g., defined by a biomarker or risk score at baseline) to enroll, or which of two treatments to include, in Phase 3; (2) a prospectively defined, robust, and statistically grounded go/no-go (interim or final) decision-making framework linked to a target product profile and allowing a personalized healthcare approach, driven by operating characteristics that inform Phase 2 study design; and (3) regulatory perspectives on success factors related to decision making at key points during early-stage development. Case studies will be reviewed and discussed.

Regulatory Perspective on Decision Making at Early Phase Product Development
View Presentation Shiowjen Lee, FDA

A paradigm for Go/No-Go Decision making in Phase 2
View Presentation Erik Pulkstenis, MedImmune

Bayesian Approach for Decision Making in Early Clinical Development based on Multiple Endpoints
View Presentation Charlie Cao, Takeda

 

PS3a Parallel Session: Personalized Medicine: How Bayesian Subgroup Analysis Plays Its Role

09/29/16
4:15 PM - 5:30 PM
Thurgood Marshall North

Organizer(s): Wei-chen Chen, FDA; Shuya Lu, FDA; David Ohlssen, Novartis; Ravi Varadhan, The Johns Hopkins Center on Aging and Health

Chair(s): Boguang Zhen, FDA CBER

How subgroups of patients respond heterogeneously to treatment plays an important role in personalized medicine. However, identifying distinct subgroups and interpreting pre-planned or post-hoc exploratory subgroup analyses for the development of individualized treatment rules, while recommended, remains challenging. The risks of overlooking an important subgroup and of making a decision based on a false discovery are crucial. Meanwhile, limited sample size, multiplicity, lack of power, and the need to borrow information make solutions far from easy. From a statistical perspective, these problems can be divided between the need to estimate an effect of interest while accounting for potential selection or random-high bias, and the need to deal with multiplicity when examining numerous potential signals or subgroups. For the former, the Bayesian framework provides the ability to incorporate priors with a degree of skepticism, a natural framework for models with exchangeability or shrinkage, and the possibility of forming realistically complex models that synthesize information from a variety of sources. For the latter, Bayesian approaches to hypothesis testing and extensions of the false discovery rate provide potential techniques for handling multiplicity. In our session, we will discuss these challenges and solutions in detail, followed by a series of illustrative examples from clinical studies. The session will feature three speakers from industry, FDA, and academia. The first presentation provides an overview of subgroup problems in medical product development, with a brief review of key techniques from the Bayesian framework. The second presentation introduces an empirical Bayesian meta-analytic predictive approach to quantify treatment effects among different subgroups by addressing the random high or low problem of pre-specified subgroup treatment effects in the context of confirmatory clinical trials.
The third presentation introduces a Bayesian approach that utilizes a potentially large number of patient-specific covariates to identify subgroups of patients that may receive substantial treatment benefit, and to investigate the extent of treatment effect differences across study subjects.
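The shrinkage idea underlying such exchangeable-subgroup models can be sketched in a few lines. This is a generic empirical-Bayes normal-means illustration with made-up numbers, not the meta-analytic predictive method of the second talk:

```python
from statistics import mean

def shrink_subgroup_effects(estimates, ses, tau):
    """Shrink subgroup estimates toward the overall mean (normal model).

    estimates : per-subgroup treatment-effect estimates
    ses       : their standard errors
    tau       : assumed between-subgroup SD (hypothetical; in practice
                estimated from the data or given a prior)
    Returns posterior means under an exchangeable normal model.
    """
    overall = mean(estimates)  # simple stand-in for the prior mean
    shrunk = []
    for est, se in zip(estimates, ses):
        w = tau**2 / (tau**2 + se**2)  # weight on the subgroup's own data
        shrunk.append(w * est + (1 - w) * overall)
    return shrunk

# Three hypothetical subgroups: extreme estimates are pulled inward
shrunk = shrink_subgroup_effects([0.5, 0.1, -0.2], [0.2, 0.2, 0.2], tau=0.15)
```

Pulling the most extreme subgroup estimates toward the overall effect is exactly the guard against "random highs" that the abstract describes.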

Bayesian approaches to subgroup analysis and selection problems in drug development
View Presentation David Ohlssen, Novartis

Precision Medicine: How to Assess Subgroup Effects with Empirical Meta-analytical Predictive Priors in Clinical Trials
View Presentation Judy X Li, FDA

Bayesian Nonparametric Accelerated Failure Time Models for Analyzing Heterogeneous Treatment Effects
View Presentation Nicholas Henderson, Johns Hopkins University

 

PS3b Parallel Session: Multiple endpoint evaluation for medical devices: analyses, labeling, and claims

09/29/16
4:15 PM - 5:30 PM
Thurgood Marshall South

Organizer(s): Gene Anthony Pennello, FDA/CDRH; Zhiying Qiu, Sanofi; Alicia Y. Toledano, Biostatistics Consulting, LLC

In a pivotal medical device clinical study in which multiple endpoints are evaluated, questions often arise as to which endpoint analyses need to be controlled for the effects of multiplicity and which analysis results should be provided in product labeling. These important questions arise in pre-market clinical studies of medical devices, both therapeutic and diagnostic. The endpoints may be organized according to co-primary aims, secondary aims about which the company may or may not hope to make marketing claims, analyses of important subgroups, and supportive analyses. Once a determination is made regarding which analyses are inferential (intervals and/or hypothesis tests) and which are descriptive (measures of center and spread), the goal is to pre-specify multiplicity adjustments for statistical inferences that are both flexible and reproducible and that meet clinical and regulatory objectives. Graphical methods and Bayesian methods can be useful toward achieving this goal. Also, product labeling should balance caution about making unsupported claims with transparency in reporting all of the evidence that a study does provide. In this session, speakers from FDA and industry will present research and perspectives on these questions for diagnostic and therapeutic devices.
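As one concrete example of the kind of pre-specified familywise-error adjustment the abstract mentions, here is a sketch of the Holm step-down procedure (the p-values are invented; the graphical and Bayesian methods discussed in the talks are more flexible generalizations of this idea):

```python
def holm_adjust(pvalues):
    """Holm step-down adjusted p-values (controls the familywise error rate).

    Each p-value is multiplied by the number of hypotheses not yet
    rejected at its rank, with monotonicity enforced by a running max.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    adj = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvalues[i])
        adj[i] = min(1.0, running_max)
    return adj

# Three hypothetical endpoint p-values
adjusted = holm_adjust([0.01, 0.04, 0.03])
```

An adjusted p-value below the study alpha then supports an inferential claim for that endpoint, while the remaining analyses stay descriptive.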

Multiplicity Considerations in Medical Device Clinical Studies
View Presentation Zengri Wang, Medtronic

Multiple Hypothesis Testing: Graphical methods for sequential rejective tests and tests for correlated sensitivity statistics with application to diagnostic imaging devices
View Presentation Berkman Sahiner, USFDA/CDRH/OSEL

Control Of Type I Error With Hierarchical Modeling of Multiple Endpoints
View Presentation Scott Berry, Berry Consultants

 

PS3c Panel Session: Moving Pharmacometrics and Statistics Beyond a Marriage of Convenience - Improving Discipline Synergy and Drug Development Decision Making

09/29/16
4:15 PM - 5:30 PM
Thurgood Marshall East

Organizer(s): Alan Hartford, AbbVie; Misook Park, FDA; Dionne L. Price, CDER, FDA; Matthew David Rotelli, Eli Lilly and Company

Chair(s): Alfred H. Balch, University of Utah, Department of Pediatrics

Panelist(s): Jeffrey Barrett, Sanofi; Jeffry Florian, FDA/CDER; France Mentre, Pr of Biostatistics

Pharmacometrics and biostatistics play vital complementary roles in bringing together prior knowledge of physiological and pharmacological processes (the effect of disease on endpoints, the mechanism of action for a drug class, evidence of effectiveness, etc.) for hypothesis testing, evidence generation, and robust regulatory decision making. These two disciplines interact throughout all phases of drug development: first-in-human studies, dose-ranging studies in patients, pivotal efficacy studies, post-marketing assessments, and dosing decisions in pediatric and special populations. This session will highlight examples of such interactions from regulatory experience, focusing on cooperation between the two disciplines as they work to advance public health. Transforming Drug Development: Benefits of Collaboration between Statistics and Pharmacometrics in Getting Better Therapies to Patients Faster. Renowned researchers at the interface of modeling and statistics from industry, FDA, and academia will discuss the interaction of pharmacometric modeling and biostatistical approaches to pharmaceutical data. Key discussion areas include: how good collaboration can affect internal and external decision making and also drive learning about drug properties; and the key tools and terminology used by each discipline and how they can enable or hinder collaboration. The session will focus on the impact on developing drugs in, for example, first-in-human studies, Phase II/III dose finding, thorough QT studies, integrated safety, and special populations. Productive and unproductive experiences will be shared, as well as learnings and ideas about moving to a more productive and collaborative world.

An Industry Perspective on Statistics and Pharmacometrics
View Presentation Jeffrey Barrett, Sanofi

Statisticians and Pharmacometricians, what they can still learn from each other
View Presentation France Mentre, Pr of Biostatistics

Pharmacometrics and Biostatistics Interactions at the FDA
View Presentation Jeffry Florian, FDA/CDER

 

PS3d CMC Session: FDA guidance on statistical approaches for evaluation of analytical similarity

09/29/16
4:15 PM - 5:30 PM
Thurgood Marshall West

Organizer(s): Aili Chen, Pfizer; Tsai-Lien Lin, FDA; Steven J Novick, GlaxoSmithKline; Yu-Ting Weng, FDA

Chair(s): Stan Altan, JNJ; Cassie Dong, FDA

The development of biosimilars is an emerging area. Although several regulatory guidelines have been issued, the associated statistical methodologies continue to evolve. For example, statistical accommodations for the limited availability of biological materials and lots have been developed. In this session, statistical challenges and opportunities concerning biosimilar development will be highlighted. In particular, the thought process that led to the FDA recommendation of the tiered approach to analytical similarity testing and equivalence margin setting will be discussed. This session consists of two presentations: one expert statistician representing an industry perspective and one representing a regulatory perspective, who will together provide fresh insight into the application of, and issues surrounding, statistical approaches to analytical similarity.

 

PS3e Parallel Session: Statistical Issues and Challenges in Regulatory Animal Drug Studies

09/29/16
4:15 PM - 5:30 PM
Lincoln 5

Organizer(s): Jing Li, Boehringer Ingelheim Vetmedica, Inc.; Kyunghee K Song, FDA/CVM; Christopher I. Vahl, Kansas State University; Xiongce Zhao, FDA/CVM

Chair(s): Virginia Recta, FDA/CVM

Many of the statistical issues encountered in studies intended for animal drug approvals are similar to those in regulatory human clinical trials. However, there are also statistical issues and challenges unique to regulatory animal drug studies, often related to various experimental designs to support drug indications for specific animal species. In this session, we will present animal drug studies reviewed by the Center for Veterinary Medicine (CVM) and discuss statistical issues and challenges associated with these studies.

An Assessment of Denominator Degrees of Freedom Approximations in the Analysis of Binary Endpoints in Veterinary Clinical Efficacy Trials
View Presentation Christopher I. Vahl, Kansas State University

Fixed or Random and the Effect of Zero Variance Components on Mixed Models Analyses
View Presentation George A. Milliken, Milliken Associates, Inc

Experimental Design and Analysis of Efficacy studies for Anti-Parasitic Drug Products
View Presentation Sean Patrick Mahabir, Zoetis

 

PS3f Parallel Session: ICH E14 and Concentration-Response Modeling

09/29/16
4:15 PM - 5:30 PM
Lincoln 6

Organizer(s): Qianyu Dang, FDA/CDER; Dalong Huang, FDA/CDER; Qi Tang, AbbVie; Jiao Yang, Takeda

Chair(s): Qianyu Dang, FDA/CDER

The ICH E14 guidance requires drug sponsors to complete a 'thorough QT/QTc (TQT) study' to evaluate the effect of a drug on cardiac repolarization. Exposure-response (ER) model analysis of concentration-QTc data plays an important role in E14 and has recently been suggested as an alternative to the E14 primary analysis and the TQT study. This session discusses statistical issues in concentration-QTc modeling, presents case studies, and makes recommendations. If any updates to ICH E14 become available prior to the workshop, we will discuss those updates as well.
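
As a concrete illustration of the kind of ER analysis this session discusses, a common simplification is a linear model of baseline-adjusted QTc change versus drug concentration, with the prediction at a reference exposure compared against the 10 ms threshold of regulatory concern. The sketch below uses made-up data and a normal approximation in place of the t quantile:

```python
import math

def qtc_er_upper_bound(conc, dqtc, c_ref, z=1.6449):
    """Minimal exposure-response sketch for concentration-QTc data:
    ordinary least squares of delta-QTc on concentration, then the
    one-sided upper 95% bound (normal approximation) of the predicted
    delta-QTc at a reference concentration c_ref (e.g., the
    geometric-mean Cmax). A negative TQT-style conclusion asks this
    bound to stay below 10 ms."""
    n = len(conc)
    cbar = sum(conc) / n
    ybar = sum(dqtc) / n
    sxx = sum((c - cbar) ** 2 for c in conc)
    slope = sum((c - cbar) * (y - ybar)
                for c, y in zip(conc, dqtc)) / sxx
    intercept = ybar - slope * cbar
    resid = [y - (intercept + slope * c) for c, y in zip(conc, dqtc)]
    s2 = sum(r * r for r in resid) / (n - 2)    # residual variance
    pred = intercept + slope * c_ref
    se_pred = math.sqrt(s2 * (1 / n + (c_ref - cbar) ** 2 / sxx))
    return pred, pred + z * se_pred

# Hypothetical mean delta-QTc (ms) at increasing plasma concentrations.
conc = [0, 20, 40, 60, 80, 100, 120, 140]
dqtc = [0.1, 0.8, 1.5, 2.7, 3.0, 4.2, 4.8, 5.9]
pred, ub = qtc_er_upper_bound(conc, dqtc, c_ref=100)
```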

Some Statistical Issues in Concentration-QTc Modeling
View Presentation Dalong Huang, FDA/CDER

Enabling Robust Assessment of QTc Prolongation in Early Phase Clinical Trials
View Presentation Devan V Mehrotra, Merck Research Laboratories

Discussant(s): Christine Garnett, FDA

 

Fri, Sep 30

PS4a Parallel Session: Protocol Deviations and Prescreening Bias Handling in Clinical Trials of Personalized Medicine

09/30/16
8:30 AM - 9:45 AM
Thurgood Marshall North

Organizer(s): Pablo E. Bonangelino, FDA/CDRH/OSB; Eva R Miller, Independent Biostatistical Consultant; Weining Z Robieson, Abbvie; Laura M Yee, FDA, CDRH

The gold standard in clinical trial practice is to follow what is pre-specified in the study protocol. However, there are situations where deviations from the protocol occur. For example, the investigator may develop new inclusion/exclusion criteria during the course of the trial. It is difficult to maintain integrity in trial conduct when trial implementation deviates from the protocol: the type I error rate may not be controlled as planned, and bias could be introduced. In clinical trials for personalized medicine, prescreening tests are used to increase enrollment, but their use may introduce bias that inflates the reported drug efficacy. In this session, we will discuss the issues of protocol deviations and present approaches to minimize bias in clinical trials through study design, monitoring, and data analysis.

Deviations, Violations, and Other Departures from the Plan
View Presentation Martin King, Senior Director, AbbVie Clinical Statistics

Protocol deviation in medical device clinical trials
View Presentation Zhiheng Xu, FDA/CDRH

Regulatory Implications of Protocol Deviations and How to Avoid the Pitfalls
View Presentation Carolyn Finkle, InVentiv Health Clinical

 

PS4b Parallel Session: MCP-Mod: recent advances in methodology and application

09/30/16
8:30 AM - 9:45 AM
Thurgood Marshall South

Organizer(s): Lei Gao, Sanofi; Yahui Hsueh, FDA; Bo Li, FDA; Oleksandr Sverdlov, EMD Serono, Inc.

Chair(s): Jose' C. Pinheiro, Johnson & Johnson; An Vandebosch, Johnson & Johnson

How to identify the correct dose for future phase III trials and how to make go/no-go decisions have always been core questions in early drug development. Traditionally, either comparisons of multiple doses or dose-response modeling methods have been used in dose-finding studies. Bretz, Pinheiro, and Branson (2005) proposed the MCP-Mod framework, combining multiple comparison procedures and modeling techniques in dose-ranging studies. Since then, the MCP-Mod methodology has gained increasing popularity in clinical trial practice. Notably, the EMA CHMP published a positive qualification opinion on the methodology in 2014, attesting to its adequacy for designing and analyzing dose-finding studies. This session will feature speakers and discussants from industry and FDA discussing recent progress in MCP-Mod, including advances in the methodology and its range of application in dose-ranging studies. Reference: Bretz, F., Pinheiro, J. C., and Branson, M. (2005). Combining multiple comparisons and modeling techniques in dose-response studies. Biometrics 61: 738-748.
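
The 'MCP' step of MCP-Mod can be illustrated with a small sketch: for each candidate dose-response shape, an optimal contrast of the observed dose-group means is formed, and a dose-response signal is claimed if the maximum contrast statistic exceeds a multiplicity-adjusted critical value. The sketch below (hypothetical data, balanced design, pooled-SD simplification in Python rather than the DoseFinding R package that implements the full method) shows contrast statistics for Emax and linear candidate shapes:

```python
import math

def mcp_contrast_stat(ybar, s, n, model_means):
    """Contrast t-statistic for one candidate shape (balanced design).
    For equal group sizes, the optimal contrast is the centered,
    unit-scaled vector of the shape's predicted means."""
    mbar = sum(model_means) / len(model_means)
    c = [m - mbar for m in model_means]
    norm = math.sqrt(sum(ci * ci for ci in c))
    c = [ci / norm for ci in c]
    num = sum(ci * yi for ci, yi in zip(c, ybar))
    den = s * math.sqrt(sum(ci * ci for ci in c) / n)  # = s / sqrt(n)
    return num / den

doses = [0, 0.5, 1, 2, 4]
ybar = [0.2, 0.4, 0.9, 1.2, 1.4]        # observed group means
s, n = 1.0, 60                          # pooled SD, per-arm sample size
emax = [d / (1 + d) for d in doses]     # candidate Emax shape (ED50 = 1)
linear = [d / 4 for d in doses]         # candidate linear shape
t_emax = mcp_contrast_stat(ybar, s, n, emax)
t_lin = mcp_contrast_stat(ybar, s, n, linear)
```

In the full method the maximum of these statistics is compared against a critical value from the joint multivariate t distribution of the contrasts, and the 'Mod' step then fits the best-supported model(s) for dose estimation.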

Program-wise trial simulation assessing design and analysis options from proof of concept to dose ranging
View Presentation Zhaoling Meng, Sanofi

Closed MCP-Mod
View Presentation Bjorn Bornkamp, Novartis AG

A Model-Based Permutation Test for Evaluation of Dose-Response Signal
View Presentation Min Fu, Janssen R&D

Discussant(s): Vikram Sinha, FDA

 

PS4c Parallel Session: Advance the use of patient reported outcome measures (PROMs) in regulatory decision making

09/30/16
8:30 AM - 9:45 AM
Thurgood Marshall East

Organizer(s): Chul H Ahn, FDA-CDRH; Qi Jiang, Amgen; Wenting Wu, AstraZeneca; Zhiwei Zhang, FDA/CDRH

Chair(s): Chul H Ahn, FDA-CDRH

Patient-reported outcome measures (PROMs) have become increasingly important in measuring the effectiveness and safety of a medical device or drug. More and more phase 3 clinical trials have PROMs as primary, secondary, and exploratory endpoints. This session will discuss (1) PROMs used in past clinical trials across therapeutic and diagnostic areas, and (2) statistical methodologies promoting the use of PROMs in regulatory decision making.

Advance the use of patient reported outcome measures (PROMs) in regulatory decision making
View Presentation Xin Fang, CDRH, FDA

PROMs and Clinically Meaningful Effects
View Presentation Steven Snapinn, Amgen

 

PS4d Parallel Session: Immune-related Clinical Endpoints for Cancer Immunotherapy – Are they ready for prime time?

09/30/16
8:30 AM - 9:45 AM
Thurgood Marshall West

Organizer(s): Thomas E Gwise, FDA, CDER; Jeffrey L Joseph, Chiltern International; Daniel Li, Juno Therapeutics; Yuqun (Abigaill) Luo, FDA

Chair(s): Jeffrey L Joseph, Chiltern International

The recent approvals of novel cancer immunotherapeutic agents, and the promising results these agents have shown in recent clinical trials, have generated a large amount of interest and excitement within the oncology community, as seen at the 2015 ASCO Annual Meeting. In particular, encouraging clinical trial data have emerged from a class of immune checkpoint modulators, namely antibodies against cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and against the T-cell co-receptor programmed cell death protein 1 (PD-1) and its ligand PD-L1. Notable recently FDA-approved immunotherapeutic agents with indications in unresectable or metastatic melanoma and metastatic non-small cell lung cancer (NSCLC) are: (1) Yervoy (ipilimumab), a CTLA-4 blocking antibody (approved 2011); (2) Keytruda (pembrolizumab), a PD-1 blocking antibody (approved 2014, 2015); and (3) Opdivo (nivolumab), a PD-1 blocking antibody (approved 2014, 2015). As opposed to chemotherapy, which acts directly on the tumor, cancer immunotherapies exert their effects on the immune system and exhibit new kinetics: modulation of the cellular immune response followed by a change in tumor burden or patient survival. Cancer immunotherapies promote the antitumor response either by actively inducing the patient's immune system to recognize the tumor cells, or by bypassing the patient's immune system and directly administering components of the immune system to the patient. Because of this different mechanism of action, immunotherapies may produce atypical response patterns, and traditional response criteria and endpoints developed for cytotoxic drugs may not be well suited to evaluating their benefit. One novel endpoint consideration for immunotherapy trials is to capture the clinical patterns of antitumor response through immune-related response criteria (irRC), adapted from the RECIST and WHO response criteria.
In addition, it is well recognized that different statistical methods for trial design and analysis are needed to address the unique challenges of immunotherapy clinical trials, for example, delayed survival benefit and long-term survival for cured patients. In this session, we will first review the clinical endpoint of tumor burden assessment and why it needs to be assessed by both the RECIST criteria and the irRC. We will also discuss how to 'fairly' compare responses in tumor burden and progression-free survival when the comparator antitumor agent is a chemotherapy. Next, we will review overall survival, the gold standard for approval of anti-cancer agents. Randomized immunotherapy clinical trials commonly show a delayed separation of the Kaplan-Meier (KM) curves (4-8 months) and long-term survival ('cured' patients). When these situations occur, the assumptions of proportional hazards and an exponential distribution are violated. One possible approach is to model the curve as two components [A Hoos, 2012]. Another suggestion is to use a cure rate model and a weighted log-rank test to account for the long-term survival and the delayed separation of the KM curves [TT Chen, 2013; T Saegusa, 2014]. The delayed clinical effect and long-term survival can also lead to loss of statistical power or prolonged study duration; therefore, the initial sample size should be carefully planned, and a sample size reassessment at an interim analysis may be valuable. Other topics that will be addressed are: (1) possible intermediate or surrogate endpoints that could be used for accelerated clinical development of cancer immunotherapy agents, and (2) how to conduct an interim analysis for cancer immunotherapy agents. For the interim analysis, consideration is warranted of the timing/information fraction, the necessity, and the type of interim analysis, to account for the delayed clinical effect and long-term survival.
References: A Hoos. Evolution of end points for cancer immunotherapy trials. Annals of Oncology, 23 (suppl. 8): viii47-viii52, 2012. A Hoos, S Topalian, TT Chen, R Ibrahim, L Shi, S Rosenberg, J Allison, M Gorman, R Canetta. Facilitating the Development of Immunotherapies: Intermediate Endpoints for Immune Checkpoint Modulators. 2013 Friends-Brookings Conference on Clinical Cancer Research, presented Nov. 2013. TT Chen. Statistical issues and challenges in immuno-oncology. Journal for ImmunoTherapy of Cancer, 1: 18, 2013. Guidance for Industry: Clinical Considerations for Therapeutic Cancer Vaccines. FDA: CBER, October 2011. Statistical Review and Evaluation: BLA 125513 MK-3475 (Pembrolizumab). 2014. Statistical Review and Evaluation: BLA 125554 Opdivo (Nivolumab). 2015. T Saegusa, C Di, YQ Chen. Hypothesis Testing for an Extended Cox Model with Time-Varying Coefficients. Biometrics 70: 619-628, 2014.
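
The delayed-separation and cure-fraction pattern described above is easy to reproduce by simulation, which is also how such designs are typically evaluated. The sketch below (all rates and fractions hypothetical) simulates a control arm with exponential survival and a treated arm whose hazard drops only after a 6-month delay, with a 15% 'cured' fraction; the survival curves nearly coincide early and separate late:

```python
import random

def sim_immunotherapy_arm(n, lam, hr=1.0, delay=0.0, cure=0.0, rng=None):
    """Simulate event times with no treatment effect before `delay`,
    hazard ratio `hr` afterwards, and a `cure` fraction that never has
    the event (administratively censored at a far horizon)."""
    rng = rng or random.Random(0)
    times = []
    for _ in range(n):
        if rng.random() < cure:
            times.append(60.0)        # "cured": event-free at horizon
            continue
        t = rng.expovariate(lam)      # time under the pre-delay hazard
        if t > delay:                 # memorylessness: rescale the tail
            t = delay + (t - delay) / hr
        times.append(t)
    return times

def surv(ts, t):
    """Empirical survival fraction at time t."""
    return sum(x > t for x in ts) / len(ts)

rng = random.Random(2016)
control = sim_immunotherapy_arm(5000, lam=1 / 12.0, rng=rng)
treated = sim_immunotherapy_arm(5000, lam=1 / 12.0, hr=0.6, delay=6.0,
                                cure=0.15, rng=rng)
```

At month 3 the two arms are nearly indistinguishable, while by month 24 the treated arm shows a large absolute survival advantage, exactly the pattern that breaks a proportional-hazards analysis.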

Design and Analysis considerations due to delayed treatment effects observed with Immuno-Oncology agents
View Presentation Brent McHenry, Bristol-Myers Squibb

Trial-level and patient-level analyses to study the association between the efficacy endpoints ORR, PFS, and OS for immunotherapies
View Presentation Sirisha Mushti, FDA, CDER

Assessing a treatment effect on remaining life in presence of long-term cure fraction
View Presentation Ying Qing Chen, Fred Hutchinson Cancer Research Center

 

PS4e Parallel Session: Agreement and Precision studies

09/30/16
8:30 AM - 9:45 AM
Lincoln 5

Organizer(s): Hope Knuckles, Abbott Laboratories; Vicki Petrides, Abbott Laboratories; Changhong Song, FDA; Xuan Ye, FDA

Chair(s): Vicki Petrides, Abbott Laboratories

In-vivo diagnostic devices are often cleared through the 510(k) pathway, in which the subject device is compared to an existing predicate device that acts as a comparator. This involves the device output, whether it is a numerical score, continuous-valued, ordinal, or qualitative, and requires the predicate device to provide an output on the same scale as the subject device. Ordinarily, to show substantial equivalence, the subject device output has to show good agreement with the output from the predicate device and similar precision. This session will discuss agreement and precision studies.
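
One widely used agreement measure for paired quantitative outputs is Lin's concordance correlation coefficient (CCC), which penalizes both poor correlation and location or scale shifts relative to the 45-degree line. A minimal sketch with hypothetical paired readings from a subject and a predicate device:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient for paired readings:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    using population (1/n) moments as in Lin's original definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((xi - mx) ** 2 for xi in x) / n
    sy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# Hypothetical paired outputs (same analyte, same samples).
subject = [10.1, 12.0, 14.2, 15.9, 18.3, 20.1]
predicate = [10.4, 11.8, 14.0, 16.3, 18.0, 20.5]
```

A CCC near 1 supports agreement; unlike the Pearson correlation, a systematic bias between devices pulls the CCC below 1 even when the readings are perfectly correlated.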

Agreement Statistics Based on Total Error Per CLSI EP21-A And CLIA 1171 Using Web Tools
View Presentation Lawrence I Lin, JBS Consulting Services Inc.

Statistical Considerations in the Design and Analysis of Precision Study for in vivo Diagnostics Devices
View Presentation Haiwen Shi, FDA/CDRH

Exact Method for Statistical Equality and Equivalence of Coefficients of Variation
View Presentation Peter J. Costa, Hologic

 

PS4f CMC Session: Setting Acceptance Criteria for Assay Transfer

09/30/16
8:30 AM - 9:45 AM
Lincoln 6

Organizer(s): Sungwoo Choi, FDA Center for Drug Evaluation and Research; Fu Rong, FDA; Amy Zhang, GSK

Chair(s): Meiyu Shen, FDA; Harry Yang, MedImmune LLC

Panelist(s): Andrew Rugaiganisa, Pfizer; Yaji Xu, FDA/CDRH/OSB/DBS/DSBII; Binbing Yu, MedImmune; Lanju Zhang, Abbvie

Acceptance criteria are an integral part of control strategies to ensure product quality, and the establishment of acceptance limits is a regulatory requirement. Appropriate application of statistical methods is the key to robust acceptance criteria for assay transfer and for comparability studies following process changes. This session will focus on several advanced statistical methods for setting acceptance criteria so that analytical methods and new manufacturing processes can be demonstrated to be fit for their intended purpose.

 

Poster Session

09/30/16
9:45 AM - 10:30 AM

Early evaluation of efficacy in group-sequential clinical trials with two time-to-event outcomes
Toshimitsu Hamasaki, National Cerebral and Cardiovascular Center

Ratio of Means vs Difference of Means as Measures of Average Bioequivalence, Non-inferiority, and Superiority
Wanjie Sun, FDA

Individualized Dose for Optimizing Pharmacokinetic Exposure Via Joint Modeling for AUC and Cmax
Bifeng Ding, Abbvie

A Bayesian Approach to Responder Analysis
Greg Cicconetti, AbbVie Inc.

Bayesian Hierarchical Joint Modeling Using Skew-Normal/Independent Distributions
Geng Chen, GlaxoSmithKline

Individual Bioequivalence and Interchangeability: Graphical Approaches
Tie-Hua Ng, FDA/CBER

Clinical Trial Transparency: An Industry Perspective and Case Studies
Charles L Liss, AstraZeneca Pharmaceuticals

A Note on Test for Non-inferiority on a Left Truncated Normal Endpoint in Studies of Drug Eluting Stents
Yunling Xu, FDA/CDRH

Tree-based recursive partitioning methods for treatment selection
Un Jung Lee, National Center for Toxicological Research

Assessment of precision in companion diagnostics: past, present, and future methods
Crystal Schemp, Roche Tissue Diagnostics

The Impact of Clinical Design on patient participation in a hypothetical Clinical Trial of the treatment for DCIS
Lea Rebecca Meir, UCF College of Medicine

A study of the extension of the regulatory three batch stability approach to larger number of batches
Sungwoo Choi, FDA Center for Drug Evaluation and Research

A Comparison of Primary Analysis Methods for Chronic Pain data
Jennifer Nezzer, Premier Research

Subgroup Analyses for Count Data using Bayesian Empirical Meta-Analytical Predictive Priors
Wei-chen Chen, FDA

Quantitative decision making in early phases of drug development with correlated endpoints
Shyla Jagannatha, Janssen Pharmaceuticals, Inc.

Genome-Wide Association Study in Asthma Subjects from Dulera Phase III Studies
Lingkang Huang, Merck & Co.

Interim Reports for Data Monitoring Committees of Clinical Trials
Ryan Zea, University of Wisconsin - Madison, Department of Biostatistics and Medical Informatics, SDAC

Statistical Challenges and Considerations in Algorithm Development for Predictive Markers
Jie Pu, Ventana Medical Systems, Inc.

An Adaptive Seamless Phase II/III Design for Alzheimer’s Trials with Subpopulation Selection
Richann Liu, Pfizer

Evaluation of Adverse Events from Pooled Studies of Different Durations
Ellen S Snyder, Merck & Co., Inc

 

PS5a Parallel Session: New Perspectives and Approaches in Precision Medicine

09/30/16
10:45 AM - 12:00 PM
Thurgood Marshall North

Organizer(s): Xiaohong Huang, Vertex Pharmaceuticals; Zhen Jiang, CBER/FDA; Haiwen Shi, FDA/CDRH; Yanli Zhao, MedImmune/AstraZeneca

Chair(s): Zhen Jiang, CBER/FDA

President Obama announced the Precision Medicine Initiative® (PMI) in his 2015 State of the Union address, and called for a $215 million investment in his 2016 budget. The goal of precision medicine is to develop individualized treatment rules that maximize treatment benefit and avoid possible harm for each individual patient. This can be done by taking into account the heterogeneity of treatment outcomes in patient characteristics such as genes, environment and lifestyle. In this session, we will explore some new perspectives and approaches in precision medicine.

Decision Trees for Precision Medicine
View Presentation Heping Zhang, Yale School of Public Health

Statistical Methods for Bridging Studies of Companion Diagnostic Devices in Precision Medicine
View Presentation Meijuan Li, US Food and Drug Administration

Adaptive biomarker subpopulation and tumor type selection in Phase III oncology trials for personalized medicines
View Presentation Cong Chen, Merck

 

PS5b Parallel Session: Considerations in pediatric trial designs and analysis

09/30/16
10:45 AM - 12:00 PM
Thurgood Marshall South

Organizer(s): Meehyung Cho, Sanofi; Hui Quan, sanofi; Yun Wang, DB5/OB/OTS/CDER/FDA; Yute Wu, FDA

Pediatric trials are often conducted to obtain extended marketing exclusivity or to satisfy regulatory requirements. There are many challenges in designing and analyzing pediatric efficacy and safety trials, arising from special ethical issues and the relatively small accessible patient population. In some therapeutic areas, the application of conventional phase 3 trial designs to pediatrics is simply not realistic. In this session, speakers from industry, regulatory agencies, and academia will share their thoughts and research results on methodologies and applications in pediatric efficacy and safety trials. These include weighted combination methods utilizing available adult data, such as James-Stein shrinkage estimates, empirical shrinkage estimates, and Bayesian methods, as well as modeling and simulation approaches. In addition, applying the concept of consistency assessment used in multi-regional trials to the design and analysis of a pediatric trial will be discussed. Finally, the application of adaptive designs to pediatric multisite comparative trials will be considered.
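
The simplest of the weighted-combination methods mentioned above is precision-weighted shrinkage of the pediatric estimate toward the adult estimate, which also has a Bayesian normal-normal reading (adult data as the prior). A minimal sketch with hypothetical estimates and standard errors:

```python
def shrink_to_adult(ped_est, ped_se, adult_est, adult_se):
    """Precision-weighted combination of a pediatric effect estimate
    with an adult estimate: each source is weighted by its inverse
    variance, so the small pediatric trial is shrunk toward the
    better-estimated adult effect. Returns (estimate, standard error).
    """
    w_ped = (1 / ped_se ** 2) / (1 / ped_se ** 2 + 1 / adult_se ** 2)
    est = w_ped * ped_est + (1 - w_ped) * adult_est
    se = (1 / (1 / ped_se ** 2 + 1 / adult_se ** 2)) ** 0.5
    return est, se

# Hypothetical: noisy pediatric estimate, precise adult estimate.
est, se = shrink_to_adult(ped_est=0.30, ped_se=0.25,
                          adult_est=0.45, adult_se=0.10)
```

The combined estimate lands between the two sources (closer to the precise adult one) with a smaller standard error than either alone; the James-Stein and empirical-shrinkage variants the session describes estimate the weight from the data rather than fixing it.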

Considerations in pediatric trial designs and analysis
Meehyung Cho, Sanofi

Challenges and Strategies in Assessing Pediatric Efficacy
View Presentation Min Min, FDA

Utility of Adaptive Designs for Pediatric Multisite Comparative Trials
View Presentation Mi-Ok Kim, Cincinnati Children's

 

PS5c Panel Session: Developing PRO Instruments in Clinical Trials: Issues, Considerations, and Solutions

09/30/16
10:45 AM - 12:00 PM
Thurgood Marshall East

Organizer(s): Michelle Campbell, FDA/CDER; Weining Z Robieson, Abbvie; Marian Strazzeri, US Food and Drug Administration; Shuyan Wan, Merck

Chair(s): Cheryl Coon, Outcometrix; Laura Lee Johnson, FDA

Panelist(s): Wen-Hung Chen, FDA; Lisa Kammerman, Astra Zeneca; Dennis Revicki, Evidera

This session aims to address when and how to evaluate the psychometric properties (including meaningful change and responder definitions) of a patient-reported outcome (PRO) or other clinical outcome assessment (COA) used to construct an endpoint in a clinical trial. The focus of this session will be identifying the risks and ramifications of evaluating (for the first time) the psychometric properties of a PRO in the same pivotal clinical trial in which the instrument is used to construct a primary, co-primary, or key secondary endpoint. In particular, the session will consider issues that can arise when these psychometric properties are assessed at an interim analysis within a clinical trial (i.e., within a “psychometric substudy”) and the interim findings are then applied to the interpretation of the final results.

Developing PRO Instruments in Clinical Trials: Issues, Considerations, and Solutions
Wen-Hung Chen, FDA; Cheryl Coon, Outcometrix; Laura Lee Johnson, FDA; Lisa Kammerman, Astra Zeneca; Dennis Revicki, Evidera

 

PS5d Parallel Session: Design and analysis of cancer immunotherapy trials

09/30/16
10:45 AM - 12:00 PM
Thurgood Marshall West

Organizer(s): Matthew Hoblin, Quintiles; Lin Huo, FDA; Qi Tang, AbbVie; Zhenzhen Xu, CBER/FDA

Arming the immune system against cancer has emerged as a powerful tool in oncology in recent years. Instead of poisoning a tumor or destroying it with radiation, immunotherapy unleashes the immune system to destroy the cancer. This unique mechanism of action poses new challenges for the study design and statistical analysis of immunotherapy trials. The major challenges include: (1) the delayed onset of clinical effects violates the proportional hazards assumption, so conventional testing procedures lead to a loss of power; and (2) the development of autologous cancer vaccines may be subject to manufacturing failure, so not all eligible subjects randomized to treatment will have a 'successful' product developed. More innovative statistical methodologies are needed to address these unique characteristics of immunotherapy trials.

Designing therapeutic cancer vaccine trials with random delayed treatment effect
View Presentation Zhenzhen Xu, CBER/FDA

Continual Reassessment Method for Late-Onset Toxicities Using Bayesian Data Augmentation
View Presentation Suyu Liu, MD Anderson Cancer Center

Novel Endpoints and Designs in Immuno-Oncology
View Presentation Ram Suresh, Bristol-Myers Squibb

 

PS5e Parallel Session: Statistical Considerations in Evaluating Diagnostic Devices Needed for Precision Medicine

09/30/16
10:45 AM - 12:00 PM
Lincoln 5

Organizer(s): Deepak B. Khatry, MedImmune; Daniel Li, Juno Therapeutics; Haiwen Shi, FDA/CDRH; Jincao Wu, FDA, CDRH

Chair(s): Yaji Xu, FDA/CDRH/OSB/DBS/DSBII; Laura M Yee, FDA, CDRH

An ongoing paradigm shift in biopharmaceutical research and development aims to incorporate novel biomarkers and diagnostic devices to improve the safety and efficacy of drugs in more precisely targeted patient populations. Diagnostic devices that are not essential to the safety and efficacy of the corresponding drugs, but that may provide additional useful risk/benefit information to patients, have recently begun to be proposed. Similar to companion diagnostics (CDx), which require regulatory approval alongside a novel therapy, these complementary diagnostic tests may also aid in identifying a biomarker-defined subset of patients who respond particularly well to a drug. Work on both CDx and complementary diagnostic tests is relatively new and evolving. In this session, we will discuss FDA's current thinking on the subject from the regulatory perspective. Statistical methodologies will incorporate academic and industry perspectives, and discussions will focus on what supporting statistical evidence needs to be generated to demonstrate the overall validity of diagnostic accuracy. A real-life case study will be used.

Statistical consideration: clinical validation studies in precision medicine
View Presentation Jincao Wu, FDA, CDRH

Precision Medicine for Continuous Biomarkers
View Presentation Robert Abugov, FDA Center for Drug Evaluation and Research

Selecting the optimal treatment with the conditional quantile treatment effect curve
Xiao-Hua Zhou, University of Washington

 

PS5f Parallel Session: Statistical Software and Computation for Regulatory Applications

09/30/16
10:45 AM - 12:00 PM
Lincoln 6

Organizer(s): Kun Chen, AbbVie Inc; Arkendra De, FDA/CDRH; Yuqing Tang, FDA/CDRH; Hongtao Zhang, Abbvie Inc.

Chair(s): Qin Li, FDA/CDRH

Developing statistical programs to evaluate and verify the findings of a submission and to implement additional analyses is one of the major tasks of regulatory agency statisticians. Although such programs are often developed on a case-by-case basis, there are general tasks shared by different submissions (e.g., missing-data sensitivity analysis and multi-reader multi-case analysis). It is therefore important to develop centralized, well-validated, and easy-to-use software tools for such common tasks in order to improve efficiency and reduce the risk of programming errors. Moreover, the design of such software may take advantage of the high-performance computing facilities available to regulatory agencies to overcome the computational challenges faced by many statisticians. In this session, speakers from regulatory agencies, academia, and industry will introduce, with live demonstrations, examples of statistical software tools that are designed to be easily accessible or built specifically for regulatory review and research. The software validation and verification steps necessary for developing such tools will also be discussed.

Statistical Computing Challenges at FDA
View Presentation Paul Schuette, FDA

Development of open source R packages for supporting regulatory submissions of in vitro diagnostic tests
View Presentation Fabian Model, Roche

MISSUITE: A Web Application for Missing Data Analysis.
Chenguang Wang, Johns Hopkins University

 

PS6a Parallel Session: Moving beyond the Hazard Ratio Paradigm in the Design and Analysis of Clinical Trials with Time-to-event Endpoints

09/30/16
1:15 PM - 2:30 PM
Thurgood Marshall North

Organizer(s): Kun Chen, AbbVie Inc; Elena Rantou, FDA/CDER; Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals; Yifan Wang, FDA

Chair(s): Kun Chen, AbbVie Inc

In designing and analyzing randomized clinical trials (RCTs) with right-censored time-to-event outcomes, the model-based hazard ratio estimate is routinely used to quantify the treatment difference. The clinical meaning of such a quantity can be rather difficult, if not impossible, to interpret when the underlying model assumption (i.e., the proportional hazards (PH) assumption) is violated. Even when the PH assumption holds, the hazard ratio has shortcomings as a measure of treatment difference: the lack of a summary measure of the baseline risk for clinical interpretation, incoherence between stratified and unstratified analyses, lack of power in non-inferiority studies with rare events, and so on. Although these issues have been discussed extensively and robust alternatives have been proposed in the statistical literature, this critical information does not seem to have reached the community of health science researchers or the pharmaceutical industry. This session focuses on the issues with the hazard ratio and on alternatives such as the restricted mean survival time (RMST) and the risk difference (RD).
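
As a concrete contrast to the hazard ratio, the RMST is simply the area under the Kaplan-Meier curve up to a pre-specified truncation time tau, so a between-arm RMST difference reads directly as 'average event-free time gained over the first tau units'. A minimal single-arm sketch with hypothetical data (ties handled one record at a time for simplicity):

```python
def rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve from 0 to tau. `events[i]` is 1 for an observed event and
    0 for a right-censored observation at `times[i]`."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, prev_t, area = 1.0, 0.0, 0.0
    for t, event in data:
        if t > tau:
            break
        area += s * (t - prev_t)        # rectangle under the KM step
        prev_t = t
        if event:
            s *= 1 - 1 / at_risk        # KM step down at an event
        at_risk -= 1                    # both events and censorings leave
    area += s * (tau - prev_t)          # final rectangle out to tau
    return area

# Hypothetical follow-up times (months) and event indicators.
times = [2, 3, 5, 7, 8, 11, 14, 20]
events = [1, 1, 0, 1, 1, 0, 1, 0]
value = rmst(times, events, tau=12)
```

Comparing `rmst` between two arms gives a model-free treatment summary that remains interpretable when the PH assumption fails.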

Assessing Alternatives to the Hazard Ratio for Non-inferiority
View Presentation Jyotirmoy Dey, Abbvie, Inc.

Alternatives to hazard ratios for comparing efficacy or safety of therapies in noninferiority studies
View Presentation Hajime Uno, Dana-Farber Cancer Institute

 

PS6b Parallel Session: Regulatory Pathways and Case Studies in Orphan Drug Development

09/30/16
1:15 PM - 2:30 PM
Thurgood Marshall South

Organizer(s): GQ Cai, GSK; Lei Hua, Vertex Pharmaceuticals; Chia-Wen Ko, FDA; Andrejus Parfionovas, FDA

Drug development for rare (a.k.a. orphan) diseases faces the same issues as other drug development programs; however, these are often complicated by the lack of medical experience and by the statistical challenges of small sample sizes. FDA advises sponsors to evaluate the depth and quality of existing natural history knowledge early in drug development, so that when there is not enough knowledge about the disease to guide clinical development, a well-designed natural history study can help in designing an efficient drug development program. In this session, we will discuss the statistical and regulatory challenges of drug development for rare diseases when an adequately powered study is not feasible and/or a parallel control arm is not ethical. Case studies will showcase how natural history and other relevant data were collected, and how different statistical methods, including Bayesian modeling and statistical graphical techniques, were applied to support the evaluation of the efficacy and/or safety of treatments for rare diseases.
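
One simple version of the Bayesian modeling mentioned above treats a natural-history response rate as an external benchmark for a single-arm trial: with a beta prior on the trial's response rate, the evidence summary is the posterior probability of exceeding that benchmark. A sketch with hypothetical counts and benchmark:

```python
import random

def post_prob_exceeds(x, n, p0, a=1.0, b=1.0, draws=20000, seed=1):
    """Posterior probability that a single-arm response rate exceeds
    a natural-history benchmark p0, under a Beta(a, b) prior.
    With x responders in n patients the posterior is
    Beta(a + x, b + n - x); estimated here by Monte Carlo."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(a + x, b + n - x) > p0
               for _ in range(draws))
    return hits / draws

# Hypothetical: 9/20 responders vs. benchmarks of 20% and 60%.
p_easy = post_prob_exceeds(x=9, n=20, p0=0.20)
p_hard = post_prob_exceeds(x=9, n=20, p0=0.60)
```

With 9/20 responders the evidence of exceeding a 20% historical rate is strong, while exceeding 60% is unlikely; in practice the prior would be calibrated against, or partially borrow from, the natural-history data rather than left flat.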

The Strimvelis® Case Study: Submission and Regulatory Interactions
View Presentation Younan Chen, GlaxoSmithKline

Design and Analysis Considerations for Rare Disease Studies
View Presentation Yeh-Fong Chen, US FDA

The Blincyto Case Study: Advancing a Breakthrough Therapy in Acute Lymphoblastic Leukemia
View Presentation Chris Holland, Amgen

 

PS6c Parallel Session: Sensitivity analyses under the spirit of the ICH E9 addendum

09/30/16
1:15 PM - 2:30 PM
Thurgood Marshall East

Organizer(s): Shanti Gomatam, FDA/CDER; Achim Guettner, Novartis Pharma AG; Fanhui Kong, FDA; Feng Liu, GlaxoSmithKline Inc

Chair(s): Craig Mallinckrodt, Eli Lilly

The final concept paper “Addendum to Statistical Principles for Clinical Trials on Choosing Appropriate Estimands and Defining Sensitivity Analyses in Clinical Trials” emphasizes the importance for having a framework on sensitivity analyses. Sensitivity analyses can cover the presentation of results of different estimands or, for example, of results for analyses in which different choices and departures of assumptions are considered. These analyses include assumptions made on missing data across the different endpoints. In this session, a case study will be presented from a sponsor, with focus on the role of missing data in sensitivity analyses. The FDA’s position will be elucidated with respect to possible learnings from sensitivity analyses and areas to be evaluated by the sponsor within a sensitivity analysis. The sponsor’s case study will show how a tipping point analysis was conducted and discussed with HAs: A biologic was submitted for regulatory approval in US/EU for two indications that both used a binary response variable as primary endpoint. The primary analysis for missing data was non-responder imputation for binary variables. One of the questions received during regulatory review was to examine the potential effects of missing data and rescue on the results using tipping point analysis (Yan et al., 2009; Campbell et al., 2011). The goal of the tipping point analysis was to identify assumptions about the missing data under which the conclusions change, i.e., under which there was no longer evidence of a treatment effect. Results of the tipping point analysis for both submissions will be presented. 
The panel discussion will cover the subjectivity that underlies the interpretation of sensitivity analyses in the spirit of the planned ICH E9 addendum, in particular when the primary analysis and the sensitivity analyses yield contradictory results, as well as other frameworks for, and methods of, handling missing data in sensitivity analyses. References: Campbell, G., Pennello, G., & Yue, L. (2011). Missing data in the regulation of medical devices. Journal of Biopharmaceutical Statistics, 21: 180-195. Yan, X., Lee, S., & Li, N. (2009). Missing data handling methods in medical device clinical trials. Journal of Biopharmaceutical Statistics, 19: 1085-1098.
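To make the tipping point idea concrete, the sketch below (in Python, with hypothetical counts, not the submission's actual data) scans a grid of imputation assumptions for the missing subjects in each arm of a binary-endpoint trial and reports the assumptions under which a pooled two-proportion z-test loses significance:

```python
import math

def two_prop_p(r1, n1, r0, n0):
    """Two-sided p-value from a pooled two-proportion z-test."""
    p_pool = (r1 + r0) / (n1 + n0)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
    z = (r1 / n1 - r0 / n0) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def tipping_points(r_trt, c_trt, m_trt, r_ctl, c_ctl, m_ctl, alpha=0.05):
    """Grid over how many missing subjects in each arm are imputed as
    responders; return the (trt, ctl) imputation pairs at which the
    result is no longer significant.  r_* = observed responders,
    c_* = completers, m_* = missing subjects."""
    lost = []
    for j_trt in range(m_trt + 1):
        for j_ctl in range(m_ctl + 1):
            p = two_prop_p(r_trt + j_trt, c_trt + m_trt,
                           r_ctl + j_ctl, c_ctl + m_ctl)
            if p >= alpha:
                lost.append((j_trt, j_ctl))
    return lost

# Hypothetical trial; non-responder imputation is the (0, 0) grid cell
grid = tipping_points(r_trt=90, c_trt=150, m_trt=20,
                      r_ctl=60, c_ctl=150, m_ctl=20)
```

Under non-responder imputation (the (0, 0) cell) this hypothetical trial remains significant; the returned cells mark the pessimistic-for-treatment, optimistic-for-control imputations at which the conclusion would tip.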

Tipping Point Analysis in Handling of Missing Data
View Presentation Ruvie Martin, Novartis Pharmaceuticals

Sensitivity Analysis Framework and a Novel Tipping Point Approach
View Presentation Gregory Levin, Food and Drug Administration

 

PS6d Parallel Session: Challenges and Opportunities for Acceleration in Oncology Drug Development: Patient Enrichment, Dose-finding, Combination Therapy and Regulatory Acceleration

09/30/16
1:15 PM - 2:30 PM
Thurgood Marshall West

Organizer(s): Adam Hamm, Chiltern; Jingjing Ye, FDA; Mengdie Yuan, FDA; Helen Zhou, GSK

Chair(s): Helen Zhou, GSK

This session is dedicated to discussing statistical issues in oncology drug development. Recent years have seen tremendous therapeutic innovation, particularly as new molecularly targeted agents and immuno-oncology agents have received approval in an expedited manner, setting a new, higher benchmark for oncology drug development timelines. As our knowledge of targeted therapy and cancer immunology advances, it is important to evaluate the efficiency of the conventional drug development process and consider novel endpoints, statistical designs, and analyses so that the benefit of these new therapeutic agents can be fully captured. Traditional clinical trial phases in oncology development are increasingly becoming blurred, leading to more operationally or statistically seamless study designs with the goal of supporting NDA/BLA submission in the shortest possible time. Despite the development speed and efficiency that seamless designs and phase 1 expansion cohort studies offer, FDA staff, academic protocol reviewers, and patient advocates say design shortfalls raise concerns about study conduct, data interpretation, and patient protections. In addition, multiple and complex drug combinations have been intensively studied to improve treatment response and bring clinically meaningful improvements for certain groups of patients with cancer.

Patient enrichment and regulatory acceleration from an industry perspective
View Presentation Keaven Anderson, Merck Research Laboratories

A Seamless Phase 1/2 Clinical Trial Designed to Expedite Oncology Drug Development
View Presentation Jeff Wetherington, GSK

Seamless Oncology-Drug Development
View Presentation Marc Theoret, FDA

Discussant(s): Raji Sridhara, FDA/CDER/OTS/OB

 

PS6e Parallel Session: Advances in Risks Prediction and Prognostic Biomarker with Time-to-Event Data: Power calculation, Meta-Analysis, and Recurrent Time-to-Event Models

09/30/16
1:15 PM - 2:30 PM
Lincoln 5

Organizer(s): Adam Hamm, Chiltern; Justin Rogers, Abbott Laboratories; Haiwen Shi, FDA/CDRH; Tinghui Yu, FDA

Chair(s): Justin Rogers, Abbott Laboratories

Time-to-event data are widely encountered in the development and validation of drugs and many medical devices, and pose many analytical challenges. This session will present advances in current outcome prediction research, providing novel estimation tools for appropriate hypothesis testing, quantifying the time-dependent accuracy of a biomarker, and accounting for analytical challenges in multiple-cohort studies. The goal is to provide practitioners with robust and rigorous tools for the evaluation of medical devices. Novel biomarkers for predicting time to an adverse outcome such as recurrence or death can dramatically change clinical decision making in the selection of patient-specific treatments. Designing a validation study to rigorously assess the clinical utility of such markers is important. The validation of risk markers requires either exploiting data from prospective cohort studies or optimally synthesizing multiple studies while accounting for heterogeneity in the baseline risks of subpopulations and in study design. In some disease areas it is not uncommon to have recurrent time-to-event data, such as the occurrence of hypoglycemic events in diabetes. The validity of statistical models applied to such recurrent time-to-event data is also critical, e.g., how to control type 1 error appropriately in all scenarios.

Design and Analysis of Prognostic Biomarker Validation Studies.
View Presentation Yingye Zheng, Fred Hutchinson Cancer Research Center

A Regulatory Perspective on the Clinical Validation for Prognostic and Risk Prediction Biomarkers
View Presentation Yuying Jin, FDA/CDRH

Estimate variable importance for recurrent event outcomes
View Presentation Haoda Fu, Eli Lilly and Company

 

PS6f Parallel Session: Sharing patient-level clinical trial data in the era of big data: current state and future possibilities

09/30/16
1:15 PM - 2:30 PM
Lincoln 6

Organizer(s): Jessica Lim, GSK; Jingyi Liu, Eli Lilly and Company; Andrejus Parfionovas, FDA; Boguang Zhen, FDA CBER

Clinical development modernization efforts have become essential as programs have experienced increased regulatory expectations, costs, and design complexity. The use of patient-level historical clinical data can enhance drug research and development by refining study design, conduct, and analysis. However, historical clinical data remain underutilized in the era of big data. There are a number of challenges, such as patient privacy protection, access to the clinical data, inconsistent data standards, and the planning, analysis, and interpretation of historical data. Nevertheless, a few initiatives by pharma, academia, research institutes, and regulatory agencies are underway that aim to improve the utilization of historical clinical data. In this session we will discuss the progress and implementation of some key data sharing initiatives in study design, safety signal detection, disease modeling, etc. Some data sharing projects will be discussed in detail, such as TransCelerate (nine disease areas), Project Data Sphere (oncology), and CAMD (Alzheimer’s). Statistical methodology that can enhance study design will also be discussed. Lastly, a panel from regulatory agencies and industry will share insights on the challenges and future of this untapped goldmine.

Project Data Sphere® initiative overview
View Presentation Liz Zhou, Sanofi

Issues in the incorporation of historical data in clinical trials
View Presentation Kert Viele, Berry Consultants

Discussant(s): Brad Carlin, School of Public Health University of Minnesota; Pandurang M Kulkarni, Eli Lilly; Anne Pariser, FDA/CDER

 

PS7a CMC Session: Method Validation on the revised ICH guidance

09/30/16
2:45 PM - 4:00 PM
Thurgood Marshall North

Organizer(s): Charles (Yinkkiu ) Cheung, FDA/CBER; David Christopher, Merck; Tianhua Wang, FDA/CDER; Harry Yang, MedImmune LLC

Chair(s): Lei Huang, FDA/CBER; Jyh-Ming Shoung, J&J

Analytical methods are an integral part of drug research and development. As critical enablers, analytical methods allow for process understanding and conformance to quality standards. Many key decisions in biopharmaceutical development and manufacturing are made based on results from analytical methods. In this session, we discuss how adoption of the QbD, risk-based, life-cycle development paradigm can lead to a better understanding of the sources of variation that affect an analytical method's performance, resulting in better design of the method, control of variability, and increased reliability.
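As a small illustration of the total-error idea behind the first talk, the Python sketch below combines bias and precision into a single figure of merit and compares it to an acceptance limit. This is a generic sketch only: the replicate data, the 15% limit, and the k = 2 coverage multiplier are illustrative assumptions, not values from the presentations.

```python
import statistics

def total_error(measured, nominal, k=2.0):
    """Simple total-error estimate: |bias| + k * SD.
    `measured` are replicate assay results at a known level,
    `nominal` is the reference value for that level."""
    bias = statistics.fmean(measured) - nominal
    sd = statistics.stdev(measured)
    return abs(bias) + k * sd

# Hypothetical validation replicates at a nominal concentration of 100
reps = [98.5, 101.2, 99.8, 100.4, 97.9, 100.9]
te = total_error(reps, 100.0)
acceptable = te <= 15.0   # e.g. a 15-unit acceptance limit at this level
```

A validation exercise would typically repeat this across levels, runs, and analysts so that the SD reflects intermediate precision rather than repeatability alone.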

Method Validation based on Total Error
View Presentation Jason Zhang, MedImmune

Validation Criteria for Analytical Procedures
View Presentation Rick Burdick, Elion Labs

Discussant(s): Stan Altan, JNJ

 

PS7b Parallel Session: Development in Evaluating Cardiovascular Risk: Pre- and Post-marketing

09/30/16
2:45 PM - 4:00 PM
Thurgood Marshall South

Organizer(s): Alfred H. Balch, University of Utah, Department of Pediatrics; Yong Ma, FDA; Junshan Qiu, FDA CDER; Zhiying Qiu, Sanofi

Chair(s): Yong Ma, FDA; Junshan Qiu, FDA

Seeking new effective therapies for reducing the risk of cardiovascular disease by conducting cardiovascular outcome trials (CVOTs) has been pursued for decades. Meanwhile, for safety purposes, evaluating cardiovascular risk via a CVOT has been required for certain new therapies. Even though the purposes for evaluating cardiovascular risk differ, common evaluation metrics can serve both. Time-to-event endpoints have been widely used as metrics for evaluating cardiovascular risk. However, analyses of this type of endpoint are complicated by the occurrence of multiple types of events with competing risks. In addition, the large number of events needed to achieve adequate statistical power lengthens the drug development process. In this session, we will review and discuss some innovative approaches for evaluating cardiovascular risk. In particular, we will focus on recent developments in the win/loss approach initially proposed by Pocock et al. (2012), the weighted composite approach (Bakal, 2012), and the area under the survival curve with censored data (Zhao, 2015).
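For readers unfamiliar with the win/loss idea, the sketch below is a simplified, illustrative Python implementation, not Pocock et al.'s full proposal: each treatment-control pair is compared on the fatal event first and then on hospitalization, with censoring handled only crudely through a shared follow-up window, and the win ratio is total wins over total losses.

```python
import itertools
import math

INF = math.inf

def pair_result(a, b):
    """+1 if subject a wins, -1 if b wins, 0 if tied.  Pairs are compared
    on death first, then hospitalization; an event counts only if it
    occurs within the pair's shared follow-up window."""
    tau = min(a["fu"], b["fu"])
    for key in ("death", "hosp"):
        ta = a[key] if a[key] <= tau else INF
        tb = b[key] if b[key] <= tau else INF
        if ta != tb:               # the later (or absent) event wins
            return 1 if ta > tb else -1
    return 0

def win_ratio(trt, ctl):
    """Total treatment wins divided by total losses over all pairs."""
    results = [pair_result(a, b) for a, b in itertools.product(trt, ctl)]
    return results.count(1) / results.count(-1)

# Tiny hypothetical data set: event times in months, INF = not observed
trt = [{"death": INF, "hosp": INF, "fu": 24},
       {"death": 20,  "hosp": 10,  "fu": 24}]
ctl = [{"death": 12,  "hosp": 6,   "fu": 24},
       {"death": INF, "hosp": 8,   "fu": 24}]
wr = win_ratio(trt, ctl)   # 3 wins, 1 loss
```

The hierarchy encodes the clinical priority (death outranks hospitalization), which is the main appeal of the approach over a plain composite endpoint.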

Weighted win loss statistics
View Presentation Xiaodong Luo, Sanofi R&D US

Applying Novel methods to the assessment of clinical outcomes in cardiovascular clinical trials.
View Presentation Jeffrey Bakal, University of Alberta

Statistical Approaches for Multiple Event-Time Outcomes
View Presentation Ionut Bebu, The George Washington University

Design and Analysis of Clinical Trials using Restricted Mean Survival Time
View Presentation Lihui Zhao, Northwestern University
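The restricted mean survival time in the talk above is simply the area under the Kaplan-Meier curve up to a truncation time tau. A minimal Python sketch (assuming per-subject follow-up times with event indicators and no covariates; the data in the comment are hypothetical):

```python
def km_rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    estimate up to truncation time tau.  `times` are follow-up times,
    `events` the matching indicators (1 = event, 0 = censored)."""
    data = sorted(zip(times, events))
    s = 1.0          # current survival estimate
    rmst = 0.0       # accumulated area
    prev = 0.0       # left edge of the current step
    at_risk = len(data)
    for t, d in data:
        if t > tau:
            break
        rmst += s * (t - prev)          # rectangle under the KM step
        prev = t
        if d:
            s *= (at_risk - 1) / at_risk
        at_risk -= 1
    rmst += s * (tau - prev)            # final step out to tau
    return rmst

# e.g. km_rmst([1, 2, 3, 4], [1, 1, 1, 1], 4.0) recovers the plain mean
```

The between-arm difference in RMST then has a direct interpretation as months of event-free life gained up to tau, which is one reason it is attractive when proportional hazards fail.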

 

PS7c Town Hall Session: Clinical Trials with Missing Data, Estimands Selection, and Analysis Methods

09/30/16
2:45 PM - 4:00 PM
Thurgood Marshall East

Organizer(s): Laura Lu, CDRH; Julia Jingyu Luan, FDA/CDER; Cristiana Mayer, Johnson & Johnson; Elena Polverejan, Janssen R&D

Chair(s): Laura Lu, CDRH

Panelist(s): Mouna Akacha, Novartis; Julia Jingyu Luan, FDA/CDER; Craig Mallinckrodt, Eli Lilly; Thomas Permutt, U.S. Food & Drug Admin.; Estelle Russek-Cohen, CBER FDA ; Dan Scharfstein, Johns Hopkins

Missing data are a common challenge in clinical trials. There are strong links between the choice of the trial objectives, the estimands (i.e., "what is to be estimated"), and the statistical methods for the primary and sensitivity analyses in the presence of missing data. The statistical principles and challenges of selecting the estimand and the most appropriate analytical methods were emphasized in the missing data report released by the National Academy of Sciences (NEJM 2012, Oct. 4, 367(14): 1355-60) as well as in ICH E9 (Revision 1): “Addendum to Statistical Principles for Clinical Trials on Choosing Appropriate Estimands and Defining Sensitivity Analyses in Clinical Trials”. Many advances have been made in the field of missing data analysis for choosing the estimand together with the most appropriate innovative approaches. In addition, the interplay of estimands and trial data reflecting the intake of rescue medication or the presence of ‘treatment switching’ introduces new questions around appropriate estimand selection and analysis methods. Much work still needs to be done to reach the necessary consensus in resolving the remaining challenges and in understanding the implications and interpretation of different statistical methods for handling different choices of estimands in drug development. The collaboration of statisticians across industry, regulatory agencies, and academia can bring varied and valuable perspectives to this rapidly evolving area of statistical methodology. This town hall session will address the selection and interpretation of estimands and the corresponding methods for the primary and sensitivity analyses to handle missing data, along with other statistical challenges. Panelists will discuss common issues, share their experiences, and provide concrete examples to illustrate the industry, academic, and regulatory perspectives and their recommendations.

 

PS7d Parallel Session: Statistical Considerations for Evidentiary Standards in Biomarker Qualification

09/30/16
2:45 PM - 4:00 PM
Thurgood Marshall West

Organizer(s): Aloka Chakravarty, FDA, CDER; Feng Liu, GlaxoSmithKline Inc; Yong Ma, FDA; Robin Mogg, Janssen Research and Development

Chair(s): Robin Mogg, Janssen Research and Development

Motivation: In 2014, FDA issued guidance on a qualification process for biomarkers as drug development tools (DDTs), intended to expedite the development of publicly available DDTs that can be widely employed within a specific context of use (COU), which defines what the biomarker will be used for and what decisions it will drive in a drug development program. Various consortia of nonprofit organizations, industry, academic, and regulatory partners have been established to accelerate the development and qualification of specific safety and efficacy biomarkers. Recent efforts to define evidentiary standards for biomarker qualification have revealed the lack of clear regulatory “goal posts” for qualification, and defining the framework of evidentiary criteria for biomarker qualification needs continued discussion among the scientific community. This session will focus on recent efforts to define the framework of evidentiary criteria, important statistical principles associated with biomarker qualification, a summary of where we are in the process, and a panel discussion on tangible next steps toward a clear process for scientific rigor around the development and qualification of biomarkers. The format of the session will be two primary talks (each 20-25 minutes), followed by a panel discussion. The panel discussion will begin with a summary presentation of a clinical case study that has achieved qualification through the CDER Biomarker Qualification Program (10-15 minutes), continue with a discussion among the panel facilitated by the session chair (15-20 minutes), and end with an open-floor discussion (10-15 minutes). Speakers: 1. Aloka Chakravarty, FDA, CDER, will introduce statistical considerations for determining evidentiary standards for biomarker qualification. 
2. Lisa McShane, NCI, will present on assay validation and reproducibility considerations for biomarkers used in drug development. Case study presentation: Sue-Jane Wang, FDA, CDER, will present a clinical case study that has achieved qualification through the CDER Biomarker Qualification Program. Panelists: 1. Aloka Chakravarty, FDA, CDER; 2. Lisa McShane, NCI; 3. Sue-Jane Wang, FDA, CDER; 4. Suzanne Hendrix, Pentara Corporation. Robin Mogg, Janssen Research and Development, will chair the session and facilitate the panel and open-floor discussions.

Evidentiary Considerations for Integration of Biomarkers in Drug Development: Statistical Considerations
View Presentation Aloka Chakravarty, FDA, CDER

Assay validation and reproducibility considerations for biomarkers used in drug development
View Presentation Lisa McShane, NCI

Statistical evidences observed and review experiences of clinical biomarkers that led to FDA CDER qualifications
View Presentation Sue-Jane Wang, US FDA

 

PS7e Parallel Session: NGS Era – Genomic Diagnostics in Precision Medicine

09/30/16
2:45 PM - 4:00 PM
Lincoln 5

Organizer(s): Michael Fundator, National Academies, DBASSE; Yaji Xu, FDA/CDRH/OSB/DBS/DSBII; Laura M Yee, FDA, CDRH; Helen Zhou, GSK

Chair(s): Wei Wang, FDA/CDRH; Jincao Wu, FDA, CDRH

High-throughput sequencing technology today makes it possible to generate an unprecedented amount of genetic data for an individual in an affordable timeframe and at an affordable cost. Unlike traditional in vitro diagnostic tests, which detect a single marker or a defined number of markers to diagnose a limited set of conditions, next-generation sequencing (NGS) tests can identify millions of genetic variants in a single run, and those variants can be used to diagnose, or predict the likelihood of an individual developing, a wide variety of diseases. For example, an NGS-based assay may serve as a universal companion diagnostic test for a number of disease/drug combinations. However, how to properly evaluate these assays remains very challenging. This session will provide an overview of NGS test evaluation from a regulatory perspective and discuss related statistical issues.
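Evaluation of an NGS assay against a comparator method often reduces to agreement measures rather than sensitivity and specificity against a gold standard. A minimal Python sketch of positive and negative percent agreement with Wilson score intervals (the counts and the choice of interval are illustrative assumptions, not an FDA-prescribed method):

```python
import math

def agreement(tp, fn, tn, fp):
    """Positive/negative percent agreement of a new assay vs a
    comparator, each with a 95% Wilson score interval.
    tp/fn: comparator-positive calls agreed/missed by the new assay;
    tn/fp: comparator-negative calls agreed/missed."""
    def wilson(x, n, z=1.96):
        p = x / n
        denom = 1 + z * z / n
        centre = p + z * z / (2 * n)
        half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return ((centre - half) / denom, (centre + half) / denom)

    ppa, npa = tp / (tp + fn), tn / (tn + fp)
    return {"PPA": (ppa, wilson(tp, tp + fn)),
            "NPA": (npa, wilson(tn, tn + fp))}

# Hypothetical concordance study against an orthogonal sequencing method
res = agreement(tp=90, fn=10, tn=95, fp=5)
```

For a panel calling millions of variants, such summaries are typically reported per variant type (SNVs, indels, etc.) and stratified by genomic context rather than as a single pooled figure.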

Bioinformatics approaches for functional interpretation of genome variation
View Presentation Kai Wang, USC

Regulatory Challenges and Opportunities for Companion Diagnostics
View Presentation Yun-Fu Hu, FDA/CDRH/OIR/DMGP

Metrics to benchmark sequencing integrity and fidelity of data in targeted NGS.
View Presentation Bonnie LaFleur, HTG Molecular Diagnostics, Inc

 

PS7f Parallel Session: Data Integration in Pharmaceutical Research: methods and applications

09/30/16
2:45 PM - 4:00 PM
Lincoln 6

Organizer(s): Steven Bai, U.S. Food and Drug Administration; Ling Lan, FDA; Zhaoling Meng, Sanofi; Hui Quan, sanofi

Chair(s): Ling Lan, FDA

Addressing questions in the clinical and health care field often requires the collection and synthesis of huge amounts of data. With the recent data transparency initiative, we can expect greater access to more data. Fully utilizing all available internal and external information will save tremendous resources, allow timely answers to questions that cannot be addressed with data from a single trial, and increase the efficiency of pharmaceutical research. What researchers need are scientifically and statistically sound approaches to integrating the available data. In this session, speakers from industry, academia, and a regulatory agency will share their thoughts and research results on methodologies and innovative applications of meta-analyses and integrated data analyses. These include network meta-analysis, multivariate meta-analysis, model-based meta-analysis, and adaptive repeated cumulative meta-analysis, as well as methods for borrowing historical data. In addition, new approaches for rare adverse event analysis will be discussed; these approaches allow the inclusion of studies with zero events and need no continuity correction in the analysis. Moreover, considerations on the application of meta-analysis and integrated data analysis from a regulatory perspective will be presented. Real data examples and simulation results comparing different approaches will also be provided.
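The zero-event point can be made concrete with the Mantel-Haenszel pooled risk difference, one classical estimator that accommodates zero-event studies with no continuity correction. This is a generic illustration in Python, not the exact framework of the talks, and the study counts are hypothetical:

```python
def mh_risk_difference(studies):
    """Mantel-Haenszel pooled risk difference.  Each study is a tuple
    (events_trt, n_trt, events_ctl, n_ctl).  Studies with zero events
    in one or both arms contribute through their weights without any
    continuity correction."""
    num = den = 0.0
    for a, n1, c, n0 in studies:
        w = n1 * n0 / (n1 + n0)            # MH weight for the RD scale
        num += w * (a / n1 - c / n0)       # weighted per-study RD
        den += w
    return num / den

# Hypothetical safety meta-analysis, including a double-zero study
studies = [(0, 100, 0, 100), (2, 100, 1, 100), (1, 50, 0, 50)]
rd = mh_risk_difference(studies)
```

Odds-ratio-based methods, by contrast, must drop or adjust double-zero studies, which is one motivation for the exact and zero-inflated approaches presented in this session.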

Integrated Data Analysis for Assessing Treatment Effect through Combining Information from All Sources
View Presentation Hui Quan, sanofi

Meta-analysis of rare events in drug safety studies: A unifying framework for exact inference
View Presentation Dungang Liu, University of Cincinnati

Meta-analysis of Clinical Trials with Sparse Binary Outcomes using Zero-inflated Binomial Models
View Presentation Yueqin Zhao, U.S. Food and Drug Administration