Online Program

Wed, Sep 16

SC1 Short Course 1: An Overview of Statistical Considerations in Personalized Medicine: Concept and Methodology

09/16/15
8:30 AM - 12:00 PM

Organizer(s): Yuqing Tang, US Food and Drug Administration

Instructor(s): Meijuan Li, US Food and Drug Administration

The term "personalized medicine" is often described as providing "the right patient with the right drug at the right dose at the right time." More broadly, personalized medicine may be thought of as the tailoring of medical treatment to the individual characteristics, needs, and preferences of a patient during all stages of care, including prevention, diagnosis, treatment, and follow-up. This short course will provide a general overview of the concepts and statistical methodology related to personalized medicine. The course will begin with a general discussion of statistical issues related to personalized medicine, with the second part of the course focusing on concepts and principles for evaluating and planning companion diagnostic device studies. A key component of personalized medicine is companion diagnostics that measure biomarkers, e.g., protein expression, gene amplification, or specific mutations. For example, much of the recent attention concerning molecular cancer diagnostics has focused on biomarkers of response to therapy, such as KRAS mutations in metastatic colorectal cancer, EGFR mutations in advanced non-small cell lung cancer, and BRAF mutations in metastatic malignant melanoma. The presence or absence of these markers is directly linked to the response rates of particular targeted therapies with small-molecule kinase inhibitors or antibodies. Therefore, testing for these markers has become a critical step in the targeted therapy of the above-mentioned tumors. For companion diagnostic devices, we will discuss indications for use, study designs, performance measures, and statistical methods of data analysis for both clinical and analytical validation studies. Real case examples will also be discussed.
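
For context on the performance measures mentioned above: clinical and analytical validation of a companion diagnostic is often summarized by agreement with a reference method. A minimal sketch (illustrative only, not course material; the counts and the choice of Wilson score intervals are assumptions) of positive and negative percent agreement:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

def agreement_measures(test_pos_ref_pos, test_neg_ref_pos,
                       test_pos_ref_neg, test_neg_ref_neg):
    """Positive/negative percent agreement of a candidate assay vs. a reference."""
    n_ref_pos = test_pos_ref_pos + test_neg_ref_pos
    n_ref_neg = test_pos_ref_neg + test_neg_ref_neg
    return {
        "PPA": test_pos_ref_pos / n_ref_pos,
        "PPA_95CI": wilson_ci(test_pos_ref_pos, n_ref_pos),
        "NPA": test_neg_ref_neg / n_ref_neg,
        "NPA_95CI": wilson_ci(test_neg_ref_neg, n_ref_neg),
    }

# invented 2x2 counts: 90/10 among reference-positives, 5/95 among reference-negatives
m = agreement_measures(90, 10, 5, 95)
```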


SC2 Short Course 2: Handling Missing Data in Clinical Trials

09/16/15
8:30 AM - 12:00 PM

Organizer(s): Richard C Zink, JMP Life Sciences, SAS Institute

Instructor(s): Sonia Davis, University of North Carolina at Chapel Hill; Michael O'Kelly, Quintiles

This half-day course looks at missing data in clinical trials. The course is based on Clinical Trials with Missing Data: A Guide for Practitioners, a recently published book to which both trainers contributed. The course starts with an illustrative example of clinical data related to Parkinson’s disease. Basic concepts pertaining to missing data are explained. The assumption of missing at random (MAR) is described and shown in action for these data. The idea of missing not at random (MNAR) is also introduced. Regulatory thinking on missing data is summarised. This leads to the three main pillars of the course. The first pillar is a full introduction to the direct likelihood approach known as mixed models for repeated measures (MMRM). Examples of SAS code are presented that implement MMRM for the same Parkinson’s disease data. The MMRM approach is used to implement the MAR assumption. The second pillar of the course covers multiple imputation. The key ideas of multiple imputation are described, and its workings are illustrated with SAS code. This multiple imputation methodology will be applied to data from a clinical trial in Major Depressive Disorder that is downloadable from www.missingdata.org.uk. Like MMRM, multiple imputation can be used to implement the MAR assumption; but multiple imputation is very flexible and can implement a wide variety of assumptions about missing data. The third pillar of the course shows how, by controlling the steps of multiple imputation, the statistician can in a very simple way implement rather sophisticated assumptions about missing data. One type of assumption is described in further detail: the assumption that post-withdrawal outcomes in the experimental arm may be modelled by observations from the control arm. Attendees will gain from this course a thorough knowledge of issues pertaining to missing data in clinical trials.
Attendees will also gain an understanding of a variety of statistical approaches for handling missing data, and how to implement those approaches in practice.
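
The completed-data analyses produced by multiple imputation are combined with Rubin's rules. As a rough, stand-alone sketch (in Python rather than the SAS used in the course; the numbers are invented), pooling m point estimates and their variances looks like:

```python
import statistics

def rubin_pool(estimates, variances):
    """Combine m completed-data analyses using Rubin's rules."""
    m = len(estimates)
    qbar = statistics.mean(estimates)        # pooled point estimate
    ubar = statistics.mean(variances)        # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    t = ubar + (1 + 1 / m) * b               # total variance
    # Rubin's (1987) degrees of freedom for the pooled t reference distribution
    df = (m - 1) * (1 + ubar / ((1 + 1 / m) * b)) ** 2 if b > 0 else float("inf")
    return qbar, t, df

# invented numbers: m = 5 imputed estimates of a treatment effect
q, t, df = rubin_pool([1.9, 2.1, 2.0, 2.2, 1.8],
                      [0.25, 0.24, 0.26, 0.25, 0.25])
```

The between-imputation term `b` is what distinguishes the pooled variance from a single completed-data analysis: it captures the extra uncertainty due to the missing values.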


SC3 Short Course 3: Equivalence and Similarity Testing

09/16/15
8:30 AM - 12:00 PM

Organizer(s): Yi Tsong, US FDA CDER

Instructor(s): Shein-Chung Chow, Duke University; Yi Tsong, US FDA CDER

The objective of an equivalence study is to demonstrate that two drug products are equivalent. However, "equivalence" may be defined in various ways: it may mean that the two drugs lead to similar responses, or to similar efficacies (over placebo) within a defined margin of equivalence. Depending on the objective, the margin determination, study design, measurement, and test may differ. In a study with only test and reference products or treatments, the equivalence measure may be the difference in means or the ratio of means. Equivalence of the two products/treatments is then demonstrated by showing that the difference measure lies between pre-specified lower and upper margins of equivalence. The null hypothesis of interest is therefore that the difference measure is either lower than the lower margin or larger than the upper margin; in other words, one demonstrates equivalence by showing that the difference lies between the two margins. The complete test may be performed with two sets of hypotheses. In a three-product/treatment (placebo, test, and reference) study, in addition to the equivalence test, the assay sensitivity of the reference is established by showing the reference is superior to placebo, and the efficacy of the test is demonstrated by showing the test is superior to placebo. Equivalence testing is used in in-vivo studies for equivalence assessment of a generic product or a post-market change of a new product, in in-vitro studies for equivalence assessment of chemical or physical parameters, and in clinical trials for equivalence assessment of patient treatment responses to the two products.
In this course, we will give an overview of equivalence tests in terms of in-vivo, in-vitro, and therapeutic equivalence; bioequivalence; and biosimilarity.

Course Outline:
Section 1: Background and Introduction
- SUPAC, ANDA and Biosimilar
- In-vivo, in-vitro, analytical and therapeutic equivalence
- Measurements of equivalence
- Objectives and hypotheses: average equivalence, scaled average equivalence, population equivalence, individual equivalence, interchangeability, profile equivalence
Section 2: In-vivo bioequivalence
- Fundamental assumption
- Study design and criteria
- Statistical methods
- Sample size determination
Section 3: Therapeutic equivalence
- Fundamental assumption
- Study design and criteria: bioequivalence and biosimilarity
- Statistical methods
- Sample size determination
Section 4: Analytical, in-vitro and profile equivalence
- In-vivo versus in-vitro equivalence testing
- Profile vs. non-profile analysis
- Analytical similarity
- Dissolution profile comparison
- Particle size distribution comparison

About the Instructors: Shein-Chung Chow, PhD, is a Professor in the Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA. Prior to joining Duke University, he was a Distinguished Professor and Executive Director of the National Clinical Trial Network Coordination Center in Taiwan. Before that, Dr. Chow held various positions in the pharmaceutical industry, including senior/research statistician, manager, director, executive director, and vice president. Dr. Chow is the Editor-in-Chief of the Journal of Biopharmaceutical Statistics and of Drug Designing, and is also Editor-in-Chief of the Biostatistics Book Series, Chapman and Hall/CRC Press. He is a Fellow of the American Statistical Association (ASA) and an elected member of the International Statistical Institute (ISI). Dr. Chow has published over 250 methodology papers and 23 books, including the following books in clinical research and development: Statistical Design and Analysis in Pharmaceutical Sciences; Design and Analysis of Clinical Trials; Sample Size Calculations in Clinical Research; Adaptive Design Methods in Clinical Trials; Translational Medicine: Strategies and Statistical Methods; Design and Analysis of Bridging Studies; and Biosimilars: Design and Analysis of Follow-on Biologics. His current research interests include the application of adaptive design methods in clinical trials, statistical methodology for the assessment of biosimilarity in biosimilar studies, and statistical methods for traditional Chinese medicine.

Yi Tsong, PhD, is a mathematical statistician and Director of the Division of Biometrics VI, Office of Biostatistics, Office of Translational Science, CDER, FDA. He received his PhD in Mathematical Statistics from the University of North Carolina at Chapel Hill in 1979. Since joining FDA in 1987, he has held review and research responsibilities in pharmacoepidemiology and postmarketing safety assessment studies, efficacy and safety clinical trials, non-inferiority testing, generic drug bioequivalence trials, quality control and quality assurance testing, drug abuse liability trials, thorough QT clinical trials, and biosimilarity. He has received various CDER- and FDA-level awards for excellence in leadership and regulatory research. Dr. Tsong is currently an associate editor of the Journal of Biopharmaceutical Statistics and of Statistics in Medicine, and he served as President of the ICSA in 2006.
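
The two-sets-of-hypotheses equivalence test described in the course abstract is the classical two one-sided tests (TOST) procedure. A minimal sketch, assuming a normally distributed difference in means with known standard error (the margins and numbers below are illustrative, not from the course):

```python
from statistics import NormalDist

def tost(diff, se, lower_margin, upper_margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of a difference in means.
    H01: diff <= lower_margin and H02: diff >= upper_margin must BOTH be
    rejected to conclude equivalence."""
    z = NormalDist()
    p_lower = 1 - z.cdf((diff - lower_margin) / se)   # small => reject H01
    p_upper = z.cdf((diff - upper_margin) / se)       # small => reject H02
    crit = z.inv_cdf(1 - alpha)
    ci = (diff - crit * se, diff + crit * se)         # (1 - 2*alpha) CI
    equivalent = p_lower < alpha and p_upper < alpha
    return equivalent, max(p_lower, p_upper), ci

# invented example: observed difference 0.1, SE 0.05, margins of +/- 0.2
eq, p, ci = tost(diff=0.1, se=0.05, lower_margin=-0.2, upper_margin=0.2)
```

Equivalently, equivalence at level alpha is concluded when the (1 − 2·alpha) confidence interval lies entirely inside the margins, which is how the result is usually reported.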


SC4 Short Course 4: Introduction to PK/PD Modeling for Statisticians

09/16/15
8:30 AM - 12:00 PM

Organizer(s): Yaming Hang, Biogen Idec; Alan Hartford, AbbVie

Instructor(s): Yaming Hang, Biogen Idec; Alan Hartford, AbbVie

Pharmacokinetic/pharmacodynamic (PK/PD) modeling using nonlinear mixed-effects models, also commonly called pharmacometrics, has been performed for many years. It is important for decision making in drug development and has an impact on drug labels. This modeling is usually performed by scientists from a variety of backgrounds with very different levels of statistical training. However, statisticians should play an equal and important role in PK/PD modeling; there is a broad range of statistical issues in this field that can benefit from statisticians' input. With wider acceptance of model-based drug development, statisticians need at least to be able to review modeling and simulation plans, results, and inferences to ensure correct implementation of statistical methodology and to appreciate the value of such analyses in the drug development process. In an effort to make this field more accessible to statisticians, this short course introduces concepts and methods for using nonlinear mixed-effects models to examine relationships between PK and PD endpoints while bridging the differences in terminology.
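
As background for the nonlinear models the course discusses, the fixed-effects mean structure in PK modeling is often a compartmental model. A minimal sketch of the one-compartment model with first-order oral absorption (all parameter values are illustrative assumptions, not course material):

```python
import math

def one_compartment_oral(t, dose, ka, ke, v, f=1.0):
    """Plasma concentration at time t under a one-compartment model with
    first-order absorption rate ka, elimination rate ke, volume of
    distribution v, and bioavailability f (closed form requires ka != ke)."""
    if abs(ka - ke) < 1e-12:
        raise ValueError("closed form requires ka != ke")
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# illustrative parameters: dose 100, ka = 1.0 /h, ke = 0.1 /h, v = 10 L
tmax = math.log(1.0 / 0.1) / (1.0 - 0.1)   # time of peak concentration
cmax = one_compartment_oral(tmax, dose=100, ka=1.0, ke=0.1, v=10)
```

In a nonlinear mixed-effects analysis, subject-level random effects would be placed on parameters such as `ka`, `ke`, and `v`; the sketch above is only the population mean curve.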


SC5 Short Course 5: Dose Finding in Drug Development: Methods and Implementation, With Focus on MCP-Mod

09/16/15
1:30 PM - 5:00 PM

Organizer(s): Cristiana Mayer, Johnson & Johnson

Instructor(s): Frank Bretz, Novartis; José C. Pinheiro, Johnson & Johnson

The revolutionary advances in basic biomedical science that occurred over the past decade have, so far, failed to translate into comparable improvements in clinical therapies and drugs. In fact, the number of new drug applications (and approvals) has shown a decline over the same period, leading to the so-called “pipeline problem” of the pharmaceutical industry. In response, different initiatives, such as FDA’s Critical Path, have been put in place to identify key drivers of poor performance in translating basic science into successful therapies, and to propose ways to address them in clinical drug development. A well-known problem is poor dose selection for confirmatory trials resulting from inappropriate knowledge of dose response relationship (efficacy and safety) at the end of the learning phase of drug development. This course will discuss, and propose methods to address, the key statistical issues leading to the problems currently observed in dose finding studies, including a review of basic multiple comparisons and modeling methods, as traditionally used in these studies. A unified strategy for designing and analyzing dose finding studies, denoted MCP-Mod, combining multiple comparison and modeling, will be the focus of the course. MCP-Mod received a positive CHMP qualification opinion in January 2014, as an efficient statistical methodology for model-based design and analysis of phase II dose finding studies under model uncertainty. It will be discussed in detail, including a step-by-step description of its practical implementation. Case studies based on real clinical trials, together with concrete examples of code in R, will be used to illustrate the use of the methodology. The extension of this framework will be described for count data and time-to-event endpoints and situations involving generalized non-linear models, linear and non-linear mixed effects models, and Cox proportional hazards models. 
A short reference will be made to another extension of the comprehensive multiple comparisons and modeling framework to confirmatory testing in dose-response studies using MCP-Mod.

Outline:
1. Introduction (10 min)
a) Motivation: the pharmaceutical industry pipeline problem
b) Initiatives to address it: Critical Path and PhRMA WGs
c) Course overview
2. Multiple comparison procedures (MCP) for dose finding (25 min)
a) Definitions: commonly used single and multiple contrast tests
b) Examples
c) Software implementing MCP in SAS and R
3. Modeling approach for dose finding (25 min)
a) Modeling the dose-response (DR) profile
b) Common DR models used in practice
c) Software for model fitting in R
4. MCP-Mod methodology combining MCP and modeling (75 min)
a) Motivation and overview
b) Candidate DR models
c) Testing for DR signal
d) Model selection and target dose estimation
e) Extension to non-normally distributed data: count data and time-to-event
f) Confirmatory dose finding with Type I error rate control using MCP-Mod
g) Examples
5. The DoseFinding R package (25 min)
a) Design: candidate models, sample size, power
b) Analysis: tests, dose estimation, confidence intervals
6. Case study: General Anxiety Disorder dose finding trial (25 min)
a) Step-by-step use of MCP-Mod with focus on trial design
b) Illustration of the R package
7. Concluding remarks (10 min)

Learning outcomes: At the end of the course, attendees should understand:
a. the goals and characteristics of dose-finding trials and why traditional methods tend to lead to poor dose selection and dose-response estimation;
b. alternative approaches based on multiple comparisons and modeling for target dose selection and dose-response estimation;
c. the key concepts of the MCP-Mod methodology and how to implement it in practice for designing and analyzing dose finding trials;
d. the capabilities of the DoseFinding R package and how to use its functions to implement the MCP-Mod methodology.
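
The MCP step of MCP-Mod tests for a dose-response signal with model-based contrasts. A simplified sketch of that step (in Python rather than the course's R/DoseFinding; the candidate shapes, the ED50 guess for the Emax shape, the data, and the omission of the multiplicity-adjusted critical value are all simplifications for illustration):

```python
import math

doses = [0.0, 0.05, 0.2, 0.6, 1.0]

# Candidate standardized dose-response shapes; ED50 = 0.2 for Emax is a guess.
candidates = {
    "linear": list(doses),
    "emax":   [d / (d + 0.2) for d in doses],
}

def normalized_contrast(mu):
    """Contrast proportional to the centered model means (equal group sizes)."""
    m = sum(mu) / len(mu)
    c = [x - m for x in mu]
    norm = math.sqrt(sum(x * x for x in c))
    return [x / norm for x in c]

def contrast_z(ybar, sigma, n_per_arm):
    """Z-statistic per candidate model; in MCP-Mod a dose-response signal is
    claimed if the maximum exceeds a multiplicity-adjusted critical value."""
    out = {}
    for name, mu in candidates.items():
        c = normalized_contrast(mu)
        num = sum(ci * yi for ci, yi in zip(c, ybar))
        den = sigma * math.sqrt(sum(ci * ci for ci in c) / n_per_arm)
        out[name] = num / den
    return out

# invented arm means showing a clear increasing trend
stats = contrast_z([0.0, 0.2, 0.5, 0.8, 1.0], sigma=0.5, n_per_arm=30)
```

The full methodology additionally derives optimal contrast coefficients from the candidate models and covariance structure, and proceeds to model selection and target dose estimation; the DoseFinding R package implements all of this.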


SC6 Short Course 6: Statistical Strategies for Clinical Development of Personalized Medicines

09/16/15
1:30 PM - 5:00 PM

Organizer(s): Cong Chen, Merck

Instructor(s): Cong Chen, Merck

The future of oncology drug development lies in identifying subsets of patients who will benefit from particular therapies, using putative predictive biomarkers. These technologies offer hope of enhancing the value of cancer medicines and reducing the size, cost, and failure rates of clinical trials. However, inappropriate use of biomarkers adds cost, complexity, and time to drug development. This short course presents advanced statistical methodologies and strategies for improving the efficiency of, and mitigating the risk in, late-stage development of personalized medicines. The first part of the short course is devoted to the conventional development paradigm. We will present methods to optimize the design of Phase 2 studies and to adaptively integrate predictive biomarkers into Phase 2 and Phase 3 clinical programs in a data-driven manner, in which these biomarkers are emphasized in exact proportion to the evidence supporting their clinical predictive value. The resulting program is designed to optimally harvest the value of predictive biomarkers. The second part of the short course is devoted to the expedited development paradigm, in which Phase 3 randomized confirmatory trials are initiated at risk after significant preliminary anti-tumor activity is observed in small Phase 1/2 single-arm studies. We will present an informational design strategy for risk mitigation. The strategy is applied to address a wide range of issues, including de-selection of non-performing biomarker subpopulations. Students taking this short course will have an opportunity to learn the relevant state-of-the-art statistical techniques, exchange ideas, and immediately apply what they learn in practice.


SC7 Short Course 7: Bayesian Adaptive Phase I Oncology Trials: Methodology and Implementation

09/16/15
1:30 PM - 5:00 PM

Organizer(s): Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

Instructor(s): Beat Neuenschwander, Novartis Pharma AG; Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

Phase I trials in oncology are usually small adaptive dose-escalation trials. The aim is to approximately understand the dose-toxicity profile of a drug and, eventually, to find a reasonably safe dose for future testing. A lot of statistical research on Phase I trials has accumulated over the past 25 years, with modest impact on statistical practice. The vast majority of trials still follow the 3+3 design, despite the fact that it often misses the targeted dose (poor operating characteristics) and fails to provide a real understanding of true toxicity rates (no statistical inference). In this course we present a comprehensive and principled statistical approach. The implementation is Bayesian, with the following main parts: a parsimonious model for the dose-toxicity relationship; the possibility to incorporate contextual information ("historical data") via priors; and safety-centric metrics (overdose probabilities) which inform dose adaptations under appropriate overdose control. After some basic clinical and statistical considerations, we introduce the statistical methodology for the single-agent setting, and then extend it to dual and triple combinations. Applications and a discussion of implementation issues (such as basic WinBUGS code) complement this training and provide practical insights into Phase I trials.
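
The overdose-probability metric described above can be sketched with a simple grid approximation to the posterior of a two-parameter Bayesian logistic dose-toxicity model. Everything below (reference dose, prior, toxicity threshold, invented data) is an illustrative assumption, not the course's actual implementation, which uses WinBUGS:

```python
import math
from itertools import product

DOSES = [1.0, 2.0, 4.0, 8.0, 16.0]
REF = 8.0  # reference dose for scaling log-dose (an assumption)

def eta_fn(dose, log_alpha, log_beta):
    """Linear predictor: logit P(DLT) = log(alpha) + exp(log beta) * log(d/REF)."""
    return log_alpha + math.exp(log_beta) * math.log(dose / REF)

def sigmoid(eta):
    if eta >= 0:
        return 1.0 / (1.0 + math.exp(-eta))
    e = math.exp(eta)
    return e / (1.0 + e)

def log_binom(eta, n, x):
    """Numerically stable binomial log-likelihood (up to a constant)."""
    log_p = -math.log1p(math.exp(-eta)) if eta > 0 else eta - math.log1p(math.exp(eta))
    return x * log_p + (n - x) * (log_p - eta)   # log(1-p) = log(p) - eta

def posterior_grid(data, prior=(-1.0, 1.5, 0.0, 0.7)):
    """Grid posterior of (log alpha, log beta); prior = independent normals
    (mean_a, sd_a, mean_b, sd_b) -- values here are illustrative guesses."""
    ma, sa, mb, sb = prior
    steps = [i / 10.0 for i in range(-30, 31)]   # grid from -3.0 to 3.0
    pts, logw = [], []
    for la, lb in product(steps, steps):
        w = -0.5 * ((la - ma) / sa) ** 2 - 0.5 * ((lb - mb) / sb) ** 2
        for d, n, x in data:                      # data: (dose, n_treated, n_DLT)
            w += log_binom(eta_fn(d, la, lb), n, x)
        pts.append((la, lb)); logw.append(w)
    mx = max(logw)
    w = [math.exp(v - mx) for v in logw]
    tot = sum(w)
    return pts, [v / tot for v in w]

def overdose_probs(data, threshold=0.33):
    """Posterior P(DLT rate >= threshold) per dose; escalation would be
    restricted to doses where this probability is suitably small."""
    pts, w = posterior_grid(data)
    return {d: sum(wi for (la, lb), wi in zip(pts, w)
                   if sigmoid(eta_fn(d, la, lb)) >= threshold)
            for d in DOSES}

# invented trial data: no DLTs in two cohorts of 3 at the lowest two doses
od = overdose_probs([(1.0, 3, 0), (2.0, 3, 0)])
```

Because the slope parameter is constrained positive, the posterior overdose probability is non-decreasing in dose, which is what makes the escalation-with-overdose-control rule coherent.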


SC8 Short Course 8: Designing Observational Comparative Studies Using Propensity Score Methodology in Regulatory Settings

09/16/15
1:30 PM - 5:00 PM

Organizer(s): Nelson T Lu, FDA/CDRH; Yunling Xu, FDA/CDRH

Instructor(s): Donald Rubin, Harvard University; Lilly Q Yue, FDA/CDRH

Although well-controlled and well-conducted randomized clinical trials (RCTs) are viewed as the gold standard in the safety and effectiveness evaluation of medical products, including drugs, biological products, and medical devices, observational (non-randomized) comparative studies play an important role in medical product evaluation, for ethical or practical reasons, in both pre-market and post-market regulatory settings. However, various biases can be introduced at every stage and into every aspect of an observational study, and consequently the interpretation of the resulting statistical inference can be of concern. Among the existing statistical techniques for addressing some of these challenging issues, propensity score methodology is increasingly used in regulatory settings, due to its unique feature of separating "study design" from "outcome analysis". This course will introduce the causal inference framework and propensity score methods (e.g., matching, stratification, and weighting), and highlight the principle and importance of prospectively designing observational comparative studies to increase the integrity and interpretability of outcome analysis results. Practical issues encountered in applying the methodology in regulatory settings will be presented, including but not limited to the study design process in regulatory submissions, specification of the treatment effect of interest (average treatment effect (ATE) or average treatment effect on the treated (ATT)), covariate identification and inclusion, control group selection/formation (a concurrent control, a historical control, or a control group extracted from a national/international registry), and sample size and power considerations. Some differences in implementing propensity score methodology will be delineated for studies with different purposes, whether regulatory submissions or general comparative effectiveness research. For example, exclusion of treated patients with an investigational product should be discouraged in studies aimed at pre-market regulatory submissions. These topics will be illustrated with examples based on regulatory review experience.
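
As a small illustration of one of the propensity score methods named above, stratification: the ATT can be estimated by weighting within-stratum treated-minus-control mean differences by the number of treated subjects per stratum. The sketch below uses synthetic, unconfounded data with known scores; a real analysis would first estimate the propensity scores and check covariate balance:

```python
def quintile_edges(scores):
    """Cut points splitting the propensity scores into five strata."""
    s = sorted(scores)
    n = len(s)
    return [s[n * q // 5] for q in range(1, 5)]

def stratum(p, edges):
    """Index (0-4) of the stratum containing propensity score p."""
    return sum(p > e for e in edges)

def att_stratified(ps, treated, y):
    """ATT via propensity-score stratification: within-stratum differences in
    mean outcome, weighted by the number of treated subjects per stratum."""
    edges = quintile_edges(ps)
    num = den = 0.0
    for k in range(5):
        yt = [yi for p, t, yi in zip(ps, treated, y) if t == 1 and stratum(p, edges) == k]
        yc = [yi for p, t, yi in zip(ps, treated, y) if t == 0 and stratum(p, edges) == k]
        if yt and yc:
            num += len(yt) * (sum(yt) / len(yt) - sum(yc) / len(yc))
            den += len(yt)
    return num / den

# synthetic data: known scores, alternating assignment, true effect = 1.0
ps = [i / 20 for i in range(20)]
treated = [i % 2 for i in range(20)]
y = [1.0 + t for t in treated]
att = att_stratified(ps, treated, y)
```

Weighting by treated counts targets the ATT; weighting by total stratum size would instead target the ATE, which is one of the effect-specification choices the course discusses.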


Thu, Sep 17

Opening Remarks from the Workshop Co-chairs

09/17/15
8:00 AM - 8:15 AM

Chair(s): Wei Zhang, FDA/CVM; Richard C Zink, JMP Life Sciences, SAS Institute

 

Plenary Session 1 - The Future of Precision Medicine

09/17/15
8:15 AM - 9:45 AM

Organizer(s): Wei Zhang, FDA/CVM; Richard C Zink, JMP Life Sciences, SAS Institute

Precision Medicine Initiatives at FDA
Lisa LaVange, FDA

Micro-randomized Trials & mHealth
Susan Murphy, University of Michigan

 

Plenary Session 2 - Panel Discussion on the Future of Precision Medicine

09/17/15
10:00 AM - 11:30 AM

Organizer(s): Wei Zhang, FDA/CVM; Richard C Zink, JMP Life Sciences, SAS Institute

Panelist(s): Greg Campbell, formerly of FDA/CDRH; Cong Chen, Merck; Lisa LaVange, FDA; Susan Murphy, University of Michigan; Estelle Russek-Cohen, FDA/CBER; Richard Simon, National Cancer Institute

 

Roundtable Discussions

09/17/15
11:45 AM - 1:00 PM

TL1: Logistics and Implementation of Adaptive Trial Designs
Eva R Miller, inVentiv Health Clinical

TL2: Enriching patient population by response: placebo run-in, randomized withdrawal, sequential parallel comparison design (SPCD) and twice enriched design (TED).
Anastasia Ivanova, University of North Carolina at Chapel Hill

TL3: Key Characteristics in Bayesian Adaptive Design
Xin Fang, CDRH, FDA

TL4: Challenges and opportunities for statisticians in planning/implementing adaptive trial designs.
Nan Shao, Covance, Inc.

TL5: H0 P-value based futility decision and other seemingly inappropriate methods
Stan Lin, FDA/CBER

TL6: Adaptive Designs in Unblinded Studies?
Jie (Jack) Zhou, FDA CDRH

TL7: Are statisticians ready to implement increasing number of platform trials?
Emelita M. de Leon-Wong, PPDI

TL8: Adaptive design in medical device trials
Peter Lam, Boston Scientific

TL9: Statistical issues and methods in biosimilars
Jin Xu, Merck

TL10: Precision Medicine: Statistical issues, trial design and regulatory aspects
Amir A Handzel, Astrazeneca

TL11: Relevance and Data Accessibility for Network Meta-Analyses for Comparative Effectiveness Research Using Patient-level Randomized Clinical Trial Data
Leiya Han, PPD

TL12: Statistical Assessment of Comparative Effectiveness in Clinical Trials
Isaac Nuamah, Janssen Research & Development

TL13: Precision studies for In-vivo devices
Bipasa Biswas, FDA

TL14: Incorporating Futility into a Phase 3 Outcomes Trial Governed by a Data Monitoring Committee
Richard Davies, GlaxoSmithKline

TL15: Robust Decision Making In Early Stage Clinical Development
Erik Pulkstenis, MedImmune; Yanli Zhao, MedImmune/AstraZeneca

TL16: Best practices in next-generation sequencing methodology with impact on high-dimensional findings
Justin Wade Davis, AbbVie

TL17: How to treat site in clinical trials - fixed or random?
Chul H Ahn, FDA-CDRH

CANCELLED - TL18: Limit of Quantitation by Mean Inverse Model Approach
Kuang-Lin He, Fujirebio Diagnostics, Inc.

TL19: Making sense of sensors
Vadim Zipunnikov, Johns Hopkins University

TL20: Bayesian Meta-Analysis and Meta-Analysis for Stroke and Myeloma
Xiaoping Liu, Department of Hematology, Zhongnan Hospital of Wuhan University

TL21: The Prevention and Treatment of Missing Data in Clinical trials – How far have we come?
Gosford Aki Sawyerr, Janssen Pharmaceuticals

TL22: Practical Issues with MMRM
Dalong Patrick Huang, FDA/CDER

TL23: Impact of missing data and their imputations in long-term treatment of chronic auto-immune diseases
Achim Guettner, Novartis Pharma AG; Carin Kim, FDA; Karthinathan Thangavelu, Genzyme

TL24: Missing Data Analysis Planning in Late-Stage Clinical Trials: A Check-up on Current Practices
Davis Gates, Merck

TL25: Investigating Product Complaints: Pitfalls of Working with Manufacturing Data
Thomas Richardson, BIOVIA; Aaron M Spence, BIOVIA

TL26: Validation of Predictive Modeling in Observational Studies
Rui Li, Quintiles; Zhaohui Su, Quintiles

TL27: The Interface between Statistical and PKPD Modeling and Simulation
Matthew David Rotelli, Eli Lilly and Company

TL28: Analyses of Longitudinal Clinical Data with Time-Varying Covariates
Rong Liu, Eli Lilly and Company; Qianyi Zhang, Eli Lilly and Company

TL29: Bayesian and Frequentist Approaches to Non-Inferiority Clinical Trials
Carl DiCasoli, Bayer Healthcare Pharmaceuticals

TL30: Non-inferiority Trial with Survival Endpoints
Elena Rantou, FDA/CDER; Mengdie Yuan, FDA

TL31: The Impact of EU Post-Approval Safety Surveillance Studies (PASS)
Charles L Liss, AstraZeneca Pharmaceuticals

TL32: Statistical Considerations for Handling Treatment Switches in Observational Studies
William Hawkes, Quintiles RWLPR

TL33: PFS - central vs local
Lihui Zhao, Novartis

TL34: Non-inferiority in cancer trials
Tingting Yi, Novartis

TL35: Challenges and Opportunities of Statistics in Oncology Immunotherapy
Yi He, Celldex Therapeutics

TL36: Statistical Intellectual Property
Philip T Lavin, Lavin Consulting LLC

TL37: A Comprehensive Review of the Multiple-Sample Tests for the Three General Data Types
Dewi Gabriela Rahardja, DOD\WHS; Ying Yang, FDA/CDRH

TL38: Leadership and career development for Junior Statisticians working on clinical trials
Lei Gao, Sanofi

TL39: Crossover Design in Clinical Studies
Steven Bai, U.S. Food and Drug Administration; Tao Wang, Eli Lilly and Company

TL40: We the people of the Biopharm Section . . . - chat with the Chair-Elect
B Christine Clark, QDS

TL41: Patient Reported Outcomes in Oncology
Laura L Fernandes, FDA

TL42: Incorporating Patient Preferences Evidence into Regulatory Considerations
Martin Ho, Center for Devices and Radiological Health, FDA

TL43: PRO and COA Experiences: Regulatory and Patient Priorities and Processes
Laura Lee Johnson, FDA

TL44: Pathway for Antibiotics – Revisiting Endpoints and Designs
Yunxia Lu, PPD

TL45: Next Generation Sequencing Diagnostic Tests
Peggy Wong, Merck

TL46: Challenges and good practices to improve the quality of therapeutic device submissions
Manuela Buzoianu, FDA/CDRH

TL47: Promoting the involvement of statisticians and statistical analysis in risk-based monitoring of clinical trials
Xiaoqiang Xue, Quintiles INC

TL48: Blinded and Unblinded Evaluation of Aggregate Safety Data during Clinical Development
Bill Wang, Merck

TL49: Risk Stratification Strategies to Identify Low-Risk Patients In Cardiovascular Clinical Trials
CV Damaraju, Janssen Research and Development, LLC; Juliana Ianus, Janssen R&D

TL50: Suicidal Ideation and Behavior: Design and Analysis of Clinical Trials
Rosanne Lane, Janssen Research & Development, LLC; Pilar Lim, Janssen Research & Development

TL51: IWRS: Interactive Web Response System – Looking beyond randomization and medication kit assignment
Kim Cooper, Janssen, R&D; Rama Melkote, Janssen, R&D

 

PS1a Parallel Session: Statistical Experiences on Subgroup-stratified, Biomarker-stratified or Enrichment Trials

09/17/15
1:15 PM - 2:30 PM
Thurgood Marshall North

Organizer(s): Liming Dong, FDA; Eva R Miller, inVentiv Health Clinical; Yuanjia Wang, Department of Biostatistics, Columbia University; Xiting (Cindy) Yang, FDA/CDRH

Chair(s): Xiting (Cindy) Yang, FDA/CDRH

When a biomarker or baseline characteristic is identified as having great potential for predicting the treatment effect of a new therapy, a stratified design may be used: all patients are randomly assigned regardless of subgroup status, but the analysis plan is centered on testing whether the treatment effect depends on subgroup status. A similar situation applies to trials with concerns about heterogeneity of treatment effect among subgroups defined by important baseline characteristics such as age and gender. At other times, a biomarker is not known at the beginning of a study, and the analysis plan may include identification of the biomarker, followed by an adaptive enrichment plan. In this session, speakers from academia, industry, and government will share their experiences with these designs.

Opportunities of enrichment designs in the era of precision medicine
Bo Huang, Pfizer Inc.

Optimal Subgroup Sample Size Allocations in Clinical Studies
Guoxing (Greg) Soon, FDA/CDER/OTS/OB

Determining the Intended Use Population in Phase III Clinical Trials
Richard Simon, National Cancer Institute

 

PS1b Parallel Session: Current and Future Role of the Clinical Statistician in the World of Data Transparency

09/17/15
1:15 PM - 2:30 PM
Thurgood Marshall South

Organizer(s): Jeffrey L Joseph, Theorem Clinical Research; Vivian Shih, AstraZeneca; Stephen Wilson, FDA/CDER/OTS/OB/DBIII; Yueqin Zhao, U.S. Food and Drug Administration

Chair(s): Stephen Wilson, FDA/CDER/OTS/OB/DBIII

Pharmaceutical and biotechnology companies submit large amounts of clinical data and analyses to support approval of their drugs and devices, with the expectation that this information will be kept confidential, as has been the practice of regulators for decades. With pressure now mounting, regulators and industry have to review policies and procedures on how to release these data to enhance public health. The future will include data transparency. The FDA issued the deliberations of its Transparency Task Force (2011), and the EMA (2013) has released a draft policy document entitled 'Publication and access to clinical trial data'. PhRMA (2011) has indicated that pharmaceutical and biotechnology companies are committed to enhancing biomedical research through the responsible sharing of clinical trial data in a way that protects patient privacy for research participants, preserves the integrity of regulatory systems, and maintains incentives for investment in biomedical research. Such responsible data sharing would enhance public health and accelerate development of new drugs and devices by allowing re-analysis of existing data compiled in sponsor-supported clinical trials. The procedures and processes needed for data transparency, with re-analysis of the clinical data subject to scientific rigor, create possible new roles and responsibilities for the statistician. These roles and responsibilities fall into three categories: 1) access to the clinical data through an independent panel, 2) planning and analysis of the data, and 3) planning and design of future clinical trials. For a researcher to gain access to clinical data under data transparency, a number of pharmaceutical companies have set up an independent panel (e.g., GSK has an independent panel (2014); Janssen uses the Yale Open Data Access (YODA) Project at Yale University).
This session will consider the roles and responsibilities of the statistician as a member of such a panel, and the criteria for reviewing a research proposal plan (design, methods, and analysis) against its scientific objectives. A statistician currently in this role will update attendees on his or her experience. A researcher, in conjunction with a statistician, will need to provide a formal statistical analysis plan to gain access to the data through a gatekeeper, and a formal study report will need to be provided for public access within one year of completing the re-analysis. The session will discuss the analyses needed across multiple clinical studies, such as meta-analysis and any new or exploratory analyses, along with the issues of multiplicity and subgroup analyses. The generation and combining of patient-level data for ease of analysis will also be discussed, including appropriate methods of de-identification, standardization of the data (e.g., CDISC), and appropriate coding (e.g., medication procedures, adverse events). Lastly, this session will address how data transparency would affect the planning and design of future clinical trials. Will there be an effect on the sample size of future trials? Would transparency increase or decrease the use of adaptive designs in future trials? References: European Medicines Agency policy on access to documents (related to medicinal products for human and veterinary use). 2010. www.ema.europa.eu/docs/en_GB/document_library/Other/2010/11/WC500099473.pdf Draft Policy 70: Publication and access to clinical-trial data. 2013. European Medicines Agency. www.ema.europa.eu/ema/pages/includes/document/open_document.jsp?webContentId=WC500144730 PhRMA’s Principles on conduct of clinical trials and communication of clinical trial results. 2011. http://phrma/org/sites/default/pdf/042009_clinical_trial_principles_final_0.pdf EFPIA and PhRMA Principles for responsible clinical trial data sharing. 
Our commitment to patients and researchers. 2013. http://transparency.efpia.eu/uploads/modules/documents/data-sharing-prin-final.pdf Strom BL, Buyse M, Hughes J, Knoppers BM. Data Sharing, Year 1 - Access to Data From Industry-Sponsored Clinical Trials. New England Journal of Medicine. Published online October 15, 2014, at NEJM.org. http://www.nejm.org/doi/full/10.1056/NEJMp1411794#t=article Nisen P, Rockhold F. Access to Patient-Level Data from GlaxoSmithKline Clinical Trials. N Engl J Med. 2013; 369(5): 475-478.

Supporting Open Access for Researchers
View Presentation View Presentation Michael Pencina, Duke Clinical Research Institute

Data Sharing, Year 2 — Access to Data from Industry-Sponsored Clinical Trials
View Presentation View Presentation Marc Buyse, IDDI Inc.

Yale Open Data Access (YODA) – J&J Collaboration with Yale
View Presentation View Presentation Jeffrey Gardner, Janssen R&D

Discussant(s): Stephen Wilson, FDA/CDER/OTS/OB/DBIII

 

PS1c Parallel Session: Big Data/Big Analytics: Challenges and Opportunities in Pre- and Post-market Medical Product Evaluations Utilizing National/International Registries

09/17/15
1:15 PM - 2:30 PM
Thurgood Marshall East

Organizer(s): Charles H Darby, Statistical Consultant; Rakhi Kilaru, PPD; Nelson T Lu, FDA/CDRH; Yunling Xu, FDA/CDRH

Panelist(s): Jesse Berlin, J&J; Chunrong Cheng, CBER/FDA; Rima Izem, OB/CDER/FDA; Theodore Lystig, Medtronic, Inc.; Bram Zuckerman, CDRH/FDA

Regulatory decisions are made based on the assessment of risks and benefits of medical products, including drugs, biologics, and medical devices, at the time of pre-market approval and subsequently, when the post-market risk-benefit balance needs reevaluation. Such assessments depend on scientific evidence obtained from pre-market studies, post-approval studies, post-market surveillance studies, and relevant registries. National and international registries are playing increasingly important roles in the safety and effectiveness evaluation of medical products, in both pre- and post-market settings. Although such registries provide a huge amount of data reflecting real-world practice, challenges arise concerning how to use the data to draw reliable statistical inferences. This session will focus on why and how to prospectively design clinical studies utilizing registries for objective causal inference. Statistical and regulatory challenges and opportunities will be presented with examples. Various issues will be discussed by a panel of statistical and medical experts from academia, industry, and government, from both pre- and post-market perspectives. 1) Objective Observational Study Design Using Big Data for Causal Inference: Donald Rubin, John L. Loeb Professor of Statistics, Department of Statistics, Harvard University. 2) Designing Observational Comparative Studies Using Registry Data in Regulatory Settings: Lilly Yue, PhD, Deputy Director, Division of Biostatistics, Center for Devices and Radiological Health, U.S. Food and Drug Administration. 3) Panelists: Jesse Berlin, ScD, VP, J&J; Chunrong Cheng, PhD, Senior Mathematical Statistician, Division of Biostatistics, CBER/FDA; Rima Izem, PhD, Team Leader, Division of Biometrics VII, OB/CDER/FDA; Ted Lystig, PhD, Distinguished Statistician, Medtronic Inc.; Bram Zuckerman, MD, Director, Division of Cardiovascular Devices, CDRH/FDA

Objective Observational Study Design Using Big Data for Causal Inference
Donald Rubin, Harvard University

Designing Observational Comparative Studies Using Registry Data in Regulatory Settings
Lilly Q Yue, FDA/CDRH

 

PS1d CMC Session: Analytical Similarity - Current Statistical Issues in Biosimilar Product Development

09/17/15
1:15 PM - 2:30 PM
Thurgood Marshall West

Organizer(s): Meiyu Shen, FDA; Harry Yang, MedImmune LLC

Chair(s): Harry Yang, MedImmune LLC

The biosimilars regulatory program has been taking shape at the FDA over the past three years. Several guidance documents have been issued. In parallel, associated statistical methodologies have been evolving. For example, statistical methods have been developed to accommodate the limited availability of biological materials and lots. Statistical assessment of biosimilars in a clinical setting also remains an outstanding issue as we grapple with new challenges arising from different endpoints and limited information. This session consists of two presentations, one by an expert statistician representing an industry perspective and one by an expert representing a regulatory perspective; together they will provide fresh insight into the application of, and issues surrounding, statistical approaches to analytical similarity. There will be ample time for audience discussion and participation during the second half of the session.

Discussant(s): Rick Burdick, Amgen Inc.; Yi Tsong, US FDA CDER

 

PS1e Parallel Session: DMCs and Adaptive Clinical Trials: Considerations in Balancing Safety and Trial Integrity

09/17/15
1:15 PM - 2:30 PM
Lincoln 5

Organizer(s): Greg Ball, AbbVie; Michelle A Detry, Berry Consultants; Zhuang Miao, FDA; Jie (Jack) Zhou, FDA CDRH

Chair(s): Michelle A Detry, Berry Consultants

Traditionally, data monitoring committees (DMCs)—the independent bodies tasked with overseeing ongoing clinical trials with the goals of mitigating risks to subjects and protecting the scientific integrity of the trial—have been given substantial latitude in the scope of the information they consider and the types of recommendations they may make. FDA’s thoughts regarding the function of DMCs were captured in a March 2006 FDA Guidance Document entitled “Guidance for Clinical Trial Sponsors: Establishment and Operation of Clinical Trial Data Monitoring Committees” (see http://www.fda.gov/downloads/Regulatoryinformation/Guidances/ucm127073.pdf). Since the issuing of the FDA Guidance on DMCs, there has been increasing interest in the use of adaptive clinical trials—trials in which key aspects of the trial are modified during the trial in response to data accumulating within the trial itself, according to pre-specified rules, to achieve goals of efficiency, improved patient outcomes, or better ethical balance—in both the exploratory and confirmatory phases of clinical development. The FDA’s current thinking regarding the use of such designs is largely captured in a 2010 draft Guidance entitled “Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics” (see http://www.fda.gov/downloads/Drugs/.../Guidances/ucm201790.pdf) and the 2015 draft guidance “Adaptive Designs for Medical Device Clinical Studies” (see http://www.fda.gov/ucm/groups/fdagov-public/@fdagov-meddev-gen/documents/document/ucm446729.pdf). When overseeing the conduct of a confirmatory, adaptive clinical trial, a DMC must balance the need for flexibility in responding to unexpected patterns in efficacy and safety and the need to maintain the pre-specified nature of the trial design and statistical integrity for regulatory review. This requires a sophisticated understanding of these competing considerations, regulatory science, and clinical care by the DMC members. 
As both traditional and innovative adaptive designs have become more prevalent, previously unconsidered issues have arisen, requiring DMCs' roles and responsibilities to evolve. This session will use the existing FDA Guidance Documents and speaker expertise in both serving on and supporting DMCs to illustrate and explore the potential issues and propose solutions. The objectives of this session are: 1. To briefly review the current FDA Guidance Document on the responsibilities and operation of DMCs, with particular emphasis on areas that potentially impact the oversight and integrity of confirmatory, adaptive clinical trials; 2. To briefly review the current FDA Draft Guidance Documents on adaptive design clinical trials, with particular emphasis on areas more likely to impact the roles, responsibilities, and operation of DMCs; 3. To identify and discuss areas of agreement, potential conflicts, and gaps in these FDA Guidance Documents for guiding the oversight of confirmatory adaptive design clinical trials; 4. To suggest operational procedures and clarifications of roles and responsibilities to be included in DMC charters that best address the issues identified above; and 5. To identify gaps in regulatory science related to the oversight of confirmatory, adaptive design clinical trials.

DMCs and Adaptive Clinical Trials: Considerations in Balancing Safety and Trial Integrity
View Presentation View Presentation Roger Lewis, Berry Consultants

DMCs and Adaptive Clinical Trials: Keep it Simple, Statistically
Thomas D Cook, University of Wisconsin School of Medicine and Public Health

Adaptive Clinical Trials and Data Monitoring Committees: One View from the World of Medical Devices
Greg Campbell, Consultant

 

PS1f Parallel Session: Advanced Multiple Testing Methodologies for Confirmatory Trials

09/17/15
1:15 PM - 2:30 PM
Lincoln 6

Organizer(s): Freda Cooner, FDA/CDER; Xuan Liu, AbbVie; Julia Jingyu Luan, FDA/CDER; Anthony Rodgers, Merck

Chair(s): Xuan Liu, AbbVie

There has been considerable development in advanced multiple testing methodologies in recent years, in response to the demand for more efficient confirmatory trial designs. An ideal trial design needs to be tailored not only to meet the sponsor’s objectives with “optimal” efficiency but also to ensure valid interpretation of the study results for regulatory purposes. Such a trial design often presents a very complex multiple testing problem, with multiplicity arising from several sources. For example, a confirmatory trial can include multiple endpoints, multiple doses of the investigational drug, multiple control arms, or multiple populations. It may also include adaptive features such as group sequential designs with early stopping for efficacy or futility, seamless phase II/III designs with adaptations between phases, or enrichment designs that narrow down the patient population. The multiple testing strategy is not only one of the most important components of the trial design from the sponsor’s perspective, but also an important factor in regulatory agencies’ review processes. In this session, a group of prominent researchers from the pharmaceutical industry and regulatory agencies will present their research and views on this important topic.

Innovative Clinical Trial Designs That Control For Multiplicity
View Presentation View Presentation Walter W Offen, AbbVie

Generalized error rates for subgroup analyses
Frank Bretz, Novartis

Validity of the Hochberg Procedure Revisited for Clinical Trial Applications
Mohammad F. Huque, CDER/FDA

Discussant(s): Hsien-Ming James Hung, FDA

 

PS2a Parallel Session: Logical Inference on Treatment Efficacy in Subgroups and Their Combinations in Personalized Medicine Development

09/17/15
2:45 PM - 4:00 PM
Thurgood Marshall North

Organizer(s): Thomas Birkner, FDA; Ying Ding, Department of Biostatistics, University of Pittsburgh; Jason C Hsu, Eli Lilly and Company & Ohio State University; Xiang Ling, FDA/CDER

Chair(s): Jason C Hsu, Eli Lilly and Company & Ohio State University

In personalized medicine development, the patient population is thought of as a mixture of two or more subgroups that may derive differential treatment efficacy. In order to find the right patient population for the treatment to target, it is necessary to infer treatment efficacy in subgroups and combinations of subgroups. A fundamental consideration in this inference process is that the logical relationships between treatment efficacy in subgroups and their combinations should be respected (otherwise the consistency assessment of efficacy may become paradoxical). Surprisingly, this basic principle is violated by several commonly used efficacy measures and/or popular inference procedures, leading to illogical conclusions. In this session, new inference procedures that preserve the logical relationships on appropriately defined efficacy measures will be presented. The presentation by Dr. Yi Liu will contrast the different objectives of developing new drugs that target subgroups of patients versus individualizing the selection of treatment among existing drugs for each patient. For developing new drugs, she will illustrate how to logically infer the efficacy of a treatment for patients with biomarker values above a threshold, and how to choose a threshold for the biomarker. The presentation by Dr. Ying Ding will show, for time-to-event outcomes and ordinal biomarkers, how to analyze subgroups and their mixtures for treatment efficacy with suitable efficacy measures. Dr. James Hung will serve as discussant for the presentations.

Thresholding of a Companion Diagnostic test Confident of Efficacy in Targeted Population
Yi Liu, Takeda Pharmaceuticals

Logical Inference on Treatment Efficacy in Subgroups and Their Mixture, with an Application to Time-to-event Outcomes
Ying Ding, Department of Biostatistics, University of Pittsburgh

Discussant(s): Hsien-Ming James Hung, FDA

 

PS2b Parallel Session: Subgroup Analysis under Rising Regulatory Emphasis: Fundamentals and Challenges

09/17/15
2:45 PM - 4:00 PM
Thurgood Marshall South

Organizer(s): Yifan Wang, FDA; Bo Yang, AbbVie; Weiya Zhang, FDA; Yijie Zhou, AbbVie

Investigation and interpretation of the findings of subgroup analyses in confirmatory clinical trials have always been important yet challenging. In 2014, major regulatory agencies undertook initiatives regarding subgroup analysis: EMA issued a draft guideline, collected industry feedback, and held a workshop for further discussion; FDA held a public hearing on demographic subgroups and afterwards issued an action plan. With the rising regulatory emphasis on this topic, this session will revisit the fundamental statistical components embedded in subgroup analyses that enable correct decision making regarding subgroups, and discuss how these components can be addressed in today's regulatory environment.

Issues related to subgroup analysis and possible ways for improvements
View Presentation View Presentation Lu Cui, AbbVie, Inc.; Shufang Liu, AbbVie, Inc.

Subgroup analysis
View Presentation View Presentation Janet Wittes, Statistics Collaborative

Discussant(s): Bob Temple, FDA

 

PS2c Parallel Session: Large trials for major adverse cardiovascular events

09/17/15
2:45 PM - 4:00 PM
Thurgood Marshall East

Organizer(s): Aloka Chakravarty, FDA, CDER; Olga V Marchenko, Quintiles; Bret Musser, Merck

Chair(s): Richard C Zink, JMP Life Sciences, SAS Institute

Panelist(s): Aloka Chakravarty, FDA, CDER; Qi Jiang, Amgen; Olga V Marchenko, Quintiles; Jose' C. Pinheiro, Johnson & Johnson; Estelle Russek-Cohen, CBER FDA

Motivation: This session will discuss the design and operational aspects of trials with safety objectives, such as the cardiovascular outcome trials (CVOTs) of type 2 diabetes mellitus (T2DM) programs. CVOTs provide an opportunity to assess rare safety signals and better evaluate benefit-risk profiles, but present challenges in statistical and operational areas including efficiency of statistical design, study interpretation, long-term patient retention, data confidentiality and high cost. In this session, experts from the pharmaceutical industry and the FDA will share their thoughts on CV risk assessment strategies in T2DM development programs, and discuss lessons learned and best practices. Format: The session will open with a presentation given by Olga Marchenko from Quintiles, who is co-chair of the ASA Biopharmaceutical Section Safety Working Group, followed by a panel discussion featuring thought leaders from the biopharmaceutical industry and FDA. During the panel discussion, preplanned questions will be addressed first, followed by questions from the audience. The panel discussion will complement the presentation.

Overview of strategies for assessing CV risks of T2DM treatments and arising questions
View Presentation View Presentation Olga V Marchenko, Quintiles

 

PS2d CMC Session: Quality and Quality Metrics

09/17/15
2:45 PM - 4:00 PM
Thurgood Marshall West

Organizer(s): Stan Altan, JNJ; Liang Zhao, U.S. Food and Drug Administration

The Food and Drug Administration Safety and Innovation Act (FDASIA) of 2012 gave the FDA broad additional authority, including developing a uniform set of standards for all regulated products: generic, brand name, and OTC. The concept of Quality Metrics is being articulated and promoted by the new Office of Pharmaceutical Quality, and a guidance on Quality Metrics is in progress. FDA has emphasized the importance of a company's Quality Culture and its intent to identify and measure this culture with corresponding metrics. In May 2014, Russ Wesdyk of FDA proposed four "consensus Quality Metrics" with definitions and stated that additional metrics were still being defined. In June 2014, ISPE initiated an industry-wide Quality Metrics Pilot through McKinsey to collect data for calculation of the four consensus metrics plus others defined by the ISPE team. A published summary of the ISPE Quality Metrics Pilot data analysis and learnings is anticipated for further discussion at the ISPE Quality Metrics Summit in April 2015. This FDA-Industry Workshop session is dedicated to the Quality Metrics being developed by the FDA, with the aim of promoting a dialogue on the impact they might have on statistical practice related to the assessment and improvement of quality at biopharmaceutical companies.

Quality Metrics - Why it Matters?
Lawrence Yu, FDA/CDER/OPQ

Process Capability: Is it Just a Quality Metric?
Helen Strickland, GSK

 

PS2e Parallel Session: Current Statistical Issues in Biosimilar Product Development

09/17/15
2:45 PM - 4:00 PM
Lincoln 5

Organizer(s): Sungwoo Choi, FDA Center for Drug Evaluation and Research; Xin Gao, FDA Center for Drug Evaluation and Research; Bo Jin, Pfizer Biotechnology Clinical Development; Jason Liao, Novartis Pharmaceuticals

Chair(s): Joshua Chen, Sanofi Pasteur; Eric M. Chi, Amgen Inc.

FDA has issued several draft guidances on the development of biosimilar products. Specifically, the 2012 draft guidance on scientific considerations in demonstrating biosimilarity to a reference product recommends that sponsors use a stepwise approach in developing biosimilar products and indicates that FDA considers the totality of the evidence provided by a sponsor to support a demonstration of biosimilarity. The guidance discusses general scientific principles for conducting comparative structural and functional analyses, animal testing, human PK and PD studies, clinical immunogenicity assessment, and clinical safety and effectiveness studies. The other 2012 draft guidance, on quality considerations, provides recommendations on the scientific and technical information in the chemistry, manufacturing, and controls (CMC) section of a marketing application for a proposed biosimilar product. The 2014 draft guidance on clinical pharmacology data to support a demonstration of biosimilarity to a reference product further discusses concepts related to clinical pharmacology testing for biosimilar products, approaches for developing an appropriate clinical pharmacology database, and the utility of modeling and simulation in designing clinical trials. Despite these efforts, a number of statistical questions remain to be answered for the regulation and development of biosimilar products. These include, but are not limited to, standardization of manufacturing quality control; assessment of variability; stability testing and quality comparison; methodology to demonstrate clinical PK and PD similarity; statistical considerations for clinical efficacy and safety comparability trials, including determination of the equivalence margin; statistical assessment of immunogenicity similarity; and statistical design and assessment of biosimilar interchangeability. 
This session provides an excellent opportunity for statisticians from FDA, academia, and industry to work together and join forces in discussing these challenging issues in the development of biosimilar products. The presentations will span all stages of biosimilar development, from CMC and clinical PK and PK/PD to clinical efficacy and safety comparability trials. The session will consist of three presentations and discussion with industry, academia, and FDA representatives.

Equivalence Test for Two Emax Curves in Biosimilar Studies
Bo Jin, Pfizer Biotechnology Clinical Development

Statistical Approaches to Demonstrate Analytical Similarity of Quality Attributes
View Presentation View Presentation Cassie Dong, FDA

Statistical Issues in Comparative Clinical Studies of Biosimilars
View Presentation View Presentation Gregory Levin, Food and Drug Administration

 

PS2f Parallel Session: Adaptive Enrichment Design – A Way to Achieve the Goal of Personalized Medicine?

09/17/15
2:45 PM - 4:00 PM
Lincoln 6

Organizer(s): Min (Annie) Lin, FDA CBER; Feng Liu, GlaxoSmithKline Inc; Inna Perevozskaya, Pfizer; Zhiwei Zhang, FDA/CDRH

Chair(s): Meijuan Li, US Food and Drug Administration; Feng Liu, GlaxoSmithKline Inc

Adaptive enrichment designs, which involve preplanned rules for modifying enrollment criteria based on accrued data, have attracted growing interest in pharmaceutical research. An adaptive enrichment study is usually designed to decrease the heterogeneity of the patients being studied and is therefore considered a way of pursuing personalized medicine. While most of the statistical literature on adaptive enrichment designs focuses on approaches with pre-specified subpopulation characteristics identified prior to or early in clinical development (e.g., a predictive biomarker), some recent methodological work has focused on adaptively choosing the entry criteria based on interim observations. In this session, we will present recent methodological developments and case studies of adaptive enrichment design trials. Discussion will aim to better understand the methodological issues as well as the challenges in implementation from statistical, operational, and regulatory perspectives.

Adaptive Biomarker Population Selection and Enrichment in Confirmatory Phase III Trials
View Presentation View Presentation Cong Chen, Merck

Subgroup Selection in Adaptive Signature Designs of Confirmatory Clinical Trials
Zhiwei Zhang, FDA/CDRH

Optimal, Two Stage, Adaptive Enrichment Designs for Randomized Trials, using Sparse Linear Programming
View Presentation View Presentation Michael Rosenblum, Johns Hopkins Bloomberg School of Public Health

 

PS3a Parallel Session: Concerns with reanalysis for ongoing data transparency initiatives

09/17/15
4:15 PM - 5:30 PM
Thurgood Marshall North

Organizer(s): Theodore Lystig, Medtronic, Inc.

Chair(s): Michael Hale, Amgen

BACKGROUND Recent publications (Ebrahim et al. 2014, Christakis et al. 2013, Krumholz et al. 2014) have called attention to an emerging problem of poorly defined practices and conflicting results in the reanalysis of previously reported studies. While pre-specified statistical analysis plans are generally accepted practice for original studies, there appears to be a notable absence of agreed standards for reanalyzing existing data. The Christakis et al. 2013 viewpoint article discussed some of the situations leading to questionable reanalyses and recommended practices to address them. Ebrahim (2014) and colleagues reviewed 37 published reanalyses and compared the results and interpretations with those originally reported, finding unacceptably high rates of discordance, even when the reanalyses were conducted by the same people who performed the original analysis. As access to datasets from completed trials from industry, government, and others continues to increase, the potential for confusion among prescribers, payers, and patients is rising rapidly, and regulators face difficult labeling and market access questions. In view of this, reanalysis has important implications for our healthcare systems, and we should all be concerned that reanalysis delivers on its promise of greater certainty in our understanding of the benefits and risks of therapy. APPROACH The speakers will highlight some of the inherent problems of reanalysis by a secondary party, including degradation of source data due to de-identification and masking, working in a glove-box environment for analysis, possible lack of clarity regarding study conduct, measurements, and endpoints, and other complications. 
Some provocative questions will be posed for further discussion, such as the role of regulators in evaluating findings from reanalyses, and inappropriate practices related to the specification, conduct, and reporting of reanalyses that could lead to bias and misinformation. Early efforts to address these problems and practices will be presented, including potential ways forward. REFERENCES Ebrahim S, Sohani ZN, Montoya L, et al. Reanalyses of randomized clinical trial data. JAMA. 2014; 312(10): 1024-1032. doi:10.1001/jama.2014.9646. Christakis DA, Zimmerman FJ. Rethinking reanalysis. JAMA. 2013; 310(23): 2499-2500. doi:10.1001/jama.2013.281337. Krumholz HM, Peterson ED. Open Access to Clinical Data. JAMA. 2014; 312(10): 1002-1003. doi:10.1001/jama.2014.9647.

Reanalysis of De-Identified Data
Jonathan Hartzel, Merck & Co., Inc.

Re-Use of Clinical Trial Data: An FDA Perspective
Estelle Russek-Cohen, CBER FDA

Concerns with reanalysis for ongoing data transparency initiatives
View Presentation View Presentation Sara Hughes, GSK

 

PS3b Parallel Session: Bayesian subgroup analysis – opportunities and challenges in unmet medical need

09/17/15
4:15 PM - 5:30 PM
Thurgood Marshall South

Organizer(s): Freda Cooner, FDA/CDER; Margaret Gamalo, CDER, Food and Drug Adminstration; David Ohlssen, Novartis;

Traditionally, clinical trials have primarily been concerned with comparing treatments in an entire population to provide the most reliable data about the effects of the treatments. But it is often also important to determine whether there are differential treatment effects in subgroups: when there is potential heterogeneity of treatment effect in relation to pathophysiology, when there are practical questions about when to treat, or when there are doubts about benefit in specific groups that may be leading to inappropriate undertreatment. Many of these challenges, and some solutions, are discussed in the 2014 European Medicines Agency guideline on the investigation of subgroups. While this document represents an important step in this complex area, there seems to be a need for better tools that quantify the risks associated with key decisions. Bayesian methods provide a natural framework for balancing the risk of overlooking an important subgroup against the potential to make a decision based on a false discovery. Recently, due to the need to streamline drug development in areas of high unmet medical need, subgroup analysis has started to play a different role. For example, the FDA released a draft guidance on antibacterial therapies for unmet medical need that notes the possibility of using innovative design strategies, including Bayesian modeling approaches, for assessing subgroup-specific treatment effects in trials involving multi-site infections, instead of starting with multiple clinical trials in different sites (which requires duplication of regulatory and infrastructure efforts). Basket trials have also gained ground in oncology. This design recruits patients by biomarker status instead of cancer type. After biomarker identification, the patients are divided into multiple study arms (baskets) by cancer type, and the drug’s impact is assessed within the separate arms as well as within the study as a whole. 
In other challenging situations, such as pediatric drug development, the EMA produced a concept paper on the extrapolation of efficacy and safety. The document encourages the use of Bayesian methods to extrapolate efficacy from a source to a target population (e.g., adults to pediatrics). The appeal of subgroup analyses in these scenarios is undeniable. In this session, talks will showcase Bayesian subgroup analysis, including methodology, trial design considerations and other innovations, and case examples that involve investigations in subgroups.

A Bayesian approach for designing Phase 2 clinical trials with rare tumor types in Oncology
Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

Clustered Hierarchical Modeling for Identifying Promising Subgroups
Kert Viele, Berry Consultants

Bayesian hierarchical models for multi-way subgroup problems
Gene Anthony Pennello, FDA/CDRH

Discussant(s): Ravi Varadhan, The Johns Hopkins Center on Aging and Health

 

PS3c Town Hall Session: Roles of Statisticians in Academia, Regulatory and Pharmaceuticals Industry

09/17/15
4:15 PM - 5:30 PM
Thurgood Marshall East

Organizer(s): Keaven Anderson, Merck Research Laboratories; Yulan Li, Novartis; Guoxing (Greg) Soon, FDA/CDER/OTS/OB; Yanming Yin, FDA/CDER/OTS/OB

Chair(s): Yulan Li, Novartis; Guoxing (Greg) Soon, FDA/CDER/OTS/OB

Panelist(s): Robert Califf, FDA; Sonia Davis, University of North Carolina at Chapel Hill; Tom Fleming, University of Washington; Jeffrey Helterbrand, Roche; Lisa LaVange, FDA; Ramachandran Suresh, GSK; Bob Temple, FDA; Janet Wittes, Statistics Collaborative

In the era of personalized medicine, with rapidly evolving science and technologies across multiple disciplines, the need to stress the "Roles of Statisticians" has become a pertinent and pressing issue that has not been adequately addressed in the realm of the "Roles of Statistics." Anticipating and incorporating scientific and technological advances, and the ability to bridge statistics to other fields, are necessary for successful collaboration of statisticians with medical professionals and scientists. There is no doubt that further development of analyses and methodologies is both fundamental and critical for practicing statisticians. This single aspect alone, however, is far from sufficient to ensure successful collaboration and leadership in multidisciplinary environments that include not only project teams and regulatory agencies but also medical and patient communities. Developing the ability to keep pace with advancing scientific knowledge and to incorporate new technologies is of parallel importance to developing communication and interpersonal skills. Being able to properly blend and apply these skills in a particular research context, while taking into consideration various practical, regulatory and ethical issues, is a "statistical art" that is at the core of the "Roles of Statisticians". This "statistical art" can take various forms, which encompass demands that statisticians "ask the right questions", "make investigators confront their own assumptions", "use both statistical reasoning and common sense", "have the ability to bridge between statistics and other disciplines", "show leadership in driving the integration of science and technology advances with innovative clinical development", etc. Playing these roles well also motivates the development of innovative and relevant statistical theory and methodology. 
In this Town Hall meeting, we aim to broaden our own view of the roles of statisticians as well as to learn the viewpoints held by medical research and public health personnel in this new era of personalized medicine. We will investigate barriers that prevent statisticians from performing more effectively, explore the journey of successful statisticians, and present some examples of critical roles of statisticians. The session will include a panel of well-known statistical and medical experts who will speak from their own experiences to address these issues.

Role of Statistician in the era of personalized medicine – perspectives from Academia, Regulatory and Pharmaceuticals Industry
Janet Wittes, Statistics Collaborative

Future evolution of the statistician’s role in drug development: What qualifies as “good statistical practice”?
Robert Califf, FDA

Role of Statisticians on both sides of drug development
Lisa LaVange, FDA

Core competencies for statisticians in pharmaceuticals industry
Jeffrey Helterbrand, Roche

 

PS3d CMC Session: Continuing Discussion: Statistical Considerations for Continuous Manufacturing Processes

09/17/15
4:15 PM - 5:30 PM
Thurgood Marshall West

Organizer(s): David Christopher, Merck; Helen Strickland, GSK; Yi Tsong, US FDA CDER

This is a continuation of the session from last year, extending discussion around the evolving topic of statistical considerations for continuous manufacturing. In addition to presenting an update on the current FDA perspective on this topic, the session will introduce perspective and experience from a non-pharmaceutical industry in which continuous manufacturing processes have been successfully used for many years.

CMC Session: Continuing Discussion: Statistical Considerations for Continuous Manufacturing Processes
Celia N. Cruz, CDER/Office of Pharmaceutical Science/IO

CMC Session: Control Strategy Implications for Continuous Manufacturing
Tara Scherder, Arlenda

 

PS3e Parallel Session: New Statistical Methods for Risk Assessment

09/17/15
4:15 PM - 5:30 PM
Lincoln 5

Organizer(s): Greg Ball, AbbVie; Mei-Ling Ting Lee, University of Maryland – College Park; Mengdie Yuan, FDA; Zhiwei Zhang, FDA/CDRH

Chair(s): Zhiwei Zhang, FDA/CDRH

An important aspect of medical treatment evaluation is quantifying the risks of undesirable events, such as adverse events, disease progression or death. Risk-related questions arise in different contexts (e.g., pre-market evaluation of safety/efficacy/effectiveness, post-market surveillance), and the relevant information may be collected as binary data, count data, time-to-event data, or a combination of several data types. It is important to keep the scientific context in mind in choosing a risk measure and an appropriate statistical method for a given application. This session gives several examples of newly developed statistical methods for risk assessment that are targeted at specific scientific questions.

An Extension of Likelihood Ratio Test-Based Method for Signal Detection in a Drug Class with Application to FDA's AERS Database
Yueqin Zhao, U.S. Food and Drug Administration

Rigorous Risk Assessment of Patient Subgroups in Late Phase Drug Development
Lei Shen, Eli Lilly and Company

Threshold Regression Models: with application in a multiple myeloma clinical trial
Mei-Ling Ting Lee, University of Maryland – College Park

 

PS3f Parallel Session: Practical Experiences Using Meta-analysis

09/17/15
4:15 PM - 5:30 PM
Lincoln 6

Organizer(s): Anna B Nevius, FDA/CVM; Steven V. Radecki, ASA; Lisa R Rodriguez, FDA/CVM

Chair(s): Virginia Recta, FDA/CVM

FDA has incorporated the use of systematic review and meta-analysis methods for the evaluation of drug safety and effectiveness. Most use of meta-analysis at FDA has been for safety evaluations. The Center for Veterinary Medicine (CVM) has successfully employed meta-analysis for establishing substantial evidence of effectiveness of FOLLTROPIN, a follicle-stimulating hormone (FSH). The session will include the following talks: (1) Clinical perspective on the use of meta-analysis in the approval of FOLLTROPIN. This talk will focus on a clinical review of the FOLLTROPIN approval. It will put the approval in perspective from CVM's viewpoint, discussing how existing literature and data from previous studies made a meta-analysis possible. The discussion will incorporate how existing pharmacology and toxicology information was used to design the target animal safety evaluation and predict potential adverse effects, but the focus will be on effectiveness. (2) Discussion of the FOLLTROPIN meta-analysis statistical design and implementation, incorporating clinical concerns. The focus of this talk is a review of the meta-analysis approach used for the FOLLTROPIN approval. Details of the statistical analysis for effectiveness will be discussed. (3) The melding of meta-analysis, systematic review, and regulatory concerns. The focus of this talk is a discussion of systematic review in general from the regulatory viewpoint. CVM's experiences with systematic review, including applying it to FOLLTROPIN and other products, will be discussed. Safety concerns may be discussed further.
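The combining step at the core of any such meta-analysis is inverse-variance weighting. A minimal fixed-effect sketch with made-up numbers (illustrative only, not CVM's actual FOLLTROPIN analysis):

```python
from math import sqrt

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study-level estimates.
    Returns the pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    return pooled, sqrt(1.0 / sum(weights))

# Two hypothetical effectiveness studies: the more precise one dominates.
pooled, se = fixed_effect_meta([0.5, 0.7], [0.1, 0.2])
```

A random-effects model, which adds a between-study variance component, is the usual next step when the studies are heterogeneous.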

Systematic Review, Meta-analysis and the Regulatory Process
View Presentation View Presentation Laura Hungerford, FDA Center for Veterinary Medicine

Use of literature to make regulatory decisions: the FOLLTROPIN example
Emily Smith, FDA Center for Veterinary Medicine

A Retrospective Meta-analysis on the Effectiveness of FOLLTROPIN-V for the Induction of Superovulation in Beef and Dairy Heifers and Cows
Steven V. Radecki, ASA

 

Fri, Sep 18

PS4a Parallel Session: Advancing Personalized Medicine Using Innovative Subgroup Identification Methods

09/18/15
8:30 AM - 9:45 AM
Thurgood Marshall North

Organizer(s): Rong (Rachel) Chu, Agensys, Inc.; Anna Sun, FDA; Qi Tang, AbbVie; Yun Wang, DB5/OB/OTS/CDER/FDA

Chair(s): Qi Tang, AbbVie

Personalized medicine is the future of drug development. However, our limited understanding of human biology is a big hurdle for the development of personalized medicines. To overcome this hurdle, several novel subgroup identification methods have been developed recently to better utilize the clinical data at hand to generate hypotheses about personalized treatments. There are many challenges faced by subgroup identification methods: variable selection bias, control of Type I error, multiplicity, predictive performance and confounding variables. Because of these challenges and the complexity of subgroup identification methods themselves, it is almost impossible to know which method works well under which scenario. Thus, in practice, for a given dataset, multiple methods need to be applied and the one with the best predictive performance chosen. The purpose of this session is to bring to the audience the most recent advances in subgroup identification methods and offer practical guidance for selecting appropriate methods. Topic 1: Statistical Methods for Subgroup Identification in Personalized Medicine Presenter: James J Chen, Director, FDA Abstract: Personalized medicine applies molecular technologies to identify genomic biomarkers in target patients for assigning more effective therapies and avoiding adverse events. Subgroup identification involves partitioning patients into subgroups defined by sets of biomarkers, where each subgroup corresponds to an optimal treatment. This presentation is an overview of statistical methods for subgroup identification in personalized medicine applications. Subgroup identification for treatment selection generally involves three components: 1) biomarker identification, 2) classifier development, and 3) performance assessment. Biomarker identification applies statistical procedures to identify predictive or prognostic biomarkers. 
Classifier development partitions patients into biomarker-positive and biomarker-negative subgroups for treatment optimization. Performance assessment evaluates classifier accuracy and includes a subgroup analysis of the treatment effect in the selected biomarker-positive patients. Statistical issues and challenges in subgroup identification and subgroup analysis will also be discussed. Topic 2: The GUIDE approach Presenter: Wei-Yin Loh, Professor, Department of Statistics, University of Wisconsin, Madison Abstract: In using a regression tree to identify subgroups with differential treatment effects, the first requirement is that the method be free of bias in the selection of variables to split the nodes. A second requirement is that the method can differentiate between the effects of prognostic and predictive variables. A third requirement is that it does so with good accuracy. This talk will discuss a recent extension of GUIDE that satisfies these requirements and compare its performance with several existing regression tree methods. Topic 3: Exploratory identification of biomarker signatures for patient subgroup selection in clinical drug development Presenter: Xin Huang, Manager, AbbVie Abstract: Mechanistic relationships between putative biomarkers, clinical baseline and related predictors, and clinical outcome (efficacy/safety) are usually unknown and must be deduced empirically from experimental data. Such relationships enable the implementation of a personalized medicine strategy in clinical trials to help stratify patients in terms of disease progression, clinical response, treatment differentiation, etc. The relationships between biomarkers and clinical baseline predictors and clinical outcome are typically stepwise or nonlinear, often requiring complex models to develop the prognostic and predictive signatures. 
For easier interpretation and implementation in the clinic, defining a multivariate biomarker signature in terms of thresholds on biomarker combinations is preferable. In this talk, we present some methods for developing such signatures in the context of continuous, binary and time-to-event endpoints. Further, to evaluate the future-sample performance of the biomarker signature, we propose the concept of predictive significance via cross-validation. Results from simulations and a case-study illustration will also be provided. Discussant: Lei Shen, Research Advisor, Eli Lilly. References: Loh, W. Y. (2002). Regression trees with unbiased variable selection and interaction detection. Statistica Sinica, 12(2), 361-386. Loh, W.-Y., He, X., and Man, M. (2014). A regression tree approach to identifying subgroups with differential treatment effects. arXiv.org, submitted for publication. Chen, G., Zhong, H., Belousov, A., and Devanarayan, V. (2014). A PRIM approach to predictive-signature development for patient stratification. Statistics in Medicine.
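To make the tree-based idea concrete, here is a deliberately naive threshold search for a differential treatment effect. It is a caricature of what GUIDE and similar methods do far more carefully (in particular, it ignores the variable-selection bias those methods are designed to remove), and all data are hypothetical:

```python
def best_subgroup_split(x, trt, y):
    """Exhaustively find the biomarker threshold that maximizes the
    difference in treatment effect (treated-minus-control mean outcome)
    between the two resulting subgroups."""
    def effect(idx):
        t = [y[i] for i in idx if trt[i] == 1]
        c = [y[i] for i in idx if trt[i] == 0]
        if not t or not c:
            return None  # effect undefined without both arms present
        return sum(t) / len(t) - sum(c) / len(c)

    best = None
    for cut in sorted(set(x))[:-1]:          # candidate thresholds
        left = [i for i in range(len(x)) if x[i] <= cut]
        right = [i for i in range(len(x)) if x[i] > cut]
        el, er = effect(left), effect(right)
        if el is None or er is None:
            continue
        if best is None or abs(el - er) > best[1]:
            best = (cut, abs(el - er))
    return best  # (threshold, |difference in treatment effects|)

# Hypothetical data: treatment benefit only among biomarker-high (x > 1) patients.
x   = [0, 0, 1, 1, 2, 2, 3, 3]
trt = [1, 0, 1, 0, 1, 0, 1, 0]
y   = [1, 1, 1, 1, 3, 1, 3, 1]
cut, gap = best_subgroup_split(x, trt, y)  # recovers the split at x <= 1
```

Because this search examines every threshold, the apparent effect difference at the chosen split is optimistically biased; that is exactly the selection problem the session's methods address.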

The GUIDE regression tree approach
Wei-Yin Loh, University of Wisconsin Madison

Statistical Methods for Subgroup Identification in Personalized Medicine
James J. Chen, National Center for Toxicological Research, FDA

Exploratory identification of biomarker signatures for patient subgroup selection in clinical drug development
Xin Huang, AbbVie

 

PS4b Parallel Session: Modeling and simulation at the FDA and in industry: collaboration among statisticians, modelers and pharmacometricians

09/18/15
8:30 AM - 9:45 AM
Thurgood Marshall South

Organizer(s): Alan Hartford, AbbVie; Shiowjen Lee, FDA; Cristiana Mayer, Johnson & Johnson; Misook Park, FDA

Chair(s): Cristiana Mayer, Johnson & Johnson

The increasing costs of drug development force the industry and regulators to look at innovation more aggressively. The role of modeling and simulation (M&S) has gained great momentum and enthusiastic interest from all stakeholders in recent years. This session will bring to the table the challenges and solutions for promoting model-based drug development and model-informed regulatory assessment to a new, higher level. Quantification of risk, statistical and mathematical modeling, and virtual trial simulation are all important tools in advancing the characterization of dose-response profiles, PK/PD relationships and benefit/risk ratios. This effort requires a more effective and intense collaboration between industry and regulators, as well as among different groups within the quantitative sciences, such as statisticians, modelers and pharmacometricians. All parties are motivated to improve and intensify the collaboration. The session will highlight a case study from a very recent M&S approach in a regulatory submission, paired with the model-informed regulatory assessment leading to the approval. The perspectives, concerns and recommendations from the different groups involved will be emphasized. Confirmed speakers are José Pinheiro (Johnson & Johnson) and Vikram Sinha (FDA); a third speaker from FDA will be confirmed shortly.

Incorporation of stochastic engineering models as prior information in Bayesian medical device trials and post-market surveillance
Tarek Haddad, Medtronic

Model-Based Bridging of Dose-Regimens Supporting Drug Approval: A Case Study of Cross Functional Modeling Collaboration
José C. Pinheiro, Johnson & Johnson

Modeling and simulation at the FDA and in industry: collaboration among statisticians, modelers and pharmacometricians
Vikram Sinha, FDA

 

PS4c Parallel Session: Messy Data Issues in Evaluation of Bioequivalence

09/18/15
8:30 AM - 9:45 AM
Thurgood Marshall East

Organizer(s): Charles DiLiberti, Montclair Bioequivalence Services, LLC; Stella Grosser, FDA; Mark Shiyao Liu, Mylan Inc; Wanjie Sun, FDA

Chair(s): Julia Jingyu Luan, FDA/CDER

Panelist(s): Shein-Chung Chow, Duke University; Pina D'Angelo, Novum Pharmaceutical Research Services; Stella Grosser, FDA; Yi Tsong, US FDA CDER

In 2013, generic drugs accounted for 86% of the market share in the U.S. However, statistical research on bioequivalence tests, especially on how to handle complications such as missing data and messy data, has been lacking. Bioequivalence studies can be pharmacokinetic (PK) or clinical end-point studies. With the institution of the Generic Drug User Fee Act (GDUFA) in 2012, it has become a pressing task to study the impact of data complications on statistical conclusions and the robustness of the current statistical methods in the face of these complications. Furthermore, robust statistical methods need to be proposed to handle complications in bioequivalence tests. This session will include two presentations followed by a panel discussion. Speakers and panelists from the agency, industry and academia will present and discuss issues and statistical approaches in the area of bioequivalence.
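For PK bioequivalence, the workhorse analysis is the two one-sided tests (TOST) procedure: average bioequivalence is concluded when the 90% confidence interval for the geometric mean test/reference ratio falls within 0.80 to 1.25. A simplified sketch with hypothetical data; it uses a normal critical value purely to stay within the standard library, where a real analysis would use the t distribution and the error term from the crossover model:

```python
from math import exp, sqrt
from statistics import NormalDist, mean, stdev

def tost_be(log_ratios, alpha=0.05, lo=0.80, hi=1.25):
    """TOST sketch for average bioequivalence on per-subject
    log(test/reference) PK values (e.g. log AUC). Returns the
    back-transformed 90% CI and whether it lies inside the
    [0.80, 1.25] equivalence limits."""
    n = len(log_ratios)
    se = stdev(log_ratios) / sqrt(n)
    z = NormalDist().inv_cdf(1 - alpha)  # simplification: z, not t
    m = mean(log_ratios)
    ci = (exp(m - z * se), exp(m + z * se))
    return ci, lo <= ci[0] and ci[1] <= hi

# Hypothetical log-ratios from 8 subjects, tightly centered near 0:
ci, equivalent = tost_be([0.02, -0.05, 0.04, 0.00, -0.01, 0.03, -0.02, 0.01])
```

The missing- and messy-data questions the session raises enter precisely here: dropped subjects or excluded PK profiles change `log_ratios`, and hence whether the interval stays inside the limits.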

Missing Data and Non-compliance Data in Clinical End-point Equivalence Studies
Wanjie Sun, FDA

Messy Data in PK Bioequivalence Studies
Charles Bon, Biostudy Solutions, LLC

 

PS4d Parallel Session: Recent Innovations in the Development and Application of Statistical Designs for Early-Phase Oncology Trials

09/18/15
8:30 AM - 9:45 AM
Thurgood Marshall West

Organizer(s): Adam Hamm, Theorem Clinical Research, Inc.; Yuan Ji, NorthShore University HealthSystem/University of Chicago; Hui Zhang, FDA

Chair(s): Sue-Jane Wang, US FDA

In this session, we discuss innovative adaptive designs in early-phase oncology, their statistical aspects, and challenges in implementing them. Focus is placed on innovation in both new methodology development and applications in practice. We also evaluate and establish prerequisites for choosing designs that are most efficient in attaining the goals of the trial. Hay et al. (2014, Nature Biotech.) reported a dismal success rate in oncology drug development compared to non-oncology diseases, with early-phase oncology trials exhibiting the largest deficiencies. We assemble a panel of academic and industry speakers, chaired by an expert from FDA, to present and discuss some recent breakthroughs aimed at greatly improving the design and conduct of early-phase oncology studies. Novel statistical designs, such as dose finding in two treatment cycles based on efficacy and toxicity, will be presented as innovations in methodological research, and real-life experiences in applying the mTPI design (Ji and Wang, 2013, JCO), with comparisons to classical designs such as 3+3 and CRM (O'Quigley et al., 1990, Biometrics), will be shared. Discussions will focus on requirements for implementing the various methods to ensure efficiency and reliability. Theoretical comparisons between different analysis methods, including optimal sample sizes among the methods, will be described.
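As a point of reference for the comparisons above, the classical 3+3 rule can be written down exactly; this is the standard textbook escalation rule at a single dose level, not any speaker's proposal:

```python
def three_plus_three(n_treated, n_dlt):
    """Decision rule of the classical 3+3 dose-escalation design at one
    dose level (simplified: ignores whether the next dose exists or was
    already tried).

    n_treated: patients treated at the current dose (3 or 6)
    n_dlt:     dose-limiting toxicities observed among them
    """
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate"
        if n_dlt == 1:
            return "expand to 6"   # treat 3 more at the same dose
        return "de-escalate"       # 2+ DLTs: dose too toxic
    if n_treated == 6:
        return "escalate" if n_dlt <= 1 else "de-escalate"
    raise ValueError("3+3 cohorts have 3 or 6 patients")
```

Model-based designs like CRM and interval designs like mTPI replace this fixed rule with decisions driven by an estimated dose-toxicity curve or by posterior probabilities over toxicity intervals, which is where the efficiency gains discussed in the session come from.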

Utility-Based Bayesian Adaptive Designs for Early Phase Clinical Trials
Peter Thall, Univ. of Texas MD Anderson Cancer Center

Implementation of innovative adaptive designs in early phase oncology trials: from theory to practice
Inna Perevozskaya, Pfizer

Practical Considerations for Adaptive Dose Finding in Phase I Oncology Studies Using Toxicity Probability Interval Approaches
Linda Sun, Merck & Co.

Discussant(s): Lei Nie, FDA/CDER; Lindsay Renfro, Mayo Clinic

 

PS4e Parallel Session: Meeting the Ebola Challenge

09/18/15
8:30 AM - 9:45 AM
Lincoln 5

Organizer(s): Deepak B. Khatry, MedImmune; James Lymp, Genentech; Dionne Price, CDER, FDA; Estelle Russek-Cohen, CBER FDA

Chair(s): Deepak B. Khatry, MedImmune

The Ebola outbreak in West Africa has created multiple public health challenges and an opportunity for statisticians to respond to a public health crisis. We have neither an approved vaccine for Ebola nor an FDA-approved therapeutic for its treatment. The response on the part of agencies within Health and Human Services, the World Health Organization, and regulatory agencies within Africa, along with biopharmaceutical companies, has been unprecedented. We plan on bringing together experts who have weighed in on study designs, considering issues such as the ethics of randomized trials, supply constraints, multiple candidate products, and the need to get products to people rapidly. Some designs proposed to date specifically in the context of Ebola have included platform trials for therapeutic products, which allow multiple therapies to be tested in the same trial, and, for vaccines, several forms of cluster designs. Because of the declining epidemic, data from animals may be used to inform the evaluation of medical products for Ebola. The session will focus on both therapeutic trials and vaccine trials. Our speakers include Dr. Lori Dodd (NIAID), who will talk about a platform trial for Ebola therapeutics and the impetus behind the design, and Dr. Ivan Chan (Merck), who will talk about the challenges of bridging data from non-human primates to people when defining an immunogenicity endpoint. Discussants will include Estelle Russek-Cohen (CBER FDA) and Dionne Price (CDER FDA).

Statistical challenges in developing immune correlates to support licensure of Ebola vaccines
Ivan S.F. Chan, Merck Research Laboratories

The Ebola Medical Counter Measures Trial: A Flexible Randomized Clinical Trial for Evaluating Therapeutics for Ebola Disease
Lori E Dodd, NIAID

Discussant(s): Dionne Price, CDER, FDA; Estelle Russek-Cohen, CBER FDA

 

PS4f Parallel Session: Minimizing bias in medical device trials through study design and data analysis

09/18/15
8:30 AM - 9:45 AM
Lincoln 6

Organizer(s): Chia-Wen Ko, FDA; Laura Lu, CDRH; Theodore Lystig, Medtronic, Inc.

Chair(s): Laura Lu, CDRH

Randomized and well-controlled studies are the gold standard in clinical trial practice. However, there are situations in device studies where randomization is impossible, difficult, or potentially inappropriate. For example, investigators may face an ethical dilemma in recommending a randomized study to subjects when they believe that the different interventions in the study are not equally safe and effective (i.e., they lack clinical equipoise). Also, due to the implantation and operation procedures of some devices, it can be impossible to keep the patients, clinicians, or evaluators blinded/masked. Improper trial monitoring could also lead to the breaking of blinding. Lack of randomization and blinding can lead to selection and operational bias, and could adversely impact the level of evidence provided by the study and the ability to rely on the data as valid. In this session, we will focus on approaches to minimizing bias in device trials through adequate study design, monitoring, and data analysis.
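A common analytic remedy in non-randomized device studies is the propensity score: the probability of receiving the device given baseline covariates, usually fit by logistic regression. A self-contained sketch with one covariate and hypothetical data, using plain gradient ascent in place of standard logistic-regression software:

```python
from math import exp

def fit_propensity(x, treated, lr=0.5, iters=3000):
    """Fit P(treatment | x) = 1 / (1 + exp(-(b0 + b1 * x))) by gradient
    ascent on the logistic log-likelihood; returns the fitted score."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, ti in zip(x, treated):
            resid = ti - 1.0 / (1.0 + exp(-(b0 + b1 * xi)))
            g0 += resid
            g1 += resid * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return lambda xi: 1.0 / (1.0 + exp(-(b0 + b1 * xi)))

# Hypothetical scenario: older patients (standardized age) were more
# likely to receive the new device, confounding a naive comparison.
age = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
device = [0, 0, 0, 1, 0, 1, 1, 1]
ps = fit_propensity(age, device)
# Matching or stratifying on ps(age) then balances age across groups.
```

In practice the model includes all relevant baseline covariates, and, as the first talk's title suggests, the score is designed objectively, before outcomes are examined.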

Good Practice of Objective Propensity Score Design for Premarket Nonrandomized Medical Device Studies: A discussion with examples
Yunling Xu, FDA/CDRH

Consideration of trial design comparing RCT to single arm study when abundant patient level historical data are available
Peter Lam, Boston Scientific

Balance Reduction for Observational Studies Using Propensity Scores
Thomas Ezra Love, Case Western Reserve University

 

PS5a Parallel Session: Recent developments and considerations for personalized medicine: Follow-on “Me Too” companion diagnostic devices

09/18/15
10:00 AM - 11:15 AM
Thurgood Marshall North

Organizer(s): Kuang-Lin He, Fujirebio Diagnostics, Inc.; Yuying Jin, FDA/CDRH; Laura M Yee, FDA, CDRH

Chair(s): Qin Li, FDA/CDRH

Diagnostic testing is increasingly used to select patients for corresponding therapeutic products, a medical model that involves both therapeutic products and companion diagnostic devices. With the rapid development of personalized medicine, and a number of FDA-approved companion diagnostics for use with specific corresponding therapeutic products, opportunities exist for device companies to develop follow-on companion diagnostic devices, also called "Me Too" companion diagnostic devices. These follow-on devices seek the same therapeutic indication as an FDA-approved companion diagnostic. There are many challenges associated with the validation of these devices, including patient sample availability, the lack of a therapeutic partner, etc. In this session, we will discuss study design and statistical considerations for "Me Too" companion diagnostics from the perspectives of FDA researchers, academia and industry.

Recent developments and considerations for personalized medicine: Follow-on “Me Too” companion diagnostic devices
Xiao-Hua Zhou, University of Washington

Building Bridges for Companion Diagnostics
James Roger Ranger-Moore, Roche Tissue Diagnostics

Study design and statistical considerations/challenges for “Me Too” companion diagnostics
Meijuan Li, US Food and Drug Administration

 

PS5b Parallel Session: Innovative Designs and Advanced Statistical Methodologies for Rare Disease Clinical Trials

09/18/15
10:00 AM - 11:15 AM
Thurgood Marshall South

Organizer(s): Yeh-Fong Chen, US FDA; Laura Lee Johnson, FDA; Hope Knuckles, Abbott Laboratories; Jeffrey Krischer, University of South Florida

Chair(s): Yeh-Fong Chen, US FDA

Even though the prevalence of each rare disease is low, roughly 30 million Americans have been affected by one or more of the nearly 7,000 rare diseases. For most rare diseases, it can be challenging to conduct clinical trials with enough power to detect the treatment effect. To bring a breakthrough therapy to the market early, it is important to find efficient approaches to utilizing individual patient data (e.g., improved study designs and sound statistical methods). Although it may be necessary to adjust the general standard for clinical trials in common diseases when it is applied to rare diseases, it is not clear how the general standard should be adjusted to ensure both the quality of the trials and the efficacy of the approved drugs. Many workshops have been run in recent years to accelerate the development of therapies for rare diseases. Researchers from industry, academia and regulatory agencies are working diligently to develop innovative trial designs and statistical methodologies that can be applied in this area. Nevertheless, more research is needed to reach a consensus. Emerging topics include the use of two- or three-stage enrichment designs to target specific types of patient populations, and the use of historical controls to conduct trials efficiently, reducing the number of subjects recruited and easing ethical concerns. In both cases, Bayesian approaches have been proposed; however, their use in applications is not yet widespread. Three speakers from different organizations have been invited to present their successful work in the rare disease area. This session provides a platform for researchers working in this area to discuss the challenges they faced, share the lessons they learned, and offer possible solutions.

A Sequential Multiple Assignment Randomized Phase 2 Trial for Rare Diseases
Roy Tamura, University of South Florida

Challenges Encountered in Conducting Rare Disease Clinical Trials
Min Min, FDA

Interim Futility Analysis based on Linear Regression with Longitudinal Endpoint in A Rare Disease Indication
GQ Cai, GSK

 

PS5c Parallel Session: Role of statisticians in CDISC data standards from developers to users

09/18/15
10:00 AM - 11:15 AM
Thurgood Marshall East

Organizer(s): Deborah Bauer, Sanofi; Peter Mesenbrink, Novartis Pharmaceuticals Corporation; Stephen Wilson, FDA/CDER/OTS/OB/DBIII; Weiya Zhang, FDA

Chair(s): Deborah Bauer, Sanofi

Panelist(s): Deborah Bauer, Sanofi; Chris Holland, Amgen; Susan Kenny, Maximum Likelihood, Inc.; Stephen Wilson, FDA/CDER/OTS/OB/DBIII

High-quality data is essential for clinical trial design, statistical inference and decision making. Substantial efforts have been dedicated across industry, academia, and regulatory agencies to develop data standards aligned with the models defined by the Clinical Data Interchange Standards Consortium (CDISC). CDISC standards support the clinical and non-clinical research process from protocol design through data collection, data exchange, data management, data analysis and reporting. The Prescription Drug User Fee Act (PDUFA V) set goals for FDA to develop guidance for industry on the use of CDISC data standards for the electronic submission of study data in applications [1]. FDA has been actively working on guidance requiring study data in conformance with CDISC standards and developing distinct data standards for therapeutic areas through a public process that allows for stakeholder input via open standards development organizations. Industry sponsors have also been working to develop therapeutic area data standards aligned with the guidelines developed by CDISC and the FDA. With these standards in place and many under development, how are we as statisticians employing these standards? What are the advantages and challenges of implementing them? In this session, we will invite speakers from industry and regulatory agencies to share their experience with developing and implementing CDISC data standards, and how these data standards facilitate regulatory reviews and clinical research. [1] http://www.fda.gov/ForIndustry/DataStandards/StudyDataStandards/default.htm

The Good, the Bad, and the Ugly: The Pharmaceutical Industry Perspective with CDISC Data Standards
Peter Mesenbrink, Novartis Pharmaceuticals Corporation

CDISC Data Standards: A Statistical Reviewers’ Perspective
Weiya Zhang, FDA

 

PS5d Parallel Session: ICH E9 R1 Defining the Estimand and Sensitivity Analysis

09/18/15
10:00 AM - 11:15 AM
Thurgood Marshall West

Organizer(s): Brent Burger, PAREXEL International; Estelle Russek-Cohen, CBER FDA ; Guoying Sun, FDA

Chair(s): Joan Buenconsejo, AstraZeneca

Panelist(s): Frank Bretz, Novartis; Craig Mallinckrodt, Eli Lilly; Dan Scharfstein, Johns Hopkins

ICH E9 is a key guidance on designing clinical trials and is recognized by multiple regulatory bodies. A new addendum is being written that deals with defining the estimand and sensitivity analysis, preferably in advance of conducting a study. Motivation for the ICH addendum comes from the NRC report on missing data issued in 2010, but also from a desire to improve the way we design clinical trials in order to properly assess treatment benefit. Issues such as how we deal with missing data will clearly be part of the discussions, but so will the sensitivity of the interpretation of trial outcomes to assumptions made at the time the study was planned. The session will have two speakers who are members of the new ICH group, namely Tom Permutt (FDA) and Devan Mehrotra (Merck). It will be followed by a panel including our speakers and three other panelists: Dan Scharfstein (Johns Hopkins) and Craig Mallinckrodt (Eli Lilly), members of the 2010 NRC committee on missing data, and Frank Bretz (Novartis), a European member of the ICH working group.

Tackling Missing Data: Control-Based Quantile Imputation and Tipping Point Analysis
Devan V Mehrotra, Merck Research Laboratories

Practical Causal Estimands
View Presentation Thomas Permutt, U.S. Food & Drug Admin.

 

PS5e Parallel Session: Using Historical Data in Clinical Trials: Synthesis of Truth with Uncertainty

09/18/15
10:00 AM - 11:15 AM
Lincoln 5

Organizer(s): Margaret Gamalo, CDER, Food and Drug Administration; Pandurang M Kulkarni, Eli Lilly; Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

Chair(s): Satrajit Roychoudhury, BDM Oncology, Novartis Pharmaceuticals

In recent years, Bayesian design and analysis have generated extensive discussion in the clinical trials literature. For Bayesian methods, decisions have to be made at the design stage regarding the "prior belief". When good prior information exists, a Bayesian approach enables this information to be incorporated into the statistical analysis, and there is intrinsic interest in leveraging historical data as prior information for efficient design. Clinical trials including historical control information are used in earlier phases of drug development (Neuenschwander et al., 2010; Trippa, Rosner and Muller, 2012; French, Thomas and Wang, 2012; Hueber et al., 2012), occasionally in phase III trials (French et al., 2012), and also in special areas such as medical devices (FDA, 2010a; Campbell, 2011; Chen et al., 2011), orphan indications (Dupont and Van Wilder, 2011), and pediatric studies (Berry, 1989). Non-inferiority trials also rely on historical information, and hence share characteristics with historical control trials (FDA, 2010b). "Enriching" the control arm of a current trial with information from historical trial(s) holds the promise of more efficient trial design: it allows smaller trials or unequal randomization (placing more subjects on the treatment arm), and it increases the amount of information on both the efficacy and safety of the current novel treatment, including important secondary endpoints. Borrowing historical information can further facilitate analysis of important subgroups. The trade-offs of this approach include increased power and decreased sample size, but also potential effects on type I error. There are many ways of borrowing from historical data. Generally, all these methods act to "pull" or "shrink" estimates from the current control arm toward point estimates from the historical study(s). 
Moreover, these methods generally have parameters governing the degree of borrowing, which can be set by the user to borrow either extensively or minimally; in practice, guidance on setting these parameters is vital. Yet these methods for borrowing historical information are not well understood in terms of their benefits, effects, and regulatory ramifications. This session will focus on different areas of incorporating historical information in clinical trials, along with their advantages and disadvantages. It will feature four prominent participants (three speakers and one discussant) from industry, academia, and a regulatory agency, discussing borrowing of information in different frameworks. Speakers: Dr. Manuela Buzoianu (FDA), Dr. Beat Neuenschwander (Novartis), Dr. Jason Connor (Berry Consultants). Discussant: Dr. Sujit Ghosh (SAMSI).
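As a toy illustration of how a borrowing parameter "pulls" the control estimate toward the historical data, the sketch below implements a simplified power-prior-style estimate for a normal control-arm mean with known variance. The function name, simulated data, and discount parameter `a0` are all hypothetical and not drawn from any of the methods cited above:

```python
import numpy as np

def power_prior_control_estimate(y_current, y_hist, sigma2, a0):
    """Posterior mean/variance of a control-arm mean under a simplified
    power prior with known variance sigma2. The discount a0 in [0, 1]
    governs borrowing: a0 = 0 ignores the historical data entirely,
    a0 = 1 pools it fully; intermediate values shrink the current-control
    estimate part-way toward the historical mean."""
    w_cur = len(y_current) / sigma2      # precision from current data
    w_hist = a0 * len(y_hist) / sigma2   # discounted historical precision
    mean = (w_cur * np.mean(y_current) + w_hist * np.mean(y_hist)) / (w_cur + w_hist)
    return mean, 1.0 / (w_cur + w_hist)

rng = np.random.default_rng(1)
y_hist = rng.normal(0.0, 1.0, size=200)  # historical control data
y_cur = rng.normal(0.3, 1.0, size=50)    # current control, with some drift
for a0 in (0.0, 0.5, 1.0):
    m, v = power_prior_control_estimate(y_cur, y_hist, 1.0, a0)
    print(f"a0={a0}: posterior mean {m:.3f}, posterior variance {v:.4f}")
```

The drift scenario shows the trade-off the abstract describes: larger `a0` shrinks the estimate toward the historical mean and reduces its variance, at the price of bias (and hence type I error inflation) when the historical and current controls differ.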

On the use of co-data in clinical trials
Beat Neuenschwander, Novartis Pharma AG

A Prospective Bayesian Adaptive Trial with Hierarchical Borrowing from a Prior Single Arm Study
View Presentation Jason T Connor, Berry Consultants

Incorporation of Prior Information in a Clinical Trial: A Reviewer's Perspective
View Presentation Manuela Buzoianu, FDA/CDRH

Discussant(s): Sujit K Ghosh, Department of Statistics, NC State University

 

PS5f Parallel Session: Analysis and Interpretation of Human Abuse Potential Study Data

09/18/15
10:00 AM - 11:15 AM
Lincoln 6

Organizer(s): Yahui Hsueh, FDA; Hope Knuckles, Abbott Laboratories; Chen Ling, FDA; Kerri Schoedel, Altreos Research Partners, Inc

Chair(s): Wei Liu, FDA/CDER/OTS/OB/DBVI; Marta Sokolowska, Grunenthal USA

Human abuse potential studies are randomized crossovers that use multiple measures of subjective effects, administered at multiple timepoints, at multiple doses, over multiple periods of study. Their designs include sensitized subjects with enrichment for "responders", as well as highly sensitive endpoints (a "canary in the coal mine" approach). These studies are often geared toward excluding false negatives (i.e., concluding no abuse liability when it exists), usually at the expense of false-positive results (concluding abuse liability when there isn't any). The use of multiple doses and the non-linear dose-responses (inverted U-shape) associated with drugs of abuse can present additional challenges in interpretation, as can the use of subjective measures, in the form of high variability and distributional violations. These factors can make analysis and interpretation of these data difficult for statisticians and non-statisticians alike. Are we satisfied with the "I'll know it when I see it" approach to evaluating abuse liability? The multiple endpoints and sometimes exploratory nature of these studies may lead us to succumb to the natural temptation of cherry-picking. Are we better off using a confirmatory approach? How do we know we're using the right tests for our data? What are some new approaches? This session discusses the complexities associated with analysis and interpretation of abuse liability data, and opens a dialogue between statisticians and non-statisticians about addressing challenges inherent in these data. This topic has been discussed at only a few prior meetings; only one of these was a statistical conference, in 2011. In addition, FDA is currently revising two guidance documents (Assessment of Abuse Potential of Drugs, 2010, and Abuse-Deterrent Opioids - Evaluation and Labeling, 2013). 
Discussion of the best statistical approaches to analyzing these data among statisticians from both the FDA and the pharmaceutical industry is extremely important. We also urge statisticians on both sides to get involved in conducting research in this important area.

Previous presentations:
Ling Chen: 1) presentation at the ISCTM 11th annual meeting (February 2015) related to the revision of the 2010 FDA guidance; 2) "Analysis of Data from Human Abuse Potential Studies," Public workshop: Science of Abuse Liability Assessment, Washington, DC, 2011; 3) "Abuse Potential Evaluation on Subjective Responses to Drug Liking VAS - Issues and Strategies," March 1, 2011, ISBS, Berlin, Germany.
Kerri Schoedel: "Defining Abuse Potential in Clinical Studies," March 1, 2011, ISBS, Berlin, Germany.

Proposed speakers:
Ling Chen, Food and Drug Administration; (301) 796-0864; ling.chen@fda.hhs.gov; proposed talk: Statistical Issues in Design and Analysis of Human Abuse Liability Studies
Kerri A. Schoedel, PhD, Altreos Research Partners, Inc.; 1-416-434-6921; kschoedel@altreos.com; proposed talk: Defining the Liability in Abuse Liability: Practical Approaches to a Complex Analytical Problem
Susan E. Spruill, PStat®, Applied Statistics and Consulting; 828-467-9184; sspruill@appstatsconsulting.com; proposed talk: Statistics: the Real Abuse Liability

Statistical Issues in Design and Analysis of Human Abuse Potential Studies
Ling Chen, FDA

Defining the Liability in Abuse Liability: Practical Approaches to a Complex Analytical Problem
Naama Levy-Cooperman, Altreos Research Partners

Statistics: The Real Abuse Liability
View Presentation Susan E Spruill, Applied Statistics and Consulting

 

PS6a Parallel Session: Statistical considerations of delayed treatment effects in cancer vaccine trials

09/18/15
12:45 PM - 2:00 PM
Thurgood Marshall North

Organizer(s): Marc Buyse, IDDI Inc.; Jonathan D Norton, MedImmune; Shenghui Tang, FDA; Zhenzhen Xu, CBER/FDA

Chair(s): Shenghui Tang, FDA

In a relatively short period of time, therapeutic cancer vaccines have entered the landscape of cancer therapy. In contrast to conventional chemotherapeutic drugs, these novel agents stimulate the patient's own immune response to combat cancer. This indirect mechanism of action poses the possibility of a delayed onset of clinical effect, due to the time required to mount an effective immune response and the time for that response to translate into an observable clinical effect. Conventional design and analysis methods based on the log-rank test, however, often ignore this delay, resulting in underestimated sample sizes with insufficient power and failing to detect the potential effects of the vaccines. More innovative statistical methodologies are needed to address the unique characteristics of therapeutic cancer vaccines in the design and analysis of such trials. This session will feature experts from industry, academia, and regulatory arenas presenting their research on the design and analysis of cancer vaccine trials. Three major topics will be addressed: (1) sample size calculation accounting for delayed treatment effects in cancer vaccine trials; (2) proper analysis of cancer vaccine trials with delayed treatment effects; (3) statistical challenges in cancer vaccine trial development from a regulatory perspective.
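The power loss described above can be illustrated with a minimal simulation, assuming piecewise-exponential survival in which the hazard ratio applies only after a delay. All parameters (hazard, hazard ratio, delay, sample size, follow-up) are illustrative and not taken from any trial:

```python
import numpy as np

def simulate_arm(n, hazard, rng, delay=0.0, hr=1.0):
    """Piecewise-exponential event times: baseline `hazard` before `delay`,
    hazard*hr afterwards (hr < 1 means a benefit that starts late)."""
    u = rng.uniform(size=n)
    s_delay = np.exp(-hazard * delay)  # survival probability at the delay
    return np.where(
        u > s_delay,                                  # event occurs before the delay
        -np.log(u) / hazard,
        delay - np.log(u / s_delay) / (hazard * hr),  # event occurs after the delay
    )

def logrank_z(t_ctrl, t_trt, tau):
    """Two-sample log-rank z-statistic with administrative censoring at tau."""
    times = np.concatenate([t_ctrl, t_trt])
    trt = np.concatenate([np.zeros(len(t_ctrl)), np.ones(len(t_trt))])
    event = times <= tau
    times = np.minimum(times, tau)
    o_minus_e, var = 0.0, 0.0
    for i in np.flatnonzero(event):
        at_risk = times >= times[i]
        n_tot = at_risk.sum()
        n_trt = (at_risk & (trt == 1)).sum()
        o_minus_e += trt[i] - n_trt / n_tot  # observed minus expected treatment events
        var += (n_trt / n_tot) * (1 - n_trt / n_tot)
    return o_minus_e / np.sqrt(var)

def power(delay, nsim=200, n=150, tau=24.0, seed=7):
    """Empirical power of a one-sided log-rank test at level 0.025."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(nsim):
        ctrl = simulate_arm(n, hazard=0.08, rng=rng)
        trt = simulate_arm(n, hazard=0.08, rng=rng, delay=delay, hr=0.6)
        hits += logrank_z(ctrl, trt, tau) < -1.96
    return hits / nsim

print("power, immediate effect:", power(delay=0.0))
print("power, 6-month delay:  ", power(delay=6.0))
```

Events accrued before the delay contribute variance but no drift to the log-rank statistic, which is why the same hazard ratio yields markedly lower power when its onset is delayed.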

Sample size and power calculation in therapeutic cancer vaccine trials with delayed treatment effect
Zhenzhen Xu, CBER/FDA

Power Calculation for Log-rank Test under a Non-proportional Hazards Model
View Presentation Daowen Zhang, North Carolina State University

Sample Size and Power of Survival Trials in Group Sequential Design with Delayed Treatment Effect
View Presentation Jianliang Zhang, MedImmune, LLC

 

PS6b Parallel Session: Designing Bioequivalence Studies for the Evaluation of Generic Drugs: Addressing Challenges Arising from Different Sources of Variability

09/18/15
12:45 PM - 2:00 PM
Thurgood Marshall South

Organizer(s): CV Damaraju, Janssen Research and Development, LLC; Susan Huyck, Merck; Fairouz Makhlouf, FDA/CDER; Elena Rantou, FDA/CDER

Chair(s): Fairouz Makhlouf, FDA/CDER

Speakers: Shein-Chung Chow, Duke University School of Medicine; Charles DiLiberti, Montclair Bioequivalence Services, LLC; Elena Rantou, FDA/CDER. Discussant: Stella Grosser, FDA/CDER. Chair: Fairouz Makhlouf, FDA/CDER.

Session Abstract: When designing bioequivalence (BE) studies for generic drugs, different issues arise as a result of the presence of high variability. Between-subject variability is often considerable in parallel, crossover, or even simple paired-sample designs. Within-reference variability characterizes highly variable drugs and is used as a scaling factor for determining BE. The criteria for assessing BE are based upon the type and magnitude of observed variability, the study design, and the viable sample sizes. Such choices are crucial, as they influence the sensitivity of the test to meaningful differences (consumer risk) and affect the chance of rejecting good products (producer risk). Different cases of generic drug studies, such as solid oral dosage, long-acting injectable, and locally-acting dosage forms, will be discussed. For these forms, a variety of statistical techniques will be presented, such as ANCOVA with a special covariate and a modified scaled-average bioequivalence criterion when the data set follows a paired-sample design. Finally, a recently developed method using a scaled criterion for assessing drug interchangeability will be proposed, and a numerical study will demonstrate its use for generic products.

Presentation 1: An ANCOVA Approach to Reducing the Residual Variance and Sample Sizes Required for Parallel Design Bioequivalence Studies. Abstract: Parallel design BE studies, which are commonly used for long half-life drugs, often pose substantial challenges for sponsors in that they employ no replication and thus cannot utilize the popular reference-scaled average bioequivalence method to control sample size. Furthermore, their sample sizes are dictated by the between-subject variance, which is often substantially larger than the within-subject variance that dictates the sample sizes of crossover design studies. As a result, parallel design studies may easily require hundreds of subjects to achieve reasonable power. While this problem may arise for some solid oral dosage forms, it is also common for long-acting injectable formulations. Under some circumstances, the terminal elimination rate constant (kel) is strongly correlated with the primary pharmacokinetic (PK) parameters AUC and Cmax and yet is independent of the treatment effect. Incorporating ln(kel) as a covariate in an ANCOVA provides an opportunity to dramatically reduce the residual variance, and hence the sample size, for parallel design studies. The validity conditions and practical considerations of this model will be illustrated with simulations and case studies.

Presentation 2: Assessing Bioequivalence of Locally-Acting Generic Products: Statistical Controversies and Arising Issues. Abstract: To determine whether a generic product is bioequivalent to its reference listed drug, a comparison of the test and reference distributions of a pharmacokinetic parameter is necessary. For locally-acting dosage forms, the available data often follow a paired-sample design, where the appropriate criterion for determining bioequivalence is the two one-sided tests (TOST) confidence interval. Such approaches, although theoretically correct, cannot always be applied, as these data sets introduce challenges like lack of subject availability and/or unusually high within- and between-subject variability. Concerns arising from the use of a BE criterion relate to the sensitivity of the test and its power, which is affected by various factors like the parameter margin, the sample size, the within-subject variance, and the regulatory constants. These concerns will be discussed in reference to a novel in-vitro permeation test. The data will be analyzed using regular average BE and different forms of reference-scaled average BE, demonstrating advantages and imposed risks.

Presentation 3: A New Proposed Scaled Criterion for Drug Interchangeability. Abstract: Criteria for assessment of BE for generic drug products are reviewed. These include a criterion for average BE, criteria for population and individual BE (IBE), and a recently proposed scaled average bioequivalence (SABE) criterion for highly variable drug products. In addition, following ideas similar to IBE and the development of SABE, a new criterion for assessment of drug interchangeability is proposed. A numerical study was conducted to illustrate the use of the proposed criterion.
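To make the TOST and reference-scaled criteria discussed above concrete, here is a minimal sketch on the log scale. The scaling constant ln(1.25)/0.25 and the within-reference SD cutoff of about 0.294 (roughly 30% CV) follow common regulatory convention for highly variable drugs, but the function names and example numbers are illustrative, and a normal critical value stands in for the t quantile:

```python
import math
from statistics import NormalDist

Z90 = NormalDist().inv_cdf(0.95)  # one-sided 5% point (normal stand-in for t)

def tost_abe(logdiff, se, lo=math.log(0.8), hi=math.log(1.25)):
    """Average BE by TOST: conclude equivalence iff the 90% CI for the
    mean log(T/R) lies entirely within [log 0.8, log 1.25]."""
    ci = (logdiff - Z90 * se, logdiff + Z90 * se)
    return (ci[0] >= lo and ci[1] <= hi), ci

def sabe_limits(s_wr, cutoff=0.294, theta=math.log(1.25) / 0.25):
    """Reference-scaled ABE limits: once the within-reference SD s_wr
    exceeds the regulatory cutoff, the limits widen in proportion to s_wr."""
    if s_wr <= cutoff:
        return (math.log(0.8), math.log(1.25))
    return (-theta * s_wr, theta * s_wr)

print(tost_abe(0.05, 0.05))   # small difference, modest SE: passes
print(sabe_limits(0.20))      # below cutoff: standard limits apply
print(sabe_limits(0.40))      # highly variable: widened limits
```

The widening is what lets a highly variable reference product be matched with a feasible sample size; the trade-off, as the session abstract notes, is its effect on consumer risk.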

An ANCOVA Approach to Reducing the Residual Variance and Sample Sizes Required for Parallel Design Bioequivalence Studies
Pina D'Angelo, Novum Pharmaceutical Research Services

Assessing Bioequivalence of Locally-Acting Generic Products; Statistical Controversies and Arising Issues
Elena Rantou, FDA/CDER

A New Proposed Scaled Criterion for Drug Interchangeability
Shein-Chung Chow, Duke University

 

PS6c Parallel Session: Summarizing case studies to learn and improve confirmatory adaptive trial design and implementation

09/18/15
12:45 PM - 2:00 PM
Thurgood Marshall East

Organizer(s): Paul Gallo, Novartis; Weili He, Merck & Co. Inc.; Xuefeng Li, FDA CDRH; Min (Annie) Lin, FDA CBER

Chair(s): Paul Gallo, Novartis; Xuefeng Li, FDA CDRH

In the past decade there have been an increasing number of confirmatory phase III trials utilizing adaptive designs. However, the uptake and general use of adaptive trial designs still seems relatively low (estimated at approximately 20%, according to a Tufts CSDD survey). Among the main reasons are likely the added complexity of these designs compared to traditional designs, perceived risks of regulatory concerns, and a lack of consensus on best practices for planning, implementing, and documenting these trials. The speakers in this session will present several completed confirmatory adaptive design case studies collected by the DIA ADSWG Best Practices (BP) Subteam, describe challenges in the design or conduct of these trials and the solutions that were adopted, and provide general guidance on improvements that could address the concerns that led to certain types of adaptive designs being characterized as "less well understood" in the 2010 FDA draft adaptive design guidance. Invited speakers and panelists from the DIA ADSWG BP Subteam, and from FDA, industry, and academia, will share their views on challenges and solutions for key issues in the design and implementation of confirmatory adaptive design trials. They may include Sue-Jane Wang, Greg Campbell, Martin Posch, and Robert Hemmings.

Addressing Challenges and Opportunities of “Less Well Understood” Adaptive Designs
Weili He, Merck & Co. Inc.; Paul Gallo, Novartis

Case Studies of Less-Well Understood Adaptive Designs
View Presentation Eva Miller, inVentiv Health Clinical

Panel Discussion
Greg Campbell, formerly of FDA/CDRH; Weili He, Merck & Co. Inc.; Eva R Miller, inVentiv Health Clinical; Jerry Schindler, Merck & Co., Inc.; Sue-Jane Wang, US FDA; Boguang Zhen, FDA CBER

 

PS6d Parallel Session: Incorporating Patient Perspectives in Medical Product Life Cycle

09/18/15
12:45 PM - 2:00 PM
Thurgood Marshall West

Organizer(s): Scott Braithwaite, Department of Population Health, New York University School ; Martin Ho, Center for Devices and Radiological Health, FDA; Telba Irony, FDA; Bennett Levitan, Janssen Research & Development at Johnson & Johnson

On November 4, 2014, the FDA published a Federal Register Notice (FRN) to solicit input from stakeholders on strategies for obtaining the views of patients during medical product development, and on ways to account for patients' input in the regulatory process, under the Food and Drug Administration Safety and Innovation Act. This FRN reflects how widely the principle of patient-centered health care has been accepted in the US. This session will describe how various types of quantitative patient preference data can be elicited, assessed, and applied at different stages of the medical product life cycle. In particular, the session will focus on how patient preference data can potentially be incorporated into FDA's regulatory decision-making process in the benefit-risk assessment context. Experts from the FDA, industry, and academia will shed light on this emerging and important regulatory science research area.

Incorporating Patient Preferences into Regulatory Decision Making
View Presentation Telba Irony, FDA

Patient-focused Benefit-risk Assessment
Bennett Levitan, Janssen Research & Development at Johnson & Johnson

What research is necessary to determine whether particular measures to incorporate patient preferences into the medical product life cycle lead to decisions that are more preference-concordant?
Scott Braithwaite, Department of Population Health, New York University School

 

PS6e Parallel Session: Common statistical issues FDA encounters

09/18/15
12:45 PM - 2:00 PM
Lincoln 5

Organizer(s): Brent Burger, PAREXEL International; Adam Hamm, Theorem Clinical Research, Inc.; Heng Li, FDA/CDRH/OSB; Vandana Mukhi, FDA/CDRH/OSB

Chair(s): Heng Li, FDA/CDRH/OSB

FDA statisticians routinely review investigational plans for pivotal clinical trials. Of the issues that may arise, some are quite common. In this session we discuss such common issues, in the hope that the discussion will improve the quality of submissions and the consistency of reviews.

Statistical issues in regulatory reviews of CBER products
Shiowjen Lee, FDA

Multiplicity and Type I error control
Laura L Fernandes, FDA

Statistical Study Design Considerations for Medical Device Clinical Studies - from an FDA Reviewer’s Perspective
View Presentation Xu Yan, FDA/CDRH

 

PS6f Parallel Session: Missing Data in Diagnostic Device Studies: Methods and Case Studies

09/18/15
12:45 PM - 2:00 PM
Lincoln 6

Organizer(s): Bipasa Biswas, FDA; Kristen Meier, Illumina, Inc; Vicki Petrides, Abbott Laboratories; Xuan Ye, FDA

Chair(s): Bipasa Biswas, FDA

Missing data are a prevailing problem for both therapeutic and diagnostic medical device studies. While missing data may be minimized by appropriate design and conduct of a clinical trial, they are still inevitable. It is generally recommended to conduct sensitivity analyses to assess the robustness of results to the handling of missing data. Yet the issues and the handling of missing data differ between therapeutic and diagnostic medical devices because of differences in study designs, and ignoring missing results when reporting diagnostic performance can be misleading. This session will focus on various types of missing data in diagnostic device studies. Practical issues and methodologies for handling missing data under different scenarios will be presented and discussed, and case studies using real clinical trial data will illustrate the problem.

Missing Data in Diagnostic Device Studies: Methods and Case Studies
Xiao-Hua Zhou, University of Washington

A Case Study of Handling Missing Data in Diagnostic Device Studies
Yuqing Tang, US Food and Drug Administration

Missing Data in IVD Studies
View Presentation Hope Knuckles, Abbott Laboratories

Discussant(s): Gene Anthony Pennello, FDA/CDRH

 

PS7a Parallel Session: Platform Trials and Master Protocols: New adaptive designs advancing personalized medicine

09/18/15
2:15 PM - 3:30 PM
Thurgood Marshall North

Organizer(s): Ohad Amit, Glaxosmithkline; Cristiana Mayer, Johnson & Johnson; Lei Nie, FDA/CDER; Rajeshwari Shridhara, FDA

Chair(s): Teri Ashton, GlaxoSmithKline

In the race towards drug development innovation, adaptive designs have played a major and increasing role in the last decade. The concept of platform clinical trials has come to the fore and is gaining acceptance among pharmaceutical companies for advancing personalized medicine and the study of challenging and/or rare diseases. In 2010 the I-SPY2 trial was launched as one of the first platform trials: an innovative collaboration across five pharmaceutical companies in a phase 2 breast cancer trial. This collaborative approach, which aims to accelerate identification of the most promising compound for a given population, or the most suitable population for a given experimental drug, significantly reduces the cost, time, and sample size of drug development. The same platform protocol can investigate multiple drugs and regimens across multiple, diverse subgroups of patients. In rare or difficult-to-enroll populations, platform trials offer the opportunity to steer important new therapies into a standing clinical trials infrastructure, potentially expediting the availability of much-needed new therapies. This session will provide an overview of the statistical aspects defining platform clinical trials, with emphasis on design novelties and implementation efficiencies. Examples across a broad range of areas will highlight statistical innovations, regulatory perspectives, and implementation efficiencies. Speakers approached for this session include Scott Berry (Berry Consultants), Vlad Dragalin (Johnson & Johnson), and FDA representatives.

Statistical Considerations in Developing Master Protocols in Oncology
Lijun Zhang, FDA

Utilizing Patient Data to Guide Treatment – All Patients are Not the Same!
J. Kyle Wathen, Janssen R&D

Platform Trials: Statistical Efficiencies and Practical Examples
Scott Berry, Berry Consultants

 

PS7b Parallel Session: Poolable or Non-poolable: Challenges and Solutions

09/18/15
2:15 PM - 3:30 PM
Thurgood Marshall South

Organizer(s): Ying Yan, Helsinn; Ying Yang, FDA/CDRH; Minjung Yoon, .; Yu Zhao, FDA/CDRH

Chair(s): Yunling Xu, FDA/CDRH

Almost all pivotal clinical studies are conducted in multiple centers and/or regions. Patients from different study centers or countries/regions may have different effectiveness and/or safety profiles. The differences may be due to different patient baseline characteristics, surgeons' or physicians' knowledge and skill levels, patient care, etc. It is always a concern whether data from different centers or regions can be pooled; the estimated treatment effect may be biased and misleading if heterogeneity among study sites or regions is ignored. In this session, we will discuss approaches used to assess inconsistency among study centers and/or regions, as well as statistical methods used to estimate the treatment effect while adjusting for this heterogeneity.

Data pooling in clinical studies: Statistical model and methods as well as our experience
Shun Zhenming, Sanofi-Aventis

Poolability Analyses in Medical Device Trials: a Reviewer’s Perspective
Yu Zhao, FDA/CDRH

CASE STUDY: Assessing poolability in a large randomized study on dual-antiplatelet therapy
Joe Massaro, Boston University School of Public health

 

PS7c Parallel Session: Emerging topics in Benefit-risk assessment

09/18/15
2:15 PM - 3:30 PM
Thurgood Marshall East

Organizer(s): Weili He, Merck & Co. Inc.; Qi Jiang, Amgen; Xuefeng Li, FDA CDRH; John Scott, CBER FDA

Chair(s): Weili He, Merck & Co. Inc.; Xuefeng Li, FDA CDRH

In February 2013, FDA released a draft PDUFA V implementation plan on a Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. This document joined several other benefit-risk (BR) guidance documents and recommendations, including ones from FDA CDRH, PROTECT, EMA, and ISPOR, which have been widely reviewed and discussed in recent years. This body of BR guidance is timely and consistent with the long-standing understanding that the benefits of a medical product can only be understood in the context of the risks or harms associated with that product, and vice versa. The Quantitative Sciences in the Pharmaceutical Industry (QSPI) Benefit-Risk Working Group (BRWG), formed in early 2013, has been actively pursuing several emerging topics in BR assessment, including identification and evaluation of uncertainties in BR assessment, commonly used graphics for BR assessment in clinical development, identification of different data sources for BR assessment, and issues to consider for BR assessment in subgroups. This session will present the most current work from this working group. In addition, a panel of experts in BR assessment from FDA, industry, and academia will share their views on the current regulatory environment, emerging issues in BR assessment, and next steps and future directions. Potential speakers and panelists may include Telba Irony, Aloka Chakravarty, and Ellis Unger.

Panel Discussion
View Presentation Telba Irony, FDA; Qi Jiang, Amgen; John Scott, CBER FDA; Ellis Unger, FDA CDER

Emerging Topics in Benefit-Risk Assessment
Qi Jiang, Amgen

 

PS7d Parallel Session: Bayesian Assessment of Benefit-Risk Balance in Drug Development

09/18/15
2:15 PM - 3:30 PM
Thurgood Marshall West

Organizer(s): Maria Costa, GlaxoSmithKline; Carl DiCasoli, Bayer Healthcare Pharmaceuticals; Min Min, FDA; Yueqin Zhao, U.S. Food and Drug Administration

Chair(s): Yueqin Zhao, U.S. Food and Drug Administration

To gain regulatory approval, a new medicine must demonstrate that its benefits outweigh any potential risks. Over the past several years there has been growing recognition among sponsors and regulators of the need for a more structured and consistent approach to assessing the benefit-risk balance of new therapies. The Bayesian inference framework and philosophy offers a tool for learning and updating one's beliefs about particular parameters of interest. This aspect of the Bayesian philosophy is especially attractive in the context of benefit-risk assessment, as existing information can be formally incorporated into the analysis of any emerging data. In addition, posterior probabilities offer a simple and clear device with which to convey the benefit-risk balance to a non-statistical audience. This session will feature three talks showcasing the added benefit of a formal assessment of the benefit-risk balance using Bayesian inference from different perspectives: sponsor, regulatory, and academic. The objective is to understand the potential for these methods to provide greater clarity on the benefit-risk balance to regulators, and the state of the art in statistical methodology. Speaker 1: Professor Deborah Ashby (Imperial College London), "What role should formal risk-benefit decision-making play in the regulation of medicines?" Speaker 2: Dr Ian Hirsch (AstraZeneca), "A novel methodological approach which allows structured benefit-risk assessments to incorporate both uncertainty and correlations between endpoints and weights." Speaker 3: Dr Ram Tiwari (FDA), "Bayesian approach to personalized benefit-risk assessment with application to clinical trial data."

What role should formal risk-benefit decision-making play in the regulation of medicines?
View Presentation Deborah Ashby, Imperial College London

Bayesian approach to personalized benefit-risk assessment with application to a clinical trial data
View Presentation Ram Tiwari, Food and Drug Administration

A novel methodological approach which allows structured benefit risk assessments to incorporate both uncertainty and correlations between endpoints and weights
View Presentation Ian Hirsch, AstraZeneca

 

PS7e Parallel Session: Statistical considerations in evaluating imaging-based devices

09/18/15
2:15 PM - 3:30 PM
Lincoln 5

Organizer(s): Jeffrey L Joseph, Theorem Clinical Research; Jincao Wu, FDA, CDRH; Jingjing Ye, FDA

Chair(s): Jincao Wu, FDA, CDRH

Imaging devices are valuable technologies for primary diagnosis or as an aid to diagnosis for disease screening, diagnostic work-up, monitoring, quantitative biomarker measurement, etc. These imaging devices include radiological techniques for identifying abnormalities (e.g., mammography for breast cancer) and digital slides in pathology that allow pattern recognition and visual search (e.g., tissue slides stained with Her2 for gastric cancer). The evaluation of these devices requires unique analytical and clinical studies. For example, study design and analysis typically need to address variability in image interpretation across readers; also, when cases are read in two modalities, the second reading of a case can be affected by memory of the case from the first reading. In this session, statistical considerations in study design and analysis will be discussed by academic, industry, and FDA researchers. Three confirmed speakers: 1. Qin Li, FDA/CDRH, "Statistical considerations in Multi-Reader Multi-Case studies for medical imaging devices"; 2. Yuying Jin, FDA/CDRH, "Challenges in Digital Pathology and its recent development"; 3. Nancy Obuchowski, Quantitative Health Sciences, The Cleveland Clinic Foundation, "Comparison of ROC methods in Multi-Reader Multi-Case studies".

Effect of Location Bias in MRMC ROC Studies
Nancy A. Obuchowski, Cleveland Clinic Foundation

Challenges in digital pathology and its recent development
Yuying Jin, FDA/CDRH

Statistical considerations in reviewing radiological imaging devices in CDRH
View Presentation Qin Li, FDA/CDRH

 

PS7f Parallel Session: Use of phase 2 interim analysis to expedite drug development decisions

09/18/15
2:15 PM - 3:30 PM
Lincoln 6

Organizer(s): Jenny Huang, Genentech Inc., a member of Roche; Qin Li, FDA/CDRH; Norberto Pantoja-Galicia, FDA; Qi Xia, Genentech Inc., a member of Roche

Chair(s): Jenny Huang, Genentech Inc., a member of Roche; Qi Xia, Genentech Inc., a member of Roche

This session discusses the systematic use of interim efficacy analyses from comparative phase 2 trials to expedite phase 3 development decisions: a practical approach to shortening the drug development timeline and reducing development cost without complicating the trial design or compromising the sponsor's ability to identify gaps in knowledge and thoughtfully design the phase 3 trial. We will first examine the theoretical basis, as well as empirical evidence from 35 Roche/Genentech oncology trials, for using phase 2 interim analyses to facilitate earlier development decisions consistent with the final phase 2 readout, and then go through examples from recent Roche/Genentech oncology trials to address the benefits, costs, and implementation details.

Use of phase 2 interim analysis to expedite drug development decisions
Qi Xia, Genentech

FDA guidance and regulatory experience on enrichment design
Yuan Li Shen, FDA/CDER/OTS/OB; Raji Sridhara, FDA/CDER/OTS/OB

Discussant(s): Daniel Sargent, Mayo Clinic