Online Program

Search by Last Name:

Keyword:

   

Mon, Sep 16

SC1 Overview, Hurdles, and Future Work in Adaptive Design

09/16/13
8:30 AM - 12:00 PM
Thurgood Marshall South/West

Organizer(s): John Scott, FDA CBER/OBE/DB

Instructor(s): Chris Coffey, University of Iowa

In recent years, there has been substantial interest in the use of adaptive or novel randomized trial designs. Adaptive clinical trial designs provide the flexibility to make adjustments to aspects of the design of a clinical trial based on data reviewed at interim stages. Although there are a large number of proposed adaptations, all generally share the common characteristic that they allow for some design modifications during an ongoing trial. Unfortunately, the rapid proliferation of research on adaptive designs, and inconsistent use of terminology, has created confusion about the similarities, and more importantly, the differences among the techniques. Furthermore, the implementation of adaptive designs to date does not seem consistent with the increasing attention provided to these designs in the statistical literature. This course will first provide some clarification on the topic and describe some of the more commonly proposed adaptive designs. It will focus on some specific barriers that impede the use of adaptive designs in the current environment. Finally, there will be a discussion on future work that is needed to ensure that investigators can achieve the promised benefits of adaptive designs. The course presenter will be Chris Coffey, Ph.D., Professor, Department of Biostatistics and Director, Clinical Trials Statistical and Data Management Center, University of Iowa.

Download Presentation Download Material
 

SC2 Survival Analysis: Overview of Parametric, Nonparametric and Semiparametric approaches and New Developments using SAS Software

09/16/13
8:30 AM - 12:00 PM
Thurgood Marshall North/East

Organizer(s): Cristiana Mayer, Johnson & Johnson Pharmaceutical Research and Development

Instructor(s): Joseph Gardiner, Michigan State University

Duration and severity data arise in several fields including biostatistics, demography, economics, engineering and sociology. The course presents an overview of survival analysis methods comprising parametric, nonparametric and semiparametric approaches and their recent developments. These new techniques extend their reach to include analyses of multiple failure times, time-dependent covariates, recurrent events, frailty models, Markov models and use of Bayesian methods. Methods include Kaplan-Meier estimation, accelerated life-testing models, and the ubiquitous Cox model. The course will use examples from biostatistics. Topics include modeling hazard rates, multivariate outcomes of mixed types, and quantile estimation. Applications will be paired with syntax for SAS procedures LIFETEST, LIFEREG, PHREG, RELIABILITY and QUANTLIFE for analyzing real data sets drawn from easily accessible sources.

Download Presentation Download Material
 

SC3 Unsettled Issues In Clinical Trial Data Monitoring

09/16/13
8:30 AM - 12:00 PM
Wilson A,B,C

Instructor(s): Susan Ellenberg, Univesity of Pennsylvania; Steve Snapinn, Amgen

The use of independent data monitoring committees to oversee the accruing data in a clinical trial has become much more common over the past two decades, both in industry- and government-sponsored clinical trials. Some basic principles—avoiding conflicts of interest, maintaining confidentiality of interim data, having a formal charter to describe DMC operations, establishing boundaries to guide early stopping decisions—are widely accepted. The implementation of these principles, however, is often not straightforward and multiple perspectives have emerged in regard to optimal practices. In this short course we will address issues relating to independence (what constitutes a conflict of interest; role of sponsor staff; the independent statistician; use of “non-independent” DMCs); appropriate criteria for early termination (both for efficacy and for futility); blinding (what should be blind, and to whom); DMCs in studies using adaptive designs; content and format of DMC reports; and special challenges for DMCs for multinational trials.

Download Presentation Download Material
 

SC4 Enrichment Strategies in Adaptive Clinical Trials: Theory and Implementation

09/16/13
1:30 PM - 5:00 PM
Thurgood Marshall South/West

Organizer(s): Vlad Dragalin, Aptiv Solutions; Anastasia Ivanova, University of North Carolina at Chapel Hill

Instructor(s): Vlad Dragalin, Aptiv Solutions; Anastasia Ivanova, University of North Carolina at Chapel Hill

This course will offer recent developments in methodology and case studies of adaptive trials with patient population enrichment. We will cover some specific to adaptive issues highlighted in the recently issued FDA Draft Guidance on Enrichment Strategies for Clinical Trials. The goal is to help the participants better understand the methodological issues and challenges in implementation. We will review designs for trials with potentially high placebo response including placebo run-in and the sequential parallel comparison design. We will also review randomized withdrawal design and introduce designs that combine parallel comparison, randomized withdrawal and placebo run-in strategies. We will examine recent methodology developments in population enrichment designs with predefined sub-populations. Testing strategies for combining data from before and after sub-population selection, multiplicity tests for intersection hypotheses, sub-population selection rules and related sample size re-estimation methods will be reviewed.

Download Presentation Download Material
 

SC5 A Practical Guide to Prevention and Treatment of Missing Data

09/16/13
1:30 PM - 5:00 PM
Thurgood Marshall North/East

Organizer(s): Devan Mehrotra, Merck; Lei Xu, Biogen Idec

Instructor(s): Craig Mallinckrodt, Eli Lilly and Company; Russ Wolfinger, SAS

This short course will focus on practical approaches to the prevention and treatment of missing data in longitudinal clinical trials. The course will be taught by internationally recognized experts as representatives from the Drug Information Association’s Scientific Working Group on missing data. Participants will have free access to, and hands-on experience using, sensitivity analysis tools developed by DIA working group. Recent decades have brought advances in statistical theory, which, combined with advances in computing ability, have allowed implementation of a wide array of analyses. In fact, so many methods are available that it can be difficult to ascertain when to use which method. A danger in such circumstances is to blindly use newer methods without proper understanding of their strengths and limitations, or to disregard all newer methods in favor of familiar approaches. Moreover, the complex discussions on how to analyze incomplete data have overshadowed discussions on ways to prevent missing data, which would of course be the preferred solution. Therefore, preventing missing data through appropriate trial design and conduct is given significant attention in this course. Nevertheless, missing data will remain an ever-present problem and analytic approaches will continue to be an important consideration. Recent research has fostered an emerging consensus regarding the analysis of incomplete longitudinal data, especially in regards to the need for sensitivity analyses. Theoretical concepts underpinning this consensus will be covered. An example data set will be used to illustrate a structured framework for choosing estimands, estimators, and sensitivity analyses. Participants will be guided through the analysis of a second example data set using the SAS macros developed by the DIA Working Group. Participants will have access to other training material related to missing data and the use of the sensitivity tools. A few highlights of this short course that differentiate itself from previous ones include (1) emphasizing “practical” and providing hands-on software programs for immediate implementations (2) a comprehensive range of actionable approaches to implement and leverage the 2010 National Academy of Sciences report on the handling missing data in clinical trials (3) targeting to facilitate real-life problem solving as well as regulatory-industry interactions.

Download Presentation Download Material
 

SC6 Issues and Methods for Multi-Regional Clinical Trials From an Industry and Regulatory Perspective

09/16/13
1:30 PM - 5:00 PM
Wilson A,B,C

Organizer(s): Bruce Binkowitz, Merck and Co. Inc.; Joshua Chen, Merck and Co. Inc.

Instructor(s): Joshua Chen, Merck and Co. Inc.; James Hung, FDA; Sue Jane Wang, FDA

Part 1: Industry Perspectives and Experiences Part 2: A Regulatory Perspective and Experiences While the concept of a multiregional clinical trial is not new, the prevalence of these trials has been growing over the last few decades. MRCTs are most often conducted as a single trial focusing on an overall result or set of results, but when such trials are submitted to health authorities, the scope and concern often broaden to include the ""local"" results. It is readily accepted that studying patients from many different regions within a single trial under a single protocol is an efficient method of trial design. However, such trials come with their own special issues during design, operation, analysis, and interpretation. This short course will introduce the statistician to these issues, and describe methods to help handle such issues. Topics to be covered include specific MRCT concerns at the design stage, including defining region and the impact of lack of regulatory harmonization on therapeutic area guidances. The impact of the number of regions and the sample size configuration across the regions on the required total sample size, the estimation of between-region variability and type I error rate control for the overall treatment effect will also be discussed. Analysis and interpretation will be discussed including graphics and methodology to evaluate consistency of effect. Methods of assessing consistency of results across regions will be described and the properties compared, and additional methods involving adaptive designs and Bayesian methods will be described. Both fixed and random effect models will be discussed with application to the MRCT setting. Case studies will be presented and the methods introduced will be applied to those case studies. This course will be taught by FDA statisticians who have done research and published on this topic, as well as Industry statisticians who were members of the PhRMA MRCT Working Group.

Download Presentation Download Material
 

Tue, Sep 17

GS1 Keynote Address

09/17/13
8:30 AM - 10:00 AM

Organizer(s): Bruce Binkowitz, Merck and Co. Inc.; Lilly Yue, FDA

Shouldn’t We Be Proud of What We do?
Ron Wasserstein, American Statistical Association

Essential Concepts for Causal Inference in Randomized Experiments and Observational Studies in Biostatistics
Don Rubin, Harvard University

 

GS2 Plenary Panel Session: Innovation and Best Practices for Clinical Trials

09/17/13
10:15 AM - 11:45 AM

Organizer(s): Bruce Binkowitz, Merck and Co. Inc.; Lilly Yue, FDA

Moderator: Lisa LaVange, FDA Panelists: Greg Campbell, FDA; Tom Fleming, University of Washington; Frank Shen, Abbvie; Sue-Jane Wang, FDA; Kyle Wathen, J&J; Janet Wittes, Statistics Collaborative Inc.

 

TL01 Logistics and Implementation of Adaptive Trial Designs

09/17/13
11:45 AM - 1:00 PM
Madison B

Chair(s): Eva R Miller, Quality Data Services

It has been almost 4 years since February 2010 and the release of the FDA Draft Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. Statisticians who would like to share some of their experiences in implementing these adaptive clinical trials and working with their study teams to adopt these new methodologies will discuss their experiences and the differences between adaptive trial designs and the more traditional trials. Impact on teams and team structures will also be discussed. Newcomers to adaptive trial design are welcome. Questions to be covered include but are not limited to: (1) What additional planning, teamwork and communication is required for successful implementation of adaptive trial designs? (2) How do simulations play a role in the development of effective adaptive trial designs? (3) What is required for randomization schemes, drug supply management, and clean data within adaptively designed studies?

 

TL02 Borrowing from Informative Priors

09/17/13
11:45 AM - 1:00 PM
Harding

Chair(s): Pablo E. Bonangelino, FDA/CDRH

While not as common as non-informative priors in Bayesian trials submitted to FDA, borrowing from informative priors is possible in a regulatory setting. A critical question in these cases is how much to discount a given prior. This can be viewed as an example of elicitation from experts, with all of the attendant considerations. The purpose of this roundtable is to share experiences with informative borrowing and to discuss some of the difficulties with doing so including issues related to expert elicitation.

 

TL03 Utility of Bayesian Methods in the Analysis of Safety Data in the Pre-Market Setting

09/17/13
11:45 AM - 1:00 PM
Harding

Organizer(s): Caiyan Li, Takeda; Melvin Slaighter Munsaka, Takeda Global Research and Development, Inc

Chair(s): Caiyan Li, Takeda; Melvin Slaighter Munsaka, Takeda Global Research and Development, Inc

There are many statistical challenges encountered in the analysis of safety data. Statistical methodologies and their applications to safety data have not yet been fully realized, though various approaches have been discussed in the literature. This creates room for improving in the analysis and reporting of safety data. Bayesian methods present a potential and viable approach to enhance the analysis of safety data. Potential applications of Bayesian methods for analysis of safety data include the ability to incorporate prior information that may be pertinent to the safety question being investigated or perhaps incorporating clinical judgment and other factors related to safety. Bayesian estimation methods also allow for parameter uncertainty and are especially useful because of the high dimensionality of safety data and can be used to combine information across different safety studies. In this round table discussion, the utility of Bayesian methods will be discussed within the premarketing setting in the context of the following questions: (a) What is your experience using Bayesian approaches in the analysis of safety data? (b) Why should be we use Bayesian approaches in the analysis of safety data? (c) Can Bayesian methods be used in routine analysis of safety within the testing and estimation contest? If so what approaches are available and what are the challenges? (d) Can Bayesian methods be used in routine modeling and prediction of safety data, analysis of rare events, safety data synthesis, and in addressing multiplicity questions? The discussion will highlight the relevant methodological approaches and also discuss how these compare with the frequentist methods and the advantage of using such methods.

 

TL04 Long Term Safety Follow up - Challenges

09/17/13
11:45 AM - 1:00 PM
Harding

Chair(s): Vipin Arora, AbbVie, Inc.

Long term follow up especially for Safety is needed to ensure that the risk beneift profile is not shifted for treatments of chronic conditions. Challenges in designing long term safety studies and ensuring sufficient participation is a challenge. The discussion to highlight challenges and how to help Safety Team to ensure designing and implementing realistic studies/follow up that are feasible and have interim checks (built in) to ensure appropiate flexibility is available and agreed upon due to long term follow up type of these studies.

 

TL05 Assessment of Benefit Risk in Pivotal Studies

09/17/13
11:45 AM - 1:00 PM
Harding

Chair(s): Suchitrita Sarkar Rathmann, AbbVie

Benefit-risk assessment of a treatment under review has become one of the primary concerns of Regulatory Agencies over the last few years. Quantifying Benefit-risk profile of any treatment is not an easy task, especially if multiple endpoints are involved in the pivotal study. We would like to discuss different methods of quantifying the benefit-risk profile of a treatment under consideration in accordance with review practices of regulatory agencies and look at examples from a pivotal study with multiple endpoints (which included both categorical and continuous variables). This could lead to further discussions on whether the benefit-risk profile can be compared to treatments already available in the market.

 

TL06 Implementation of Biomarker Development and then Validation at the Intercection of Research and Clinical Development

09/17/13
11:45 AM - 1:00 PM
Hoover

Chair(s): Maha Karnoub, Celgene

In this roundtable, we would review what the FDA guidelines are on what constitutes a biomarker worthy of inclusion in a submission for a given claim and we would discuss then models of collaboration between Research and Clinical Development. The collaboration would ensure - Reproducibility of the work done in Discovery Research - That we understand the characteristics of the biomarkers that were developed - That we understand the objectives from this biomarker as it now enters a clinical trial - That we have the necessary information to answer the questions from the objectives and to have that biomarker considered within the submission.

 

TL07 Criteria for Biosimilar approval

09/17/13
11:45 AM - 1:00 PM
Hoover

Organizer(s): Peter A Lachenbruch, Oregon State University (retired)

Chair(s): Eric Chi, AMGEN; Peter A Lachenbruch, Oregon State University (retired)

Biosimilars, or Follow on Biologics, have attracted much interest in recent years. FDA has been developing standards and the pharmaceutical industry is also doing so. Some issues are the large molecules that must be shown to be ‘similar’ – and this isn’t easy. In addition, many variables may be required to be similar. This roundtable will examine some of the issues 1. What are appropriate levels of similarity? 2. How many variables need to be shown to be similar? a. Which variables should be considered? b. How should safety and efficacy variables be treated?

 

TL08 Biomarker

09/17/13
11:45 AM - 1:00 PM
Hoover

Organizer(s): Steven Bai, FDA

Chair(s): Grace Xiuping Liu, Johnson & Johnson - Janssen Research Development ; Lixia Pei, Jenssen Research & Development

Increase understanding of a disease mechanism, thereby provide better choice of drug targets for the future study becomes very important in the clinical studies. Clinical biomarker tests that enable the characterization of patients population and aid in making new drugs reach the intended target for a better treatment decisions play an important role in clinical trials. Definitive evaluation of the clinical utility of these biomarkers requires understanding the concept of biomarkers, type of biomarkers, validation of biomarkers and the application in the clinical trials design. This round table will focus on discussion those questions. The Key aspects of the discussion include biomarkers selection in early trials to confirm a treatment’s activity and select a dose for further testing via the Prognostic biomarkers; biomarkers application for predicting the treatment effect on the clinical outcome; surrogate biomarkers in clinical endpoint to evaluate the effect of a specific treatment. Another application will be for the bridging biomarkers and the clinical data study supports the third country submission. Questions to discuss: 1. Have you done some biomarker analyses in clinical trials? Please share your experience. 2. Do you have any experience of applying noval surrogate endpoint in the regulatory setting?

 

TL09 Ideas on Establishing an Independent Supervisory Body for Data Monitoring Committee Processes

09/17/13
11:45 AM - 1:00 PM
Hoover

Chair(s): Yeh-Fong Chen, US Food and Drug Administration; Paul Gallo, Novartis Pharmaceuticals

For clinical trials which incorporate interim analyses of unblinded data, an independent Data Monitoring Committee (DMC) is commonly constituted to review the results, to ensure patient safety and protect trial integrity. Whether or not the DMC will function effectively and provide sound recommendations to the sponsor depends on the DMC’s qualifications and experiences, and the degree of pre-planning. Recently, there have been proposals for establishing an independent party to provide oversight or supervision to a DMC, or with whom the DMC can interact, in the hope of enhancing the process. We will discuss pros and cons of such operational models.

 

TL10 Operational Models for Data Monitoring Committees

09/17/13
11:45 AM - 1:00 PM
Hoover

Chair(s): William Coar, Axio Reserach; Lynn Navale, Amgen Inc.

Data Monitoring Committees (DMCs) are now standard for many types of clinical studies, and have numerous roles such as protecting patient safety and ensuring quality conduct of the study. This is done through interim assessment of data throughout the course of a clinical trial and involves the Sponsor organization, an Independent Statistical Center (ISC), and the DMC. The interaction of these three entities is often clearly defined in a DMC Charter document, but the operational procedures may not be. There is a spectrum of ways in which the ISC and Sponsor organization may arrange responsibilities. One model commonly used is for the ISC to perform all programming of reports provided to the DMC. A very different model is for the Sponsor organization to send the ISC a complete set of validated programs and CRF data. Other arrangements may occur that are between these two extremes. The focus of this round table will be to discuss the merits for and against each of these operational models.

 

TL11 How Can Statisticians Show Leadership in Improving Data Quality?

09/17/13
11:45 AM - 1:00 PM
Hoover

Chair(s): Michael D. Hale, Amgen

Studies of medical interventions cost billions of dollars annually, and involve enormous numbers of volunteer patients. What should we be doing to better ensure those efforts provide high quality data, and enable better decisions? Error-rate reduction in trial data may get a disproportionate amount of resource and attention, with diminishing returns as one strives for ZERO. This measure of quality is about dataset FIDELITY. Discussion point 1 is about "fit for purpose" data quality Vs a zero-error-rate perspective. What would we need to change to move to a fit for purpose paradigm of quality? Would this involve a risk-based approach to data quality, considering probability and impact of error for various data categories? Discussion point 2 addresses a different dimension, that of RELEVANCE: are we measuring all the things needed to well-inform decisions related to medical interventions? Perhaps a more thorough approach to relevance assessment (pre- & post-study) is needed to adopt a fit for purpose approach? How else would we categorize data regarding impact of error?

 

TL12 Demographic Criteria for Diagnostic and Device Clinical Trials

09/17/13
11:45 AM - 1:00 PM
Balcony A

Organizer(s): Hope Knuckles, Abbott

Chair(s): Richard Kotz, FDA

A draft guidance was issued for gender and other demographic consideration in device trials where treatment may be affected by these potential covariates. There is a wide variance in device and diagnostic trials, and a discussion on the application of this guidance on the clinical trial design will be discussed.

 

TL13 Practical issues in the application of propensity score methodology for designing pivotal, non-randomized studies of medical device

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Jianxiong Chu, Food and Drug Administration

Many therapeutic device studies involve highly complicated surgeries and sometimes it is impractical to conduct a multi-center, randomized clinical trial. Therefore, it is not uncommon that a prospective concurrently controlled but non-randomized multi-center study is proposed to collect clinical data to support a future 510K or PMA submission. In order to ensure a fair comparison between the investigative group and the control group, propensity score methodology is often proposed. In this roundtable session, we will discuss several practical issues for designing such non-randomized device studies, such as sample size estimation, pre-specification of the models for calculating propensity score (in particular, how to handle the center-level covariates), pre-specification of the primary analysis method using the calculated propensity score.

 

TL14 Data poolability

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Chul H Ahn, FDA/CDRH

Data from different groups are combined to obtain an overall estimate of an outcome variable. Different groups may consist of different centers, different studies, different patient populations, different device models, and so on. We will discuss some of the related issues with emphasis on pooling data from multi-centers. The following are the questions we may be interested in discussing. How do we demonstrate data are poolable? How do we handle non-poolable data? How do we determine the treatment effect if centers are homogeneous with respect to protocol and study execution, but heterogeneous with respect to results? Why do some sites perform better than the others – any potential sources of site heterogeneity?

 

TL15 Study design and imprecision estimation in precision study for diagnostic devices

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Qin Li, FDA

Imprecision is a summary measure indicating the extent of disagreement or variability of a set of replicated measurements. How to design and estimate imprecision depends on whether a test is continuous, semiquantitative, or qualitative. CLSI guideline EP5 discusses precision studies for quantitative assays performed on homogeneous biological specimens. I/LA28 A2 Appendix 1 provides statistical suggestions on how to evaluate the precision of immunohistochemistry assays which can be qualitative, semiquantitative or quantitative. Evaluation of qualitative test including assessing its reproducibility is also introduced in EP12. This roundtable session intends to share experiences and suggestions on study design and variance estimation approaches among industrial, academic and regulatory researchers.

 

TL16 Bias adjustment following group sequential design

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Qi Zhang, Eli Lilly and Company

The group sequential design (GSD) has been used to allow potential early trial termination while preserving the type I error rate, however, it can also cause bias in estimating treatment effect. As demonstrated by literature, the GSD tend to overestimate the true treatment effect size at early interim analysis, although the exact cause and effect of such a phenomenon, and the ways for bias adjustment is not well-understood by many in clinical trial practice. In this session, at first, magnitude of bias and its impact on the assessment of study effect size in some simulation studies in the literature will be discussed; secondly, existing approach for bias adjustment will also be shared; lastly, appropriate approach to display the final data after GSD will be discussed, especially to allow an appropriate assessment of magnitude of treatment effect across multiple studies during an NDA submission.

 

TL17 Analysis of Overrun Data in Group Sequential Trials

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Paul Thomas DeLucca, Merck and Co., Inc.

When a group sequential clinical trial is stopped by a data monitoring committee data often continue to accumulate from the time of database lock until patients can be brought back for an end of study visit (overrun data). In accordance with the intention-to-treat principle all data should be included in a final analysis of study data. A relevant question is whether the final analysis including the overrun data can be tested at an unadjusted alpha level or whether some adjustment is required to preserve the type I error. Questions to be discussed at this roundtable include: Under the null hypothesis once a study has stopped at an interim analysis and a type I error has been committed is it possible to make a type I error at a final analysis including overrun data? Do regulatory agencies expect an alpha adjustment at the final analysis? Do sponsors typically test at an adjusted alpha and if so, how is the adjusted alpha determined? Methods of estimating treatment effect and confidence intervals, such as repeated confidence intervals and median unbiased estimates will also be discussed.

 

TL18 Is interim futility analysis a free lunch?

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Julie Cong, Boehringer Ingelheim Pharmaceuticals Inc

In recent years, it has become more common to assess futility of an investigational product during interim analyses in randomized clinical trials. Futility assessment is either considered the sole purpose for the interim analysis or together with efficacy claim. An attractive feature of futility analysis at the interim is that it typically does not inflate the type I error rate. This roundtable session aims to discuss a few statistical and practical issues when planning for futility analysis in randomized trials: 1) under what circumstances, a futility analysis should be needed or useful; 2) scenarios when type I error rate needs to be adjusted; 3) impact on power; 4) practical issues in the timing, stopping rules and interpretation of the trial results; 5) use of conditional power and Bayesian predictive probability in futility analysis.

 

TL19 Regulatory Issues in Meta Analysis of Safety Data

09/17/13
11:45 AM - 1:00 PM
Balcony A

Chair(s): Aloka G Chakravarty, US FDA; Brenda Crowe, Eli Lilly & Company

Meta analysis has been used in regulatory decision making, especially in safety evaluations, quite extensively. Meta-analyses conducted in the regulatory context and for safety evaluation have unique issues compared with traditional meta-analyses. In particular, use in the regulatory context requires high levels of rigor, robustness, and transparency. Concepts such as well-defined objectives, pre-specification, blinding, clear exposure definitions, good outcome ascertainment, appropriate statistical methodology, data quality, and clear and thorough reporting are very important. In addition, the sparseness and possible imbalance of the outcomes creates both procedural and methodological challenges. In this roundtable discussion, we discuss the use of meta-analysis of randomized trials conducted in the regulatory context for the evaluation of safety. We will discuss key design, methodological, and reporting issues. We will also discuss real-life examples that had regulatory consequences in light of these issues.

 

TL20 Planning for Clinical Trials with Recurrent Event Data: When the Poisson Modeling Assumption may Not Hold

09/17/13
11:45 AM - 1:00 PM
Coolidge

Chair(s): Judy Li, FDA; Jerry Weaver, Novartis Pharmaceuticals Corporation

Recurrent events are often encountered in clinical trials. Some examples include the number of seizures in epilepsy, the number of relapses in multiple sclerosis, the number of bleeding episodes in coagulopathies, and the number of exacerbations in pulmonary diseases such as chronic obstructive pulmonary disease and asthma. The traditional design and analysis approach for these situations involves using the Poisson model under the assumption of a homogeneous Poisson sampling process. However, the homogeneous Poisson process may not hold leading to a problem of over-dispersion and thus incorrect inferences. While there are many alternative analysis methods for addressing the non-homogeneous Poisson process using the negative binomial model or semi-parametric multiple time-to-event methods (Anderson and Gill [1982]; Prentice, Williams, and Peterson [1981]; Wei, Lin, and Weissfeld [1989]), appropriately sizing recurrent event trials of this type in practice seems to be directed more towards the negative binomial model (Keene, et al [2007]) since there exists a closed form solution. Topics for discussion: 1) In what situations would time-to-first event be more favorable compared to the negative binomial or multiple time-to-event methods? 2) How robust is the negative binomial model when the homogeneous Poisson process is not violated?; 3) What additional assumption still exists when using the closed form sample size solution for the negative binomial model (Keene, et al [2007]), how could this impact power, and how should we possibly address this violation?; 4) How would one go about sizing the study using the semi-parametric multiple time-to-event models, and should we consider sample size re-estimation methods to revise our initial sample size estimates?

 

TL21 Predictive Modeling In Observational Studies

09/17/13
11:45 AM - 1:00 PM
Coolidge

Chair(s): Rui Li, Quintiles Outcome; Zhaohui Su, Quintiles Outcome

PURPOSE: Observational studies collect many clinical and patient characteristic variables. The objective of this session is to present statistical methods for building predictive models of patient outcomes as a function of patient characteristics and baseline clinical variables. DESCRIPTION: Predictive models focus on identifying significant predictors of outcomes rather than making inferences about pre-selected variables after adjusting for covariates. Statistical considerations include variable selection, variable forms, multicollinearity, missing data, clinical input and model validation. Variable-selection algorithms in current packaged programs, such as conventional stepwise regression, can easily lead to invalid estimates and tests of effects. Models should be constructed taking into account clinical reason or understanding. This session starts with an overview of existing model building methods, followed by detailed discussions of model-building steps and aforementioned statistical considerations. Univariate and multivariate regression, correlation analysis, bundling, variable reduction, goodness of fit, and bootstrap validation will be illustrated. The concepts and methods will be explained with the use of real-world observational study data, when possible.

 

TL22 Reallocation of Type I Error to Doses not Eliminated Due to Prospectively Defined Objective Safety Criteria

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Anthony James Rodgers, Merck & Co, Inc.

Studies with more than one study dose versus a comparator have an inherrent issue of multiplicity which could be mitigated using objective safety criteria that if prospectively defined could preclude certain doses from being subjected to a multiplicity adjustment for efficacy. Type I error for testing the efficacy could be allocated only to those doses that are not excluded from consideration due to safety. However, the criteria for exclusion based on safety would need to be objectively assessed and prospectively planned. If objective futility criteria are employed this may also provide for an opportunity to prospectively plan to reallocate alpha in the event doses are eliminated from consideration. Specific examples will be discussed.

 

TL23 Cohort sampling design in risk assessment markers

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Yuying Jin, FDA; Rong Tang, FDA

The study of risk assessment markers in rare disease or slow progressing disease requires studies with a very large sample size or a study that spans many years. A prospective population based cohort study may not be practical for evaluating such markers. Some cohort sampling designs such as nested case control and case cohort sampling have been widely used in epidemiological studies. And they can be effectively used in the risk assessment marker studies under certain circumstances. These cohort sampling designs may be used with both retrospective as well prospective study populations. However for such cohort sampling designs, appropriate sampling scheme should be considered to ensure the samples are representative of the intended use population. Furthermore, appropriate statistical method need to be utilized to eliminate the bias of the performance estimates. The roundtable discussion will provide the participants an opportunity to share the experience and the challenges with the cohort sampling studies. Question: what is your experience with cohort sampling design? Can you think of examples where these methods can be helpful? What should one watch out for when these methods are used?

 

TL24 Evaluating The Methodological Quality of Randomized Clinical Trials

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Vance William Berger, NIH

There are many hazards facing those who plan, conduct, and review randomized clinical trials, and the road to validity is rather narrow. For example, it is not enough that a study be randomized; the precise manner of randomization is also crucial in determining if the study is valid or not. The choice of endpoints, masking, allocation concealment, enrichment, alpha preservation, and the types of statistical analysis are also key determinants of trial quality. We will discuss all of these features, and possibly others as well, to better help participants recognize quality trials, and to suggest how all trials might be held to higher standards.

 

TL25 Pros and Cons of Discriminate Analysis in Animal Health Dose Titration Studies

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Theresa M Real, Novartis Animal Health

The analysis of tissue pathology data can be accomplished using many different statistical methods ranging from separate analyses for every variable to canonical discriminate analysis to provide information about the contribution of the variables included in the analysis. Separate analyses for each variable provide information only on that variable for each treatment group and may lead to equivocal results. The discriminate analysis provides weights based on those variables that have a high influence and may remove variables that have a minor contribution. A fixed index, based on input from a pathologist, generally keeps all variables and weights each variable’s importance by its incidence and severity. Discussion will center on the pros and cons of each method.

 

TL26 Recognizing and avoiding common bias problems in clinical trials

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Jonathan Siegel, Bayer HealthCare Pharmaceuticals Inc.

In this session we will discuss common bias problems in clinical trials and ways to avoid or mitigate them. Our aim is prevention, and the focus is on practical, low-tech, hands-on, shoe-dirt trial design strategies to recognize and avoid common problems and pitfalls, rather than sophisticated post-hoc analyses to adjust for them. Some issues may appear very simple yet are surprisingly pervasive. The extent of the problem will be illustrated with examples of results from biased trials published in leading journals, with discussion on how the trials could have been improved to avoid them. Examples will focus on problems particular to cancer trials, but issues are relevant to virtually all classes of trials and input from all backgrounds and therapeutic areas is welcome. Problems discussed will include assessment schedules which are inadequate to observe the dynamics of the phenomenon being modeled; assessments that depend on treatment timing; patient dropout patterns that differ among treatment arms; the use of continuous methods for highly discrete assessments; crossover issues; and more. This session would be of value to clinicians, regulators, and medical professionals as well as statisticians.

 

TL27 Randomization Metrics: How do you measure the goodness of a randomization scheme?

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Dennis E Sweitzer, Medidata Solution

By what statistical criteria, if any, do statisticians decide randomization methods and parameters to use for a given study? While most randomization methods usually seem to work well in published literature, is there any concern about worst-case performance of the methods, or performance under real world conditions (e.g., large differences in numbers of patients in subgroups or centers). Randomization methods generally are designed to be both unpredictable and balanced between treatment allocations overall and within strata. However, when planning studies, little consideration is given to measuring these characteristics, nor are they examined jointly, and published comparisons between methods often are not useful. In order to compare randomization performance, I simulated various covariate-adjusted randomization methods (such as permuted block, minimization, and urn methods), and compared efficiency & unpredictability graphically and statistically. Predictability was measured with a modified Blackwell-Hodges potential selection bias in which an observer guesses the next treatment to be one that previously occurred least in a strata, reflecting a game theory model pitting observers versus statistician, and is easy to calculate and interpret. Efficiency was calculated using Atkinson’s method because: (1) The main impact of imbalances is a loss of statistical power; (2) Even if treatments are balanced overall, imbalances within small strata can have a disproportionate impact on efficiency; (3) It is conveniently interpretable as lost sample size.

 

TL28 Statistical Significance vs. Clinical Significance

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Melissa Simones, Boston Scientific; Jack Zhou, FDA

For a clinical trial to be successful, the result usually has to meet both statistical and clinical success. Statistical success is usually measured by a statistically significant result, which can be demonstrated by having the lower bound of the confidence interval for the primary endpoint greater than zero (for endpoints with higher values being better). Clinical success, on the other hand, is often interpreted as the observed point estimate of the primary endpoint being greater than a certain value, often the Minimum Clinically Important Difference (MCID). This frequently creates a problem if one is not careful. Suppose the study is powered at 80%, assuming the true distribution of the primary endpoint is normally distributed with the mean of MCID. By the end of the study, there is only a 50% chance that the OBSERVED point estimate of the measure will be greater than the MCID. In other words, even though the study has 80% power to reach statistical success, it only has 50% power to reach clinical success. There is a 30% chance the study will be statistically successful but clinically unsuccessful. Clinical trial sponsors and regulatory agencies should be aware of this issue. Have you ever had a study like this where the statistical significance was reached but the clinical significance was questioned? How was the study result interpreted? What do you think that can be done to properly align statistical significance with clinical significance?

 

TL29 Prediction of event times in randomized clinical trials

09/17/13
11:45 AM - 1:00 PM
Balcony B

Chair(s): Misha Salganik, Cytel Inc

In a short presentation we will illustrate a simple prediction method that is based on a combination of previous knowledge of the time to event distribution with the data accumulated in the trial. We will discuss the advantages of the prediction methods that do not require unblinding of the treatment assignments. We will ask the participants about their experiences.

 

TL30 Impact of missing data on the approval of potentially efficacious therapies

09/17/13
11:45 AM - 1:00 PM
Coolidge

Chair(s): Xiaohong Huang, Vertex Pharmaceuticals; Abdul J Sankoh, Vertex Pharmaceuticals

There are a number of operationally preventive measures and some statistical approaches with seemingly remedial properties in the clinical and statistical literature that if appropriately implemented during the design, conduct and analysis of clinical trial data, should mitigate the occurrence and thus the detrimental impact of missing data on trial outcome, and subsequent approval of potentially safe and efficacious new drugs. Yet a number of drugs with potential therapeutic benefit have failed to gain approval due to serious missing data issue. This roundtable discussion will probe into the challenges pose by missing data in clinical trials and the utility of current approaches in minimizing the non-approval chances of potentially efficacious new therapies.

 

TL31 Missing Data Handling

09/17/13
11:45 AM - 1:00 PM
Coolidge

Chair(s): David Li, Pfizer; Jin Xu, Merck

Plenty methods have been used to handle the missing data problem. Some of them are based on the maximum likelihood function while others are based on multiple imputation; some of them are valid only when the data are missing at random; others may be more complicated but useful when the missing data are non-ignorable. None of these methods are universally best. Selecting a proper and most efficient method to analyze the data from your particular clinical trial takes a broad knowledge of the methods and an in-depth understanding of your data. How to put forth a convicing argument for the MAR assumption? How to select meaningful sensitivity analyses to validate the conclusion drawn from your primary analysis? This roundtable discussion will gather a group of statisticians to share their best practices and view points, and for all to learn from each other’s experiances.

 

TL32 A Comprehensive Review of Sample Size Determinations for the Wilcoxon-Mann-Whitney Test

09/17/13
11:45 AM - 1:00 PM
Coolidge

Chair(s): Gary Lynn Kamer, FDA; Dewi (Gabriela) Rahardja, FDA

The determination of the required sample size for a clinical study is always an important step in assuring that the study results provide meaningful information. It's not a pleasant occurence when a statistically underpowered study results in a failure to establish a definitive answer to the question posed by the hypotheses. Money and time have been wasted. Sample size estimation is even more important when the less-than-definitive study results in not only the waste of time and money, but also in the failure to establish the marketability of a new medical device, for example. For most parametric statistical tests, the primary concerns are the selection of the anticipated rates or means and the choice of statistical parameters. Various easy to use sample size estimation programs are readily available. Also, reasonably simple sample size formulae usually exist. But, what if your study is expected to result in data which are not normally distributed? The Wilcoxon-Mann-Whitney Test is a powerful non-parametric test for which we will present sample size formulae for continuous data where data from two groups are available, for continuous data where only data from one group is available along with summary statistics from a second group, and for continuous data where only summary statistics are available for both groups. Also, we will present sample size formulae for ordinal data where those data satisfy the proportional odds assumption, and for where those data may not satisfy the proportional odds assumption.

 

TL33 Sample size estimation for multi-regional clinical trials

09/17/13
11:45 AM - 1:00 PM
Coolidge

Chair(s): Kimberly M Cooper, Janssen Research & Development

Multi-regional clinical trials (MRCTs) have been widely used for increased cost and time efficiency of global new drug development. One challenge of these trials concerns the impact of ethnic factors on clinical outcome and treatment effect. The question becomes what sample size is sufficient to show consistency of results across a number of regions. Guidelines exist to assist in determining the necessary sample size, for example the two methods proposed by Japan’s Ministry of Health, Labour, and Welfare (MHLW), but often feasibility issues arise. Questions: What approach do you take in determining sample size requirements for a region? What are your experiences with regulatory agencies in acceptance or rejection of sample size determination methods? Do approaches vary depending on therapeutic area?

 

TL34 Assessment of Multiple Endpoints in Phase 3 Preventive Vaccine Clinical Trials

09/17/13
11:45 AM - 1:00 PM
McKinley

Chair(s): Karen Lynn Goldenthal, Bethesda Biologics Consulting, LLC; Amelia Dale Horne, Division of Biostatistics/OBE/CBER/FDA

Multiple endpoints are often needed to adequately characterize the effectiveness of a product. There may be multiple primary as well as secondary endpoints, and these should be clearly defined in the protocol, with appropriate plans for Type 1 error control. This roundtable session will cover these issues for Phase 3 trials of preventive vaccines for infectious disease indications. Specifically, the focus will be immunogenicity and efficacy endpoints intended to support product labeling claims. (a) One example for discussion will be a trial with multiple immune response endpoints, all of which are important for labeling or related regulatory purposes. Sources of multiplicity for vaccine immunogenicity trials may include the assessment of combination vaccines, simultaneous administration of the new vaccine with previously licensed vaccines, clinical lot consistency, etc. (b) Another example for discussion will be a vaccine efficacy trial with multiple primary and secondary clinical disease endpoints. (c) In addition, attendees are encouraged to share their perspectives about issues regarding Type 1 error control for multiple endpoints in vaccine trials.

 

TL35 Non-inferiority studies for both human and veterinary medicine

09/17/13
11:45 AM - 1:00 PM
McKinley

Chair(s): Anna Nevius, FDA/CVM

Non-inferiority studies represent an important class of designs for studying efficacy in the drug approval process. Many of the design issues are the same for human and animal medicine. However, veterinary medicine presents unique challenges including the choice of a control group, sample size, experimental unit, and choice of primary variable endpoint. The implications of related guidance from FDA and other institutions will be discussed.

 

TL36 Analyzing very large retrospective claims databases

09/17/13
11:45 AM - 1:00 PM
McKinley

Chair(s): C V Damaraju, Janssen Research and Development, LLC

Availability of rapidly growing insurance and medical claims databases promise to offer unique insights into post-marketed experience with regulated products. While standard data mining techniques for broad statistical summaries are useful, analyses involving models to better understand the relationships between treatment exposure and outcomes require proper specification to avoid inherent biases and confounding. In this discussion, we will exchange ideas and experience about analyzing large observational datasets, common pitfalls in model specifications and best practices used. Two main questions are as follows - 1) What specific considerations would you take into account when developing a statistcal analysis plan? 2) How do you plan to use results obtained from observational analyses in comparing with clinical studies in terms of key rates or statistics?

 

TL37 Pharmacokinetic Dose Proportionality studies

09/17/13
11:45 AM - 1:00 PM
Tyler

Chair(s): Jaya Natarajan, Janssen Research & Development LLC

Pharmacokinetic (PK) dose proportionality (DP) is a desired property of a compound. Thus, evaluating DP in a properly designed study is a crucial part of drug development. However, in the absence of a regulatory guidance, the analysis of data from a DP study varies significantly between sponsors, ranging from descriptive statistics and graphical evaluation to mixed effects modeling. Some of the statistical techniques that have been used are: (1) ANOVA modeling of dose-normalized PK parameters and Bioequivalence testing between each pair of doses; (2) Linear regression modeling of PK parameters vs. dose where intercept = 0 for DP; (3) Power model or log-linear modeling of logarithm of PK parameter vs. logarithm of dose where slope = 0 for DP.1 Bioequivalence criterion can also be applied to regression models to yield a corresponding criterion on the slope of regression line.2 In crossover DP studies, the subject-specific random effect terms (subject-specific random intercept and random slope) can also be added to the regression model. But, with limited number of doses studied (3 or 4), the model may not always converge. Following are the questions for discussion: 1. In your company, is dose proportionality evaluated only from early development studies? Do you conduct a separate pharmacokinetic dose proportionality study for most compounds? 2. What are the study designs is utilized for DP studies? Parallel group, crossover, single-sequence? 3. How is the sample size determined? Estimation (precision) approach or hypothesis testing (power) approach? 4. What is the analysis methodology? (a) Descriptive statistics and graphs only; (b) Hypothesis testing? If hypothesis testing, what is the model used for analysis? 5. What are the pros and cons for each analysis method? 6. Do you use random intercept, random slope model for analyzing data from crossover studies? What are the issues with convergence of the model? Reference: 1. Chow and Liu: Design and Analysis of Bioavailability and Bioequivalence Studies, Chapman &Hall/CRC, 2009. 2. Smith Bp, et.al. Confidence Interval Criteria for Assessment of Dose Proportionality. Pharmaceutical Research, vol. 17, pp 1278 - 1283.

 

TL38 Vaccine Antigen Overages

09/17/13
11:45 AM - 1:00 PM
Tyler

Chair(s): Louis George Luempert, Novartis Animal Health

All vaccines should be released at a titer high enough to assure that the last vaccination is sufficiently potent to meet the proposed claim (protection, etc). Methods are being developed to assure that vaccinates will receive a dose that is at or above the Minimum Protective Dose for live vaccines. In order to assure that live vaccine products will be released at a titer that takes into consideration vial/assay variation (termed ‘Overage 1’) and live titer loss throughout dating (termed ‘Overage 2’) two methods are being considered: 1) a 2s or 3s overage based on initial vial/assay variation, and 2) Replacing Overage 1 and Overage 2 with a single overage that ensures a sufficient proportion of the population of doses in a serial are above the MPD throughout the dating period. This would eliminate the need to characterize specific components of the required antigen overage.

 

TL39 Safety Evaluation During Clinical Drug Development – Size and Extent of Population Exposure

09/17/13
11:45 AM - 1:00 PM
Truman

Chair(s): Bradley McEvoy, CDER, FDA; Elena Polverejan, Janssen R&D, Johnson & Johnson

The ICH E1 guideline provides a set of principles for the development of a sufficiently large safety database for drugs that are given for more than six months for non-life-threatening diseases. The guideline suggests 300-600 patients treated for six months, 100 patients treated for one year, with a total number of 1500 patients exposed to the drug, including short-term exposure. The guideline recognizes a few exceptions to these rules, such as when there is concern the drug will cause late developing adverse events or there is need to quantify the occurrence rate of an expected low-frequency adverse event. These exceptions require a larger or longer-term safety database, employing the use of various statistical methodologies and corresponding criteria. The participants will share their views and experience on these topics. Key Discussion Questions: - What statistical criteria are most commonly used to determine the size or the extent of exposure of a safety database? - Under what circumstances could interim results pertaining to a long-term safety database be used for a submission? When are post-approval commitments possible?

 

TL40 Challenges in Post-Marketing Safety Assessment by Using Spontaneous Reports

09/17/13
11:45 AM - 1:00 PM
Truman

Chair(s): Brent Burger, Cytel; Rongmei Zhang, FDA

Post-marketing safety assessment provides essential information in characterizing a drug’s safety profile as pre-marketing studies are often underpowered to detect rare and possibly serious adverse events. Spontaneous reports systems, such as FDA Adverse Event Reporting System (FAERS, formerly AERS) or Poison Control Centers, are two separate datastreams that allow the evaluation of drug safety after the drug has been approved. In the case of FAERS, the challenge is that while a large number of records for suspected adverse drug reactions exists, the data have numerous limitations including substantial underreporting and potential misclassifications. In this roundtable, we are going to discuss different statistical approaches to formalize the signal generation from spontaneous reports. Key Discussion Questions: - For the numerator-based data such as FAERS, what are traditional approaches? What Bayesian approaches (or other newer methods) are available and what are their advantages compared with traditional approaches? - When denominators are available such as the number of prescriptions for each drug, can and should this additional information be used in the analysis of spontaneous report? If so, what statistical approaches can be used?

 

TL41 Some Challenges of Using Open-Source Software in a Regulatory Environment

09/17/13
11:45 AM - 1:00 PM
Taylor

Chair(s): Jae Brodsky, FDA

Open source software offers affordable alternatives to commonly used proprietary tools in both regulatory agencies and industry. We will discuss how open source software can aid both FDA and industry statisticians, review some of the open source software that is already in use at the FDA, and discuss how industry statisticians can use open source software in submissions to the FDA. As a case example, R is often used as an alternative to proprietary statistical analysis software such as SAS, in industry, academia, and at the FDA. We will discuss our experiences using R at the FDA.

 

TL42 Effective Use of Table, Figure and Listing in the Study Report of a Clinical Trial

09/17/13
11:45 AM - 1:00 PM
Taylor

Chair(s): Wei Wang, Eli Lilly and Company

Given that the cost of running clinical trials continues to rise, various efforts have been made to reduce this cost. One area explored has been reducing the number of Tables, Figures and Listings (TFLs) in study reports. As statistical leaders, we may be able to reduce the number of TFLs and thereby streamline the relevant information and improve the way data are presented in the study report. 1. On average, how do you feel about the number of TFLs generated for the study reports in your company? Is it too many? About right? 2. What can statisticians do to reduce the number of TFLs and deliver a more concise and more informative study report? Will (interactive) review tools help reduce the TFLs?

 

TL43 Statistical Analysis Plan (SAP) or Robust Statistical Methods - What does the FDA need for Protocol Evaluation

09/17/13
11:45 AM - 1:00 PM
Taft

Chair(s): Janet Elizabeth McDougall, McDougall Scientific Ltd.

A well-written and robust statistical methods section is required for a protocol to be fairly evaluated by regulators (e.g., FDA), local Ethics Review Boards and the investigational team. Recently this has been translated into requiring that the SAP be part of the IND (or CTA in Canada) submission. Is it time for a guidance (even ICH) on the contents of the statistical methods section for a protocol and the statistical analysis plan? The SAP, signed prior to data unblinding, is an essential document reflecting the agreed-upon contents of the protocol and the knowledge gained (e.g., blinded data review, emerging analysis trends) across the life of the trial. Providing a draft SAP prior to the acceptance of the protocol can diminish the role of both the statistical methods section in the protocol and the SAP. - Agree or Disagree.

 

TL44 Development of Statistical Analysis Plans (SAPs) in Observational Studies

09/17/13
11:45 AM - 1:00 PM
Taft

Chair(s): Ari D Marcus, Quintiles Outcome

The SAP as a required element of clinical trials is well established and standardized. However, the importance of developing clear and detailed analysis plans for observational studies is less well understood. Observational studies pose unique challenges not encountered in randomized controlled trials that need to be addressed in the SAP (such as the lack of a prescribed study schedule and bias due to the non-randomized, uncontrolled nature of the study), and they often call for different statistical methods (with a focus on precision or prediction rather than hypothesis testing). This session starts with an illustration of the benefit of having clear, detailed SAPs in observational studies, followed by a discussion of specific elements of the observational SAP that may differ from a SAP for clinical trials, the timing of the SAP, and considerations for SAP revisions.

 

TL45 Recent Developments in the Clinical Trials for Alzheimer’s Disease

09/17/13
11:45 AM - 1:00 PM
Madison B

Chair(s): Jingyu Luan, FDA; Chengjie Xiong, Washington University

Alzheimer’s Disease (AD), a progressive neurodegenerative disorder, is a common cognitive disorder of the elderly population. AD begins with memory loss and progresses to severe impairment of activities of daily living, leading to death on average approximately 8 years after the diagnosis of dementia. The prevalence of AD in people over the age of 65 is 5-10%, increasing to up to 50% in those over the age of 85. Alzheimer’s Disease affects approximately 4.5 million people in the United States, and it is predicted that 13.8 million people in the U.S. will have AD by 2050. Accumulating research evidence suggests that the neurodegenerative processes associated with AD begin years prior to the symptomatic onset of AD, when the disease is clinically at the early prodromal stage or even the latent stage. To date, there are no pharmaceutical treatments that reverse the pathological processes of AD. Hence, it will be critically important to design randomized clinical trials for individuals at the earliest clinical stages, since targeted therapies may have the greatest chance of preserving normal brain function for this group of individuals. This roundtable will discuss recent developments in clinical trials for Alzheimer’s Disease. Key discussion questions: What are the challenges in the design and analysis of early and prodromal AD trials? What is the role of biomarkers in the design of early and prodromal AD trials? What cognitive and functional outcome measures should be used in early and prodromal AD trials? What is the current status of the Dominantly Inherited Alzheimer’s Network (DIAN) and the DIAN trials?

 

TL46 Sensitivity Analyses of PFS in Oncology Trials

09/17/13
11:45 AM - 1:00 PM
Madison B

Chair(s): Biao Xing, Onyx Pharmaceuticals

In oncology trials with a progression-free survival (PFS) endpoint, sensitivity analyses are typically required to demonstrate the robustness of the primary analysis results to potential alternative ways of assessing disease outcomes, censoring data, or conducting analyses. The FDA and EMEA have released guidance on designing and analyzing trials with a PFS endpoint. Differences exist in the regulatory views as well as in practices across the pharmaceutical industry. This roundtable will discuss the regulatory views, survey industry practices, and share lessons learned.

 

PS1a Adaptive Population Enrichment Designs in Oncology

09/17/13
1:00 PM - 2:15 PM
Thurgood Marshall North

Organizer(s): Jianxiong Chu, Food and Drug Administration; Bo Huang, Pfizer Inc.; Sandeep Menon, Pfizer Inc.; Qing Xu, FDA

Chair(s): Jared Christensen, Pfizer

Adaptive population enrichment designs start out by enrolling all-comers, but build in an interim analysis at which one may examine the data and restrict future enrollment to a subgroup identified by a pre-specified biomarker. Such designs are especially important for phase 2 proof of concept oncology trials involving molecularly targeted therapies. If it is demonstrated in phase 2 that the biomarker is predictive, a phase 3 confirmatory trial can be launched entirely in the enriched population and a validated companion diagnostic test can simultaneously be developed. This session will present simulation tools, predictive classifiers and formal statistical methods for controlling type-1 error rates in such designs. Bayesian decision rules that connect response to survival may be utilized for the interim analysis. Speakers: Dr. Richard Simon, NCI (Confirmed); Dr. Cyrus Mehta, Cytel (Confirmed); Dr. Mei-Chiung Shih, Stanford University (Confirmed). Discussant: Dr. Sue-Jane Wang (Confirmed)
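To give a flavor of the kind of simulation tool such designs call for, the sketch below estimates, under a global null, the type-1 error of a deliberately naive two-stage enrichment rule: stage-2 enrollment is restricted to the biomarker-positive subgroup when its interim estimate looks markedly better, and the final analysis is an ordinary pooled z-test that ignores the adaptation. The decision rule, sample sizes, and threshold are invented, and the point is only to show why the formal type-1 error control the speakers will present is needed.

    # Monte Carlo estimate of the type-1 error of a naive adaptive enrichment rule
    # under a global null; all design constants here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2013)
    n_sims, n1, n2 = 5_000, 100, 100    # simulations; per-arm, per-stage sizes
    prev = 0.5                          # biomarker-positive prevalence (assumed)
    rejections = 0

    for _ in range(n_sims):
        # Stage 1: all-comers; true effect is zero in both subgroups (global null).
        marker = rng.random(2 * n1) < prev
        trt = np.repeat([0, 1], n1)
        y1 = rng.standard_normal(2 * n1)

        def subgroup_effect(mask):
            return y1[mask & (trt == 1)].mean() - y1[mask & (trt == 0)].mean()

        # Arbitrary interim enrichment rule: restrict stage 2 to marker-positives
        # if their estimated effect clearly exceeds that of the complement.
        enrich = subgroup_effect(marker) - subgroup_effect(~marker) > 0.3

        # Stage 2 data (again under the null).
        y2 = rng.standard_normal(2 * n2)
        trt2 = np.repeat([0, 1], n2)
        if enrich:
            y_all = np.concatenate([y1[marker], y2])
            t_all = np.concatenate([trt[marker], trt2])
        else:
            y_all = np.concatenate([y1, y2])
            t_all = np.concatenate([trt, trt2])

        # Naive final analysis: pooled z-test that ignores the adaptation.
        a, b = y_all[t_all == 1], y_all[t_all == 0]
        z = (a.mean() - b.mean()) / np.sqrt(1.0 / len(a) + 1.0 / len(b))
        rejections += z > 1.96          # one-sided 0.025 level

    print(f"estimated type-1 error of the naive rule: {rejections / n_sims:.4f}")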

Finding the Right Patient Population for a Drug
Richard Simon, National Cancer Institute

Joint Modeling of Response and Time to Event in Phase II-III Cancer Trials
Mei-Chiung Shih, VA Palo Alto Cooperative Studies Program Coordinating Center

Simulation-Guided Design for Molecularly Targeted Therapies in Oncology
Cyrus Mehta, Cytel Inc

 

PS1b Advanced Causal Inference Methods for Postmarketing Safety Evaluation

09/17/13
1:00 PM - 2:15 PM
Thurgood Marshall South

Organizer(s): Chris Holland, Amgen; Qi Jiang, Amgen Inc.; Greg Soon, U.S. Food and Drug Administration

Chair(s): Lan Huang, OTS-CDER- FDA

This session is motivated by the need to join forces among biostatisticians working on causal inference methods and postmarketing safety evaluation in academia, FDA and industry, to share ideas, and to foster interdisciplinary and inter-institutional collaborations. The goal of the session is to promote collaboration between academia, FDA and industry statisticians and to synergistically enhance the quality of postmarketing safety evaluation by using advanced causal inference methods to deal with complicated confounding issues. This topic has not often been presented in the past, especially from the “advanced causal methods” perspective. Although an analysis based on a carefully conducted, randomized and controlled clinical trial is still the gold standard for obtaining valid causal effects of medical products, such designs can be either impractical or too burdensome to conduct in post-market studies. For example, a prospective, controlled cohort design is frequently used for the Post-Approval Study (PAS) of medical devices in the post-market phase. In observational postmarketing studies, confounding can arise at various stages of the study, and consequently the resulting statistical inference may be biased and misleading, which could lead to wrong medical conclusions. To ensure the objectivity of the study design and the validity of study inference, it is critical to address the confounding issues arising in postmarketing safety studies. Over the last two decades, causal inference methods, particularly propensity score approaches, have become more and more popular in postmarketing safety evaluations, and new developments and extensions have been published in recent years. In this session, advanced and up-to-date causal methods will be presented and discussed in the context of applications to postmarketing safety assessments.

Marginal structural models for observational studies with time-dependent treatments
Daniel Rubin, FDA

Causal inference methods to assess safety upper bounds in randomized trials with noncompliance
Yiting Wang, Janssen Research & Development, LLC

Latent Propensity Score Approach for Post-market Evaluation of Regular Breast Pump Usage
Yi Huang, University of Maryland

 

PS1c Graphical Visualizations of Data

09/17/13
1:00 PM - 2:15 PM
Thurgood Marshall East

Organizer(s): Vijay Chauhan, Alpha Stats Inc (CRO); Jing Han, FDA; Jennifer Schumi, Statistics Collaborative; Wei Zhang, FDA

Chair(s): Virginia Recta, FDA

Graphical displays of quantitative information in different forms help us better understand the data, thereby leading to more informed decisions and more effective communication of the data. In safety studies especially, graphics are very useful for detecting safety signals by providing displays that visualize trends, correlations, and anomalous observations, thereby helping us quickly identify clinical anomalies that require further analyses. This session will invite speakers from academia, industry and FDA to present graphics for visualization of data.

Visualization of the Adverse Events Data
Misha Salganik, Cytel Inc

Summarizing the Incidence of Adverse Events Using Volcano Plots
Richard C. Zink, JMP Life Sciences, SAS Institute, Inc.

A Web site for Running and Sharing R Scripts
Jonathan G Levine, FDA

 

PS1d CMC1 - Process Validation (CMC/Non-clinical parallel session)

09/17/13
1:00 PM - 2:15 PM
Thurgood Marshall West

Organizer(s): Meiyu Shen, FDA; Harry Yang, MedImmune

The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance recommends the use of statistics throughout process validation, it does not prescribe specific methods. This session is intended to discuss statistical tools/methods that can be used to address the following issues concerning process validation: • How to establish acceptance criteria • How to determine number of validation batches • How to estimate inter- and intra-batch variations • How to develop effective acceptance sampling plans • How to trend data to ensure continued process verification Speakers: • Xiaoyu (Cassie) Dong (FDA) • Helen Strickland (GSK)
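For one of the bulleted questions, estimating inter- and intra-batch variation, a common starting point is a one-way random-effects (variance components) model; the sketch below applies the standard method-of-moments ANOVA estimators to simulated batch data (all constants are invented and no specific guidance method is implied).

    # One-way random-effects ANOVA estimates of inter- and intra-batch variance
    # from simulated assay results (batch structure and constants are made up).
    import numpy as np

    rng = np.random.default_rng(42)
    n_batches, n_per_batch = 5, 6
    sigma_between, sigma_within = 0.8, 1.5   # true values used only to simulate

    batch_effects = rng.normal(0, sigma_between, n_batches)
    data = np.array([rng.normal(100 + b, sigma_within, n_per_batch)
                     for b in batch_effects])

    batch_means = data.mean(axis=1)
    grand_mean = data.mean()
    ms_between = n_per_batch * np.sum((batch_means - grand_mean) ** 2) / (n_batches - 1)
    ms_within = np.sum((data - batch_means[:, None]) ** 2) / (n_batches * (n_per_batch - 1))

    var_within = ms_within                                      # intra-batch
    var_between = max((ms_between - ms_within) / n_per_batch, 0.0)  # inter-batch
    print(f"intra-batch variance estimate: {var_within:.2f}")
    print(f"inter-batch variance estimate: {var_between:.2f}")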

The Science of Quality from a Product Lifecycle Perspective
Helen Strickland, GlaxoSmithKline

Statistical Issues in Process Validation
Xiaoyu Dong, FDA

 

PS1e Prevention and Treatment Of Missing Data In Observational Studies

09/17/13
1:00 PM - 2:15 PM
Lincoln 5

Organizer(s): Chul H Ahn, FDA/CDRH; Kari B Kastango, Quintile Outcome; Gosford Sawyerr, Cognizant; Yan Zhou, FDA

PURPOSE: The effect of missing data on the validity of inference in prospectively designed observational studies is potentially greater than in randomized trials, given the selection bias inherent in the former and the inability to mask treatments. The objective of this session is to bring awareness to the challenges of missing data in observational research from study design and analysis perspectives, and to discuss study design elements and analysis methods that may support meaningful and valid inference in observational research. DESCRIPTION: Prospectively designed observational studies are widely used to study the post-approval drug-exposed population for drug and medical safety assessments. Imposing structure on data captured from real-world clinical practice potentially increases the magnitude of missing data. The National Research Council issued the report The Prevention and Treatment of Missing Data in Clinical Trials in December of 2012. However, the focus of the report was on the assessment of intervention efficacy in confirmatory randomized controlled clinical trials. Understanding the potential sources of missing data in a study whose design imposes structure on data captured from real-world clinical practice allows the selection of study design elements that may help reduce the magnitude of missing data. Given that missing data exist, analysis methods that support meaningful and valid inference from observational research are necessary. This session starts with an overview of the potential sources of missing data in prospectively designed observational research, including retrospective data collection and patient-reported outcomes. It then focuses on proactive planning and data collection. The session also aims to discuss the types of missing data in observational studies, statistical methods for testing missing data patterns and handling missing data, when these methods should be applied, and the impact of missing data on the interpretation of study findings. The concepts and methods will be explained with the use of real-world observational study data, when possible.

Prevention of Missing Data in Observational Studies – Study Design, Operational, and Analysis & Reporting Considerations
Eric Gemmen, Quintiles

Considerations for Missing Data in Medical Device Observational Studies – a Statistical Reviewer’s Thoughts
Sherry Yan, FDA

Analytic Methods for Handling Missing Data
Zhaohui Su, Quintiles Outcome

Discussant(s): Don Rubin, Harvard University

 

PS1f Statistical Challenges in Surrogate Biomarker Evaluation

09/17/13
1:00 PM - 2:15 PM
Lincoln 6

Organizer(s): Jae Brodsky, FDA; Xin Gao, FDA; Qing Liu, Janssen; Nandini Raghavan, Janssen R&D

Chair(s): Tom Fleming, University of Washington

The ultimate goal of developing surrogate biomarkers is to replace clinical endpoints in clinical trials and to allow timely patient management. However, the statistical methodologies have been elusive and are often misunderstood. The objectives of this session are 1) to explain the basic concepts relating to association, causality, and the role and criteria of surrogate biomarkers as measures of the underlying disease pathways and of their predictability of clinically meaningful endpoints, 2) to present new developments in assessing surrogate biomarkers, 3) to illustrate their various uses in clinical development, and 4) to explore organizational structures for facilitating translational research and development within pharmaceutical companies to bridge various functional expertise and activities.

An Overview of Biomarkers and Surrogate Endpoints and Their Regulatory Roles
Greg Soon, U.S. Food and Drug Administration

Nature of Biomarkers and Surrogate Endpoints in Clinical Trials
Tom Fleming, University of Washington

Use of Semi-surrogate Biomarkers for Biosimilar Drug Development
Lisa Hendricks, Novartis

Predicting a future analysis of a time-to-event endpoint
Mark Rothmann, FDA

Discussant(s): Yulan Li, Novartis

 

PS2a Data Analysis Issues in Molecular Diagnostics

09/17/13
2:30 PM - 3:45 PM
Thurgood Marshall North

Organizer(s): Jae Brodsky, FDA; Anna Ketterman, FDA; Qin Li, FDA; Nusrat Rabbee, Pharmanet-i3, Inc.

Chair(s): Nusrat Rabbee, Pharmanet-i3, Inc.

The advancement of high-throughput technology has made it possible to analyze hundreds of biomarkers (e.g., gene expression, SNPs, IHC) for predictive, prognostic or diagnostic purposes. This session will focus on statistical challenges faced in the analysis of these biomarkers, presented by speakers who are working towards developing multi-dimensional molecular diagnostic tests. Much R&D investment is being spent on exploring biomarkers for diagnosis of disease and prediction of treatment efficacy. The statistical challenges of exploring a vast number of biomarkers in a single trial - as well as across trials - will be explored. Issues of disease heterogeneity and of finding a single statistical algorithm to diagnose the disease or condition will be addressed. The presentations will focus on methods and/or the analysis of data obtained from trials in the diagnostics industry. Speakers: 1. Preeti Lal, Director Biology, Biomarkers, Gilead Sciences, Inc. 2. Michael Crager, Research Fellow, Genomic Health, Inc. 3. Andrea Foulkes, Associate Professor and Director, Institute for Computational Biology, University of Massachusetts at Amherst

Separate Class True Discovery Rate Degree of Association Sets for Biomarker Identification
Michael R. Crager, Genomic Health, Inc.

MixMAP: An approach to gene level testing of association
Andrea S Foulkes, Division of Biostatistics, UMass Amherst School of Public Health and Health Sciences

Data Analysis Issues in Molecular Diagnostics
Preeti Gupta Lal, Gilead Sciences

 

PS2b The Challenges of Studies in Which Consent is Waived

09/17/13
2:30 PM - 3:45 PM
Thurgood Marshall South

Organizer(s): Weili He, Merck & Co., Inc.; Hope Knuckles, Abbott; Judy Li, FDA CBER/OBE/DB; Estelle Russek-Cohen, US FDA CBER

Chair(s): John Scott, FDA CBER/OBE/DB

In the area of emergency medicine, it is not always feasible to obtain informed consent from a patient. Conducting well-run clinical trials in this setting is therefore still relevant and requires special efforts, including obtaining community consent. These studies are reviewed by FDA, and both ethical and scientific merit are considered. A number of them have been sponsored by the US Department of Defense because of their relevance to soldiers in combat. Several such studies have been seen in CBER because blood products and components are reviewed in CBER. However, the issues around waiver of informed consent are not center specific and should be of interest to other attendees. There are three speakers in the session. Dr. Sara Goldkind is a medical officer who provides advice to review teams at FDA that evaluate such submissions. She is a bioethicist, and her perspectives on how these studies are reviewed are worth listening to. Dr. Renee Rees in CBER has worked as a statistician on several of these kinds of submissions and will highlight the statistical challenges in these studies, including study design and analysis issues. Dr. Barbara Tilley (U Texas Houston Chair of Biostatistics) has been involved in such investigations and will provide her insights from a specific study funded by DOD.

Everything statisticians ever wanted to know about 50.24 studies but were afraid to ask.
Sara Goldkind, CDER FDA

Waived Informed Consent in FDA/CBER Clinical Trials
Renee Rees, CBER FDA

Application of EFIC in the Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial
Barbara C. Tilley, University of Texas School of Public Health

 

PS2c Sample Size Planning for Psychiatric Clinical Trials

09/17/13
2:30 PM - 3:45 PM
Thurgood Marshall East

Organizer(s): Yeh-Fong Chen, US Food and Drug Administration; Matthew Davis, Theorem Clinical Research; Jennifer Lee, Otsuka America Pharmaceutical, Inc.; Anna Sun, FDA

Chair(s): Yeh-Fong Chen, US Food and Drug Administration

One critical component of a successful phase III trial is sample size planning. Good sample size planning will ensure a well-powered study and avoid recruiting an unnecessarily large number of patients to the trial. With sufficient prior knowledge about the drug and a good understanding of the disease process, sample size planning can be done adequately so that the risk of an underpowered study is low. In the case of psychiatric clinical trials, though, sample size planning can be very challenging because there are substantial differences between the phase III trial and the earlier-phase trials in the analyses of repeated measurements, the methods of handling missing data, and the study designs used to reduce high placebo response. Therefore, it is important to know how to efficiently utilize the available information from historical trials and to keep abreast of new methods for sample size planning under different designs and analyses. In this session, three speakers will share their new methods and experience with sample size planning in psychiatric clinical trials. They will discuss the potential gains and possible drawbacks of each method as well as comparisons across methods.

Sample size calculations accounting for the placebo-based pattern-mixture model sensitivity analysis
Kaifeng Lu, Forest Laboratories

Sample Size Planning – Experience from Reviewing Psychiatric Clinical Trials
Peiling Yang, US FDA

Sample Size Planning for Psychiatric Clinical Trials
Anastasia Ivanova, University of North Carolina at Chapel Hill

 

PS2d CMC2 - Biomarkers and Big Data (CMC/Non-clinical parallel session)

09/17/13
2:30 PM - 3:45 PM
Thurgood Marshall West

Organizer(s): Donald Bennett, Biogen Idec; Terri Johnson, FDA; Jingjing Ye, FDA/CDRH

Chair(s): Donald Bennett, Biogen Idec Inc.

Traditional biomarker discovery strategies focus on proteins identified by hypotheses driven from systems biology or from knowledge of biological pathways and mechanisms of action. A limited number of biomarker targets are evaluated in animal models to track disease progression and pharmaceutical intervention. The advent of big data available from public databases combining genomics, genetics, proteomics, and images with clinical and phenotype data is shifting the biomarker discovery process. Discovery of biomarkers for specific diseases with thousands to millions of variables shifts the analytic tools towards computational biology and data mining. Statisticians are playing an increasingly prominent role in merging these different methods into better biomarker discovery strategies. This session will explore the interface between statistics, genetics, and -omics methods used by statisticians in biomarker discovery. Speakers: Lisa McShane, NCI; Keith Baggerly, MD Anderson. Discussant: Richard Simon, NCI

Best practices in the development of omics-based tests to guide patient care
Lisa McShane, National Cancer Institute

Computational Biology – Biostatistics and New Opportunities for Therapeutics Discovery
Richard Simon, National Cancer Institute

When is Reproducibility an Ethical Issue? Genomics, Personalized Medicine, and Human Error
Keith Baggerly, MD Anderson Cancer Center

Discussant(s): Richard Simon, National Cancer Institute

 

PS2e Invited JBS Paper Session 1

09/17/13
2:30 PM - 3:45 PM
Lincoln 5

Organizer(s): Bruce Binkowitz, Merck and Co. Inc.; Lilly Yue, FDA

Chair(s): Lilly Yue, U.S. FDA

A Case Study of PK-PD Modeling and Exposure-Response Prediction for Count Data
Hui Quan, sanofi

Enabling Earlier Identification of Genomic Predictors in Drug Development
Devan Mehrotra, Merck

DMCs in Clinical Trials: Preserving Independence to Protect Integrity
Tom Fleming, University of Washington

 

PS3a Adaptive design trials: overcoming challenges and improving efficiencies

09/17/13
4:00 PM - 5:15 PM
Thurgood Marshall North

Organizer(s): Yeh-Fong Chen, US Food and Drug Administration; Weili He, Merck & Co., Inc.; George Kordzakhia, FDA; Anthony James Rodgers, Merck & Co, Inc.

Chair(s): Weili He, Merck & Co., Inc.

Clinical trials designed with adaptive features have great potential to result in more efficient decision making within a medical product development program. However, adaptive clinical trials are more complex to plan and implement than traditional fixed-sample designs. Challenges to the use of clinical trials with adaptive features may include, but are not limited to, concerns about the integrity of study design and conduct, the risk of acceptance of certain adaptive design features, the need for an advanced infrastructure for complex randomization and clinical supply scenarios, change management for process and behavior modifications, extensive resource requirements for the planning and design of adaptive trials, and the potential to relegate key decision making to outside entities. In particular, for trials which incorporate interim analyses of unblinded data, an independent Data Monitoring Committee (DMC) is commonly constituted to review the results to ensure patient safety and protect trial integrity. Whether or not the DMC will function effectively and provide sound recommendations to the sponsor depends on the DMC’s qualifications and experience, and on the degree of pre-planning. The speakers and panelists in this session will discuss the challenges encountered in real clinical trials with adaptive designs. They will also provide potential solutions for how to overcome possible barriers and improve trial efficiencies.

Risk Mitigation Strategies in Design and Implementation of Adaptive Designs
Brenda Gaydos, Eli Lilly and Company

Panel discussion
Vladimir Dragalin, Aptiv Solutions; Paul Gallo, Novartis; Brenda Gaydos, Eli Lilly and Company; James Hung, FDA; LJ Wei, Harvard School of Public Health

How to utilize an independent data monitoring committee to run an adaptive clinical trial?
LJ Wei, Harvard School of Public Health

 

PS3b Non-Inferiority Studies for Diagnostic Devices

09/17/13
4:00 PM - 5:15 PM
Thurgood Marshall South

Organizer(s): Jeffrey Louis Joseph, Theorem Clinical Research; Guoying Sun, FDA; Alicia Toledano, Biostatistics Consulting; Zhiheng Xu, US Food and Drug Administration

The performance of diagnostic devices is usually defined by a pair of measures, such as sensitivity and specificity, positive and negative predictive values, or likelihood ratios of positive and negative tests. The predictive values depend on the prevalence of the condition of interest, and this is important to remember during planning. Non-inferiority studies are also more complex for diagnostic devices: if both measures of a pair are higher, we have a superior test. But if one is better and one is worse (for example, better sensitivity and worse specificity for the new test compared to the old), the new test could be better or worse, depending not only on the magnitude of the differences, but also on the consequences of misclassification. We will consider situations where the new test could be worse, but by a clinically insignificant margin. Note that the same magnitude of difference could make one test superior, one non-inferior, and another truly inferior, since the consequences of false positives and false negatives are a crucial part of determining the performance of a diagnostic test.
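A minimal sketch of how a non-inferiority comparison for one member of the pair, say sensitivity, might be carried out: the lower confidence bound for the new-minus-old difference is compared against a pre-specified margin. For simplicity the two tests are treated here as independent groups (a paired design, with both tests applied to the same subjects, would require paired methods), and the counts and margin are invented.

    # Non-inferiority check for sensitivity: new test vs. comparator, unpaired
    # groups for simplicity. Counts and the NI margin below are made up.
    import math

    x_new, n_new = 172, 200     # true positives detected / diseased subjects, new test
    x_old, n_old = 180, 200     # same for the comparator
    margin = 0.10               # assumed clinically insignificant loss in sensitivity

    p_new, p_old = x_new / n_new, x_old / n_old
    diff = p_new - p_old
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    lower_95 = diff - 1.96 * se  # lower bound of a two-sided 95% Wald CI

    print(f"difference in sensitivity = {diff:.3f}, 95% CI lower bound = {lower_95:.3f}")
    print("non-inferior" if lower_95 > -margin else "non-inferiority not shown")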

Sequential Diagnostic Trials with applications to non-inferiority studies
Larry Tang, George Mason University

Non-Inferiority Studies for Diagnostic Devices
Gajanan Bhat, Lantheus Medical Imaging, Inc.

Non-Inferiority Studies for Diagnostic Devices
Lakshmi Vishnuvajjala, FDA\CDRH

 

PS3c Recent Regulatory/Industry Experiences in Biosimilar Efficacy Trial Designs

09/17/13
4:00 PM - 5:15 PM
Thurgood Marshall East

Organizer(s): Joshua Chen, Merck and Co. Inc.; Xiaoyu Dong, FDA; Pilar Lim, Janssen Research & Development, LLC; Yi Tsong, FDA

Chair(s): Lisa Hendricks, Novartis; Qing Liu, Johnson and Johnson

The FDA draft guideline on biosimilar drug development suggests a stepwise, totality-of-the-evidence approach by which biosimilarity may be established through highly similar structural and functional characterization, pre-clinical studies, and standard human pharmacology (i.e., PK/PD) studies. If these early steps still leave residual uncertainty, clinical efficacy and safety trials may be necessary. In practice, the FDA may recommend a less stringent standard compared to those for general non-inferiority trials. On the other hand, the EMA/WHO guidelines require essentially all of these studies and that many aspects of the trials be conducted according to the standards set for general drug development. This lack of harmonization often leads development programs intended for world-wide submission to meet the most stringent standards in order to satisfy the requirements of all regulatory agencies. As a result, it is often necessary for a developer to plan a clinical efficacy and safety trial. Since the publication of the EMA/WHO/FDA guidelines, there have been numerous industry and regulatory interactions for various biosimilar programs. The objective of this session is to share recent industry experiences with regulatory agencies. Specific emphasis is on biosimilar clinical efficacy and safety trials, addressing issues ranging from the clinical choice of disease model, biomarker and endpoint to statistical aspects of setting hypotheses, use of historical data, and derivation of non-inferiority margins.

Overview of Regulatory Guidelines and Current Statistical Issues
Shein-Chung Chow, Duke University

Statistical Considerations on Analytical Equivalence Assessment
Yi Tsong, FDA

Where is the right balance in designing an optimal phase III biosimilar trial?
Yulan Li, Novartis

Discussant(s): Tom Fleming, University of Washington

 

PS3d CMC3 - Outliers and Cut Point Determination

09/17/13
4:00 PM - 5:15 PM
Thurgood Marshall West

Organizer(s): Xiaoyu Dong, FDA; Victoria Hill Petrides, Abbott Diagnostics

Chair(s): Victoria Petrides, Abbott Laboratories

As the Rolling Stones sang, “You can’t always get what you want…” Such is the case with outliers. We don’t want them, yet they show up anyway. How do you recognize an outlier? What do you do when you find one? Are more lurking? Could an outlier actually be just what you need? In this session we will hear representatives from industry, academia and FDA present methods they use for identifying, monitoring and minimizing outliers. We will also discuss ways to capitalize on outliers to improve product performance. By the end of this session, we hope you will get what you want (or at least some information that you need).

Characterizing the “Outlier Profiles” of Diagnostic Products
Krista Michelle Birch, Abbott Labs

Truncated Outlier Filtering
Dr. Peter J. Costa, Hologic Incorporated

Determination of Bioassay Cut Point Using Confidence Limit of Percentile
Meiyu Shen, FDA

 

PS3e Subgroups: Making their Day in the Sun Arrive

09/17/13
4:00 PM - 5:15 PM
Lincoln 5

Organizer(s): Chul H Ahn, FDA/CDRH; Cristiana Mayer, Johnson & Johnson Pharmaceutical Research and Development; John Scott, FDA CBER/OBE/DB; Janet Turk Wittes, Statistics Collaborative

Chair(s): Janet Turk Wittes, Statistics Collaborative

Identifying individual characteristics that mediate a treatment effect is vitally important, but extremely challenging. The usual approach in clinical trials requires the analysis of possible interactions between the treatment and one or more covariates. This approach suffers from serious shortcomings related to issues of model specification, statistical power, and multiple comparisons. A rich literature in clinical trials has shown that these conventional methods of identifying subgroups of patients who should or should not be treated with a specific intervention often falsely discover subgroups that appear to respond differently from the average patient, or fail to find subgroups that do in fact respond unusually positively or negatively. Thus, a common stance is to dismiss almost all subgroup findings as inherently suspect. The presenters in this session believe that throwing the baby out with the bathwater in this way is ultimately inefficient and dangerous. This session describes three novel, but rigorous, approaches to subgroup analysis. The three speakers come to the problem with different philosophical and statistical approaches. Dr. Scott Solomon, the first speaker, is a cardiologist at Brigham and Women’s Hospital in Boston. His interest lies in finding drugs that are particularly useful for subsets of the population at risk of major cardiovascular events. He will present an example of a biologically deduced subgroup successfully validated by an independent analysis of data from a large randomized clinical trial. Dr. Richard Simon, a statistician from the National Cancer Institute, will present methods of identifying subgroups of patients who are likely to respond to, or who are likely not to respond to, specific agents. He is involved in trials of cancer therapeutic agents, many of which are highly toxic, so that treating unnecessarily can cause serious harm. He will show how to develop and internally validate a predictive classifier for each endpoint of interest, translating the problem to one of predictive classification, rather than a series of unvalidated subset analyses. The final speaker, Dr. Herbert Weisberg, a statistician from Causalytics, LLC, will introduce a radically different approach that does not involve interaction effects. The basic method utilizes a special outcome variable, the cadit, that has a known statistical relationship to the causal effect (rate difference or mean difference). The cadit is a simple function of the individual’s exposure status (e.g., drug vs. placebo) and outcome value. A statistical model, such as ordinary least squares or logistic regression, can then be derived using the cadit as the dependent variable. This model can suggest which of the covariates are related to the causal effect and the strength of the relationships. The results can be used to identify individuals or subgroups that are most likely to benefit, or least likely to be harmed.
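A minimal sketch of the cadit idea as described above, in the simplest setting of 1:1 randomization and a binary outcome: the cadit scores a subject +1 when a treated patient responds or a control patient does not, and -1 otherwise, so that its mean estimates the rate difference, and a regression of the cadit on baseline covariates suggests which covariates relate to the causal effect. The data and coefficients below are simulated and purely illustrative.

    # Sketch of the cadit for a 1:1 randomized trial with a binary outcome.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    z = rng.integers(0, 2, n)          # 1:1 randomized treatment indicator
    x = rng.normal(size=n)             # a baseline covariate
    # Simulated truth: the treatment benefit grows with x.
    p = 1 / (1 + np.exp(-(-0.5 + 0.2 * x + z * (0.3 + 0.6 * x))))
    y = rng.binomial(1, p)

    # cadit: +1 when treatment and outcome "agree" (treated responder or
    # untreated non-responder), -1 otherwise; with 1:1 randomization its mean
    # estimates the rate difference.
    cadit = np.where(z == y, 1, -1)
    print(f"mean cadit (~ rate difference): {cadit.mean():.3f}")

    # Regressing the cadit on the covariate suggests whether the causal effect
    # varies with x (a positive slope suggests larger benefit at larger x).
    fit = sm.OLS(cadit, sm.add_constant(x)).fit()
    print(fit.params)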

A Panel on Subgroups: Making their Day in the Sun Arrive
Richard Simon, National Cancer Institute; Scott Solomon, Harvard Medical School; Herbert Weisberg, Causalytics

 

Wed, Sep 18

PS4a Handling Equivocal Results in Diagnostic Tests

09/18/13
8:30 AM - 9:45 AM
Thurgood Marshall North

Organizer(s): Zhonggai Li, Novartis; Kristen L Meier, FDA, CDRH; Songbai Wang, Janssen Diagnostics

Chair(s): Kristen L Meier, FDA, CDRH; Songbai Wang, Janssen Diagnostics

An 'Equivocal' result is not a clearly defined term; it can be associated with one of several different underlying concepts (e.g., middle category along a continuum, quality issue, interference). A variety of calculation methods have been suggested or used to describe test performance in labeling for products with equivocal zones. However, the perception of test performance can vary greatly due to study design and how these results are handled in calculations of test performance. The speakers in this session have been working with an AdvaMed committee researching this topic and writing a white paper to offer some options on addressing equivocals. The speakers will provide some background on their efforts and present the committee’s thinking on the topic. Two discussants are invited to offer their comments on the ideas presented. Please join us in hearing their ideas and in sharing your own. Speaker(s): Vicki Petrides, Abbott Laboratories; Andrzej Kosinski, Duke University

 

PS4b Utilizing information from patients who discontinue study treatment – Missing Data Handling faces New Challenges

09/18/13
8:30 AM - 9:45 AM
Thurgood Marshall South

Organizer(s): Chengxing Lu, Novartis Pharmaceuticals Corporation ; Yan Zhou, FDA

Chair(s): Chengxing Lu, Novartis Pharmaceuticals Corporation

When patients discontinue study treatments, the challenges of missing data usually arise since their on-treatment efficacy responses and safety measurements are not observable. Traditionally, these patients would often withdraw completely from the study and the only available information would be the patients’ reason for discontinuation. However, critical information such as outcomes collected following treatment discontinuation, as well as the corresponding medications taken after withdrawal from study treatment, could be valuable. As such, the National Research Council (NRC) report on the prevention and treatment of missing data urges sponsors to “collect information on key outcomes on participants who discontinue their protocol-specified intervention in the course of the study, except in those cases for which a compelling cost-benefit analysis argues otherwise, and this information should be recorded and used in the analysis”. More importantly, the NRC emphasized the need to clearly define a target causal estimand, that is, a primary summary measure of the effect of an intervention, compared to control, on an outcome in a pre-defined population. This measure then forms the cornerstone for strategies to prevent and handle missing data in a clinical trial setting, including the primary and pre-specified sensitivity analyses utilizing the above-mentioned information from patients who prematurely discontinue study treatment. In this session, experts from academia, industry and a regulatory agency will present their research examining the interplay between the target causal estimand and the utilization of retrieved data or data concerning the reason for withdrawal. The session focuses on topics such as the implications of the primary analysis and pre-specified sensitivity analyses using retrieved data; the concept of establishing efficacy in a population of patients who can tolerate, respond, and adhere to the treatment, as opposed to answering questions for an intent-to-treat estimand; the situations where the retrieved information provides limited value; and the utility of the drop-out category in best understanding the data, and therefore in informing analyses and inferences, with or without retrieved data. The presentations will be followed by a panel discussion to foster an interactive discussion with the general audience.

The utilization of data retrieval strategies in clinical trials
David Ohlssen, Novartis Pharmaceuticals Corporation

Testing Treatment Effect in Schizophrenia Trials with Heavy Patient Dropout
Fanhui Kong, FDA

Design of RCTs to Identify Safe and Efficacious Treatments Among Responders, Tolerators, and Compliers
Scott Emerson, University of Washington

 

PS4c Ensuring Data Quality and Identifying Potential Fraud in Clinical Trials

09/18/13
8:30 AM - 9:45 AM
Thurgood Marshall East

Organizer(s): Steven Bird, Merck; Jingyee Kou, FDA; Estelle Russek-Cohen, US FDA CBER ; Richard C. Zink, JMP Life Sciences, SAS Institute, Inc.

Chair(s): Richard C. Zink, JMP Life Sciences, SAS Institute, Inc.

FDA regulations require that clinical trial sponsors monitor the progress of their trials, but do not specify how monitoring must be conducted. Good Clinical Practice (GCP) guidelines from the International Conference on Harmonisation (ICH) also suggest that clinical trials should be actively monitored to ensure data quality. These monitoring activities assess whether a trial is conducted according to the study protocol and the appropriate regulatory requirements, verify the accuracy of the data collected, and ensure the welfare of trial participants, all of which contributes to the validity of study findings and conclusions. Traditional interpretation of the ICH guidance has often led to 100% source data verification (SDV) of case report forms against subject records through onsite monitoring by the clinical trial sponsor. However, such extensive onsite review is time consuming, expensive, and as is true for any manual effort, limited in scope and prone to error. Moreover, on-site SDV may not be the best tool for identifying fraud. Given the questionable benefit of 100% SDV, recent literature highlights the utility of central statistical monitoring as well as the acceptability of statistical sampling of data for SDV. This session will discuss various issues concerning data quality. Sampling approaches to SDV, the role of statisticians and extensive computerized logic and validation checks for centralized monitoring of clinical trial data will be described. How will data standards impact centralized monitoring? Will such monitoring lead to an increase in the identification of fraudulent behavior at investigator sites? What is the motivation for fraudulent behavior? In practice, is it possible to distinguish between fraud and mere carelessness? Real examples will be given.

Ensuring Data Quality and Identifying Potential Fraud in Clinical Trials
Nancy L. Geller, National Heart, Lung, and Blood Institute

An Overview of FDA’s draft Guidance for Industry: Oversight of Clinical Investigations
Chrissy Cochran, CDER

Source Data Verification by Statistical Sampling
Vlad Dragalin, Aptiv Solutions

 

PS4d Hierarchical Modeling and Borrowing Strength Across Multiple Device Trials

09/18/13
8:30 AM - 9:45 AM
Thurgood Marshall West

Organizer(s): Bradley P. Carlin, University of Minnesota; Freda Cooner, FDA; Laura Thompson, FDA-CDRH

Chair(s): Bradley P. Carlin, University of Minnesota

Bayesian hierarchical models have a long history in medical device trial analysis, due to the frequent availability of large historical databases that can be used to formulate prior distributions. Unlike pharmaceuticals, devices often "evolve" less rapidly, meaning that historical information can be more comfortably assumed to be exchangeable with current trial information. However, this assumption can be difficult to check, and can lead to lengthier, costlier, and less ethical trials when false. In this session we present work by well-known researchers in government, industry, and academia who have developed models appropriate in such situations. Methods discussed include the use of commensurate and power priors for adaptively borrowing strength from historical information when such borrowing is empirically justified, approaches for incorporating notions of partial exchangeability when trials may be gathered into sensible subgroups, methods useful in post-market surveillance settings, and preliminary work on including data from observational studies. The FDA speaker will present suggestions for designing such trials for regulatory submission, including how to assess the extent of prior information to be borrowed, as well as operating characteristics. All speakers will address both theoretical and practical aspects of their work, including its application to a variety of real-world medical device datasets. Speakers (all have confirmed their intent to participate): 1. Sharon-Lise Normand, Harvard Medical School: "Considerations in Post-Market Surveillance: The Medical Device Epidemiology Network Methodology Center” 2. Ted Lystig, Medtronic Inc.: "Composite Kaplan-Meier and Commensurate Bayesian Models for Combining Current and Historical Survival Information” 3. Gene Pennello, FDA-CDRH: "Bayesian Design Considerations for Studies with an Informative Prior based on Hierarchical Modeling”

Commensurate Bayesian models for combining current and historical survival information
Theodore C Lystig, Medtronic, Inc

Bayesian design considerations for studies with an informative prior based on hierarchical modeling
Gene Pennello, FDA/CDRH

Considerations in Post Market Surveillance: The Medical Device Epidemiology Network (Mdepinet) Methodology Center
Sharon-Lise T. Normand, Harvard Medical School and Harvard School of Public Health

 

PS4e The Development and Evaluation of Endpoints for Clinical Trials

09/18/13
8:30 AM - 9:45 AM
Lincoln 5

Organizer(s): Meijuan Li, FDA/CDRH; Jingyu Luan, FDA

Chair(s): Jingyu Luan, FDA

In this session we explore the statistical development of endpoints for progressively degenerative diseases like Alzheimer’s Disease. Presentations will discuss approaches for developing better endpoints from existing instruments, as well as for designing new batteries based on longitudinal datasets. Criteria for selecting tests and instruments for a battery, as well as criteria for evaluating batteries as outcome measures for clinical trials, will be described by way of example. The talks will illustrate how informed item selection and rescaling can produce efficiencies in clinical endpoints along with improved psychometric properties and clinical meaningfulness. The session will consist of two presentations and a panel discussion. Dr. Laurie Burke from the FDA SEALD Division, Dr. Nicholas Kozauer from the Division of Neurology, CDER/FDA, Dr. Gary Romano, Head, Clinical Development Alzheimer’s Disease, Janssen R&D, and Dr. Laura Lee Johnson, NCCAM-NIH, will be discussants.

Optimizing Quantitative Trait Outcome Measures for Clinical Trials of Chronic Progressive Disease
Steve Edland, University of California, San Diego

Criteria for Developing and Evaluating Outcome Measures in Clinical Trials
Nandini Raghavan, Janssen R&D

Discussant(s): Laurie B Burke, SEALD Division, FDA; Laura Lee Johnson, NCCAM-NIH; Nicholas Kozauer, Division of Neurology, FDA; Gary Romano, Janssen R&D

 

PS4f Small Clinical Trials for Rare Diseases: Challenges and Opportunities

09/18/13
8:30 AM - 9:45 AM
Lincoln 6

Organizer(s): Ohad Amit, GlaxoSmithKline; Freda Cooner, FDA; Nicole (Xiaoyun) Li, Merck & Co.; John Scott, FDA CBER/OBE/DB

Chair(s): Nicole (Xiaoyun) Li, Merck & Co.

The National Organization for Rare Disorders (NORD) has identified approximately 7,000 rare diseases which together affect almost 1 in 10 Americans. Having a rare disease is not rare. The development of products for rare disease indications is an increasing area of interest for both small and large pharmaceutical and biotechnology companies. Congress has devoted special legislative attention to the regulation of products for rare diseases, including the Orphan Drugs Act of 1983 and several provisions in the recently passed Food and Drug Administration Safety and Innovation Act (FDASIA). FDA has a dedicated Office of Orphan Products Development to help guide rare disease product applications through the regulatory process, and there are ongoing high level policy efforts focused on helping to ensure access to safe and effective products for rare diseases. The approval of products for rare disease indications generally relies on small clinical trials, which raise distinct and difficult statistical issues. These issues include the need to maximize efficiency, the inappropriateness of asymptotic methods for small samples and, in some cases, the use of historical controls and the need for thorough understanding of the natural history of a disease. In this session, experts from academia, FDA and industry will discuss the many statistical challenges and opportunities of small clinical trials for rare diseases, including general design principles for small clinical trials, use of historical controls, Bayesian adaptive opportunities in small trials, and challenges in implementing small trials for rare diseases.

Challenges in Designing Clinical Trials for Rare Diseases
Chris Coffey, University of Iowa

Historical Controls for Drug Development Clinical Trials
Marc Walton, FDA-CDER

A Bayesian Approach to Rare Disease Trials
Scott M Berry, Berry Consultants

Discussant(s): Ohad Amit, GlaxoSmithKline

 

PS5a Using longitudinal solid tumor data to improve dose selection and Phase III go/no-go decision making in Oncology drug development.

09/18/13
10:00 AM - 11:15 AM
Thurgood Marshall North

Organizer(s): Alan Hughes Hartford, Agensys; Jiang Liu, FDA; Andrew Marc Stein, Novartis Institute for Biomedical Research; Yaning Wang, FDA: Division of Pharmacometrics

Chair(s): Andrew Marc Stein, Novartis Institute for Biomedical Research; Yaning Wang, FDA: Division of Pharmacometrics

With the failure rate of Phase III oncology clinical trials greater than 50%, there is a need to improve the drug development decision-making process. Starting in 2009, new quantitative modeling methods have been developed for predicting Phase III outcomes by integrating longitudinal tumor data and outcome data from multiple studies within an indication. These studies span a range of cancer types and have been shown to have promise in predicting Phase III outcomes of new compounds. However, further validation of these methods is necessary before they can be widely used in Pharma.  This session will review current applications of longitudinal tumor models for dose selection and Phase III outcome prediction and then discuss the next steps for improving and validating these models.  The outline for the session is: 1 Overview of current methods and deficiencies for assessing therapeutic efficacy in Oncology (Michael Maitland); 2 Tumor Growth and Regression Rate Constants - One Value, Many Applications (Antonio Fojo); 3 Drug-independent models for predicting overall survival (Rene Bruno); 4 Panel discussion on model validation to maximize impact on clinical decision making.

Drug-independent models for predicting overall survival
Rene Bruno, Pharsight Consulting Services

Overview of current methods and deficiencies for assessing therapeutic efficacy in Oncology
Michael L. Maitland, University of Chicago Medical Center

Tumor Growth and Regression Rate Constants - One Value, Many Applications
Tito Fojo, NIH

 

PS5b Adaptive sample size increase with no alpha penalty when the unblinded interim result is promising

09/18/13
10:00 AM - 11:15 AM
Lincoln 6

Organizer(s): Pablo E. Bonangelino, FDA/CDRH; Eva R Miller, Quality Data Services; Jack Zhou, FDA; Tianhui Zhou, BMS

Chair(s): Brenda Gaydos, Eli Lilly and Company

Adaptive sample size increase based on unblinded interim results has been controversial. The controversy partially comes from the fact that some of the sample size re-estimation methods rely on non-standard tests and p-values to preserve the Type I error, and often involve down-weighting of the second-stage data. However, the seminal work by Chen, DeMets and Lan (Statistics in Medicine, 2004) proposed to control the Type I error by increasing the sample size only when the unblinded interim result is promising. One of the most appealing features of this method is that it relies on the conventional z-statistics for the final analysis. This approach is becoming popular within the adaptive design clinical trial community, especially after being extended by Gao, Ware and Mehta (2008), and further illustrated by Mehta and Pocock (2011) in their “promising zone” paper. Statisticians from industry, academia and government will talk about the method and its applications, and there will be a live debate on the methodological, regulatory and operational issues related to this popular method.
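A minimal simulation sketch of the idea described above: under the null hypothesis, allow a second-stage sample size increase only when the interim conditional power, evaluated at the current trend and the originally planned size, is at least 50%, and then use the conventional final z-statistic; the empirical Type I error stays at or below the nominal level. The stage sizes, threshold, and size of the increase are invented for illustration.

    # Monte Carlo check of the promising-zone-style rule: increase the
    # second-stage size only when interim conditional power >= 50%, then use
    # the conventional pooled z-statistic. Design constants are assumptions.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(123)
    n1, n2, inflate = 100, 100, 2.0        # stage sizes and increase factor (assumed)
    crit, n_sims = norm.ppf(0.975), 200_000

    z1 = rng.standard_normal(n_sims)       # interim z-statistic under H0
    delta_hat = z1 / np.sqrt(n1)           # interim trend estimate
    N = n1 + n2
    # Conditional power at the originally planned size, under the current trend.
    cp = 1 - norm.cdf((crit * np.sqrt(N) - np.sqrt(n1) * z1) / np.sqrt(n2)
                      - delta_hat * np.sqrt(n2))
    n2_final = np.where(cp >= 0.5, inflate * n2, n2)

    z2 = rng.standard_normal(n_sims)       # second-stage z-statistic under H0
    z_final = (np.sqrt(n1) * z1 + np.sqrt(n2_final) * z2) / np.sqrt(n1 + n2_final)
    print(f"empirical one-sided Type I error: {(z_final > crit).mean():.4f}")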

Sample size adjustment based on promising interim results and its application in confirmatory clinical trials
Joshua Chen, Merck and Co. Inc.

Sample size re-estimation and other issues in adaptive designs
Ping Gao, The Medicines Company

Discussant(s): Scott Emerson, University of Washington; Gordon Lan, Janssen R&D, Johnson & Johnson; Cyrus Mehta, Cytel Inc; Sue Jane Wang, FDA

 

PS5c Diagnostic based enrichment strategies for clinical trial success and demonstration of comparative effectiveness: are these two goals the same?

09/18/13
10:00 AM - 11:15 AM
Thurgood Marshall East

Organizer(s): Brent Burger, Cytel; Deepak B Khatry, MedImmune/Astra Zeneca; Jingjing Ye, FDA/CDRH

Chair(s): Deepak B Khatry, MedImmune/Astra Zeneca

In December 2012, the FDA issued a draft guidance to solicit comments on “Enrichment Strategies for Clinical Trials to Support Approval of Human Drugs and Biological Products.” With many therapies competing for market share in different diseases, payers are beginning to ask for increased evidence of comparative effectiveness for favorable reimbursement decisions. In order to successfully recoup the costs of R&D investments, sponsors have to get therapeutics approved by regulatory agencies and also ensure a high likelihood of payer reimbursement. Thus, demonstration of both clinical efficacy and effectiveness lies at the core of successful “stratified medicine,” which promises higher benefits to patients from increased personalization of treatments. In this session, we will bring together experts from the FDA, industry and academia to discuss how both the probability of clinical trial success and the likelihood of reimbursement by payers can be optimized in the conceptualization and design of clinical trials, particularly at the phase 2 and 3 levels, by using biomarkers as diagnostics.

A phase II / III biomarker-based Bayesian design
Marc Buyse, IDDI

Can a treatment be licensed on the basis of a post treatment biomarker (PTB)?
Andrew Stone, AstraZeneca

Diagnostic-based enrichment strategies for clinical trial success and demonstration of comparative effectiveness: are these two goals the same?
Robert Temple, FDA

 

PS5d Challenges in Postmarket Drug Safety Assessments using Propensity Score Methods

09/18/13
10:00 AM - 11:15 AM
Thurgood Marshall West

Organizer(s): Janelle K Charles, U.S. Food and Drug Administration; Paul Thomas DeLucca, Merck and Co., Inc.; Stephine Keeton, PPD; Yun Lu, FDA/CBER

Chair(s): Paul DeLucca, Merck & Co., Inc

The FDA currently requires drug manufacturers to conduct postmarketing studies when there are suspected safety issues arising from product use. Conducting randomized clinical trials for assessing safety is often not feasible and, more importantly, often not ethical. As an alternative, observational studies using large healthcare databases have been used to fulfill postmarketing requirements. There are numerous sources of bias and issues of confounding with such non-randomized studies that complicate drawing conclusions about drug safety. Matching and stratification have been used to address these issues, but they become inadequate as the number of confounders increases. Therefore, there has been increased use of propensity score methods, which attempt to account for confounding; however, these methods also present challenges. For example, with large healthcare databases, determining the relevant variables to be included in the propensity score model can be problematic, and misspecification of the propensity score model could bias the exposure effect estimate. In this session, we will discuss challenges and possible solutions related to the design and statistical analysis of drug safety studies using propensity score methods, with emphasis on methods for variable selection.
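A minimal sketch of the basic propensity score workflow on simulated data: fit a logistic model for exposure on measured covariates, stratify on the estimated score, and compare outcomes within strata. Variable selection, model misspecification, and diagnostics, which are the focus of this session, are not addressed here, and all numbers are invented.

    # Propensity-score stratification sketch on simulated data with no true
    # exposure effect: the crude comparison is confounded, the stratified one
    # is approximately unbiased.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 5000
    x1, x2 = rng.normal(size=n), rng.binomial(1, 0.3, n)      # measured confounders
    p_treat = 1 / (1 + np.exp(-(-1.0 + 0.8 * x1 + 1.0 * x2)))
    z = rng.binomial(1, p_treat)                               # exposure
    # Outcome depends on the confounders but not on exposure (true effect = 0).
    y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.5 * x1 + 0.7 * x2))))

    ps_model = sm.Logit(z, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
    ps = ps_model.predict()

    df = pd.DataFrame({"z": z, "y": y, "ps": ps})
    df["stratum"] = pd.qcut(df["ps"], 5, labels=False)          # quintile strata

    strata = df.groupby("stratum").apply(
        lambda g: g.loc[g.z == 1, "y"].mean() - g.loc[g.z == 0, "y"].mean())
    crude = df.loc[df.z == 1, "y"].mean() - df.loc[df.z == 0, "y"].mean()
    print(f"crude risk difference:      {crude:.3f}")
    print(f"stratified risk difference: {strata.mean():.3f}  (true effect is 0)")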

Safety Assessment with Observational Studies: Experiences at FDA/CDER
Mark Steven Levenson, CDER/FDA

Propensity Scores Methods, An Epidemiologist’s Perspective
Edmond S Malka, PPD

Use of high-dimensional propensity scores in assessing the safety of medications
Jeremy A Rassen, Brigham and Women's Hospital and Harvard Medical School

 

PS5e Invited JBS Paper Session 2

09/18/13
10:00 AM - 11:15 AM
Lincoln 5

Organizer(s): Bruce Binkowitz, Merck and Co. Inc.

Chair(s): Bruce Binkowitz, Merck and Co. Inc

Alternative views on setting clinical trial futility criteria
Paul Gallo, Novartis Pharmaceuticals

Some Challenges with Statistical Inference in Adaptive Designs
James Hung, FDA

Challenges and Opportunities in the Design of Pre-Market Observational Comparative Studies Using Existing Data
Lilly Yue, FDA; Nelson Lu, U.S. FDA; Yun-ling Xu, FDA/CDRH

 

PS5f Endpoints to Hypotheses

09/18/13
10:00 AM - 11:15 AM
Thurgood Marshall South

Organizer(s): Bo Li, FDA; Ed Luo, Bausch; Vandana Mukhi, FDA; James D. Neaton, University of Minnesota; Hui Quan, sanofi

Chair(s): Vandana Mukhi, FDA/CDRH

A fundamental element of clinical trial design is the selection of endpoints. The endpoint may at first be defined simply as a clinical outcome, without reference to a specific parameter, scale of measurement, formulation of hypotheses, or method of statistical testing. This leaves a range of possibilities for those aspects of the study, which collectively not only shape its operating characteristics but can also bear on its scientific objectives. This session focuses on considerations for honing the investigational plan given such a range of possibilities.

Considerations for Design and Data Analysis of Noninferiority/Superiority Cardiovascular Trials
Peng-Liang Zhao, Sanofi

The Two Sides of Dichotomization, and Their Synthesis?
Heng Li, FDA/CDRH

Considerations in Defining and Summarizing Composite Outcomes in Clinical Trials
James D. Neaton, University of Minnesota

 

PS6a Diagnostics Town Hall Meeting

09/18/13
12:45 PM - 2:00 PM
Thurgood Marshall North

Organizer(s): Shanti Gomatam, FDA; Alicia Toledano, Biostatistics Consulting

Chair(s): Hope Knuckles, Abbott; Estelle Russek-Cohen, US FDA CBER

This session will be an open-mike session in which anyone involved in diagnostics can take part in a general discussion. The session will be informal, with the goal of sharing knowledge and ideas on several diagnostic topics. The audience will have the opportunity to ask questions and have a dialogue with both industry and FDA representatives. Biologics, diagnostics and imaging experts will be present. Previously submitted questions will also be addressed by industry and FDA panelists. The session will open with current hot topics in the diagnostics industry, including IVD, imaging, biologics, and companion diagnostics, as well as other diagnostic fields. Devices that are not considered diagnostics will not be discussed. Among the hot topics are guidance documents (draft or final) and their impact on FDA and industry, recent changes to CLSI guidance documents, and other clinical topics such as pediatric ages, pivotal trials, migration, co-development, and studies using slides.

 

PS6b Multiplicity Issues Big and Small, More or Less, in Personalized Medicine

09/18/13
12:45 PM - 2:00 PM
Thurgood Marshall South

Organizer(s): Xinping Cui, Dept. of Statistics, University of California, Riverside; Kathleen Fritsch, FDA; Jason C Hsu, The Ohio State University & Eli Lilly and Company; Yi Liu, Millennium: The Takeda Oncology Company

Chair(s): Kathleen Fritsch, FDA

This session discusses some key multiplicity issues in personalized medicine. The first presentation reviews the fundamental reasons for controlling (Type I) error rates in a regulated industry. It reviews why controlling Tukey’s familywise error rate (FWER), rather than the Benjamini-Hochberg false discovery rate (FDR), has been the practice in clinical trials, and it considers alternatives to FWER control that may be useful in biomarker studies. The second presentation discusses biomarker thresholding to identify subgroups of patients with treatment benefit. In personalized medicine, continuous biomarker values are often dichotomized to classify patients into target and non-target populations. Cast in the setting of normally distributed responses that are modeled linearly (as in diabetes and psychiatry), this presentation provides a method of inferring which thresholds correspond to target populations that benefit from the treatment. By providing simultaneous confidence intervals for efficacy corresponding to all candidate thresholds, the proposed method allows for decision-making that takes into consideration medical impact based on both the size of the target population and the efficacy in the target population. The third presentation discusses testing for SNPs predictive of clinical outcome. Such testing often starts with testing for different genetic effects within each SNP. This presentation suggests an alternative to current practice that is more informative for the purpose of drug development: for a single SNP, simultaneous confidence intervals for dominant, recessive, and additive effects on the clinical response are provided. Improving the within-SNP inference is crucial for better overall inference across SNPs in assessing treatment efficacy. The fourth presentation discusses the assumptions required for resampling-based multiple tests to be valid. Permutation is often thought of as a convenient technique for building a reference null distribution for multiple tests, one that automatically captures dependence in the data. In reality, nontrivial model assumptions are required for permutation testing to control multiple testing error rates. This presentation will clearly describe those assumptions and show explicitly how misleading permutation-adjusted p-values can be, conditionally and unconditionally, if the implicit assumptions do not hold.
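
For readers unfamiliar with the two error rates contrasted above, the following short Python sketch (not drawn from any of the presentations) applies Holm's FWER-controlling adjustment and the Benjamini-Hochberg FDR adjustment to the same set of hypothetical p-values.

```python
# Illustrative sketch: Holm (FWER) vs Benjamini-Hochberg (FDR) adjustment
# applied to the same hypothetical p-values.
import numpy as np

def holm_adjust(p):
    """Holm step-down adjusted p-values (controls the familywise error rate)."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(1.0, running_max)
    return adj

def bh_adjust(p):
    """Benjamini-Hochberg adjusted p-values (controls the false discovery rate)."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_min = 1.0
    for rank in range(m - 1, -1, -1):          # step up from the largest p-value
        idx = order[rank]
        running_min = min(running_min, p[idx] * m / (rank + 1))
        adj[idx] = running_min
    return adj

pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.61]   # hypothetical biomarker p-values
print("Holm:", np.round(holm_adjust(pvals), 3))
print("BH:  ", np.round(bh_adjust(pvals), 3))
```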

An introduction to error rate concepts in biomarker studies
Jason C Hsu, The Ohio State University & Eli Lilly and Company

Thresholding of a Companion Diagnostic Test Confident of Efficacy in Targeted Population
Szu-Yu Tang, Ventana Medical Systems, Inc.

Confident Effect Method for Assessing the Effects of a SNP on Clinical Efficacy
Ying Ding, University of Pittsburgh

Issues in resampling-based multiple testing in personalized medicine
Eloise Kaizar, Ohio State University

 

PS6c Treatment effect estimate in adaptive clinical trial

09/18/13
12:45 PM - 2:00 PM
Thurgood Marshall East

Organizer(s): Jeff D Maca, Quintiles; Cristiana Mayer, Johnson & Johnson Pharmaceutical Research and Development; Ying Yang, FDA/CDRH; Yu Zhao, CDRH/FDA

Chair(s): Yun-ling Xu, FDA/CDRH

In recent years, the use of adaptive designs based on accrued data has become very attractive in clinical research because of their flexibility and efficiency. Because the interim treatment effect estimate is often the foundation of the adaptation decision at each stage of an adaptive design study, the treatment effect estimate for the selected adaptive choice may reflect an unusual distribution of patient observations. Therefore, at the end of the study, the overall treatment effect estimate may be biased and difficult to interpret. In this session, we will discuss the impact of trial adaptation on treatment effect estimation and possible inference approaches from different perspectives.
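
A minimal simulation can illustrate the estimation bias described above. The following Python sketch assumes a hypothetical two-arm pick-the-winner adaptation, not any speaker's design: even when both arms have the same true effect, the naive final estimate for the selected arm is inflated because the interim data that drove the selection are reused.

```python
# Illustrative simulation of selection bias in a naive treatment effect
# estimate after a simple (hypothetical) pick-the-winner adaptation.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.3          # both experimental arms share the same true mean effect
n_stage = 50               # per-arm sample size at each stage
n_sims = 20000

naive = np.empty(n_sims)
for i in range(n_sims):
    # Stage 1: two experimental arms (differences vs control, unit variance)
    x1 = rng.normal(true_effect, 1, n_stage)
    x2 = rng.normal(true_effect, 1, n_stage)
    winner_stage1 = x1 if x1.mean() > x2.mean() else x2
    # Stage 2: only the apparent winner continues
    stage2 = rng.normal(true_effect, 1, n_stage)
    # Naive final estimate pools stage-1 and stage-2 data for the selected arm
    naive[i] = np.concatenate([winner_stage1, stage2]).mean()

print(f"true effect = {true_effect}, mean naive estimate = {naive.mean():.3f}")
# The naive estimate exceeds the true effect: selecting the arm with the
# larger interim mean and reusing that interim data inflates the estimate.
```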

Estimation for Bayesian adaptive designs
Peter Mueller, UT Austin

Treatment effect estimate in adaptive clinical trial
Gordon Lan, Janssen R&D, Johnson & Johnson

Adaptive Designs: One Reviewer’s Perspective
Pablo E. Bonangelino, FDA/CDRH

 

PS6d Opportunities and challenges: The new FDA draft guidance for accelerated approval using pCR in breast cancer

09/18/13
12:45 PM - 2:00 PM
Thurgood Marshall West

Organizer(s): Yuqing Tang, FDA; Vivian Yuan, FDA; Lanju Zhang, Abbvie Inc; Yijie Zhou, Merck

Chair(s): Lanju Zhang, Abbvie Inc

On May 29, 2012, the FDA issued a new draft guidance that provides recommendations on accelerated approval of drugs for early-stage, high-risk breast cancer in the neoadjuvant setting. Accelerated approval based on the pathologic complete response (pCR) rate as a surrogate endpoint can substantially reduce time and expense, whereas the usual development and approval process would take more than a decade. It brings new opportunities for sponsors and immense benefits for patients. Another important feature of the guidance is that it allows a single trial to use pCR as the endpoint for accelerated approval and, with extended follow-up, to show clinical benefit in disease-free survival (DFS) or overall survival (OS) for regular approval. However, there are challenges in designing and implementing a single trial that serves both purposes. For example, the first question is how to determine a trial size that properly powers both the pCR stage and the DFS or OS stage. If the recruited patient population is large enough for the pCR endpoint, may the pCR analysis be conducted before all patients are enrolled? If so, how will the pCR data of later-enrolled patients be used? Another question is within how many years regular approval must be sought after an accelerated approval is granted based on the pCR endpoint. As the guidance states, “We expect that our thoughts on how to use pCR appropriately as an endpoint for approval will evolve as we gain additional experience…” Some companies have begun taking advantage of this opportunity to design such trials. In this session, invited speakers from these companies will share their experiences, and an expert from the FDA will provide an update from a regulatory perspective. Statistical and logistical challenges are highlighted.
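
As background for the powering question raised above, the following Python sketch shows a standard two-proportion sample-size calculation one might apply to the pCR stage; the rates, alpha, and power are hypothetical and are not taken from the guidance or any sponsor's trial.

```python
# Illustrative two-proportion sample-size sketch for a hypothetical pCR
# comparison (assumed rates and operating characteristics, not from the guidance).
import math
from scipy.stats import norm

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_treat) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p_control * (1 - p_control) + p_treat * (1 - p_treat))) ** 2
    return math.ceil(num / (p_control - p_treat) ** 2)

# Hypothetical example: improve the pCR rate from 20% to 35%
print(n_per_arm(0.20, 0.35))   # roughly 138 patients per arm
```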

Some thoughts on the FDA draft guidance on the use of pCR as an endpoint for accelerated approval of treatment in high-risk early-stage breast cancer
Bo Yang, Abbvie Inc

Implications of recent FDA Guidance based on pCR in Early Breast Cancer
Ram Suresh, GlaxoSmithKline

“Goldilocks” Phase 3 Trials Evaluating Pathologic Complete Response (pathCR) and Event-free Survival (EFS) in High-risk Primary Breast Cancer
Donald A. Berry, MD Anderson Cancer Center and Berry Consultants, LLC

Discussant(s): Rajeshwari Sridhara, DBV/OB/OTS/CDER/FDA

 

PS7a Meeting of SIGMEDD: Special Interest Group for Medical Devices and Diagnostics

09/18/13
2:15 PM - 3:30 PM
Thurgood Marshall North

Organizer(s): Scott M Berry, Berry Consultants; Lilly Yue, FDA

Chair(s): Scott M Berry, Berry Consultants

This is a town hall meeting to discuss all issues involved in the industry and regulation of medical devices and diagnostics. Additionally, an update on the ASA special interest group SIGMEDD will be provided.

 

PS7b Statistics in Next-Generation Sequencing Data: Method and Application

09/18/13
2:15 PM - 3:30 PM
Thurgood Marshall South

Organizer(s): Ching-Wei Chang, NCTR/FDA

Chair(s): Ching-Wei Chang, NCTR/FDA

Next-generation sequencing (NGS) has advanced the application of high-throughput sequencing technologies in genetic and genomic variation analysis. Whole-transcriptome sequencing (RNA-seq) uses NGS to measure the RNA levels of transcripts in a sample and is expected to replace microarray technology. RNA-seq data, which are measured in the form of counts, have been shown to quantify the amount of mRNA produced by a gene more closely than the gene intensity levels measured by microarrays. However, the development of statistical methods to analyze RNA-seq data is ongoing. In this session, well-known researchers from government, industry, and academia will present their work and experiences on methods for and analysis of RNA-seq data. Statistical issues of data quality, identification of differentially expressed genes, and clinical practice will be discussed to improve RNA-seq data analysis so that the technology can be used successfully and reliably in clinical practice and regulatory decision making.
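
For orientation, the following purely illustrative Python sketch normalizes simulated RNA-seq counts to counts per million and applies a naive per-gene test with a Benjamini-Hochberg adjustment; real pipelines typically rely on dedicated negative-binomial methods, and nothing here represents any presenter's approach.

```python
# Illustrative sketch: CPM normalization and a naive per-gene test on
# simulated RNA-seq counts (real pipelines use negative-binomial models).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_genes, n_per_group = 1000, 5
# Simulated counts: negative binomial, with the first 50 genes upregulated
# in group B (hypothetical data for illustration only).
mean_a = rng.gamma(2, 50, n_genes)
mean_b = mean_a.copy()
mean_b[:50] *= 3

def nb_counts(mu, n_samples, dispersion=0.1):
    r = 1 / dispersion
    return rng.negative_binomial(r, r / (r + mu[:, None]), (len(mu), n_samples))

counts = np.hstack([nb_counts(mean_a, n_per_group), nb_counts(mean_b, n_per_group)])

# Counts-per-million normalization and log transform (genes x samples)
cpm = counts / counts.sum(axis=0) * 1e6
log_cpm = np.log2(cpm + 1)

# Naive per-gene two-sample t-test, then Benjamini-Hochberg adjustment
t, p = stats.ttest_ind(log_cpm[:, :n_per_group], log_cpm[:, n_per_group:], axis=1)
p = np.nan_to_num(p, nan=1.0)
order = np.argsort(p)
bh = np.minimum.accumulate((p[order] * n_genes / np.arange(1, n_genes + 1))[::-1])[::-1]
print("genes called at FDR 0.05:", int((bh < 0.05).sum()))
```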

Impact of data analysis algorithm choice on RNA-seq gene expression estimation and downstream gene-based prediction
May D. Wang, Emory-Georgia Tech Cancer Nanotechnology Center

DAFS: Data-Adaptive Flag Method for RNA-Sequencing Data
Nysia George, NCTR

RNA-Seq versus Microarray Predictive Modeling: Lessons Learned from the Sequencing Quality Control (SEQC) Consortium Neuroblastoma Study
Russ Wolfinger, SAS Institute Inc

 

PS7c Applying Bayesian Networks to Manage Operational Risk

09/18/13
2:15 PM - 3:30 PM
Thurgood Marshall East

Organizer(s): Xuefeng Li, US Food and Drug Administration; Xiongce Zhao, NIH

Chair(s): Xiongce Zhao, NIH

A 2009 KPMG report, "Risk Management in the Pharmaceuticals and Life Sciences Industry", states that “… the priority for the pharmaceuticals and life sciences executives … is on improving risk processes, followed by improving data quality, and availability.” The report also states that the same executives see a lack of expertise as a barrier to achieving this desired result. This session will challenge statisticians in industry and government to become leaders of the effort to improve risk management processes and data by implementing Bayesian Networks (BNs). As explained by Neil, Fenton, and Marquez [1], "... In practical terms, one of the major benefits from using BNs is in that probabilistic and causal relationships among variables are represented and executed as graphs and can thus be easily visualized and extended, making model building and verification easier and faster… The power, generality and flexibility offered by BNs are now widely recognized and they are being successfully applied in diverse fields, including risk analysis and decision support." Through this session, participants will be introduced to the basic theory of BNs, learn the advantages of using them, and see examples of their application in the biopharmaceutical industry, such as risk-based monitoring [2] and Risk Evaluation and Mitigation Strategies [3]. [1] Neil M, Fenton N, Marquez D. Using Bayesian Networks and Simulation for Data Fusion and Risk Analysis. Queen Mary, University of London, and Agena Ltd, UK. [2] Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring (Draft Guidance for Industry, August 2011). [3] Format and Content of Proposed Risk Evaluation and Mitigation Strategies (REMS), REMS Assessments, and Proposed REMS Modifications (Draft Guidance for Industry, September 2009).
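
To give a self-contained flavor of the Bayesian network computations the session will introduce, here is a toy Python sketch with made-up probabilities (it is not a model from the cited references): a small network in which site inexperience influences protocol deviations, which in turn generate monitoring signals, and the probability of a deviation is updated after a signal is observed.

```python
# Toy Bayesian network sketch with hypothetical probabilities (illustration only):
#   Inexperienced site -> Protocol deviation -> Monitoring signal
# Inference by enumeration: P(deviation | signal observed).
from itertools import product

p_inexperienced = 0.3
p_dev_given = {True: 0.15, False: 0.03}            # P(deviation | inexperienced?)
p_sig_given = {True: 0.80, False: 0.10}            # P(signal | deviation?)

def joint(inexp, dev, sig):
    """Joint probability of one configuration of the three binary nodes."""
    p = p_inexperienced if inexp else 1 - p_inexperienced
    p *= p_dev_given[inexp] if dev else 1 - p_dev_given[inexp]
    p *= p_sig_given[dev] if sig else 1 - p_sig_given[dev]
    return p

# Condition on an observed signal and sum out site experience
num = sum(joint(i, True, True) for i in (True, False))
den = sum(joint(i, d, True) for i, d in product((True, False), repeat=2))
print(f"P(deviation | signal) = {num / den:.3f}")   # about 0.361 with these numbers
```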

Applying Bayesian Networks to Manage Operational Risk
John F Amrhein, McDougall Scientific Ltd.

A node is as good as its adjoining edge: Bayesian approach to a network of data
Meg Gamalo, FDA

 

PS7d Treatment Effect Assessment in Multi-Regional Clinical Trials

09/18/13
2:15 PM - 3:30 PM
Thurgood Marshall West

Organizer(s): Susan Huyck, Merck; Andrejus Parfionovas, U.S. FDA; Hui Quan, sanofi; Yuqing Tang, FDA

Chair(s): Susan Huyck, Merck; Yuqing Tang, FDA

Multi-Regional Clinical Trials (MRCTs) stimulate collaborative clinical research among different regions and offer opportunities to assess regional treatment effects in one study. MRCTs have increasingly become common practice in new drug development. Nevertheless, the existence of regional differences in treatment effect presents great challenges for the interpretation of clinical trial results. Thus it is important to anticipate, at the design stage or even during the trial (with data blinded), potential factors that may cause treatment effect differences and to find ways to reduce their impact. Methods for a more formal statistical assessment of treatment effect consistency or heterogeneity should also be pre-specified. Research on MRCTs is ongoing; new approaches, considerations, and experiences are still needed to add to the research conducted and presented previously and can be helpful in addressing specific issues. In this session, speakers from the FDA, industry, and academia will present their research results and use real clinical trial data to illustrate statistical methods for the exploration of regional differences and the factors to which the regional differences, if any, might be attributed. This session includes three presentations.

Presentation 1: Treatment Effect Difference in Multiple-Regional Trials for Alzheimer’s Disease (Jingyu (Julia) Luan, FDA). Alzheimer’s Disease (AD), a progressive neurodegenerative disorder, is a cognitive disorder of the elderly population. AD begins with memory loss and progresses to severe impairment of activities of daily living, leading to death approximately 8 years, on average, from the time of diagnosis of dementia. The prevalence of AD in people over the age of 65 is 5-10%, increasing up to 50% in those over the age of 85. AD affects approximately 4.5 million people in the United States. Currently, five drugs (Cognex®, Aricept®, Exelon®, Razadyne®, Namenda®) are marketed for AD in the US. Approximately 70% of the registration trials for AD drugs were multi-regional clinical trials. In this presentation, the speaker will share results of a meta-analysis of the available registration trials for approved AD drugs. The talk will address whether there are any meaningful differences between US and non-US studies and among multiple regions, and how these differences, if any, may impact future study designs, trial conduct, and data analyses.

Presentation 2: Statistical Evaluation and Analysis of Regional Interactions: The PLATO Trial Case Study (Kevin J Carroll, Boyd Consultants). In a recent large cardiovascular outcomes MRCT known as ‘PLATO’, unexpected evidence of regional heterogeneity emerged during the analysis, whereby results in US patients appeared qualitatively different from results in non-US patients. Much dialogue with the FDA followed, including an important Advisory Committee discussion at which the observed regional interaction and associated statistical issues were a major focus. In the evaluation of the data, evidence emerged that concomitant long-term aspirin dose was the likely explanatory factor. This led to US labeling that limited concomitant aspirin dosage and included wording warning of a potential loss of effect with higher aspirin doses. This example highlights the challenges of MRCTs in drug development and the critical role of reasoned statistical analysis when an unexpected regional interaction is encountered. This talk will discuss this example and the statistical thinking and methodology that went into the evaluation of the data, and will offer recommendations for the future design and analysis of MRCTs.

Presentation 3: Designing and Analyzing Clinical Trials on the Dominantly Inherited Alzheimer’s Network (DIAN) (Chengjie Xiong, Washington University). This ongoing trial recruited participants from the Dominantly Inherited Alzheimer’s Network (DIAN), a multicenter international longitudinal observational study supported by the National Institutes of Health (U01-AG032438), who were either known to have a disease-causing mutation (presenilin 1 (PSEN1), presenilin 2 (PSEN2), or amyloid precursor protein (APP)) or were at risk for such a mutation (the child or sibling of a proband with a known mutation) and unaware of their genetic status. This talk will focus on novel design and analytic challenges that emerged from the trial, including multiple drugs with different mechanisms of action and a delayed treatment arm in the biomarker phase, multiple regions/sites in the biomarker phase (Phase II), randomization and minimization across multiple regions, power analyses, interim and final efficacy analyses, seamless transition to the cognitive endpoint phase (Phase III), and the selection and analysis of the primary cognitive endpoint for the Phase III trial across regions.
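
As a minimal illustration of the consistency assessment discussed above, the following Python sketch fits a treatment-by-region interaction on simulated data with an ordinary linear model and tests whether the treatment effect differs across regions; the data, model, and effect sizes are assumptions for illustration only.

```python
# Illustrative sketch: testing a treatment-by-region interaction on simulated
# MRCT data with an ordinary linear model (hypothetical data, illustration only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for region in ["US", "EU", "Asia"]:
    for treat in (0, 1):
        effect = 0.1 if region == "US" else 0.4     # hypothetical smaller US effect
        y = rng.normal(treat * effect, 1.0, 200)
        rows.append(pd.DataFrame({"y": y, "treat": treat, "region": region}))
df = pd.concat(rows, ignore_index=True)

fit = smf.ols("y ~ treat * C(region)", data=df).fit()

# Joint F-test that all treatment-by-region interaction coefficients are zero
names = list(fit.params.index)
R = np.zeros((sum("treat:" in n for n in names), len(names)))
for row, col in enumerate(i for i, n in enumerate(names) if "treat:" in n):
    R[row, col] = 1.0
print(fit.params.filter(like="treat"))
print("Interaction F-test p-value:", float(fit.f_test(R).pvalue))
```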

Treatment Effect Difference in Multiple-Regional Trials for Alzheimer’s Disease
Jingyu Luan, FDA

Statistical Evaluation and Analysis of Regional Interactions: The PLATO Trial Case Study

Designing and Analyzing Clinical Trials on DIAN
Chengjie Xiong, Washington University

Discussant(s): James Hung, FDA