Online Program

Friday, September 14
Fri, Sep 14, 9:15 AM - 9:55 AM
Atrium
Poster Session

Missing Data Imputation in Noninferiority Trials: A Simulation Study (300696)

*Brooke Ann Rabe, University of Arizona 

Keywords: non-inferiority, missing data, multiple imputation, intention-to-treat, per-protocol

Non-inferiority (NI) clinical trials are designed to show that an experimental treatment is therapeutically no worse than the standard of care. The design is used when a new treatment may be preferred for reasons other than efficacy, such as lower cost, greater convenience, or an improved safety profile. NI trials are by nature less conservative than superiority and placebo-controlled studies. They are more challenging to design and analyze, and they are complicated by missing and incomplete data caused by patient withdrawal, loss to follow-up, or non-compliance with treatment protocols. Improperly handled missing data can weaken sensitivity to differences between groups and bias results toward the alternative conclusion of non-inferiority. This is problematic if a trial's primary estimand requires analysis of an intention-to-treat (ITT) population. Some regulatory agencies regard an analysis of the per-protocol (PP) population as equally important in the NI context. However, analyzing the subgroup of compliers and/or completers may largely negate the benefits of randomization by introducing selection bias. This study's primary objective was to compare multiple imputation and other methods of missing data handling in both ITT and PP analyses of a longitudinal NI trial. We conducted simulations of NI trials with both missing data and subject non-compliance under varying trial conditions (outcome trajectory, reasons for dropout, and amount of missing data) and assessed these methods by estimating type I error rates, power, and bias of the effect estimate. Single imputation, linear mixed models, and multiple imputation (under both missing-at-random and missing-not-at-random assumptions) were among the methods investigated. Preliminary results show that type I error rates are poorly controlled when the dropout mechanism appears related to treatment, that is, when the mechanism behaves differently between groups.
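The simulation design described above can be illustrated with a minimal sketch of a single simulation cell. This is not the author's code; it assumes a simplified setting (a continuous endpoint, one follow-up visit, complete-case analysis, and data generated at the null boundary where the true difference equals the NI margin), so the rejection rate estimates the one-sided type I error. All names, parameter values, and the z-test analysis are illustrative assumptions.

```python
# Hedged sketch of one cell of an NI-trial simulation (illustrative only;
# not the study's actual code). Dropout here is missing completely at
# random (MCAR); making the dropout probability differ by arm is one way
# to mimic a treatment-related dropout mechanism.
import math
import random
import statistics

def simulate_trial(n_per_arm=100, margin=2.0, sd=6.0, dropout=0.2, rng=None):
    """One NI trial under the null; returns True if NI is (wrongly) concluded."""
    rng = rng or random
    # Experimental arm is exactly `margin` worse than control: the H0 boundary.
    control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
    treated = [rng.gauss(-margin, sd) for _ in range(n_per_arm)]
    # MCAR dropout: each subject is missing with probability `dropout`,
    # followed by a complete-case analysis of the observed outcomes.
    control = [y for y in control if rng.random() > dropout]
    treated = [y for y in treated if rng.random() > dropout]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / len(treated)
                   + statistics.variance(control) / len(control))
    # One-sided z-test of H0: diff <= -margin, at alpha = 0.025.
    z = (diff + margin) / se
    return z > 1.96

def type_one_error(n_sims=2000, seed=1):
    """Monte Carlo estimate of the type I error rate (expected near 0.025)."""
    rng = random.Random(seed)
    return sum(simulate_trial(rng=rng) for _ in range(n_sims)) / n_sims

if __name__ == "__main__":
    print(f"Estimated type I error: {type_one_error():.3f}")
```

Extending this sketch toward the study's design would mean replacing the single visit with a longitudinal trajectory, making dropout depend on observed or unobserved outcomes (MAR/MNAR), and swapping the complete-case z-test for the competing analyses (single imputation, linear mixed models, multiple imputation).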