Online Program

All Times EDT

Wednesday, September 22
Wed, Sep 22, 1:00 PM - 2:00 PM
Virtual
Poster Session I

Statistical Considerations of Bioequivalence Studies Using In Vitro Permeation Test (IVPT) Data (302357)

*Neha Agarwala, University of Maryland, Baltimore County 
Nam Hee Choi, US Food and Drug Administration 
Sungwoo Choi, US Food and Drug Administration 
Jessica Kim, US Food and Drug Administration 
Elena Rantou, FDA/CDER 

Keywords: Topical products, IVPT, Reference-Scaled Bioequivalence, Outliers

For topical dermatological drug products, establishing bioequivalence (BE) between a generic (Test) and a reference listed drug (RLD) product based on the in vitro permeation test (IVPT) consists of comparing the rate and extent of drug permeation through excised human skin. The IVPT data analysis uses a reference-scaled approach and two statistical endpoints: cumulative amount penetrated (AMT) and maximum flux (Jmax). Commonly arising issues in this analysis are the presence of aberrant data values, or outliers, and the existence of zero endpoint values, along with the numerous challenges that such values pose to the IVPT review. In the past, frequently raised questions have concerned considerations for removing a donor due to "aberrant" data and related challenges, handling an unbalanced dataset when it is not feasible to replace a cell identified as an outlier, the validity of the normality assumptions associated with commonly used outlier tests, and the handling of zero values. Recently, substantial progress has been made, especially toward the analysis of unbalanced datasets. However, open questions remain, such as:

1. Is it possible to impact an evaluation of BE by selecting a specific type of outlier test or by manipulating the significance level associated with the test?
2. Does the presence of outliers in a dataset contribute to a higher within-reference variability (SWR), and consequently exaggerate the scaling and inflate the type-I error rate?
3. Is it possible to develop approaches to identify statistical outliers that are well differentiated from the dataset or that differ from what would be expected due to inherent variability in the data?
4. How can we handle zero endpoint values in a way that overcomes the computational deficiency while still accounting for small endpoint values in the analysis?
5. How do all these considerations affect the performance of the statistical test?
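The abstract mentions the reference-scaled approach and the within-reference variability (SWR) without spelling out the criterion. Below is a minimal, hedged sketch of how a scaled-average-bioequivalence-style check on log-transformed Jmax might look. The simulated data, the 0.294 switching cutoff, and the use of a simple point estimate (rather than an upper confidence bound) are illustrative assumptions, not the exact FDA IVPT procedure discussed in the presentation.

```python
import numpy as np

# Illustrative sketch (not the FDA's exact IVPT procedure): a point-estimate
# check of a scaled average bioequivalence (SABE) style criterion applied to
# log-transformed Jmax values. Assumptions: balanced data with two replicate
# reference skin sections per donor, the conventional regulatory constant
# theta = (ln 1.25 / 0.25)^2, and a within-reference-variability cutoff of
# s_WR >= 0.294 for switching from unscaled BE to reference scaling.

rng = np.random.default_rng(0)
n_donors, n_reps = 6, 2  # donors and replicate skin sections per product

# Hypothetical log-transformed Jmax data: rows = donors, columns = replicates
log_ref = rng.normal(loc=2.00, scale=0.35, size=(n_donors, n_reps))
log_test = rng.normal(loc=2.05, scale=0.35, size=(n_donors, n_reps))

# Within-reference variance from replicate differences: with two replicates,
# each within-donor difference has variance 2*sigma^2_WR.
diff_ref = log_ref[:, 0] - log_ref[:, 1]
s2_wr = np.sum(diff_ref ** 2) / (2 * n_donors)
s_wr = np.sqrt(s2_wr)

# Point estimate of the Test - Reference difference on the log scale
delta = log_test.mean() - log_ref.mean()

theta = (np.log(1.25) / 0.25) ** 2  # regulatory scaling constant

if s_wr >= 0.294:
    # Reference-scaled criterion (point estimate only; a real analysis would
    # use an upper confidence bound, e.g. via Howe's approximation)
    criterion = delta ** 2 - theta * s2_wr
    print(f"s_WR = {s_wr:.3f} -> scaled criterion = {criterion:.4f} "
          f"(BE supported if <= 0 and the GMR falls within fixed limits)")
else:
    # Otherwise fall back to an unscaled, average-BE style comparison
    gmr = np.exp(delta)
    print(f"s_WR = {s_wr:.3f} -> unscaled GMR = {gmr:.3f} "
          f"(compare against fixed BE limits)")
```

One point this sketch makes concrete is question 2 above: because s2_wr enters the scaled criterion multiplied by theta, outliers that inflate the within-reference variability loosen the effective BE margin, which is the mechanism by which scaling can be exaggerated and the type-I error rate inflated.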