Statistical strategy for assay validation
*Charles Y Tan, Pfizer Worldwide R&D 

Keywords: assay validation, equivalence tests

One of the significant methodological advances of the past decade is the increasingly widespread recognition that traditional significance tests do not meet the needs of assay validation, because they reward less data and more variability. The trend has been moving toward interval conformance tests, also known as equivalence tests. The recently updated USP bioassay chapters epitomize this welcome trend. In this talk, I’ll present a validation strategy that insists on interval conformance tests for (relative) accuracy and precision, but allows alternatives, such as Akaike's Information Criterion, to help us make decisions on deeper-level characteristics, e.g., linear range, parallelism, and poolability of variances. Acceptance criteria for (relative) accuracy and precision can be linked to the “intended use” more easily than those for the deeper characteristics. I’ll argue that we should power the validation study to meet the acceptance criteria for (relative) accuracy and precision. Meanwhile, the strategy for the deeper characteristics should be to make sensible, data-supported decisions, not necessarily to prove them “beyond reasonable doubt”. The versatility of information criteria as alternatives to hypothesis testing is illustrated with several examples using AICc.
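
As a rough illustration of the two pieces of this strategy, the Python sketch below runs an interval conformance (equivalence) check on relative accuracy and then uses AICc to choose between a linear and a quadratic dilution-response model. The data, the 80–125% acceptance limits, and the model choices are hypothetical placeholders for illustration only, not taken from the talk.

```python
# Illustrative sketch only: hypothetical data and acceptance limits.
import numpy as np
from scipy import stats

# --- Interval conformance (equivalence) test for relative accuracy ---
# Each validation run yields measured/nominal potency; work on the log scale
# and require the 90% CI for mean relative bias to sit inside assumed
# equivalence bounds of 80%-125%.
recovery = np.array([0.97, 1.05, 0.92, 1.08, 1.01, 0.95])   # hypothetical ratios
log_rec = np.log(recovery)
n = len(log_rec)
mean, se = log_rec.mean(), log_rec.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)                         # 90% two-sided CI
ci_low, ci_high = mean - t_crit * se, mean + t_crit * se
lo, hi = np.log(0.80), np.log(1.25)                          # assumed acceptance limits
conforms = (ci_low > lo) and (ci_high < hi)
print(f"90% CI for relative bias: [{np.exp(ci_low):.3f}, {np.exp(ci_high):.3f}]"
      f" -> {'conforms' if conforms else 'does not conform'}")

# --- AICc for a 'deeper' characteristic: linearity of the dilution response ---
def aicc(rss, n_obs, k):
    """Small-sample corrected AIC for a Gaussian model with k parameters (incl. sigma)."""
    aic = n_obs * np.log(rss / n_obs) + 2 * k
    return aic + 2 * k * (k + 1) / (n_obs - k - 1)

x = np.log2([1, 2, 4, 8, 16, 32, 64, 128])                   # hypothetical dilution series
y = np.array([0.8, 1.6, 2.5, 3.3, 4.2, 4.9, 5.6, 6.1])       # hypothetical responses

for name, degree in [("linear", 1), ("quadratic", 2)]:
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 2                                            # polynomial coefficients + sigma
    print(f"{name:9s} AICc = {aicc(rss, len(y), k):.2f}")
# The model with the lower AICc is preferred; no p-value or power argument is needed
# for this decision, only for the accuracy/precision acceptance criteria above.
```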