Abstract Details

Activity Number: 229 - Advances in the Neyman-Pearson Classification
Type: Topic Contributed
Date/Time: Monday, July 29, 2019, 2:00 PM to 3:50 PM
Sponsor: WNAR
Abstract #304660
Title: Neyman-Pearson Classification: An Umbrella Algorithm
Author(s): Xin Tong* and Yang Feng and Jingyi Jessica Li
Companies: University of Southern California and Columbia University and University of California, Los Angeles
Keywords: Neyman-Pearson classification; type I error; asymmetry

In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized or properly implemented in classification. A common practice is to limit the empirical type I error to no more than α, but this does not achieve the control objective: the resulting classifiers are likely to have type I errors much larger than α. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests.
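One way a high-probability type I error guarantee can be obtained for a scoring-type classifier is to pick the decision threshold as an order statistic of scores on a held-out class-0 sample, using a binomial tail bound to choose the rank. The sketch below illustrates that idea only; the function name `np_threshold` and the default violation rate `delta` are illustrative assumptions, not necessarily the authors' exact procedure.

```python
import math

def np_threshold(class0_scores, alpha=0.05, delta=0.05):
    """Choose a cutoff from held-out class-0 scores so that the rule
    'predict class 1 when score > cutoff' has type I error <= alpha
    with probability at least 1 - delta over the held-out sample.

    Order-statistic argument: with the k-th smallest score as cutoff,
    P(type I error > alpha) = P(Binomial(n, 1 - alpha) >= k), so we take
    the smallest k whose binomial tail is at most delta."""
    n = len(class0_scores)
    ordered = sorted(class0_scores)
    for k in range(1, n + 1):
        violation = sum(
            math.comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
            for j in range(k, n + 1)
        )
        if violation <= delta:
            return ordered[k - 1]  # k-th smallest held-out class-0 score
    raise ValueError(
        "held-out class-0 sample too small for the requested (alpha, delta)"
    )

# With 100 held-out class-0 scores and alpha = delta = 0.05, the rule
# selects a high order statistic of the class-0 scores as the cutoff.
cutoff = np_threshold(list(range(100)), alpha=0.05, delta=0.05)
```

Because the bound depends only on the ranks of held-out class-0 scores, the same thresholding step can sit on top of any base method that outputs scores, which is what makes the approach an "umbrella" over logistic regression, SVMs, random forests, and the like.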

Authors who are presenting talks have a * after their name.
