Abstract:
Weighted nearest neighbor (WNN) classifiers are fundamental non-parametric classifiers. There is considerable flexibility in the choice of weights for the neighbors in a WNN classifier. In this talk, we will introduce a new locally weighted nearest neighbor (LWNN) classifier that adaptively assigns weights to different test data points. Given a training data set and a test data point x0, the weights for classifying x0 in LWNN are obtained by minimizing an upper bound on the estimation error of the regression function at x0. Like most other WNN classifiers, LWNN places larger weights on closer neighbors. However, the weights in LWNN depend not only on the ranks of the neighbors' distances but also on their raw values. Our theoretical study shows that LWNN is minimax optimal when the marginal feature density is bounded away from zero. In the general case, under an additional tail assumption on the feature density, the upper bound on the excess risk of LWNN matches the minimax lower bound up to a logarithmic term. Numerical comparisons between LWNN and several competing methods further demonstrate its effectiveness.
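
For context, the standard binary WNN framework (this is the common setup, not the specific LWNN construction; the weight notation w_{ni} is introduced here only for illustration) can be written as follows. With Y_{(i)} denoting the label of the i-th nearest neighbor of x0 among n training points, the regression function estimate and decision rule are

    \hat{\eta}_n(x_0) = \sum_{i=1}^{n} w_{ni} \, \mathbf{1}\{ Y_{(i)} = 1 \},
    \qquad w_{ni} \ge 0, \quad \sum_{i=1}^{n} w_{ni} = 1,

with x0 assigned to class 1 whenever \hat{\eta}_n(x_0) \ge 1/2. In this notation, LWNN selects the weights w_{ni} separately at each test point x0 by minimizing an upper bound on the estimation error |\hat{\eta}_n(x_0) - \eta(x_0)|, so that the weights vary with the raw neighbor distances rather than with their ranks alone.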