Abstract:
|
In traditional learning theory, interpolation has usually been associated with overfitting and poor generalization. Modern machine learning algorithms cast doubt on this paradigm and often generalize well despite being expressive enough to fit random labels. On the other hand, neural networks, for example, have been shown to fail under adversarial perturbations even when their standard accuracy is high. We describe the interaction between the generalization of interpolating estimators and the occurrence of adversarial examples. In particular, we characterize in a unifying framework under which conditions generalization via interpolation is not possible, and when an inherent tradeoff between adversarial and standard error is inevitable in the finite-sample setting.
|