Abstract:
|
Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) target error 1/epsilon, and the (inverse) failure probability 1/delta. This work shows that O(1/epsilon) iterations of gradient descent with Omega(1/epsilon^2) training examples on two-layer ReLU networks of any width exceeding polylog(n, 1/epsilon, 1/delta) suffice to achieve a test misclassification error of epsilon. The analysis further relies upon a margin property of the limiting kernel, which is guaranteed positive and can distinguish between true labels and random labels.
Joint work with Ziwei Ji.
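To make the setting concrete, below is a minimal NumPy sketch of the kind of training procedure the abstract refers to: a two-layer ReLU network of modest width trained by full-batch gradient descent on the logistic loss for binary classification. All names, sizes, the toy data, and the simplification of fixing the output layer at random signs are illustrative assumptions for this sketch, not details taken from the paper.

    import numpy as np

    # Illustrative toy setup (assumptions, not the paper's exact construction):
    # a two-layer ReLU network of modest hidden width m, where only the hidden
    # layer W is trained and the output weights a are fixed random signs.
    rng = np.random.default_rng(0)

    n, d, m = 200, 10, 64        # training samples, input dimension, hidden width
    lr, steps = 0.5, 200         # gradient-descent step size and number of iterations

    # Linearly separable toy data with labels in {-1, +1}.
    w_star = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_star)

    W = rng.normal(size=(m, d)) / np.sqrt(d)            # hidden weights (trained)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)    # output weights (fixed)

    def forward(X, W):
        H = np.maximum(X @ W.T, 0.0)   # ReLU hidden activations, shape (n, m)
        return H @ a                   # network outputs, shape (n,)

    for t in range(steps):
        pre = X @ W.T                        # pre-activations, shape (n, m)
        H = np.maximum(pre, 0.0)
        f = H @ a                            # predictions
        # Logistic loss l(z) = log(1 + exp(-z)) on the margins z = y * f.
        margins = y * f
        dloss = -y / (1.0 + np.exp(margins)) # dl/df for each example
        # Backpropagate through the fixed output layer and the ReLU.
        dH = np.outer(dloss, a)              # shape (n, m)
        dpre = dH * (pre > 0)                # ReLU gradient
        gW = dpre.T @ X / n                  # average gradient w.r.t. W
        W -= lr * gW

    # Evaluate test misclassification error on fresh samples from the same distribution.
    X_test = rng.normal(size=(1000, d))
    y_test = np.sign(X_test @ w_star)
    test_error = np.mean(np.sign(forward(X_test, W)) != y_test)
    print(f"test misclassification error after {steps} steps: {test_error:.3f}")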
|