
Abstract Details

Activity Number: 283 - Theoretical Advances in Deep Learning
Type: Invited
Date/Time: Wednesday, August 5, 2020, 10:00 AM to 11:50 AM EDT
Sponsor: IMS
Abstract #309307
Title: Polylogarithmic Width Suffices for Gradient Descent to Achieve Arbitrarily Small Test Error with Shallow ReLU Networks
Author(s): Matus Telgarsky*
Companies: University of Illinois - Urbana-Champaign
Keywords:
Abstract:

Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) target error 1/epsilon, and the (inverse) failure probability 1/delta. This work shows that O(1/epsilon) iterations of gradient descent with Omega(1/epsilon^2) training examples on two-layer ReLU networks of any width exceeding polylog(n, 1/epsilon, 1/delta) suffice to achieve a test misclassification error of epsilon. The analysis further relies upon a margin property of the limiting kernel, which is guaranteed positive, and can distinguish between true labels and random labels.

Joint work with Ziwei Ji.
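
The setting described in the abstract, a two-layer ReLU network trained by gradient descent for binary classification, can be made concrete with a small sketch. The Python/NumPy code below is purely illustrative and not the authors' code: the width, step size, sample size, logistic loss, and synthetic data are all assumptions chosen for demonstration, and no claim is made that this toy run reflects the paper's guarantees.

```python
import numpy as np

# Minimal illustrative sketch (not the authors' code) of the setting in the
# abstract: gradient descent on a two-layer ReLU network for binary
# classification. All sizes, the step size, and the data are assumptions.

rng = np.random.default_rng(0)

n, d, m = 200, 10, 64            # samples, input dimension, hidden width (assumed)
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])             # toy labels from a simple planted signal

# Two-layer ReLU network f(x) = a^T relu(W x); the outer layer a is fixed at
# random signs and only W is trained, a common simplification in such analyses.
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(W, X):
    H = np.maximum(X @ W.T, 0.0)   # hidden ReLU activations, shape (n, m)
    return H @ a, H                # predictions f(x_i) and activations

lr, steps = 0.1, 500
for _ in range(steps):
    f, H = forward(W, X)
    # Logistic loss log(1 + exp(-y_i f(x_i))), averaged over the sample
    grad_f = -y / (1.0 + np.exp(y * f)) / n            # dLoss/df_i
    # Chain rule through the ReLU: dLoss/dW_j = sum_i grad_f_i * a_j * 1[h_ij > 0] * x_i
    G = (H > 0.0) * (grad_f[:, None] * a[None, :])     # shape (n, m)
    W -= lr * (G.T @ X)

f, _ = forward(W, X)
print("training misclassification:", float(np.mean(np.sign(f) != y)))
```

The hidden width m above is an arbitrary choice for the sketch; the result stated in the abstract concerns widths that need only exceed polylog(n, 1/epsilon, 1/delta).
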


Authors who are presenting talks have a * after their name.
