
All Times EDT

Abstract Details

Activity Number: 283 - Theoretical Advances in Deep Learning
Type: Invited
Date/Time: Wednesday, August 5, 2020, 10:00 AM to 11:50 AM
Sponsor: IMS
Abstract #309301
Title: Good Linear Classifiers Are Abundant in the Interpolating Regime
Author(s): Jason Klusowski* and Ryan Theisen and Michael Mahoney
Companies: Rutgers University and UC Berkeley and UC Berkeley
Keywords:
Abstract:

Understanding how over-parameterized models can generalize well is a central question in modern learning theory. The widely used uniform convergence framework seeks to answer this question by bounding the test error of the worst-case model. In this talk, we revisit the statistical mechanics approach to learning, which instead attempts to understand the behavior of the typical model. To quantify this typicality in the setting of over-parameterized linear classification, we develop a method to accurately compute the fraction of interpolating classifiers that attain a given (small) test error value. Empirically, we find that in many regimes this fraction is nearly one, indicating that most linear classifiers have low test error. We also observe an interesting phase transition: for a given training and testing set, there is a critical test error below which this fraction is identically zero and above which it quickly approaches one. Hence, this critical value characterizes the test performance of the typical classifier. We then study this phenomenon theoretically and derive simple expressions for the critical error value, which qualitatively replicate the empirical behavior.
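As a rough illustration of the quantity the abstract describes (a minimal sketch, not the authors' actual method), the snippet below samples random linear classifiers on a small synthetic over-parameterized problem, keeps those that interpolate the training set, and estimates the fraction of those interpolators whose test error falls at or below several thresholds. The data-generating setup, the dimensions, and the rejection-sampling scheme are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Over-parameterized toy problem: more features (d) than training points (n_train).
# The "true" direction w_true is a hypothetical construct used only to label data.
n_train, n_test, d = 10, 500, 30
w_true = rng.standard_normal(d)
X_train = rng.standard_normal((n_train, d))
y_train = np.sign(X_train @ w_true)
X_test = rng.standard_normal((n_test, d))
y_test = np.sign(X_test @ w_true)

# Rejection sampling: draw many random directions and keep those that classify
# every training point correctly -- the "interpolating" classifiers.
n_samples = 200_000
W = rng.standard_normal((n_samples, d))
interpolates = np.all(np.sign(W @ X_train.T) == y_train, axis=1)
W_int = W[interpolates]

# Test error of each interpolating classifier.
errors = np.mean(np.sign(W_int @ X_test.T) != y_test, axis=1)

# Estimated fraction of interpolators attaining each test error level eps; the
# phase transition described in the abstract corresponds to this fraction jumping
# from 0 toward 1 as eps crosses a critical value.
print(f"interpolating classifiers found: {W_int.shape[0]} of {n_samples}")
for eps in (0.05, 0.10, 0.20, 0.30):
    frac = np.mean(errors <= eps) if errors.size else float("nan")
    print(f"fraction with test error <= {eps:.2f}: {frac:.3f}")

A direct rejection sampler like this is used only to keep the illustration short; it would not scale to the realistic regimes studied in the talk, which rely on a more refined computation.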


Authors who are presenting talks have a * after their name.
