
Abstract Details

Activity Number: 23 - Recent Advances in Statistical Learning and Network Data Analysis
Type: Topic Contributed
Date/Time: Sunday, July 29, 2018 : 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #330403 Presentation
Title: Generalized Bias and Variance for Convex Regularized Estimators
Author(s): Pierre Bellec*
Companies: Rutgers University
Keywords: lasso; high-dimension; regression; penalization

Convex estimators such as the Lasso, the matrix Lasso and the group Lasso have been studied extensively in the last two decades, demonstrating great success in both theory and practice. 1) The bias and variance of linear estimators are easy to define and provide precise insights on the performance of linear estimators. How can these notions be generalized to nonlinear convex estimators? 2) The performance guarantees of these estimators require the tuning parameter to be larger than some universal threshold, but the literature is mostly silent about what happens if the tuning parameter is smaller than this universal threshold. How bad is the performance when the tuning parameter is below the universal threshold? 3) The correlations in the design can significantly deteriorate the empirical performance of these nonlinear estimators. Is it possible to quantify this deterioration explicitly? Is there a price to pay for correlations; in particular, is the performance for correlated designs always worse than that for orthogonal designs? We will provide some general properties of norm-penalized estimators and propose a generalization of the bias and the variance to answer these questions.
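To make question 1) concrete, consider the simplest nonlinear case: for an orthogonal design, each Lasso coordinate is the soft-thresholding of the corresponding least-squares estimate, so its bias and variance can be estimated directly by simulation. The sketch below is purely illustrative (not the method of the talk); the function names, the choice of true value `theta = 1.0`, and the tuning parameter `lam = 0.5` are our own assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso solution for an orthogonal design: shrink z toward 0 by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def mc_bias_variance(theta, lam, sigma=1.0, n_rep=100_000, seed=0):
    """Monte Carlo estimate of the bias and variance of soft-thresholding
    when the least-squares estimate is theta + Gaussian noise."""
    rng = np.random.default_rng(seed)
    z = theta + sigma * rng.standard_normal(n_rep)  # noisy LS estimates
    est = soft_threshold(z, lam)
    return est.mean() - theta, est.var()

bias, var = mc_bias_variance(theta=1.0, lam=0.5)
print(f"bias ~ {bias:.3f}, variance ~ {var:.3f}")
```

In this toy setting the shrinkage induces a negative bias (the estimate is pulled toward zero) while the variance falls below the noise level sigma^2, which is exactly the bias-variance trade-off the abstract asks how to define for general convex regularized estimators.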

Authors who are presenting talks have a * after their name.
