Abstract:
|
We study the performance of l_q regularized least squares, also known as bridge estimators, for 0 <= q <= 2 under the asymptotic setting in which the number of observations grows proportionally with the number of predictors. In our analysis, the vector of regression coefficients is assumed to be sparse. Despite the non-convexity of bridge optimization for 0 < q < 1, these estimators remain appealing because of their closer proximity to the "ideal" l_0 regularized least squares. In this talk, we analyze the properties of the global minimizer of the bridge optimization problem under the optimal tuning of the regularization parameter. Our goal is to answer the following questions: (i) Do non-convex regularizers outperform convex regularizers? (ii) Does q = 1 outperform the other convex choices when the regression coefficient vector is sparse? We discuss both the predictive power and the variable selection accuracy of these estimators. If time permits, we also discuss algorithms that can provably reach the global minima of the non-convex problems in certain regimes.
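The bridge objective discussed above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the toy data, the choice lam = 0.5, and the use of a generic local solver are all assumptions for demonstration. For 0 < q < 1 the objective is non-convex, so a local solver like this offers no guarantee of reaching the global minimizer that the talk analyzes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]  # sparse coefficient vector
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def bridge_objective(beta, X, y, lam, q):
    # least-squares loss plus the l_q penalty: lam * sum_j |beta_j|^q
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta) ** q)

# q = 1 gives the convex lasso case; for 0 < q < 1 the same code runs
# but only a local minimum is guaranteed
res = minimize(bridge_objective, x0=np.zeros(p), args=(X, y, 0.5, 1.0))
beta_hat = res.x
```

With q = 0 the penalty counts nonzero coefficients (the "ideal" l_0 case), and with q = 2 it recovers ridge regression, so this one objective spans the whole family the abstract refers to.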
This talk is based on a joint work with Haolei Weng, Shuaiwen Wang, and Le Zheng.
|