Abstract Details

Activity Number: 314
Type: Contributed
Date/Time: Tuesday, August 2, 2016, 8:30 AM to 10:20 AM
Sponsor: Section on Statistical Learning and Data Science
Abstract #318602
Title: Scaled Concave Penalized Regression
Author(s): Long Feng* and Cun-Hui Zhang
Companies: Rutgers University
Keywords: Variable selection ; Concave penalization ; Restricted eigenvalue ; Oracle inequality ; Sign consistency ; Selection consistency

Over the past decade, the concave penalized least squares estimator (PLSE) has been studied intensively. It has been shown that the PLSE guarantees variable selection consistency under significantly weaker conditions than the Lasso. However, the error bounds for prediction and coefficient estimation in the literature still require significantly stronger conditions than those the Lasso requires. Ideally, the selection, prediction, and estimation properties should depend only on a lower restricted eigenvalue condition; is this achievable? In this paper, we give an affirmative answer.
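For reference, one common formulation of the lower restricted eigenvalue invoked above (the notation here is standard in the Lasso literature, not taken from the abstract): for a design matrix X with n rows, a support set S with complement S^c, and a cone constant xi,

```latex
\phi_{\mathrm{RE}}(\xi, S)
\;=\;
\min\Big\{ \frac{\|Xu\|_2^2}{n\,\|u\|_2^2}
\;:\; u \neq 0,\ \|u_{S^c}\|_1 \le \xi \|u_S\|_1 \Big\}.
```

Typical Lasso oracle inequalities bound the prediction and estimation errors by quantities inversely proportional to this constant, which is the benchmark the concave PLSE is matched against here.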

We prove that the concave PLSE matches the oracle inequalities for the prediction error and the Lq coefficient estimation error of the Lasso, based only on the restricted eigenvalue condition. Furthermore, under a uniform signal strength condition, selection consistency requires no additional conditions for proper concave penalties such as SCAD and MCP. Our theorem applies to all local solutions computable by path-following algorithms starting from the origin. We also develop a scaled version of the concave PLSE, which jointly estimates the regression coefficients and the noise level at negligible additional computational cost.
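To illustrate one of the concave penalties named above, here is a minimal Python sketch of the MCP penalty and its univariate (orthonormal-design) thresholding rule. The function names and the unit-variance setup are our own illustrative assumptions, not the authors' implementation.

```python
import math

def mcp_penalty(t, lam, gamma):
    """MCP penalty rho(t; lam, gamma), gamma > 1: quadratic taper of the
    Lasso penalty that becomes flat for |t| > gamma * lam."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - t * t / (2.0 * gamma)
    return gamma * lam * lam / 2.0

def soft_threshold(z, lam):
    """Soft-thresholding operator, the univariate Lasso solution."""
    return math.copysign(max(abs(z) - lam, 0.0), z)

def mcp_threshold(z, lam, gamma):
    """Univariate MCP solution of (z - t)^2 / 2 + rho(t; lam, gamma),
    gamma > 1: rescaled soft threshold inside the taper, identity beyond."""
    if abs(z) <= gamma * lam:
        return soft_threshold(z, lam) / (1.0 - 1.0 / gamma)
    return z
```

In a coordinate-descent or path-following scheme of the kind the abstract refers to, this rule replaces the Lasso's soft threshold; because the penalty is flat for |z| > gamma * lam, large coefficients are left unshrunk, which is what weakens the conditions needed for selection consistency.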

Authors who are presenting talks have a * after their name.

Back to the full JSM 2016 program

Copyright © American Statistical Association