Abstract:
|
Over the past decade, the concave penalized least squares estimator (PLSE) has been studied intensively. It has been shown that the PLSE guarantees variable selection consistency under significantly weaker conditions than the Lasso. However, the error bounds for prediction and coefficient estimation in the literature still require significantly stronger conditions than what the Lasso requires. Ideally, the selection, prediction, and estimation properties should depend only on a lower restricted eigenvalue condition; is this achievable? In this paper, we give an affirmative answer.
We prove that the concave PLSE matches the oracle inequalities for the prediction error and the Lq coefficient estimation error of the Lasso, based only on the restricted eigenvalue condition. Furthermore, under a uniform signal strength condition, selection consistency requires no additional conditions for proper concave penalties such as SCAD and MCP. Our theorems apply to all local solutions computable by path-following algorithms starting from the origin. We also develop a scaled version of the concave PLSE, which jointly estimates the regression coefficients and the noise level, at negligible additional computational cost.
|