Abstract:
|
$\ell_1$-regularized quantile regression ($\ell_1$-QR) provides a fundamental technique for analyzing high-dimensional data that are heterogeneous and subject to potentially heavy-tailed random errors. We show that $\ell_1$-QR can achieve a near-oracle error bound for estimating the regression coefficients under conditions weaker than those in the literature, and that $\ell_1$-QR is nearly optimal in a minimax sense without requiring the Gaussian error assumption. We provide both theoretical and numerical evidence for scenarios in which $\ell_1$-QR can outperform LS-Lasso. Furthermore, we show that under some regularity conditions, any local solution of nonconvex penalized quantile regression can achieve the near-oracle rate in high dimensions.
|