547 – Contributed Oral Poster Presentations: Biopharmaceutical Section
On the Sensitivity of the Lasso to the Number of Predictor Variables
Cheryl Flynn
New York University
Clifford M. Hurvich
New York University
Jeffrey S. Simonoff
New York University
The Lasso is a computationally efficient procedure that can produce sparse estimators when the number of predictors ($p$) is large. Oracle inequalities provide bounds, holding with high probability, on the loss of the Lasso estimator at a deterministic choice of the regularization parameter. These bounds tend to zero if $p$ is appropriately controlled, and are thus commonly cited as theoretical justification for the Lasso and its ability to handle high-dimensional settings. Unfortunately, in practice the regularization parameter is not a deterministic quantity, but is instead chosen by a random, data-dependent procedure. To address this shortcoming of previous theoretical work, we study the loss of the Lasso estimator when it is tuned optimally for prediction. Assuming orthonormal predictors and a sparse true model, we prove that, with positive probability, the best possible predictive performance of the Lasso deteriorates as $p$ increases. We further demonstrate empirically that this deterioration can be far worse than the commonly held views in the literature suggest, and that it persists as the sample size increases.
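Under the orthonormal-design assumption of the abstract, the Lasso solution reduces to componentwise soft-thresholding of the least-squares coefficients, which makes the prediction-optimal tuning easy to compute by a grid search over the regularization parameter. The sketch below illustrates this setting; the signal magnitude, sparsity level, noise variance, and grid resolution are illustrative assumptions, not the authors' choices.

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form Lasso update under orthonormal design."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def best_lasso_loss(p, k=5, signal=2.0, sigma=1.0, n_grid=200, rng=None):
    """Loss of the Lasso at the prediction-optimal (oracle-tuned) lambda.

    Sparse true model: k nonzero coefficients of size `signal` among p.
    With orthonormal predictors, the least-squares coefficients are
    z = beta + noise, and the squared prediction loss equals the squared
    estimation loss ||beta_hat - beta||^2 / p (averaged here per coordinate).
    """
    rng = np.random.default_rng(rng)
    beta = np.zeros(p)
    beta[:k] = signal                      # sparse true model (assumed)
    z = beta + sigma * rng.standard_normal(p)
    lams = np.linspace(0.0, np.abs(z).max(), n_grid)
    losses = [np.mean((soft_threshold(z, lam) - beta) ** 2) for lam in lams]
    return min(losses)                     # best achievable over the grid

# How the best achievable loss behaves as p grows (one noise draw):
for p in (50, 200, 800):
    print(p, best_lasso_loss(p, rng=0))
```

Note that "tuned optimally for prediction" here means choosing the regularization parameter to minimize the true loss, which requires knowledge of the true coefficients; this is an oracle benchmark for the best possible performance, matching the quantity studied in the abstract rather than a feasible data-driven selector.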