Abstract:
|
Variable selection for high-dimensional problems involves a trade-off between statistical and computational efficiency. Penalization methods such as the LASSO solve a convex relaxation of the best-subset problem in polynomial time, but achieve provable guarantees for selection only in limited settings. Is it possible to achieve better statistical performance for variable selection in a computationally faster manner? We answer in the affirmative, introducing a new approach to variable selection that we term Algorithmic Regularization Paths. Our method quickly finds a sequence of sparse models, similar in spirit to regularization paths, by solving a series of linked subproblems associated with the Alternating Direction Method of Multipliers (ADMM) algorithm. Here, we introduce this novel method, provide intuition for where the algorithm originates, study its theoretical properties, and draw connections to existing regularization methods. Empirical results show that our Algorithmic Regularization Paths yield better performance in terms of variable selection and prediction error, and are moreover computationally faster than existing approaches.
|
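To illustrate the general idea of producing a regularization path from linked ADMM subproblems, the sketch below applies standard ADMM to a sequence of lasso problems, warm-starting each subproblem from the previous solution. This is a minimal textbook-style illustration, not the authors' Algorithmic Regularization Paths method; the function names, the penalty parameter `rho`, and the fixed iteration count are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso_path(A, b, lambdas, rho=1.0, n_iter=100):
    """Solve a sequence of lasso problems (1/2)||Ax - b||^2 + lam*||x||_1
    with ADMM, warm-starting each subproblem from the previous solution.
    Returns a list of (lam, coefficient vector) pairs."""
    n, p = A.shape
    x = np.zeros(p)
    z = np.zeros(p)
    u = np.zeros(p)
    # The x-update linear system is the same for every lambda, so factor once.
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(p))
    path = []
    for lam in sorted(lambdas, reverse=True):  # large -> small: sparsest model first
        for _ in range(n_iter):
            # x-update: ridge-type linear system solved via the cached factor.
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            # z-update: soft-thresholding enforces sparsity.
            z = soft_threshold(x + u, lam / rho)
            # Dual update.
            u = u + x - z
        path.append((lam, z.copy()))
    return path
```

Sweeping the penalty from large to small mirrors how regularization paths are usually traced: the sparsest model comes first, and each warm start makes the next subproblem cheap to solve.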