Online Program

Friday, May 18
Computing Science
Distinguished Colleagues of Edward Wegman: Applications to Data Science
Fri, May 18, 10:30 AM - 12:00 PM
Grand Ballroom D

Bayesian Penalty Mixing with the Spike-and-Slab Lasso (304744)

Presentation

*Edward George, University of Pennsylvania 

Keywords: Bayesian variable selection; High-dimensional regression; LASSO; Penalized likelihood

Despite the wide adoption of spike-and-slab methodology for Bayesian variable selection, its potential for penalized likelihood estimation has largely been overlooked. We bridge this gap by cross-fertilizing these two paradigms with the Spike-and-Slab Lasso, a procedure for simultaneous variable selection and parameter estimation in linear regression. A mixture of two Laplace distributions, the Spike-and-Slab Lasso prior induces a new class of self-adaptive penalty functions that arise from a fully Bayes spike-and-slab formulation, ultimately moving beyond the separable penalty framework. A virtue of these non-separable penalties is their ability to borrow strength across coordinates, adapt to ensemble sparsity information, and exert multiplicity adjustment. With a path-following scheme for dynamic posterior exploration and efficient EM and coordinatewise implementations, the fully Bayes penalty is seen to mimic oracle performance, providing a viable alternative to cross-validation. Further elaborations of the Spike-and-Slab Lasso for fast Bayesian factor analysis illuminate its broad potential. (This is joint work with Veronika Rockova.)
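
For intuition, the following is a minimal Python sketch of the mixture-of-two-Laplaces prior mentioned in the abstract and the self-adaptive shrinkage it induces. It is an independent illustration, not the authors' implementation, and the hyperparameter values (lambda1, lambda0, theta) are hypothetical placeholders rather than settings from the talk.

import numpy as np

# Illustrative sketch (not the authors' code): the Spike-and-Slab Lasso prior
# mixes a diffuse Laplace "slab" (small lambda1) with a concentrated Laplace
# "spike" (large lambda0). All parameter values below are hypothetical.

def laplace_density(beta, lam):
    # Double-exponential density psi(beta | lam) = (lam / 2) * exp(-lam * |beta|)
    return 0.5 * lam * np.exp(-lam * np.abs(beta))

def slab_probability(beta, lambda1=1.0, lambda0=20.0, theta=0.5):
    # Conditional probability that a coefficient was drawn from the slab
    slab = theta * laplace_density(beta, lambda1)
    spike = (1.0 - theta) * laplace_density(beta, lambda0)
    return slab / (slab + spike)

def adaptive_shrinkage(beta, lambda1=1.0, lambda0=20.0, theta=0.5):
    # Self-adaptive penalty rate: blends the two Laplace rates by the slab probability,
    # so coefficients near zero are shrunk aggressively and large ones only mildly
    p_star = slab_probability(beta, lambda1, lambda0, theta)
    return lambda1 * p_star + lambda0 * (1.0 - p_star)

betas = np.linspace(-3.0, 3.0, 7)
print(adaptive_shrinkage(betas))  # close to lambda0 near zero, close to lambda1 for large |beta|

In the fully Bayes, non-separable version described in the abstract, the mixing weight theta itself carries a prior and is shared across coordinates, which is what allows the penalty to borrow strength and adjust for multiplicity; the fixed theta above is purely for illustration.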