Shrinkage priors are widely used in high-dimensional settings for variable selection, prediction, and learning. Two generic flavors are currently in use: the first places a single global shrinkage parameter on all coefficients, while the second assigns an individual (local) shrinkage parameter to each variable. The local approach has attracted considerable recent interest, notably through the horseshoe prior of Polson and Scott (2010), and has been shown to outperform the global-shrinkage paradigm in both experimental and theoretical settings.
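The two flavors can be written as hierarchical models; the following is the standard textbook formulation (the symbols $\tau$, $\lambda_j$, and $p$ are conventional notation, not taken from this abstract), with the half-Cauchy scale mixture giving the horseshoe as a representative local prior:

```latex
% Global shrinkage: a single parameter tau governs every coefficient
\beta_j \mid \tau \;\sim\; \mathcal{N}(0,\, \tau^2),
\qquad j = 1, \dots, p.

% Local (horseshoe-type) shrinkage: each coefficient carries its own
% half-Cauchy-distributed scale lambda_j on top of the global scale tau
\beta_j \mid \lambda_j, \tau \;\sim\; \mathcal{N}(0,\, \lambda_j^2 \tau^2),
\qquad \lambda_j \;\sim\; \mathrm{C}^{+}(0, 1).
```

The heavy tail of the half-Cauchy lets individual $\lambda_j$ escape the global shrinkage, which is the mechanism behind the local approach's advantage for sparse signals.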
We argue in this article that neither approach is optimal, from either a theoretical or a computational perspective. We propose a new variant of shrinkage prior, a "middle path" between the global and local approaches, and demonstrate superior empirical performance along with significant gains in computational efficiency. We apply the proposed algorithm to high-dimensional genetic data and compare it against competing approaches.
ASA Meetings Department
732 North Washington Street, Alexandria, VA 22314
(703) 684-1221 • meetings@amstat.org
Copyright © American Statistical Association.