Abstract:
|
Sparse signal recovery remains an important challenge in large-scale data analysis, and global-local (G-L) shrinkage priors have undergone explosive development over the last decade in both theory and methodology. These developments have established G-L priors as the state-of-the-art Bayesian tool for sparse signal recovery, as well as a default choice for non-linear problems. While there is a vast literature proposing elaborate shrinkage and sparsity priors for high-dimensional real-valued parameters, discrete data structures have received limited consideration. In the first half of this talk, I will survey recent advances in G-L shrinkage priors, focusing on their theoretical optimality for both continuous and quasi-sparse count data. In the second half, I will discuss a few unexplored aspects of their behavior, such as their validity as a non-convex regularization method, their adaptivity to heavy-tailed errors, and their extension to discrete data structures, including sparse compositional data. I will offer some insights into these problems and point out future directions.
|