Abstract:
|
Sparse signal recovery remains an important challenge in large-scale data analysis, and global-local (G-L) shrinkage priors have undergone explosive development over the last decade in both theory and methodology. These developments have established G-L priors as the state-of-the-art Bayesian tool for sparse signal recovery, as well as a default choice for non-linear problems. In the first half of my talk, I will survey recent advances in this area, focusing on the optimality and performance of G-L priors for both continuous and discrete data. In the second half, I will discuss a few unexplored aspects of their behavior, such as their validity as a non-convex regularization method, adaptivity to heavy-tailed errors, and extension to large discrete data structures. I will offer some insights into these problems and point out future directions. (The asymptotic optimality of the horseshoe decision rule was part of my PhD dissertation work under the guidance of Prof. J. K. Ghosh, and this talk is dedicated to his memory.)
|