Abstract:
|
We provide a framework for assessing the default nature of a prior distribution. Our approach is based on a property known as regular variation, which we study for the popular class of global-local shrinkage priors. In particular, we show that the horseshoe priors, which were originally designed to handle sparsity, also possess the desirable property of regular variation and are thus appropriate for default Bayesian analysis. To illustrate our methodology, we address a problem of non-informative priors due to Efron (1973), who showed that standard non-informative priors that are "flat" for a high-dimensional normal means model can be highly informative for a nonlinear parameter of interest. We consider four such problems in the current article when the underlying true parameter vector is sparse: (1) the sum of squares of normal means, (2) the maximum of normal means, (3) the ratio of normal means, and (4) the product of normal means. We show that recently proposed global-local shrinkage priors such as the horseshoe and horseshoe+ perform well in each case. Using properties of regular variation, we present a unified framework to demonstrate that the reason for this lies in the regularly varying tails of these priors.
|