Motivated by recent progress on the problem of providing adjusted inference after model selection, we explore analogous Bayesian methods. The focus is on the Gaussian linear model, and the framework of Yekutieli (2012) is adopted wherein the joint distribution of the parameters and the data incorporates a truncated likelihood and a pre-specified prior.
Existing frequentist work capitalizes on the fact that testing a one-dimensional linear hypothesis on the coefficients after selection reduces to working with a univariate truncated Gaussian distribution; by contrast, Bayesian inference requires handling the full, complicated likelihood, which also involves nuisance parameters. One of our main contributions is a tractable approximation to this likelihood, which can also be used on its own to obtain frequentist point estimates. By appending a prior to the approximate selective likelihood, we can take advantage of the computational flexibility of Bayesian methods; in addition, a prior can compensate for information lost through truncation, thereby improving the accuracy of inference.
This is joint work with Snigdha Panigrahi and Jonathan Taylor.