Abstract:
|
We devise a new framework for fully Bayesian variable selection and model averaging that scales computationally linearly with the number of predictors and involves no MCMC. Our approach exploits an assumed block-diagonal structure of the Gram matrix, which may hold by design (as in wavelets, PCA regression, or certain experimental designs) or may result from pre-processing the predictors. At the stated cost our approach returns posterior model inclusion probabilities, the highest posterior probability model, and Bayesian model averaged parameter estimates.
This is joint work with David Rossell (Warwick).
|
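The abstract's key structural assumption can be illustrated concretely: when predictor blocks are mutually orthogonal, the Gram matrix X'X is block-diagonal, so computations factor across blocks. The sketch below is illustrative only (the block sizes, data, and orthogonalization via QR are assumptions, not part of the authors' method); it simply verifies the block-diagonal structure that the framework exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Construct two mutually orthogonal predictor blocks via QR,
# so the Gram matrix X'X is block-diagonal by construction
# (mimicking, e.g., wavelet or PCA-transformed predictors).
Q, _ = np.linalg.qr(rng.standard_normal((n, 5)))
X1, X2 = Q[:, :3], Q[:, 3:]        # block sizes 3 and 2 (arbitrary choice)
X = np.hstack([X1, X2])

G = X.T @ X                        # Gram matrix
cross_block = G[:3, 3:]            # off-diagonal (cross-block) entries
print(np.allclose(cross_block, 0.0))  # True: Gram matrix is block-diagonal
```

Because the cross-block entries vanish, posterior quantities that depend on the data only through X'X decompose block by block, which is what permits the linear cost in the number of predictors.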