Abstract:
|
A main challenge in high-dimensional variable selection is to enforce sparsity adequately. Because of theoretical and computational difficulties in non-standard settings, most research efforts have been directed towards linear regression with Normal errors. Naturally, in actual applications errors may not be Normal, hence it is important to relax this assumption and consider its implications. We extend the usual Bayesian variable selection framework for Normal linear models to more flexible errors that can capture asymmetries and heavier-than-Normal tails. Importantly, the error structure is learnt from the data, so that the model automatically reduces to Normal errors when the extra flexibility is not needed. We show that the corresponding log-likelihoods are concave, leading to computationally efficient optimization and integration, and hence rendering the approach practical in high dimensions. Further, although the models are slightly non-regular, we show that one can obtain asymptotic characterizations in an M-open situation, i.e. where the data are truly generated by a model outside the considered families. Our examples show that while the extra flexibility has no significant
|