Abstract:
|
Bayesian model selection is premised on the assumption that the data are generated by one of the postulated models; in many applications, however, all of these models are incorrect. When two or more models provide a nearly equally good fit to the data, Bayesian model selection can be highly unstable, potentially leading to self-contradictory findings. We explore the use of bagging on the posterior distribution ("BayesBag") for model selection -- that is, averaging the posterior model probabilities over many bootstrapped datasets. We provide theoretical results characterizing the asymptotic behavior of the standard posterior and the BayesBag posterior under misspecification in the model selection setting. We empirically assess BayesBag on synthetic and real-world data in (i) feature selection for linear regression and (ii) phylogenetic tree reconstruction. Our results demonstrate that BayesBag is an easy-to-use, widely applicable approach that improves on standard Bayesian model selection by making it more stable and reproducible.
|