Abstract:
|
Bayesian computation relies on approximations, whether from Monte Carlo error, as in Markov chain Monte Carlo, or from simplified posterior structures, as in variational Bayes. Optimization often plays second fiddle to integration, given the inferential role of marginal posterior distributions. But there is a class of optimization schemes that has long been known to provide approximate posterior sampling in certain models. I will review recent developments in the domain of randomly weighted objective functions, drawing connections to model-guided nonparametric Bayesian inference, including the weighted Bayesian bootstrap and the loss-likelihood bootstrap. The use of random weighting with clustering is also of interest, and, if time permits, I will discuss improvements to large-scale testing in genomics based on random weighting schemes.
Based in part on a CJS paper with Nick Polson and Jianeng Xu, and on work with my PhD student Tun Lee Ng.
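
To fix ideas, below is a minimal sketch of the random-weighting recipe behind weighted-bootstrap schemes: draw independent Exponential(1) weights over the observations, optimize the randomly weighted objective, and treat the collection of optimizers as approximate posterior draws. The Gaussian location model, the Exp(1) weight distribution, the replicate count, and all names in the code are illustrative assumptions, not details taken from the talk or the cited papers.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=100)   # simulated data, for illustration only

def weighted_negloglik(theta, y, w):
    # randomly weighted negative log-likelihood for an assumed N(theta, 1) model
    return np.sum(w * 0.5 * (y - theta) ** 2)

draws = []
for _ in range(1000):
    w = rng.exponential(scale=1.0, size=y.size)               # i.i.d. Exp(1) random weights
    fit = minimize(weighted_negloglik, x0=np.zeros(1), args=(y, w))
    draws.append(fit.x[0])                                    # each optimizer serves as one approximate posterior draw

draws = np.array(draws)
print(draws.mean(), draws.std())   # roughly comparable to the N(ybar, 1/n) posterior under a flat prior

Replacing the squared-error term with an arbitrary loss gives, roughly, the flavor of the loss-likelihood bootstrap, where the optimizer of a randomly weighted loss plays the role of a posterior draw for the loss-defined parameter.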
|