Abstract:
|
Markov chain Monte Carlo (MCMC) methods are frequently used to approximately simulate high-dimensional, multimodal probability distributions. In adaptive MCMC, the transition kernel is changed "on the fly" in the hope of speeding up convergence. We study interacting tempering, an adaptive MCMC algorithm based on interacting Markov chains, which can be seen as a simplified version of the equi-energy sampler. Under easy-to-verify assumptions on the target distribution (on a finite space), we show that the interacting tempering process rapidly forgets its starting distribution. This holds true in many settings where the process is known to converge exponentially slowly to its limiting distribution. Consequently, we argue that convergence diagnostics based on demonstrating that the process has forgotten its starting distribution (such as the popular Gelman-Rubin diagnostic) might be of limited use for adaptive MCMC algorithms like interacting tempering.
|
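To make the setting concrete, here is a minimal toy sketch of an interacting-tempering-style sampler on a finite space, in the spirit of a simplified equi-energy sampler. All specifics (the bimodal target `w`, the temperature `T`, the interaction probability `eps`, and the two-chain setup) are illustrative assumptions, not the paper's exact algorithm: a "hot" chain explores a flattened version of the target, and the "cold" chain occasionally proposes jumping to a state drawn from the hot chain's history.

```python
import math
import random

# Illustrative sketch only (assumed setup, not the paper's construction):
# finite state space {0, ..., n-1}, bimodal target pi proportional to w.
random.seed(0)

n = 20
# Two well-separated modes at 2 and 17, with a deep valley in between.
w = [math.exp(-min((i - 2) ** 2, (i - 17) ** 2)) for i in range(n)]

def local_step(x, beta):
    """One Metropolis step with +/-1 proposals targeting w**beta."""
    y = (x + random.choice([-1, 1])) % n
    if random.random() < min(1.0, (w[y] / w[x]) ** beta):
        return y
    return x

T = 5.0    # hot-chain temperature (illustrative choice)
eps = 0.1  # probability of an interaction move (illustrative choice)

hot, cold = 0, 0
hot_history = [hot]
visits = [0] * n  # states visited by the cold chain

for _ in range(20000):
    hot = local_step(hot, 1.0 / T)
    hot_history.append(hot)
    if random.random() < eps:
        # Interaction move: propose a state from the hot chain's history;
        # the acceptance ratio corrects for the temperature mismatch,
        # since the history approximates pi**(1/T).
        y = random.choice(hot_history)
        if random.random() < min(1.0, (w[y] / w[cold]) ** (1.0 - 1.0 / T)):
            cold = y
    else:
        cold = local_step(cold, 1.0)
    visits[cold] += 1
```

The interaction moves let the cold chain hop between modes that plain Metropolis would cross only very rarely, which is exactly the mechanism whose adaptive, history-dependent nature complicates the usual convergence analysis.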
Copyright © American Statistical Association.