Activity Number: 344
Type: Invited
Date/Time: Wednesday, August 10, 2005, 8:30 AM to 10:20 AM
Sponsor: Section on Statistical Computing
Abstract - #302589
Title: Convergence of Adaptive Importance Sampling Algorithms
Author(s): Jean-Michel Marin and Arnaud Doucet*+ and Christian Robert
Companies: Cambridge University and University Paris-Sud and CEREMADE and University Paris-Sud and CEREMADE
Address: Signal Processing, Dept. of Engineering, Cambridge, International, CB3 0DS, United Kingdom
Keywords: Bayesian inference; MCMC algorithms; adaptive algorithms; iterated importance sampling; Population Monte Carlo; convergence
Abstract:

For numerous models, exact Bayesian inference is impossible: in many cases, the derivation of the posterior distribution leads to intractable calculations. The Bayesian computational literature has been dominated by simulation approximations based on Markov chains, the well-known Markov chain Monte Carlo (MCMC) methods. However, the more complicated the model, the more expensive MCMC approaches become in terms of time and storage. As an alternative to MCMC approximations, we have shown that the notion of importance sampling can be greatly generalized to encompass more adaptive and local schemes than previously thought. This leads to the Population Monte Carlo (PMC) algorithm. The essence of the PMC scheme is to learn from experience, building an importance sampling function based on the performance of earlier importance sampling proposals. By introducing a temporal dimension to the selection of the importance function, an adaptive perspective can be achieved at little cost, for a potentially large gain in efficiency. We give here convergence theorems for different PMC strategies.
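
As an illustration of the adaptive importance sampling idea described above, the following is a minimal Python sketch in the spirit of Population Monte Carlo. It is not the authors' algorithm: the Gaussian-mixture target, the single Gaussian proposal family, and the moment-matching update rule are illustrative assumptions chosen to keep the example short.

import numpy as np

# Minimal sketch of an adaptive importance sampling loop in the spirit of
# Population Monte Carlo: at each iteration, draw a population of particles
# from the current proposal, weight them by target/proposal, and use the
# weighted sample to adapt the proposal for the next iteration.
# The target (a two-component Gaussian mixture) and the Gaussian proposal
# family are illustrative choices, not those of the paper.

rng = np.random.default_rng(0)

def target_pdf(x):
    """Unnormalized target density: mixture of two Gaussians (illustrative)."""
    return 0.3 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.7 * np.exp(-0.5 * (x - 3.0) ** 2 / 4.0)

def gaussian_pdf(x, mu, sigma):
    """Density of the N(mu, sigma^2) proposal."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def pmc(n_particles=2000, n_iters=10, mu0=0.0, sigma0=5.0):
    mu, sigma = mu0, sigma0
    for t in range(n_iters):
        # 1. Sample a population from the current proposal.
        x = rng.normal(mu, sigma, size=n_particles)
        # 2. Importance weights: target over proposal, then self-normalize.
        w = target_pdf(x) / gaussian_pdf(x, mu, sigma)
        w /= w.sum()
        # 3. Adapt: refit the proposal to the weighted population
        #    (moment matching; other update rules are possible).
        mu = np.sum(w * x)
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
        # Effective sample size as a rough diagnostic of proposal quality.
        ess = 1.0 / np.sum(w ** 2)
        print(f"iter {t}: mu={mu:.3f}, sigma={sigma:.3f}, ESS={ess:.0f}")
    return x, w

if __name__ == "__main__":
    samples, weights = pmc()
    # Self-normalized importance sampling estimate of the target mean.
    print("estimated mean:", np.sum(weights * samples))

Each iteration reuses the weighted population to move the proposal toward the target, which is the "learning from experience" the abstract refers to; the convergence results announced in the talk concern more general versions of this scheme.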