Abstract:
|
Small departures from model assumptions can lead to misleading inferences, especially as data sets grow large. Recent work has shown that robustness to small perturbations can be obtained by using a power posterior, which is proportional to the likelihood raised to a fractional power, times the prior. In many models, inference under a power posterior can be implemented via minor modifications of standard algorithms; mixture models, however, present a particular challenge that requires new algorithms. We have developed a simple and scalable algorithm that yields results very similar to the power posterior for mixture models, obtained by modifying the standard Gibbs sampling algorithm to use power likelihoods only for the mixture parameter updates. Another challenge in the practical implementation of power posteriors is choosing the power appropriately. We present a data-driven technique for choosing the power in an objective way so as to obtain robustness to small perturbations. We illustrate the approach on real and simulated data, including an application to flow cytometry clustering.
|
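To make the tempered-update idea concrete, the following is a minimal sketch, not the authors' algorithm: a Gibbs sampler for a univariate Gaussian mixture with known component variance and conjugate priors, in which the allocation update is left standard while the weight and mean updates raise the likelihood to a fractional power alpha. The function name `power_gibbs` and all hyperparameters (`sigma2`, `m0`, `s0sq`, `gamma`) are illustrative assumptions; the abstract does not specify which updates are tempered or how.

```python
# A minimal sketch (one reading of the abstract, not the authors' exact code):
# Gibbs sampling for a univariate Gaussian mixture with known variance, where
# only the mixture-parameter updates (weights pi and means mu) use the power
# likelihood p(x | theta)^alpha; the allocation update stays standard.
import numpy as np

def power_gibbs(x, K, alpha, n_iter=1000, sigma2=1.0, m0=0.0, s0sq=10.0,
                gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.normal(m0, np.sqrt(s0sq), size=K)   # component means
    pi = np.full(K, 1.0 / K)                     # mixture weights
    samples = []
    for _ in range(n_iter):
        # Allocation update (untempered): p(z_i = k) propto pi_k * N(x_i | mu_k, sigma2).
        logp = np.log(pi) - 0.5 * (x[:, None] - mu) ** 2 / sigma2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=row) for row in p])
        # Weight update with powered counts: raising the likelihood to alpha
        # turns each component count n_k into alpha * n_k in the Dirichlet.
        counts = np.bincount(z, minlength=K)
        pi = rng.dirichlet(gamma + alpha * counts)
        # Mean update under the power likelihood, conjugate with N(m0, s0sq):
        for k in range(K):
            xk = x[z == k]
            prec = 1.0 / s0sq + alpha * len(xk) / sigma2
            mean = (m0 / s0sq + alpha * xk.sum() / sigma2) / prec
            mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))
        samples.append((pi.copy(), mu.copy()))
    return samples

# Hypothetical usage: two well-separated clusters, tempered with alpha = 0.8.
x = np.concatenate([np.random.randn(100) - 2, np.random.randn(100) + 2])
draws = power_gibbs(x, K=2, alpha=0.8)
```

A useful sanity check on this sketch: with alpha = 1 the tempering disappears and the sampler reduces to standard Gibbs for this conjugate mixture model.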