Abstract:
|
Divide-and-conquer Bayesian methods consist of three steps: divide the data into smaller subsets, run a sampling algorithm in parallel on each subset, and combine the parameter draws from all subsets. The combined parameter draws are used for computationally efficient posterior inference in massive data sets. The first two steps of the existing methodology rely on the assumption that the observations are independent. We develop a divide-and-conquer method for Bayesian inference in parametric hidden Markov models, where the observations are dependent. Our main contributions are two-fold. First, after partitioning the data into subsets of consecutive observations, we introduce a modified likelihood, based on an approximation of the prediction filter, for performing posterior computations on each subset. Second, we show that the subset posterior distributions defined by the modified likelihoods are asymptotically normal, which provides theoretical guarantees for nearly all combination algorithms, and that the combined posterior distribution is close to the true posterior distribution. The proposed method is easy to implement and computationally efficient. Our numerical results show that it outperforms its competitors.
|