Abstract:
|
In a Markov chain Monte Carlo experiment, the run length can be controlled with sequential estimates of the long-run variance (LRV). Classical LRV estimators that use overlapping batch means and the Bartlett kernel are statistically efficient but cannot be updated in constant time or space. Although the statistics and engineering communities have recognized this problem, their computationally efficient proposals have higher asymptotic mean squared errors (AMSEs). In this paper, we develop a general framework that unifies the two communities. Statistically, we propose several principle-driven estimators with super-optimal AMSEs compared with their non-recursive counterparts. We also derive the first sufficient condition under which a general estimator can be updated in constant time or space. Computationally, we introduce mini-batch estimation, which improves computational efficiency beyond traditional online estimation. Implementation issues such as automatic selection of optimal parameters and the multivariate extension are also discussed. Practically, we discuss different applications in the two communities. Our experiments show that the finite-sample properties of our proposals match the theoretical findings.
|