Abstract:
|
Marginalising over discrete latent random variables has become more relevant with the popularisation of Hamiltonian Monte Carlo techniques for Bayesian inference, which only accommodate continuous variables. For many important infinite series, custom algorithms have been developed that exploit specific features of each problem. General techniques, suitable for a large class of problems and requiring limited input from the user, remain somewhat scant, however. Here we employ basic results from the theory of infinite series to investigate general, problem-agnostic algorithms for approximating these sums to within an arbitrary tolerance $\epsilon > 0$. We compare three candidate solutions for estimating the infinite sum of interest and obtain simple theoretical results showing under which conditions each technique succeeds in providing a truncated sum within the required tolerance. We present a host of statistical applications where truncation is necessary and compare the error achieved by each approach, as well as the number of function evaluations each requires. A comparison based on computation time is also provided.
|