Variational inference algorithms are well known to be computationally tractable for large-scale models and data, but they are equally well known to produce unreliable results and to underestimate posterior uncertainty. For variational methods to be competitive with Markov chain Monte Carlo (MCMC) and trusted in the statistical domain, they must come with rigorous finite-data guarantees. This talk will focus on boosting methods, i.e., those that incrementally build complex variational approximations from simple component distributions. After reviewing some exciting recent developments in the area, the talk will introduce a new approach to variational boosting that comes with rigorous theoretical convergence guarantees. Unlike previous approaches, the method requires no ad hoc regularization. Experiments on popular models demonstrate the practicality of the approach.
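To make the boosting idea concrete, here is a minimal, hypothetical 1D sketch (not the speaker's algorithm): starting from a single Gaussian, each round greedily adds one new Gaussian component and a mixing weight chosen to maximize a quadrature estimate of the ELBO against an unnormalized bimodal target. The grid-search component selection and the candidate weights are illustrative assumptions; including weight zero among the candidates keeps the ELBO non-decreasing across rounds.

```python
import numpy as np

xs = np.linspace(-8, 8, 4001)          # 1D quadrature grid
dx = xs[1] - xs[0]

def log_p(x):
    # Unnormalized bimodal target: mixture of N(-2, 0.5) and N(2, 0.5) (illustrative)
    return np.logaddexp(-(x + 2) ** 2 / 0.5, -(x - 2) ** 2 / 0.5)

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def mix_pdf(components):
    # components: list of (weight, mean, stddev)
    return sum(w * gauss(xs, mu, sig) for w, mu, sig in components)

def elbo(components):
    # ELBO = E_q[log p~(x) - log q(x)], computed by trapezoid-style quadrature
    q = mix_pdf(components)
    mask = q > 1e-300
    return np.sum(q[mask] * (log_p(xs[mask]) - np.log(q[mask]))) * dx

# Start from one broad component, then boost: each round greedily adds
# the (mean, stddev, weight) triple that most improves the ELBO.
components = [(1.0, 0.0, 3.0)]
history = [elbo(components)]
for _ in range(3):
    best = (history[-1], components)
    for mu in np.linspace(-4, 4, 17):
        for sig in (0.3, 0.7, 1.5):
            for gam in (0.0, 0.25, 0.5):   # gam = 0 keeps the old mixture
                cand = [(w * (1 - gam), m, s) for w, m, s in components]
                cand.append((gam, mu, sig))
                val = elbo(cand)
                if val > best[0]:
                    best = (val, cand)
    history.append(best[0])
    components = best[1]

print([round(v, 3) for v in history])   # ELBO estimates, non-decreasing per round
```

Real boosting variational inference replaces the grid search with a gradient-based fit of each new component and a line search (or fully corrective step) over the mixing weights; the greedy add-one-component-per-round structure is the same.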