Abstract:
|
Bayesian algorithms such as Markov Chain Monte Carlo or Bayesian optimization can quickly become computationally prohibitive, or even infeasible, for high-dimensional problems. In many applications, however, the underlying dynamics of a stochastic process can typically be represented in a lower-dimensional space. We will review existing linear and nonlinear dimensionality reduction methods, such as Laplacian eigenmaps and restricted Boltzmann machines. Further, we will present new results for nonlinear dimensionality reduction techniques based on deep learning models. We will demonstrate our approach in the context of Bayesian optimization algorithms applied to a stochastic process defined by a complex agent-based model. Finally, we discuss directions for future research.
|