Abstract:
|
The arguable gold standard of black-box surrogate modeling at present is the Gaussian Process (GP), in large part due to its excellent uncertainty quantification properties. GP-based surrogate models have been highly successful at a wide range of tasks, including optimization, sensitivity analysis, and calibration of expensive simulators. However, the conjugate GP model assumes a Gaussian error structure, making it unrealistic for certain applications. There is an extensive literature on stochastic processes with non-Gaussian marginal distributions, but these are typically still unimodal. Beyond multimodal error structures, modern kernel-based stochastic process inference can struggle with high-dimensional inputs, multidimensional outputs with complex dependence, and the enforcement of physical constraints. We show how Generative Adversarial Networks (GANs), a deep learning framework best known for producing artificial natural images, fill all of these gaps. However, off-the-shelf GANs fail to retain the GP's most important property, uncertainty quantification, which we remedy by enforcing a stochastic process prior.
|