Current practice in sequential optimal experimental design (OED) often relies on suboptimal approaches: batch design, which chooses all experiments simultaneously with no information feedback, or myopic design, which optimizes only the next experiment without accounting for future observations and system dynamics. We instead propose a dynamic programming (DP) approach to sequential OED, for dynamical systems specified via differential equation models.
We employ a Bayesian framework that seeks to maximize expected information gain in parameters of interest. The solution to the DP problem is a policy that maps the current posterior and system state to the next experiment. We compute this policy using a one-step lookahead representation combined with approximate value iteration, over continuous design and state spaces, and iteratively generate state trajectories via exploration and exploitation. Within this framework, we use transport maps (e.g., monotone multivariate transformations of a standard Gaussian) to enable fast approximate Bayesian inference.
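To make the expected-information-gain objective concrete, the following is a minimal sketch of its standard nested Monte Carlo estimator on a hypothetical toy linear-Gaussian model (y = d·θ + noise), together with a greedy one-step design selection. All names and the model itself are illustrative assumptions, not the paper's convection-diffusion system or transport-map machinery.

```python
import math
import random

def log_norm_pdf(x, mu, var):
    """Log density of a univariate Gaussian N(mu, var) at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def eig_nested_mc(d, sigma2=0.25, n_outer=2000, n_inner=200, seed=0):
    """Nested Monte Carlo estimate of expected information gain (EIG)
    for the toy model: theta ~ N(0, 1), y = d * theta + eps, eps ~ N(0, sigma2).
    EIG(d) = E_{theta, y} [ log p(y | theta, d) - log p(y | d) ]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, 1.0)                      # draw parameter from prior
        y = d * theta + rng.gauss(0.0, math.sqrt(sigma2))  # simulate an observation
        log_lik = log_norm_pdf(y, d * theta, sigma2)
        # Inner loop: estimate the evidence p(y | d) by averaging the
        # likelihood over fresh prior samples of theta.
        inner = [math.exp(log_norm_pdf(y, d * rng.gauss(0.0, 1.0), sigma2))
                 for _ in range(n_inner)]
        log_evid = math.log(sum(inner) / n_inner)
        total += log_lik - log_evid
    return total / n_outer

# Greedy (myopic) one-step selection: pick the candidate design with the
# highest estimated EIG. For this model the analytic EIG is
# 0.5 * log(1 + d^2 / sigma2), which increases with |d|.
candidates = [0.1, 0.5, 1.0, 2.0]
best = max(candidates, key=eig_nested_mc)
print(best)
```

A full DP treatment would replace this greedy criterion with a value function over posterior and system state; the sketch only isolates the inner EIG computation that any such policy evaluation must perform.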
We demonstrate our approach via sequential inference in a convection-diffusion system modeling atmospheric contaminant transport.