
All Times EDT

Abstract Details

Activity Number: 294 - Uncertainty Quantification in Deep Learning
Type: Invited
Date/Time: Wednesday, August 11, 2021 : 3:30 PM to 5:20 PM
Sponsor: Section on Bayesian Statistical Science
Abstract #316588
Title: Learning a Kernel-Expanded Stochastic Neural Network with Uncertainty Quantification
Author(s): Faming Liang*
Companies: Purdue University
Keywords: Imputation-Regularized Optimization; Latent variable model; global optimum; support vector regression; Uncertainty Quantification; Deep Learning
Abstract:

The deep neural network (DNN) suffers from several fundamental issues in machine learning. For example, it is often trapped in a local minimum during training, and its prediction uncertainty is hard to assess. To address these issues, we propose a kernel-expanded stochastic neural network (K-StoNet), which incorporates support vector regression (SVR) as the first hidden layer and reformulates the neural network as a latent variable model. The former maps the input vector into an infinite-dimensional feature space via a radial basis function kernel, ensuring the absence of local minima on its training loss surface. The latter breaks the high-dimensional nonconvex neural network training problem into a series of low-dimensional convex optimization problems and makes the prediction uncertainty easy to assess. The K-StoNet can be easily trained using the imputation-regularized optimization algorithm, with a theoretical guarantee of asymptotic convergence to the global optimum. The performance of the new model in training, prediction, and uncertainty quantification is illustrated using simulated and real data examples.
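To illustrate the kernel-expansion idea behind the first layer, the following is a minimal sketch, not the authors' implementation: it replaces a hidden layer with an RBF kernel expansion against the training points and fits the resulting (convex) problem by kernel ridge regression, used here as a simple stand-in for support vector regression. The function names, the toy data, and the regularization constant are all illustrative assumptions.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    # Kernel matrix K[i, j] = exp(-gamma * ||x_i - c_j||^2); each row is the
    # (finite sample of the) infinite-dimensional RBF feature map of x_i.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # toy inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # toy targets

# "First layer": kernel expansion against the training points. Fitting the
# output weights is now a convex (here, ridge-regularized least-squares)
# problem, so there are no spurious local minima in this subproblem.
K = rbf_features(X, X, gamma=0.5)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)

pred = K @ alpha
mse = float(np.mean((pred - y) ** 2))
```

In the full K-StoNet, the latent-variable reformulation repeats this pattern layer by layer: each imputation step fills in the latent variables, and each optimization step solves a low-dimensional convex problem like the one above.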


Authors who are presenting talks have a * after their name.
