Activity Number:
403 - Research Advances at the Interface of Uncertainty Quantification and Machine Learning for High-Consequence Problems
Type:
Invited
Date/Time:
Wednesday, August 10, 2022: 10:30 AM to 12:20 PM
Sponsor:
Section on Statistics in Defense and National Security
Abstract #320621
Title:
Efficient Variational Approach to Sparse BNN for Model Compression
Author(s):
Diptarka Saha* and Feng Liang and Zihe Liu
Companies:
University of Illinois, Urbana-Champaign (all authors)
Keywords:
Model Compression;
Pruning;
Feature Selection;
Bayesian Neural Network;
Bayes by Backprop;
Sparsity
Abstract:
Model compression has recently drawn much attention within the deep learning community. Compressing a dense neural network offers many advantages, including lower computation cost, deployability to devices with limited storage and memory, and, importantly, resistance to adversarial attacks. Compression may be achieved by pruning nodes or by discarding certain input features entirely. Here we demonstrate a novel strategy that emulates the principles of Bayesian model selection in a deep learning setup. Given a fully connected Bayesian neural network with a spike-and-slab prior on its weights, trained via a variational algorithm, we recover the posterior inclusion probability of every node, a quantity that is typically lost in standard training. We employ these probabilities for pruning and feature selection on a range of simulated and real-world benchmark datasets and find evidence of better generalisability of the pruned model in all our experiments.
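To make the setup concrete, the sketch below shows one way a node-wise spike-and-slab variational layer could be written in PyTorch. This is a minimal, hypothetical sketch and not the authors' implementation: the class name SpikeSlabLinear, the initialisation constants, and the prior hyper-parameters are all assumptions made for illustration. Each output node j carries a variational inclusion probability gamma_j, estimated Bayes-by-Backprop style; after training, nodes with low gamma_j can be pruned.

    # Minimal sketch (assumed, not the authors' code) of a linear layer with a
    # node-wise spike-and-slab variational posterior. All names and
    # hyper-parameters here are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpikeSlabLinear(nn.Module):
        """q(w_j) = gamma_j * N(mu_j, sigma_j^2) + (1 - gamma_j) * delta_0,
        where gamma_j is the posterior inclusion probability of node j."""

        def __init__(self, in_features, out_features):
            super().__init__()
            self.mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
            self.log_sigma = nn.Parameter(
                torch.full((out_features, in_features), -3.0))
            # logit of gamma_j, one per output node
            self.gamma_logit = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            sigma = self.log_sigma.exp()
            # reparameterised Gaussian weight sample (Bayes by Backprop)
            w = self.mu + sigma * torch.randn_like(sigma)
            gamma = torch.sigmoid(self.gamma_logit)
            # soft node inclusion during training; harden to 0/1 when pruning
            return F.linear(x, w) * gamma

        def kl(self, prior_pi=0.5, prior_sigma=1.0):
            # KL to the spike-and-slab prior: a Bernoulli term plus, weighted
            # by gamma_j, the Gaussian slab term (standard closed forms).
            eps = 1e-8
            gamma = torch.sigmoid(self.gamma_logit)
            sigma = self.log_sigma.exp()
            kl_bern = (gamma * torch.log(gamma / prior_pi + eps)
                       + (1 - gamma) * torch.log((1 - gamma) / (1 - prior_pi) + eps))
            kl_gauss = (torch.log(prior_sigma / sigma)
                        + (sigma ** 2 + self.mu ** 2) / (2 * prior_sigma ** 2)
                        - 0.5)
            return kl_bern.sum() + (gamma.unsqueeze(1) * kl_gauss).sum()

        def inclusion_probs(self):
            return torch.sigmoid(self.gamma_logit).detach()

In this sketch, training would minimise the negative log-likelihood plus the summed kl() terms (the negative ELBO), and pruning would zero the weight rows of nodes whose inclusion_probs() fall below a threshold such as 0.5. A layer with gamma attached to input columns rather than output nodes would play the analogous feature-selection role.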
Authors who are presenting talks have a * after their name.