Abstract:
|
Bayesian inference makes more sense for modern neural networks than for virtually any other model class, because these models can represent many compelling and complementary explanations of the data, corresponding to different settings of their parameters. However, a number of myths have emerged about Bayesian deep learning: (1) it doesn't work well in practice; (2) it's computationally inefficient; (3) it only helps with uncertainty estimates, not accuracy; (4) the priors are arbitrary and bad; (5) it is outperformed by "deep ensembles"; (6) the common practice of posterior tempering, which leads to "cold posteriors", means that the Bayesian posterior is poor.
In this talk, I will dispel each of these myths and discuss the success stories, future opportunities, and genuine challenges in Bayesian deep learning.
|