Abstract:
|
Standard deep learning methods, which have achieved impressive results on a variety of problems, produce point predictions but no accompanying measures of uncertainty. From both a statistical and an ethical perspective, this lack of uncertainty quantification is a major shortcoming. Recent work has focused on developing deep learning methods that provide uncertainty estimates alongside point predictions. However, little attention has been given to the quality of those estimated uncertainties. We demonstrate that different models trained on the same data can produce vastly different uncertainty estimates. We discuss the challenges of assessing the quality of uncertainty estimates and of comparing models in terms of their estimated uncertainties, describe recent developments, and sketch a path forward. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
|