Abstract:
|
Black-box predictors have become state-of-the-art for many complex tasks. However, the predictive uncertainty of these methods is often not properly quantified, and they may produce inappropriate predictions on unfamiliar data. We propose learning a decision function that can abstain from predicting, complementing black-box models that output prediction sets. Drawing on ideas from decision theory and robust maximum likelihood estimation, we train the model by minimizing an uncertainty-aware loss function that encourages rejecting inputs when predictions are likely to be poor. Since black-box methods are not guaranteed to output well-calibrated prediction sets, we also propose a framework that provides point estimates and confidence intervals for the true coverage of any prediction-set model, as well as for a mixture of K models via K-fold sample-splitting. When applied to predicting in-hospital mortality and length of stay for ICU patients, our method abstains from issuing high-risk predictions and provides accurate inference for coverage. We also improve the reliability of an image classification model that is trained on data from a highly controlled environment and receives publicly submitted inputs.
|