Abstract:
|
Quantifying the predictive uncertainty of machine learning algorithms is a topic of great theoretical and applied interest. Without additional post-processing, models often fail to represent uncertainty accurately and tend to make over-confident predictions. Conformal prediction and calibration have emerged as leading candidates for assumption-lean predictive inference, providing rigorous guarantees without restrictive distributional assumptions on the data, beyond assuming the test data are i.i.d. or exchangeable. However, deployed machine learning models inevitably encounter changes in the input data-generating distribution. I will summarize recent progress on this front, focusing on extensions of conformal prediction and calibration that handle distribution drift.
|
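To make the exchangeability-based guarantee mentioned above concrete, here is a minimal sketch of standard split conformal prediction for regression; it is illustrative only and does not depict the drift-handling extensions discussed in the talk. The fitted `model` is assumed to be a scikit-learn-style regressor with a `predict` method, and all names are hypothetical placeholders.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_test, alpha=0.1):
    """Prediction interval with marginal coverage >= 1 - alpha,
    assuming calibration and test points are exchangeable."""
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Conformal quantile with the finite-sample (n + 1) correction.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Symmetric interval around the point prediction for the test input.
    pred = model.predict(x_test.reshape(1, -1))[0]
    return pred - q_hat, pred + q_hat
```

Under a distribution drift between calibration and test data, the exchangeability assumption behind this interval breaks down, which is precisely the setting the extensions summarized in the abstract aim to address.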