Confidence calibration – the problem of predicting probability estimates that are representative of the true correctness likelihood – is important for classification models in many applications. Modern neural networks, unlike those from a decade ago, tend to be poorly calibrated. Post-processing calibration methods, such as temperature scaling and isotonic regression, are widely used in the community to calibrate deep learning models. In this talk, I will first introduce the desiderata for good calibration. Next, I will discuss the shortcomings of existing calibration and calibration-evaluation methods. Finally, I will discuss some general approaches to overcoming these shortcomings.
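To make the idea of post-processing calibration concrete, here is a minimal sketch of temperature scaling: the model's logits are divided by a scalar temperature T (normally fit on a held-out validation set) before the softmax, which softens overconfident predictions without changing the predicted class. The logits and the choice T = 2.0 below are hypothetical illustrations, not values from the talk.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Apply temperature scaling: T > 1 lowers confidence, T < 1 raises it."""
    return softmax(logits / T)

# Hypothetical logits from an overconfident classifier.
logits = np.array([4.0, 1.0, 0.5])

p_raw = softmax(logits)
p_cal = temperature_scale(logits, T=2.0)  # T would be learned, not fixed

# The top-class probability shrinks, but the argmax (accuracy) is unchanged.
print(p_raw.max(), p_cal.max())
```

Because dividing by T is monotonic, temperature scaling leaves the ranking of classes (and hence accuracy) untouched; it only reshapes the confidence scores.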