Abstract:
|
The performance of risk prediction models is often characterized in terms of discrimination and calibration. While the ROC curve is widely used for evaluating model discrimination, evaluating calibration has not received the same level of attention. Commonly used methods for assessing model calibration involve subjective specification of smoothing or grouping factors. Leveraging the familiar ROC framework, we introduce the model-based ROC (mROC), the ROC curve that should be observed if a pre-specified model is calibrated in the sample. A fundamental result is that the empirical ROC and mROC curves for a sample converge asymptotically if the model is calibrated in that sample. Based on this, we propose a novel statistical test for calibration that does not require any smoothing or grouping. Simulations support the adequacy of the test. A case study puts these developments in a practical context. We conclude that mROC can easily be constructed and used to evaluate model calibration on the ROC plot, thus adding to the utility of ROC curve analysis in the evaluation of risk prediction models and promoting the assessment of model calibration, an often-neglected component of risk prediction.
|
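
For intuition, below is a minimal sketch of how an mROC curve could be constructed from a vector of predicted risks, assuming the construction suggested by the abstract: under calibration, each subject's predicted risk serves as its expected event indicator, so the curve traces the cumulative expected true-positive and false-positive fractions as the risk threshold decreases. The function name `mroc_curve` is illustrative and not from the paper.

```python
import numpy as np

def mroc_curve(pi):
    """Sketch of a model-based ROC (mROC) curve.

    Assumes the mROC is the ROC curve expected if the predicted risks
    `pi` are calibrated, obtained by treating each predicted risk as
    that subject's expected event indicator. The paper's exact
    construction may differ in details.
    """
    pi = np.sort(np.asarray(pi, dtype=float))[::-1]  # descending predicted risk
    # Expected sensitivity: cumulative predicted events over total expected events
    tpr = np.concatenate(([0.0], np.cumsum(pi) / pi.sum()))
    # Expected 1 - specificity: cumulative predicted non-events over total expected non-events
    fpr = np.concatenate(([0.0], np.cumsum(1.0 - pi) / (1.0 - pi).sum()))
    return fpr, tpr

# Hypothetical usage: predicted risks from some fitted model
rng = np.random.default_rng(0)
pi = rng.beta(2, 5, size=500)  # illustrative risk distribution
fpr, tpr = mroc_curve(pi)
```

Overlaying this curve on the empirical ROC of the same sample would then, per the abstract's convergence result, reveal miscalibration as a visible gap between the two curves.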