Abstract:
The use of prediction models is increasingly common in clinical research and practice. As new knowledge is gained, a natural next step is to determine whether adding the new information to existing models improves their performance. To address this question, a common approach is to compare the performance of the two nested models using one of many global or threshold-based prediction metrics. Researchers often report a model's apparent performance, that is, performance evaluated on the same data used to build the model. This approach has been shown to yield overly optimistic estimates of a model's performance, but it remains in use because it allows the entire data set to be used when building the model. To address this issue, researchers often report bias-corrected estimates based on bootstrapping. However, the current approach for obtaining bias-corrected metrics does not provide estimates of precision. Further, the behavior of prediction metrics, apparent or bias-corrected, in the context of incremental model improvement has not been studied. This work will provide information about the performance of prediction metrics commonly used when comparing the prognostic performance of two nested models.
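The bootstrap bias correction referred to above is commonly implemented as an optimism correction: the model is refit on each bootstrap resample, and the average gap between resample performance and performance on the original data is subtracted from the apparent estimate. A minimal sketch of that idea, assuming a logistic regression model, AUC as the metric, and simulated data (all illustrative choices, not the abstract's actual study design), might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
# Simulate a binary outcome from a known logistic model (illustrative only).
p = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = (rng.random(n) < p).astype(int)

def fit_auc(X_train, y_train, X_eval, y_eval):
    """Fit on the training data, return AUC on the evaluation data."""
    model = LogisticRegression().fit(X_train, y_train)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

# Apparent performance: model built and evaluated on the same data.
apparent = fit_auc(X, y, X, y)

# Bootstrap optimism: refit on each resample, compare resample AUC
# with the AUC the refit model achieves on the original data.
B = 200
optimism = []
for _ in range(B):
    idx = rng.integers(0, n, n)  # bootstrap resample indices
    boot_apparent = fit_auc(X[idx], y[idx], X[idx], y[idx])
    boot_test = fit_auc(X[idx], y[idx], X, y)
    optimism.append(boot_apparent - boot_test)

bias_corrected = apparent - np.mean(optimism)
```

Note that this procedure yields a single bias-corrected point estimate; as the abstract observes, it does not by itself provide a measure of the estimate's precision.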