Abstract:
|
Risk prediction models play an important role in selecting prevention and treatment strategies for various diseases. Although it is common to observe poorer performance in a validation set than in a development set, this difference is generally attributed to optimistic bias in measuring performance in the development set. However, the difference may instead be due to differences in the distribution of the predictors, which can strongly affect predictive performance. Conventional validation analysis does not account for this and could therefore erroneously give a low rating to a useful risk prediction model even when the model works for each subject in the validation set in exactly the same way as for those in the development set. Because the results of validation studies ultimately determine which prediction models are adopted for research and clinical use, it is critical that validation methods be grounded in rigorous cross-study comparisons. We will present new inference procedures for estimating predictive performance measures in validation studies that systematically adjust for differences in the distribution of predictors across studies.
|
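One standard way such an adjustment can be made, shown here only as an illustrative sketch and not as the specific inference procedure proposed in the abstract, is importance weighting: each validation subject is reweighted by an (estimated) ratio of the development-to-validation predictor densities so that the performance estimate refers to the development population's predictor distribution. The density ratio $w(x)$, risk model $\hat{r}$, and loss $L$ below are generic, assumed quantities.

% Minimal sketch of an importance-weighted (covariate-shift-adjusted) performance
% estimator; illustrative only, not the procedure described in the abstract.
\[
  w(x) \;=\; \frac{p_{\mathrm{dev}}(x)}{p_{\mathrm{val}}(x)},
  \qquad
  \widehat{\mathrm{Perf}}_{\mathrm{adj}}
  \;=\;
  \frac{\sum_{i=1}^{n_{\mathrm{val}}} w(x_i)\, L\bigl(y_i, \hat{r}(x_i)\bigr)}
       {\sum_{i=1}^{n_{\mathrm{val}}} w(x_i)},
\]
where $\hat{r}$ is the fitted risk model, $L(y, \hat{r}(x))$ is a performance criterion (e.g., squared error), and the weighted average over the validation subjects $i = 1, \dots, n_{\mathrm{val}}$ estimates the performance the model would show if the validation sample had the development set's predictor distribution.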