Abstract:
|
Transfer learning uses a model trained to make predictions or inferences on data from one population to make reliable predictions or inferences on data from another population. While the development of transfer learning methodology is a highly active area of research, most approaches focus on fine-tuning pre-trained neural network models, which fail to provide crucial uncertainty quantification. We develop a statistical foundation for predictions based on transfer learning; we demonstrate, both mathematically and empirically, the validity of our approach in simple settings, and numerically illustrate the method's robustness to asymptotic approximations in more complex settings. Whereas many existing techniques are built around a particular source model, our method is agnostic to the choice of source model. Our method also provides uncertainty quantification for predictions, which is largely absent from the existing literature. We examine the method's performance in a simulation study and in an application to real data from the eICU Collaborative Research Database, demonstrating in each case the flexibility to use different source models, such as logistic regression or neural networks.
|