Abstract:
|
Common statistical measures of uncertainty, such as p-values and confidence intervals, quantify the uncertainty due to sampling, i.e., the uncertainty due to not observing the full population. In practice, populations change between locations and across time. This makes it difficult to gather knowledge that replicates across data sets. We propose a measure of uncertainty that quantifies the distributional uncertainty of a statistical estimand, that is, the sensitivity of the parameter to general distributional perturbations within a Kullback-Leibler divergence ball. We also propose measures to estimate the stability of estimators with respect to directional or variable-specific shifts. The proposed measures help judge whether a statistical finding is replicable across data sets in the presence of distributional shifts. Furthermore, we introduce a transfer learning technique that allows estimating statistical parameters under shifted distributions when only summary statistics of the new distribution are available. We demonstrate the effectiveness of our methods in simulations and on real data.
|