Abstract:
|
Statistical uncertainty has many sources. P-values and confidence intervals usually quantify the overall uncertainty, which may include variation due to sampling and uncertainty due to measurement error, among other sources. Practitioners might be interested in quantifying only one source of uncertainty. For example, the uncertainty of a regression coefficient estimated for a fixed set of subjects corresponds to the uncertainty due to measurement error, ignoring the variation induced by sampling. In causal inference, it is common to account only for the uncertainty due to random treatment assignment. Motivated by these examples, we consider conditional estimation and conditional inference for parameters in parametric and semi-parametric models, where we condition on observed characteristics of a population. We derive a theory of conditional inference, including methods to obtain conditionally valid p-values and confidence intervals. Conditional p-values can be used to construct a hierarchy of statistical evidence with rigorous control of the family-wise error rate. In addition, the proposed approach enables transfer learning of conditional parameters with rigorous conditional guarantees.
|