Abstract:
|
Bayesian analyses often iteratively follow "Box's loop": a practitioner starts with a model, performs inference, identifies areas for improvement, and expands to a new model that incorporates more complex structure or additional prior knowledge relative to the old model. A Bayesian practitioner therefore needs a way to decide when to use the estimate from a new model and when to default to the estimate from an old model. We propose the "c-value" as a practical, frequentist measure of confidence in a new estimate relative to an old estimate. We show that it is unlikely that a computed c-value is large and, at the same time, that the new estimate has larger loss than the old. So, just as a small p-value provides evidence to reject a null hypothesis, a large c-value provides evidence to use a new estimate in place of the old. We examine popular classes of competing Bayesian estimators, e.g., those arising from hierarchical models and Gaussian processes. We show how to compute a c-value by first constructing a data-dependent, high-probability lower bound on the difference in loss between the two estimates. We demonstrate the usefulness of our method in recent Bayesian data analyses.
|