Abstract:
|
  Default estimators often fail to retain their ideal properties in non-regular settings. A classic example is the Winner's Curse: upward bias in otherwise unbiased estimators when only statistically significant results are reported. Improvements to default estimators, such as Zhong and Prentice's (2008) amelioration of the Winner's Curse bias, often amount to shrinkage, that is, moving a complex decision toward a simple or parsimonious one. We introduce a single family of loss functions that motivates existing improvements to default estimators within a decision-theoretic framework. The loss function quantifies the trade-off between accuracy and parsimony, thereby providing a normative evaluative standard. This Bayesian framework further allows prior skepticism or information about the parameter being tested to be incorporated. Our particular focus is on the decision-theoretic justification for Zhong and Prentice's (2008) estimators, their ability to reduce the Winner's Curse bias relative to other estimators in this newly defined class, and how prior skepticism affects bias reduction.