Abstract:
|
Statisticians have long been aware of the limitations of using estimates from statistically significant hypothesis tests, including multiplicity, publication bias, data fishing or p-hacking, and vanishing effects. These problems have garnered broader attention with recent work showing a lack of replicability of studies in a range of applied fields. We present a simple data-based method of adjusting statistically significant estimates by exploiting the relationship between the true sampling distribution and the truncated sampling distribution defined by the rejection region of the hypothesis test. Simulations show that the method is an effective, though imperfect, solution to the problem of biases in published research.
|
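As a rough illustration of the truncated-sampling-distribution idea described in the abstract, the following Python sketch adjusts a single estimate that was selected because it cleared a two-sided z-test. It maximizes the likelihood of the observed estimate conditional on falling in the rejection region; the function name, interface, and this particular conditional-maximum-likelihood formulation are assumptions for illustration, not necessarily the paper's exact procedure.

import numpy as np
from scipy import stats, optimize

def adjust_significant_estimate(x_obs, se, alpha=0.05):
    # Hypothetical helper: adjust an estimate x_obs (with standard error se)
    # that was reported only because it passed a two-sided z-test at level alpha.
    z_crit = stats.norm.ppf(1 - alpha / 2)   # rejection threshold on the z scale
    cut = z_crit * se                        # threshold on the estimate scale

    def neg_cond_loglik(mu):
        # Log-density of the observed estimate under N(mu, se^2)
        log_num = stats.norm.logpdf(x_obs, loc=mu, scale=se)
        # Probability of landing in the rejection region |X| > cut when the true mean is mu
        p_reject = (stats.norm.cdf(-cut, loc=mu, scale=se)
                    + stats.norm.sf(cut, loc=mu, scale=se))
        # Negative log of the truncated (conditional-on-significance) likelihood
        return -(log_num - np.log(p_reject))

    # Maximize the conditional likelihood over the true mean mu
    res = optimize.minimize_scalar(neg_cond_loglik,
                                   bounds=(-10 * abs(x_obs), 10 * abs(x_obs)),
                                   method="bounded")
    return res.x

# Example: an estimate of 2.1 with standard error 1.0 barely clears the 5%
# two-sided cutoff (1.96); the adjusted estimate is pulled toward zero.
print(adjust_significant_estimate(2.1, 1.0))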