Abstract:
|
Sensitivity analysis aims to assess how strong an unmeasured confounder would need to be to change the conclusion of a study by a certain amount. The plausibility of such a confounder is then submitted to subjective scientific judgment. To calibrate this judgment, several researchers have proposed what is often referred to as "benchmarking": using statistics of observed confounders to calibrate the assumed strength of unobserved confounders (Imbens, 2003; Hosman et al., 2010; Blackwell, 2013; Dorie et al., 2016; Carnegie et al., 2016b; Hong et al., 2018). This paper shows that naive use of observed statistics to calibrate the strength of unobservables can lead to unintended and erroneous conclusions. We further show how benchmarking affects current practice of sensitivity analysis, explain the nature of the relationship between the two, and demonstrate that, at least under certain circumstances, correct calibration is possible if one reparameterizes the bias function appropriately.
|