
Abstract Details

Activity Number: 482 - Causal Inference and Related Methods
Type: Contributed
Date/Time: Wednesday, August 1, 2018, 8:30 AM to 10:20 AM
Sponsor: Section on Statistics in Epidemiology
Abstract #330445
Title: When Confounders Are Confounded: Naive Benchmarking in Sensitivity Analysis
Author(s): Carlos Leonardo Kulnig Cinelli* and Judea Pearl and Bryant Chen
Companies: UCLA and UCLA and IBM
Keywords: causal inference; sensitivity analysis; benchmarking
Abstract:

Sensitivity analysis aims to assess how strong an unmeasured confounder would need to be to change our conclusions by a certain amount. The plausibility of a confounder of the required strength is then submitted to subjective scientific judgment. To calibrate this judgment, several researchers have proposed what is often referred to as "benchmarking": using statistics of observed confounders to "calibrate" the effects of assumed unobserved confounders (Imbens, 2003; Hosman et al., 2010; Blackwell, 2013; Dorie et al., 2016; Carnegie et al., 2016b; Hong et al., 2018). This paper shows that naive use of observed statistics to calibrate the strength of unobservables can lead to unintended and erroneous consequences. We further show how benchmarking affects the current practice of sensitivity analysis, explain the nature of the relationship between the two, and demonstrate that, at least under certain circumstances, correct calibration is possible if one reparameterizes the bias function appropriately.
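The following is a minimal simulation sketch of the phenomenon the abstract describes; it is not the authors' code, and all variable names and coefficient values are hypothetical. In a linear model where an observed covariate X shares variance with an unobserved confounder U, the coefficient of X estimated with U omitted absorbs part of U's effect, so naively using that observed coefficient to benchmark the strength of U misstates U's true strength.

# Hypothetical sketch, not the authors' method: naive benchmarking in a
# linear model with an unobserved confounder U and observed covariate X.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
tau, beta, gamma = 1.0, 1.0, 1.0          # true effects of D, X, U on Y

U = rng.normal(size=n)
X = 0.8 * U + rng.normal(size=n)           # observed covariate correlated with U
D = 0.5 * X + 0.5 * U + rng.normal(size=n) # treatment depends on both
Y = tau * D + beta * X + gamma * U + rng.normal(size=n)

def ols(y, regressors):
    # OLS with an intercept; returns the coefficient vector.
    Z = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# Infeasible full adjustment (U is unobserved in practice) recovers tau ~ 1.0.
print("coef of D adjusting for X and U:", ols(Y, [D, X, U])[1])

# Omitting U biases the coefficient of D upward.
coefs = ols(Y, [D, X])
print("coef of D adjusting for X only:", coefs[1])

# The observed coefficient of X absorbs part of U's effect, so taking it as
# the benchmark "U is at most as strong as X" misstates U's true strength.
print("observed coef of X:", coefs[2], "vs true beta:", beta)

In this toy setup the observed coefficient of X exceeds the true beta precisely because X proxies U, which is one concrete sense in which the "confounders are confounded" and naive calibration against them goes astray.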


Authors who are presenting talks have a * after their name.