A major obstacle to developing evidence-based policy is the difficulty of implementing randomized experiments to answer all causal questions of interest. When using a non-experimental study, it is critical to assess how much the results could be affected by unmeasured confounding. We present a set of graphical and numeric tools to explore the sensitivity of causal estimates to the presence of an unmeasured confounder when the outcome and/or treatment assignment exhibit multilevel structure. We characterize the individual-level confounder through two parameters that describe its relationships with 1) the treatment assignment and 2) the outcome variable. Our approach can be applied to both continuous and binary treatment variables.
Through simulations, we demonstrate the efficacy of the method and assess its sensitivity to violations of the random effects assumption (i.e., that group-level errors in the treatment and outcome are uncorrelated). We illustrate its potential usefulness in practice in the context of a non-randomized classroom-based nutrition intervention study.
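To make the two sensitivity parameters concrete, the following minimal sketch (not the authors' implementation; the variable names, parameter values, and model choices are illustrative assumptions) simulates students nested in classrooms, generates an unmeasured individual-level confounder whose strength is governed by one parameter for the treatment assignment and one for the outcome, and shows how a naive multilevel model that omits the confounder yields a biased treatment estimate.

```python
# Illustrative sketch of a two-parameter sensitivity analysis for an
# unmeasured confounder in a multilevel setting (hypothetical example).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, n_per = 40, 25           # classrooms and students per classroom
tau = 0.5                          # true treatment effect
zeta_z, zeta_y = 0.8, 0.6          # sensitivity parameters (assumed values):
                                   # confounder -> treatment, confounder -> outcome

groups = np.repeat(np.arange(n_groups), n_per)
b = rng.normal(0.0, 0.5, n_groups)[groups]     # classroom random intercepts
u = rng.normal(size=n_groups * n_per)          # unmeasured individual-level confounder

# Treatment assignment depends on the confounder through zeta_z
p_treat = 1.0 / (1.0 + np.exp(-zeta_z * u))
z = rng.binomial(1, p_treat)

# Outcome depends on treatment, the confounder (through zeta_y), and the group effect
y = tau * z + zeta_y * u + b + rng.normal(size=groups.size)

# Naive random-intercept model that omits u: the treatment estimate is biased,
# with the size of the bias depending on both sensitivity parameters.
X = sm.add_constant(z.astype(float))
fit = sm.MixedLM(y, X, groups=groups).fit()
naive = np.asarray(fit.fe_params)[1]
print(f"true effect = {tau:.2f}, naive estimate = {naive:.2f}")
```

Repeating this simulation over a grid of the two parameters and plotting the resulting bias is one way to produce the kind of graphical sensitivity summary described above.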