West Coast Ballroom
Do Birds of a Methodological Feather Flock Together? (306677)
*Carrie E Fry, Harvard University
Laura Anne Hatfield, Harvard Medical School
Keywords: causal inference, quasi-experimental design, counterfactuals, estimands, interrupted time series, difference-in-differences
One of the most commonly used study designs in health policy evaluation compares observations before and after an intervention in a treated group and a comparison group. At least two frequently used designs make this comparison: difference-in-differences (DID) and comparative interrupted time series (CITS). Both methods quantify the change in the treated group relative to the change in the comparison group before and after the intervention, and both rely on untestable counterfactual assumptions. Despite these similarities, the methodological literature that develops CITS lacks the mathematical formality established for DID. In this paper, we identify the estimands for CITS via the potential outcomes framework and compare them to their DID counterparts. We show that the counterfactual assumptions for CITS and DID are similar, but the target estimands differ, and estimation of the treatment effect is constrained differently in each method. We suggest that the choice between CITS and DID should rest on understanding the data-generating mechanism, which determines which set of constraints, assumptions, and target parameters is more relevant.
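To make the contrast concrete, the following sketch (not from the paper; the data, trend, and effect size of 2.0 are invented assumptions) computes a simple DID estimate from pre/post group means and a simple CITS estimate by projecting each group's pre-period linear trend into the post period and differencing the deviations. In this toy setting, where the groups follow parallel linear trends, the two estimators recover the same effect; the paper's point is that their estimands and constraints differ in general.

```python
# Illustrative toy comparison of DID and CITS (hypothetical data only).

# Six pre-period and four post-period time points.
pre_t = [0, 1, 2, 3, 4, 5]
post_t = [6, 7, 8, 9]

# Comparison group follows a steady linear trend; the treated group has
# the same trend plus a constant (assumed) treatment effect of 2.0 in
# the post period. Parallel trends hold exactly by construction.
comp_pre = [10 + 0.5 * t for t in pre_t]
comp_post = [10 + 0.5 * t for t in post_t]
treat_pre = [12 + 0.5 * t for t in pre_t]
treat_post = [12 + 0.5 * t + 2.0 for t in post_t]

def mean(xs):
    return sum(xs) / len(xs)

# DID: difference of pre-to-post mean changes across groups.
did = (mean(treat_post) - mean(treat_pre)) - (mean(comp_post) - mean(comp_pre))

def ols_line(ts, ys):
    """Least-squares intercept and slope of ys on ts."""
    tbar, ybar = mean(ts), mean(ys)
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return ybar - slope * tbar, slope

# CITS: fit each group's pre-period trend, project it into the post
# period, and difference the groups' deviations from their projections.
a_t, b_t = ols_line(pre_t, treat_pre)
a_c, b_c = ols_line(pre_t, comp_pre)
treat_dev = mean([y - (a_t + b_t * t) for t, y in zip(post_t, treat_post)])
comp_dev = mean([y - (a_c + b_c * t) for t, y in zip(post_t, comp_post)])
cits = treat_dev - comp_dev

print(did, cits)  # both recover 2.0 under exact parallel linear trends
```

Because the toy data satisfy both methods' counterfactual assumptions simultaneously, the estimates coincide; when group trends differ in level changes versus slope changes, the two estimands diverge, which is the choice the paper formalizes.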