Abstract Details

Activity Number: 437 - Misspecification, Robustness, and Model Assessment
Type: Topic-Contributed
Date/Time: Thursday, August 12, 2021, 4:00 PM to 5:50 PM EDT
Sponsor: Section on Bayesian Statistical Science
Abstract #317081
Title: An Automatic Finite-Sample Robustness Metric: Can Dropping a Little Data Change Conclusions?
Author(s): Ryan Giordano* and Rachael Meager and Tamara Broderick
Companies: Massachusetts Institute of Technology and London School of Economics and Massachusetts Institute of Technology
Keywords: robustness; sensitivity; variational Bayes; M-estimators; infinitesimal jackknife; regression
Abstract:

We propose a method to assess the sensitivity of statistical analyses to the removal of a small fraction of the sample. Analyzing all possible data subsets of a certain size is computationally prohibitive, so we provide a finite-sample metric to approximately compute the set of observations of a given number (or fraction) that has the greatest influence on a given result when dropped. We call our resulting metric the Approximate Maximum Influence Perturbation. At minimal computational cost, our metric provides an exact finite-sample lower bound on sensitivity for any estimator, so any non-robustness our metric finds is conclusive. We demonstrate that the Approximate Maximum Influence Perturbation is driven by a low signal-to-noise ratio in the inference problem, is not reflected in standard errors, does not disappear asymptotically, and is not a product of misspecification. Several empirical applications show that even two-parameter linear regression analyses of randomized trials can be highly sensitive. While we find some applications are robust, in others the sign of a treatment effect can be changed by dropping less than 1% of the sample even when standard errors are small.
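
For intuition, the sketch below illustrates the first-order idea behind such a metric for ordinary least squares: each observation's influence on a chosen coefficient is approximated from the fitted residuals (an infinitesimal-jackknife style approximation), and the most extreme scores identify a candidate subset to drop. This is a minimal illustration under assumed names and a simulated-data example, not the authors' implementation.

import numpy as np

def amip_ols_sketch(X, y, coef_index, max_frac=0.01):
    """Approximate the largest decrease in one OLS coefficient achievable
    by dropping at most a max_frac fraction of the observations.

    Sketch only: uses the first-order (infinitesimal jackknife)
    approximation that dropping observation i shifts beta_hat by roughly
    -(X'X)^{-1} x_i * residual_i, so the most damaging subset can be found
    by sorting per-observation scores instead of refitting every subset.
    """
    n = X.shape[0]
    xtx_inv = np.linalg.inv(X.T @ X)
    beta_hat = xtx_inv @ (X.T @ y)
    residuals = y - X @ beta_hat

    # Per-observation first-order influence on the coefficient of interest.
    influences = (X @ xtx_inv[coef_index]) * residuals

    # Dropping the observations with the largest positive influence pushes
    # the coefficient down the most (to first order).
    k = int(np.floor(max_frac * n))
    drop_idx = np.argsort(-influences)[:k]
    approx_change = -influences[drop_idx].sum()

    return {
        "beta_hat": beta_hat[coef_index],
        "approx_beta_after_drop": beta_hat[coef_index] + approx_change,
        "dropped_indices": drop_idx,
    }

# Hypothetical usage on simulated data: intercept plus one regressor.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 0.1 * X[:, 1] + rng.normal(size=n)
print(amip_ols_sketch(X, y, coef_index=1, max_frac=0.01))

Because the approximation is linear in the dropped weights, the predicted change is a lower bound on what exact refitting of the same subset can achieve only up to first-order error; in practice one would refit without the flagged observations to confirm any reported non-robustness.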


Authors who are presenting talks have a * after their name.
