Abstract: |
Regression models are the workhorse of much of statistical practice. A curious aspect of regression is that statisticians tend to treat the predictors as fixed quantities. In the majority of applications, however, the data are observational, and hence the predictors are as random as the response. In this talk we recount the historical roots of, as well as the theoretical justification for, treating predictors as fixed. We examine when and why this treatment and its justification fail: the justification assumes the correctness of the model. If, however, models are approximations rather than generative truths, no such assumption should be made. This point of view should therefore guide us toward adopting inference based on sandwich or bootstrap estimators of standard error. We will, however, show that such "assumption-lean" inference comes at a cost: while standard errors based on sandwich and bootstrap estimators are "model-robust", they are also extremely non-robust in the sense of being highly sensitive to outlying observations. For this dilemma we have only a few answers at this point.
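
As a concrete illustration of the contrast drawn above, here is a minimal sketch (not from the talk; the heteroscedastic data-generating process and all variable names are illustrative) comparing classical standard errors, which trust the model's equal-variance assumption, with HC0 sandwich standard errors, which do not:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    # Illustrative data: the mean is linear, but the noise scale grows with |x|,
    # so the classical equal-variance assumption fails while the model's mean is correct.
    y = 1.0 + 2.0 * x + (1.0 + np.abs(x)) * rng.normal(size=n)

    X = np.column_stack([np.ones(n), x])   # design matrix with intercept
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y               # OLS coefficients
    r = y - X @ beta                       # residuals

    # Classical, model-trusting covariance: sigma^2 * (X'X)^{-1}
    sigma2 = r @ r / (n - X.shape[1])
    cov_classical = sigma2 * XtX_inv

    # Sandwich (HC0) covariance: (X'X)^{-1} X' diag(r_i^2) X (X'X)^{-1}
    meat = X.T @ (r[:, None] ** 2 * X)
    cov_sandwich = XtX_inv @ meat @ XtX_inv

    print("classical SEs:", np.sqrt(np.diag(cov_classical)))
    print("sandwich  SEs:", np.sqrt(np.diag(cov_sandwich)))

On such data the sandwich standard errors remain approximately valid while the classical ones do not. Conversely, the squared residuals in the "meat" term hint at the dilemma the abstract raises: a single outlying observation enters that sum quadratically, which is what makes sandwich (and bootstrap) standard errors so sensitive to outliers.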