Online Program

Thursday, May 17
Computational Statistics
Statistical Inference for High-Dimensional Regression
Thu, May 17, 10:30 AM - 12:00 PM
Grand Ballroom E
 

Selective Inference in Linear Regression (304367)

*Jonathan Taylor, Stanford University 

Keywords: conditional inference, selective inference, linear regression

We consider the problem of inference after selection in settings such as linear regression. This topic has received much attention recently, both from a simultaneous viewpoint, as in POSI (Berk et al., AOS 2013) and knockoffs (Barber and Candes, AOS 2015), and from a conditional viewpoint, as in inference after the LASSO (Lee et al., AOS 2016) and data splitting (Rinaldo et al., arXiv:1611.05401).
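
A minimal sketch of the data-splitting idea cited above, on simulated data (the sample sizes, signal strength, and variable names here are illustrative, not from the talk): the LASSO selects variables on one half of the data, and classical OLS intervals are computed on the independent second half, so the reported intervals remain valid conditional on the selection.

```python
# Data splitting: select on one half, do classical inference on the other.
import numpy as np
from sklearn.linear_model import LassoCV
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 1.0                          # three true signals (illustrative)
y = X @ beta + rng.standard_normal(n)

# Randomly split the data: one half for selection, one half for inference.
idx = rng.permutation(n)
sel, inf = idx[: n // 2], idx[n // 2:]

# Stage 1: LASSO selection on the first half only.
lasso = LassoCV(cv=5).fit(X[sel], y[sel])
active = np.flatnonzero(lasso.coef_ != 0)

# Stage 2: OLS on the held-out half, restricted to the selected variables.
# Because this half was not used for selection, the usual t-based confidence
# intervals retain their nominal coverage.
ols = sm.OLS(y[inf], sm.add_constant(X[inf][:, active])).fit()
for j, (lo, hi) in zip(active, ols.conf_int()[1:]):   # skip the intercept row
    print(f"variable {j}: 95% CI [{lo:.2f}, {hi:.2f}]")
```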

Here we adopt the conditional approach, focusing on the benefits of randomization in selective inference. One primary benefit is an increase in power (and a shortening of reported confidence intervals) relative to non-randomized procedures. Another, under appropriate assumptions on the randomization mechanism, is uniform consistency, both in probability and in weak convergence, of a pivotal quantity used to construct confidence intervals and carry out hypothesis tests. Finally, we describe a class of randomized algorithms that make inference in such problems feasible in wide generality.
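
The pivotal-quantity claim can be illustrated with a small Monte Carlo check: under a valid selective procedure, p-values for truly null selected coefficients should be close to uniform on [0, 1]. The sketch below uses data splitting as the simplest randomized stand-in for the randomized algorithms described in the talk (which are not reproduced here); all simulation settings are illustrative assumptions.

```python
# Monte Carlo check of the pivot idea: null p-values should look uniform.
import numpy as np
from sklearn.linear_model import Lasso
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p, n_rep = 200, 50, 200
null_pvals = []

for _ in range(n_rep):
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = 1.0                          # variables 0-2 carry signal
    y = X @ beta + rng.standard_normal(n)

    idx = rng.permutation(n)
    sel, inf = idx[: n // 2], idx[n // 2:]

    # Select with the LASSO on one half; test on the other half.
    active = np.flatnonzero(Lasso(alpha=0.1).fit(X[sel], y[sel]).coef_ != 0)
    if active.size == 0:
        continue
    fit = sm.OLS(y[inf], sm.add_constant(X[inf][:, active])).fit()
    pvals = fit.pvalues[1:]                 # drop the intercept

    # Keep only p-values for selected variables that are truly null (index >= 3).
    null_pvals.extend(pvals[active >= 3])

# Uniform p-values have mean 1/2; a much smaller mean would signal
# anti-conservative (invalid) inference.
print(f"{len(null_pvals)} null p-values, mean = {np.mean(null_pvals):.3f}")
```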