Abstract:
We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters $p$ is comparable to or exceeds the sample size $n$, a successful approach uses an $\ell_1$-penalized least squares estimator, known as the Lasso. Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Recently, a line of work has addressed this problem by constructing a "de-biased" version of the Lasso estimator. In this talk, I will review this approach and show that the resulting confidence intervals have nearly optimal size. Further, when testing the null hypothesis that a given parameter vanishes, this method has nearly optimal power. I will also discuss the method's performance when the sample size is of the optimal order. Time permitting, I will review applications to healthcare analytics and decision making, and discuss future perspectives for this research area.
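The de-biasing construction mentioned in the abstract can be illustrated numerically. The sketch below is not the speaker's exact procedure: it fits a Lasso on synthetic data and then applies the generic de-biasing correction $\hat\theta^d = \hat\theta + \frac{1}{n} M X^\top (y - X\hat\theta)$, using a ridge-regularized inverse of the sample covariance as a crude stand-in for the precision-matrix estimate used in the literature. The regularization constant, the choice of $\lambda$, and the assumption of a known noise level are all illustrative simplifications.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic high-dimensional regression: p > n, with 5 active coefficients.
rng = np.random.default_rng(0)
n, p, sigma = 100, 200, 1.0
X = rng.standard_normal((n, p))
theta = np.zeros(p)
theta[:5] = 2.0
y = X @ theta + sigma * rng.standard_normal(n)

# Lasso fit; lambda ~ sigma * sqrt(log(p)/n) is the usual theoretical scaling.
lam = 2 * sigma * np.sqrt(np.log(p) / n)
theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# De-biasing step: theta_d = theta_hat + (1/n) * M @ X.T @ residuals, where M
# approximates the inverse covariance. A ridge-regularized inverse of the
# sample covariance serves here as an illustrative stand-in for M.
Sigma_hat = X.T @ X / n
M = np.linalg.inv(Sigma_hat + 0.1 * np.eye(p))
theta_d = theta_hat + M @ X.T @ (y - X @ theta_hat) / n

# Approximate 95% confidence intervals from the asymptotic normality of the
# de-biased estimator (noise level sigma assumed known, for simplicity).
se = sigma * np.sqrt(np.diag(M @ Sigma_hat @ M.T) / n)
ci_lower, ci_upper = theta_d - 1.96 * se, theta_d + 1.96 * se
```

Unlike the raw Lasso estimate, whose coordinates are biased toward zero by the $\ell_1$ penalty, the corrected estimate $\hat\theta^d$ is approximately Gaussian around the true parameter, which is what makes coordinate-wise confidence intervals and p-values possible.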
ASA Meetings Department
732 North Washington Street, Alexandria, VA 22314
(703) 684-1221 • meetings@amstat.org
Copyright © American Statistical Association.