
Abstract Details

Activity Number: 249
Type: Contributed
Date/Time: Monday, August 1, 2016 : 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #320076
Title: High-Dimensional Regularized Estimation in Time Series Under Mixing Conditions
Author(s): Kam Chung Wong* and Ambuj Tewari and Zifan Li
Companies: University of Michigan and University of Michigan and University of Michigan
Keywords: Time series ; High-dimensional statistics ; Lasso ; Mixing ; Nonlinear ; Model misspecification

High-dimensional time series analysis is increasingly prominent. Existing theoretical results for the Lasso, however, require the sample to be i.i.d. Recent papers have extended these results to sparse Gaussian vector autoregressive (VAR) models, but they rely critically on the true data generating mechanism (DGM) being a Gaussian VAR. We derive non-asymptotic inequalities for the estimation and prediction errors of the Lasso estimate of the best linear predictor without assuming any underlying DGM of a special parametric form. Instead, we rely only on stationarity and mixing conditions to establish consistency of the Lasso in the following two scenarios: (a) \alpha-mixing Gaussian random vectors, and (b) \beta-mixing sub-Gaussian random vectors. In particular, we provide an alternative proof of the consistency of the Lasso for sparse VAR models. In addition, we extend the applicability of the general results to some non-Gaussian and nonlinear models. A key technical contribution of this work is a Hanson-Wright type concentration inequality for \beta-mixing sub-Gaussian random vectors, potentially applicable to the study of other convex and/or nonconvex structures.
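The estimation setup the abstract describes (a Lasso estimate of the best linear predictor from dependent observations) can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' code: we simulate a sparse, stable Gaussian VAR(1), regress X_t on X_{t-1}, and fit the Lasso by proximal gradient descent (ISTA); the transition matrix, penalty level, and iteration count are all illustrative choices.

```python
import numpy as np

# --- Simulate a sparse, stable VAR(1): X_t = A X_{t-1} + noise ---
rng = np.random.default_rng(0)
p, T = 10, 500
A = 0.5 * np.eye(p)                        # sparse transition matrix (illustrative)

X = np.zeros((T, p))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.normal(scale=0.5, size=p)

Y, Z = X[1:], X[:-1]                       # regress X_t on X_{t-1}

def lasso_ista(Z, Y, lam, n_iter=500):
    """Minimize (1/2n)||Y - Z B||_F^2 + lam ||B||_1 by ISTA."""
    n = Z.shape[0]
    L = np.linalg.norm(Z, 2) ** 2 / n      # Lipschitz constant of the gradient
    B = np.zeros((Z.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = Z.T @ (Z @ B - Y) / n       # gradient of the squared-error term
        B = B - grad / L                   # gradient step
        B = np.sign(B) * np.maximum(np.abs(B) - lam / L, 0.0)  # soft-threshold
    return B

B_hat = lasso_ista(Z, Y, lam=0.05)         # B_hat estimates A.T (up to shrinkage)
```

With a sufficiently large penalty, the off-diagonal (truly zero) coefficients are driven to zero while the diagonal signal survives, which is the sparse-recovery behavior the abstract's non-asymptotic error bounds quantify under mixing rather than i.i.d. assumptions.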

Authors who are presenting talks have a * after their name.

Back to the full JSM 2016 program

Copyright © American Statistical Association