Abstract:
|
High-dimensional time series analysis is increasingly prominent in modern applications. Existing theoretical results for the Lasso, however, require i.i.d. samples. Recent papers have extended these results to sparse Gaussian Vector Autoregressive (VAR) models, but they rely critically on the assumption that the true data generating mechanism (DGM) is a Gaussian VAR. We derive non-asymptotic inequalities for the estimation and prediction errors of the Lasso estimate of the best linear predictor without assuming any underlying DGM of a special parametric form. Instead, we rely only on stationarity and mixing conditions to establish consistency of the Lasso in the following two scenarios: (a) \alpha-mixing Gaussian random vectors, and (b) \beta-mixing sub-Gaussian random vectors. In particular, we provide an alternative proof of the consistency of the Lasso for sparse VAR models. In addition, we extend the applicability of the general results to some non-Gaussian and nonlinear models. A key technical contribution of this work is a Hanson-Wright type concentration inequality for \beta-mixing sub-Gaussian random vectors, potentially applicable to the study of other convex and/or nonconvex structures.
|