
All Times EDT

Abstract Details

Activity Number: 284 - Statistical Learning for Dependent and Complex Data: New Directions and Innovation
Type: Invited
Date/Time: Wednesday, August 5, 2020 : 10:00 AM to 11:50 AM
Sponsor: Section on Statistical Learning and Data Science
Abstract #309228
Title: High-Dimensional Sparse Nonlinear Vector Autoregressive Models
Author(s): Yuefeng Han* and Wei Biao Wu and Likai Chen
Companies: Rutgers University and University of Chicago and Washington University in St. Louis
Keywords: Vector Autoregressive Model (VAR); High dimension; Lasso estimator; Bernstein inequality; Serial correlation

High-dimensional vector autoregressive (VAR) models have a wide range of scientific applications in econometrics, computational biology, climatology, and beyond. Prior work has focused on linear VAR models, which can be restrictive in practice. In this talk, we introduce a nonparametric sparse additive model, a more flexible framework that addresses this limitation. Our method uses basis expansions to construct nonlinear VAR models. We establish convergence rates and model selection consistency of the estimators in terms of the dependence measures of the processes, the moment conditions of the errors, the sparsity condition, and the basis expansions. Our theory substantially extends earlier results on linear VAR models by allowing non-Gaussian and nonlinear structures. As our main technical tools, we derive sharp Bernstein-type tail-probability inequalities for non-sub-Gaussian linear and nonlinear VAR processes. We also present numerical experiments that support the theoretical results and demonstrate the advantages of nonlinear VAR models on a time series gene expression data set.
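To illustrate the estimation strategy the abstract outlines (basis expansion of lagged values, then a lasso fit per component series), here is a minimal sketch. It is not the authors' code: the cubic polynomial basis, the simulated tanh dynamics, and the penalty level are illustrative assumptions; the paper's additive model would typically use spline bases and theoretically tuned penalties.

```python
# Hedged sketch: sparse additive nonlinear VAR(1) estimation via
# basis expansion + lasso.  All specifics here (polynomial basis,
# tanh dynamics, alpha) are illustrative, not the authors' choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, T, degree = 5, 400, 3           # dimension, length, basis size

# Simulate a sparse nonlinear VAR(1): coordinate j of X_t depends
# nonlinearly on only one lagged coordinate, (j + 1) mod p.
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = (0.5 * np.tanh(X[t - 1][(np.arange(p) + 1) % p])
            + 0.3 * rng.standard_normal(p))

def basis(x, d=degree):
    """Coordinate-wise polynomial expansion (a simple stand-in for
    the spline bases used in sparse additive models)."""
    return np.hstack([x ** k for k in range(1, d + 1)])

Z = basis(X[:-1])                  # (T-1) x (p * degree) design matrix
support = []
for j in range(p):                 # one lasso regression per component
    fit = Lasso(alpha=0.01).fit(Z, X[1:, j])
    # consecutive groups of `degree` coefficients belong to one
    # lagged coordinate; a nonzero group marks a selected edge
    support.append({k // degree for k in np.flatnonzero(fit.coef_)})

print(support)                     # estimated parents of each series
```

Because each lasso is fit on the expanded lagged design, a selected group of basis coefficients corresponds to one additive nonlinear edge of the VAR graph; group-structured penalties would be a natural refinement.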

Authors who are presenting talks have a * after their name.

Back to the full JSM 2020 program