
Abstract Details

Activity Number: 304 - Statistical Learning: Dimension Reduction
Type: Contributed
Date/Time: Tuesday, August 1, 2017 : 8:30 AM to 10:20 AM
Sponsor: Section on Statistical Learning and Data Science
Abstract #324888
Title: Sequential Co-Sparse Factor Regression
Author(s): Aditya Mishra* and Kun Chen and Dipak K Dey
Companies: University of Connecticut and Department of Statistics, University of Connecticut and University of Connecticut
Keywords: multivariate analysis ; reduced rank regression ; regularization ; singular value decomposition ; variable selection
Abstract:

In multivariate regression models, a sparse singular value decomposition of the regression component matrix is appealing for reducing dimensionality and facilitating interpretation. However, the recovery of such a decomposition remains very challenging, largely due to the simultaneous presence of orthogonality constraints and co-sparsity regularization. By delving into the underlying statistical data generation mechanism, we reformulate the problem as a supervised co-sparse factor analysis, and develop an efficient sequential computation procedure that completely bypasses the orthogonality requirements. At each sequential step, the problem reduces to a sparse multivariate regression with a unit-rank constraint. Conveniently, each sequentially extracted sparse, unit-rank coefficient matrix automatically yields co-sparsity in its pair of singular vectors. Each latent factor is thus a sparse linear combination of the predictors and may influence only a subset of responses. Our estimators enjoy the oracle properties asymptotically; a non-asymptotic error bound further reveals some interesting finite-sample behaviors of the estimators. We demonstrate the efficacy of our method through examples.
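To illustrate the sequential idea described above, here is a minimal NumPy sketch: each step fits a sparse unit-rank layer by alternating soft-thresholded least-squares updates of the two singular vectors, then deflates the response before extracting the next layer. This is an assumption-laden stand-in, not the authors' actual algorithm; the function names, the single penalty level `lam`, and the plain soft-thresholding updates are all hypothetical simplifications.

```python
import numpy as np

def unit_rank_cosparse(X, Y, lam, n_iter=50):
    """One sequential step: a sparse unit-rank fit C_k = u v' (illustrative sketch,
    not the paper's exact procedure). Alternates soft-thresholded least-squares
    updates of u (predictor side) and v (response side); `lam` is a hypothetical
    common penalty level shared by both updates."""
    p, q = X.shape[1], Y.shape[1]
    # Warm start v from the leading right singular vector of X'Y.
    _, _, Vt = np.linalg.svd(X.T @ Y, full_matrices=False)
    v = Vt[0]
    for _ in range(n_iter):
        # u-update: regress Yv on X, then soft-threshold (lasso-like sparsity).
        u = np.linalg.lstsq(X, Y @ v, rcond=None)[0]
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        if not u.any():
            return np.zeros((p, q))   # layer vanished: nothing left to extract
        # v-update: regress Y' on Xu, soft-threshold, renormalize to unit length.
        xu = X @ u
        v = Y.T @ xu / (xu @ xu)
        v = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros((p, q))
        v /= nv
    return np.outer(u, v)             # co-sparse unit-rank coefficient layer

def sequential_cosparse(X, Y, rank, lam):
    """Extract up to `rank` co-sparse unit-rank layers, deflating the response
    after each step, so no explicit orthogonality constraint is needed."""
    C = np.zeros((X.shape[1], Y.shape[1]))
    R = Y.copy()
    for _ in range(rank):
        Ck = unit_rank_cosparse(X, R, lam)
        if not Ck.any():
            break                     # no further layer found: stop early
        C += Ck
        R = R - X @ Ck                # remove the fitted layer before the next step
    return C
```

Because each extracted layer is the outer product of a thresholded `u` and `v`, sparsity in the pair of singular vectors (co-sparsity) falls out automatically, mirroring the property claimed in the abstract.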


Authors who are presenting talks have a * after their name.


Copyright © American Statistical Association