Abstract Details

Activity Number: 288 - SLDS CSpeed 5
Type: Contributed
Date/Time: Wednesday, August 11, 2021, 1:30 PM to 3:20 PM EDT
Sponsor: Section on Statistical Learning and Data Science
Abstract #318588
Title: PCAN: Principal Component Analysis for Networks
Author(s): Jihui Lee* and James D. Wilson
Companies: Weill Medical College of Cornell University and University of San Francisco
Keywords: Network Representation Learning; Embedding; Principal Component Analysis
Abstract:

Network representation learning (NRL) is an important machine learning task that seeks meaningful low-dimensional representations, or embeddings, of network data. Despite its recent prominence, NRL methods generally suffer from a lack of interpretability. Inferential analyses of identified embeddings are difficult, limiting the application of NRL strategies. In this talk, we introduce a decomposition technique called Principal Component Analysis for Networks (PCAN) that identifies statistically meaningful embeddings of network samples. Not only does PCAN inherit the interpretability of PCA, but it also provides a straightforward strategy to visualize, cluster, and train predictive algorithms on a sample of complex networks. We provide a central limit theorem for the embeddings identified by PCAN when the observed sample is a collection of kernel-based random graphs, enabling hypothesis testing for two-sample comparisons. We investigate the utility of PCAN through simulation studies and applications to network samples of brain functional connectivity and political co-voting behavior. Our findings show that PCAN is a useful and straightforward tool for analyzing network samples.
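The PCAN decomposition itself is defined in the accompanying paper rather than in this abstract. The sketch below is only a naive illustration of the general idea the abstract describes, namely obtaining PCA-style embeddings for a sample of networks observed on a shared node set; the function name, the vectorization of adjacency matrices, and the simulated random-graph sample are assumptions for illustration, not the authors' PCAN procedure.

```python
import numpy as np

def naive_network_pca(adjacency_matrices, n_components=2):
    """Embed a sample of networks by vectorizing each adjacency matrix
    and applying ordinary PCA. This is NOT the PCAN method from the
    talk; it only illustrates PCA-style embeddings of network samples.
    """
    # Stack the upper-triangular entries of each n x n adjacency matrix
    # into a (num_networks, n*(n-1)/2) data matrix.
    n = adjacency_matrices[0].shape[0]
    iu = np.triu_indices(n, k=1)
    X = np.vstack([A[iu] for A in adjacency_matrices])

    # Center the data and compute principal directions via SVD.
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

    # Scores (embeddings) of each network on the leading components.
    embeddings = U[:, :n_components] * S[:n_components]
    components = Vt[:n_components]
    return embeddings, components

# Illustrative usage: embed a small sample of simulated random graphs.
rng = np.random.default_rng(0)
sample = []
for _ in range(20):
    p = rng.uniform(0.1, 0.4)                 # edge probability per graph
    A = (rng.random((15, 15)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                               # symmetric, no self-loops
    sample.append(A)

emb, comps = naive_network_pca(sample, n_components=2)
print(emb.shape)  # (20, 2): one 2-D embedding per network
```

Each row of `emb` can then be plotted, clustered, or fed to a predictive model, which is the kind of downstream use the abstract attributes to PCAN embeddings.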


Authors who are presenting talks have a * after their name.
