Online Program

Friday, May 31
Machine Learning
Recent Advancements in Deep Learning
Fri, May 31, 5:20 PM - 6:25 PM
Regency Ballroom AB

Statistical Evaluation of Long Memory in Recurrent Neural Networks (305185)

Presentation

*Alexander Greaves-Tunnell, University of Washington 
Zaid Harchaoui, University of Washington 

Keywords: deep learning, machine learning, time series, long-range dependence, long memory

Representing and learning long-range dependencies is a central challenge in modern applications of machine learning to sequence data. Yet despite the prominence of this issue, the measurement of long-range dependence, whether in a given data source or in the representation learned by a trained deep model, still relies largely on heuristic tools.
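By way of illustration, one such heuristic tool is classical rescaled-range (R/S) analysis, which estimates the Hurst exponent H of a series; H near 0.5 indicates short memory, while H > 0.5 is the traditional signature of long memory. The sketch below is a minimal illustration, not code from the paper; the minimum window size and dyadic window scheme are conventional choices.

```python
import numpy as np

def rescaled_range_hurst(x, min_window=8):
    """Heuristic Hurst exponent estimate via rescaled-range (R/S) analysis.

    H near 0.5 suggests short memory; H > 0.5 is the classical
    heuristic signature of long memory.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_window
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = chunk.std()             # scale of the window
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_means.append(np.mean(rs))
        size *= 2                       # dyadic window sizes
    # Slope of log E[R/S] against log window size estimates H.
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```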

We develop a novel statistical framework for investigating long-range dependence in current applications of deep sequence modeling, drawing on the well-developed theory of long memory stochastic processes. By analogy with their linear predecessors in the time series literature, we identify recurrent neural networks (RNNs) as nonlinear processes that simultaneously attempt to learn both a feature representation of an input sequence and its long-range dependency structure. We derive testable implications concerning the relationship between long memory in real-world data and its learned representation in a deep learning architecture, and we explore these implications through a semiparametric framework adapted to the high-dimensional setting.
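For a univariate point of reference, a standard semiparametric estimator of the long-memory parameter d is the Geweke-Porter-Hudak (GPH) log-periodogram regression; the framework described above adapts ideas of this kind to the high-dimensional setting. The sketch below is an illustration, not the authors' estimator, and the bandwidth m = sqrt(n) is a conventional default rather than a choice from the paper.

```python
import numpy as np

def gph_estimate(x, m=None):
    """GPH log-periodogram regression estimate of the memory parameter d.

    d = 0 indicates short memory; 0 < d < 0.5 indicates stationary
    long memory.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)               # conventional bandwidth m = sqrt(n)
    # Periodogram at the first m Fourier frequencies lambda_j = 2*pi*j/n.
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)
    # log I(lambda_j) ~ c - 2d * log(2 sin(lambda_j / 2)), so regressing
    # on -2*log(2 sin(lambda_j / 2)) recovers d as the slope.
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return slope
```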

Experiments illustrating this method confirm the presence of long memory in a diverse collection of natural language and music data, but show that a variety of RNN architectures fail to capture this property even after training to benchmark accuracy on a language modeling task.
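As a hypothetical illustration of how such an evaluation might be wired up (this is not the authors' code), one can drive an RNN with a long input sequence, treat each hidden coordinate as a time series, and apply a memory-parameter estimator such as the gph_estimate sketch above:

```python
import numpy as np
import torch

# Hypothetical probe of an (untrained) LSTM's hidden-state memory.
# In the paper, the inputs are natural language and music data; here a
# toy random input stands in purely for illustration.
torch.manual_seed(0)
seq_len, hidden = 4096, 64
lstm = torch.nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)

x = torch.randn(1, seq_len, 1)          # (batch, time, features)
with torch.no_grad():
    states, _ = lstm(x)                 # (1, seq_len, hidden)
states = states.squeeze(0).numpy()      # (seq_len, hidden)

# Estimate the memory parameter of each hidden coordinate, then
# summarize across units.
d_hat = np.array([gph_estimate(states[:, k]) for k in range(hidden)])
print(f"median d-hat across hidden units: {np.median(d_hat):.3f}")
```

Comparing such per-unit estimates against the memory parameter estimated from the input data itself mirrors, in spirit, the paper's comparison of long memory in the data with its learned representation.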