
All Times EDT

Abstract Details

Activity Number: 114 - Time Series Methods and Applications
Type: Contributed
Date/Time: Monday, August 8, 2022 : 8:30 AM to 10:20 AM
Sponsor: Section on Statistical Learning and Data Science
Abstract #322375
Title: Variational Objectives for Sequential Variational AutoEncoders by Sequential Bayes Filtering
Author(s): Tsuyoshi Ishizone* and Tomoyuki Higuchi and Kazuyuki Nakamura
Companies: Meiji University and Chuo University and Meiji University
Keywords: variational inference; ensemble Kalman filter; deep sequential generative model; time-series prediction; filtering variational objectives; sequential variational auto-encoders
Abstract:

Deep sequential generative models are used in fields such as geoscience, materials informatics, and computer vision. This talk focuses on sequential variational auto-encoders (SVAEs) and variational inference (VI) methods for improving their parameter learning. SVAEs extend variational auto-encoders to sequential structures. VI is a learning technique that maximizes the evidence lower bound (ELBO), a tractable lower bound on the log marginal likelihood, in place of the intractable likelihood itself. Several previous works combine VI with sequential Monte Carlo (SMC) to obtain a tighter lower bound and enhance parameter learning. These works have two drawbacks: low particle diversity and biased gradient estimates. Particle diversity refers to how well an ensemble of particles represents the latent distribution; biased gradient estimates steer learning in directions that differ from the correct ones. We therefore propose a new VI method that combines VI with the ensemble Kalman filter to overcome these drawbacks. The proposed method outperforms previous methods in predictive ability and particle diversity. Detailed experimental results will be presented in the talk.
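To illustrate the ensemble Kalman filter mentioned above, here is a minimal sketch of a single stochastic EnKF analysis step with a linear observation model. This is a generic textbook-style EnKF update, not the authors' method; all names, shapes, and settings are assumptions for illustration.

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble : (n, d) array of state particles (the prior ensemble)
    y        : (m,) observation vector
    H        : (m, d) linear observation operator (hypothetical)
    R        : (m, m) observation-noise covariance
    """
    n, d = ensemble.shape
    x_mean = ensemble.mean(axis=0)
    A = ensemble - x_mean                            # ensemble anomalies, (n, d)
    P = A.T @ A / (n - 1)                            # sample covariance, (d, d)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain, (d, m)
    # Perturb the observation per particle so the analysis ensemble
    # keeps the correct posterior spread (stochastic EnKF variant).
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    innovations = y_pert - ensemble @ H.T            # (n, m)
    return ensemble + innovations @ K.T              # analysis ensemble, (n, d)

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(100, 2))          # 100 particles in R^2
H = np.array([[1.0, 0.0]])                           # observe the first coordinate
R = np.array([[0.25]])
posterior = enkf_update(prior, np.array([1.0]), H, R, rng)
```

Because every particle is updated rather than resampled, no particles are duplicated or discarded, which is the intuition behind the improved particle diversity claimed in the abstract.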


Authors who are presenting talks have a * after their name.

Back to the full JSM 2022 program