
Abstract Details

Activity Number: 40 - Statistical Learning: Theory and Methods
Type: Contributed
Date/Time: Sunday, July 30, 2017: 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #323555
Title: Constant-Gain Stochastic Gradient Algorithm in Nonstationary Environment
Author(s): Jingyi Zhu* and James C. Spall
Companies: Johns Hopkins University Applied Mathematics and Statistics; Applied Physics Laboratory
Keywords: Stochastic Gradient; Online Learning; Nonstationary Optimization; Error Bound
Abstract:

The stochastic gradient (SG) algorithm is widely applied in stochastic optimization. Its key properties of convergence and asymptotic normality rest on the assumption of a stationary and unique optimizer. In many applications, however, the target parameter and/or the underlying loss function drift over time, and the notion of "convergence" no longer applies. To keep up in a nonstationary environment, the gain sequence of SG must be non-decaying. Others have considered nonstationary problems under the general Robbins-Monro stochastic approximation setting and provided asymptotic stochastic big-O bounds on the tracking capability of constant-gain recursive algorithms. In contrast, this work aims to deliver a computable error bound for the constant-gain SG algorithm for practical use. The case of interest requires strong convexity of the time-varying loss function, but the restriction imposed on the drift of the target parameter is minimal.

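The setting described in the abstract can be illustrated with a short simulation sketch (not part of the abstract or the paper): a hypothetical quadratic, strongly convex loss whose minimizer drifts as a random walk, tracked by SG with a constant gain. The drift model, gain value, noise level, and all variable names below are illustrative assumptions, not specifics from the authors' work.

# Minimal sketch: constant-gain SG tracking a drifting optimizer.
# Hypothetical loss L_k(theta) = 0.5 * (theta - theta_star_k)^2, whose
# minimizer theta_star_k follows a random walk (an assumed drift model).
import numpy as np

rng = np.random.default_rng(0)

a = 0.1            # constant (non-decaying) gain
theta = 0.0        # SG iterate
theta_star = 0.0   # time-varying optimizer (unknown to the algorithm)
sq_errors = []

for k in range(2000):
    theta_star += 0.01 * rng.standard_normal()       # random-walk drift of the target
    noisy_grad = (theta - theta_star) + 0.5 * rng.standard_normal()  # noisy gradient of L_k
    theta = theta - a * noisy_grad                    # constant-gain SG update
    sq_errors.append((theta - theta_star) ** 2)

# A decaying gain would eventually freeze the iterate and lose the moving
# target; the constant gain keeps the mean-squared tracking error bounded.
print("average squared tracking error over last 500 steps:",
      np.mean(sq_errors[-500:]))
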

Authors who are presenting talks have a * after their name.

