Abstract:
|
The stochastic gradient (SG) algorithm is widely applied in stochastic optimization. Its well-known convergence and asymptotic-normality properties rest on the assumption of a stationary and unique optimizer. However, it is often the case that the target parameter and/or the underlying loss function drift over time, so the notion of "convergence" no longer applies. To keep up in such a nonstationary environment, the gain sequence of SG has to be non-decaying. Others have indeed considered nonstationary problems under the general Robbins-Monro stochastic approximation setting and provided asymptotic stochastic big-O bounds on the tracking capability of constant-gain recursive algorithms. In contrast, this work aims to deliver a computable error bound for the constant-gain SG algorithm for practical use. The case of interest requires strong convexity of the time-varying loss function, but the restriction imposed on the drift of the target parameter is minimal.
|
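To make the setting concrete, here is a minimal illustrative sketch, not the paper's method, of a constant-gain SG recursion theta_{k+1} = theta_k - a * g_k(theta_k) tracking a drifting optimizer. The quadratic time-varying loss, random-walk drift, noise scales, and all names below are hypothetical assumptions chosen only for illustration.

    # Minimal sketch (assumed setup, not the paper's method):
    # constant-gain SG tracking a drifting target under the
    # time-varying strongly convex loss
    #   L_k(theta) = 0.5 * ||theta - theta_star_k||^2,
    # where theta_star_k follows a random walk (hypothetical drift model).
    import numpy as np

    rng = np.random.default_rng(0)
    d = 2                      # parameter dimension (assumed)
    a = 0.1                    # constant (non-decaying) gain
    sigma_noise = 0.5          # gradient-noise scale (assumed)
    sigma_drift = 0.05         # drift scale of the target (assumed)

    theta = np.zeros(d)        # SG iterate
    theta_star = np.zeros(d)   # drifting optimizer of L_k

    for k in range(1000):
        # Target parameter drifts; here a random walk for illustration.
        theta_star += sigma_drift * rng.standard_normal(d)
        # Noisy gradient of the strongly convex loss L_k at theta.
        grad = (theta - theta_star) + sigma_noise * rng.standard_normal(d)
        # Constant-gain SG step: because the gain a does not decay, the
        # iterate keeps tracking the moving optimum rather than
        # converging to a fixed point.
        theta -= a * grad

    print("tracking error:", np.linalg.norm(theta - theta_star))

With a decaying gain the iterate would eventually freeze and lose the drifting target; the constant gain trades exact convergence for a bounded tracking error of the kind the paper quantifies.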