
All Times EDT

Abstract Details

Activity Number: 455 - Learning Under Nonstationarity
Type: Invited
Date/Time: Wednesday, August 10, 2022 : 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #319217
Title: Nonstationary Reinforcement Learning Without Prior Knowledge: An Optimal Black-Box Approach
Author(s): Chen-Yu Wei and Haipeng Luo*
Companies: University of Southern California and University of Southern California
Keywords: reinforcement learning; dynamic regret; black-box reduction
Abstract:

We propose a black-box reduction that turns a certain reinforcement learning algorithm with optimal regret in a (near-)stationary environment into another algorithm with optimal dynamic regret in a non-stationary environment, importantly without any prior knowledge of the degree of non-stationarity. By plugging different algorithms into our black-box, we provide a list of examples showing that our approach not only recovers recent results for (contextual) multi-armed bandits achieved by very specialized algorithms, but also significantly improves the state of the art for (generalized) linear bandits, episodic MDPs, and infinite-horizon MDPs in various ways. Specifically, in most cases our algorithm achieves the optimal dynamic regret $\tilde{O}(\min\{\sqrt{LT}, \Delta^{1/3}T^{2/3}\})$ where $T$ is the number of rounds and $L$ and $\Delta$ are the number and amount of changes of the world respectively, while previous works only obtain suboptimal bounds and/or require the knowledge of $L$ and $\Delta$.
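To make the trade-off in the stated bound concrete, the following minimal Python sketch evaluates the two terms of $\tilde{O}(\min\{\sqrt{LT}, \Delta^{1/3}T^{2/3}\})$ for illustrative inputs; the values of $T$, $L$, and $\Delta$ below are hypothetical and the function only computes the order of the bound (ignoring constants and logarithmic factors), not any part of the authors' algorithm.

```python
import math

def dynamic_regret_bound(T, L, Delta):
    """Order of the dynamic regret min{sqrt(L*T), Delta^(1/3) * T^(2/3)},
    ignoring constants and log factors.

    T     -- number of rounds
    L     -- number of changes of the environment
    Delta -- total amount (variation) of changes
    """
    switch_term = math.sqrt(L * T)                 # better when changes are few but possibly large
    variation_term = Delta ** (1 / 3) * T ** (2 / 3)  # better when total variation is small
    return min(switch_term, variation_term)

# Hypothetical example: few abrupt changes over many rounds.
# With T = 10_000, L = 4: sqrt(L*T) = 200, which beats the variation term here.
print(dynamic_regret_bound(T=10_000, L=4, Delta=100.0))
```

The point of the $\min$ is that the reduction adapts to whichever characterization of non-stationarity (count $L$ or magnitude $\Delta$) yields the smaller bound, without being told either quantity in advance.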


Authors who are presenting talks have a * after their name.

Back to the full JSM 2022 program