Activity Number: 149 - Statistical Learning for Decision Support
Type: Contributed
Date/Time: Monday, August 8, 2022: 10:30 AM to 12:20 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #322743

Title: Generalized V-Learning Framework for Estimating Dynamic Treatment Regimes
Author(s): Duyeol Lee* and Michael Kosorok
Companies: Wells Fargo and University of North Carolina at Chapel Hill
Keywords: V-learning; Reinforcement learning; Precision medicine; Markov decision processes

Abstract:
Precision medicine is an approach that incorporates personalized information to efficiently determine which treatments are best for which types of patients. A key component of precision medicine is creating mathematical estimators for clinical decision-making. Dynamic treatment regimes formalize tailored treatment plans as sequences of decision rules. Recently, the V-learning method was introduced to estimate optimal dynamic treatment regimes. This method showed good performance compared with existing reinforcement learning methods such as greedy gradient Q-learning. However, the complicated functional form of its loss function makes it difficult to apply modern machine learning methods to estimate the value functions of treatment policies. We propose a generalized V-learning framework for estimating optimal treatment regimes. The proposed method adopts widely used loss functions and an iterative method to estimate value functions. Simulation studies show that the proposed method outperforms the original V-learning method.
Authors who are presenting talks have a * after their name.
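The abstract does not give implementation details, so the following is only a rough sketch of the kind of iterative, loss-based value-function estimation it describes: a generic fitted policy-evaluation loop with a squared-error loss. The simulated data, the linear value model, and the helper functions `policy` and `features` are hypothetical illustrations, not the authors' generalized V-learning estimator.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical observational data: states, binary treatments, rewards, next states.
n, p = 500, 3
S = rng.normal(size=(n, p))                                  # patient states
A = rng.integers(0, 2, size=n)                               # observed treatments
R = S[:, 0] * (2 * A - 1) + rng.normal(scale=0.5, size=n)    # immediate rewards
S_next = 0.5 * S + rng.normal(scale=0.5, size=(n, p))        # next states

gamma = 0.9  # discount factor


def policy(state):
    """Illustrative target policy: treat when the first state feature is positive."""
    return (state[:, 0] > 0).astype(int)


def features(state, action):
    """State-action features for a simple linear value model."""
    a = action.reshape(-1, 1)
    return np.hstack([state, a, state * a])


# Iterative (fitted) policy evaluation with a squared-error loss:
# repeatedly regress the Bellman target R + gamma * Q(S', pi(S')) on (S, A).
model = Ridge(alpha=1.0)
Q_next = np.zeros(n)
for _ in range(50):
    target = R + gamma * Q_next
    model.fit(features(S, A), target)
    A_next = policy(S_next)
    Q_next = model.predict(features(S_next, A_next))

# Estimated value of the target policy, averaged over the observed initial states.
V_hat = model.predict(features(S, policy(S))).mean()
print(f"Estimated policy value: {V_hat:.3f}")
```

Any loss function and regression learner could be substituted into the fitting step above; the abstract's point is that such plug-in flexibility is what the original V-learning loss makes difficult.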