
All Times EDT

Abstract Details

Activity Number: 131 - The Future of Transportation: The Predicting Power of Driver Behavior Data
Type: Topic Contributed
Date/Time: Monday, August 3, 2020 : 1:00 PM to 2:50 PM
Sponsor: Transportation Statistics Interest Group
Abstract #312982
Title: Using Inverse Reinforcement Learning to Predict the Impact of Distraction on Future Driver Performance
Author(s): Mayuree Binjolkar* and Linda Ng Boyle
Companies: University of Washington and University of Washington
Keywords: Reinforcement Learning; Prediction; Bayesian; Driving Behavior; Simulation; Artificial Intelligence

A reinforcement learning (RL) approach is used to predict the impact of secondary task activities on driving performance. Under RL theory, any driving action can be treated as maximizing a reward function within a Markov decision process (MDP) formulation. The driver actions from a driving simulator study include vehicle speed, braking, accelerating, steering, and eyes-on-road time. These actions are considered in the context of the road and traffic environment. Given that secondary task engagement is known, we can estimate the unknown reward function using maximum entropy inverse reinforcement learning (MaxEntIRL). However, this method supplies only one optimal policy as input to the IRL agent and assumes that the expert demonstrator is infallible, which is not always the case. In this study, we compare MaxEntIRL to a Bayesian inverse reinforcement learning model, which can return a set of suboptimal policies and accounts for a larger number of states and actions.
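To make the contrast concrete, the toy sketch below illustrates the Bayesian IRL idea on a hypothetical three-state MDP (not the authors' driving-simulator data): rather than a single point estimate of the reward as in MaxEntIRL, Metropolis-Hastings sampling with a Boltzmann-rational likelihood yields a posterior distribution over reward functions, which tolerates suboptimal expert demonstrations. All state/action semantics, the prior, and the temperature `beta` are illustrative assumptions.

```python
import numpy as np

# Toy Bayesian IRL sketch. States could stand in for coarse driving
# situations and actions for maneuvers; here they are abstract.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.9

# Deterministic toy transitions: action 0 stays put, action 1 moves right.
T = np.zeros((n_states, n_actions, n_states))
for s in range(n_states):
    T[s, 0, s] = 1.0
    T[s, 1, min(s + 1, n_states - 1)] = 1.0

def q_values(reward, n_iter=100):
    """Q-iteration for a state-only reward vector."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        V = Q.max(axis=1)                      # greedy state values
        Q = reward[:, None] + gamma * (T @ V)  # Bellman backup
    return Q

def log_likelihood(reward, demos, beta=5.0):
    """Boltzmann-rational likelihood of (state, action) demonstrations:
    the expert is noisily, not infallibly, optimal."""
    Q = q_values(reward)
    logp = beta * Q - np.log(np.exp(beta * Q).sum(axis=1, keepdims=True))
    return sum(logp[s, a] for s, a in demos)

# Demonstrations that always move right, hinting the last state is rewarding.
demos = [(0, 1), (1, 1), (0, 1)]

# Metropolis-Hastings over reward vectors with a standard normal prior.
reward, samples = np.zeros(n_states), []
for _ in range(2000):
    proposal = reward + rng.normal(scale=0.3, size=n_states)
    log_accept = (log_likelihood(proposal, demos) - 0.5 * proposal @ proposal) \
               - (log_likelihood(reward, demos) - 0.5 * reward @ reward)
    if np.log(rng.uniform()) < log_accept:
        reward = proposal
    samples.append(reward.copy())

posterior_mean = np.mean(samples[500:], axis=0)  # discard burn-in
print(posterior_mean)
```

The posterior mean should assign the right-most state a higher reward than the starting state, and the retained samples give a set of plausible reward functions rather than a single answer.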

Authors who are presenting talks have a * after their name.

Back to the full JSM 2020 program