AAAI Publications, The Thirtieth International FLAIRS Conference

What-If Prediction via Inverse Reinforcement Learning
Masahiro Kohjima, Tatsushi Matsubayashi, Hiroshi Sawada

Last modified: 2017-05-03

Abstract


What happens if a new street is constructed in a city? What happens if a certain traffic regulation is enforced in an exhibition hall? Answering such questions is important for identifying “good” operation scenarios that improve city and event comfort. In this paper, we propose a new method, built on the framework of inverse reinforcement learning (IRL), that can answer these and similar questions. Given any scenario among the executable scenario candidates, the proposed method predicts the impact on people under the condition that the scenario is executed. The proposed method consists of three steps: parameter estimation, scenario integration, and prediction. In the parameter estimation step, our new IRL algorithm estimates both the cost (reward) function and the transition probability from past transition logs. Note that the scenario to be predicted need not have been executed in the past. In the scenario integration step, the estimated parameters are updated with the scenario information, and prediction is conducted in the final step. We evaluate the effectiveness of the proposed method through experiments on synthetic data and real car-probe data.
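The three-step pipeline described above can be sketched as follows. This is an illustrative toy implementation, not the authors' algorithm: the paper estimates the cost function with an IRL method on linearly solvable MDPs, whereas here the "cost" is simply the negative log of empirical visitation frequency, and all function names (`estimate_parameters`, `integrate_scenario`, `predict_visitation`) are assumptions made for this sketch.

```python
import numpy as np

def estimate_parameters(logs, n_states, smoothing=1e-3):
    """Step 1 (sketch): estimate transition probabilities and a crude
    cost from transition logs (s, s'). Stand-in for the paper's IRL
    estimation: cost = -log(empirical visitation frequency)."""
    counts = np.full((n_states, n_states), smoothing)
    for s, s_next in logs:
        counts[s, s_next] += 1.0
    trans = counts / counts.sum(axis=1, keepdims=True)
    visits = counts.sum(axis=0)                  # incoming-visit counts
    cost = -np.log(visits / visits.sum())
    return trans, cost

def integrate_scenario(trans, new_edges):
    """Step 2 (sketch): update the estimated transition probabilities
    with scenario information, e.g. a new street opens transitions
    that never appeared in the past logs."""
    trans = trans.copy()
    for s, s_next, p in new_edges:               # (from, to, min prob.)
        trans[s, s_next] = max(trans[s, s_next], p)
    return trans / trans.sum(axis=1, keepdims=True)

def predict_visitation(trans, cost, start, horizon=20):
    """Step 3 (sketch): forward-simulate a policy that reweights
    transitions toward low-cost states, and return the expected
    state-visitation distribution over the horizon."""
    n = trans.shape[0]
    desirability = np.exp(-cost)                 # low cost -> high weight
    policy = trans * desirability[None, :]
    policy /= policy.sum(axis=1, keepdims=True)
    dist = np.zeros(n)
    dist[start] = 1.0
    visitation = dist.copy()
    for _ in range(horizon):
        dist = dist @ policy
        visitation += dist
    return visitation / (horizon + 1)
```

For example, if past logs only ever show movement 0 → 1 → 2, a scenario that adds a direct 0 → 2 edge (a "new street") raises the predicted visitation of state 2 even though that transition was never observed.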

Keywords


Inverse Reinforcement Learning; Linearly Solvable MDP; What-If Prediction
