Solving K-MDPs

Authors

  • Jonathan Ferrer-Mestres, CSIRO
  • Thomas G. Dietterich, Oregon State University
  • Olivier Buffet, INRIA
  • Iadine Chadès, CSIRO

DOI:

https://doi.org/10.1609/icaps.v30i1.6651

Abstract

Markov Decision Processes (MDPs) are employed to model sequential decision-making problems under uncertainty. Traditionally, algorithms for solving MDPs have focused on scaling to large state or action spaces. With increasing applications of MDPs to human-operated domains such as conservation of biodiversity and health, developing easy-to-interpret solutions is of paramount importance to increase uptake of MDP policies. Here, we define the problem of solving K-MDPs: given an original MDP and a constraint on the number of states (K), generate a reduced-state-space MDP that minimizes the difference between the original optimal MDP value function and the reduced optimal K-MDP value function. Building on existing non-transitive and transitive approximate state abstraction functions, we propose a family of three algorithms based on binary search with sub-optimality bounded polynomially in a precision parameter: ϕQK-MDP-ILP, ϕQ*dK-MDP, and ϕa*dK-MDP. We compare these algorithms to a greedy algorithm (ϕQ Greedy K-MDP) and a clustering approach (k-means++ K-MDP). On randomly generated MDPs and two computational sustainability MDPs, ϕa*dK-MDP outperformed all algorithms when it could find a feasible solution. While numerous state abstraction problems have been proposed in the literature, this is the first time the general problem of solving K-MDPs has been formulated. We hope that our work will generate future research aimed at increasing the interpretability of MDP policies in human-operated domains.
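
The problem statement above admits a natural formalization; the following is a sketch in assumed notation (the symbols M, ϕ, and the choice of norm are not taken verbatim from the paper): given an MDP M = ⟨S, A, T, R⟩ with optimal value function V*_M, the K-MDP problem asks for a state abstraction ϕ mapping S onto a reduced state set S' with at most K states,

    \min_{\phi\,:\,S \to S'} \; \max_{s \in S} \left| V^*_M(s) - V^*_{M_\phi}(\phi(s)) \right| \qquad \text{s.t.}\quad |S'| \le K,

where M_ϕ denotes the reduced MDP induced by ϕ. Measuring the gap with a max over states is an assumption here; the abstract only specifies that the difference between the two optimal value functions is minimized.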

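As a rough illustration of the binary-search idea the abstract attributes to its ϕQ*d-style algorithms, the Python sketch below searches for the smallest precision d such that merging states whose optimal Q-values agree to within d yields at most K abstract states. All names (abstract_states, smallest_d_for_k, q_star) are illustrative inventions, not the paper's API, and the greedy merging rule is only one possible approximate Q* abstraction.

    import numpy as np

    def abstract_states(q_star: np.ndarray, d: float) -> int:
        """Greedily group states whose optimal Q-values differ by at most d
        for every action; return the number of abstract states produced.
        q_star has shape (num_states, num_actions). This is one possible
        approximate Q* abstraction, not the paper's exact construction."""
        representatives = []  # Q-value rows of cluster representatives
        for q_row in q_star:
            # Reuse an existing cluster if this state's Q-values are within d
            # of some representative's, componentwise.
            if not any(np.all(np.abs(q_row - rep) <= d) for rep in representatives):
                representatives.append(q_row)
        return len(representatives)

    def smallest_d_for_k(q_star: np.ndarray, k: int, tol: float = 1e-4) -> float:
        """Binary-search the precision parameter d for the smallest value whose
        induced abstraction uses at most k abstract states. The sub-optimality
        of the reduced MDP can then be bounded as a function of d."""
        # Upper bound: with d = max Q spread, all states merge into one cluster.
        lo, hi = 0.0, float(np.ptp(q_star)) or 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if abstract_states(q_star, mid) <= k:
                hi = mid  # feasible: try a tighter (smaller) precision
            else:
                lo = mid  # infeasible: loosen the precision
        return hi

Note that a binary search of this form is only sound if the number of abstract states is non-increasing in d; the greedy grouping above satisfies this only approximately, which is part of why this is a sketch rather than the paper's algorithm.
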

Published

2020-06-01

How to Cite

Ferrer-Mestres, J., Dietterich, T. G., Buffet, O., & Chadès, I. (2020). Solving K-MDPs. Proceedings of the International Conference on Automated Planning and Scheduling, 30(1), 110-118. https://doi.org/10.1609/icaps.v30i1.6651