Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence
Volume:
31
Issue:
No. 1: Thirty-First AAAI Conference on Artificial Intelligence
Track:
AAAI Technical Track: Reasoning under Uncertainty
Abstract:
A standard objective in partially-observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is problematic especially in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this threshold constraint. In this work we go beyond both the "expectation" and "threshold" approaches and consider a "guaranteed payoff optimization (GPO)" problem for POMDPs, where we are given a threshold t and the objective is to find a policy π such that (a) each possible outcome of π yields a discounted-sum payoff of at least t, and (b) the expected discounted-sum payoff of π is optimal (or near-optimal) among all policies satisfying (a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
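For concreteness, the GPO objective described above can be stated as a constrained optimization problem. The LaTeX sketch below formalizes the abstract's description; the symbols γ, r_i, and Out(π) (the discount factor, the payoff at step i, and the set of possible outcomes of a policy π) are illustrative notation introduced here, not taken from the paper.

% Sketch of the GPO objective (requires amsmath, amssymb).
% gamma: discount factor; r_i: payoff at step i; Out(pi): the set of
% possible outcomes (plays) of policy pi -- all illustrative notation.
\[
  \max_{\pi \in \Pi_t} \; \mathbb{E}_{\pi}\!\left[\sum_{i \ge 0} \gamma^{i} r_i\right]
  \quad\text{where}\quad
  \Pi_t = \left\{ \pi \;\middle|\; \sum_{i \ge 0} \gamma^{i} r_i(\rho) \ge t
  \ \text{for every outcome } \rho \in \mathrm{Out}(\pi) \right\}.
\]

The constraint set Π_t captures condition (a), that every possible outcome of π meets the threshold t, while the maximization captures condition (b), optimality of the expected payoff among such policies.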
DOI:
10.1609/aaai.v31i1.11046