Abstract:
Decision-theoretic planning with risk-sensitive planning objectives is important for building autonomous agents or decision-support agents for real-world applications. However, this line of research has been largely ignored in the artificial intelligence and operations research communities, since planning with risk-sensitive objectives is much more complex than planning with risk-neutral ones. To remedy this situation, we develop conditions that guarantee the existence and finiteness of the expected utilities of the total plan-execution reward for risk-sensitive planning with totally observable Markov decision process models. In the case of Markov decision process models with both positive and negative rewards, our results hold for stationary policies only, but we conjecture that they can be generalized to hold for all policies.
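For concreteness (the following formula is an illustration in standard notation, not quoted from the paper): risk-sensitive planning evaluates a policy \pi by the expected utility of the total plan-execution reward,

\[
\mathbb{E}^{\pi}\!\left[ \, U\!\left( \sum_{t=0}^{\infty} r_t \right) \right],
\]

where U is a nonlinear utility function, for example the exponential utility U(w) = -e^{-\lambda w} with \lambda > 0 in the risk-averse case; the linear utility U(w) = w recovers the risk-neutral objective. The existence and finiteness conditions referred to above concern precisely this expectation.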