A Markovian Model for Dynamic and Constrained Resource Allocation Problems

Camille Besse, Brahim Chaib-draa

An autonomous agent, allocating stochastic resources to incoming tasks, faces increasingly complex situations when formulating its control policy. These situations are often constrained by the agent's limited resources, time limits, physical constraints, or other agents. For all these reasons, the complexity and the dimension of the state space grow exponentially with the size of the problem considered. Unfortunately, existing models capture only one aspect of the environment: its sequential nature, its stochasticity, or its constraints. To the best of our knowledge, no model takes all three aspects into account. In this paper, we introduce a new model, based on Dynamic Constraint Satisfaction Problems (DCSPs) and Markov Decision Processes (MDPs), that addresses constrained stochastic resource allocation problems by exploiting the expressiveness and power of CSPs. We thus propose a framework for modeling dynamic and stochastic environments with constrained resource allocation decisions, and we present complexity and experimental results.
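The combination the abstract describes, a feasibility check in the style of a CSP pruning the action set of an MDP, can be illustrated with a toy value-iteration sketch. Everything below (task costs, budget, rewards) is hypothetical and not taken from the paper, and the transitions are kept deterministic for brevity, unlike the stochastic setting the authors target:

```python
# Illustrative sketch only: a toy constrained resource-allocation MDP solved
# by value iteration. The state couples the remaining resource budget with
# the set of open tasks; allocations that violate the budget are pruned,
# which is the simplest way to mimic a CSP-style feasibility check.
# All names and numbers here are hypothetical, not from the paper.
import itertools

RESOURCES = 3          # total units the agent may allocate (assumed)
TASK_COST = [1, 2]     # resource cost of each incoming task (assumed)
GAMMA = 0.95           # discount factor

def feasible_actions(remaining, open_tasks):
    """CSP-style check: an allocation is admissible only if it fits the budget."""
    acts = [None]  # 'wait' is always admissible
    for t in open_tasks:
        if TASK_COST[t] <= remaining:
            acts.append(t)
    return acts

def states():
    """Enumerate (remaining budget, open-task mask) pairs."""
    for rem in range(RESOURCES + 1):
        for mask in itertools.product([0, 1], repeat=len(TASK_COST)):
            yield rem, mask

def step(state, action):
    """Deterministic transition: serving a task closes it and pays its cost."""
    rem, mask = state
    if action is None:
        return (rem, mask), 0.0
    new_mask = tuple(0 if i == action else m for i, m in enumerate(mask))
    return (rem - TASK_COST[action], new_mask), 1.0  # unit reward per served task

def value_iteration(iters=100):
    V = {s: 0.0 for s in states()}
    for _ in range(iters):
        for s in V:
            rem, mask = s
            open_tasks = [i for i, m in enumerate(mask) if m]
            V[s] = max(
                r + GAMMA * V[s2]
                for a in feasible_actions(rem, open_tasks)
                for s2, r in [step(s, a)]
            )
    return V

V = value_iteration()
print(V[(RESOURCES, (1, 1))])  # value of starting with a full budget and all tasks open
```

In this toy instance the optimal policy serves both tasks, so the start-state value is 1 + 0.95 · 1 = 1.95. The point of the sketch is only the structure: constraint handling lives entirely in `feasible_actions`, so a richer DCSP solver could be swapped in without touching the dynamic-programming loop.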

Subjects: 15. Problem Solving; 15.3 Control

Submitted: Apr 6, 2007

This page is copyrighted by AAAI. All rights reserved.