A Decision-Theoretic Model of Assistance

Alan Fern, Sriraam Natarajan, Kshitij Judah, Prasad Tadepalli

There has been growing interest in intelligent assistants for a variety of applications, from organizing tasks for knowledge workers to helping people with dementia. In this paper, we present and evaluate a decision-theoretic framework that captures the general notion of intelligent assistance. The objective is to observe a goal-directed agent and to select assistive actions in order to minimize the overall cost. We formulate the problem as an assistant POMDP where the hidden state corresponds to the agent's unobserved goals. This formulation allows us to exploit (partial) domain models both for estimating the agent's goals and for selecting assistive actions. In addition, the formulation naturally handles uncertainty, varying action costs, and customization to specific agents via learning. We argue that in many domains myopic heuristics will be adequate for selecting actions in the assistant POMDP, and present two such heuristics---one based on MDP planning and another based on policy rollout. We evaluate our approach in two domains where human subjects perform tasks in game-like computer environments. The results show that the assistant substantially reduces user effort at only a modest computational cost.
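The goal-estimation step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the 1-D toy domain, the two-goal prior, and the noisy-rational likelihood model (`P(action | goal) ∝ exp(beta * Q)`) are all illustrative assumptions, chosen only to show how observing the agent's actions shifts a posterior over hidden goals.

```python
# Hypothetical sketch of Bayesian goal estimation for an assistant POMDP.
# The domain, goals, and likelihood model are illustrative assumptions.
import math

def update_goal_posterior(prior, observed_action, state, q_value, beta=2.0):
    """Update P(goal | observed action), assuming the agent is noisily
    rational: P(action | goal) is proportional to exp(beta * Q(s, a; goal))."""
    posterior = {}
    for goal, p in prior.items():
        qs = {a: q_value(state, a, goal) for a in ("left", "right")}
        z = sum(math.exp(beta * q) for q in qs.values())
        posterior[goal] = p * math.exp(beta * qs[observed_action]) / z
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Toy 1-D domain: states are integer positions, goals are target positions,
# and Q is the negative distance to the goal after taking the action.
def q_value(state, action, goal):
    nxt = state + (1 if action == "right" else -1)
    return -abs(goal - nxt)

prior = {-3: 0.5, 3: 0.5}  # two equally likely hidden goals
posterior = update_goal_posterior(prior, "right", state=0, q_value=q_value)
print(posterior)  # the goal at +3 becomes much more likely
```

Once the posterior is sharp enough, a myopic assistant can score each candidate assistive action by its expected cost savings under this goal distribution, rather than solving the full POMDP.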

Subjects: 12.1 Reinforcement Learning; 7.2 Software Agents

Submitted: May 30, 2006


This page is copyrighted by AAAI. All rights reserved.