Published:
May 2003
Proceedings:
Proceedings of the Sixteenth International Florida Artificial Intelligence Research Society Conference (FLAIRS 2003)
Track:
All Papers
Abstract:
Reinforcement learning has become a widely used methodology for building intelligent agents across a broad range of applications. Its performance deteriorates, however, in tasks with sparse feedback or long delays between reinforcements. This paper presents an extension in which an advisory entity provides additional feedback to the agent. The agent combines the rewards supplied by the environment with this advice to learn faster and to form policies tuned to the advisor's preferences while still achieving the underlying task objective. The advice is converted into "tuning" or user rewards that, together with the task rewards, define a composite reward function that more accurately reflects the advisor's perception of the task. At the same time, the formation of erroneous loops due to incorrect user rewards is avoided by imposing formal bounds on the user reward component. The approach is illustrated on a robot navigation task.
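The composite-reward idea in the abstract can be sketched as follows. The grid world, the toy advisor, the reward cap, and all names below are illustrative assumptions rather than the paper's actual formulation; in particular, the paper derives formal bounds on the user reward component, whereas here a simple clip to a fixed constant stands in for them.

```python
import random

GRID = 5                  # hypothetical 5x5 grid world
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
USER_REWARD_BOUND = 0.5   # stand-in for the paper's formal bound

def step(state, action):
    """Move in the grid; the environment pays 1.0 only at the goal."""
    x, y = state
    nx = min(max(x + action[0], 0), GRID - 1)
    ny = min(max(y + action[1], 0), GRID - 1)
    nxt = (nx, ny)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def manhattan(s):
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def advisor_reward(s, s2):
    """Toy advisor: rewards progress toward the goal, penalizes the rest.
    Clipping the advice keeps bad feedback from dominating the task reward
    and creating reward loops."""
    raw = 0.2 if manhattan(s2) < manhattan(s) else -0.2
    return max(-USER_REWARD_BOUND, min(USER_REWARD_BOUND, raw))

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on the composite reward r_task + r_user."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q.get((s, act), 0.0))
            s2, r_task = step(s, a)
            r = r_task + advisor_reward(s, s2)   # composite reward
            best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if s == GOAL:
                break
    return Q
```

A greedy rollout of the learned table walks from (0, 0) to the goal. Without the clip, a sufficiently large erroneous user reward could instead make the agent cycle forever, which is the failure mode the formal bounds are meant to rule out.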
ISBN 978-1-57735-177-1
Published by The AAAI Press, Menlo Park, California.