Abstract:
Many important real-world robotic tasks have high diameter; that is, their solution requires a large number of primitive actions by the robot. For example, they may require navigating to distant locations using primitive motor control commands. In addition, modern robots are endowed with rich, high-dimensional sensory systems that provide measurements of a continuous environment. Reinforcement learning (RL) has shown promise as a method for automatically learning robot behavior, but current methods work best on low-diameter, low-dimensional tasks. As a result, the success of RL on real-world tasks still depends on human analysis of the robot, environment, and task to provide a useful sensorimotor representation to the learning agent. A new method, Self-Organizing Distinctive-state Abstraction (SODA), solves this problem by bootstrapping the robot's representation from the pixel level of raw sensor input and motor control signals to a higher action level consisting of distinctive states and extended actions that move the robot between those states. These new states and actions carry the robot through its environment in large steps, allowing it to learn to navigate much more easily and quickly than it could using only its primitive actions and sensations. SODA requires no hand-coded features or other prior knowledge of the robot's sensorimotor system or environment, and it learns an abstraction suitable for supporting multiple tasks in an environment.
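To make the diameter-reduction claim concrete, below is a minimal toy sketch in Python. It is not the paper's implementation: the 1-D world, the evenly spaced landmarks standing in for learned distinctive states, and all names are illustrative assumptions. The point it demonstrates is the one in the abstract: an extended action repeats primitive moves until the next distinctive state, so the high-level learner needs roughly 10 decisions to cross a world whose primitive diameter is about 100.

    # Toy sketch (not SODA's code): extended actions between distinctive
    # states shrink the effective task diameter for a tabular Q-learner.
    import random

    WORLD_SIZE = 100                            # primitive diameter ~ 100 steps
    LANDMARKS = list(range(0, WORLD_SIZE, 10))  # stand-in distinctive states
    GOAL = 90

    def extended_action(pos, direction):
        """Repeat one-cell primitive moves until a distinctive state."""
        steps = 0
        while True:
            pos = max(0, min(LANDMARKS[-1], pos + direction))
            steps += 1
            if pos in LANDMARKS or steps > WORLD_SIZE:
                return pos, steps

    # Q-learning over distinctive states and extended actions only:
    # ~10 high-level decisions replace ~100 primitive ones.
    Q = {(s, a): 0.0 for s in LANDMARKS for a in (-1, 1)}
    alpha, gamma, epsilon = 0.5, 0.95, 0.1

    for episode in range(200):
        s = 0
        while s != GOAL:
            a = (random.choice((-1, 1)) if random.random() < epsilon
                 else max((-1, 1), key=lambda a: Q[(s, a)]))
            s2, _ = extended_action(s, a)
            r = 1.0 if s2 == GOAL else -0.1
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)])
                                  - Q[(s, a)])
            s = s2

    print({s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in LANDMARKS})

In SODA itself the distinctive states are not given; they are discovered from high-dimensional sensor input, and the extended actions are learned controllers, but the learning problem at the top level has the same small-diameter structure as this toy.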