Learning to Follow Directions in Street View

Authors

  • Karl Moritz Hermann, DeepMind
  • Mateusz Malinowski, DeepMind
  • Piotr Mirowski, DeepMind
  • Andras Banki-Horvath, DeepMind
  • Keith Anderson, DeepMind
  • Raia Hadsell, DeepMind

DOI:

https://doi.org/10.1609/aaai.v34i07.6849

Abstract

Navigating and understanding the real world remains a key challenge in machine learning and inspires a great variety of research in areas such as language grounding, planning, navigation and computer vision. We propose an instruction-following task that requires all of the above, and which combines the practicality of simulated environments with the challenges of ambiguous, noisy real-world data. StreetNav is built on top of Google Street View and provides visually accurate environments representing real places. Agents are given driving instructions which they must learn to interpret in order to successfully navigate in this environment. Since humans equipped with driving instructions can readily navigate in previously unseen cities, we set a high bar and test our trained agents for similar cognitive capabilities. Although deep reinforcement learning (RL) methods are frequently evaluated only on data that closely follow the training distribution, our dataset extends to multiple cities and has a clean train/test separation. This allows for thorough testing of generalisation ability. This paper presents the StreetNav environment and tasks, models that establish strong baselines, and extensive analysis of the task and the trained agents.
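To make the task setup concrete, the sketch below shows what an instruction-conditioned navigation episode of the kind described in the abstract could look like. It is a minimal, illustrative assumption: the class name ToyStreetNavEnv, the observation fields, the action set, and the sparse goal reward are all placeholders and not the paper's actual interface or data.

```python
# Hypothetical sketch of an instruction-conditioned navigation loop in the
# spirit of the StreetNav task described above. ToyStreetNavEnv, its
# observation fields, and the action set are illustrative assumptions,
# not the paper's actual interface or data.
import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Observation:
    panorama: bytes                                          # stand-in for street-level imagery
    instructions: List[str] = field(default_factory=list)    # driving directions for the route


ACTIONS = ("forward", "turn_left", "turn_right")


class ToyStreetNavEnv:
    """Minimal stand-in for an instruction-following navigation environment."""

    def __init__(self, route_length: int = 5, max_steps: int = 100):
        self.route_length = route_length
        self.max_steps = max_steps
        self.position = 0
        self.steps = 0

    def reset(self) -> Observation:
        self.position = 0
        self.steps = 0
        # Driving directions for the route (assumed here to be given up front).
        directions = [f"instruction {i}: continue to the next intersection"
                      for i in range(self.route_length)]
        return Observation(panorama=b"", instructions=directions)

    def step(self, action: str) -> Tuple[Observation, float, bool]:
        self.steps += 1
        if action == "forward":
            self.position += 1
        reached_goal = self.position >= self.route_length
        done = reached_goal or self.steps >= self.max_steps
        reward = 1.0 if reached_goal else 0.0    # sparse goal reward (assumed)
        return Observation(panorama=b""), reward, done


# Random-policy rollout, only to show the interaction loop.
env = ToyStreetNavEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step(random.choice(ACTIONS))
```

In the actual task, an agent would condition its policy on both the panorama pixels and the instruction text, and, as the abstract notes, evaluation in held-out cities is what tests generalisation beyond the training distribution.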

Published

2020-04-03

How to Cite

Hermann, K. M., Malinowski, M., Mirowski, P., Banki-Horvath, A., Anderson, K., & Hadsell, R. (2020). Learning to Follow Directions in Street View. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11773-11781. https://doi.org/10.1609/aaai.v34i07.6849

Section

AAAI Technical Track: Vision