Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset

Authors

  • Ruohan Zhang, University of Texas at Austin
  • Calen Walshe, University of Texas at Austin
  • Zhuode Liu, University of Texas at Austin
  • Lin Guan, University of Texas at Austin
  • Karl Muller, University of Texas at Austin
  • Jake Whritner, University of Texas at Austin
  • Luxin Zhang, Carnegie Mellon University
  • Mary Hayhoe, University of Texas at Austin
  • Dana Ballard, University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v34i04.6161

Abstract

Large-scale public datasets have been shown to benefit research in multiple areas of modern artificial intelligence. For decision-making research that requires human data, high-quality datasets serve as important benchmarks to facilitate the development of new methods by providing a common reproducible standard. Many human decision-making tasks require visual attention to obtain high levels of performance. Therefore, measuring eye movements can provide a rich source of information about the strategies that humans use to solve decision-making tasks. Here, we provide a large-scale, high-quality dataset of human actions with simultaneously recorded eye movements while humans play Atari video games. The dataset consists of 117 hours of gameplay data from a diverse set of 20 games, with 8 million action demonstrations and 328 million gaze samples. We introduce a novel form of gameplay, in which the human plays in a semi-frame-by-frame manner. This leads to near-optimal game decisions and game scores that are comparable to or better than known human records. We demonstrate the usefulness of the dataset through two simple applications: predicting human gaze and imitating human demonstrated actions. The quality of the data leads to promising results in both tasks. Moreover, using a learned human gaze model to inform imitation learning leads to a 115% increase in game performance. We interpret these results as highlighting the importance of incorporating human visual attention in models of decision making and demonstrating the value of the current dataset to the research community. We hope that the scale and quality of this dataset can provide more opportunities to researchers in the areas of visual attention, imitation learning, and reinforcement learning.
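To make the gaze-informed imitation learning idea concrete, the sketch below illustrates one common way a learned gaze model can be combined with behavioral cloning: a predicted gaze heatmap reweights the input frame element-wise before the policy network encodes it. This is a minimal illustration, not the authors' implementation; the module names, layer shapes, the 84x84 resolution, the modulation scheme, and the use of PyTorch are all assumptions made for the example.

import torch
import torch.nn as nn

class GazeModulatedPolicy(nn.Module):
    """Behavioral cloning policy whose input frame is reweighted by a
    predicted gaze heatmap, so attended regions dominate the features.
    (Illustrative sketch; not the Atari-HEAD paper's exact architecture.)"""
    def __init__(self, n_actions: int):
        super().__init__()
        # Standard Atari-style convolutional encoder for 84x84 grayscale frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frame: torch.Tensor, gaze_map: torch.Tensor):
        # frame, gaze_map: (batch, 1, 84, 84); gaze_map normalized per image.
        attended = frame * gaze_map  # suppress pixels the human did not attend to
        return self.head(self.encoder(attended))

# One supervised training step: cross-entropy against human-demonstrated actions.
policy = GazeModulatedPolicy(n_actions=18)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

frames = torch.rand(32, 1, 84, 84)    # stand-in for preprocessed game frames
gaze = torch.softmax(torch.rand(32, 84 * 84), dim=1).view(32, 1, 84, 84)
actions = torch.randint(0, 18, (32,))  # stand-in for human action labels

logits = policy(frames, gaze)
loss = loss_fn(logits, actions)
opt.zero_grad()
loss.backward()
opt.step()

Element-wise modulation is only one design option; related work also concatenates the heatmap as an extra input channel. The 115% gain reported in the abstract refers to the paper's own gaze-integration method, not to this sketch.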

Published

2020-04-03

How to Cite

Zhang, R., Walshe, C., Liu, Z., Guan, L., Muller, K., Whritner, J., Zhang, L., Hayhoe, M., & Ballard, D. (2020). Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6811-6820. https://doi.org/10.1609/aaai.v34i04.6161

Section

AAAI Technical Track: Machine Learning