Cubic LSTMs for Video Prediction

Authors

  • Hehe Fan (University of Technology, Sydney)
  • Linchao Zhu (University of Technology, Sydney)
  • Yi Yang (University of Technology, Sydney)

DOI:

https://doi.org/10.1609/aaai.v33i01.33018263

Abstract

Predicting future frames in videos has become a promising research direction for both the computer vision and robot learning communities. The core of this problem involves capturing moving objects and predicting their future motion: object capture specifies which objects in a video are moving, while motion prediction describes their future dynamics. Motivated by this analysis, we propose a Cubic Long Short-Term Memory (CubicLSTM) unit for video prediction. CubicLSTM consists of three branches: a spatial branch for capturing moving objects, a temporal branch for processing motion, and an output branch that combines the first two to generate predicted frames. Stacking multiple CubicLSTM units along the spatial and output branches, and then evolving along the temporal branch, forms a cubic recurrent neural network (CubicRNN). Experiments show that CubicRNN produces more accurate video predictions than prior methods on both synthetic and real-world datasets.

Published

2019-07-17

How to Cite

Fan, H., Zhu, L., & Yang, Y. (2019). Cubic LSTMs for Video Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8263-8270. https://doi.org/10.1609/aaai.v33i01.33018263

Section

AAAI Technical Track: Vision