Inception LSTM for Next-frame Video Prediction (Student Abstract)

Authors

  • Matin Hosseini University of Louisiana at Lafayette
  • Anthony S. Maida University of Louisiana at Lafayette
  • Majid Hosseini University of Louisiana at Lafayette
  • Gottumukkala Raju University of Louisiana at Lafayette

DOI:

https://doi.org/10.1609/aaai.v34i10.7176

Abstract

In this paper, we propose a novel deep-learning method called Inception LSTM for video frame prediction. A standard convolutional LSTM uses a single kernel size for each of its gates. Using multiple kernel sizes within a single gate yields richer features than a single kernel can provide. Our key idea is to introduce inception-like kernels within the LSTM gates to capture features from a larger area of the image while retaining the fine resolution of small details. We implement the proposed Inception LSTM on the PredNet network with both Inception version 1 and Inception version 2 modules. The proposed idea was evaluated on both the KITTI and KTH datasets. Our results show that the Inception LSTM has better predictive performance than the convolutional LSTM. We also observe that the LSTM with Inception version 1 has better predictive performance than Inception version 2, while Inception version 2 has a lower computational cost.
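The core idea above — replacing the single convolution in a ConvLSTM gate with several parallel convolutions of different kernel sizes whose responses are combined before the gate nonlinearity — can be sketched as follows. This is a minimal single-channel NumPy illustration, not the authors' implementation: the kernel sizes (1×1, 3×3, 5×5) and the choice to sum the branch outputs are assumptions for demonstration, and a real Inception LSTM would operate on multi-channel feature maps over the concatenated input and hidden state.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D convolution (single channel), for illustration only."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inception_gate(x, kernels):
    """Inception-style gate: parallel convolutions at several kernel sizes
    (multi-scale receptive fields), summed, then squashed by the gate
    nonlinearity -- versus a standard ConvLSTM gate's single kernel."""
    return sigmoid(sum(conv2d_same(x, k) for k in kernels))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))               # stand-in for one gate input map
kernels = [rng.standard_normal((k, k)) * 0.1  # assumed sizes: 1x1, 3x3, 5x5
           for k in (1, 3, 5)]
g = inception_gate(x, kernels)
print(g.shape)  # (8, 8): same spatial size, values in (0, 1)
```

The small kernels preserve fine local detail while the larger ones see a bigger neighborhood of the frame, which is the multi-scale benefit the abstract attributes to inception-like gates.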

Published

2020-04-03

How to Cite

Hosseini, M., Maida, A. S., Hosseini, M., & Raju, G. (2020). Inception LSTM for Next-frame Video Prediction (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13809-13810. https://doi.org/10.1609/aaai.v34i10.7176

Section

Student Abstract Track