Video Frame Interpolation via Deformable Separable Convolution

Authors

  • Xianhang Cheng, Wuhan University
  • Zhenzhong Chen, Wuhan University

DOI:

https://doi.org/10.1609/aaai.v34i07.6634

Abstract

Learning to synthesize non-existing frames from the original consecutive video frames is a challenging task. Recent kernel-based interpolation methods predict each pixel with a single convolution step, removing the dependency on optical flow. However, when the scene motion is larger than the pre-defined kernel size, these methods yield poor results even though they take thousands of neighboring pixels into account. To solve this problem, in this paper we propose deformable separable convolution (DSepConv), which adaptively estimates kernels, offsets and masks so that the network can gather information from far fewer but more relevant pixels. In addition, we show that kernel-based methods and conventional flow-based methods are special cases of the proposed DSepConv. Experimental results demonstrate that our method significantly outperforms other kernel-based interpolation methods and performs on par with, or even better than, state-of-the-art algorithms both qualitatively and quantitatively.
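To make the sampling idea in the abstract concrete, the following is a minimal, illustrative NumPy sketch of deformable separable sampling for a single-channel frame. It is not the paper's implementation (which uses learned network outputs and an efficient GPU operator); the function names, tensor layouts, and the n×n kernel-grid convention here are assumptions for illustration. Each output pixel sums n×n bilinearly sampled points, displaced by learned offsets and weighted by the product of separable vertical/horizontal kernels and a modulation mask.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation at a fractional location, zero outside the image."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                # Weight each corner by its proximity to (y, x).
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def dsepconv_sample(frame, kv, kh, offsets, mask, n=3):
    """Toy deformable separable convolution for one source frame (illustrative).

    frame:   (H, W) input image
    kv, kh:  (H, W, n) per-pixel vertical/horizontal separable kernel weights
    offsets: (H, W, n, n, 2) learned (dy, dx) displacement per sampling point
    mask:    (H, W, n, n) per-point modulation weights
    """
    H, W = frame.shape
    out = np.zeros((H, W), dtype=np.float64)
    r = n // 2
    for y in range(H):
        for x in range(W):
            acc = 0.0
            for i in range(n):
                for j in range(n):
                    # Sampling position = regular kernel-grid point + learned offset,
                    # so a small n*n kernel can reach pixels far outside its window.
                    sy = y + (i - r) + offsets[y, x, i, j, 0]
                    sx = x + (j - r) + offsets[y, x, i, j, 1]
                    acc += (kv[y, x, i] * kh[y, x, j] * mask[y, x, i, j]
                            * bilinear(frame, sy, sx))
            out[y, x] = acc
    return out
```

With all offsets zero, masks set to one, and kv/kh set to a delta at the kernel center, the operator reduces to the identity, which mirrors the abstract's claim that plain kernel-based interpolation is a special case of DSepConv (and pure per-pixel displacement, i.e. flow-based warping, is the opposite special case).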

Published

2020-04-03

How to Cite

Cheng, X., & Chen, Z. (2020). Video Frame Interpolation via Deformable Separable Convolution. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10607-10614. https://doi.org/10.1609/aaai.v34i07.6634

Section

AAAI Technical Track: Vision