Towards Better Forecasting by Fusing Near and Distant Future Visions

Authors

  • Jiezhu Cheng, Sun Yat-sen University
  • Kaizhu Huang, Xi'an Jiaotong-Liverpool University
  • Zibin Zheng, Sun Yat-sen University

DOI:

https://doi.org/10.1609/aaai.v34i04.5766

Abstract

Multivariate time series forecasting is an important yet challenging problem in machine learning. Most existing approaches forecast the series values at only a single future moment, ignoring the interactions between predictions of future moments at different temporal distances. Such a deficiency may prevent the model from obtaining enough information about the future, thus limiting forecasting accuracy. To address this problem, we propose the Multi-Level Construal Neural Network (MLCNN), a novel multi-task deep learning framework. Inspired by the Construal Level Theory of psychology, this model aims to improve predictive performance by fusing forecasting information (i.e., future visions) of different future times. We first use a Convolutional Neural Network to extract multi-level abstract representations of the raw data for near and distant future predictions. We then model the interplay between multiple predictive tasks and fuse their future visions through a modified Encoder-Decoder architecture. Finally, we combine a traditional autoregression model with the neural network to address the scale-insensitivity problem. Experiments on three real-world datasets show that our method achieves statistically significant improvements over state-of-the-art baseline methods, with an average reduction of 4.59% in RMSE and 6.87% in MAE.
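
The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of the three ingredients named there (stacked convolutions for multi-level representations, an encoder-decoder that fuses them, and a linear autoregressive component); the layer choices, GRU cells, dimensions, and the class name MLCNNSketch are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of an MLCNN-style forecaster (not the official code):
# a 1-D CNN stack yields increasingly abstract representations, a GRU
# encoder-decoder fuses near and distant future visions, and a linear
# autoregressive head counteracts scale insensitivity.
import torch
import torch.nn as nn

class MLCNNSketch(nn.Module):
    def __init__(self, n_series, window, hidden=64, n_levels=3, kernel=3):
        super().__init__()
        # Stacked 1-D convolutions: deeper layers give higher-construal
        # (more abstract) features, assumed here to serve distant horizons.
        self.convs = nn.ModuleList()
        in_ch = n_series
        for _ in range(n_levels):
            self.convs.append(nn.Conv1d(in_ch, hidden, kernel, padding=kernel // 2))
            in_ch = hidden
        # Shared encoder and a decoder that fuses levels via the encoder state.
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_series)
        # Linear autoregressive component over the raw input window.
        self.ar = nn.Linear(window, 1)

    def forward(self, x):
        # x: (batch, window, n_series)
        h = x.transpose(1, 2)                     # (batch, n_series, window)
        levels = []
        for conv in self.convs:
            h = torch.relu(conv(h))
            levels.append(h.transpose(1, 2))      # (batch, window, hidden)
        # Encode the most abstract level, decode the least abstract one
        # conditioned on it, fusing distant- and near-future information.
        _, state = self.encoder(levels[-1])
        dec_out, _ = self.decoder(levels[0], state)
        nonlinear = self.proj(dec_out[:, -1, :])          # (batch, n_series)
        linear = self.ar(x.transpose(1, 2)).squeeze(-1)   # (batch, n_series)
        return nonlinear + linear
```

For example, `MLCNNSketch(n_series=8, window=24)(torch.randn(32, 24, 8))` returns a `(32, 8)` one-step forecast; the additive linear term mirrors the paper's stated idea of combining autoregression with the neural network, though the exact fusion scheme here is a simplification.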


Published

2020-04-03

How to Cite

Cheng, J., Huang, K., & Zheng, Z. (2020). Towards Better Forecasting by Fusing Near and Distant Future Visions. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3593-3600. https://doi.org/10.1609/aaai.v34i04.5766

Section

AAAI Technical Track: Machine Learning