Trimodal Attention Module for Multimodal Sentiment Analysis (Student Abstract)

Authors

  • Anirudh Bindiganavale Harish, National Institute of Technology Karnataka
  • Fatiha Sadat, Université du Québec à Montréal

DOI:

https://doi.org/10.1609/aaai.v34i10.7173

Abstract

In our research, we propose a new multimodal fusion architecture for the task of sentiment analysis. The three modalities used in this paper are text, audio, and video. Most current methods perform fusion at either the feature level or the decision level. In contrast, we propose an attention-based deep neural network and a training approach that facilitate fusion at both levels. Our network effectively leverages information across all three modalities using a two-stage fusion process. We evaluate our network on utterance-level contextual information extracted from the CMU-MOSI dataset and draw a comparison against the state of the art.
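
To make the two-stage idea concrete, the sketch below is a minimal PyTorch illustration, not the paper's actual architecture: an attention layer weights the three modality embeddings (feature-level fusion), and per-modality predictions are then combined with the fused prediction (decision-level fusion). All module names, feature dimensions, and the score-averaging rule are assumptions made for illustration.

```python
# Illustrative sketch of trimodal attention fusion (hypothetical design,
# not the authors' exact Trimodal Attention Module).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrimodalAttentionFusion(nn.Module):
    def __init__(self, text_dim, audio_dim, video_dim, hidden_dim=128):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.proj = nn.ModuleDict({
            "text": nn.Linear(text_dim, hidden_dim),
            "audio": nn.Linear(audio_dim, hidden_dim),
            "video": nn.Linear(video_dim, hidden_dim),
        })
        # Scores one attention weight per modality (stage 1: feature-level fusion).
        self.attn = nn.Linear(hidden_dim, 1)
        # Per-modality heads plus a fused head (stage 2: decision-level fusion).
        self.heads = nn.ModuleDict({
            m: nn.Linear(hidden_dim, 1) for m in ("text", "audio", "video")
        })
        self.fused_head = nn.Linear(hidden_dim, 1)

    def forward(self, text, audio, video):
        feats = {"text": text, "audio": audio, "video": video}
        # Stage 1: attention over modality embeddings -> fused feature vector.
        h = torch.stack(
            [torch.tanh(self.proj[m](feats[m])) for m in feats], dim=1
        )  # (batch, 3, hidden_dim)
        weights = F.softmax(self.attn(h), dim=1)  # (batch, 3, 1)
        fused = (weights * h).sum(dim=1)          # (batch, hidden_dim)
        # Stage 2: combine unimodal and fused predictions (here: a simple mean).
        scores = [self.heads[m](h[:, i]) for i, m in enumerate(feats)]
        scores.append(self.fused_head(fused))
        return torch.stack(scores, dim=0).mean(dim=0)  # (batch, 1) sentiment score


if __name__ == "__main__":
    # Feature sizes are placeholders in the spirit of CMU-MOSI-style features.
    model = TrimodalAttentionFusion(text_dim=300, audio_dim=74, video_dim=35)
    t, a, v = torch.randn(8, 300), torch.randn(8, 74), torch.randn(8, 35)
    print(model(t, a, v).shape)  # torch.Size([8, 1])
```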

Published

2020-04-03

How to Cite

Harish, A. B., & Sadat, F. (2020). Trimodal Attention Module for Multimodal Sentiment Analysis (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13803-13804. https://doi.org/10.1609/aaai.v34i10.7173

Issue

Vol. 34 No. 10 (2020)

Section

Student Abstract Track