Multimodal fusion with deep neural networks for audio-video emotion recognition

Juan D.S. Ortega, Mohammed Senoussaoui, Eric Granger, Marco Pedersoli, Patrick Cardinal, Alessandro L. Koerich

arXiv 2019

Abstract: This paper presents a novel deep neural network (DNN) for multimodal fusion of audio, video, and text modalities for emotion recognition. The proposed DNN architecture has independent and shared layers, which aim to learn the representation for each modality as well as the best combined representation for prediction. Experimental results on the AVEC Sentiment Analysis in the Wild dataset indicate that the proposed DNN achieves a higher Concordance Correlation Coefficient (CCC) than other state-of-the-art systems that perform early fusion at the feature level (i.e., concatenation) or late fusion at the score level (i.e., weighted average). The proposed DNN achieved CCCs of 0.606, 0.534, and 0.170 on the development partition of the dataset for predicting arousal, valence, and liking, respectively.
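To make the architecture concrete, below is a minimal PyTorch sketch of the fusion idea the abstract describes: independent layers per modality, a shared trunk over the concatenated embeddings, and training against 1 − CCC so the network directly optimizes the evaluation metric. This is not the authors' code; all layer sizes, feature dimensions, and names (`FusionDNN`, `hidden`, the 88/512/300 input dims) are illustrative assumptions.

```python
import torch
import torch.nn as nn


def ccc(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Concordance Correlation Coefficient between two 1-D tensors."""
    pred_mean, gold_mean = pred.mean(), gold.mean()
    pred_var, gold_var = pred.var(unbiased=False), gold.var(unbiased=False)
    cov = ((pred - pred_mean) * (gold - gold_mean)).mean()
    return 2 * cov / (pred_var + gold_var + (pred_mean - gold_mean) ** 2)


class FusionDNN(nn.Module):
    """Hypothetical fusion net: per-modality branches feed shared layers."""

    def __init__(self, audio_dim: int, video_dim: int, text_dim: int, hidden: int = 64):
        super().__init__()
        # Independent layers: one branch learns a representation per modality.
        self.audio = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.video = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.text = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Shared layers: learn the combined representation and regress one
        # affective dimension (e.g., arousal, valence, or liking).
        self.shared = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, audio, video, text):
        fused = torch.cat(
            [self.audio(audio), self.video(video), self.text(text)], dim=-1
        )
        return self.shared(fused).squeeze(-1)


# Usage sketch: minimize 1 - CCC on a batch of (assumed) modality features.
model = FusionDNN(audio_dim=88, video_dim=512, text_dim=300)
a, v, t = torch.randn(32, 88), torch.randn(32, 512), torch.randn(32, 300)
target = torch.rand(32)
loss = 1 - ccc(model(a, v, t), target)
loss.backward()
```

Learning the fusion jointly in shared layers is what distinguishes this from the two baselines the abstract mentions: early fusion would concatenate raw features before any modality-specific layers, while late fusion would train three separate predictors and average their scores.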