Unified multi-stage fusion network for affective video content analysis
  • Yun Yi, Gannan Normal University (Corresponding Author: [email protected])
  • Hanli Wang, Tongji University
  • Pengjie Tang, Tongji University

Abstract

Affective video content analysis is an active topic in the field of affective computing. In general, affective video content can be depicted by feature vectors of multiple modalities, so it is important to fuse this information effectively. In this work, a novel framework, termed the unified multi-stage fusion network (UMFN), is designed to fuse information from multiple stages in a unified manner. In particular, a unified fusion layer is devised to combine output tensors from multiple stages of the proposed neural network. Building on this layer, a bidirectional residual recurrent fusion block is devised to model the information of each modality. The proposed method achieves state-of-the-art performance on two challenging datasets: the accuracy on the VideoEmotion dataset is 55.8%, and the MSE values on the two domains of EIMT16 are 0.464 and 0.176, respectively. The code of UMFN is available at: https://github.com/yunyi9/UMFN.
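To make these two components concrete, the following is a minimal PyTorch sketch of a unified fusion layer and a bidirectional residual recurrent fusion block. The weighted-sum fusion, GRU recurrence, mean pooling, and all module names and dimensions here are illustrative assumptions rather than the paper's exact design; the authors' implementation is available in the UMFN repository linked above.

import torch
import torch.nn as nn

class UnifiedFusionLayer(nn.Module):
    # Combines output tensors from multiple stages via learned softmax
    # weights followed by a linear projection (an assumed, simple
    # formulation of the paper's unified fusion layer).
    def __init__(self, num_stages, dim):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_stages))
        self.proj = nn.Linear(dim, dim)

    def forward(self, stage_outputs):
        # stage_outputs: list of (batch, dim) tensors, one per stage
        w = torch.softmax(self.weights, dim=0)
        fused = sum(wi * x for wi, x in zip(w, stage_outputs))
        return self.proj(fused)

class BiResidualRecurrentFusionBlock(nn.Module):
    # A bidirectional GRU models one modality over time; the unified
    # fusion layer then merges the block input (residual path) with
    # the recurrent output (again an assumption, not the paper's spec).
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.fuse = UnifiedFusionLayer(num_stages=2, dim=dim)

    def forward(self, x):
        # x: (batch, time, dim) features of one modality
        h, _ = self.rnn(x)         # (batch, time, dim) bidirectional states
        pooled_in = x.mean(dim=1)  # temporal average of the input
        pooled_h = h.mean(dim=1)   # temporal average of the recurrent output
        return self.fuse([pooled_in, pooled_h])

if __name__ == "__main__":
    feats = torch.randn(4, 16, 128)  # 4 clips, 16 time steps, 128-d features
    block = BiResidualRecurrentFusionBlock(dim=128)
    print(block(feats).shape)        # torch.Size([4, 128])

In this sketch, the per-modality block outputs would themselves be passed through another UnifiedFusionLayer to combine modalities, mirroring the unified manner in which the same fusion operation is reused across stages.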
16 Jun 2022  Submitted to Electronics Letters
16 Jun 2022  Submission Checks Completed
16 Jun 2022  Assigned to Editor
21 Jun 2022  Reviewer(s) Assigned
10 Aug 2022  Review(s) Completed, Editorial Evaluation Pending
11 Aug 2022  Editorial Decision: Accept
Oct 2022     Published in Electronics Letters, volume 58, issue 21, pages 795-797. DOI: 10.1049/ell2.12605