Article

MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities

1 School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau 999078, China
2 College of Artificial Intelligence, Zhongkai University of Agriculture and Engineering, Zhongkai Road 501, Guangzhou 510225, China
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(15), 3088; https://doi.org/10.3390/electronics14153088
Submission received: 3 July 2025 / Revised: 21 July 2025 / Accepted: 31 July 2025 / Published: 1 August 2025
(This article belongs to the Special Issue Application of Data Mining in Decision Support Systems (DSSs))

Abstract

Multimodal sentiment analysis (MSA) faces key challenges such as incomplete modality inputs, long-range temporal dependencies, and suboptimal fusion strategies. To address these challenges, we propose MGMR-Net, a Mamba-guided multimodal reconstruction and fusion network that integrates modality-aware reconstruction with text-centric fusion within an efficient state-space modeling framework. MGMR-Net consists of two core components: the Mamba-collaborative fusion module, which utilizes a two-stage selective state-space mechanism for fine-grained cross-modal alignment and hierarchical temporal integration, and the Mamba-enhanced reconstruction module, which employs continuous-time recurrence and dynamic gating to accurately recover corrupted or missing modality features. The entire network is jointly optimized via a unified multi-task loss, enabling simultaneous learning of discriminative features for sentiment prediction and reconstructive features for modality recovery. Extensive experiments on the CMU-MOSI, CMU-MOSEI, and CH-SIMS datasets demonstrate that MGMR-Net consistently outperforms several baseline methods under both complete and missing-modality settings, achieving superior accuracy, robustness, and generalization.
Keywords: multimodal sentiment analysis; Mamba-collaborative fusion module; state-space mechanism; Mamba-enhanced reconstruction module
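
The paper's implementation is not reproduced on this page, but the two mechanisms the abstract names can be illustrated in outline. Below is a minimal PyTorch sketch of (a) a Mamba-style selective state-space recurrence, in which the step size delta and the projections B and C are computed from the input, and (b) a gated reconstruction head trained jointly with the sentiment objective. All class names, shapes, and hyperparameters here (SelectiveSSM, MGMRSketch, d_state = 16, the 0.1 loss weight) are illustrative assumptions, not the authors' code.

# Minimal, self-contained sketch (PyTorch); illustrative only, not the
# authors' released implementation.
import torch
import torch.nn as nn


class SelectiveSSM(nn.Module):
    """Simplified Mamba-style selective state-space layer: delta, B, and C
    are input-dependent, so the recurrence adapts per time step."""
    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.A_log = nn.Parameter(torch.randn(d_model, d_state))  # A = -exp(A_log) keeps the recurrence stable
        self.proj_delta = nn.Linear(d_model, d_model)
        self.proj_B = nn.Linear(d_model, d_state)
        self.proj_C = nn.Linear(d_model, d_state)

    def forward(self, x):                                         # x: (batch, seq, d_model)
        B_, L, D = x.shape
        A = -torch.exp(self.A_log)                                # (D, N), negative real
        delta = nn.functional.softplus(self.proj_delta(x))        # input-dependent step size
        Bm, Cm = self.proj_B(x), self.proj_C(x)                   # (B, L, N) each
        h = x.new_zeros(B_, D, A.shape[1])                        # hidden state
        ys = []
        for t in range(L):                                        # discretized continuous-time recurrence
            dA = torch.exp(delta[:, t].unsqueeze(-1) * A)         # exp(delta * A)
            dB = delta[:, t].unsqueeze(-1) * Bm[:, t].unsqueeze(1)
            h = dA * h + dB * x[:, t].unsqueeze(-1)               # h_t = dA * h_{t-1} + dB * x_t
            ys.append((h * Cm[:, t].unsqueeze(1)).sum(-1))        # y_t = C h_t
        return torch.stack(ys, dim=1)                             # (B, L, D)


class MGMRSketch(nn.Module):
    """Text-centric fusion plus a gated reconstruction head, a high-level
    analogue of the two modules described in the abstract."""
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.fuse = SelectiveSSM(d_model)                         # cross-modal temporal integration
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.recon = nn.Linear(d_model, d_model)                  # modality-recovery head
        self.head = nn.Linear(d_model, 1)                         # sentiment regression

    def forward(self, text, other):                               # (B, L, D) each
        fused = self.fuse(text + other)
        g = self.gate(torch.cat([fused, other], dim=-1))          # dynamic gating
        recon = self.recon(g * fused + (1 - g) * other)           # recover corrupted features
        return self.head(fused.mean(dim=1)), recon


# Joint multi-task objective (the 0.1 weight is an arbitrary placeholder).
model = MGMRSketch()
text, other = torch.randn(2, 20, 128), torch.randn(2, 20, 128)
target, clean = torch.randn(2, 1), torch.randn(2, 20, 128)
pred, recon = model(text, other)
loss = nn.functional.l1_loss(pred, target) + 0.1 * nn.functional.mse_loss(recon, clean)
loss.backward()

The selectivity (delta, B, and C derived from the input) is what lets a state-space layer gate information flow per time step, unlike a fixed linear time-invariant SSM; the reconstruction gate lets the model lean on whichever stream is less corrupted when a modality is missing.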

Share and Cite

MDPI and ACS Style

Yang, C.; Liang, Z.; Liu, T.; Hu, Z.; Yan, D. MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities. Electronics 2025, 14, 3088. https://doi.org/10.3390/electronics14153088

AMA Style

Yang C, Liang Z, Liu T, Hu Z, Yan D. MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities. Electronics. 2025; 14(15):3088. https://doi.org/10.3390/electronics14153088

Chicago/Turabian Style

Yang, Chengcheng, Zhiyao Liang, Tonglai Liu, Zeng Hu, and Dashun Yan. 2025. "MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities" Electronics 14, no. 15: 3088. https://doi.org/10.3390/electronics14153088

APA Style

Yang, C., Liang, Z., Liu, T., Hu, Z., & Yan, D. (2025). MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities. Electronics, 14(15), 3088. https://doi.org/10.3390/electronics14153088

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
