Document Type

Article

Publication Date

2025

Journal / Book Title

International Journal of Information Systems in the Service Sector

Abstract

Multimodal sentiment analysis aims to achieve a precise understanding of emotion by integrating complementary textual, visual, and audio information. However, sentiment discrepancies between modalities, ineffective integration of multimodal information, and the complexity of order dependencies significantly constrain model performance. The authors propose an LLM-guided Hierarchical Spatio-Temporal Graph Network (L-HSTGN) that addresses these problems through multimodal large-model feature enhancement, bidirectional spatio-temporal joint modeling, and a dynamic gated fusion mechanism. First, they generate cross-modal sentiment pseudo-labels with a multimodal large model and optimize the single-modal representations using adversarial regularization. Second, they develop a bidirectional spatio-temporal convolution module to concurrently extract local-global temporal features and dynamic spatial correlations.
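The abstract mentions a dynamic gated fusion mechanism for combining modality features. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of a common gated-fusion pattern (a sigmoid gate computed from the concatenated features weights each modality's contribution); the function name and dimensions are hypothetical:

```python
import numpy as np

def gated_fusion(text_feat, audio_feat, W_g, b_g):
    """Sketch of dynamic gated fusion: a per-dimension sigmoid gate,
    computed from the concatenated modality features, mixes the two
    modality vectors as a convex combination."""
    z = np.concatenate([text_feat, audio_feat])
    g = 1.0 / (1.0 + np.exp(-(W_g @ z + b_g)))  # gate values in (0, 1)
    return g * text_feat + (1.0 - g) * audio_feat

# Toy usage with random features (dimensions chosen for illustration)
rng = np.random.default_rng(0)
d = 4
t = rng.standard_normal(d)
a = rng.standard_normal(d)
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(t, a, W, b)
print(fused.shape)
```

Because the gate lies in (0, 1), each fused component stays between the corresponding text and audio components, so neither modality is discarded outright.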

DOI

10.4018/IJISSS.388002

Rights

This article is published as an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/).

Published Citation

Jin, Yujie, et al. "LLM-Guided Multimodal Information Fusion With Hierarchical Spatio-Temporal Graph Network for Sentiment Analysis." IJISSS, vol. 16, no. 1, 2025, pp. 1-15. https://doi.org/10.4018/IJISSS.388002
