Enhancing Multimodal Tourism Review Sentiment Analysis Through Advanced Feature Association Techniques

Peng Chen, Lingmei Fu
Copyright © 2024 | Pages: 21
DOI: 10.4018/IJISSS.349564

Abstract

The development of tourism services presents significant opportunities for extracting and analyzing customer sentiment. However, the increasingly multimodal nature of travel reviews introduces new challenges. Early methods simply concatenated text and image features, yielding weak cross-modal correlation. To address this issue, our study proposes a novel multimodal tourism review sentiment analysis method enhanced by relevant features. First, we employ a fusion model that combines BERT and Text-CNN for text feature extraction, strengthening semantic relationships and filtering noise effectively. We then use ResNet-51 for image feature extraction, leveraging its ability to learn complex visual representations. Finally, an attention mechanism strengthens the correlation between modalities, improving fusion effectiveness. On the Multi-ZOL dataset, our method achieves an accuracy of 90.7% and an F1 score of 90.8%; on the Ctrip dataset, it attains an accuracy of 83.6% and an F1 score of 84.1%.
Article Preview

Sentiment classification is crucial for understanding tourism reviews because it targets the subjective emotions they express (Chen et al., 2020; Krishnan et al., 2022; Momani et al., 2022). Traditional single-modal analysis, which examines only text or only images, misses important emotional cues (Ye et al., 2022). This study therefore introduces an algorithm that combines BERT and Text-CNN for text feature extraction, ResNet-51 for image feature extraction, and an attention mechanism that integrates the two modalities, improving sentiment prediction accuracy; a sketch of such a pipeline follows.
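The preview gives no implementation details, so the following is only a minimal sketch of this kind of architecture. Everything concrete here is an assumption rather than the paper's configuration: the bert-base-uncased checkpoint, a torchvision ResNet-50 backbone standing in for the paper's ResNet-51, the hidden sizes, and the three-class head.

```python
# Minimal sketch of a BERT + Text-CNN / ResNet / attention-fusion classifier.
# All hyperparameters and the ResNet-50 stand-in are assumptions, not the
# paper's actual configuration.
import torch
import torch.nn as nn
from transformers import BertModel
from torchvision.models import resnet50

class MultimodalSentiment(nn.Module):
    def __init__(self, num_classes=3, dim=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Text-CNN over BERT token embeddings: parallel convolutions with
        # different kernel widths capture local n-gram semantics.
        self.convs = nn.ModuleList(
            nn.Conv1d(768, dim, kernel_size=k) for k in (2, 3, 4)
        )
        backbone = resnet50(weights="IMAGENET1K_V2")
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc head
        self.img_proj = nn.Linear(2048, 3 * dim)
        # Cross-modal attention: the text representation queries the image one,
        # tying the two modalities together before classification.
        self.attn = nn.MultiheadAttention(3 * dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, input_ids, attention_mask, image):
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = tokens.transpose(1, 2)                                 # (B, 768, L)
        text = torch.cat([c(x).max(dim=2).values for c in self.convs], dim=1)
        img = self.img_proj(self.cnn(image).flatten(1))            # (B, 3*dim)
        fused, _ = self.attn(text.unsqueeze(1), img.unsqueeze(1), img.unsqueeze(1))
        return self.classifier(fused.squeeze(1))                   # (B, num_classes)
```

Max-pooling each convolution keeps the strongest n-gram response per filter, which is the usual Text-CNN way of filtering noisy tokens before fusion.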

Single-Modality Sentiment Classification Methods


Single-modality sentiment analysis has long been applied to text (Wang & Shin, 2019) and images (Rao et al., 2020), initially using statistical representations such as term frequency-inverse document frequency (TF-IDF) (Puh & Bagić, 2023). Pretrained models such as BERT (Devlin et al., 2018), GPT-2 (Veyseh et al., 2021), and RoBERTa have since improved text sentiment analysis by capturing complex language structures, and BERT-CNN integrations (Abas et al., 2022) further sharpen the capture of emotional nuance. Prompt learning (Liu et al., 2023) has additionally enabled few-shot learning and improved semantic comprehension.
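For contrast with those pretrained models, the classical statistical route mentioned above fits in a few lines of scikit-learn. The reviews and labels below are toy data for illustration only; this is not the paper's method.

```python
# A TF-IDF + logistic-regression baseline of the classical, pre-BERT kind.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["The hotel view was breathtaking", "Dirty room and rude staff",
           "Lovely breakfast and helpful concierge", "The tour was a waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["breathtaking view and helpful staff"]))  # likely [1] on this toy data
```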

Visual sentiment analysis initially relied on handcrafted features such as composition and texture (Machajdik & Hanbury, 2010) and aesthetic concepts like balance and harmony (Zhao et al., 2014). Adjective-noun pairs (ANPs; Borth et al., 2013) and analyses of their emotional implications (Li et al., 2018) were influential mid-level representations. More recently, deep neural networks with attention mechanisms have improved visual sentiment analysis by focusing on emotionally salient image regions (Yang et al., 2021; You et al., 2017), and integrating visual and textual analysis into a multimodal approach promises further gains in precision (Yang et al., 2021; Zhang et al., 2022).
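The region-level attention idea behind those models can be illustrated with a short sketch: score each spatial cell of a CNN feature map, then pool the map by the resulting weights. The class name and the ResNet-50 backbone are illustrative assumptions, not taken from the cited works.

```python
# Spatial attention pooling over a CNN feature map: the network learns to
# weight emotionally salient regions more heavily than the background.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SpatialAttentionPool(nn.Module):
    def __init__(self, channels=2048):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V2")
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep the 7x7 map
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one score per region

    def forward(self, image):
        fmap = self.features(image)                                   # (B, C, H, W)
        weights = torch.softmax(self.score(fmap).flatten(2), dim=-1)  # (B, 1, H*W)
        regions = fmap.flatten(2)                                     # (B, C, H*W)
        return (regions * weights).sum(dim=-1)                        # (B, C)

pooled = SpatialAttentionPool()(torch.randn(2, 3, 224, 224))
print(pooled.shape)  # torch.Size([2, 2048])
```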

Despite progress, challenges remain, including a scarcity of annotated datasets and model efficiency issues. Further research is needed to refine these multimodal techniques and improve their practical applications.
