Emotion-Drive Interpretable Fake News Detection

Xiaoyi Ge, Mingshu Zhang, Xu An Wang, Jia Liu, Bin Wei
Copyright: © 2022 |Pages: 17
DOI: 10.4018/IJDWM.314585

Abstract

Fake news has brought significant challenges to the healthy development of social media. Although current fake news detection methods are advanced, many models use user comments indiscriminately, without selection, and do not consider the emotional connection between news content and user comments. The authors propose an emotion-driven explainable fake news detection model (EDI) to address this problem. The model selects valuable user comments by their sentiment value, obtains the emotional correlation representation between news content and user comments through collaborative annotation, and obtains a weighted representation of user comments through an attention mechanism. Experimental results on Twitter and Weibo show that the model significantly outperforms state-of-the-art models and provides reasonable interpretations.

Introduction

While providing great convenience to people’s daily lives, social media also promotes the spread of fake news and has negative effects on society, the economy, and culture. During major events such as the US presidential election (Allcott & Gentzkow, 2017), the COVID-19 pandemic (Diseases, 2020), and the Russian-Ukrainian conflict (Haq et al., 2022), social media platforms played an extremely critical role in distributing information while also being flooded with misinformation, including fake news. The propagation of fake news must therefore be detected and prevented.

A key element of fake news is emotional expression (Alonso et al., 2021). In most cases, fake news is crafted to attract users’ attention and mislead them into commenting and forwarding it. Fake news publishers generally employ emotionally arousing tactics to drive users to respond with ever more exaggerated fabrications.

Emotional elements are consequently considered enrichment features for fake news detection. Wu et al. (2020) found emotional correlations and semantic conflicts between news content and user comments. Furthermore, Zhang et al. (2021) found that user comments often carry sentiment related to the emotion of the news content; beyond the emotion of the news content itself, they explored the sentiment of the news comments and the differences between the two.

Though crucial for detecting fake news, emotional information is still far from fully exploited in these studies, calling for further exploration. First, emotional features of user comments are used without screening: often only the first few comments are taken directly (Zhang et al., 2021), and the same holds for semantic features (Shu et al., 2019). In particular, for datasets such as Weibo (Ma et al., 2016), where the number of user comments is extremely large, no research has yet examined how to select the most relevant user comments for fake news detection. Second, the correlation between user-comment sentiment and news-content sentiment is not fully considered (Zhang et al., 2021); in existing models, the sentiment representations of news content and user comments are usually extracted separately as detector features. Finally, the sentiment features in user comments are not exploited to provide reasonable interpretability for fake news detection. While explainable fake news detection often starts from the semantic perspective (Shu et al., 2019) or the forwarding relationship (Lu et al., 2020), existing models that use emotional features for fake news detection have not considered the emotional perspective as a source of reasonable explanations.

To address the abovementioned issues, we propose an emotion-driven interpretable fake news detection model (EDI) that selects user comments based on their emotional value and utilizes convolutional neural networks (CNNs) to extract sentiment representations of news content and user comments. The correlation between the emotional features of news content and user comments is then learned through co-attention, and a weighted representation of the user comments’ emotional features is learned through attention. Finally, the co-attention and attention weights provide the interpretations.
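The pipeline described above can be sketched at a high level. The following is a minimal numpy illustration, not the authors' code: the sentiment-based selection criterion, the dot-product affinity, and all function names here are simplifications assumed for exposition (the paper's actual co-attention and CNN encoders are richer than this).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax, used to normalize attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_comments(comments, sentiment_values, k):
    """Keep the k comments with the strongest sentiment.
    The |sentiment| ranking is a hypothetical stand-in for the paper's
    sentiment-value selection step."""
    order = np.argsort(-np.abs(sentiment_values))[:k]
    return [comments[i] for i in order]

def co_attention(news_feat, comment_feats):
    """news_feat: (d,) emotion feature of the news content.
    comment_feats: (n, d) emotion features of selected comments.
    Returns per-comment weights (the interpretable part) and a
    weighted comment representation. A simple dot-product affinity
    replaces the model's learned co-attention."""
    affinity = comment_feats @ news_feat      # (n,) news-comment correlation
    weights = softmax(affinity)               # normalized attention weights
    fused = weights @ comment_feats           # (d,) weighted comment features
    return weights, fused

# Usage: pick the 2 most emotional comments, then weight them against the news.
comments = ["calm reply", "angry rebuttal", "sarcastic doubt"]
sentiments = np.array([0.1, -0.9, 0.5])
kept = select_comments(comments, sentiments, k=2)

news = np.array([1.0, 0.0])
comment_feats = np.array([[0.9, 0.1], [0.2, 0.8]])
weights, fused = co_attention(news, comment_feats)
```

The attention weights are what the model exposes as an explanation: comments whose emotion aligns with (or conflicts with) the news content receive higher weight and can be surfaced as evidence.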
