Introduction
While providing great convenience in people's daily lives, social media also promotes the spread of fake news, with negative effects on society, the economy, and culture. During major events such as the US presidential election (Allcott & Gentzkow, 2017), the COVID-19 pandemic (Diseases, 2020), and the Russian-Ukrainian conflict (Haq et al., 2022), social media platforms played a critical role in distributing information while also being flooded with misinformation, including large volumes of fake news. The propagation of fake news must therefore be detected and prevented.
A key element of fake news is emotional expression (Alonso et al., 2021). In most cases, fake news is spread in ways designed to attract users' attention and mislead them into commenting and forwarding it. Publishers of fake news generally use emotionally arousing tactics to provoke user responses to increasingly exaggerated fabrications.
Emotional elements are consequently considered enriching features for fake news detection. Wu et al. (2020) found emotional correlations and semantic conflicts between news content and user comments. Furthermore, Zhang et al. (2021) found that the sentiment of user comments often relates to the emotion of the news content; beyond the emotion of the news content itself, they explored the sentiment of user comments and the difference between the two.
Though crucial for detecting fake news, emotional information remains far from fully exploited in these studies, calling for further exploration. First, emotional features of user comments are used without any screening: often only the first few comments are taken directly (Zhang et al., 2021), and the same holds when semantic features are used (Shu et al., 2019). In particular, for datasets such as Weibo (Ma et al., 2016), where the number of user comments is extremely large, no research has yet addressed selecting the user comments most relevant to fake news detection. Second, the correlation between the sentiment of user comments and that of news content is not fully considered (Zhang et al., 2021); in existing models, the sentiment representations of news content and user comments are usually extracted separately as detector features. Finally, the sentiment features of user comments have not been exploited to provide reasonable interpretability for fake news detection. Explainable fake news detection typically starts from the semantic perspective (Shu et al., 2019) or the forwarding relationship (Lu et al., 2020), and existing models that use emotional features for fake news detection have not drawn on the emotional perspective to provide reasonable explanations.
To address the abovementioned issues, we propose an Emotion-Driven Interpretable fake news detection model (EDI). EDI selects user comments based on their emotional value and uses Convolutional Neural Networks (CNNs) to extract sentiment representations of news content and user comments. It then learns the correlation between the emotional features of news content and user comments through co-attention, and learns the representation of comment emotional features through attention. Finally, the co-attention and attention weights provide the interpretations.
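The co-attention step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the matrix names (N, C, W), the dimensions, the tanh affinity, and the max-pooling over the affinity matrix are all our assumptions, and the weights are random rather than learned.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: d-dimensional emotion features for n positions of
# the news content and for m selected user comments (random stand-ins
# for CNN-extracted sentiment representations).
rng = np.random.default_rng(0)
d, n, m = 8, 5, 4
N = rng.normal(size=(d, n))   # news-content emotion features
C = rng.normal(size=(d, m))   # user-comment emotion features
W = rng.normal(size=(d, d))   # affinity weights (learnable in a real model)

# Affinity matrix capturing pairwise correlation between comment and
# news emotion features.
F = np.tanh(C.T @ W @ N)          # shape (m, n)

# Attention distributions over news positions and over comments; in an
# interpretable detector these weights indicate which comments (and which
# parts of the news) drove the decision.
a_news = softmax(F.max(axis=0))   # (n,) attention over news content
a_comm = softmax(F.max(axis=1))   # (m,) attention over comments

news_repr = N @ a_news            # (d,) attended news emotion vector
comm_repr = C @ a_comm            # (d,) attended comment emotion vector
print(news_repr.shape, comm_repr.shape)
```

The two attended vectors would then be concatenated and fed to a classifier; inspecting `a_comm` gives a ranking of the comments by their contribution, which is the sense in which the attention weights serve as explanations.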