Continuous Attention Mechanism Embedded (CAME) Bi-Directional Long Short-Term Memory Model for Fake News Detection

Anshika Choudhary, Anuja Arora
Copyright: © 2022 | Pages: 24
DOI: 10.4018/IJACI.309407

Abstract

Credible analysis of news on social media is needed because false news spreads unnecessary restlessness and reluctance in the community. Numerous individuals and social media marketing entities circulate inauthentic news through online social media. Hence, delineating these activities on social media and reliably identifying delusive content is a challenging task. This work proposes a continuous attention-driven, memory-based deep learning model to predict the credibility of an article. To exhibit the importance of continuous attention, the research is presented incrementally: first, a long short-term memory (LSTM)-based deep learning model is applied, which is then extended with a bidirectional LSTM for fake news identification. Finally, this work proposes a continuous attention mechanism embedded (CAME)-bidirectional LSTM model for predicting the nature of news. Results show that the proposed CAME model outperforms both the LSTM and the bidirectional LSTM models.
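
The architecture described above can be pictured as a BiLSTM whose per-timestep hidden states are pooled by an attention layer before classification. The following is a minimal illustrative sketch in PyTorch; the layer sizes, the additive attention formulation, and the class name AttentiveBiLSTM are assumptions made for illustration, not the paper's exact configuration.

import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    """Illustrative BiLSTM with attention over hidden states for
    binary (fake/real) news classification. Hyperparameters are
    hypothetical, not the paper's reported configuration."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Additive attention: score each timestep, softmax into weights
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embedding(token_ids)           # (batch, time, embed)
        h, _ = self.bilstm(x)                   # (batch, time, 2*hidden)
        scores = torch.tanh(self.attn(h))       # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)  # attention over timesteps
        context = (weights * h).sum(dim=1)      # weighted sum of states
        return torch.sigmoid(self.classifier(context)).squeeze(-1)

# Usage on dummy data: 8 articles, each truncated/padded to 120 tokens
model = AttentiveBiLSTM(vocab_size=5000)
batch = torch.randint(1, 5000, (8, 120))
probs = model(batch)  # per-article probability of being fake

The design intuition is that attention lets the classifier weight the hidden states of salient tokens (e.g., sensational words in a headline) rather than relying only on the final LSTM state.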

Introduction

Consumption of online information is at its peak. It has become very easy for users to obtain information from home: online platforms keep them updated with recent stories and news and let them share status updates. Along with the freedom of new technologies, services, and free access to a wide variety of information, audiences are exposed to a daily dose of false content, conspiracy theories, hoaxes, click-bait headlines, junk science, and satire. It has become difficult to trace the origin of false information. Users are drawn to enticing headlines and titles and want to join the discussion around such hot topics, which heightens the sense of chaos and insecurity in the population.

Fake news and its consequences carry the potential to destroy, ranging from a citizen's lifestyle to a country's global relations. Its most visible impact is political (Allcott & Gentzkow, 2017): fake news has been used to manipulate public thoughts, opinions, and beliefs about democracies and governments. Furthermore, media outlets have started publishing articles with flashy headlines and photos in order to maximize advertising revenue (click-bait), where the financial incentive is well understood. At large scale, fake news can inflict economic loss and breakdown at a rapid pace, and it has further consequences such as cyber-attacks and phishing, whose root causes researchers are working to eradicate (Shu et al., 2017; Ajao et al., 2018). In recent years, people have even been lynched by mobs spurred by nothing more than rumors (Jin et al., 2017).

The spread of false information created by bots has also drawn attention to fake news stories. Bots are accounts that share misinformation more frequently than ordinary accounts. Their prime targets are generally accounts with similar views that are likely to share or repost the same news in their feeds. Bots have been found to populate the social media space to deceive or mislead users (Shao et al., 2018). As a result, fake news has become a challenging issue for large organizations such as Google, Facebook (Lyons, 2018; Sahoo et al., 2021), and Twitter, and for the many researchers constantly working to repel the spread of false information on these platforms. To address this issue and assess the degree of trust in a piece of information, a system is needed that differentiates original news sources from deliberately fabricated lies. For instance, researchers have combined deep learning models with tensor factorization in the EchoFakeD tool, validated the results on two datasets, and achieved an accuracy of 92% (Kaliyar et al., 2021).
