Twitter-Based Disaster Response Using Recurrent Nets

Rabindra Lamsal (Jawaharlal Nehru University, India) and T. V. Vijay Kumar (Jawaharlal Nehru University, India)
Copyright: © 2021 | Pages: 18
DOI: 10.4018/IJSKD.2021070108

Abstract

Twitter has become a major source of data for the research community working in the social computing domain. The microblogging site receives millions of tweets every day. Earlier studies have shown that, during a disaster, the frequency of tweets specific to the event grows exponentially, and these tweets, if monitored, processed, and analyzed, can contain actionable information relating to the event. However, during disasters the number of tweets can run into the hundreds of thousands, necessitating a semi-automated, artificial intelligence-based system that can extract actionable information on which effective disaster response can be based. This paper proposes a Twitter-based disaster response system that uses recurrent nets to train a classifier on a disaster-specific tweet dataset. The proposed system would enable timely dissemination of information to various stakeholders so that timely responses and proactive measures can be taken to reduce the severe consequences of disasters. Experimental results show that recurrent nets outperform traditional machine learning algorithms in accuracy when classifying disaster-specific tweets.

1. Introduction

Disasters, whether natural or human-made, lead to substantial loss of life and damage to property worth millions. Commonly occurring hazards include floods, earthquakes, landslides, terrorist attacks, tsunamis, stampedes, etc. People tend to use social media heavily during a hazard. As a result, social platforms such as Facebook and Twitter become an active source of information (Imran et al., 2015) through which people share updates about their safety and inquire about the safety of their loved ones. During such hours, the conversations available in the public domain result in the accumulation of an enormous amount of socially generated data. This data can be processed to extract situational-awareness information that enhances the efficacy of disaster response. The conversations also contain a significant amount of unnecessary information, which should be filtered out using effective methods so that essential information related to an ongoing hazard can be extracted. The extracted situational information can be quite beneficial to first responders and decision makers in devising useful and actionable plans.

Earlier studies (Abel et al., 2012; Ashktorab et al., 2014; Caragea et al., 2011; Imran et al., 2014; MacEachren et al., 2011; Purohit & Sheth, 2013; Sheth et al., 2010; Vieweg et al., 2010; Yin et al., 2015) show that tweets posted during hazardous events genuinely contribute to a better understanding of the events as they unfold. Unlike conversations on other social media platforms, those on Twitter are real-time and highly informal. Twitter imposes a character limit on each tweet, so people tend to express themselves informally. Nevertheless, given the burst in the number of tweets during an ongoing hazard (Yin et al., 2015) and the ease of accessing those near real-time tweets through extremely user-friendly APIs, Twitter has become the major source of data for the research community working in the crisis computing domain. Besides the official Twitter APIs, there are a couple of handy tools that can be used to download tweets based on hashtags, keywords, or geographical region. The Qatar Computing Research Institute (QCRI) provides a tweet corpus containing millions of tweets specific to nineteen natural and human-made disastrous events, including the 2015 Nepal earthquake, the 2014 India floods, the Peshawar school attack, the Ebola virus outbreak, and the Flight MH370 airline incident.
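As a rough illustration of the keyword-based filtering mentioned above, the sketch below checks whether a tweet mentions any term from a small disaster lexicon. The keyword set and the sample tweets are invented for illustration; a real pipeline would filter via the Twitter APIs or a curated crisis lexicon rather than this toy matcher.

```python
# Minimal keyword filter over tweets -- an illustrative sketch only.
DISASTER_KEYWORDS = {"earthquake", "flood", "tsunami", "landslide", "evacuate"}

def is_disaster_related(tweet: str) -> bool:
    # Normalize tokens: drop hashtags/mentions markers and trailing punctuation.
    tokens = {tok.strip("#@.,!?").lower() for tok in tweet.split()}
    return bool(tokens & DISASTER_KEYWORDS)

tweets = [
    "Major #flood in the city centre, roads blocked",
    "Enjoying my coffee this morning",
]
print([is_disaster_related(t) for t in tweets])  # → [True, False]
```

Such lexical filtering is only a coarse first pass; as the following sections note, a trained classifier is needed to separate genuinely informative tweets from incidental keyword matches.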

The number of tweets specific to a disastrous event can reach millions, with thousands of tweets posted per minute. It is not possible for humans to manually go through each tweet and take the necessary actions. However, monitoring, processing, analyzing, and extracting essential information from near real-time tweets can be achieved using artificial intelligence techniques. In (Lamsal & Vijay Kumar, 2020a, 2020b), a semi-automated AI-based disaster response system was designed using traditional classification-based machine learning algorithms; it was capable of understanding the thematic dimension of disaster-related tweets and classifying them into various groups relating to community requirements, loss of lives, and infrastructure damage. In this paper, an attempt is made to improve the performance of the classification model of the disaster response system by using deep learning.
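To make the recurrent-net idea concrete, the sketch below runs a forward pass of a simple (Elman-style) recurrent network over a tokenized tweet, ending in a softmax over a few tweet categories. All dimensions, weights, the toy vocabulary, and the class labels are illustrative assumptions, not the architecture or data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a hypothetical disaster-related tweet (illustrative only).
vocab = {"<pad>": 0, "flood": 1, "water": 2, "help": 3, "rescue": 4, "needed": 5}
tweet = ["flood", "water", "rescue", "needed"]
ids = [vocab[w] for w in tweet]

emb_dim, hid_dim, n_classes = 8, 16, 4   # e.g. needs / damage / casualties / other
E  = rng.normal(0, 0.1, (len(vocab), emb_dim))   # embedding matrix
Wx = rng.normal(0, 0.1, (emb_dim, hid_dim))      # input-to-hidden weights
Wh = rng.normal(0, 0.1, (hid_dim, hid_dim))      # hidden-to-hidden weights
Wo = rng.normal(0, 0.1, (hid_dim, n_classes))    # hidden-to-output weights

h = np.zeros(hid_dim)
for t in ids:                 # one recurrent step per token, in tweet order
    h = np.tanh(E[t] @ Wx + h @ Wh)

logits = h @ Wo
probs = np.exp(logits) / np.exp(logits).sum()    # softmax over tweet classes
print(probs.shape)
```

In practice such a network would be trained with backpropagation through time, and variants such as LSTMs are preferred over the plain recurrence shown here because they mitigate vanishing gradients on longer sequences.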

1.1 Organization of the Paper

The paper is organized as follows. Section 2 briefly discusses deep learning and its applicability to classifying Twitter data. The proposed disaster response system for classifying disaster-specific tweets is described in Section 3. Section 4 discusses the experimental results, followed by the conclusion in Section 5.
