Data Gathering, Processing, and Visualization for COVID-19


Copyright: © 2022 | Pages: 27
DOI: 10.4018/978-1-7998-8793-5.ch001

Abstract

The novel coronavirus has been impacting human society since 2019, and the global death toll had exceeded 4 million as of October 2021. To better understand the dynamics of COVID-19, big data technologies can be applied to provide an overview of how the disease has spread spatially and temporally. This chapter introduces how big data helps in tracking, processing, visualizing, and analyzing COVID-19 information by illustrating the major components of data visualization designs. Topics cover a broad range of cutting-edge techniques that address a variety of big data problems, such as data aggregation, data preprocessing, big data pipeline construction, data workflow architecture design, visual mining, and web-based dashboard development.
Chapter Preview

Introduction

Since December 2019, a novel coronavirus, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which causes Coronavirus Disease 2019 (COVID-19), has been spreading rapidly across the world. Due to its high human-to-human transmission efficiency and the severe consequences of infection, COVID-19 has become one of the most significant health crises in modern history. The World Health Organization (WHO) declared COVID-19 a global pandemic on March 11, 2020. According to WHO, a total of 236,599,025 individuals had been confirmed as infected worldwide, including 4,831,486 deaths, as of 6:49 CEST, October 8, 2021 (WHO, 2020). The highest numbers of COVID-19 cases have been reported in the United States of America (44.2M), followed by India (33.9M), Brazil (21.5M), the United Kingdom (8.05M), and Russia (7.58M). To date, the COVID-19 pandemic remains ongoing despite the availability of medications and high vaccination rates in some regions, owing to a series of COVID-19 variants such as B.1.1.7 (Alpha), B.1.351 (Beta), and B.1.617.2 (Delta) (Rubin, 2021; Bernal et al., 2021). Fighting the COVID-19 pandemic has been the dominant theme of the past two years. Research communities, public health organizations, government authorities, and industrial sectors have all been involved in overcoming the challenges of the pandemic, and many innovative disease control measures have been proposed using cutting-edge technologies such as big data analytics, artificial intelligence (AI), the Internet of Things (IoT), and 5G (Rodríguez-Rodríguez et al., 2021; Siriwardhana et al., 2020).

Various state-of-the-art technologies have been applied to address the current urgency of the pandemic, and big data is the foundation that supports many other applications, such as epidemiological models, pandemic trend predictions, and machine learning tasks. One noteworthy component is tracking major measurements of COVID-19 through visual mining tools, where big data processing plays a crucial role throughout the full project development cycle. Big data analytics is an automated process that uses a set of techniques or tools to access large-scale datasets and extract useful information and insights. Such a process involves a series of customized and proprietary steps and requires specific knowledge to operate the workflow properly. Because of the 4Vs of big data (volume, variety, velocity, and veracity), a robust, reliable, and fault-tolerant data processing pipeline is essential for any big data project. Sustainable automation can consolidate all tasks across ETL, data warehousing, and data visualization. Upstream vendor data is ingested into a data lake, where the source data is maintained and goes through cleaning, scrubbing, normalization, and insight extraction. Prior to visual mining and data analytical tasks, the data should be preprocessed into a user-friendly, analysis-ready form, as sketched in the example below.
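
The snippet below is a minimal sketch of such an ETL step in Python with pandas; the file paths, column names, and cleaning rules are hypothetical placeholders for whatever an upstream vendor feed actually delivers, and a production pipeline would normally run these steps under a workflow scheduler such as Apache Airflow (see Key Terms).

```python
import pandas as pd

# --- Extract: ingest a hypothetical upstream vendor file from the data lake ---
raw = pd.read_csv("data_lake/raw/covid_vendor_feed.csv")   # hypothetical path

# --- Transform: clean, scrub, and normalize ---
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
raw = raw.drop_duplicates()
raw["report_date"] = pd.to_datetime(raw["report_date"], errors="coerce")
raw["new_cases"] = pd.to_numeric(raw["new_cases"], errors="coerce").fillna(0)
clean = raw.dropna(subset=["report_date", "region"])        # hypothetical columns

# --- Load: write an analysis-ready table for downstream visual mining ---
daily = (clean.groupby(["region", "report_date"], as_index=False)["new_cases"]
              .sum())
daily.to_parquet("warehouse/daily_cases.parquet", index=False)
```

Each stage here maps onto one link of the pipeline described above: ingestion into the data lake, cleaning and normalization, and loading a curated table into the warehouse for visualization.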

A large number of data visualizers and dashboards have been developed to track and analyze COVID-19 spreading patterns. The underlying data can be classified into temporal measurements (e.g., infection, recovery, and death counts, hospitalizations, and test results over time), geographical information (e.g., disease measurements at different geospatial levels), and demographic data (e.g., disease measurements per race). However, gathering appropriate data for tracking the many facets of COVID-19 remains challenging due to differences in regionally reported records and the complexity of data formats. Data visualization, on the other hand, has become one of the most powerful tools for tracking data related to COVID-19, enabled by many web-based visualizers and dashboards. Prominent example dashboards include Coronavirus COVID-19 Global Cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU), the WHO Coronavirus Dashboard, Coronavirus Data in the United States by the New York Times (NYT), and the U.S. Centers for Disease Control and Prevention (CDC) COVID Data Tracker, in which big data techniques have been widely applied. Understanding the dynamics of the pandemic across regions is important for identifying potential emerging hotspots and assessing the possible effects of disease control strategies; a minimal example of plotting such regional time series is sketched below.
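
As a minimal sketch (not the implementation behind any of the dashboards named above), the following Python snippet loads the publicly published JHU CSSE global confirmed-case time series and plots cumulative counts for the countries mentioned earlier; the CSV URL and the country labels are assumed to match the current layout of the CSSE GitHub repository.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Public JHU CSSE global confirmed-case time series (URL assumed still valid)
URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")

df = pd.read_csv(URL)

# Aggregate provinces/states to country level and drop the lat/long metadata
cases = (df.drop(columns=["Province/State", "Lat", "Long"])
           .groupby("Country/Region")
           .sum())
cases.columns = pd.to_datetime(cases.columns)   # date columns -> DatetimeIndex

# Plot cumulative confirmed cases for the countries named in the text
# (country labels assumed to match the dataset's naming)
for country in ["US", "India", "Brazil", "United Kingdom", "Russia"]:
    plt.plot(cases.columns, cases.loc[country], label=country)

plt.legend()
plt.ylabel("Cumulative confirmed cases")
plt.title("COVID-19 confirmed cases (JHU CSSE time series)")
plt.tight_layout()
plt.show()
```

The same aggregation-then-plot pattern generalizes to death counts, hospitalizations, or any of the temporal measurements listed above once the corresponding time series is available.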

Key Terms in this Chapter

ETL: Stands for extract, transform, and load; the general procedure of moving data from one data source into another, typically a target data store.

Hadoop Ecosystem: A big data platform that offers a variety of services to solve big data problems through four main components: HDFS, MapReduce, YARN, and Hadoop Common.

Big Data Pipeline: A sequence of data processing components connected in series, where the output of one component is the input of the next; the pipeline can operate in parallel or in a time-sliced manner.

Apache Airflow: An open-source workflow management platform, originally developed at Airbnb, used to author, schedule, and monitor industrial-scale data workflows.
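
As a rough illustration of this term rather than a production configuration, a minimal Airflow DAG chaining hypothetical extract, transform, and load tasks might look like the following (the DAG name, schedule, and task bodies are assumptions).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull upstream vendor data into the data lake")   # placeholder task


def transform():
    print("clean, scrub, and normalize the raw data")       # placeholder task


def load():
    print("load curated tables into the warehouse")         # placeholder task


with DAG(
    dag_id="covid_etl",                 # hypothetical pipeline name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run extract, then transform, then load, once per day
    t_extract >> t_transform >> t_load
```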

Data Visualization: An efficient method of communicating data and information through graphical representations.

Data Acquisition: The process of sampling signals that measure real-world physical conditions and converting the samples into numerical values that can be read by a computer.

Apache Spark: An open-source analytics engine for large-scale data processing that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
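
For illustration only, the following minimal PySpark job aggregates a hypothetical case-level CSV; the input path and column names are assumptions, while the API calls are standard Spark SQL operations that the engine distributes across the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("covid-aggregation").getOrCreate()

# Read a hypothetical case-level dataset; Spark parallelizes the work
# across the cluster with implicit data parallelism and fault tolerance.
cases = spark.read.csv("data_lake/cases.csv", header=True, inferSchema=True)

# Aggregate new cases per country and reporting date (hypothetical columns)
daily = (cases.groupBy("country", "report_date")
              .agg(F.sum("new_cases").alias("total_new_cases")))

daily.write.mode("overwrite").parquet("warehouse/daily_cases_by_country")
spark.stop()
```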

Data Workflow: A set of operations that transforms data and information from a raw state into a processed, analysis-ready form.
