Sustainable Big Data Analytics Process Pipeline Using Apache Ecosystem

Jane Cheng, Peng Zhao
Copyright: © 2023 | Pages: 13
DOI: 10.4018/978-1-7998-9220-5.ch073

Abstract

This article provides a comprehensive understanding of the cutting-edge big data workflow technologies that have been widely applied in industrial applications, covering a broad range of the most current big data processing methods and tools, including Hadoop, Hive, MapReduce, Sqoop, Hue, Spark, Cloudera, Airflow, and GitLab. An industrial data workflow pipeline is proposed and investigated in terms of its system architecture, which is designed to meet the needs of data-driven industrial big data analytics applications focused on large-scale data processing. It differs from traditional data pipelines and workflows in its ability to combine ETL with analytical portals. The proposed data workflow can improve industrial analytics applications across multiple tasks. This article also provides big data researchers and professionals with an understanding of the challenges facing big data analytics in real-world environments and informs interdisciplinary studies in this field.
Chapter Preview

Introduction

Big data analytics is an automated process that uses a set of techniques and tools to access large-scale data and extract useful information and insight. The process involves a series of customized and proprietary steps, and it requires specific knowledge to handle and operate the workflow properly. Due to the 4V nature of big data (volume, variety, velocity, and veracity), a robust, reliable, and fault-tolerant data processing pipeline is required. The proposed approach helps application developers meet this challenge.

Apache Airflow is a cutting-edge technology for big data analytics that coordinates data processing workflows and data warehouses. Airflow was developed by Airbnb engineers to manage internal workflows productively; in 2016 it joined the Apache Software Foundation and was made available to users as open source. Airflow is a framework for executing, scheduling, distributing, and monitoring jobs, and it can handle both interdependent and independent tasks. To operate each job, a directed acyclic graph (DAG) definition file is required; it describes the collection of tasks to run, organized by their relationships and dependencies.
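
To make this concrete, the following is a minimal sketch of such a DAG definition file in Python, assuming Airflow 2.x; the DAG name, schedule, and task logic are illustrative placeholders rather than the chapter's actual pipeline.

    # Minimal Airflow DAG definition: two dependent tasks scheduled daily.
    # The DAG id, schedule, and task callables are illustrative assumptions.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        # Placeholder: pull raw upstream vendor data into the data lake.
        print("extracting upstream vendor data")


    def transform():
        # Placeholder: clean and normalize the extracted data.
        print("cleaning and normalizing data")


    with DAG(
        dag_id="vendor_data_pipeline",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        # The dependency below defines the directed acyclic graph:
        # extract must finish before transform starts.
        extract_task >> transform_task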

Sustainable automation consolidates all ETL, data warehousing, and analytics tasks on a single technology platform. Upstream vendor data is ingested into a data lake, where the source data is maintained and passes through cleaning, scrubbing, normalization, and insight extraction. In the subsequent data mining step, the data can be processed further to serve analytical studies for end users. Motivated by the current demand in big data analytics and industrial applications, this chapter illustrates and investigates a novel sustainable big data processing pipeline built from a variety of big data tools. The proposed pipeline starts from a standard data processing workflow orchestrated with Apache Airflow, uses GitLab for source code control and peer code review, and relies on CI/CD for continuous integration and deployment. Apache Spark is used to scale the data computation and to standardize data during the data warehousing procedure (a sketch of this standardization step is given after the objectives list below). With the data persisted in HDFS/ADLS, downstream systems can access it through either a data visualization tool or an API. The objectives of this chapter are:

  • investigating the most recent big data tools for constructing the novel data workflow architecture;

  • illustrating the major functional components of the proposed system architecture;

  • initializing a state-of-the-art data workflow architecture design that can be used in industrial applications.
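
The following is a minimal PySpark sketch of the Spark standardization step mentioned above; the input and output paths, column names, and Parquet output format are assumptions made for illustration, not details taken from the chapter.

    # Minimal PySpark sketch of the standardization step in the warehousing procedure.
    # Paths, column names, and file formats are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("vendor-data-standardization").getOrCreate()

    # Read raw vendor data from the data lake (path is hypothetical).
    raw = spark.read.option("header", True).csv("hdfs:///datalake/raw/vendor_data")

    # Standardize: normalize column names, cast types, drop duplicate records.
    standardized = (
        raw.select([F.col(c).alias(c.strip().lower().replace(" ", "_")) for c in raw.columns])
           .withColumn("amount", F.col("amount").cast("double"))  # "amount" is a hypothetical column
           .dropDuplicates()
    )

    # Persist to the warehouse zone in HDFS/ADLS so downstream systems
    # (visualization tools or APIs) can read it.
    standardized.write.mode("overwrite").parquet("hdfs:///warehouse/vendor_data")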


Background

Due to the fast evolution of information technologies and systems, the avalanche-like growth of data has prompted the emergence of new models and technical methods for distributed data processing, including MapReduce, Dryad, and Spark (Khan et al., 2014). For processing large graphs, special-purpose systems for distributed computing based on the data-flow approach were introduced (Gonzalez et al., 2014). Some systems focus on batch (offline) data processing, while other systems and services handle real-time (online) data processing, such as Apache Storm, Spark Streaming, and Kafka Streams; these attract increasing attention due to users' demand for rapid and intelligent responses to incoming data (Zaharia et al., 2012). Such systems implement distributed data processing operations so as to support large volumes of incoming data and high-speed data delivery.

For distributed data processing, a crucial feature of existing data-driven software systems is that they abstract the programmer from the implementation details of the computation through ready-made primitives, for example the distributed data-flow operators map and reduce. This simplifies the writing of programs that fit the underlying computation model, although other classes of applications may still be difficult to implement; MapReduce-based systems, for instance, may not be an optimal choice for iterative algorithms and fully connected applications. Many professional solutions for diverse kinds of applications have been established to address the limitations of existing distributed data processing models and technologies (Suleykin & Panfilov, 2019a).
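
To make the map and reduce abstraction concrete, the following minimal PySpark sketch counts words using only ready-made data-flow primitives; the input path is a hypothetical placeholder, and the engine, not the programmer, decides how the computation is distributed across the cluster.

    # Word count expressed purely with map/reduce primitives; the distribution
    # of work across the cluster is handled by the engine, not the programmer.
    # The input path is a hypothetical placeholder.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("map-reduce-example").getOrCreate()
    sc = spark.sparkContext

    counts = (
        sc.textFile("hdfs:///datalake/raw/documents")   # load lines
          .flatMap(lambda line: line.split())           # map each line to words
          .map(lambda word: (word, 1))                  # map each word to a (word, 1) pair
          .reduceByKey(lambda a, b: a + b)              # reduce pairs by key to get counts
    )

    print(counts.take(10))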

Key Terms in this Chapter

Data Pipeline: A sequence of data processing components connected in series, where the output of one component is the input of the next; the pipeline can be operated in parallel or in a time-sliced manner.

Apache Spark: An open-source analytical engine for big data processing with an interface for programming entire clusters with implicit data parallelism and fault tolerance.

Data Workflow: A set of operations that takes information and data from a raw to a processed state.

HBase: A scalable, distributed database in the Hadoop ecosystem designed to store data in structured formats.

ETL: Stands for extract, transform, and load, the general procedure of delivering data from one data source to another.

Big Data Analytics: The use of analytical tools to handle large, diverse datasets, including structured, semi-structured, and unstructured data from multiple sources, in efficient and effective transactional processes.

Hadoop Ecosystem: A big data platform that offers a variety of services to solve big data problems, built around four main components: HDFS, MapReduce, YARN, and Hadoop Common.

Apache Airflow: An open-source workflow management platform for big data, originally developed at Airbnb, that provides an efficient solution for managing industrial-scale data workflow challenges.
