A Novel Approach to Location-Aware Scheduling of Workflows Over Edge Computing Resources

Yin Li (Institute of Software Application Technology, Guangzhou, China & Chinese Academy of Sciences, Guangzhou, China), Yuyin Ma (Chongqing University, China) and Ziyang Zeng (Chonqing University, China)
Copyright: © 2020 |Pages: 13
DOI: 10.4018/IJWSR.2020070104

Abstract

Edge computing is pushing the frontier of computing applications, data, and services away from centralized nodes to the logical extremes of a network. A major technological challenge for workflow scheduling in the edge computing environment is cost reduction under service-level-agreement (SLA) constraints in terms of performance and quality-of-service requirements, because real-world workflow applications are constantly subject to negative impacts (e.g., network congestion, unexpectedly long message delays, and shrinking coverage range of edge servers due to battery depletion). To address this concern, we propose a novel approach to location-aware and proximity-constrained multi-workflow scheduling over edge computing resources. The proposed approach minimizes monetary cost while meeting user-required workflow completion deadlines. It employs an evolutionary algorithm (i.e., the discrete firefly algorithm) to generate near-optimal scheduling decisions. For validation, we show that the proposed approach outperforms traditional peers in terms of multiple metrics, based on a real-world dataset of edge resource locations and multiple well-known scientific workflow templates.

Introduction

In the past decades, the cloud computing paradigm has evolved into a major force for providing computing, storage, and network services, which have been applied in various fields, e.g., scientific workflow execution (Li, 2018; Peng, 2018; Wang, 2019; Guo, 2019). However, the cloud computing paradigm can be ineffective in supporting IoT-based and time-critical applications, because traditional cloud infrastructures (Xia, 2015) are located far from the network edge, while smart IoT devices are usually located at the edge of the network. To address this challenge, the edge computing paradigm (Li, 2019; Chen, 2020; Xiang, 2020) has been developed to satisfy the demanding requirements of low latency, location awareness, and mobility. This novel paradigm can be seen as a network-edge cloud that effectively compensates for the disadvantages of cloud computing, such as communication latency. Edge resources are usually located close to end-user applications to better serve delay-sensitive and time-critical tasks. Thanks to the improved resource-user proximity, the power consumption, network traffic, operating expenses, and fault tolerance of edge-oriented applications improve as well.

Figure 1.

Edge computing deployment example


Extensive research efforts have been devoted to the problem of scheduling workflows over cloud infrastructures with multiple objectives and constraints, which is known to be NP-hard. However, scheduling workflows upon edge infrastructures can be intrinsically different, and it remains a challenge to optimize the cost of workflow execution under the proximity constraint, i.e., every edge server can only serve users within its communication range. Figure 1 illustrates an example of deploying and offloading tasks among edge nodes. It is assumed that there are four edge servers in a particular area, each covering a specific region, and that a user can offload computing tasks to any server within whose coverage the user is located. User u6 can offload computing tasks to servers s2 and s3, u7 can offload tasks to s2, s3, and s4, while u1 can only offload tasks to s1. Since user u10 is out of the coverage of every server, its tasks cannot be offloaded at all. Each user initiates a workflow with multiple tasks to be offloaded, and tasks belonging to the same user can be offloaded to different edge servers. For instance, tasks belonging to u5 can be offloaded to both s1 and s2.
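The proximity constraint above can be sketched in a few lines: given user and server positions, a user's set of eligible servers is simply those whose coverage circle contains the user. The coordinates and radii below are made-up numbers chosen to reproduce the Figure 1 relationships (u1 reaches only s1, u6 reaches s2 and s3, u10 reaches none); they are not from the paper's dataset.

```python
import math

# Hypothetical coordinates (in meters): each edge server covers a
# circular area; a user may only offload tasks to servers whose
# coverage circle contains the user's position.
servers = {
    "s1": {"pos": (0, 0), "radius": 60},
    "s2": {"pos": (100, 0), "radius": 60},
    "s3": {"pos": (150, 50), "radius": 60},
    "s4": {"pos": (200, 0), "radius": 60},
}

users = {
    "u1": (-20, 10),    # inside s1's range only
    "u6": (120, 20),    # inside s2 and s3
    "u10": (400, 400),  # outside every server's range
}

def eligible_servers(user_pos, servers):
    """Return the set of servers allowed by the proximity constraint."""
    return {
        name for name, srv in servers.items()
        if math.dist(user_pos, srv["pos"]) <= srv["radius"]
    }

for user, pos in users.items():
    print(user, sorted(eligible_servers(pos, servers)))
```

Any scheduling decision must then assign each task only to a server in the issuing user's eligible set; a user with an empty set (like u10) cannot be served at all.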

In this paper, we study the problem of location-aware, proximity-constrained, and cost-efficient multi-workflow scheduling in the edge computing environment. We consider a multi-edge-user, multi-workflow, cost-reduction, and completion-time-constrained formulation and employ a discrete firefly algorithm (DFA) to solve it. To validate the proposed approach, we conduct simulation-based case studies and show that our method outperforms its peers in terms of cost and workflow completion time.
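The DFA idea can be illustrated with a minimal sketch: each firefly encodes a task-to-server assignment, brightness is the inverse of a fitness combining monetary cost with a deadline-violation penalty, and dimmer fireflies move toward brighter ones by probabilistically copying their assignments. The instance data (task count, cost and time matrices, eligibility sets), the serialized completion-time model, and the attraction probabilities below are all illustrative assumptions, not the paper's exact formulation.

```python
import random

random.seed(7)  # for reproducibility of this toy run

# Toy instance: 5 workflow tasks, 3 candidate edge servers.
# eligible[i] is the proximity-constrained server set for task i;
# cost[i][s] and time[i][s] are hypothetical monetary cost and
# execution time of running task i on server s.
N_TASKS, DEADLINE = 5, 30.0
eligible = [[0, 1], [0, 1, 2], [1, 2], [0, 2], [1, 2]]
cost = [[4, 6, 9], [5, 3, 7], [8, 2, 4], [3, 9, 5], [6, 4, 2]]
time = [[9, 5, 3], [6, 8, 4], [2, 7, 5], [8, 3, 6], [4, 6, 9]]

def fitness(sched):
    """Lower is better: total cost plus a penalty if the (serialized)
    completion time exceeds the deadline."""
    total_cost = sum(cost[i][s] for i, s in enumerate(sched))
    total_time = sum(time[i][s] for i, s in enumerate(sched))
    return total_cost + 100.0 * max(0.0, total_time - DEADLINE)

def random_firefly():
    """A random proximity-feasible assignment."""
    return [random.choice(eligible[i]) for i in range(N_TASKS)]

def move_towards(dim, bright):
    """Discrete 'attraction': per task, copy the brighter firefly's
    assignment with high probability, else keep or mutate."""
    new = []
    for i in range(N_TASKS):
        r = random.random()
        if r < 0.6:
            new.append(bright[i])
        elif r < 0.8:
            new.append(dim[i])
        else:
            new.append(random.choice(eligible[i]))
    return new

def dfa(n_fireflies=10, n_iters=50):
    swarm = [random_firefly() for _ in range(n_fireflies)]
    for _ in range(n_iters):
        swarm.sort(key=fitness)
        best = swarm[0]  # elitism: keep the brightest firefly
        swarm = [best] + [move_towards(f, best) for f in swarm[1:]]
    return min(swarm, key=fitness)

best = dfa()
print("schedule:", best, "fitness:", fitness(best))
```

Because every gene is drawn from the task's eligible set, both initialization and movement preserve the proximity constraint by construction, while the penalty term steers the swarm toward deadline-respecting, low-cost schedules.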

Table 1.
Variables and symbols used in this paper
