Recent Advances in Edge Computing Paradigms: Taxonomy Benchmarks and Standards for Unconventional Computing

Sana Sodanapalli, Hewan Shrestha, Chandramohan Dhasarathan, Puviyarasi T., Sam Goundar
Copyright: © 2021 |Pages: 15
DOI: 10.4018/IJFC.2021010103


Edge computing is an exciting new approach to network architecture that helps organizations move beyond the limitations imposed by traditional cloud-based networks. It has emerged as a viable and important architecture that supports distributed computing by deploying compute and storage resources closer to the data source. Edge and fog computing address four network limitations: bandwidth, latency, congestion, and reliability. The research community sees applications of edge computing in manufacturing, farming, network optimization, workplace safety, healthcare, transportation, and more. The promise of this technology will be realized by addressing new research challenges in the IoT paradigm and by designing highly efficient communication technology with minimum cost and effort.

2. Literature Study

Shaoyong Guo (2021) describes an edge computing environment comprising base stations with server clusters and many kinds of computational access points close to the devices. All of these edge computing nodes can provide service execution for a container cluster, but achieving an optimal real-time resource allocation scheme for the container cluster in the edge environment is challenging. Delay reduction is an important performance indicator for delay-sensitive applications, and the end-to-end delay analysis in existing work often focuses on the sum of all data packet delays in the service flow. The authors therefore propose a delay-sensitive resource allocation algorithm based on A3C (Asynchronous Advantage Actor-Critic) to solve this problem, and finally use an ESN (echo state network) to improve the traditional A3C algorithm.
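The paper's A3C/ESN method learns the allocation policy; as a much simpler illustration of the underlying problem, the sketch below places containers on edge nodes with a greedy rule that minimizes an estimated end-to-end delay. The node list, delay model, and capacity figures are all hypothetical, not from the paper.

```python
# Hypothetical greedy baseline for delay-sensitive container placement.
# Each node has a CPU capacity and a base delay; estimated delay grows
# as the node's load approaches capacity.
def place_containers(containers, nodes):
    """Assign each container to the edge node with the lowest
    estimated end-to-end delay that still has free capacity."""
    placement = {}
    for name, cpu_demand in containers:
        best = None
        for node in nodes:
            if node["free_cpu"] < cpu_demand:
                continue  # node cannot host this container
            load = 1.0 - node["free_cpu"] / node["cpu"]
            delay = node["base_delay_ms"] * (1.0 + load)
            if best is None or delay < best[1]:
                best = (node, delay)
        if best is None:
            raise RuntimeError(f"no capacity for {name}")
        node, delay = best
        node["free_cpu"] -= cpu_demand
        placement[name] = (node["id"], round(delay, 2))
    return placement

nodes = [
    {"id": "bs-1", "cpu": 8.0, "free_cpu": 8.0, "base_delay_ms": 5.0},
    {"id": "ap-1", "cpu": 4.0, "free_cpu": 4.0, "base_delay_ms": 2.0},
]
containers = [("video-analytics", 3.0), ("sensor-agg", 2.0)]
placement = place_containers(containers, nodes)
print(placement)
```

A learning-based scheme such as A3C replaces this one-shot greedy rule with a policy trained on long-run delay, which matters when placements interact over time.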

Kyuchang Lee (2021): Deep learning is one of the AI technologies that can analyze large-scale data effectively, and edge computing is another promising technology that improves service provision in modern smart cities. The deep learning process suits edge computing environments because some processing layers can be shifted to the edges; the remaining data can then be transferred to the cloud and processed by the residual layers. Once the deep learning layers are distributed by the edge computing system, the edge nodes reduce the data size by processing certain portions at the edges, so both the network delay and the computational overhead of the cloud server are reduced. The authors present a Deep Learning Layer Assignment in Edge Computing (DLAEC) algorithm. Here, a task is a unit of software that improves quality of life (QoL), utilizes data from several devices, and can be used by multiple devices independently.
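The core idea of layer assignment can be sketched with a toy split-point search: given the output size of each layer, choose where to hand off from edge to cloud so that the least data crosses the network. This is only an illustration of the concept, not the DLAEC algorithm itself; the layer sizes are invented.

```python
def best_split(output_sizes_kb, input_kb):
    """Pick the layer index after which to hand off to the cloud,
    minimizing the data transmitted over the network.
    Splitting after layer i transmits output_sizes_kb[i];
    index -1 means sending the raw input (no edge processing)."""
    best_idx, best_kb = -1, input_kb
    for i, size in enumerate(output_sizes_kb):
        if size < best_kb:
            best_idx, best_kb = i, size
    return best_idx, best_kb

# Per-layer output sizes for a hypothetical small CNN (KB per inference).
# Early layers often inflate the data; later layers compress it.
sizes = [400, 120, 30, 60]
idx, kb = best_split(sizes, input_kb=150)
print(idx, kb)
```

In this toy case the search splits after the third layer (index 2), so only 30 KB leaves the edge instead of the 150 KB raw input. DLAEC additionally weighs edge compute capacity and multi-device sharing, which this sketch ignores.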

Yaser Mansouri (2020): Cloud computing is the delivery of centralized, virtualized computing, storage, services, and application resources over the Internet. However, cloud computing cannot serve real-time IoT applications, which require ultra-low latency, low jitter, high-demand bandwidth, mobility services, and so on. The extension of computing services from centralized cloud-based paradigms to the edge of the network is called edge computing; it boosts the overall efficiency of infrastructures by achieving ultra-low latency, reducing backhaul load, supporting mobility services, and increasing service resilience. These paradigms consist of connected resource-constrained devices such as smartphones and wearable gadgets. The fundamental technology of edge computing paradigms is resource virtualization, which decouples hardware resources from software so that multiple tenants can run on the same hardware. Application requirements are the primary factors in selecting virtualization types for IoT frameworks, so there is a need to prioritize requirements, integrate different virtualization techniques, and exploit a hierarchical edge-cloud architecture.
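The idea that application requirements drive placement in a hierarchical edge-cloud architecture can be illustrated with a minimal tier-selection rule: pick the most resourceful tier whose typical latency still meets the application's bound. The tier names and latency figures below are illustrative assumptions, not values from the survey.

```python
# Hierarchical tiers, ordered from closest (least resources) to
# farthest (most resources). Latencies are illustrative.
TIERS = [
    ("device", 1),   # on-device processing, ~1 ms
    ("edge", 10),    # nearby edge node, ~10 ms round trip
    ("cloud", 100),  # remote data center, ~100 ms round trip
]

def select_tier(max_latency_ms):
    """Return the most resourceful tier whose typical round-trip
    latency still satisfies the application's latency bound."""
    chosen = None
    for name, latency in TIERS:
        if latency <= max_latency_ms:
            chosen = name  # later tiers offer more resources
    return chosen

print(select_tier(50))   # latency-sensitive app stays at the edge
print(select_tier(500))  # tolerant workload can go to the cloud
print(select_tier(5))    # ultra-low-latency work stays on-device
```

Real frameworks prioritize several requirements at once (latency, bandwidth, mobility, isolation) and mix virtualization techniques such as VMs and containers across tiers; this sketch reduces that to a single latency criterion.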

Junyou Yang (2020): Adaptive dynamic programming (ADP) belongs to reinforcement learning, an important branch of artificial intelligence. Policy iteration (PI) and value iteration (VI) are the mainstream iterative, model-based, offline ADP methods. In practice, however, mathematical models of the system are generally unavailable. To achieve model-free control without identification schemes, an online dual-network action-dependent heuristic dynamic programming method and a critic-only Q-learning approach are presented. The researchers integrate optimal control theory with artificial-intelligence-based algorithms to search for the optimal solution, as shown in the numerical simulation results. Finally, these optimal control strategies are applied to a benchmark microgrid system to demonstrate the effectiveness of the performance optimization.
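Value iteration, one of the model-based ADP methods named above, can be shown on a toy deterministic MDP. The two-state system and its rewards below are invented for illustration and have nothing to do with the paper's microgrid benchmark.

```python
def value_iteration(transitions, rewards, gamma=0.9, tol=1e-6):
    """Value iteration on a small deterministic MDP.
    transitions[s][a] gives the next state; rewards[s][a] the
    immediate reward. Iterates the Bellman optimality update
    V(s) <- max_a [ r(s,a) + gamma * V(s') ] until convergence."""
    states = list(transitions)
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(rewards[s][a] + gamma * V[transitions[s][a]]
                       for a in transitions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy 2-state system: 'stay' keeps the current state,
# 'switch' moves to the other state at a cost of -1.
transitions = {"low": {"stay": "low", "switch": "high"},
               "high": {"stay": "high", "switch": "low"}}
rewards = {"low": {"stay": 0.0, "switch": -1.0},
           "high": {"stay": 1.0, "switch": -1.0}}
V = value_iteration(transitions, rewards)
print(V)
```

Here the fixed point is V(high) = 1/(1-0.9) = 10 and V(low) = -1 + 0.9*10 = 8. Q-learning, the model-free counterpart used in the paper, performs the analogous update on Q(s, a) from observed transitions without needing the `transitions` model.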
