Dynamic Cache Management of Cloud RAN and Multi-Access Edge Computing for 5G Networks

Deepika Pathinga Rajendiran, Yihang Tang, Melody Moh
Copyright: © 2020 | Pages: 33
DOI: 10.4018/978-1-7998-1152-7.ch006

Abstract

Using a cache to improve efficiency and to save cost in a computer system has long attracted researchers, including those working on cellular network systems. The first part of this chapter focuses on adaptive cache management schemes for cloud radio access networks (CRAN) and multi-access edge computing (MEC) in 5G mobile technologies. Experimental results obtained with CloudSim show that the proposed adaptive algorithms are effective in increasing the cache hit rate, guaranteeing QoS, and reducing algorithm execution time. In the second part of this chapter, a new cache management algorithm using the Zipf distribution to address dynamic input is proposed for the CRAN and MEC models. A performance test run in iFogSim shows the improvement made by the proposed algorithm over the original versions. This work contributes to the support of 5G for IoT by enhancing CRAN and MEC performance; it also shows how novel caching algorithms can resolve the unbalanced input load caused by changing distributions of the input traffic.

Introduction

The last decade has witnessed the rapid rise of intelligent mobile devices and Internet of Things (IoT) systems. According to research by the United Nations' panel on global sustainability, by 2050, 70% of the world's population will live in urban areas, which cover only 2% of the Earth's surface yet are responsible for 75% of greenhouse gas emissions (United Nations, 2012). The concept of Smart Communities therefore calls for solutions and practices that advance the development and sustainability of urban environments. In particular, Information and Communication Technologies (ICT) will provide the necessary backbone, not only for maintaining existing services but also for enabling new ones; IoT is among the most useful and promising ICT technologies for this purpose (Casana & Redondi, 2017).

The rapid growth of IoT also raises new challenges in resource-constrained wireless networks. Fifth Generation (5G) mobile networks have been proposed to provide timely connectivity for these IoT and mobile devices in order to support the mounting services they deliver (Su & Moh, 2018). 5G relies on many fast-growing technologies, including Cloud Radio Access Networks (CRAN), Multi-Access Edge Computing (MEC), Millimeter Wave, and Massive Multiple-Input Multiple-Output (MIMO). Among them, CRAN and MEC utilize cloud and virtualization models, making 5G systems flexible, scalable, and cost-effective.

Cloud Radio Access Networks (CRAN) apply the cloud computing model to support the rapid growth of IoT devices. CRAN is a centralized, virtualized architecture proposed for 5G networks that achieves better utilization of hardware and software resources.

Edge computing has become a standard way of delivering services more efficiently and of centralizing resources so as to reduce costs for service providers and clients. Based on research conducted by Grand View Research, the global edge computing market size is projected to reach USD 3.24 billion by 2025 (“Edge Computing Market Worth $3.24 Billion By 2025, CAGR: 41.0%,” n.d.). For a market with such huge potential, how to make efficient use of it becomes a crucial question. Edge computing reduces latency because data does not have to traverse the network to a data center or cloud for processing. However, this also means that in some cases data has to be processed or stored without the support of a centralized data center, which calls for a way to use the limited edge resources smartly.

One of the ways to manage resources efficiently in the network is caching. A cache is a component that stores data so that future requests for that data can be served faster; it is a promising way to moderate the burden of traffic load (Ma et al., 2018). Hierarchical memory systems usually use caches to reduce access delay for time-critical applications. Caches have been introduced in both CRAN and MEC to increase the speed of cellular network services as well as to improve resource utilization (Hou, Feng, Qin, & Jiang, 2017).
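To make the idea concrete, the following minimal Python sketch (not the chapter's algorithm) shows a classic least-recently-used (LRU) cache serving repeated requests; the item names, capacity, and hit-rate calculation are illustrative assumptions only.

from collections import OrderedDict

class LRUCache:
    """A minimal LRU cache sketch; capacity and keys are hypothetical."""
    def __init__(self, capacity):
        self.capacity = capacity          # maximum number of cached items
        self.store = OrderedDict()        # keeps items in access order

    def get(self, key):
        """Return the cached item (a hit) or None (a miss)."""
        if key not in self.store:
            return None
        self.store.move_to_end(key)       # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        """Insert an item, evicting the least recently used one if full."""
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry

# Usage: serve a small request trace and report the hit rate
cache = LRUCache(capacity=2)
requests = ["video_a", "video_b", "video_a", "video_c", "video_a"]
hits = 0
for name in requests:
    if cache.get(name) is not None:
        hits += 1
    else:
        cache.put(name, name)             # fetch from the cloud, then cache it
print(f"cache hit rate: {hits / len(requests):.2f}")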

The cost of these high-speed, carrier-grade cache resources is, however, extremely high, so their sizes are often limited. It is therefore necessary to manage these cache resources efficiently. In addition, as the traffic load changes and the required Quality of Service (QoS) varies, it is critical to make the cache management adaptive. Edge computing, unlike cloud computing, has limited computing and storage resources. As a result, which services are cached on the base station (BS) determines which application tasks can be offloaded to the edge server, thereby significantly affecting edge computing performance (Chen & Xu, 2017).
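As a hedged illustration of this point, the short Python sketch below assumes a toy model in which a task can be offloaded to the edge server only if its service is cached at the base station, and otherwise falls back to the remote cloud; the service names and latency figures are hypothetical and are not taken from the cited work.

EDGE_LATENCY_MS = 10    # assumed round-trip latency to the edge server
CLOUD_LATENCY_MS = 80   # assumed round-trip latency to the remote cloud

# Services currently held in the base station's limited cache (hypothetical)
cached_services = {"face_detection", "video_transcode"}

def dispatch(task_service):
    """Offload to the edge only if the task's service is cached on the BS."""
    if task_service in cached_services:
        return ("edge", EDGE_LATENCY_MS)
    return ("cloud", CLOUD_LATENCY_MS)

for svc in ["face_detection", "speech_to_text"]:
    target, latency = dispatch(svc)
    print(f"{svc}: served at {target}, ~{latency} ms")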

The first part of this chapter focuses on adaptive cache management to effectively utilize the limited cache resources in CRAN and in MEC; a preliminary version has been presented (Pathinga Rajendiran & Moh, 2019). The major contributions may be summarized as follows:

Key Terms in this Chapter

Minimum Guarantee: A guaranteed minimum percentage of service or cache hits is promised to users.

Edge Computing: Instead of retrieving data from the Internet, data can be accessed from nearby sources to improve the user experience.

Popularity Scoring: Since the cache size is small, only files with a high score, i.e., “popular” files, are stored in the cache. Low-score files are evicted when there is no space in the cache, as sketched below.
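A minimal Python sketch of one possible popularity-scoring policy, assuming the score is simply each file's request count; the eviction rule, cache size, and file names are illustrative, not the chapter's exact scheme.

from collections import Counter

CACHE_SIZE = 3              # assumed small cache
scores = Counter()          # popularity score per file (here, request count)
cache = set()               # files currently held in the cache

def on_request(file_name):
    """Update the file's score, then serve a hit or try to cache the file."""
    scores[file_name] += 1
    if file_name in cache:
        return "hit"
    if len(cache) < CACHE_SIZE:
        cache.add(file_name)
    else:
        # evict the lowest-scoring cached file if the new file is more popular
        victim = min(cache, key=lambda f: scores[f])
        if scores[file_name] > scores[victim]:
            cache.remove(victim)
            cache.add(file_name)
    return "miss"

for req in ["a", "b", "a", "c", "a", "d", "a"]:
    print(req, on_request(req))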

Static Input: For a static system, the number of user requests for each User Equipment is fixed and does not change over time.

Cache Distribution or Cache Hierarchy: Cache is distributed according to the service level of users. For example, users at the highest service level might enjoy the maximum cache size.

Service Level Agreement: Preferential treatment and quality of service are provided to users according to their service level.

Dynamic Input: With dynamic input, the distribution of user devices and virtual machines is unbalanced and continuously changes over time, as sketched below.
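A minimal Python sketch of how dynamic, Zipf-distributed input might be generated, assuming the popularity ranking is reshuffled periodically to mimic a changing distribution; the catalogue size, Zipf exponent, and epoch structure are assumed values, not parameters from the chapter.

import random

N_FILES = 100        # catalogue size (assumed)
ZIPF_S = 1.0         # Zipf exponent (assumed)

def zipf_weights(n, s):
    """P(rank k) proportional to 1 / k^s."""
    return [1.0 / (k ** s) for k in range(1, n + 1)]

files = [f"file_{i}" for i in range(N_FILES)]
weights = zipf_weights(N_FILES, ZIPF_S)

for epoch in range(3):
    random.shuffle(files)                             # popularity ranking drifts over time
    batch = random.choices(files, weights=weights, k=5)
    print(f"epoch {epoch}: {batch}")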
