Fog Caching and a Trace-Based Analysis of its Offload Effect

Marat Zhanikeev (Tokyo University of Science, School of Management, Tokyo, Japan)
DOI: 10.4018/IJITSA.2017070104

Abstract

Many years of research on Content Delivery Networks (CDNs) have produced a number of effective methods for caching content replicas and forwarding requests. Recently, however, CDNs have begun migrating aggressively to clouds. Clouds present a new kind of distribution environment, as each location can support multiple caching options that vary in the level of persistence of stored content. The subclass of clouds located at the network edge is referred to as fog clouds. Fog clouds help by allowing CDNs to offload popular content to the network edge, closer to end users. However, because fog clouds are extremely heterogeneous and vary widely in network and caching performance, traditional caching technology is no longer applicable. This paper proposes a multi-level caching technology specific to fog clouds. To deal with the heterogeneity problem and, at the same time, avoid centralized control, this paper proposes a function that allows CDN services to discover local caching facilities dynamically, at runtime. Using a combination of synthetic models and a real measurement dataset, this paper analyzes the efficiency of offload both at the local level of individual fog locations and at the global level of the entire CDN infrastructure. Local analysis shows that the new method can reduce inter-cloud traffic by a factor of 16 to 18 while retaining less than 30% of total content in a local cache. Global analysis further shows that, based on existing measurement datasets, centralized optimization is preferable to distributed coordination among services.

Introduction

Traditional Content Delivery Networks (CDNs) (Buyya, 2008) have recently started migrating to clouds (Frank, 2013). Traditional CDNs implement a range of caching (Sivasubramanian, 2007) and request routing (Chen, 2005) techniques in order to optimize the Quality of Service (QoS) of content delivery. QoS in this paper is defined as the ability to sustain a given target data rate (throughput) from a given content replica to each of its end users (simply “users” from this point on). Network congestion at a CDN node directly and negatively affects QoS. Indeed, the whole point of a distributed CDN is to spread replicas over a global network of storage/server nodes in such a way that the number of users served by each node remains below its congestion point. For more details on the QoS aspects of content delivery, see the recent study in (Zhanikeev, 2015).
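The QoS notion above reduces to a simple capacity check: a node sustains QoS while the aggregate demand of its users stays below the node's congestion point. The following is a minimal sketch of that check; the function name, capacities, and rates are illustrative assumptions, not values from the paper.

```python
def sustains_qos(num_users: int, target_rate_mbps: float,
                 node_capacity_mbps: float) -> bool:
    """True while the node can serve every user at the target rate,
    i.e. aggregate demand stays below the congestion point."""
    return num_users * target_rate_mbps <= node_capacity_mbps

# A hypothetical 1 Gbps node serving 100 users at 8 Mbps each is below
# its congestion point; 150 such users push it past it.
print(sustains_qos(100, 8.0, 1000.0))  # True
print(sustains_qos(150, 8.0, 1000.0))  # False
```

Under this model, replica placement is exactly the act of keeping every node on the left-hand side of that inequality.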

After the migration to clouds, most of these techniques have to be revisited and, in some cases, replaced with cloud-compatible alternatives. For example, traditional CDNs would normally treat each location as a single node in a distributed network of replicas (Sivasubramanian, 2007). While this approach remains applicable in clouds at the global scale, cloud platforms can offer multiple kinds of caching at each location, which leads to more complex structures representing the entire storage and distribution network. Recent literature discusses the MiniCache technology (Kuenzer, 2013), which individual Virtual Machines (VMs) can use to maintain an internal cache on a local physical hard-disk. With multiple virtual caches per hard-disk and multiple Physical Machines (PMs), each location can maintain a large number of independent replicas. The Local Hardware Awareness (LHA) technology (Zhanikeev, May 2015), used as part of the proposal in this paper, is an alternative to MiniCache. The difference in LHA is that, apart from VM- and PM-based caching, it also offers a new DC-based (DC: Data Center) option, which can help multiple PMs and VMs share a single replica at a given location.
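The multi-level structure described above can be sketched as a choice among cache options that differ in how widely a replica is shared. The sketch below is a hypothetical illustration under stated assumptions: the class names, the selection policy, and the `pick_cache` helper are inventions for this example, not the paper's LHA interface.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CacheLevel(Enum):
    VM = "vm"   # private to one virtual machine (MiniCache-style)
    PM = "pm"   # shared by the VMs on one physical machine
    DC = "dc"   # shared by all PMs/VMs at the location (LHA's DC-based option)

@dataclass
class CacheOption:
    level: CacheLevel
    free_bytes: int
    persistent: bool

def pick_cache(options: list, size: int,
               share_widely: bool) -> Optional[CacheOption]:
    """Pick the most widely shared cache that fits the replica when
    sharing is wanted; otherwise prefer the most private cache."""
    if share_widely:
        order = [CacheLevel.DC, CacheLevel.PM, CacheLevel.VM]
    else:
        order = [CacheLevel.VM, CacheLevel.PM, CacheLevel.DC]
    by_level = {opt.level: opt for opt in options}
    for level in order:
        opt = by_level.get(level)
        if opt is not None and opt.free_bytes >= size:
            return opt
    return None
```

Under such a policy, popular content destined for many users would land in the DC-level cache, so that one replica serves every VM at the location, while VM-private working data stays at the VM level.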

The central point of this paper is that traditional clouds at the network core are aware of the cloudification process and strive to extend their services to devices at the network edge. A single (virtual) cloud that aggregates DCs from multiple cloud providers is referred to as a federated cloud. Clouds that aggregate either both DCs and edge devices, or edge devices only, are referred to as fog clouds. Cloud-based CDNs, by the nature of their service, strive to build fog clouds consisting of a relatively small number of large DCs and a large number of small clouds at the network edge. Given the variety and scale of devices at the network edge, managing such CDNs becomes a challenge. This paper discusses the recent literature on the subject (Manco, 2014; Frank, 2013) and shows how LHA-compatible fog clouds can drastically improve efficiency in such CDNs.
