Cost Optimization for Dynamic Content Delivery in Cloud-Based Content Delivery Network

S. Sajitha Banu (Mohamed Sathak Engineering College, India) and S. R. Balasundaram (National Institute of Technology, Tiruchirappalli, India)
Copyright: © 2021 |Pages: 15
DOI: 10.4018/JITR.2021100102


Cloud computing is a technology for storing, processing, and managing data virtually over remote data centers through the internet. Owing to the rapid growth of cloud services, content distribution networks broadly use them to deliver data all over the globe. Because data is generated rapidly, delivering it over the network is a challenging problem. As the number of replicas increases, the storage cost increases as well; this is a major issue in cloud-based content delivery networks. To overcome this issue, the authors developed a new model for a cloud-based CDN with a cost optimization algorithm, STLM (storage-traffic-latency minimization), which reduces the number of replicas in order to optimize the cost of storage and the cost of content delivery. The authors compared the proposed STLM algorithm with existing algorithms through simulation with YouTube e-learning data retrieval. The proposed algorithm places content efficiently on geographically dispersed proxy servers in the cloud to meet quality of service (QoS) and quality of experience (QoE) requirements.
Article Preview


A Content Delivery Network is a collection of millions of computers connected to the internet, in which a finite number of servers provided by a service provider are interconnected across the globe. Since multiple servers are used, the load must be distributed among them, which substantially helps consumers get quality content faster. A Content Delivery Network comprises an origin server and a finite number of proxy servers. In a CDN, content is created on the origin server and its duplicates are stored on the proxies, which are located at fixed positions around the globe. When a client requests a particular video on the internet, the request is first transmitted to the geographically local proxy server, which checks whether the content is available locally. If it is unavailable, the proxy server fetches the video content from the origin server.
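The request flow above can be illustrated with a minimal sketch. The class names (`OriginServer`, `ProxyServer`) are illustrative only and do not come from the paper; the proxy keeps a simple in-memory cache and falls back to the origin on a miss:

```python
class OriginServer:
    """Holds the authoritative copy of every content item."""
    def __init__(self, contents):
        self.contents = dict(contents)

    def fetch(self, content_id):
        return self.contents[content_id]


class ProxyServer:
    """Geographically local proxy that caches replicas of origin content."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def request(self, content_id):
        # Cache hit: serve the local replica directly.
        if content_id in self.cache:
            return self.cache[content_id], "hit"
        # Cache miss: pull from the origin and keep a replica locally.
        data = self.origin.fetch(content_id)
        self.cache[content_id] = data
        return data, "miss"


origin = OriginServer({"video42": b"...video bytes..."})
proxy = ProxyServer(origin)
_, first = proxy.request("video42")   # first request misses, pulls from origin
_, second = proxy.request("video42")  # replica is now served locally
```

Real CDNs add eviction policies, consistency checks, and request routing, none of which are modeled here.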

Earlier Content Delivery Networks (CDNs) such as Akamai and Mirror Image hosted a huge number of data centers and edge servers across various parts of the world. Unfortunately, the costs incurred in building and hiring traditional CDNs are extremely high. Consequently, many networks have migrated to the cloud to gain advantages such as lower cost, availability, and flexibility. The administration and video distribution facilities are handled by cloud providers, which offer cloud services to store web data from content suppliers, so the storage cost can be minimized. Meisong Wang et al. (2015) gave an overview of the research dimensions and state of the art in cloud CDNs, discussing several challenges. Cloud CDNs (CCDNs) lessen the major restrictions of traditional CDNs: compared with conventional CDNs, they offer improved scalability, flexibility, elasticity, reliability, and safety against threats and attacks, along with lower costs for data storage and distribution (Mohammad, 2017).

A cloud-based CDN additionally emphasizes delivering video data to users within a comparatively short period. To keep latency as small as possible, the content should be made available on a proxy server that is geographically local to the user; retrieving data from such a server minimizes the latency and traffic costs of content delivery. However, content providers are charged high fees for using the storage infrastructure of cloud suppliers. When data is replicated across virtual data centers, the traffic and latency costs are reduced, but the storage cost grows (Xinjie Guan et al., 2014). To address this issue, further investigations of storage pricing are needed.

Algorithms that improve content distribution efficiency by reducing costs have emerged in the recent past. They can be categorized into Latency-Minimization (LM) algorithms (Kangasharju et al., 2002), Traffic-Minimization (TM) algorithms (Borst et al., 2010), and Traffic-Latency-Minimization (TLM) algorithms (Xinjie Guan et al., 2014). LM algorithms minimize latency cost, while TM algorithms reduce traffic cost. TLM algorithms were then proposed to reduce both traffic and latency costs simultaneously. At this point, the real problem of extensive storage demand comes into the picture. To redress this issue, storage cost optimization must be carried out in addition, yielding a reduced mixed storage-traffic-latency cost. We therefore propose an efficient optimization algorithm, the Storage-Traffic-Latency-Minimization (STLM) algorithm, to solve this problem.
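The mixed storage-traffic-latency trade-off can be sketched as follows. This is not the paper's STLM algorithm (which is defined later in the article); it is a generic greedy replica-placement sketch under assumed per-unit prices, where every function, variable, and cost weight is an illustrative assumption:

```python
def mixed_cost(replicas, demand, dist, s_price, t_price, l_price):
    """Mixed cost of one placement: storage for each replica, plus
    traffic and latency proportional to demand and distance to the
    nearest replica. All price parameters are illustrative.

    replicas: set of proxy ids holding a copy of the content
    demand:   {proxy_id: number of requests}
    dist:     {(proxy_id, replica_id): distance}
    """
    storage = s_price * len(replicas)
    transfer = 0.0
    for proxy, requests in demand.items():
        # Each proxy serves its demand from the nearest replica.
        nearest = min(dist[(proxy, r)] for r in replicas)
        transfer += requests * nearest * (t_price + l_price)
    return storage + transfer


def greedy_placement(proxies, demand, dist, s_price, t_price, l_price):
    """Greedily add replicas while doing so lowers the mixed cost."""
    placed = {min(proxies)}  # deterministic seed replica
    best = mixed_cost(placed, demand, dist, s_price, t_price, l_price)
    improved = True
    while improved:
        improved = False
        for cand in proxies - placed:
            cost = mixed_cost(placed | {cand}, demand, dist,
                              s_price, t_price, l_price)
            if cost < best:
                best, placed, improved = cost, placed | {cand}, True
    return placed, best
```

With two distant proxies that both have demand, the greedy step adds a second replica because the saved traffic and latency outweigh the extra storage price; with cheap bandwidth or expensive storage it would stop at one replica instead.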

The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 formulates the problem statement. Section 4 describes push-pull assisted dynamic content delivery. Section 5 presents the proposed model and the STLM algorithm. Section 6 analyzes the simulation experiment and compares the results of the proposed STLM algorithm with those of existing algorithms. Section 7 concludes and discusses future enhancements.
