A Comparative Analysis of Content Delivery Network and Other Techniques for Web Content Delivery

Meenakshi Gupta, Atul Garg
DOI: 10.4018/IJSSMET.2015100104


Web content delivery is based on the client-server model. In this model, all web requests for specific contents are serviced by a single web server, as the requested contents reside on only one server. With the increasing reliance on the web, the load on web servers is therefore growing, causing scalability, reliability and performance issues for web service providers. Various techniques have been implemented to handle these issues and improve the Quality of Service of web content delivery to end-users, such as clustering of servers, client-side caching, proxy server caching, mirroring of servers, multihoming and Content Delivery Network (CDN). This paper provides an analytical and comparative view of these approaches. It also compares CDN with other distributed systems such as grid, cloud and peer-to-peer computing.

1. Introduction

The World Wide Web is highly decentralized and distributed in nature. However, the traditional web service model is centralized from the perspective of a single website: all requests from end-users (clients) for a specific webpage are handled by a single web server (origin server) hosting the requested contents, as shown in Figure 1.

Figure 1.

Centralized web service model


In this model, clients that are closer to the web server can access the contents faster than clients that are farther from it. The centralized approach results in scalability, reliability and performance issues for web servers serving popular websites. A growing number of requests increases the load on the server and on the network link connecting to it. When requests exceed the server’s processing capacity or the bandwidth of its network link, access delays increase and the website may even become unavailable (Hofmann & Beaumont, 2005). Keeping this in view, various techniques have been implemented to improve the Quality of Service (QoS) of web content delivery to end-users. These techniques include increasing the capacity of servers, clustering of servers, caching at the client side and/or at proxy servers, establishing mirror servers, connecting the server through multiple Internet Service Provider (ISP) links, and outsourcing delivery to Content Delivery Network (CDN) service providers. Alongside these techniques, efforts have been made to develop algorithms for selecting a server (Gromov & Chebotareva, 2014) to satisfy an end-user’s request and for balancing the load on servers with reliability (Gupta et al., 2015) and scalability (Flavel et al., 2015) in mind, so as to improve the overall performance of the web. In (Juneja & Garg, 2012), the collective intelligence of the ant colony approach has been proposed for finding the optimal network path and balancing the load on web servers, whereas an artificial neural network approach has been used in (Amin & Garg, 2014) to model, based on web log files, the process users follow when accessing web contents.
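To make the idea of spreading load across a cluster of servers concrete, the sketch below shows one of the simplest balancing policies, round-robin selection over a pool of replica servers. This is an illustrative example only, not an algorithm from the paper; the server addresses and the `make_round_robin` helper are hypothetical.

```python
from itertools import cycle

# Hypothetical pool of replica servers hosting the same content.
SERVERS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

def make_round_robin(servers):
    """Return a selector that cycles through the replica pool,
    assigning successive requests to successive servers."""
    pool = cycle(servers)
    return lambda: next(pool)

pick = make_round_robin(SERVERS)
# Six consecutive requests are spread evenly: each replica gets two.
assignments = [pick() for _ in range(6)]
```

Real dispatchers (DNS-based or at a front-end load balancer) typically refine this with server health checks, weights proportional to capacity, or least-connections policies, but the round-robin core above is the usual starting point.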

Section 2 of this paper discusses the concept of QoS in the context of web content delivery. Different techniques to improve QoS are analyzed in Section 3, and Section 4 gives a comparative analysis of these techniques. In Section 5, CDN is compared with some other distributed systems. Section 6 concludes the paper.


2. Quality Of Service Of Web Content Delivery

The term Quality of Service has been used to express non-functional requirements for web services (Ran, 2003). It is a set of service requirements that need to be met by a network: a measure of how a particular service responds to the requester, compared with the service expected. It is an important criterion for judging the quality of web content delivery to end-users and may vary greatly because of the dynamic and unpredictable nature of the Internet.

Various web applications differ in their QoS requirements. Data applications such as file transfer are generally not delay-sensitive and can recover from packet loss via re-transmission. Media streaming applications require a fixed bandwidth, while business applications require security and transactional QoS. A best-effort service approach may therefore not be adequate for all applications. Further, different users may have different QoS requirements depending on their needs, and human patience imposes lower throughput bounds on applications such as web browsing. QoS can thus be defined as providing service differentiation and performance assurance for Internet applications (Zhao et al., 2000). In (Statovci-Halimi & Franzl, 2013), the coexistence of quality engineering mechanisms with differentiated QoS charging has been suggested to control the speed and quality of future network development.
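Service differentiation of the kind described above can be illustrated with a strict-priority scheduler: requests are tagged with an application class, and the dispatcher always serves the most delay-sensitive class first. The class names, their priority values and the `DifferentiatedQueue` helper below are hypothetical, chosen only to mirror the application types mentioned in the text.

```python
import heapq

# Hypothetical priority classes: lower number = served first.
# Streaming is most delay-sensitive; bulk file transfer tolerates delay.
PRIORITY = {"streaming": 0, "browsing": 1, "bulk-transfer": 2}

class DifferentiatedQueue:
    """Strict-priority scheduler: dequeue() always returns the
    highest-priority pending request (FIFO within a class)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order

    def enqueue(self, app_class, request):
        heapq.heappush(self._heap, (PRIORITY[app_class], self._seq, request))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = DifferentiatedQueue()
q.enqueue("bulk-transfer", "file.zip")
q.enqueue("streaming", "video-chunk-1")
q.enqueue("browsing", "index.html")
# Streaming is served before browsing, and browsing before bulk transfer,
# regardless of arrival order.
order = [q.dequeue() for _ in range(3)]
```

Production QoS mechanisms (e.g. DiffServ marking in routers) also add weighted sharing so low-priority classes are not starved, but the core idea of per-class differentiated treatment is the same.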
