Green Routing Algorithm for Wired Networks

Akshay Atul Deshmukh, Mintu Jothish, K. Chandrasekaran
Copyright © 2015 | Pages: 14
DOI: 10.4018/IJGC.2015070102

Abstract

With the increase in uses and users of the internet, there has been an enormous increase in the data transferred and the power consumed in internet networks. As internet penetration extends into developing areas of the world, internet traffic will grow substantially. The authors propose an algorithm that takes into consideration the temperature of servers and the bottlenecks in the network by modifying the Link State Packets of the OSPF protocol. Using these parameters, the routing table is updated with a modified Dijkstra's algorithm. This approach differs from previous algorithms, which switch network components such as links and routers to sleep mode or reduce the transfer rate on links; many of those algorithms require a centralized system and do not scale well. Also, unlike many previous studies, the authors' algorithm requires little modification to the OSPF protocol prevalent in networking today. Their algorithm demonstrates that performance and energy conservation can coexist.

Introduction

The Internet is a vast expanse of interconnected networks with numerous nodes, which exchange volumes of information among themselves. Many hardware components, such as routers, bridges, gateways, firewalls and switches, have been incorporated into the Internet to facilitate the interconnection of networks at various levels. The need to deliver packets from various sources to their respective destinations gave rise to complex algorithms that manage and direct packet traffic. These algorithms form the basis of routing, the lifeline of the Internet.

The term routing refers to the selection of the best path between a source and a destination for the efficient transport and delivery of packets. Routing algorithms are implemented in the network layer of the five-layer model. They maintain a routing table that stores the routes to various destination nodes. The inclusion of more networks, especially those constituting user-centric environments, has paved the way for the development of efficient multi-hop and mobile routing algorithms that find the shortest possible paths to destination nodes.
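To make the routing-table computation concrete, the following is a minimal sketch (not the authors' implementation) of how Dijkstra's algorithm, the shortest-path computation underlying OSPF, can populate a routing table of (cost, next hop) entries. The graph representation and function name are illustrative assumptions.

```python
import heapq

def dijkstra_routing_table(graph, source):
    """Compute the shortest-path cost and next hop from `source` to every
    reachable node.

    `graph` maps each node to a dict of {neighbour: link_cost}.
    Returns {destination: (cost, next_hop)} -- the routing-table entries.
    """
    dist = {source: 0}
    next_hop = {}
    visited = set()
    # Each heap entry carries the first hop taken on the candidate path.
    heap = [(0, source, None)]
    while heap:
        cost, node, hop = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            next_hop[node] = hop
        for nbr, weight in graph[node].items():
            new_cost = cost + weight
            if nbr not in dist or new_cost < dist[nbr]:
                dist[nbr] = new_cost
                # Leaving the source, the first hop is the neighbour itself.
                heapq.heappush(heap, (new_cost, nbr, nbr if hop is None else hop))
    return {d: (dist[d], next_hop[d]) for d in next_hop}
```

For example, with links A-B (cost 1), B-C (cost 2) and A-C (cost 4), the table at A routes traffic for C through B at total cost 3 rather than over the direct, more expensive link.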

Although various modern routing algorithms implement efficient resource management schemes, these schemes neglect energy conservation at the nodes. Swift technological advances have broadened human access to the Internet. The requirement to manage the life of data servers efficiently has pushed the need for green routing algorithms that are energy efficient and thus reduce the carbon footprint.

Intermediate routers or workstations consume a certain amount of energy to process packets in the network traffic. This electrical energy is converted to heat emissions in accordance with the Law of Conservation of Energy.

Numerous measures have been devised, tested and implemented in these intermediate nodes to reduce heat output and hence reduce the reliance on cooling systems. In distributed networks, load balancing techniques allot processes to different systems based on each system's threshold in a multiprocessor environment. There have also been advances in the processor industry, both in processor architectures and in their placement within a computer system. Various new processor chips generate less heat, which has the added benefit of increasing the longevity of the computer system. These innovations carry maintenance costs, which can be reduced if the routing algorithms in use are more eco-friendly in their implementation.

In 2010, electricity consumption by global data centers accounted for about 1.1% to 1.5% of global electricity use. At the same time, data centers consumed between 1.7% and 2.2% of the total electricity used in the United States (Koomey, 2011). Between 2000 and 2005, electricity use by United States data centers nearly doubled, and it increased by approximately 36% between 2005 and 2010. In 2012, power consumption by data centers globally grew by 19%. Despite some recent gains in efficiency, data centers remain a substantial and growing energy consumer. Industry analysts expect data center energy consumption to grow at a rate of more than 9% per year through 2020 (from about 200 trillion end-use BTUs in 2008 to 600 trillion end-use BTUs in 2020) (Energy Star, 2012; Choi, 2009).

Every server has a cooling mechanism to dissipate the heat it generates, and this mechanism takes up a major portion of the server's energy consumption. If a server's workload is adjusted so that it operates less, it in turn needs less cooling; reducing the cooling requirement of servers can thus bring energy consumption down drastically. Location also matters: at a warmer site, the power consumed to cool a server is greater than at a cooler site, so siting servers and data centers in cooler locations saves energy.

Most routing protocols today emphasize taking a shorter path (fewer hops) to deliver a quicker transmission. But this comes at the cost of higher energy consumption, and it also burdens particular servers with many requests. In such situations, the shortest route is not the optimal route with respect to energy consumption.
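One way to capture this trade-off is to fold temperature and congestion into the link cost that the shortest-path computation minimizes, so that hot routers and bottlenecked links are penalized. The sketch below is illustrative only: the weighting factors `alpha` and `beta` and the function name are assumptions, not values from the paper.

```python
def green_cost(base_cost, router_temp_c, link_utilization,
               alpha=0.05, beta=2.0):
    """Hypothetical energy-aware link cost.

    Scales the base OSPF cost by the downstream router's temperature
    (degrees Celsius) and the link's utilization (a fraction in [0, 1]),
    so hotter routers and more congested links make a path less attractive.
    `alpha` and `beta` are illustrative tuning knobs.
    """
    return base_cost * (1 + alpha * router_temp_c + beta * link_utilization)
```

Under this cost, a cool and lightly loaded link keeps roughly its base cost, while a hot, congested link becomes several times more expensive, steering traffic away from it even when it lies on the hop-count-shortest path.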
