Green Cloud Computing: Site Selection of Data Centers


Haibo Wang (Texas A&M International University, USA) and Da Huo (Central University of Finance and Economics, China)
DOI: 10.4018/978-1-4666-5788-5.ch012


This chapter considers the data center site selection problem in cloud computing, with an extensive review of site selection decision models. The factors considered in site selection include economic, environmental, and social issues. After discussing the environmental impact of data centers and its social implications, the authors present a nonlinear multiple-criteria decision-making model with green computing criteria and solve it using a variable neighborhood search heuristic. The proposed model and solution methodology can be applied to other site selection problems that must account for environmental concerns, and the results illustrate both the robustness and the attractiveness of the solution approach.

1. Introduction

Cloud computing, also known as on-demand or utility computing, was introduced by Amazon in 2006. It allows firms and individuals to obtain computing power and software applications over the Internet, avoiding the expense of purchasing and maintaining their own hardware and software. Data accessed by firms and individuals are permanently stored on powerful servers in massive remote data centers and updated over the Internet by their users. While the cost of computing is reduced for users, the availability of cloud computing relies on a number of remote data centers around the country, or even all over the world for the large cloud computing service providers. According to the Emerson Network Power company (Lee & Kim, 1993), there were about 500,000 data centers around the world in 2011, and the number has been increasing dramatically in recent years due to the growing demand for cloud computing and social network media. These data centers provide critical computing infrastructure for millions of users around the world, at a cost not only in capital but also in environmental impact. Baker reported that the new data centers built by Google cost an average of $600 million each and that the electric bill at each of these data centers runs more than $20 million a year (Lenk & Rao, 1993). Some IT analysts have pointed out that the cost of power consumption in a data center might exceed the cost of its hardware in the future (Sneide, 1995).

The decision on data center location has a direct impact on the performance of cloud computing, and the criteria used to evaluate the decision involve economic, environmental, and social issues. Therefore, data center site selection problems are important but hard to model. Key environmental variables that should be considered in the decision to build a data center include (Liang & Wang, 1993):

  1. Suitable physical space
  2. Proximity to high-capacity Internet connections
  3. Affordable electricity or other alternative energy resources

In terms of physical space, geographical conditions, climate and weather, energy-saving natural features, and safety are the main factors in the decision-making process.
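The three variables above, together with economic and social criteria, are commonly aggregated with a weighted-sum score in simple multi-criteria site selection models. A minimal sketch of that idea, in which all site names, criterion weights, and scores are invented for illustration:

```python
# Hypothetical weighted-sum scoring of candidate data-center sites.
# Weights and scores below are illustrative assumptions, not real data.

CRITERIA_WEIGHTS = {
    "physical_space": 0.25,      # geography, climate, safety
    "network_proximity": 0.40,   # high-capacity Internet connections
    "energy_cost": 0.35,         # affordable or alternative energy
}

# Scores on a 0-10 scale (higher is better), per candidate site.
SITES = {
    "Site A": {"physical_space": 8, "network_proximity": 6, "energy_cost": 9},
    "Site B": {"physical_space": 6, "network_proximity": 9, "energy_cost": 5},
    "Site C": {"physical_space": 7, "network_proximity": 7, "energy_cost": 7},
}

def weighted_score(scores, weights=CRITERIA_WEIGHTS):
    """Aggregate per-criterion scores into a single site score."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(SITES, key=lambda s: weighted_score(SITES[s]), reverse=True)
for site in ranked:
    print(f"{site}: {weighted_score(SITES[site]):.2f}")
```

A weighted sum is the simplest aggregation rule; the chapter's model is nonlinear and richer, but the trade-off between criteria weights works the same way.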

Most of the early models dealt with site selection for data centers using economic criteria, where cheap land and tax incentives were important. However, early data centers were moderately small, and the power consumption of their servers was relatively low given the processors and applications of the time. The main purpose of early data centers was to provide Internet access and to be shared by a number of organizations; this type of data center was also known as an Internet data center (IDC). IDCs gained popularity during the dot-com boom. High availability is the key concern when building an IDC. Data centers are classified into four levels according to availability, where level 1 offers 99.671% availability and level 4 offers 99.995% availability (ADC Telecommunications, 2006). Higher-level data centers require more power to run the servers and clusters, along with well-designed cooling systems in the facility. Redundancy plays an important role in maintaining high availability, which requires more accurate estimates of power consumption during upfront planning. Meanwhile, the design of cooling systems has evolved over the past decade along with hardware configurations. A low-level data center is concerned with using adequate cooling equipment, while a high-level data center involves the design and structure of a raised-floor system (Karki & Patankar, 2006). The design of airflow in a high-level data center can reduce long-term power consumption, and cold-hot rack/shelf placement in the data center can be managed through job queuing systems.
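The availability percentages cited for levels 1 and 4 translate directly into expected yearly downtime, which makes the gap between the tiers concrete. The tier figures are from the text; the arithmetic is a straightforward illustration:

```python
# Expected yearly downtime implied by the availability tiers in the text.
HOURS_PER_YEAR = 24 * 365.25  # 8766 hours in an average year

def downtime_hours(availability):
    """Hours of expected downtime per year at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

tier1 = downtime_hours(0.99671)   # about 28.8 hours per year
tier4 = downtime_hours(0.99995)   # about 26 minutes per year
print(f"Level 1: {tier1:.1f} h/year, Level 4: {tier4 * 60:.0f} min/year")
```

The jump from roughly 29 hours to roughly half an hour of downtime per year is why higher-level facilities need so much more redundant power and cooling capacity.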

The recent development of distributed computing enables the creation of large data centers, while the computationally intensive applications in cloud computing require powerful processors that produce a great deal of heat during computation and demand extra cooling capacity. The environmental impact of large data centers on the cloud computing platform has attracted much attention, and green computing is the key to the problem.
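The variable neighborhood search (VNS) heuristic the chapter applies can be sketched generically for a site-selection problem: shake the incumbent solution in progressively larger neighborhoods, locally improve each trial, and restart from the smallest neighborhood on improvement. The objective function, candidate data, and parameters below are illustrative assumptions, not the authors' actual nonlinear model:

```python
import random

# Generic VNS sketch for selecting P sites from a candidate pool.
# Costs and the clustering penalty are invented placeholder data.
random.seed(0)

CANDIDATES = list(range(10))                               # candidate site indices
COST = {i: random.uniform(1.0, 10.0) for i in CANDIDATES}  # per-site cost
P = 3                                                      # sites to open

def total_cost(solution):
    # Placeholder nonlinear objective: opened-site costs plus a squared
    # penalty when opened sites are "clustered" (adjacent indices).
    penalty = sum(1.0 for a in solution for b in solution if abs(a - b) == 1)
    return sum(COST[i] for i in solution) + penalty ** 2

def shake(solution, k):
    """Neighborhood N_k: swap k opened sites for closed ones at random."""
    sol = set(solution)
    closed = [i for i in CANDIDATES if i not in sol]
    for _ in range(k):
        sol.remove(random.choice(sorted(sol)))
        sol.add(random.choice([i for i in closed if i not in sol]))
    return frozenset(sol)

def local_search(solution):
    """First-improvement search over single open/closed exchanges."""
    improved = True
    while improved:
        improved = False
        for out in sorted(solution):
            for inn in CANDIDATES:
                if inn in solution:
                    continue
                cand = frozenset(solution - {out} | {inn})
                if total_cost(cand) < total_cost(solution):
                    solution, improved = cand, True
                    break
            if improved:
                break
    return solution

def vns(k_max=3, iters=30):
    best = local_search(frozenset(random.sample(CANDIDATES, P)))
    for _ in range(iters):
        k = 1
        while k <= k_max:
            trial = local_search(shake(best, k))
            if total_cost(trial) < total_cost(best):
                best, k = trial, 1   # improvement: restart at N_1
            else:
                k += 1               # no improvement: escalate neighborhood
    return best

best = vns()
print(sorted(best), round(total_cost(best), 2))
```

The escalating neighborhoods let the search escape local optima that a single-swap local search would get stuck in, which is the property that makes VNS attractive for hard, nonlinear site-selection objectives.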

