An Ant-Colony-Based Meta-Heuristic Approach for Load Balancing in Cloud Computing

Santanu Dam (Future Institute of Engineering and Management, India), Gopa Mandal (Kalyani Government Engineering College, India), Kousik Dasgupta (Kalyani Government Engineering College, India) and Parmartha Dutta (Visva-Bharati University, India)
DOI: 10.4018/978-1-5225-3129-6.ch009


This book chapter proposes the use of Ant Colony Optimization (ACO), a computational intelligence technique, for balancing the loads of virtual machines in cloud computing. Computational intelligence (CI) studies the design of bio-inspired artificial agents that search for near-optimal solutions; its central goal is to understand the principles that allow intelligent behavior observed in nature to be mimicked in artificial systems. The basic idea of ACO is to design an intelligent multi-agent system inspired by the collective behavior of ants; from the perspective of operations research, it is a meta-heuristic. Cloud computing is an emerging technology that enables applications to run on virtualized resources in a distributed environment. Nevertheless, some problems still need to be addressed, and load balancing is among them. The proposed algorithm balances load and optimizes response time by distributing the dynamic workload evenly across the entire system.
Chapter Preview


Cloud computing is an entirely Internet-based approach in which all applications and files are hosted on a cloud that delivers applications and services. It is one of the most rapidly emerging technologies and provides a standard for large-scale distributed and parallel computing: a framework that enables applications to run on virtualized resources accessed through common network protocols and standards. It provides computing and infrastructural resources and services in a highly flexible manner that can be scaled up or down according to end-user demand. The exponential growth of the Internet over the last decade has given cloud computing a solid platform on which to spread, providing virtualized hardware and software infrastructure over the Internet. The cloud uses high-speed Internet connections to disperse jobs from a local or private PC to remote PCs or data centers. Computing services offered by a cloud service provider may be used by individuals or industry from anywhere in the world. The cloud's on-demand service, coupled with its pay-as-you-go model, has attracted more and more users seeking better utility computing. Another reason companies and end users are attracted is rapid provisioning and deprovisioning, which reduces capital cost. Ensuring QoS (better and faster service within a stipulated time) and meeting end users' resource demands in good time are among the main challenges for a cloud service provider. Gartner defines cloud computing as: "A style of computing where massively scalable IT-related capabilities are provided as a service across the Internet to multiple external customers using internet technologies" ("Gartner Highlights Five Attributes of Cloud Computing," accessed 13 Dec 2013).

As per Prerakmody ("Cloud Computing," online, accessed 9 Jan 2014), cloud computing caters to the following needs:

  1. Dynamism: Cloud computing provides dynamism, allowing resources to be scaled up and down on demand, as and when required.

  2. Abstraction: Cloud computing provides abstraction to end users, who need not concern themselves with the OS, plug-ins, web security, or the software platform.

  3. Resource Sharing: Cloud computing provides resource sharing, which allows optimum utilisation of resources in the cloud.

A cloud service provider (CSP) makes these services available as computing, software, and hardware as a service, and it is the sole responsibility of the CSP to ensure QoS. If the features mentioned above are maintained properly, cloud computing can be said to have a glorious future in the coming decades. But many problems still need to be resolved, and load balancing is one of them. Load balancing can be defined as distributing a dynamic workload evenly across multiple nodes in a distributed environment. It also needs to take into account two major issues: resource provisioning (or allocation) and task scheduling.

Load balancing is an essential task in a cloud computing environment for achieving maximum utilization of resources. Load balancing algorithms may be static or dynamic, and centralized or distributed, each with its pros and cons. Static schemes are easy to implement and monitor but fail to model the heterogeneous environment of the cloud; dynamic algorithms are harder to implement but are best suited to heterogeneous environments. In a centralized algorithm, all allocation and scheduling decisions are made by a single node, whereas in a distributed approach the load balancing algorithm is executed jointly by all nodes in the system, which interact continuously among themselves to achieve balance. The advantage of a distributed algorithm is better fault tolerance, at the cost of a higher degree of replication. Distributed load balancing can take two forms: cooperative and non-cooperative.
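The static/dynamic distinction above can be sketched in a few lines of Python. This is an illustrative toy, not the chapter's algorithm: round-robin (static) assigns tasks cyclically without consulting load, while a least-loaded policy (dynamic) inspects current load before each assignment.

```python
import itertools

def round_robin(tasks, vms):
    """Static scheme: assign tasks cyclically, ignoring current load."""
    cycle = itertools.cycle(range(len(vms)))
    for t in tasks:
        vms[next(cycle)].append(t)
    return vms

def least_loaded(tasks, vms):
    """Dynamic scheme: each task goes to the currently lightest VM."""
    for t in tasks:
        min(vms, key=lambda q: sum(q)).append(t)
    return vms

# Five tasks with uneven costs, dispatched to three VM queues.
tasks = [5, 1, 1, 5, 1]
static = round_robin(tasks, [[], [], []])
dynamic = least_loaded(tasks, [[], [], []])
# With heterogeneous task costs, round-robin leaves one VM with load 10
# while the dynamic policy caps the heaviest VM at load 6.
```

The gap widens as task costs grow more heterogeneous, which is why the static scheme models the cloud's heterogeneous environment poorly.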

Load balancing also helps in scheduling tasks over the different nodes of a cloud environment. Moreover, load balancing is an optimization technique: it ensures that the total workload is distributed evenly across the entire system, so that each resource performs approximately the same amount of work at any point in time. Whenever a node becomes overloaded, the excess should be absorbed by an under-loaded node. Hence, by adopting load balancing algorithms, service providers can manage their resources and maintain QoS, increasing throughput and minimizing response time.
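Since the chapter's full ACO procedure is not reproduced in this preview, the following Python sketch only illustrates the general ACO idea under stated assumptions: each VM carries a pheromone value, dispatch decisions favor VMs with high pheromone and low load, pheromone evaporates at every step, and choices that land on lightly loaded VMs are reinforced. All names and values here (`ALPHA`, `EVAPORATION`, the reinforcement rule) are illustrative assumptions, not the authors' parameters.

```python
import random

ALPHA = 1.0        # assumed pheromone-influence exponent
EVAPORATION = 0.1  # assumed fraction of pheromone lost per step

def choose_vm(pheromone, loads, rng):
    """Pick a VM with probability proportional to pheromone / (1 + load)."""
    weights = [(p ** ALPHA) / (1.0 + l) for p, l in zip(pheromone, loads)]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def dispatch(tasks, n_vms, seed=0):
    """Dispatch task costs to n_vms VMs using pheromone-biased choices."""
    rng = random.Random(seed)
    pheromone = [1.0] * n_vms
    loads = [0.0] * n_vms
    for cost in tasks:
        i = choose_vm(pheromone, loads, rng)
        loads[i] += cost
        # Evaporate everywhere, then reinforce in inverse proportion to
        # the chosen VM's new load, so under-loaded VMs attract more work.
        pheromone = [(1 - EVAPORATION) * p for p in pheromone]
        pheromone[i] += 1.0 / (1.0 + loads[i])
    return loads
```

The evaporation step is what lets the colony "forget" stale decisions, so an overloaded node gradually loses attractiveness and its excess is absorbed by under-loaded nodes, matching the neutralization behavior described above.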
