Scheduling Strategies for Business Process Applications in Cloud Environments

Kahina Bessai (Centre for Research in Computing (CRI), University of Paris 1 Panthéon-Sorbonne, Paris, France), Samir Youcef (LORIA-INRIA-UMR 7503, University of Lorraine, Nancy, France), Ammar Oulamara (LORIA-INRIA-UMR 7503, University of Lorraine, Nancy, France), Claude Godart (LORIA-INRIA-UMR 7503, University of Lorraine, Nancy, France) and Selmin Nurcan (Centre for Research in Computing (CRI), University of Paris 1 Panthéon-Sorbonne, Paris, France)
Copyright: © 2013 | Pages: 14
DOI: 10.4018/ijghpc.2013100105

Abstract

The Cloud computing paradigm is adopted for several advantages, such as reducing the cost incurred when using a set of resources. However, despite the many proven benefits of using a Cloud infrastructure to run business processes, a major problem can still compromise its success: the lack of guidance for choosing between multiple offerings. Moreover, when running business processes it is difficult to automate all tasks, and several, often conflicting, objectives must be taken into account. To this end, the authors propose a set of scheduling strategies for business processes in Cloud contexts. More precisely, the authors propose three complementary bi-criteria approaches for scheduling business processes on distributed Cloud resources, taking into account the Cloud's elastic computing characteristic, which allows users to allocate and release compute resources (virtual machines) on demand, and its pay-as-you-go business model. It is therefore reasonable to assume that the number of virtual machines is infinite while the number of human resources is finite. Experimental results demonstrate that the proposed approaches perform well.
Article Preview

1. Introduction

With the advent of the Cloud computing paradigm, IT organizations need to think in terms of managing services rather than managing devices. The term cloud computing (Buya et al., 2009) encompasses different architectures and services but, in general, refers to the delivery of hardware, storage resources and/or software over the Internet. It can be defined as a cluster of distributed computers provisioning, on demand, computational resources or services to remote users over a network. The advantages of using an infrastructure such as cloud computing are undeniable, and the most significant follow from its three main features (Mell et al.; Buya et al., 2009; Mladen et al., 2008): (i) on-demand self-service, meaning that users can request and manage their own computing resources (i.e., resources are made available to users as needed); (ii) pay-as-you-go, meaning that users are billed according to their actual usage; and (iii) elasticity, reflecting the fact that users draw from a pool of computing resources, usually in remote data centers, according to the needs of their applications.

Cloud computing has therefore quickly changed the way compute resources can be used, allowing users to access them on the fly according to their application's needs. For example, Amazon EC2 provides a Web service through which users can boot an Amazon Machine Image to run any desired software. However, despite the proven benefits of using the Cloud to execute business processes, users lack guidance for choosing between different offerings while taking into account several, often conflicting, objectives.
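The conflict between offerings can be made concrete with a toy calculation. The sketch below uses hypothetical VM offerings (the names, speeds, and hourly prices are illustrative assumptions, not figures from the article): a faster machine shortens execution time but raises total cost, so neither objective dominates.

```python
# Hypothetical offerings: (name, relative speed, price per hour).
# These values are illustrative assumptions, not real EC2 prices.
OFFERINGS = [
    ("small",  1.0, 0.10),
    ("medium", 2.0, 0.25),
    ("large",  4.0, 0.60),
]

def evaluate(offering, base_hours):
    """Return (execution time in hours, total cost) for a task that
    needs `base_hours` on the slowest machine (speed 1.0)."""
    name, speed, price = offering
    hours = base_hours / speed
    return hours, hours * price

for off in OFFERINGS:
    t, c = evaluate(off, base_hours=8.0)
    print(f"{off[0]:6s} time={t:4.1f}h cost=${c:.2f}")
```

For an 8-hour base workload, the small offering finishes in 8 h for $0.80 while the large one finishes in 2 h for $1.20: time and cost pull in opposite directions, which is why a bi-criteria treatment is needed.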

Moreover, most existing workflow matching and scheduling algorithms (Zhao et al., 2009) consider only environments in which the number of resources is assumed to be bounded. In distributed systems such as Cloud computing, however, this assumption runs counter to the usefulness of such systems. Indeed, the “illusion of infinite resources” is the most important feature of Clouds (WfMC, 1999; Buya et al., 2009), which means that users can request, and are likely to obtain, sufficient resources for their needs at any time. In addition to this characteristic, a Cloud computing environment provides several advantages that distinguish it from other computing environments: (i) computing resources can be elastically scaled on demand (i.e., the number of resources used to execute a given workflow can be changed at runtime); (ii) computing resources are exposed as services, thereby providing a standardized interface. Moreover, unlike scientific workflows, whose processing is generally fully automated, it is difficult to automate all the tasks of a business process. Indeed, certain tasks require validations that cannot be automated because they are subject to human intervention.

Furthermore, although the literature offers efficient algorithms for allocating and scheduling workflow tasks on heterogeneous resources, such as those proposed in the grid computing context (Tordson et al., 2012; Breitgand et al., 2011; Van den Bossche et al., 2010; Van den Bossche et al., 2011), they usually assume a bounded number of compute resources on the one hand and, on the other hand, do not consider the human resource dimension. The existing approaches are therefore not appropriate for matching and scheduling business processes in a Cloud computing environment.
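The asymmetry described above — virtual machines treated as unbounded, human resources as a finite pool — can be sketched with a minimal greedy scheduler. This is a hedged illustration under our own simplifying assumptions (independent tasks, identical VMs billed per hour at a hypothetical rate, one human per human task), not the authors' actual bi-criteria algorithms.

```python
import heapq

VM_PRICE_PER_HOUR = 0.10  # hypothetical pay-as-you-go rate

def schedule(tasks, n_humans):
    """Greedy list scheduling under the article's resource model:
    automated tasks each start at time 0 on a fresh VM (VMs are
    assumed unbounded), while human tasks must wait for one of
    `n_humans` workers. Each task is (id, duration, needs_human).
    Returns (makespan, total VM cost)."""
    humans = [0.0] * n_humans          # min-heap of times each human frees up
    heapq.heapify(humans)
    makespan, vm_hours = 0.0, 0.0
    for _, duration, needs_human in tasks:
        if needs_human:
            start = heapq.heappop(humans)   # earliest available human
            finish = start + duration
            heapq.heappush(humans, finish)
        else:
            finish = duration               # fresh VM, starts immediately
        vm_hours += duration                # every task also occupies a VM
        makespan = max(makespan, finish)
    return makespan, vm_hours * VM_PRICE_PER_HOUR

tasks = [("t1", 2.0, False), ("t2", 3.0, True),
         ("t3", 1.0, True), ("t4", 4.0, False)]
print(schedule(tasks, n_humans=1))
```

Even in this toy form, the finite human pool, not the VM count, is what serializes work: adding VMs never shortens the human tasks' queue, which is the scheduling tension the article addresses.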
