Introduction
Cloud computing has recently emerged as a compelling paradigm for large-scale data-intensive computing. Because such applications often have massive computation requirements, and because the cloud environment is highly elastic, providers must be able to manage the available resources dynamically so that resource usage is optimized.
It is important for cloud resource managers to have scheduling strategies that not only attempt to minimize the overall completion time but also meet user-specified QoS requirements. In addition, Buyya (2009) highlighted that it is crucial to consider users' preferences in terms of their budget and deadline limitations, and noted that such scheduling policies are broadly termed market-oriented scheduling policies.
As cloud computing solutions have gained attention in commercial markets, several IT vendors have started to offer more flexible pricing. These new pricing strategies allow cloud users to customize their virtual machines with different configurations and prices according to their budget and quality of service (QoS) expectations. Unlike the traditional pricing model adopted by Amazon EC2, which charges users based on the number of predefined VM instances, this performance-based pricing charges for the computation or storage resources actually used; that is, a faster resource costs more. This model is exemplified by the pricing implemented by CloudSigma and ElasticHosts.
This pricing model mainly targets users who need a lower performance level and would prefer a proportionally scaled price. Such a scheme radically changes the revenue model of cloud computing, and an interesting challenge arises for the user: how to select a virtual resource configuration for their application so that the user's cost is minimized while the QoS requirements are still satisfied.
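To make the contrast concrete, the following toy sketch models the performance-based charge described above; the rate used is purely hypothetical, not an actual CloudSigma or ElasticHosts price. The hourly charge scales with the CPU frequency the user selects, so a user who needs only half the performance pays half the price.

```python
# Toy model of performance-based pricing: the hourly charge grows
# proportionally with the chosen CPU frequency.
# The rate below is an illustrative assumption, not a real vendor price.

def hourly_price(freq_ghz, rate_per_ghz_hour=0.05):
    """Price per hour is proportional to the selected frequency."""
    return freq_ghz * rate_per_ghz_hour

print(hourly_price(2.0))  # 0.1 -> half the frequency, half the price
print(hourly_price(4.0))  # 0.2
```

Under a fixed per-instance model, by contrast, both users would typically pay the same price for the same predefined VM type regardless of how much performance they actually need.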
We assume that users are interested in completing a job within a specified deadline on clouds that provision CPU resources with different configurations and prices. Given a submitted job, the objective of the current study is to produce a schedule that not only specifies the job-resource mapping but also determines the CPU frequency at which each fraction of the load will be executed, so that the overall cost is minimized while the deadline requirement is still met.
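A minimal sketch of the selection problem just described, for a single resource: among the available CPU frequencies, pick the cheapest one that still finishes the load before the deadline. The frequencies and per-second prices below are illustrative assumptions (with a more-than-proportional premium on faster CPUs), not values from the paper.

```python
# Hypothetical sketch: choose the cheapest CPU frequency that meets
# the deadline. Prices and frequencies are illustrative assumptions.

def cheapest_feasible(load_cycles, deadline_s, price_per_s):
    """price_per_s maps frequency (Hz) -> price per second of use.
    Returns (frequency, cost) or None if no frequency meets the deadline."""
    best = None
    for freq, price in price_per_s.items():
        finish = load_cycles / freq          # execution time at this frequency
        if finish <= deadline_s:             # deadline constraint
            cost = price * finish            # performance-based charge
            if best is None or cost < best[1]:
                best = (freq, cost)
    return best

# Faster CPUs carry a superlinear price premium in this example,
# so the slowest frequency that still meets the deadline wins:
prices = {1e9: 0.010, 2e9: 0.022, 4e9: 0.050}
print(cheapest_feasible(load_cycles=8e9, deadline_s=10, price_per_s=prices))
```

With a load of 8 GHz-seconds and a 10-second deadline, the 1 GHz option finishes in 8 seconds and is the cheapest feasible choice; tightening the deadline would force the scheduler onto a faster, more expensive frequency.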
Motivated by this, our previous work (Majid & Chuprat, 2018) proposed a scheduling algorithm named CARTDLT, which introduced a worker-selection strategy and evaluated it by simulation. In this paper, we extend that work by:
• Proposing a user cost-priority flag to enable users to set their preference;
• Aggregating the resources into fast and slow groups and determining the fraction of the load to be assigned to each resource according to the user's cost-priority preference.
The rest of this paper is structured as follows. Section 2 discusses related work. Section 3 introduces the Real-Time Divisible Load Theory (RT-DLT) concepts and the scheduling framework. Section 4 presents the proposed market-oriented RT-DLT scheduling algorithm. Section 5 describes the simulation setup and analyzes the results. Finally, Section 6 presents the findings and conclusions.
Background
Divisible Load Scheduling (DLS) has been studied extensively because of its tractability and realism (Ghanbari & Othman, 2014; Robertazzi, 2003). DLS uses the concept of task parallelization and is mainly applied to scheduling in parallel and distributed computing. It is based on the Divisible Load Theory (DLT), in which the computation can be partitioned into fractions of arbitrary size, each of which can be executed independently by a processor (Sohn & Robertazzi, 1996).
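The core DLT idea can be sketched as follows: because the load is arbitrarily divisible, it can be split across processors of different speeds so that all of them finish at the same time. This minimal example ignores communication delays (which full DLT models account for), and the worker speeds are illustrative assumptions.

```python
# Minimal sketch of the DLT equal-finish-time split, ignoring
# communication delays. Worker speeds are illustrative assumptions.

def dlt_fractions(speeds):
    """Assign fraction alpha_i proportional to speed_i, so that each
    processor's execution time alpha_i / speed_i is identical."""
    total = sum(speeds)
    return [s / total for s in speeds]

speeds = [4.0, 2.0, 2.0]                  # processing rates of 3 workers
alphas = dlt_fractions(speeds)
print(alphas)                             # [0.5, 0.25, 0.25]
times = [a / s for a, s in zip(alphas, speeds)]
print(times)                              # all equal: every worker finishes together
```

The fastest worker receives the largest fraction, and since every processor finishes simultaneously, no capacity is left idle before the job completes.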