Designing of Vague Logic Based Fair-Share CPU Scheduler: VFS CPU Scheduler

Supriya Raheja, Reena Dadhich, Smita Rajpal
Copyright: © 2015 |Pages: 25
DOI: 10.4018/IJFSA.2015070103


The CPU scheduler is the primary component of an operating system that supports multitasking. The fair-share CPU scheduler uses a static policy to share CPU time among the different levels of the system, and static sharing of CPU time can degrade system performance. Moreover, the fair-share scheduler does not consider the impreciseness and uncertainty associated with tasks. The objective of this work is to design a vague-logic-based fair-share scheduler (VFS). The VFS scheduler extends research in the field of fair sharing of CPU time; its functions are threefold: first, it deals with the impreciseness of tasks; second, it dynamically shares CPU time among users as well as among tasks; third, it assigns a dynamic priority to each task, which improves system performance. The VFS scheduler has three modules: VIS-DCS, a vague inference system for sharing CPU time; VIS-DP, a vague inference system for assigning a dynamic priority to each task; and a scheduling algorithm to schedule the tasks. The novelty of this approach lies in introducing a vague-logic-based CPU scheduler. Simulation results show that the VFS scheduler outperforms the fair-share scheduler.

2. State of the Art

The main work related to our research addresses the problem of static sharing of CPU time and the use of policies that are fair to users as well as to tasks.

A common criticism of the scheduling techniques discussed so far concerns the relationship between users and tasks. Fairness is one of the vital requirements of any scheduler. Early schedulers assumed that each task belonged to a different user and tried to assign equal CPU service to all tasks (Sabin et al., 1996; Nie et al., 2011); that is, they were designed to be fair in the context of tasks. However, a system usually supports various groups of related tasks, and over time it became clear that a scheduler should be fair between users as well as between tasks. Fair-share scheduling addresses this problem: for example, UNIX and other multiuser systems group together the tasks that belong to a particular user.

The fair-share scheduler permits CPU time to be shared fairly among the groups or users in a system (Stallings, 2014; Bui et al., 2010): a user's fair share is the fraction of CPU time allocated to the group of tasks belonging to that user. Consider an example where fair-share scheduling is applicable. Suppose the members of a society use one multiuser system and are divided into two groups: the society head and the secretaries. The head uses the system for important, intensive work, whereas the many secretaries use it for less intensive work such as collecting information. Because they are many, the secretaries would consume more CPU time than the head, even though the head's work is the more important. If, however, the system allocates only 30% of the CPU time to the secretaries' work and the remaining 70% to the head, the head does not suffer. In this manner, fair-share scheduling ensures fairness of CPU share.
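The society example above can be sketched as a small calculation. This is an illustrative sketch only, not the paper's algorithm; the weights (70/30) and the secretary count are assumptions drawn from the example: each group's weight fixes its aggregate CPU fraction, regardless of how many tasks its members run.

```python
def group_task_shares(group_weights, group_task_counts):
    """Per-task CPU share when each group's total share is fixed by its weight.

    group_weights: {group: weight}, e.g. 70 for the head, 30 for the secretaries.
    group_task_counts: {group: number of runnable tasks in that group}.
    """
    total_weight = sum(group_weights.values())
    return {
        group: group_weights[group] / total_weight / group_task_counts[group]
        for group in group_weights
    }

# The head runs 1 task with weight 70; five secretary tasks split weight 30.
shares = group_task_shares({"head": 70, "secretaries": 30},
                           {"head": 1, "secretaries": 5})
print(shares)  # head task gets 0.70; each secretary task gets 0.06
```

Note that adding more secretary tasks only dilutes the secretaries' 30% among themselves; the head's 70% is untouched, which is exactly the fairness property the example describes.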

Let us suppose three users (U1, U2, and U3) in a system, each concurrently running one task. The scheduler divides CPU time so that each user gets 33.3% as its fair share. If U3 starts another task, the tasks run by U1 and U2 still receive the same 33.3% share, but each of U3's two tasks now receives 16.7% of the CPU. Further, fair-share (FS) scheduling allows users to be divided into groups, with CPU share assigned to groups as well: the scheduler first assigns a share of CPU time to each group, then to the users within each group, and finally among the tasks of those users (Stallings, 2014).
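The two-level division described above (equal shares per user, then an equal split among each user's tasks) can be expressed as a short sketch. The function and task names below are hypothetical, introduced only to reproduce the U1/U2/U3 arithmetic from the example:

```python
def fair_shares(user_tasks):
    """Map each task to its CPU share, given {user: [task, ...]}.

    CPU time is first divided equally among users; each user's share is
    then divided equally among that user's runnable tasks.
    """
    user_share = 1.0 / len(user_tasks)      # equal fair share per user
    shares = {}
    for user, tasks in user_tasks.items():
        for task in tasks:
            shares[task] = user_share / len(tasks)  # split within the user
    return shares

# Three users; U3 runs a second task. U1's and U2's tasks keep 1/3 (33.3%)
# each, while each of U3's two tasks drops to 1/6 (16.7%).
print(fair_shares({"U1": ["t1"], "U2": ["t2"], "U3": ["t3", "t4"]}))
```

The same division could be nested one level deeper for groups, applying the equal split first across groups, then across users within a group, matching the group-based FS scheme cited from Stallings (2014).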
