Alleviating the Thrashing by Adding Medium-Term Scheduler

Moses Reuven, Yair Wiseman
DOI: 10.4018/978-1-60566-850-5.ch007

Abstract

A technique for minimizing paging on a system with very heavy memory usage is proposed. When the processes with active memory allocations that should reside in the physical memory have an accumulated size exceeding the physical memory capacity, the operating system begins swapping pages in and out of the memory on every context switch. The authors lessen this thrashing by placing the processes into several bins, using Bin Packing approximation algorithms. They amend the scheduler to maintain two levels of scheduling: medium-term scheduling and short-term scheduling. The medium-term scheduler switches the bins in a Round-Robin manner, whereas the short-term scheduler uses the standard Linux scheduler to schedule the processes in each bin. The authors prove that this feature does not necessitate adjustments in the shared memory maintenance. In addition, they explain how to modify the new scheduler to be compatible with some elements of the original scheduler, such as priority and real-time privileges. Experimental results show substantial improvement on very loaded memories.
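The core idea of the abstract can be illustrated with a minimal, hedged sketch: processes are grouped into bins whose capacity equals the physical memory size, using the First-Fit-Decreasing bin-packing approximation, after which the medium-term scheduler would rotate among the resulting bins in a Round-Robin fashion. The process_t structure, the process sizes, and the 512 MB bin capacity below are illustrative assumptions, not the chapter's actual kernel data structures.

#include <stdio.h>
#include <stdlib.h>

#define MAX_BINS 16
#define BIN_CAP  512   /* assumed physical memory size, in MB */

typedef struct { const char *name; int rss_mb; } process_t;

/* Sort processes by decreasing memory footprint (First-Fit-Decreasing). */
static int cmp_desc(const void *a, const void *b)
{
    return ((const process_t *)b)->rss_mb - ((const process_t *)a)->rss_mb;
}

int main(void)
{
    process_t procs[] = {
        { "p1", 300 }, { "p2", 250 }, { "p3", 200 }, { "p4", 150 }, { "p5", 100 }
    };
    int n = (int)(sizeof(procs) / sizeof(procs[0]));
    int free_mb[MAX_BINS];
    int bins_used = 0;

    qsort(procs, n, sizeof(process_t), cmp_desc);

    for (int i = 0; i < n; i++) {
        int b;
        for (b = 0; b < bins_used; b++)        /* first existing bin with enough room */
            if (free_mb[b] >= procs[i].rss_mb)
                break;
        if (b == bins_used)                    /* no bin fits: open a new one */
            free_mb[bins_used++] = BIN_CAP;
        free_mb[b] -= procs[i].rss_mb;
        printf("%s (%d MB) -> bin %d\n", procs[i].name, procs[i].rss_mb, b);
    }
    /* the medium-term scheduler would switch among these bins in Round-Robin order */
    printf("bins used: %d\n", bins_used);
    return 0;
}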

Introduction

One of the most substantial computer resources is the RAM. A multitasking operating system executes several processes simultaneously, and each process uses several sections of the memory. The connection between memory and the scheduling strategy is a long-standing research subject (Zahorjan et al., 1991), (Wiseman and Feitelson, 2003).

Usually, most of the processes do not make use of the entire memory that has been allocated for them. This leads to the principle of virtual memory (Denning, 1970): many processes have allocations in the virtual memory, but only the pages that are currently required are physically stored in the memory; therefore, many more processes can be executed in parallel, while occupying less physical memory space.

Various operating systems implement the virtual memory concept using the paging model, i.e., the operating system loads a memory page into the physical memory only if a process asks for it. If no free memory frame is available, the operating system swaps out a page from the physical memory to the secondary memory (hard disk). Different techniques for choosing which pages the operating system swaps out to the disk have been suggested over the years (Belady, 1966).
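As a rough illustration of such a victim-selection technique, the following user-space sketch picks the least recently used frame. The frame_t structure and the timestamps are invented for the example and do not reflect any particular kernel's bookkeeping.

#include <stdio.h>

#define NFRAMES 4

typedef struct { int page; unsigned long last_used; } frame_t;

/* Return the index of the least recently used frame (LRU victim). */
static int pick_victim(const frame_t frames[], int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (frames[i].last_used < frames[victim].last_used)
            victim = i;
    return victim;
}

int main(void)
{
    frame_t frames[NFRAMES] = {
        { 12, 100 }, { 7, 240 }, { 31, 90 }, { 5, 310 }
    };
    int v = pick_victim(frames, NFRAMES);
    printf("swap out page %d (frame %d, last used at %lu)\n",
           frames[v].page, v, frames[v].last_used);
    return 0;
}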

When large memory space is needed, swapping pages in and out of the memory consumes a large portion of the CPU cycles. This situation is called thrashing (Abrossimov et al., 1989). Thrashing causes a severe time overhead and, as a result, a substantial slowdown of the system. Several studies for alleviating the unwanted consequences of thrashing have been carried out over the years (Galvin and Silberschatz, 1998).

In (Jiang and Zhang, 2001), (Jiang and Zhang, 2002), (Jiang, 2009), the authors propose giving one of the interactive processes a privilege: the process's pages will not be swapped out. As a result, the privileged process executes faster and therefore frees its memory allocation earlier. This feature may help the operating system free enough memory quickly and return to normal behavior. However, the technique is advantageous only if the memory allocations slightly exceed the physical memory. If many processes produce a large memory excess, the technique behaves like a First-In-First-Out scheduler: the FIFO goes on and the system keeps on thrashing. Linux 2.6 has a similar mechanism, and its scheduler is described in section 2.
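A loose sketch of this privileging idea, under assumed data structures (page_t, the owner_pid field, and the hard-coded privileged PID are all hypothetical and not the interface used by the cited work), is simply to skip the privileged process's pages during LRU victim selection.

#include <stdio.h>

typedef struct { int page; int owner_pid; unsigned long last_used; } page_t;

static int privileged_pid = 42;   /* assumed: the protected interactive process */

/* Pick the LRU page that does not belong to the privileged process. */
static int pick_victim(const page_t pages[], int n)
{
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (pages[i].owner_pid == privileged_pid)
            continue;                          /* never evict the privileged process */
        if (victim < 0 || pages[i].last_used < pages[victim].last_used)
            victim = i;
    }
    return victim;   /* -1 if every resident page belongs to the privileged process */
}

int main(void)
{
    page_t pages[] = {
        { 3, 42, 10 }, { 8, 17, 25 }, { 9, 17, 5 }, { 4, 42, 2 }
    };
    int v = pick_victim(pages, 4);
    if (v >= 0)
        printf("swap out page %d of pid %d\n", pages[v].page, pages[v].owner_pid);
    return 0;
}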

In (Batat and Feitelson, 2000) the authors suggest not admitting jobs that do not fit into the currently available memory. The system waits for several processes to finish their execution, and only when enough memory has been freed can a new job be admitted. The authors also discuss how the memory size needed by a new job can be assessed. This technique is essentially very similar to the VMS technique that uses the “Balance Sets” method; however, the authors of that paper implemented the “Balance Sets” concept for distributed systems.
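The admission policy can be sketched roughly as follows. The job_t structure, the memory figures, and the try_admit helper are hypothetical, and the hard part that the cited paper actually discusses, estimating a job's real memory need, is simply assumed to be known here.

#include <stdio.h>
#include <stdbool.h>

typedef struct { const char *name; int estimated_mb; } job_t;

static int free_mb = 1024;   /* assumed currently available physical memory */

/* Admit a job only if its estimated memory need fits into the free memory. */
static bool try_admit(const job_t *job)
{
    if (job->estimated_mb > free_mb)
        return false;         /* defer until enough memory has been released */
    free_mb -= job->estimated_mb;
    return true;
}

int main(void)
{
    job_t jobs[] = { { "j1", 600 }, { "j2", 500 }, { "j3", 300 } };
    for (int i = 0; i < 3; i++)
        printf("%s: %s (free now %d MB)\n", jobs[i].name,
               try_admit(&jobs[i]) ? "admitted" : "deferred", free_mb);
    return 0;
}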

In (Nikolopoulos, 2003) the author handles the thrashing problem by adjusting the memory needs of a process to the currently available memory. This solution is quite different from the other solutions, because it modifies the processes instead of the operating system.

Some hardware solutions for thrashing, implemented in the cache, have also been suggested (Gonzalez et al., 1997), (Chu and Ito, 2000). Typically, LRU is the basic scheme that both hardware and software victim-selection algorithms employ; however, the LRU algorithm is manipulated differently by hardware and software implementations. Naturally, hardware solutions must be much simpler to implement; on the other hand, hardware solutions can use data that the operating system does not have, e.g. the cache can distinguish between an instruction block and a data block, while the operating system cannot. Clearly, this parameter can be very useful for victim-selection algorithms.
