Disk-Based Search

Stefan Edelkamp, Shahid Jabbar
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch076

Abstract

The need to deal with large data sets is at the heart of many real-world problems. In many organizations, the data size has already surpassed petabytes (10^15 bytes). It is clear that for processing such an enormous amount of data, the physical limitation of RAM is a major hurdle. However, the media that can hold huge data sets, i.e., hard disks, are about 10,000 to 1,000,000 times slower to access than RAM. On the other hand, the cost of large amounts of disk space has decreased considerably. This growing disparity has drawn rising attention to the design of external memory algorithms (Sanders et al., 2003) in recent years. On a hard disk, random accesses are slow due to the latency of moving the head on top of the data. But once the head is in position, data can be read very rapidly. External memory algorithms exploit this fact by processing the data in the form of blocks. They are more informed about future accesses to the data and can organize their execution to require a minimum number of block accesses. Traditional graph search algorithms perform well as long as the graph fits into RAM, but for larger graphs these algorithms are destined to fail. In the following, we review some of the advances in the field of search algorithms designed for large graphs.

Background

Most modern operating systems provide a general-purpose memory management scheme called Virtual Memory to compensate for the limited RAM. Unfortunately, such schemes pay off only when the algorithm’s memory accesses are local, i.e., it works on a particular memory address range for a while before switching its attention to another range. Search algorithms, especially those that order the nodes on some particular node property, do not show such behaviour. They jump back and forth across spatially unrelated addresses to pick the best node, often for only marginal differences in the node property.

External memory algorithms are designed with a hierarchy of memories in mind. They are analyzed on an external memory model as opposed to the traditional von Neumann RAM model. We use the two-level memory model by Vitter and Shriver (1994) to describe the search algorithms. The model provides the necessary tools to analyze the asymptotic number of block accesses (I/O operations) as the input size grows. It consists of

  • M: Size of the internal memory in terms of the number of elements,

  • N ≫ M: Size of the input in terms of the number of elements, and

  • B: Size of the data block that can be transferred between the internal memory and the hard disk; transferring one such block is called a single I/O operation.

The complexity of external memory algorithms is conveniently expressed in terms of predefined I/O operations, such as scan(N) for scanning a file of size N with a complexity of Θ(N/B) I/Os, and sort(N) for externally sorting a file of size N with a complexity of Θ((N/B) log_{M/B}(N/B)) I/Os. With additional parameters, the model can also accommodate multiple disks and multiple processors.
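
As a back-of-the-envelope illustration (not part of the original chapter), the following sketch evaluates the scan(N) and sort(N) bounds for hypothetical values of N, M, and B; the sort estimate counts one pass per level of an external merge sort.

```python
import math

def scan_ios(n: int, b: int) -> int:
    """I/Os for scan(N): one block transfer per block, Theta(N/B)."""
    return math.ceil(n / b)

def sort_ios(n: int, m: int, b: int) -> int:
    """I/Os for sort(N): Theta((N/B) * log_{M/B}(N/B)), via external merge sort passes."""
    blocks = math.ceil(n / b)
    fan_in = m // b                      # number of blocks that fit in RAM at once
    passes = max(1, math.ceil(math.log(blocks, fan_in)))
    return blocks * passes

# Hypothetical sizes: 10^9 elements of input, 10^6 elements of RAM, 10^4 elements per block.
N, M, B = 10**9, 10**6, 10**4
print(scan_ios(N, B))     # 100,000 block transfers for a single scan
print(sort_ios(N, M, B))  # roughly 3 * 100,000 block transfers for an external sort
```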

In the following, we assume a graph to be a tuple (V, E, c), where V is the set of nodes, E the set of edges, and c the weight function that assigns a positive integer weight to each edge. If all edges have the same weight, the component c can be dropped and the graph is called unweighted. Given a start node s and a goal node g, we require the search algorithm to return an optimal path with respect to the weight function.
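
To make this setting concrete, here is a minimal in-memory sketch (illustrative only; the toy graph and node names are invented) that represents such a weighted graph as adjacency lists and returns an optimal path from s to g with Dijkstra's algorithm. The disk-based algorithms discussed in this chapter serve the same interface for graphs that do not fit into RAM.

```python
import heapq

def shortest_path(graph, s, g):
    """Return an optimal s-g path with respect to the weight function c, or None.

    graph: dict mapping a node to a list of (successor, edge weight) pairs.
    """
    dist = {s: 0}
    parent = {s: None}
    frontier = [(0, s)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == g:                       # first goal extraction is optimal
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return list(reversed(path))
        if d > dist[u]:
            continue                     # stale queue entry
        for v, c in graph[u]:
            if v not in dist or d + c < dist[v]:
                dist[v] = d + c
                parent[v] = u
                heapq.heappush(frontier, (dist[v], v))
    return None

# Toy weighted graph (V, E, c) for illustration.
G = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("g", 6)],
     "b": [("g", 2)], "g": []}
print(shortest_path(G, "s", "g"))   # ['s', 'a', 'b', 'g'] with total weight 5
```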

Key Terms in this Chapter

Memory Hierarchy: Modern hardware has a hierarchy of storage media: starting from the fast registers and the L1 and L2 caches, moving towards RAM, and all the way to the slow hard disks and tapes. The latency timings on the different levels differ considerably, e.g., registers: 2 ns, cache: 20 ns, hard disk: 10 ms, tape: 1 min.

Graph: A set of nodes connected through edges. The node at the head of an edge is called the target and the node at the tail the source. A graph can be undirected, i.e., it is always possible to return to the source through the same edge – otherwise it is a directed graph. If given beforehand in the form of adjacency lists (e.g., a road network), we call it an explicit graph. Implicit graphs – another name for ‘state spaces’ – are generated on-the-fly from a start node and a set of rules/actions that generate the new states (e.g., a checkers game).
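
As a toy illustration of an implicit graph (the states and rules are made up for this sketch), successors are generated on demand from a rule rather than read from stored adjacency lists:

```python
from collections import deque

def successors(state):
    """Implicit graph: states are integers, the actions are 'add 3' and 'double'.

    The graph is never stored; edges are generated on demand from these rules.
    """
    return [state + 3, state * 2]

# Breadth-first expansion of the implicit graph from start state 1.
start, goal = 1, 14
frontier, seen = deque([start]), {start}
while frontier:
    s = frontier.popleft()
    if s == goal:
        print("goal reached")
        break
    for t in successors(s):
        if t not in seen and t <= 100:   # bound the state space for the example
            seen.add(t)
            frontier.append(t)
```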

Delayed Duplicate Detection: In contrast to hash tables, which eliminate duplicate states on-the-fly during the exploration, the process of duplicate detection can be delayed until a large set of states is available. It is very effective in external search, where it is efficiently achieved by external sorting and scanning.
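
A minimal in-memory sketch of the idea (in a disk-based setting both inputs would be files, and the sort would be an external merge sort): buffer a batch of generated states, sort it, remove in-batch duplicates in one scan, then merge-scan against the sorted list of already expanded states.

```python
def delayed_duplicate_detection(generated, closed_sorted):
    """Remove duplicates from a batch of states by sorting and scanning.

    generated:     unsorted list of newly generated states (e.g., one BFS layer).
    closed_sorted: sorted list of states already expanded in earlier layers.
    """
    generated.sort()                          # external merge sort on disk in practice
    unique, last = [], object()
    for s in generated:                       # one scan removes in-batch duplicates
        if s != last:
            unique.append(s)
            last = s
    result, i = [], 0
    for s in unique:                          # merge-scan against the sorted closed list
        while i < len(closed_sorted) and closed_sorted[i] < s:
            i += 1
        if i == len(closed_sorted) or closed_sorted[i] != s:
            result.append(s)
    return result

print(delayed_duplicate_detection([5, 3, 5, 2, 3], [2, 4]))   # [3, 5]
```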

Value Iteration: A procedure that computes a policy (a mapping from states to actions) for a probabilistic or non-deterministic search problem, most frequently given in the form of a Markov Decision Problem (MDP).
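
A compact sketch of value iteration on a toy cost-based MDP (the states, transition probabilities, and costs are invented for illustration):

```python
def value_iteration(states, actions, P, cost, goal, eps=1e-6):
    """Compute expected cost-to-goal values and a greedy policy for a toy MDP.

    P[s][a] is a list of (probability, successor) pairs; cost[s][a] is the action cost.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best = min(cost[s][a] + sum(p * V[t] for p, t in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: min(actions[s],
                     key=lambda a: cost[s][a] + sum(p * V[t] for p, t in P[s][a]))
              for s in states if s != goal}
    return V, policy

# Two-state toy problem: action 'go' reaches the goal with probability 0.9.
states, goal = ["s", "g"], "g"
actions = {"s": ["go"]}
P = {"s": {"go": [(0.9, "g"), (0.1, "s")]}}
cost = {"s": {"go": 1.0}}
V, policy = value_iteration(states, actions, P, cost, goal)
print(round(V["s"], 3), policy)   # expected cost about 1.111, policy {'s': 'go'}
```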

Search Algorithm: An algorithm that, when given two graph nodes, start and goal, returns a sequence of nodes that constitutes a path from the start to the goal, if such a sequence exists. A search algorithm generates the successors of a node through an expansion process, after which the node is termed a closed node. The newly generated successors are checked for duplicates and, if unique, are added to the set of open nodes.

Model Checking: An automated process that, given a model of a system and a property specification, checks whether the property is satisfied by the system. Properties requiring that ‘something bad will never happen’ are referred to as safety properties, while those requiring that ‘something good will eventually happen’ are referred to as liveness properties.

Heuristic Function: A function that assigns to each node an estimated distance to the goal node. For example, in route planning, the Euclidean distance can be used as a heuristic function. A heuristic function is admissible if it never overestimates the shortest-path distance. It is consistent if it never decreases on any edge by more than the edge weight, i.e., for a node n and its successor n’, h(n) – h(n’) ≤ c(n, n’).
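
A small sketch of the route-planning example (the node coordinates and edge weight are made up): the Euclidean distance to the goal as a heuristic, with a check of the consistency condition on one edge.

```python
import math

def euclidean_h(coords, goal):
    """Heuristic: straight-line distance from each node to the goal node."""
    gx, gy = coords[goal]
    return {n: math.hypot(x - gx, y - gy) for n, (x, y) in coords.items()}

# Hypothetical node coordinates for a tiny road network.
coords = {"s": (0, 0), "a": (3, 4), "g": (6, 8)}
h = euclidean_h(coords, "g")

# Consistency on edge (s, a) with road length c = 6: h(s) - h(a) <= c must hold.
c_sa = 6.0
print(h["s"] - h["a"] <= c_sa)   # True: 10.0 - 5.0 = 5.0 <= 6.0
```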
