Image Partitioning on Spiral Architecture

Qiang Wu, Xiangjian He
Copyright: © 2010 | Pages: 33
DOI: 10.4018/978-1-60566-661-7.ch035

Abstract

Spiral Architecture is a relatively new and powerful approach to image processing that possesses useful geometric and algebraic properties. The research achievements of the past decades indicate that Spiral Architecture will play an increasingly important role in image processing and computer vision. This chapter presents a significant application of Spiral Architecture to distributed image processing and demonstrates its strengths for high-performance image processing. The proposed method tackles several challenging practical problems that arise during implementation: it reduces the data communication between processing nodes and is configurable. Moreover, the proposed partitioning scheme is consistent: after partitioning, each sub-image remains a representative of the original image without changing the underlying object, a property that is important for the related image-processing operations.
Chapter Preview

Introduction

Image processing is a long-established area of computing science that is used widely in many applications, including the film industry, medical imaging, industrial manufacturing and weather forecasting. With the development of new algorithms and the rapid growth of application areas, a key issue has emerged and continues to attract challenging research in digital image processing: the dramatically increasing computational workload. The reasons can be grouped into three categories: the relatively limited power of common computing platforms, the huge volume of image data to be processed, and the nature of image-processing algorithms.

Limited computing power is a relative concept. Over the last decade, microcomputers have become powerful enough to make personal image processing practically feasible for individual researchers at low cost (Miller, 1993; Schowengerdt & Mehldau, 1993). Although such systems still satisfy the functional requirements of most general-purpose image processing, the limited computing capacity of a standalone processing node can no longer keep pace with the rapid growth of image-processing applications in practical areas such as real-time image processing and 3D image rendering.

The huge amount of image data is another issue faced by many image-processing applications today. Applications such as computer graphics, photo-realistic rendering and computer-animated films consume the aggregate power of whole farms of workstations (Oberhuber, 1998). Although the common understanding of what counts as “large” image data has changed over time, from the application point of view it is typically expressed in megabytes or gigabytes (Goller, 1999). Over the past few decades, the images to be processed have grown larger and larger. Consequently, how to decrease processing time despite the growth of image data has become an urgent question in digital image processing.

Moreover, the nature of traditional image-processing algorithms is another factor that limits processing speed. In digital image processing, the elementary image operators can be divided into point image operators, local image operators and global image operators (Braunl, Feyrer, Rapf, & Reinhardt, 2001). The main characteristic of a point operator is that each pixel in the output image depends only on the corresponding pixel in the input image. Point operators are used to copy an image from one memory location to another, in arithmetic and logic operations, table lookup and image composition (Nicolescu & Jonker, 2002). Local operators create a destination pixel from the source pixel and the values of the pixels in some “neighbourhood” surrounding it. They are widely used in low-level image processing such as image enhancement by sharpening, blurring and noise removal. Global operators create a destination pixel based on the entire image; a representative example of this class is the Discrete Fourier Transform (DFT). Compared with point operators, local and global operators are far more computationally intensive.
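
To make the distinction concrete, the following short sketch (not taken from the chapter; the function names and the choice of a 3x3 mean filter are illustrative assumptions) implements one operator of each class with NumPy: a point operator that brightens each pixel independently, a local operator that averages a 3x3 neighbourhood, and a global operator that computes the DFT of the whole image.

    import numpy as np

    def point_op_brighten(img, offset=30):
        # Point operator: each output pixel depends only on the same input pixel.
        return np.clip(img.astype(np.int32) + offset, 0, 255).astype(np.uint8)

    def local_op_mean3x3(img):
        # Local operator: each output pixel depends on a 3x3 neighbourhood.
        padded = np.pad(img.astype(np.float64), 1, mode="edge")
        out = np.zeros(img.shape, dtype=np.float64)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += padded[1 + dy:1 + dy + img.shape[0],
                              1 + dx:1 + dx + img.shape[1]]
        return (out / 9.0).astype(np.uint8)

    def global_op_dft(img):
        # Global operator: every output value depends on the entire image (here, the DFT).
        return np.fft.fft2(img)

    image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(point_op_brighten(image).shape, local_op_mean3x3(image).shape, global_op_dft(image).shape)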

As a consequence of the above, image-processing tasks involve executing a large number of operations on large sets of structured data. The processing power of a typical desktop workstation can therefore become a severe bottleneck in many image-processing applications, so it makes sense to perform image processing on multiple workstations or on a parallel processing system. Indeed, many image-processing tasks exhibit a high degree of data locality and parallelism, and map quite readily onto specialised massively parallel computing hardware (Chen, Lee, & Cho, 1990; Siegel, Armstrong, & Watson, 1992; Stevenson, Adams, Jamieson, & Delp, 1993).
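
The data-parallel pattern described above can be illustrated with a conventional rectangular partitioning (this is not the Spiral Architecture scheme proposed in the chapter; the function names and the strip-wise split are illustrative assumptions): the image is split into horizontal strips, each strip is processed by a separate worker process, and the results are reassembled.

    import numpy as np
    from multiprocessing import Pool

    def smooth_strip(strip):
        # Worker task: a placeholder point operation applied to one sub-image.
        return (strip.astype(np.float64) * 0.5).astype(np.uint8)

    def process_in_parallel(image, n_workers=4):
        strips = np.array_split(image, n_workers, axis=0)   # partition into sub-images
        with Pool(n_workers) as pool:                        # one process per sub-image
            results = pool.map(smooth_strip, strips)
        return np.vstack(results)                            # reassemble the full image

    if __name__ == "__main__":
        img = np.random.default_rng(1).integers(0, 256, size=(256, 256), dtype=np.uint8)
        print(process_in_parallel(img).shape)                # -> (256, 256)

Note that a local operator would also require each strip to receive the boundary rows of its neighbours; this inter-node exchange is exactly the kind of data communication that the partitioning scheme proposed in this chapter aims to reduce.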

Key Terms in this Chapter

Computer Networks: A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics.

Sequence Alignment: In bioinformatics, a sequence alignment is a way of arranging the primary sequences of DNA, RNA, or protein to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the sequences.

Computational Biology: Computational biology refers to hypothesis-driven investigation of a biological problem using computers, carried out with experimental or simulated data, with the primary goal of discovery and the advancement of biological knowledge.

Cluster Computing: A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Divisible Load Theory: Divisible load theory is a methodology involving the linear and continuous modeling of partitionable computation and communication loads for parallel processing.
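
As a minimal illustration of the linear model (an assumption-laden sketch, not a definition from the chapter), suppose worker i needs w_i seconds per unit of load and communication cost is ignored; the load fractions alpha_i are then chosen so that all workers finish at the same time, i.e. alpha_i * w_i is constant.

    def dlt_fractions(unit_times):
        # unit_times[i] = w_i, seconds needed by worker i to process one unit of load.
        speeds = [1.0 / w for w in unit_times]     # units of load per second
        total = sum(speeds)
        return [s / total for s in speeds]         # fractions alpha_i, summing to 1

    w = [1.0, 2.0, 4.0]                            # three workers of decreasing speed
    alphas = dlt_fractions(w)
    print(alphas)                                  # ~ [0.571, 0.286, 0.143]
    print([a * wi for a, wi in zip(alphas, w)])    # equal finishing times for all workers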

Parallel Computing: Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently.

Bioinformatics: Bioinformatics is the application of information technology to the field of molecular biology. Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
