Rotation Invariant Texture Image Retrieval with Orthogonal Polynomials Model


R. Krishnamoorthi, S. Sathiya Devi
DOI: 10.4018/978-1-4666-3906-5.ch017

Abstract

The exponential growth of digital image data has created a great demand for effective and efficient schemes and tools for browsing, indexing and retrieving images from large image databases. To address this demand, this paper proposes a new content based image retrieval technique based on an orthogonal polynomials model. The proposed model extracts texture features that represent the dominant directions, gray level variations and frequency spectrum of the image under analysis, and the resulting texture feature vector is rotation and scale invariant. A new distance measure in the frequency domain, called Deansat, is proposed as a similarity measure that uses the proposed feature vector for efficient image retrieval. The efficiency of the proposed retrieval technique is evaluated on the standard Brodatz, USC-SIPI and VisTex databases and is compared with Discrete Cosine Transform (DCT), Tree Structured Wavelet Transform (TWT) and Gabor filter based retrieval schemes. The experimental results reveal that the proposed method outperforms these schemes at a lower computational cost.

1. Introduction

With the rapid growth of digital and information technologies, more and more multimedia data are generated and made available in digital form. Searching and retrieving relevant images from this huge volume of data is a difficult task and has created an urgent need for new tools and techniques. One such solution is Content Based Image Retrieval (CBIR). As image databases grow larger, the traditional keyword-based approach to retrieving a particular image becomes inefficient and suffers from the following limitations: (i) a vast amount of labor is required for manual image annotation, and (ii) keywords have limited capacity to capture the visual content of an image and are subject to the subjectivity of human perception. Hence, to overcome these difficulties of the manual annotation approach, content based image retrieval has emerged. CBIR is a collection of techniques and algorithms that enable querying image databases with low level image content such as color, texture, objects and their geometries, rather than textual attributes such as the image name or other keywords (Kasturi & Jain, 2002). Many image retrieval systems have been developed using all or some of these features, including Chabot (Ogle & Stonebraker, 1995), Photobook (Pentland, Picard, & Sclaroff, 1996), QBIC (Flickner et al., 1995), Virage (Bach et al., 1996), VisualSeek (Smith & Chang, 1997), MARS (Huang, Mehrotra, & Ramachandran, 1996), Netra (Ma & Manjunath, 1995), and Excalibur (Feder, 1997). Extensive literature and state-of-the-art methods on content based image retrieval can be found in Datta, Li, and Wang (2005), Rui, Huang, and Chang (1999), Smeulders, Worring, Santini, Gupta, and Jain (2000), Kherfi, Ziou, and Bernardi (2004), Lew, Sebe, Djeraba, and Jain (2006), and Kokare, Chatterji, and Biswas (2002).

Among the different visual characteristics used for the analysis of images, texture is reported to be a prominent and vital low level feature (Jalaja, Bhagvati, Deekshatulu, & Pujari, 2005). Even though no standard definition of texture exists, Sklansky (1978) defined texture as a set of local properties in an image region with a constant, slowly varying or approximately periodic pattern; it is measured using distinct properties such as periodicity, coarseness, directionality and pattern complexity for efficient image retrieval, particularly with respect to orientation and scale (Tamura, Mori, & Yamawaki, 1976; Niblack et al., 1993). In a typical CBIR system, identifying features that maximize the differentiation of textures is an important step. Several categories of methods exist for identifying and manipulating texture: (i) statistical methods, such as the Gray Level Co-occurrence Matrix (GLCM) (Haralick, 1979); (ii) model based methods, such as Markov Random Fields (MRF) (Cross & Jain, 1983), Simultaneous Auto Regression (SAR) (Mao & Jain, 1992) and Wold decomposition (Liu & Picard, 1996); and (iii) signal processing methods, such as Gabor filters (Jain & Farrokhnia, 1991) and wavelet transforms (Chang & Kuo, 1992; Laine & Fan, 1993). Some of these techniques measure texture similarity by comparing second order statistics obtained from the query and stored images (Eakins & Graham, 1999). In the case of GLCM, there is no well-established method of selecting the displacement vector d, and computing co-occurrence matrices for many values of d is not feasible. The SAR model and structural methods work well only if the image has regular texture. The Markov Random Field (MRF) model captures micro textures well, but fails to capture regular and inhomogeneous textures. Manjunath and Ma (1996) proposed a texture analysis scheme for image retrieval based on Gabor wavelets: a group of wavelets, each capturing energy at a specific frequency and direction, from whose energy distribution the texture features are extracted. They compared the texture retrieval performance of the Gabor wavelet with the Pyramid Structured Wavelet Transform (PWT), Tree Structured Wavelet Transform (TWT) and Multi-resolution Auto Regressive Model (MARM) approaches, and claimed that the Gabor wavelet yields better performance than conventional orthogonal wavelet based features.
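The displacement-selection problem for GLCM noted above can be made concrete with a small sketch. The following pure-NumPy code (the helper names `glcm` and `contrast` are illustrative, not from the chapter) computes a normalised co-occurrence matrix and the Haralick contrast feature for two choices of the displacement vector d; on a striped texture the two choices give very different values, which is why no single d suffices:

```python
import numpy as np

def glcm(image, d):
    """Normalised gray level co-occurrence matrix for displacement d = (dr, dc)."""
    dr, dc = d
    levels = int(image.max()) + 1
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                # Count the pair (gray level at (r, c), gray level at offset d)
                m[image[r, c], image[r2, c2]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))

# A texture with vertical stripes: horizontal and vertical displacements
# see it very differently.
img = np.array([[0, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 1]])

print(contrast(glcm(img, (0, 1))))  # horizontal d: neighbours always differ -> contrast 1.0
print(contrast(glcm(img, (1, 0))))  # vertical d: neighbours always match -> contrast 0.0
```

The same image thus yields either a maximal or a zero contrast feature depending solely on d, illustrating why GLCM-based retrieval must either fix d heuristically or pay the cost of computing matrices for many displacements.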
