Image Retrieval Techniques Using Content-Based Local Binary Descriptors: A Survey

Rakesh Asery, Ramesh Kumar Sunkaria, Puneeta Marwaha, Lakhan Dev Sharma
DOI: 10.4018/978-1-5225-2848-7.ch008

Abstract

In this chapter, the authors introduce content-based image retrieval (CBIR) systems and compare them on a common database. Four content-based local binary descriptors are described briefly, with and without the Gabor transform. The Nth-order derivative descriptor is computed from the (N-1)th-order derivative, based on rotational and multiscale feature extraction. Finally, distance-based matching of the query image is used to find similar images in the database. Performance is evaluated in terms of average precision, average retrieval rate, the effect of different derivative orders on average retrieval rate, and feature-vector length versus retrieval time. For this work, a comparative experiment was conducted on seven classes of the Ponce Group images (100 images per class). In addition, the performance of all descriptors was analyzed when combined with the Gabor transform.
Chapter Preview

Introduction

In recent years, the storage capacity of digital libraries has grown explosively with the development of the web and digital image acquisition technologies. The challenges posed by large digital image databases include storage, search, remote sensing, education, and other applications offered to users (Rui, Huang & Chang, 1999; Smeulders, Worring, Santini et al., 2000; Kokare, Chatterji, & Biswas, 2002; Liu, Zhang, Lu et al., 2007). For efficient search, many optimized techniques have been developed that can automatically retrieve images matching a query from a huge database. One commonly adopted solution to this problem is content-based image indexing and retrieval (CBIR). Image retrieval systems fall into two categories according to the indexing technique used to extract features from the database images: text-based and content-based. The features extracted from each image are stored as an index feature vector. As shown in Figure 1, a matching criterion applied to each index vector is used to find the database images that best match the query image.
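The indexing-and-matching pipeline of Figure 1 can be sketched in a few lines. This is a minimal illustration, not the chapter's actual descriptor: the toy feature here is a plain grayscale histogram, and the function names (`gray_histogram`, `retrieve`) are invented for this example. Only the structure matches the framework described above: features are extracted offline into an index, and the query's feature vector is ranked against the index by a distance measure.

```python
import numpy as np

def gray_histogram(image, bins=8):
    """Toy feature: normalized grayscale histogram (illustrative only)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def retrieve(query_vec, index, top_k=3):
    """Rank database images by L1 distance between feature vectors."""
    dists = {name: float(np.abs(vec - query_vec).sum())
             for name, vec in index.items()}
    return sorted(dists, key=dists.get)[:top_k]

# Build a small synthetic "database" and its feature index.
rng = np.random.default_rng(0)
database = {f"img{i}": rng.integers(0, 256, (16, 16)) for i in range(5)}
index = {name: gray_histogram(img) for name, img in database.items()}

# Querying with an image already in the database should rank it first
# (its distance to its own feature vector is zero).
ranking = retrieve(gray_histogram(database["img2"]), index)
print(ranking[0])  # → img2
```

In a real CBIR system, the histogram would be replaced by a descriptor such as the local binary patterns surveyed in this chapter, but the index-then-rank structure stays the same.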

Figure 1.

Basic idea framework for CBIR


In a text-based retrieval system, images are annotated with a textual description, and the user's textual query is matched against the annotations using a text-based database management system (DBMS). This approach has two basic flaws. The first is the unbounded amount of manual annotation work required, for both small and vast image databases. The second is annotation error, caused by the rich content of images and the subjectivity of human perception. These flaws may lead to useless matches in subsequent retrieval stages.

To overcome the shortcomings of text-based retrieval, the content-based image indexing and retrieval (CBIR) framework was proposed in the mid-1980s (Smeulders, Worring, Santini et al., 2000; Huang, Kumar, Mitra et al., 1997). In CBIR, various descriptor algorithms based on the visual features of an image, such as color, shape, and texture, have been proposed. It is difficult to find a stable representation of an image across all perceptual subjectivity, since image representation also depends on conditions such as translation, scaling changes, and so forth. Thus, a consistent representation of an image remains an open research problem. A wide literature review of CBIR systems is presented in (Smeulders, Worring, Santini et al., 2000; Liu, Zhang, Lu et al., 2007; Huang, Kumar, Mitra et al., 1997; Moghaddam, Khajoie & Rouhi, 2003).
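Among the texture features mentioned above, the descriptor family this chapter surveys is built on the local binary pattern (LBP). As a point of reference, the classic 3x3 LBP can be sketched as follows: each pixel is encoded by thresholding its eight neighbours against the centre value and packing the results into an 8-bit code. This sketch shows only the original LBP; the derivative-based and Gabor-combined variants compared in the chapter build on this encoding.

```python
import numpy as np

def lbp_3x3(image):
    """Return the classic 3x3 LBP code map for the interior pixels."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left; each one
    # contributes a single bit of the 8-bit pattern code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        # Set the bit where the neighbour is >= the centre pixel.
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

# One 3x3 patch: neighbours 60, 90, 80, 70 exceed the centre (50),
# setting bits 3..6, so the code is 8 + 16 + 32 + 64 = 120.
patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])
print(lbp_3x3(patch))  # → [[120]]
```

A histogram of these codes over the image (or over local regions) then serves as the index feature vector matched against the query, as in the framework of Figure 1.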
