Feature Extraction in Content-Based Image Retrieval


Jacob John Foley (School of Science and Technology, University of New England, Australia) and Paul Kwan (School of Science and Technology, University of New England, Australia)
DOI: 10.4018/978-1-4666-5888-2.ch583

Chapter Preview



In recent decades, the increased usage and availability of digital cameras have created a vast amount of new information captured in the form of digital images. These images have been given an unprecedented level of accessibility through the Internet and sharing in social media. It is difficult to represent these images using text descriptions, both because of the labour required to annotate large collections and because of the inconsistencies in annotations caused by the differing perceptions of individual annotators. This makes searching images using text-based methods ineffective (Rui, Huang & Chang, 1999).

New techniques in Content Based Image Retrieval (CBIR) are being developed to accommodate indexing and searching images using Feature Extraction. Feature extraction algorithms use the content of digital images to produce Feature Vectors, which represent the important details of an image in a concise form and allow for complex analysis of the source image.
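One widely used family of features is the colour histogram, which summarises an image's colour distribution in a fixed-length vector. The sketch below (using numpy, with a synthetic array standing in for a real image; the function name is illustrative, not from this chapter) shows how a simple feature extraction algorithm turns raw pixel content into a concise feature vector:

```python
import numpy as np

def colour_histogram_features(image, bins=8):
    """Build a feature vector by histogramming each colour channel.

    `image` is an H x W x 3 array of 8-bit RGB values; the result is a
    normalised vector of length 3 * bins that concisely summarises the
    image's colour distribution.
    """
    features = []
    for channel in range(3):
        hist, _ = np.histogram(image[:, :, channel],
                               bins=bins, range=(0, 256))
        features.append(hist)
    vec = np.concatenate(features).astype(float)
    # Normalise so images of different sizes remain comparable.
    return vec / vec.sum()

# A synthetic 32x32 "image" stands in for a real photograph.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
vec = colour_histogram_features(image)
print(vec.shape)  # (24,)
```

Real systems typically combine several such features (colour, texture, shape) into a larger vector, but the principle is the same: the content of the image, not its annotations, determines the index entry.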

Feature vectors can be compared using Similarity Measures that quantitatively describe the difference between two sets of feature vectors. A high similarity between feature vectors corresponds to a high likelihood that the two vectors are being used to represent the same or a similar object. Techniques can be applied to further enhance the results obtained from similarity measures, such as relevance feedback, context mining, supervised machine learning, object ontologies and semantic templates (Liu, Zhang, Lu & Ma, 2007).
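As a concrete illustration of a similarity measure, cosine similarity (one common choice, though not the only one discussed in the literature) scores two feature vectors by the angle between them, with values near 1 indicating likely matches:

```python
import numpy as np

def cosine_similarity(u, v):
    """Return the cosine of the angle between two feature vectors.

    A value of 1.0 means the vectors point in the same direction
    (highly similar images); values near 0 indicate little similarity.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two near-identical histograms score higher than two unrelated ones.
a = [0.5, 0.30, 0.20]
b = [0.5, 0.29, 0.21]
c = [0.05, 0.05, 0.90]
print(cosine_similarity(a, b) > cosine_similarity(a, c))  # True
```

A retrieval system ranks database images by this score against the query's feature vector; techniques such as relevance feedback then reweight or refine the comparison.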

CBIR has applications in fields such as medicine, defence, weather forecasting, security, personal photo collections, social photo sharing and digital cultural heritage. Any system which benefits from assisted photo organisation and identification is a potential application.

Datta, Joshi, Li & Wang (2008) classify searches as being made via association, where there is no clear goal initially but refined searches lead to the target image, via an aimed search, where a clear goal is known, or via a category search, where the user wishes to find images within a category with no clear goal. CBIR provides an effective means of performing all three kinds of search.

In this chapter, we examine the most common features for indexing and searching images. This provides an introduction to the concepts used in Feature Extraction. For further information on specific techniques and their implementations, refer to the Additional Reading section.



While the field of CBIR is being developed to address the issue of reliable and efficient description of images, it faces certain challenges. One of the greatest challenges to CBIR is the issue of the semantic gap.

In an example provided by Datta, Joshi, Li & Wang (2008), the reader is asked to consider what a “perfect” picture of a subject might be in terms of its features. Not only will this vary between individuals, but it will also be difficult to describe in terms of a set of features that correspond to the semantic concept of perfection. Even descriptions such as “find images of a cheerful crowd” contain sufficient ambiguity that a computer program would have difficulty associating a set of feature vectors with the desired result.

An additional challenge relates to the selection of feature extraction algorithms. There is no single correct approach to feature extraction for general-purpose CBIR, nor a single type of feature which is demonstrably better across all applications than other features. This grants significant leeway for creative and exploratory approaches in the field. The similarity measures used to compare feature vectors also vary, with popular approaches using distances such as the Euclidean distance or Minkowski distance (Kaur & Jyoti, 2013), or, for tree-structured features, the tree edit distance (Yang, Kalnis & Tung, 2005).
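The Minkowski distance mentioned above generalises several common distances through a single order parameter p; the sketch below shows that p = 1 yields the Manhattan distance and p = 2 the familiar Euclidean distance:

```python
import numpy as np

def minkowski_distance(u, v, p=2):
    """Minkowski distance of order p between two feature vectors.

    p = 1 gives the Manhattan (city-block) distance and p = 2 the
    Euclidean distance; larger p increasingly emphasises the largest
    per-component difference.
    """
    diff = np.abs(np.asarray(u, dtype=float) - np.asarray(v, dtype=float))
    return float(np.sum(diff ** p) ** (1.0 / p))

u, v = [0.0, 3.0], [4.0, 0.0]
print(minkowski_distance(u, v, p=1))  # 7.0 (Manhattan)
print(minkowski_distance(u, v, p=2))  # 5.0 (Euclidean)
```

Unlike a similarity score, a distance of 0 indicates identical feature vectors, so retrieval systems rank results in ascending order of distance.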

Veltkamp & Tanase (2000) describe alternatives to CBIR. These include browsing an image database until the target image is located, specifying the target image in terms of a keyword or image query, or providing a sketch of the desired image and using relevance feedback to improve the results. These approaches are labour-intensive and have less potential for computer automation than CBIR.

Key Terms in this Chapter

Semantic Gap: The difficulty of determining a set of image features that correspond to a certain semantic meaning.

Content-Based Image Retrieval (CBIR): The field of representing, organising and searching images based on their content rather than image annotations.

Feature Extraction Algorithm: The algorithm used to create feature vectors from image features.

Image Semantics: Meaning and significance of an image.

Image Annotations: Additional information provided about an image through manual annotation by a human observer or through computer analysis of the image.

Feature Vector: A numerical representation of an image feature, computed by a feature extraction algorithm, which concisely encodes the important details of the source image.

Similarity Measure: A comparison between feature vectors that returns a numerical representation of how similar they are.
