High-Level Features for Image Indexing and Retrieval

Gianluigi Ciocca (Università degli Studi di Milano-Bicocca, Italy), Raimondo Schettini (Università degli Studi di Milano-Bicocca, Italy), Claudio Cusano (Università degli Studi di Pavia, Italy) and Simone Santini (Universidad Autónoma de Madrid, Spain)
Copyright: © 2015 |Pages: 10
DOI: 10.4018/978-1-4666-5888-2.ch585


Background

The literature on content-based retrieval has become so vast that any attempt at exhaustiveness in a chapter like this would be futile. Instead, in this section we briefly describe those works in the state of the art that address the problem of extracting high-level features from images, grouping them by technical approach and general philosophy. The semantic gap can be narrowed by exploiting different image indexing techniques.

Text/Ontology

The first technique is text-based, meaning that the image content is described in terms of textual keywords. This description can be manually provided to the system or obtained using a vocabulary, an “object-ontology,” which provides a qualitative definition of high-level concepts in terms of low-level features. For example, an image region can be assigned the keyword “sky” if it is an “upper, uniform, and blue-colored region.” The “blue” attribute can itself be obtained from low-level features concerning color distribution (e.g., average RGB color components falling between predefined thresholds that characterize a generic sky region). Examples of this kind of descriptor, not limited to color but applied also to other low-level features, can be found in Ravishankar et al. (1993). Since an ontology is naturally hierarchical, that is, it contains not only a set of concepts but also relations between these concepts, an object can receive multiple keywords based on its nature. Moreover, the keyword vocabulary can refer not only to objects within the image but also to the image itself as a whole. In this case, the attributes or concepts of interest are global ones (e.g., the image category, such as “sunset,” “landscape,” “indoor,” “close-up,” etc.). An example of how an image can be hierarchically and globally described is ImageNet (Deng et al., 2009), an image database organized according to the WordNet (Miller et al., 1990) hierarchy, in which each node of the hierarchy contains hundreds to thousands of related images. Using a vocabulary of keywords, a query to retrieve similar images can be performed using traditional, text-based information retrieval techniques.
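The rule-based keyword assignment described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the actual method of Ravishankar et al. (1993): the `Region` attributes and all numeric thresholds are hypothetical, chosen only to show how qualitative concepts such as “upper,” “uniform,” and “blue” could be mapped onto low-level feature tests.

```python
# Illustrative rule-based labeling: assign the keyword "sky" to a region
# that is in the upper part of the image, roughly uniform in color, and
# predominantly blue. All thresholds below are assumed, not from the source.
from dataclasses import dataclass

@dataclass
class Region:
    center_y: float        # vertical center of the region, 0.0 = top, 1.0 = bottom
    color_stddev: float    # spread of pixel colors (a proxy for uniformity)
    mean_rgb: tuple        # average (R, G, B) components, each in 0..255

def label_region(region: Region) -> str:
    r, g, b = region.mean_rgb
    is_upper = region.center_y < 0.4          # "upper": assumed threshold
    is_uniform = region.color_stddev < 20.0   # "uniform": assumed threshold
    is_blue = b > 120 and b > r and b > g     # "blue": assumed color rule
    if is_upper and is_uniform and is_blue:
        return "sky"
    return "unknown"

print(label_region(Region(center_y=0.2, color_stddev=10.0,
                          mean_rgb=(90, 130, 200))))   # prints "sky"
```

In a real system such rules would typically be derived from the ontology and tuned on example images; the point here is only that each high-level keyword reduces to a conjunction of tests on low-level features.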

Key Terms in this Chapter

Low-Level Features: Descriptors extracted from an image containing information about its visual properties.

Prosemantic Features: Image descriptors based on the affinity of a given image to a set of image categories.

Content Based Image Retrieval System (CBIRS): A system that supports querying and retrieval of images exploiting information manually provided or automatically extracted from the images themselves.

High-Level Features: Descriptors derived from an image containing information about the semantics of its contents.
