Content Based Image Retrieval Using Active-Nets


David García Pérez (University of Santiago de Compostela, Spain), Antonio Mosquera (University of Santiago de Compostela, Spain), Stefano Berretti (University of Firenze, Italy) and Alberto Del Bimbo (University of Firenze, Italy)
DOI: 10.4018/978-1-60566-174-2.ch005


Content-based image retrieval has been an active research area in recent years. Many solutions have been proposed to improve retrieval performance, but most of these works have focused on sub-parts of the retrieval problem, providing targeted solutions only for individual aspects (i.e., feature extraction, similarity measures, indexing, etc.). In this chapter, we first briefly review some of the main established solutions for content-based image retrieval, highlighting their main issues. We then propose an original approach for extracting relevant image objects and matching them for retrieval applications, and present a complete image retrieval system that uses this approach (including similarity measures and image indexing). In particular, image objects are represented by a two-dimensional deformable structure, referred to as an "active net," which is capable of adapting to relevant image regions according to chromatic and edge information. An extension of the active nets has been defined that permits the nets to break themselves, thus increasing their capability to adapt to objects with complex topological structure. The resulting representation allows a joint description of the color, shape, and structural information of extracted objects. A similarity measure between active nets has also been defined and used to combine the retrieval with an efficient indexing structure. The proposed system has been evaluated on two large, publicly available object databases, namely ETH-80 and ALOI.
Chapter Preview


Effective access to modern archives of digital images requires that conventional searching techniques based on textual keywords be extended with content-based queries addressing the visual features of the searched data. To this end, many solutions have been developed that represent and compare images in terms of quantitative indexes of visual features. In particular, different techniques have been identified and tested to represent the content of single images according to low-level features, such as color, texture, shape, and structure; intermediate-level features of saliency and spatial relationships; or high-level traits modeling the semantics of image content (Del Bimbo, 1999; Gupta & Jain, 1997; Lew et al., 2006; Smeulders et al., 2000). The extracted features may refer either to the overall image (e.g., a color histogram) or to any subset of pixels constituting a spatial entity with some visual cohesion in the user's perception (e.g., an object).
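The color histogram mentioned above is the simplest example of a global feature. The following is a minimal sketch, not the chapter's own implementation: it assumes the image is given as a list of (r, g, b) tuples in 0..255, and the number of bins per channel is an illustrative choice.

```python
# Minimal sketch of a global color-histogram feature (illustrative only).
# Each channel is quantized into a few bins; the histogram counts pixels
# per (r, g, b) bin and is normalized to be independent of image size.
def color_histogram(pixels, bins_per_channel=4):
    step = 256 // bins_per_channel
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)  # quantized color bin
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

# Usage on a tiny synthetic "image" of four pixels.
image = [(255, 0, 0), (250, 5, 5), (0, 0, 255), (0, 255, 0)]
hist = color_histogram(image)
```

Because the histogram discards all spatial layout, two images with very different object arrangements but similar color content receive similar descriptors, which is exactly the limitation of global features discussed below.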

Among these approaches, image representations based on chromatic indexes have been widely used in general-purpose image retrieval systems, as well as for object-based search that is partially robust to changes in object shape and pose. Such representations have been extensively tested and form the backbone of most commercial and research retrieval engines, such as QBIC (Flickner et al., 1995), Virage (Swain et al., 1991), VisualSEEk (Smith & Chang, 1996), and SIMPLIcity (Wang et al., 2001). This is mainly due to the capability of color-based models to combine the robustness of automatic construction with a relative perceptual significance.
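A classic way such engines compare color descriptors is histogram intersection, which sums the bin-wise minima of two normalized histograms. The sketch below is a generic illustration of that measure, not the specific matching function of any engine named above:

```python
# Histogram intersection between two normalized histograms stored as
# {bin_key: weight} dicts. For normalized inputs the score lies in
# [0, 1], with 1 meaning identical color distributions.
def histogram_intersection(h1, h2):
    keys = set(h1) | set(h2)
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in keys)

# Usage: bin keys here are illustrative color names, not real bins.
h_query = {"red": 0.5, "blue": 0.5}
h_candidate = {"red": 0.25, "green": 0.75}
score = histogram_intersection(h_query, h_candidate)
```

The measure is cheap to compute and tolerant of small color shifts, which helps explain the popularity of chromatic indexes noted in the text.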

However, approaches based on global image features are not appropriate for precise retrieval that accounts for perceptual details in the image. Region-based solutions are better suited to this end. In fact, much research has recently focused on region-based techniques that allow the user to specify a particular region of an image and search for images containing similar regions. However, most existing region- or object-based systems rely on color segmentation only.

Together with color, texture is a powerful discriminating feature, present almost everywhere in nature. Textures may be described according to their spatial, frequency, or perceptual properties. Features of the apparent shape of imaged objects have also been used to represent image content through a variety of approaches. For the purpose of retrieval by shape similarity, representations are preferred in which the salient perceptual aspects of a shape are captured and the human notion of closeness between shapes corresponds to topological closeness in the representation space. As a consequence, in contrast to color information, other retrieval schemes are entirely based on shape content. Most of the work on region-shape recognition relies on matching sets of local image features (e.g., edges, lines, and corners), usually through statistical analysis that disregards relational information among the extracted features. Most of these methods have proved adequate only for simple, flat, man-made objects, and shape features alone are rarely sufficient to discriminate objects for the purpose of object-based retrieval. Only a few approaches have tried to combine color and shape information to improve the significance of object representations.
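The requirement that perceptual closeness of shapes map to closeness in representation space can be illustrated with a very simple shape descriptor: the centroid-distance signature, normalized for scale invariance. This is a generic textbook descriptor chosen for illustration, not one of the methods surveyed in the chapter; contour points and sample counts below are assumptions.

```python
import math

# Distance from the shape centroid to evenly sampled boundary points,
# normalized by the mean so the signature is scale invariant.
def centroid_distance_signature(contour, samples=4):
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    step = len(contour) / samples
    dists = [math.hypot(contour[int(i * step)][0] - cx,
                        contour[int(i * step)][1] - cy)
             for i in range(samples)]
    mean = sum(dists) / len(dists)
    return [d / mean for d in dists]

# Euclidean distance between signatures: small values = similar shapes,
# so nearby shapes stay nearby in the representation space.
def signature_distance(s1, s2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

# Usage: a square and a scaled copy of it yield (near-)identical signatures.
square = [(1, 1), (1, -1), (-1, -1), (-1, 1)]
big_square = [(3 * x, 3 * y) for x, y in square]
d = signature_distance(centroid_distance_signature(square),
                       centroid_distance_signature(big_square))
```

As the text notes, such shape-only descriptors are rarely sufficient on their own, which motivates the joint color-and-shape representation proposed in the chapter.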

The objective of this chapter is to present a complete content-based image retrieval system, mainly targeted at providing effective and efficient object-based retrieval using chromatic as well as shape information of image objects. To this end, solutions are proposed for each of the components constituting a modern image retrieval system, namely feature extraction, similarity matching, and image indexing.
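The active net at the core of the system is a two-dimensional grid of nodes that deforms under an internal smoothness energy and an external image energy. The toy relaxation below only sketches that general idea: the weights (alpha, beta) and the single attractor point standing in for the chromatic/edge energy are illustrative assumptions, not the chapter's actual energy functional.

```python
# Highly simplified active-net sketch (illustrative, not the chapter's
# method). Each node is pulled toward the mean of its 4-connected grid
# neighbors (internal smoothness energy) and toward a single feature
# point standing in for the image's chromatic/edge energy (external).
def relax_active_net(rows, cols, attractor, alpha=0.5, beta=0.1, iters=50):
    # Initialize nodes on a regular grid: net[r][c] = (x, y).
    net = [[(float(c), float(r)) for c in range(cols)] for r in range(rows)]
    for _ in range(iters):
        new = [[None] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                x, y = net[r][c]
                nbrs = [net[nr][nc] for nr, nc in
                        ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= nr < rows and 0 <= nc < cols]
                mx = sum(p[0] for p in nbrs) / len(nbrs)
                my = sum(p[1] for p in nbrs) / len(nbrs)
                # Move toward the neighbor mean and toward the attractor.
                nx = x + alpha * (mx - x) + beta * (attractor[0] - x)
                ny = y + alpha * (my - y) + beta * (attractor[1] - y)
                new[r][c] = (nx, ny)
        net = new
    return net

# Usage: a 3x3 net contracting toward a feature point at (1.0, 1.0).
net = relax_active_net(3, 3, (1.0, 1.0))
```

In the real system the external energy comes from chromatic and edge information across the whole image rather than a single point, and the extended nets may additionally break their links to adapt to objects with complex topology.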
