A Multi-Label Image Annotation With Multi-Level Tagging System

Kalaivani Anbarasan, Chitrakala S.
DOI: 10.4018/978-1-5225-3686-4.ch003

Abstract

The content-based image retrieval system retrieves relevant images based on image features. The lack of performance in content-based image retrieval systems is due to the semantic gap. Image annotation is a solution to bridge the semantic gap between low-level content features and high-level semantic concepts. Image annotation is defined as tagging images with single or multiple keywords based on low-level image features. The major issue in building an effective annotation framework is the integration of both low-level visual features and high-level textual information into an annotation model. This chapter focuses on a new statistical image annotation model towards a semantics-based image retrieval system. A multi-label image annotation with multi-level tagging system is introduced to annotate image regions with class labels and to extract color, location and topological tags for segmented image regions. The proposed method produced encouraging results, and the experimental results outperformed state-of-the-art methods.
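To make the per-region output described above concrete, the following is a minimal sketch of how a multi-level tag record for a segmented region might be organized, assuming a class label plus color, location and topological tags per region. The type names and fields (RegionTags, AnnotatedImage) are illustrative assumptions, not the chapter's actual data model.

```python
# Illustrative sketch only: per-region multi-level tags as described in the abstract.
from dataclasses import dataclass
from typing import List

@dataclass
class RegionTags:
    class_label: str   # region concept, e.g. "sky" (multi-label: one record per region)
    color_tag: str     # dominant color of the region, e.g. "blue"
    location_tag: str  # coarse position in the image, e.g. "top"
    topology_tag: str  # spatial relation to a neighbouring region, e.g. "above sea"

@dataclass
class AnnotatedImage:
    image_id: str
    regions: List[RegionTags]

# Example: a beach scene annotated at region level with multi-level tags.
example = AnnotatedImage(
    image_id="beach_001",
    regions=[
        RegionTags("sky", "blue", "top", "above sea"),
        RegionTags("sea", "blue", "center", "below sky"),
        RegionTags("sand", "yellow", "bottom", "below sea"),
    ],
)
```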

Automatic Image Annotation

Input images are automatically annotated with a set of pre-defined concepts representing objects or regions, which makes them well suited to image understanding. The automatic image annotation system accepts images as input and annotates them without human intervention. Much research (Chih-Fong Tsai, 2012; Fu Hao, 2012; Pandya & Shah, 2014; Siddiqui et al., 2015) has progressed in the area of automatic image annotation based on single or multiple concepts for a whole image or a region of interest.

The process of describing image contents comprises region choosing, feature extraction and feature quantization. Region choosing can be done with any of these methods: fixed partition, segmentation and region saliency. Features are extracted from image regions, and the pixel values of each region are condensed into feature values. Features can be global or local, depending on whether they describe the image as a whole or a segmented image region. The features extracted can be general or domain-specific. Feature quantization maps continuous feature values into discrete spaces, and the entire process is shown in Figure 1.
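A minimal sketch of this describe-and-quantize pipeline is given below, assuming fixed-partition region choosing, color-histogram features and k-means quantization; the chapter itself does not prescribe these particular choices, so treat the functions as illustrative rather than as the proposed method.

```python
import numpy as np
from sklearn.cluster import KMeans

def grid_regions(image, rows=4, cols=4):
    """Fixed-partition region choosing: split an HxWx3 image into a rows x cols grid."""
    h, w, _ = image.shape
    for r in range(rows):
        for c in range(cols):
            yield image[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]

def color_histogram(region, bins=8):
    """Feature extraction: condense region pixels into a normalized RGB histogram."""
    hist, _ = np.histogramdd(region.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def quantize(features, n_words=8):
    """Feature quantization: map continuous feature vectors to discrete visual words."""
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(features)
    return kmeans.labels_, kmeans

# Usage: pool features from all training regions, quantize once, and describe
# each region by its discrete visual-word index.
images = [np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)]  # placeholder data
features = np.array([color_histogram(r) for img in images for r in grid_regions(img)])
words, codebook = quantize(features)
```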

Figure 1. Semantics-based image description model

The three types of semantics-based image annotation are manual, automatic and semi-automatic. Manual and semi-automatic annotation involve manual intervention, whereas automatic annotation annotates images without human involvement. Automatic image annotation attaches image semantics to the test image based on the trained annotation model. A semantic image description is the better option, owing to the following reasons:

  • Computational cost is reduced due to semantic similarity rather than visual similarity.

  • Image retrieval is based on semantic matching, rather than visual matching, for a user input query image (see the sketch after this list).

  • A semantic description is better for image understanding than a visual feature image description.
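As a minimal sketch of the semantic matching contrasted with visual matching in the second point above, once images are annotated, retrieval can rank them by keyword-set overlap instead of comparing raw visual features. The Jaccard similarity used here, and the hypothetical annotated database, are illustrative assumptions only.

```python
def jaccard(tags_a: set, tags_b: set) -> float:
    """Overlap of two keyword sets, in [0, 1]."""
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def semantic_retrieve(query_tags: set, database: dict, top_k: int = 3):
    """Rank annotated database images by semantic similarity to the query tags."""
    ranked = sorted(database.items(),
                    key=lambda item: jaccard(query_tags, item[1]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical annotated database: image id -> set of annotation keywords.
database = {
    "img_01": {"sky", "sea", "sand"},
    "img_02": {"sky", "grass", "tree"},
    "img_03": {"building", "road", "car"},
}
print(semantic_retrieve({"sky", "sea"}, database))
```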

Image annotation is a solution to bridge the semantic gap between low-level content features and high-level semantic concepts (Yang et al., 2011; Yildrim, 2013; Jisha, 2013). The purpose of annotation is to assign relevant keywords to an image to improve image retrieval accuracy, thereby reducing the number of irrelevant images returned by image retrieval systems.
