Color Features and Color Spaces Applications to the Automatic Image Annotation

Vafa Maihami (Semnan University, Iran) and Farzin Yaghmaee (Semnan University, Iran)
DOI: 10.4018/978-1-4666-9685-3.ch015

Nowadays images play a crucial role in fields such as medicine, advertising, education and entertainment. Describing image content and retrieving images are important problems in image processing. Automatic image annotation is the process by which a computer produces words describing a digital image based on its content. In this chapter, after an introduction to the neighbor-voting algorithm for image annotation, we discuss the applicability of color features and color spaces in automatic image annotation. We examine three color features (color histogram, color moments and color autocorrelogram) and three color spaces (RGB, HSI and YCbCr). Experimental results on the Corel5k benchmark dataset of annotated images demonstrate how comparing different color spaces and color features helps to select the best ones for image annotation.
Chapter Preview


Technology, in the form of inventions such as photography and television, has played a major role in facilitating the capture and communication of image data. Nowadays images play a crucial role in fields as diverse as medicine, journalism, advertising, design, education and entertainment. Managing images in human communication is hard, and the process of digitization does not in itself make image collections easier to manage. Images are described by low-level features (color, texture, shape, etc.), while high-level features are the words a human would use to describe an image. The gap between low-level and high-level features can be bridged by image annotation (Dengsheng Zhang, 2012; R. Datta, 2008; Y. Liu, 2007). Image annotation is a process which produces metadata (text or keywords) from a digital image based on its visual content (Dengsheng Zhang, 2012; Wang, 2011). The aim of image annotation is to assign appropriate words to describe an image, see Figure 1. Image annotation can be done either by machines or by humans, but performing it by hand is tedious, costly and often erroneous or ambiguous. In practice, therefore, image annotation usually refers to the automatic process performed by machines.

Figure 1.

Image annotation process
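As a concrete illustration of the color histogram, one of the low-level color features discussed in this chapter, the following sketch (in Python with NumPy; the bin count and the toy image are our own choices, not taken from the chapter) quantizes each RGB channel into a few levels and counts the pixels falling into each joint color bin:

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Quantize each RGB channel into `bins_per_channel` levels and
    count pixels per (r, g, b) bin; normalize so the histogram sums to 1."""
    # Map 0..255 intensities to bin indices 0..bins_per_channel-1.
    q = (image.astype(np.int64) * bins_per_channel) // 256
    # Combine the three per-channel indices into a single joint bin id.
    ids = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(ids.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()

# A tiny synthetic 2x2 RGB "image": two red pixels, one green, one blue.
img = np.array([[[255, 0, 0], [255, 0, 0]],
                [[0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
h = color_histogram(img)
print(h.sum())        # 1.0 (normalized)
print((h > 0).sum())  # 3 distinct occupied bins
```

The resulting 64-dimensional vector is a global feature: it records which colors occur and how often, but discards where they occur, which is exactly the trade-off global color features make.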

Most existing methods of automatic image annotation make use of the visual content and often rely heavily on supervised machine learning (J. Liu, 2009; Su, 2011; Y. Han, 2012; Yang, 2012; Zenghai Chen, 2013). The block diagram of an image annotation system is shown in Figure 2. A training phase is performed to obtain the annotation model. An image dataset is used in this phase: images are segmented and important features are extracted for the learning model, following either a global feature-based or a region-based approach. Learning models are usually derived by machine learning algorithms such as neural networks, support vector machines and decision trees. The final step of the training phase yields an image annotation model learned from the dataset. The second phase is testing. The images we want to annotate are given as input; pre-processing and feature extraction are performed on them, the extracted features are passed to the model learned in the training phase, and the resulting annotations are produced as output.

Figure 2.

The block diagram of an image annotation system
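The two-phase pipeline described above can be sketched as a minimal nearest-neighbor annotator. The feature vectors and keyword sets below are toy placeholders standing in for extracted features and a labeled training set, not the chapter's actual model:

```python
import numpy as np

# Training phase (toy): each training image is represented by a global
# feature vector (e.g. a color histogram) plus its annotation keywords.
train_features = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
train_keywords = [{"sky", "sea"}, {"sky"}, {"grass"}]

def annotate(query, k=2):
    """Testing phase: find the k nearest training images (Euclidean
    distance in feature space) and return the union of their keywords."""
    dists = np.linalg.norm(train_features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    words = set()
    for i in nearest:
        words |= train_keywords[i]
    return words

print(sorted(annotate(np.array([0.85, 0.15]))))  # ['sea', 'sky']
```

A real system would replace the toy vectors with features extracted from segmented or whole images and might learn a more elaborate model (neural network, SVM, decision tree), but the train-then-test structure is the same.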

However, modern applications, especially real-time applications with large and diverse visual content, require weak supervision that can effectively and efficiently estimate tag/keyword relevance. This paradigm has been developed recently (X. Li, 2009; Ro, 2013; Lei Wu, 2013).
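A minimal sketch of neighbor-voting tag relevance in this spirit (the neighbor tags and collection priors below are toy values, not the exact formulation of the cited works): a tag scores highly for an image when the image's visual neighbors use it more often than the collection-wide prior would predict.

```python
# Tags of the query image's k=4 visual neighbors (toy data).
neighbor_tags = [{"sky", "sea"}, {"sky"}, {"sky", "tree"}, {"car"}]
# Prior frequency of each tag across the whole collection (toy data).
collection_freq = {"sky": 0.3, "sea": 0.1, "tree": 0.2, "car": 0.4}

def tag_relevance(tag):
    """Votes for `tag` among the neighbors, minus the number of votes
    expected under the collection-wide prior."""
    k = len(neighbor_tags)
    votes = sum(tag in tags for tags in neighbor_tags)
    return votes - k * collection_freq[tag]

print(tag_relevance("sky"))  # positive: neighbors use "sky" more than the prior predicts
print(tag_relevance("car"))  # negative: "car" is no more common here than anywhere else
```

Subtracting the prior is what makes the supervision "weak": frequent but uninformative tags are penalized without any per-tag classifier being trained.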
