Utilizing Context Information to Enhance Content-Based Image Classification


Qiusha Zhu (University of Miami, USA), Lin Lin (University of Miami, USA), Mei-Ling Shyu (University of Miami, USA) and Dianting Liu (University of Miami, USA)
Copyright: © 2013 |Pages: 17
DOI: 10.4018/978-1-4666-2940-0.ch006
Traditional image classification relies on text information such as tags, which requires considerable human effort to annotate. Therefore, recent work focuses more on training classifiers directly on visual features extracted from image content. The performance of content-based classification is improving steadily, but it still falls far below users' expectations. Moreover, in a web environment, the HTML surrounding texts associated with images naturally serve as context information and are complementary to content information. This paper proposes a novel two-stage image classification framework that aims to improve the performance of content-based image classification by utilizing the context information of web-based images. A new TF*IDF weighting scheme is proposed to extract discriminant textual features from HTML surrounding texts. Both content-based and context-based classifiers are built by applying multiple correspondence analysis (MCA). Experiments on web-based images from the Microsoft Research Asia (MSRA-MM) dataset show that the proposed framework achieves promising results.

1. Introduction

With the proliferation of digital photo-capture devices such as cameras, cell phones, and camcorders, and the exponential growth of Web 2.0, people, especially the young, have become accustomed to using photographs to record their daily lives and to sharing images on social network websites (Flickr, Twitter, Facebook, etc.) to show what they see and feel. This new lifestyle trend raises an issue for the multimedia data management field, namely how to organize these image data effectively. Generally speaking, image classification has gone through two developmental phases: text-based and content-based. Traditional text-based approaches, which can be traced back to the 1970s, usually rely on manual annotation (such as tagging and labeling) to perform image classification. The construction of an index (or a thesaurus) is mostly carried out by documentalists who manually assign a limited number of keywords describing the image content. However, the processing speed cannot meet today's requirements for fast and automatic organization and search of images. In order to automatically organize the large and growing number of online images, learning focused on image content analysis has gained popularity over traditional text-based analysis (Liu, Zhang, Lu, & Ma, 2007).

Content-based image classification approaches were introduced in the early 1990s to classify and index images on the basis of low-level and mid-level visual features derived from color, texture, or shape information (Lew, Sebe, Djeraba, & Jain, 2006). Although significant improvements have been achieved by using low-level visual features, content-based approaches still face many challenges, such as the semantic gap and varied image qualities. The semantic gap characterizes the difference between the semantic meaning of an image and its extracted low-level visual features. A lot of effort has been put into bridging this gap, but it remains difficult to conquer (Naphade et al., 2006). On the other hand, context information for images can be utilized as a complement to content information. Compared to low-level visual features, context information may better capture the semantics of images, under the assumption that the textual terms are actually related to the images. An example of such context information is the HTML surrounding texts associated with images in a web environment. Therefore, better image classification performance can be achieved by utilizing context information to enhance content-based image classification. To classify texts, i.e., to perform text categorization (TC), defined as the task of labeling texts with thematic categories from a predefined set (Sebastiani, 2002), many techniques have been borrowed from the information retrieval (IR) field. The TF*IDF (term frequency-inverse document frequency) weighting scheme (Jones, 2004) is the most famous one and has achieved great success.
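As a point of reference for the weighting scheme mentioned above, the conventional TF*IDF weight of a term t in document d is tf(t, d) * log(N / df(t)), where N is the number of documents and df(t) counts the documents containing t. The following minimal Python sketch illustrates this baseline only; it is not the chapter's proposed variant, and the function and variable names are illustrative:

```python
import math
from collections import Counter

def tf_idf(documents):
    """Conventional TF*IDF for a list of tokenized documents.

    Returns one {term: weight} dict per document, where
    weight = tf(t, d) * log(N / df(t)).
    """
    n_docs = len(documents)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)  # raw term frequency within this document
        weights.append({t: tf[t] * math.log(n_docs / df[t])
                        for t in tf})
    return weights
```

Note that a term appearing in every document receives weight zero, which is exactly the discriminative intuition behind IDF: ubiquitous terms carry no category information.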

In this paper, a novel two-stage image classification framework is proposed that integrates content-based and context-based classification. A new TF*IDF weighting scheme is also introduced to calculate term weights for textual feature extraction. At the first stage, a classifier based on multiple correspondence analysis (MCA) transaction weights (Lin, Shyu, & Chen, 2009) is trained on the visual features. At the second stage, both the predicted positive and the predicted negative results are refined by classifiers trained on the textual features. Fifteen concepts from the MSRA-MM dataset (Li, Wang, & Hua, 2009), ranging from highly imbalanced to balanced datasets, are used for evaluation. The experiments cover both the proposed new TF*IDF weighting scheme and the whole framework. The proposed TF*IDF variant is compared with the conventional TF*IDF and two supervised term weighting methods. The framework is evaluated by first comparing the separate content-based and context-based MCA classifiers with seven existing well-known classifiers, and then showing that the proposed framework achieves promising improvements over those results. Furthermore, comparisons are made with two existing approaches that fuse visual and textual features (Kalva, Enembreck, & Koerich, 2007; Rafkind, Lee, Chang, & Yu, 2006), which are introduced in the Related Work Section. The experimental results demonstrate that our framework, by effectively utilizing context information, can enhance content-based image classification performance.
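The two-stage flow described above can be sketched structurally as follows. This is only an assumed skeleton of the classify-then-refine idea: the chapter's classifiers are MCA-based, but since their internals are not reproduced in this excerpt, the classifiers here are stand-in callables returning a confidence score in [0, 1], and all names and the 0.5 threshold are hypothetical:

```python
def two_stage_classify(visual_clf, text_pos_clf, text_neg_clf,
                       visual_feats, text_feats, threshold=0.5):
    """Stage 1: content-based prediction from visual features.
    Stage 2: refine both predicted positives and predicted negatives
    using context-based (textual) classifiers.

    Each *_clf argument is a callable mapping one feature vector to a
    confidence score in [0, 1]; stand-ins for the chapter's MCA classifiers.
    """
    results = []
    for v, t in zip(visual_feats, text_feats):
        if visual_clf(v) >= threshold:
            # Predicted positive: confirm or overturn it with the
            # textual classifier trained on the positive branch.
            results.append(text_pos_clf(t) >= threshold)
        else:
            # Predicted negative: try to recover false negatives
            # using the textual classifier for the negative branch.
            results.append(text_neg_clf(t) >= threshold)
    return results
```

The point of the structure is that context information gets a chance to correct both kinds of stage-one errors: false positives on the positive branch and false negatives on the negative branch.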
