Solving the Small and Asymmetric Sampling Problem in the Context of Image Retrieval

Ruofei Zhang, Zhongfei (Mark) Zhang
DOI: 10.4018/978-1-60566-174-2.ch009

Abstract

This chapter studies user relevance feedback in image retrieval. We treat the problem as a standard two-class pattern classification problem that aims at refining retrieval precision by learning from user relevance feedback data. However, we investigate the problem by noting two important characteristics unique to it: small sample collections and asymmetric sample distributions between positive and negative samples. We develop a novel empirical Bayesian learning approach that solves the problem by explicitly exploiting these two characteristics; the methodology, BAyesian Learning in ASymmetric and Small sample collections, is thus called BALAS. In BALAS, different learning strategies are used for the positive and negative sample collections, respectively, based on these two characteristics. By defining the relevancy confidence as the posterior probability of relevance, we develop an integrated ranking scheme in BALAS that complementarily combines the subjective relevancy confidence with the objective similarity measure to capture the overall retrieval semantics. The experimental evaluations confirm the rationale of the proposed ranking scheme, and demonstrate that BALAS is superior to an existing relevance feedback method in the literature in capturing the overall retrieval semantics.
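To make the integrated ranking idea concrete, the following is a minimal sketch, not the authors' implementation, of how a relevancy confidence (taken as the posterior probability of relevance) might be combined with an objective similarity measure into a single ranking score. The names relevancy_confidence, similarity, and the mixing weight alpha are illustrative assumptions, not details from the chapter.

```python
import numpy as np

def combined_rank(query, images, relevancy_confidence, similarity, alpha=0.5):
    """Rank images by a convex combination of the subjective relevancy
    confidence P(relevant | image) and the objective similarity to the query.

    relevancy_confidence: callable, image -> posterior probability in [0, 1]
    similarity:           callable, (query, image) -> similarity in [0, 1]
    alpha:                illustrative mixing weight (an assumption)
    """
    scores = np.array([
        alpha * relevancy_confidence(img) + (1 - alpha) * similarity(query, img)
        for img in images
    ])
    # Higher score = more likely to satisfy the overall retrieval semantics.
    order = np.argsort(scores)[::-1]
    return order, scores[order]
```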
Chapter Preview

Introduction

Very large collections of images have become increasingly common. From stock photo collections and proprietary databases to the World Wide Web, these collections are diverse and often poorly indexed; unfortunately, image retrieval systems have not kept pace with the collections they are searching. How to effectively index and retrieve semantically relevant images according to users’ queries is a challenging task. Most existing image retrieval systems, such as the image search engines of Yahoo! (Yahoo! Search website) and Google (Google search website), are text-based: images are retrieved by matching the surrounding text, captions, keywords, and similar annotations. Although search and retrieval techniques based on textual features can be easily automated, they have several inherent drawbacks. First, a textual description cannot capture the visual content of an image accurately, and in many circumstances textual annotations are not available at all. Second, different people may describe the content of an image in different ways, which limits the recall performance of text-based image retrieval systems. Third, for some images there is something that no words can convey. Imagine an editor selecting pictures without seeing them, or a radiologist making a diagnosis from a verbal description alone. The content of such images is beyond words; they have to be seen and searched as pictures: by objects, by style, by purpose.

To resolve these problems, Content-Based Image Retrieval (CBIR) has attracted significant research attention (Marsicoi et al., 1997; Ratan & Grimson, 1997; Sivic & Zisserman, 2003; Liu et al., 1998). In CBIR, a query image (an image to which a user tries to find similar ones) is submitted to the image retrieval system to obtain the semantically relevant images. The similarity between the query image and the indexed images in the image database is determined by their visual contents rather than by textual information. Early CBIR research focused on finding the “best” representation for image features, e.g., color, texture, shape, and spatial relationships. The similarity between two images is typically determined by the distances between individual low-level features, and retrieval is performed by a k-nearest-neighbor (k-NN) search in the feature space (Del Bimbo, 1999). In this context, high-level concepts and the subjectivity of user perception cannot be well modeled. Recent approaches introduce more advanced human-computer interaction (HCI) into CBIR. The retrieval procedure incorporates the user’s interaction into the loop, which consists of several iterations. In each iteration, the user labels positive samples (relevant images) as well as negative samples (irrelevant images) among the results returned from the previous iteration. Based on the user’s feedback, the retrieval system adaptively customizes the search results to the user’s query preference, as sketched below. This interaction mechanism is called relevance feedback; it allows a user to continuously refine his/her query after submitting a coarse initial query to the image retrieval system. This approach greatly reduces the labor required to precisely compose a query and easily captures the user’s subjective retrieval preference.
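As an illustration only, the following sketch shows the generic interaction loop described above: a coarse initial k-NN search in the feature space, followed by iterations in which user-labeled positive and negative samples are folded into a model that re-ranks the database. The callables get_user_labels, update_model, and score are hypothetical placeholders for whatever learning method a particular system uses.

```python
import numpy as np

def knn_retrieve(query_vec, feature_db, k=20):
    """Return indices of the k nearest images under Euclidean distance."""
    dists = np.linalg.norm(feature_db - query_vec, axis=1)
    return np.argsort(dists)[:k]

def relevance_feedback_loop(query_vec, feature_db, get_user_labels,
                            update_model, score, iterations=3, k=20):
    """Generic relevance feedback loop (a sketch, not a specific system).

    get_user_labels: callable, displayed results -> (positive_ids, negative_ids)
    update_model:    callable folding labeled samples into a model
    score:           callable, (model, feature_db) -> relevance scores
    """
    results = knn_retrieve(query_vec, feature_db, k)   # coarse initial query
    model = None
    for _ in range(iterations):
        pos_ids, neg_ids = get_user_labels(results)    # user casts feedback
        model = update_model(model, feature_db[pos_ids], feature_db[neg_ids])
        # Re-rank the whole database under the refined model.
        results = np.argsort(score(model, feature_db))[::-1][:k]
    return results
```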

However, most approaches to relevance feedback, e.g., (Rui et al., 1998; Picard et al., 1996; Porkaew et al., 1999; Zhang & Zhang, 2004), are based on heuristic formulations with empirical parameter adjustment, which are typically ad hoc rather than systematic and thus cannot be well substantiated. Some recent work (Wu et al., 2000; MacArthur et al., 2000; Tieu & Viola, 2000; Tong & Chan, 2001; Tao & Tang, 2004) formulates relevance feedback as a classification or learning problem. Without further exploiting the unique characteristics of the training samples in relevance feedback for image retrieval, however, it is difficult to map the image retrieval problem to a general two-class (i.e., relevance vs. irrelevance) classification problem in realistic applications.
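To see why the naive two-class formulation is problematic, consider the sketch below, which trains an off-the-shelf binary classifier directly on the labeled feedback. With only a handful of samples per feedback round, and with the few negatives covering a tiny, biased slice of the huge and heterogeneous irrelevance class, such a classifier is easily misled; this is exactly the small and asymmetric sampling problem the chapter addresses. The choice of logistic regression here is an illustrative assumption, not a method from the cited work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def naive_two_class_feedback(pos_features, neg_features, feature_db):
    """Naive mapping of relevance feedback to binary classification.

    pos_features, neg_features: feature vectors of user-labeled samples
    feature_db:                 feature vectors of the whole image database
    """
    X = np.vstack([pos_features, neg_features])
    y = np.concatenate([np.ones(len(pos_features)),
                        np.zeros(len(neg_features))])
    # A generic classifier treats the two sample sets symmetrically,
    # ignoring that negatives under-represent the irrelevance class.
    clf = LogisticRegression().fit(X, y)
    # Rank the whole database by the estimated P(relevant | features).
    return np.argsort(clf.predict_proba(feature_db)[:, 1])[::-1]
```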
