A Semi-Supervised Metric Learning for Content-Based Image Retrieval


I. Daoudi, K. Idrissi
Copyright: © 2011 | Pages: 11
DOI: 10.4018/ijcvip.2011070104

Abstract

In this paper, the authors propose a kernel-based approach to improve the retrieval performance of CBIR systems by learning a distance metric based on class probability distributions. Unlike other metric learning methods, which rely on local or global constraints, the proposed method learns for each class a nonlinear kernel that transforms the original feature space into a more effective one. The distances between the query and the database images are then measured in the new space. Experimental results show that the kernel-based approach not only improves on the retrieval performance of the kernel distance without learning, but also outperforms other kernel metric learning methods.
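To make the idea of measuring distances in a kernel-transformed space concrete: any kernel k induces a squared distance d(x, y)^2 = k(x, x) - 2k(x, y) + k(y, y) in its implicit feature space. The following minimal Python sketch illustrates this general mechanism with an RBF kernel and a hypothetical per-class bandwidth; it is only an illustration of the principle, not the authors' learned kernel.

import numpy as np

def rbf_kernel(x, y, gamma):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

def squared_kernel_distance(x, y, gamma):
    # Squared distance in the feature space induced by the kernel:
    # d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y)
    return (rbf_kernel(x, x, gamma)
            - 2.0 * rbf_kernel(x, y, gamma)
            + rbf_kernel(y, y, gamma))

# Hypothetical per-class kernel parameters: the paper learns a nonlinear
# kernel for each class; here each class simply gets its own bandwidth
# (an assumption made for this sketch).
gammas = {0: 0.5, 1: 2.0}
query = np.random.rand(16)
db_image = np.random.rand(16)
print(squared_kernel_distance(query, db_image, gammas[0]))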

1. Introduction

Content-based image retrieval (CBIR) has received much interest in recent decades, owing to the growth of digital storage and the easy access to images on computers and through the World Wide Web (Flickner et al., 1995). A common scheme in CBIR is to first automatically extract from each image a set of features (color, texture, shape, etc.) structured into descriptors (indexes). These indexes are then used by a search engine to compare, classify, and rank the database content.
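As a rough illustration of this scheme, the sketch below indexes a toy database with joint RGB color histograms and ranks it against a query by Euclidean distance; both the descriptor and the distance are illustrative assumptions, not the features or metric used in the paper.

import numpy as np

def color_histogram(image, bins=8):
    # Descriptor: joint RGB histogram over 8-bit pixel values,
    # normalized to sum to 1.
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def rank_database(query_desc, db_descs):
    # Rank database images by Euclidean distance to the query descriptor.
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)

# Toy example: random "images" of shape (H, W, 3).
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64, 3)) for _ in range(10)]
db_descs = np.stack([color_histogram(im) for im in database])
query = rng.integers(0, 256, (64, 64, 3))
print(rank_database(color_histogram(query), db_descs))  # best match first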

Two factors determine image retrieval performance: on the one hand, the features used to describe the images, and on the other, the distance used to measure the similarity between the query and the database images. It is well known that, for a given set of features, the performance of a content-based image retrieval system depends critically on the similarity or dissimilarity measure in use. Distance learning can therefore be considered one of the most promising ways to improve the performance of CBIR systems and to reduce the semantic gap.

Different learning strategies, such as supervised, unsupervised, and semi-supervised distance metric learning, are used to define a suitable similarity measure for content-based image retrieval.

The supervised approach can be divided into two categories. In the first, the distance metric is learned in a global sense, i.e., so as to satisfy all the pairwise constraints simultaneously; a review of methods in this category can be found in Xing, Ng, Jordan, and Russell (2003). In the second, the distance metric is learned in a local sense, satisfying only local pairwise constraints. Several authors (Xiang, Nie, & Zhang, 2008; Yang, 2007; Goldberger, Roweis, Hinton, & Salakhutdinov, 2005) used this approach to learn appropriate distance metrics for the kNN classifier. In particular, Domeniconi, Peng, Heisterkamp, and Dai (2002) propose a quasiconformal kernel for nearest-neighbor classification, which adjusts the radial basis function by introducing weights based on both the local consistency of class labels and the labeling uncertainty. In Bermejo and Cabestany (2001), the authors propose a technique that computes a locally flexible distance metric using SVM. The methods proposed in Goldberger et al. (2005), Peng, Heisterkamp, and Dai (2002), and Bermejo and Cabestany (2001) attribute weights to the features according to their relevance to the class conditional probabilities for each query. Hoi, Liu, Lyu, and Ma (2006) propose a simple and efficient algorithm to learn a full-rank Mahalanobis distance metric; this approach constructs a metric in kernel space as a weighted sum of class covariance matrices.
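To make the last idea concrete, the sketch below builds a Mahalanobis-style metric from a weighted sum of per-class covariance matrices, in the spirit of Hoi et al. (2006) but computed in the input space rather than in kernel space; the size-proportional class weights and the regularization term are assumptions of this sketch, not the published algorithm.

import numpy as np

def learn_mahalanobis(X, y, reg=1e-3):
    # Weighted sum of per-class covariance matrices; weights are
    # proportional to class size (an illustrative choice).
    d = X.shape[1]
    cov = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        cov += (len(Xc) / len(X)) * np.cov(Xc, rowvar=False)
    # The metric matrix M is the regularized inverse of the pooled covariance.
    return np.linalg.inv(cov + reg * np.eye(d))

def mahalanobis(u, v, M):
    diff = u - v
    return np.sqrt(diff @ M @ diff)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
labels = rng.integers(0, 3, size=100)
M = learn_mahalanobis(X, labels)
print(mahalanobis(X[0], X[1], M))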

The main idea of unsupervised distance metric learning methods is to learn a low-dimensional manifold in which the distances between most of the observed data points are preserved. These methods can be divided into nonlinear and linear approaches. The most popular methods for nonlinear unsupervised dimensionality reduction are ISOMAP (Tenenbaum, Silva, & Langford, 2000), Locally Linear Embedding (LLE) (Roweis & Saul, 2000), and Laplacian Eigenmap (LE) (Belkin & Niyogi, 2003). Locality Preserving Projections (LPP) (He & Niyogi, 2003) and Neighborhood Preserving Embedding (NPE) (He, Cai, Yan, & Zhang, 2005) are the linear approximations of LE and LLE, respectively. ISOMAP preserves the geodesic distances between any two data points, while LLE and LE focus on preserving the local neighborhood structure. The best-known unsupervised linear methods are Principal Component Analysis (PCA) (Schölkopf, Smola, & Müller, 1998), Multidimensional Scaling (MDS) (Cox & Cox, 1994), and Independent Component Analysis (ICA) (Bar-Hillel, Hertz, Shental, & Weinshall, 2005).
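For a quick hands-on comparison of these reductions, the sketch below runs PCA, ISOMAP, and LLE on toy descriptors using scikit-learn; the data, neighborhood size, and target dimensionality are arbitrary choices for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Toy high-dimensional descriptors standing in for image features.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 32))

# Linear reduction: PCA keeps the directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Nonlinear reductions: ISOMAP preserves geodesic distances,
# LLE preserves the local neighborhood structure.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape, X_lle.shape)  # each (200, 2)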
