Dealing with Higher Dimensionality and Outliers in Content-Based Image Retrieval

Seikh Mazharul Islam, Minakshi Banerjee, Siddhartha Bhattacharyya
Copyright © 2017 | Pages: 26
DOI: 10.4018/978-1-5225-1776-4.ch005

Abstract

This chapter proposes a content-based image retrieval method that deals with the high-dimensional features of images. Kernel principal component analysis (KPCA) is applied to the 64-bin MPEG-7 Color Structure Descriptor (CSD) to compute a low-dimensional nonlinear subspace. The Partitioning Around Medoids (PAM) algorithm is then used to shrink the search space further, where the number of clusters is chosen by the optimum average silhouette width. To refine these clusters, outliers in the query image's cluster are excluded by Support Vector Clustering (SVC). A One-Class Support Vector Machine (OCSVM) is then used to predict the relevant images within the query image's cluster; the initial retrieval results based on the similarity measure are fed to the OCSVM for training, and images are ranked from the positively labeled set. The method gives more than 95% precision before recall reaches 0.5 for conceptually meaningful query categories. Comparative results are also obtained from: 1) the MPEG-7 CSD features used directly and 2) other dimensionality reduction techniques.
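For illustration only, the following sketch approximates the retrieval pipeline described above with standard scikit-learn components (KernelPCA, KMedoids from scikit-learn-extra, silhouette_score, OneClassSVM) applied to synthetic 64-dimensional descriptors standing in for MPEG-7 CSD histograms. The Support Vector Clustering refinement step is omitted, and all parameter values and the data are assumptions rather than the chapter's actual settings.

```python
# Hedged sketch of the pipeline (not the authors' code): KPCA on CSD-like
# features -> PAM clustering selected by average silhouette width ->
# OCSVM trained on the initial nearest-neighbour results.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics import silhouette_score
from sklearn.svm import OneClassSVM
from sklearn_extra.cluster import KMedoids  # requires scikit-learn-extra

rng = np.random.default_rng(0)
X = rng.random((500, 64))        # placeholder for 64-bin CSD histograms
query = rng.random((1, 64))      # placeholder query descriptor

# 1. Low-dimensional nonlinear subspace via kernel PCA.
kpca = KernelPCA(n_components=16, kernel="rbf", gamma=0.1)
Z = kpca.fit_transform(X)
zq = kpca.transform(query)

# 2. PAM clustering; choose k by maximum average silhouette width.
best_score, best_model = -1.0, None
for k in range(2, 11):
    km = KMedoids(n_clusters=k, method="pam", random_state=0).fit(Z)
    s = silhouette_score(Z, km.labels_)
    if s > best_score:
        best_score, best_model = s, km

# 3. Restrict the search space to the query's cluster.
q_label = best_model.predict(zq)[0]
members = np.where(best_model.labels_ == q_label)[0]

# 4. Initial ranking by distance; OCSVM trained on the top-ranked images
#    then predicts which cluster members are relevant.
d = np.linalg.norm(Z[members] - zq, axis=1)
seed = members[np.argsort(d)[:20]]              # initial retrieval set
ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(Z[seed])
relevant = members[ocsvm.predict(Z[members]) == 1]
ranked = relevant[np.argsort(np.linalg.norm(Z[relevant] - zq, axis=1))]
print(ranked[:10])
```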
Chapter Preview

Introduction

High-dimensional multi-array data and exponential information growth are now widely observed in multimedia retrieval, modern computer vision research (Ju, Sun, Gao, Hu, & Yin, 2015), social networking, text processing, gene expression dataset analysis, etc., with respect to both dimensionality and sample size. Dimensionality is roughly doubling every 20 months or so (Jensen & Shen, 2008). This high dimensionality not only increases the computational overhead and memory requirements of algorithms, but also adversely affects their performance in real applications. High-dimensional data, however, are often not distributed uniformly in their ambient space; instead they lie in or close to a low-dimensional space or manifold (Torki, Elgammal, & Lee, 2010), (Zhang, Huang, & Wang, 2010). Obtaining a low-dimensional feature set from high-dimensional data has therefore been a challenging problem in machine learning research. Knowledge acquisition and refinement are only valuable when the knowledge can be used efficiently and effectively. The accumulated data may contain noise because of technological imperfections during capture, or the data source itself may be noisy. In the medical domain, any inadequacy in the imaging device may produce noisy medical images and can lead to wrong diagnoses. In social media, data are unstructured in nature: formal and informal content of varying quality, spam, grammatical mistakes, wrong spelling, and misused punctuation are all prevalent. Extracting useful knowledge from such data is very difficult. Dimensionality reduction in this context is very effective for getting rid of noisy and redundant features. It covers both feature extraction and feature selection, as sketched below. In feature extraction, the original feature space is transformed into a new feature space of smaller dimension, and the semantic meaning of the original features is lost. In feature selection, a subset of the original feature set is retained by eliminating features with little or no predictive information, minimizing redundancy while maximizing relevance to the target, such as the class labels in classification.
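As a concrete illustration of the two routes (the data, labels, and parameter choices here are toy assumptions, not from the chapter), feature extraction builds an entirely new low-dimensional space, whereas a filter-style selector keeps a relevant subset of the original dimensions:

```python
# Illustrative contrast between feature extraction and feature selection.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(1)
X = rng.random((200, 64))            # toy 64-dimensional descriptors
y = rng.integers(0, 4, size=200)     # toy class labels (the "target")

# Feature extraction: a new 8-dimensional space; the original bins lose
# their semantic meaning.
X_extracted = KernelPCA(n_components=8, kernel="rbf").fit_transform(X)

# Feature selection: keep the 8 original bins most relevant to the labels.
selector = SelectKBest(mutual_info_classif, k=8).fit(X, y)
X_selected = selector.transform(X)

print(X_extracted.shape, X_selected.shape)   # (200, 8) (200, 8)
print(selector.get_support(indices=True))    # indices of the retained bins
```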

Theoretically, the problem with higher-dimensional data can be illustrated using a K-Nearest Neighbors (KNN) classifier (Hastie, Tibshirani, & Friedman, 2008). Suppose the inputs are uniformly distributed in the unit hypercube [0, 1]^p, and the task is to decide the class of a test point x by “growing” a hypercube around x until it contains a desired fraction r of the training data points. The expected edge length of this cube is e_p(r) = r^{1/p}, since e_p(r)^p = r. For p = 10, e_10(0.01) = 0.01^{1/10} ≈ 0.63 and e_10(0.1) = 0.1^{1/10} ≈ 0.80, which means that to use 1% or 10% of the data to form a local average, one must cover 63% or 80% of the range along each dimension around x. Such neighborhoods are no longer “local”, in spite of the name “nearest neighbor”. Reducing r does not help, because averaging over fewer observations results in higher variance. Figure 1 illustrates this curse of dimensionality.
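A quick numerical check of the edge-length formula e_p(r) = r^{1/p} (a sketch, not tied to any particular dataset):

```python
# Edge length of the sub-cube needed to capture a fraction r of points
# uniformly distributed in [0, 1]^p: e_p(r) = r**(1/p).
def edge_length(r: float, p: int) -> float:
    return r ** (1.0 / p)

for p in (1, 3, 10):
    print(p, round(edge_length(0.01, p), 2), round(edge_length(0.10, p), 2))
# p = 10 gives roughly 0.63 and 0.80: a "local" neighborhood spans most of
# the range along every axis.
```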

Figure 1.

An illustration of the curse of dimensionality. The right panel shows, for various dimensions p, the side length of the sub-cube required to capture a fraction r of the volume of the data. For p = 10, capturing 10% of the data requires covering 80% of the range of each dimension.

However, in the last couple of decades, considerable progress has been made in developing tools and formulating algorithms for dimensionality reduction and feature selection (Celik, Logsdon, & Aase, 2014), (Jenatton, Obozinski, & Bach, 2010), (Kong, Wang, Teoh, Wang, & Venkateswarlu, 2005), (Lu, Plataniotis, & Venetsanopoulos, 2008).
