Combining Block DCV and Support Vector Machine for Ear Recognition

Zhao Hailong, Yi Junyan
DOI: 10.4018/IJITN.2016040104


In recent years, automatic ear recognition has become a popular research topic. Effective feature extraction is one of the most important steps in content-based ear image retrieval applications. In this paper, the authors propose a new feature-vector construction method for ear retrieval based on the Block Discriminative Common Vector (DCV). In this method, the ear image is first divided into 16 blocks, and features are extracted by applying DCV to the sub-images. A Support Vector Machine is then used as the classifier to make the final decision. The experimental results show that the proposed method performs better than the classical PCA+LDA approach, so it is an effective human ear recognition method.
Article Preview

1. Introduction

With the development of society, the demand for identity validation has been increasing rapidly. Biometrics has therefore received more and more attention in recent years, and it plays an important role in almost every aspect of new security measures, from access-control points to terrorist identification. The ear has certain advantages over other biometrics because of desirable properties such as universality, uniqueness, and permanence (Iannarelli, 1989; Chang, Bowyer, Sarkar & Victor, 2003).

In recent years, discriminant subspace analysis has been extensively studied in computer vision and pattern recognition. One popular method is Linear Discriminant Analysis (LDA), also known as the Fisher Linear Discriminant (FLD). It seeks an optimal linear transformation that maximizes the between-class scatter while minimizing the within-class scatter. More specifically, in terms of the between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$, Fisher's criterion can be written as

$$J(W) = \frac{\left| W^T S_b W \right|}{\left| W^T S_w W \right|}$$
By maximizing the criterion $J(W)$, the Fisher Linear Discriminant finds the subspace in which the classes are most linearly separable. The solution that maximizes $J(W)$ is a set of eigenvectors $w_i$ that must satisfy

$$S_b w_i = \lambda_i S_w w_i$$
This is called the generalized eigenvalue problem. The discriminant subspace is spanned by the generalized eigenvectors, and the discriminability of each eigenvector is measured by its generalized eigenvalue; that is, the most discriminant direction corresponds to the maximal generalized eigenvalue. The problem can be solved by matrix inversion followed by an eigendecomposition, i.e., applying the eigendecomposition to $S_w^{-1} S_b$. Unfortunately, in many applications with high-dimensional data and few training samples, such as ear recognition, the scatter matrix $S_w$ is singular because the dimension of the sample data is generally greater than the number of samples. This is known as the undersampled, or small sample size, problem (Theodoridis & Koutroumbas, 2008; Friedman, 1989; Fukunaga, 1990).
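As an illustration of the derivation above (a minimal sketch, not the paper's implementation), the following snippet builds $S_b$ and $S_w$ from labeled data and solves the generalized eigenvalue problem with SciPy. The toy data is hypothetical and has more samples than dimensions, so $S_w$ stays nonsingular:

```python
import numpy as np
from scipy.linalg import eigh

def fisher_discriminant(X, y):
    """Solve S_b w = lambda * S_w w for the FLD projection.
    X: (n_samples, n_features) data matrix, y: integer class labels."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    # generalized symmetric eigenproblem; requires S_w to be nonsingular,
    # which holds only when there are enough samples per dimension
    eigvals, eigvecs = eigh(Sb, Sw)
    order = np.argsort(eigvals)[::-1]  # most discriminant direction first
    return eigvals[order], eigvecs[:, order]

# hypothetical toy data: 3 well-separated classes in 2 dimensions,
# 20 samples per class, so S_w is invertible
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, size=(20, 2))
               for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 20)
vals, vecs = fisher_discriminant(X, y)
```

With high-dimensional ear images and few training samples, `eigh(Sb, Sw)` fails because $S_w$ is singular, which is exactly the small sample size problem discussed above.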

In the last decade, many methods have been proposed to solve this problem (Belhumeur, Hespanha, & Kriegman, 1997; Liu & Wechsler, 1998; Yu & Yang, 2001; Chen, Liao, Lin, Ko & Yu, 2000; Huang, Liu, Lu & Ma, 2002). Each of these methods has its own drawback: it either discards discriminant information that is useful for classification or incurs a high computational cost. Cevikalp et al. (2005) put forward a method called Discriminative Common Vectors (DCV) that solves these problems successfully. However, when DCV is applied directly to high-dimensional sample images, the computational cost of training is still relatively large.
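As a rough sketch of the DCV idea (an illustration under simplifying assumptions, not the authors' exact algorithm), the snippet below projects samples onto the null space of $S_w$, where every sample of a class collapses to the same "common vector", and then applies PCA to those common vectors to obtain the discriminative directions. The toy data is hypothetical, with fewer samples than dimensions so that the null space of $S_w$ is nonempty:

```python
import numpy as np

def dcv_projection(X, y):
    """Sketch of Discriminative Common Vectors for the small sample
    size case (n_samples < n_features, so null(S_w) is nonempty)."""
    classes = np.unique(y)
    # centered class samples span the range of S_w
    diffs = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    # orthonormal basis Q of range(S_w) via SVD of the difference vectors
    U, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    Q = U[:, s > 1e-10]
    # project one sample per class onto null(S_w): x - Q Q^T x;
    # within a class this projection is the same for every sample
    commons = np.array([X[y == c][0] - Q @ (Q.T @ X[y == c][0])
                        for c in classes])
    # PCA on the common vectors yields at most (num_classes - 1) directions
    centered = commons - commons.mean(axis=0)
    Uc, sc, _ = np.linalg.svd(centered.T, full_matrices=False)
    W = Uc[:, sc > 1e-10]
    return W, commons

# hypothetical toy data: 3 classes, 50 dimensions, 4 samples per class
rng = np.random.default_rng(1)
d, n_per = 50, 4
X = np.vstack([rng.normal(0, 1, size=d) + rng.normal(0, 0.1, size=(n_per, d))
               for _ in range(3)])
y = np.repeat(np.arange(3), n_per)
W, commons = dcv_projection(X, y)
features = X @ W  # identical within each class, distinct across classes
```

The key property, which the sketch preserves, is that within-class variation vanishes entirely in the projected space, so classification reduces to comparing a probe against one common vector per class. The computational concern raised above is visible here too: the SVD operates on full-dimensional image vectors, which is what motivates the block-wise decomposition proposed in this paper.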
