Combining Block DCV and Support Vector Machine for Ear Recognition

Zhao Hailong, Yi Junyan
Copyright © 2018 | Pages: 10
DOI: 10.4018/978-1-5225-5204-8.ch030

Abstract

In recent years, automatic ear recognition has become a popular research topic. Effective feature extraction is one of the most important steps in content-based ear image retrieval applications. In this paper, the authors propose a new feature-vector construction method for ear retrieval based on Block Discriminative Common Vectors (Block DCV). In this method, the ear image is first divided into 16 blocks, and features are extracted by applying DCV to the sub-images. A Support Vector Machine (SVM) is then used as the classifier to make the final decision. Experimental results show that the proposed method performs better than the classical PCA+LDA approach, making it an effective human ear recognition method.
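As a rough illustration of this pipeline (not the authors' code), the following Python sketch partitions an image into a 4x4 grid of 16 blocks and concatenates per-block projected features for an SVM; the function names, the RBF kernel, and the assumption that the image dimensions divide evenly by four are all illustrative choices:

```python
import numpy as np
from sklearn.svm import SVC

def split_into_blocks(img, grid=(4, 4)):
    """Partition an ear image into a 4x4 grid of 16 sub-images.
    Assumes the image height and width are divisible by the grid size."""
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    return [img[r*bh:(r+1)*bh, c*bw:(c+1)*bw].ravel()
            for r in range(grid[0]) for c in range(grid[1])]

def extract_features(img, block_projections):
    """Apply a pre-trained per-block projection W_k to each block and
    concatenate the 16 projected vectors into one feature vector."""
    blocks = split_into_blocks(img)
    return np.concatenate([W.T @ b for W, b in zip(block_projections, blocks)])

# Hypothetical usage: block_projections would come from a per-block DCV
# training step (see the DCV sketch in the Introduction below).
# X_train = np.array([extract_features(im, block_projections) for im in train_imgs])
# clf = SVC(kernel='rbf').fit(X_train, train_labels)
```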

1. Introduction

With the development of society, the demand for identity validation has been increasing rapidly. Biometrics is therefore receiving more and more attention and plays an important role in almost every aspect of new security measures, from access control points to terrorist identification. The ear has certain advantages over other biometrics because of desirable properties such as universality, uniqueness, and permanence (Iannarelli, 1989; Chang, Bowyer, Sarkar & Victor, 2003).

In recent years, discriminant subspace analysis has been extensively studied in computer vision and pattern recognition. One popular method is Linear Discriminant Analysis (LDA), also known as the Fisher Linear Discriminant (FLD). It seeks an optimal linear transformation that maximizes the between-class scatter while minimizing the within-class scatter. More specifically, in terms of the between-class scatter matrix $S_B$ and the within-class scatter matrix $S_W$, Fisher's criterion can be written as

$$J(W) = \frac{\left|W^T S_B W\right|}{\left|W^T S_W W\right|}$$
(1)
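As a concrete illustration (not from the chapter), the two scatter matrices and the criterion of Eq. (1) can be computed in a few lines of NumPy; the function names here are illustrative:

```python
import numpy as np

def scatter_matrices(X, y):
    """S_W and S_B for data X (n_samples x dim) with class labels y."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                   # scatter within class c
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # scatter of class means
    return Sw, Sb

def fisher_criterion(W, Sw, Sb):
    """J(W) of Eq. (1): ratio of projected between- to within-class scatter."""
    return np.linalg.det(W.T @ Sb @ W) / np.linalg.det(W.T @ Sw @ W)
```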

By maximizing the criterion $J(W)$, the Fisher Linear Discriminant finds the subspace in which the classes are most linearly separable. The solution that maximizes $J(W)$ is a set of eigenvectors $\{w_i\}$ which must satisfy

$$S_B w_i = \lambda_i S_W w_i$$
(2)

This is called the generalized eigenvalue problem. The discriminant subspace is spanned by the generalized eigenvectors, and the discriminability of each eigenvector is measured by the corresponding generalized eigenvalue, i.e., the most discriminant direction corresponds to the maximal generalized eigenvalue. The generalized eigenvalue problem can be solved by matrix inversion followed by an eigen-decomposition, i.e., applying the eigen-decomposition to $S_W^{-1} S_B$. Unfortunately, in many applications with high-dimensional data and few training samples, such as ear recognition, the scatter matrix $S_W$ is singular because the dimension of the sample data is generally greater than the number of samples. This is known as the undersampled, or small sample size, problem (Theodoridis & Koutroumbas, 2008; Friedman, 1989; Fukunaga, 1990).
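When $S_W$ is nonsingular, Eq. (2) can be solved exactly as described, for instance with SciPy's generalized symmetric eigensolver; this short sketch (with illustrative names) also shows where the undersampled case breaks down:

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(Sw, Sb, n_classes):
    """Solve S_B w = lambda S_W w (Eq. 2). Requires S_W to be nonsingular,
    i.e., more training samples than feature dimensions; otherwise the
    solver fails, which is exactly the small sample size problem."""
    eigvals, eigvecs = eigh(Sb, Sw)            # generalized symmetric problem
    order = np.argsort(eigvals)[::-1]          # most discriminant first
    return eigvecs[:, order[:n_classes - 1]]   # rank(S_B) <= n_classes - 1
```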

In the last decade, many methods have been proposed to solve this problem (Belhumeur, Hespanha & Kriegman, 1997; Liu & Wechsler, 1998; Yu & Yang, 2001; Chen, Liao, Lin, Ko & Yu, 2000; Huang, Liu, Lu & Ma, 2002). Each has its own drawback: it either discards discriminant information useful for classification or incurs an expensive computational cost. Cevikalp et al. put forward a method called Discriminative Common Vectors (DCV) that solves the above problems successfully. However, when DCV is applied directly to high-dimensional sample images, the computational expense of training is still relatively large.
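To make the DCV idea concrete, here is a minimal NumPy sketch of the basic algorithm in the small-sample-size case, assuming the feature dimension exceeds the number of training samples; the function name and rank tolerance are illustrative choices, not the authors' implementation. Each class collapses to a single "common vector" once the component lying in the range of $S_W$ is removed, and the final projection is obtained from the scatter of those common vectors.

```python
import numpy as np

def dcv_projection(X, y, tol=1e-10):
    """Sketch of Discriminative Common Vectors for the case dim > n_samples.
    X: (n_samples, dim) training data, y: class labels."""
    classes = np.unique(y)
    # Within-class difference vectors span the range of S_W.
    diffs = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    U, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    Q = U[:, s > tol]                      # orthonormal basis of range(S_W)
    # Common vector per class: project any one sample onto null(S_W).
    commons = np.array([x - Q @ (Q.T @ x)
                        for x in (X[y == c][0] for c in classes)])
    # Discriminative directions: principal directions of the common vectors
    # (at most n_classes - 1 of them).
    centered = commons - commons.mean(axis=0)
    Uc, sc, _ = np.linalg.svd(centered.T, full_matrices=False)
    return Uc[:, sc > tol]                 # projection matrix W (dim x r)

# Features are then X @ W; all training samples of a class map to the
# same point, which is what makes the common vectors discriminative.
```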
