PCG-Based Biometrics

Takhellambam Gautam Meitei, Sinam Ajitkumar Singh, Swanirbhar Majumder
DOI: 10.4018/978-1-5225-5152-2.ch001

Abstract

These days, the wide usage of data has opened security vulnerabilities everywhere, which has led to research in biometrics aimed at improving security. With the broad development of technology, different forms of biometrics are now used in various applications, so fingerprints and faces are no longer the only traits employed in this field. The authors concentrate on the PCG (phonocardiogram) as a biometric in this chapter. Very few sources are available in this area, so it can be considered a nascent one. Recent proposals were examined, and it was observed that PCG reduces the risks of vulnerability faced by other biometric systems. A simple biometric system consists of steps such as preprocessing, segmentation, feature extraction, and a comparison or matching phase. In this chapter, the pre-processing steps using wavelets and the other feature extraction techniques implemented for PCG biometric systems by various researchers are reviewed. Later, for the matching phase, the Euclidean distance, GMM, FSR, and VQ methods are examined.
Chapter Preview

Introduction

Recent advancements in technology have created the need to securely identify a particular person. Various frauds and cybercrimes have driven the search for reliable means of identification. Previously, people used passwords (something uniquely known only to the user) or tokens (proof of owning something unique as identification). The chances of a password or token being stolen or shared are high, so biometrics was introduced to reduce these vulnerabilities.

Biometrics plays an important role in securing our identity. It can be understood as the ability of a system to identify a particular person based on unique biological features or patterns such as fingerprints, facial features, DNA, voice, iris and retina, palm prints, and signatures. The data obtained are compared with previously stored reference data, or templates, to determine whether the newly captured data could have been generated by the same person. A biometric authentication system therefore comprises two phases: an enrollment phase and an authentication phase. In the enrollment phase, as shown in Figure 1(a), a database is created by capturing the patterns or features that provide information about each individual. In the authentication phase, the newly captured features are searched against the stored templates for a match. Biometric authentication runs in two modes, depending upon the application: identification mode and verification mode.

  • 1. Identification Mode: This mode captures the biometric information containing the user's unique traits and searches the whole database for a match to the captured information. Here, the classification module is trained beforehand on various sets of extracted features. The features extracted from the user's input data are then compared with the features stored during training. The general block diagram of the identification mode is shown in Figure 1(b). After classification, the biometric system decides whose stored features the input sample matches.

  • 2. Verification Mode: This mode is similar to the identification mode except for the classifier used. The identification mode uses a 1:N classifier, while the classifier used here is a 1:1 classifier; the outcome is essentially a yes or no decision. The system compares the captured data with previously stored information about the same individual and authenticates that particular individual. The block diagram for the verification mode is shown in Figure 1(c). A minimal sketch contrasting the two modes follows this list.
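
To make the enrollment step and the 1:N versus 1:1 distinction concrete, here is a minimal sketch in Python, assuming Euclidean distance as the matching score (one of the matching methods reviewed later in the chapter); the function names, the placeholder feature extractor, and the threshold value are illustrative assumptions, not the chapter's prescription.

```python
import numpy as np

templates = {}  # subject_id -> stored feature vector (template)

def extract_features(pcg_signal):
    # Placeholder: a real system would use wavelet, MFCC, or other
    # features discussed later in the chapter.
    return np.asarray(pcg_signal, dtype=float)

def enroll(subject_id, pcg_signal):
    # Enrollment phase: store one template per enrolled subject.
    templates[subject_id] = extract_features(pcg_signal)

def identify(pcg_signal):
    # Identification mode (1:N): search every template, return the closest.
    query = extract_features(pcg_signal)
    return min(templates, key=lambda sid: np.linalg.norm(query - templates[sid]))

def verify(pcg_signal, claimed_id, threshold=1.0):
    # Verification mode (1:1): compare only with the claimed identity's template.
    query = extract_features(pcg_signal)
    return np.linalg.norm(query - templates[claimed_id]) <= threshold
```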

Figure 1.

General block diagrams of (a) Enrollment phase (b) Identification phase (c) Verification phase


Pre-processing steps are employed to minimize noise and prepare the PCG signal for better segmentation, which in turn yields clearer features in the feature extraction process. The extracted features are stored in a database during the enrollment phase. During authentication, after the preprocessing steps, the feature extraction stage yields information unique to an individual, and finally, in the classification stage, the data are compared with the information previously stored in the template, as shown in the block diagrams in Figure 1.
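
A minimal end-to-end sketch of such a pipeline is given below, assuming SciPy band-pass filtering as the pre-processing step and a simple averaged-spectrum feature; the cut-off frequencies, bin count, and helper names are illustrative assumptions rather than the techniques used by any particular author reviewed in this chapter.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(pcg, fs, low=25.0, high=400.0):
    # Band-pass filter to suppress out-of-band noise (illustrative
    # cut-offs; the chapter also reviews wavelet-based denoising).
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, pcg)

def extract_features(pcg, fs, n_bins=32):
    # Toy spectral feature: magnitude spectrum averaged into fixed bins.
    spectrum = np.abs(np.fft.rfft(preprocess(pcg, fs)))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])

def match_distance(features, template):
    # Smaller Euclidean distance means a closer match to the stored template.
    return np.linalg.norm(features - template)
```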

Key Terms in this Chapter

Wavelet Transform: A transform that represents various functions in terms of wavelets. It has an advantage over the Fourier transform in that it can decompose and reconstruct a signal accurately.
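
As an illustrative sketch only, assuming the PyWavelets (pywt) package, which the chapter itself does not mention, a discrete wavelet decomposition and reconstruction of a PCG frame might look like this; the wavelet family and decomposition level are arbitrary choices.

```python
import numpy as np
import pywt

pcg = np.random.randn(2000)                  # stand-in for a PCG frame
coeffs = pywt.wavedec(pcg, "db4", level=4)   # approximation + detail coefficients
approximation, details = coeffs[0], coeffs[1:]
reconstructed = pywt.waverec(coeffs, "db4")  # near-perfect reconstruction
```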

Chirp Z-Transform: An algorithm for evaluating the z-transform that was developed to overcome the restriction of the fast Fourier transform, which can evaluate the z-transform only along a limited contour.
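
A hedged sketch, assuming a recent SciPy release that provides scipy.signal.czt: the chirp z-transform can "zoom" into a narrow frequency band of a PCG frame instead of evaluating the full FFT grid; the band edges and point count below are arbitrary.

```python
import numpy as np
from scipy.signal import czt  # assumes SciPy >= 1.8

fs = 1000.0                               # assumed sampling rate
x = np.random.randn(1024)                 # stand-in for a PCG frame
f1, f2, m = 20.0, 200.0, 256              # band of interest and number of output points
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # ratio between contour points
a = np.exp(2j * np.pi * f1 / fs)                # starting point on the unit circle
zoomed_spectrum = czt(x, m=m, w=w, a=a)
```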

Variational Mode Decomposition: An algorithm developed to detect the maxima or minima of a signal after decomposing it into its principal modes.

Gaussian Mixture Model: A probabilistic model used to label unknown parameters of similar subsets of data within an overall data set.
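
As a hedged sketch, assuming scikit-learn (not prescribed by the chapter), a per-subject Gaussian mixture can be fitted to that subject's feature vectors during enrollment and later used to score a query recording; the feature dimension and component count are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

enrolled_features = np.random.randn(200, 13)   # stand-in enrollment feature vectors
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(enrolled_features)

query_features = np.random.randn(20, 13)       # stand-in query feature vectors
score = gmm.score(query_features)              # average log-likelihood; higher = closer match
```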

First-to-Second Ratio (FSR): In the study of phonocardiogram signals, the ratio between the average power of the first heart sound and that of the second heart sound.
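
A minimal sketch of that ratio, assuming the first (S1) and second (S2) heart sound segments have already been located by a segmentation step; the helper names are hypothetical.

```python
import numpy as np

def average_power(segment):
    segment = np.asarray(segment, dtype=float)
    return np.mean(segment ** 2)

def first_to_second_ratio(s1_segments, s2_segments):
    # FSR: mean power over the S1 segments divided by mean power over the S2 segments.
    s1_power = np.mean([average_power(s) for s in s1_segments])
    s2_power = np.mean([average_power(s) for s in s2_segments])
    return s1_power / s2_power
```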

Short Time Discrete Fourier Transform: A modified discrete Fourier transform in which the signal is analyzed over short time segments so that its frequency and phase content can be followed over time, since in practice a signal is never constant with time.
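
An illustrative sketch, assuming scipy.signal.stft; the window length and overlap are arbitrary choices for a PCG signal and are not taken from the chapter.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                              # assumed sampling rate
pcg = np.random.randn(4000)              # stand-in for a PCG recording
freqs, times, Zxx = stft(pcg, fs=fs, nperseg=256, noverlap=128)
magnitude = np.abs(Zxx)                  # time-frequency magnitude map
```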

Mel Frequency Cepstrum Coefficient: One of the coefficients that together make up a mel frequency cepstrum. The frequency bands of the mel frequency cepstrum are equally spaced on the mel scale.
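
A hedged sketch, assuming the librosa package (not named in the chapter); the sampling rate, mel-band count, and the simple per-recording averaging are illustrative assumptions.

```python
import numpy as np
import librosa

pcg = np.random.randn(8000).astype(np.float32)  # stand-in PCG signal
sr = 1000                                        # assumed sampling rate
mfcc = librosa.feature.mfcc(y=pcg, sr=sr, n_mfcc=13, n_mels=40)
feature_vector = mfcc.mean(axis=1)               # one 13-dimensional vector per recording
```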
