Feature Learning for Offline Handwritten Signature Verification Using Convolutional Neural Network

Amruta Bharat Jagtap, Ravindra S. Hegadi, K.C. Santosh
Copyright: © 2019 | Pages: 9
DOI: 10.4018/IJTHI.2019100105
Abstract

In biometrics, handwritten signature verification is an important topic. In this article, the authors' proposed method for verifying handwritten signatures is based on a deep convolutional neural network (CNN), a bio-inspired network modeled loosely on the human brain. The deep CNN extracts features from the studied images, and a cubic support vector machine then performs the classification. To evaluate the proposed work, the authors tested it on three different datasets: GPDS, BME2, and SVC20, and obtained encouraging results.
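The following is a minimal sketch of the kind of pipeline the abstract describes: a CNN used as a feature extractor followed by a cubic SVM (a support vector machine with a degree-3 polynomial kernel). It is not the authors' actual network; a pretrained ResNet-18 stands in for the signature CNN, and the genuine/ and forged/ image folders are hypothetical stand-ins for the GPDS, BME2, or SVC20 data.

```python
# Sketch only: CNN feature extraction + cubic SVM classification.
# Assumptions: ResNet-18 replaces the paper's CNN; folder names are hypothetical.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# CNN used purely as a fixed feature extractor (final classification layer removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Grayscale(num_output_channels=3),   # signature scans are typically grayscale
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cnn_features(image_path: str) -> np.ndarray:
    """Return a 512-D CNN feature vector for one signature image."""
    img = preprocess(Image.open(image_path)).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()

# Hypothetical layout: genuine/ (label 1) and forged/ (label 0) signature images.
X, y = [], []
for label, folder in ((1, "genuine"), (0, "forged")):
    for path in Path(folder).glob("*.png"):
        X.append(cnn_features(str(path)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# "Cubic SVM" = SVM with a degree-3 polynomial kernel.
clf = SVC(kernel="poly", degree=3)
clf.fit(X_train, y_train)
print("verification accuracy:", clf.score(X_test, y_test))
```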
Article Preview

Over the last few decades, the signature has been officially acknowledged as a verification tool for business, official, and financial transactions. Signature verification has a rich state-of-the-art literature. Verifying a signature nevertheless remains an open problem, since signatures vary over time (even for the same person), in addition to issues such as scale and orientation. It is difficult to design a descriptor that can differentiate genuine signatures from skilled forgeries. To obtain distinguishing features that improve performance, researchers have implemented several different techniques, many drawing on features widely used in computer vision.

Jagtap and Hegadi (2016) developed an approach based on eigenvalues (large and small eigenvalues computed from the upper and lower envelopes of signatures) classified with a neural network, achieving an accuracy of 98.1% and a false acceptance rate of 1.9%. Yılmaz and Yanikoglu (2016) presented a technique based on a combination of classifiers using different local features, such as local binary patterns (LBP) and histograms of oriented gradients (HOG); their work returned a 6.97% equal error rate on the GPDS-160 dataset. Hu and Chen (2013) proposed three pseudo-dynamic descriptors: (a) LBP, (b) the gray-level co-occurrence matrix (GLCM), and (c) HOG. With these features, on two datasets (GPDS and CSD), an SVM yielded error rates of 7.66% (GPDS) and 7.55% (CSD), while Real AdaBoost yielded 9.94% (GPDS) and 11.55% (CSD). Pal, Chanda, Pal, Franke, and Blumenstein (2012) proposed a new feature by integrating speeded-up robust features (SURF) and Gabor filters; using the GPDS dataset, they reported an accuracy of 97.05%.

To learn features in a writer-independent setting, Hafemann, Sabourin, and Oliveira (2016) used a deep convolutional neural network to extract features learned from a separate set of users. They reported experimental results on two datasets, GPDS-960 and Brazilian PUC-PR, with a significant improvement over existing results on GPDS-960 and improved performance on Brazilian PUC-PR. They later extended the technique (Hafemann, Sabourin, & Oliveira, 2017) to include knowledge of skilled forgeries from a subset of users in the feature learning process; across four datasets (MCYT, GPDS, CEDAR, and Brazilian PUC-PR), the minimum reported equal error rate (EER) was 1.72%. Ribeiro, Goncalves, Santos, and Kovacec (2011) presented a deep-learning model for offline handwritten signatures that extracts high-level features; for verification, they used a two-step hybrid model that reduced the false acceptance rate, where the first step identifies the owner of the signature and the second determines whether it is genuine or forged. Oquab, Bottou, Laptev, and Sivic (2014) used CNNs to build mid-level representations of images from large-scale datasets; such representations can be reused for many visual recognition tasks.
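For contrast with the learned-feature approaches above, the following is an illustrative sketch of the hand-crafted-descriptor baselines cited earlier (e.g., HOG features classified with an SVM, in the spirit of Hu & Chen, 2013). The file names, image size, and HOG parameters here are hypothetical choices, not the ones used in the cited works.

```python
# Sketch only: hand-crafted HOG descriptors + SVM for signature verification.
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(path: str) -> np.ndarray:
    """Load a signature scan, resize it, and compute its HOG descriptor."""
    img = resize(imread(path, as_gray=True), (128, 256))
    return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# Hypothetical training data: genuine signatures (label 1) and skilled forgeries (label 0).
train_paths = ["gen_01.png", "gen_02.png", "forg_01.png", "forg_02.png"]
train_labels = [1, 1, 0, 0]
X_train = np.stack([hog_descriptor(p) for p in train_paths])

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, train_labels)
print(clf.predict(hog_descriptor("query_signature.png").reshape(1, -1)))
```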
