CNN Customizations With Transfer Learning for Face Recognition Task

Chantana Chantrapornchai, Samrid Duangkaew
Copyright: © 2019 | Pages: 19
DOI: 10.4018/978-1-5225-7862-8.ch003

Abstract

Several kinds of pretrained convolutional neural networks (CNN) exist nowadays. Utilizing these networks for a new classification task requires retraining with new data sets. On a small embedded device, a large network cannot be deployed. The authors study the use of pretrained models and customize them toward accuracy and size for face recognition tasks. The results show 1) the performance and size of existing pretrained networks (e.g., AlexNet, GoogLeNet, CaffeNet, SqueezeNet), and 2) layer customization toward smaller model size while preserving accuracy. The results show that, among the various networks and data sets, SqueezeNet can achieve the same accuracy (0.99) as the others with a much smaller size (up to 25 times smaller). Second, two customizations with layer skipping are presented. The experiments show an example of customizing SqueezeNet layers, reducing the network size by 7% while keeping the accuracy, at the cost of slower convergence. The experiments are measured with Caffe 0.15.14.

Introduction

Face recognition is one of the recognition tasks with applications in many areas, such as surveillance, access control, video retrieval, and interactive gaming (Huang, Xiong, & Zhang, 2011). A face recognition pipeline typically proceeds from face identification, through face feature extraction, to feature template matching. Face identification is the process that locates the face bounding boxes in an image. Feature extraction is an important phase for creating a recognition model; several methods have been used, such as local feature extraction, template creation, and eigenfaces. Recently, deep learning has been popularly used for image classification. A deep network can also be used to perform image feature extraction (Yosinski, Clune, Bengio, & Lipson, 2014).
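As a small illustration of the face identification step, the following Python sketch (the image file name is hypothetical) uses dlib's frontal face detector, the same library OpenFace uses (see Background), to locate face bounding boxes:

```python
import dlib

# dlib's built-in HOG-based frontal face detector
detector = dlib.get_frontal_face_detector()

img = dlib.load_rgb_image('face.jpg')  # hypothetical input image

# The second argument upsamples the image once so smaller faces are found.
for box in detector(img, 1):
    print('face at', box.left(), box.top(), box.right(), box.bottom())
```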

Currently, there are many popular pretrained deep neural network models for recognition tasks. However, choosing a proper one requires many training experiments. Also, when the model is adopted in an embedded device, a large model may not be deployable.

While most of the literature focuses on model accuracy, in this work we study two aspects of the available pretrained models: accuracy and model size. The models studied are AlexNet, GoogLeNet, and SqueezeNet. The face recognition task is used as a classification benchmark since it has various applications on embedded platforms. The experimental methodology is designed to serve the following goals:

1. To find the performance of these networks on face recognition tasks and compare their consumed resources and training times.

2. To selectively transfer the pretrained weights to help accelerate the accuracy convergence.

3. To reduce the model size via layer customization.

Without pretrained models, constructing a face recognition model requires many hundreds of thousands of training iterations and millions of images. Adopting these networks via transfer learning can reduce the number of training iterations. Meanwhile, it may be possible to customize the architecture and the adopted weights, which can lead to a smaller network with similar performance. A minimal sketch of such weight transfer is shown below.
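In Caffe, transferring pretrained weights is driven by layer names: layers that keep their original names receive the pretrained weights, while renamed layers (for example, a new classification layer sized to the number of face subjects) are re-initialized and trained from scratch. The following pycaffe sketch assumes hypothetical file names: a solver pointing to a SqueezeNet prototxt whose final layer has been renamed, plus the publicly released weights.

```python
import caffe

caffe.set_mode_gpu()

# solver.prototxt references the customized network definition.
solver = caffe.get_solver('solver.prototxt')

# copy_from() matches layers by name; the renamed classifier layer
# receives no pretrained weights and keeps its random initialization.
solver.net.copy_from('squeezenet_v1.0.caffemodel')

solver.solve()  # fine-tune on the face data set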

Public face data sets contain anywhere from a few thousand to around a million subjects. Large examples include MegaFace (Kemelmacher-Shlizerman, Seitz, Miller, & Brossard, 2016), with around 600,000 subjects and 10 million images, and the data set from the Institute of Automation, Chinese Academy of Sciences (CASIA), which contains about 400,000 images (Yi, Lei, Liao, & Li, 2014). To find a proper network, one needs to train against these large data sets, which is very time consuming. In this chapter, our study contains extensive experiments that explore a variety of existing networks with pretrained weights, fine-tuning and customizing them.

Background

Currently, there are many existing works that apply deep learning to face recognition. Most deploy a deep network for the face detection task. Some of the works require a special loss function, while others require special training labels.

One of the popular works that uses neural networks to perform face recognition is OpenFace (Amos, Ludwiczuk, & Satyanarayanan, 2016), which uses the dlib library to detect faces. Pose estimation and an affine transformation are then performed to align the eyes and nose to the same positions. Next, an embedding of each face is generated by the deep neural network. The embedding is then used for classification by a conventional approach such as a Support Vector Machine (SVM). The generated model can be very large, depending on the number of subjects. Wen et al. used a deep neural network whose approach is based on the center loss (Wen, Zhang, Li, & Qiao, 2016). The center loss approach tries to find a center for the deep features of each class and to minimize the distances between the deep features and their corresponding class centers. The authors combined the center loss with the softmax loss to enhance recognition accuracy. Parkhi et al. presented a deep face network that utilizes the triplet loss to learn a face embedding (Parkhi, Vedaldi, & Zisserman, 2015). Their approach constructs a face classifier with a scoring vector and tunes it using the triplet loss. These approaches require a special loss layer to be computed in the network, which requires effort to modify the code of the existing model.
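To make the embedding-plus-classifier pipeline concrete, here is a minimal Python sketch (the embedding and label files are hypothetical stand-ins) that trains an SVM on precomputed face embeddings, such as the 128-dimensional vectors an OpenFace-style network produces:

```python
import numpy as np
from sklearn.svm import SVC

# Assumed inputs: one embedding per aligned face image, produced
# beforehand by a deep network, plus a subject label for each embedding.
embeddings = np.load('face_embeddings.npy')  # shape: (num_faces, 128)
labels = np.load('face_labels.npy')          # shape: (num_faces,)

# A conventional classifier trained on top of the learned embedding space.
clf = SVC(kernel='linear', probability=True)
clf.fit(embeddings, labels)

# Recognize a new face from its embedding.
query = embeddings[:1]
print(clf.predict(query), clf.predict_proba(query).max())
```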

Key Terms in this Chapter

SIFT Transform: Scale-invariant feature transform. A transformation used to find local features, such as keypoints, that are tolerant to operations such as rotation and scaling.

Eigenface: A principal component of a distribution of faces; that is, an eigenvector of the covariance matrix of the set of face images.

Fire Module: The sequence of layers that performs 1) squeezing using a conv1x1, 2) expanding into parallel conv1x1 and conv3x3 branches, and 3) concatenating the results.
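A minimal pycaffe sketch of this structure follows; the helper function and channel counts are illustrative (borrowed from SqueezeNet's fire2 module), not the chapter's exact definition:

```python
import caffe
from caffe import layers as L

def fire_module(bottom, n_squeeze, n_expand):
    # 1) squeeze: a 1x1 convolution reduces the channel count
    squeeze = L.Convolution(bottom, kernel_size=1, num_output=n_squeeze)
    squeeze = L.ReLU(squeeze, in_place=True)
    # 2) expand: parallel 1x1 and 3x3 convolutions
    expand1 = L.Convolution(squeeze, kernel_size=1, num_output=n_expand)
    expand1 = L.ReLU(expand1, in_place=True)
    expand3 = L.Convolution(squeeze, kernel_size=3, pad=1, num_output=n_expand)
    expand3 = L.ReLU(expand3, in_place=True)
    # 3) concatenate the two expand branches along the channel axis
    return L.Concat(expand1, expand3, axis=1)

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 96, 56, 56]))
n.fire2 = fire_module(n.data, n_squeeze=16, n_expand=64)
print(n.to_proto())  # emits the corresponding prototxt fragment
```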

Keypoints: Points of interest in an image. They do not change when an affine transformation is applied.
