Face Recognition: A Tutorial on Computational Aspects

Alexander Alling (University of Rochester, USA), Nathaniel R. Powers (University of Rochester, USA) and Tolga Soyata (University of Rochester, USA)
DOI: 10.4018/978-1-4666-8853-7.ch020


Face recognition is a sophisticated problem requiring a significant commitment of computer resources. A modern GPU architecture provides a practical platform for performing face recognition in real time. The majority of the calculations in an eigenpicture implementation of face recognition are matrix multiplications, and for this type of computation a commodity GPU can complete in tens of milliseconds work that takes a CPU thousands of milliseconds. In this chapter, we outline and examine the components and computational requirements of a face recognition scheme that combines the Viola-Jones Face Detection Framework with an eigenpicture face recognition model. Face recognition can be separated into three distinct parts: face detection, eigenvector projection, and database search. For each, we provide a detailed explanation of the process along with an analysis of its computational requirements and scalability.
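The two recognition stages named above, eigenvector projection and database search, can be sketched in a few lines of linear algebra. This is a minimal illustration, not the chapter's implementation: the dimensions, random data, and function names are all placeholders, but the `eigenvectors @ face` product is exactly the kind of dense matrix arithmetic that makes a GPU attractive here.

```python
import numpy as np

# Illustrative dimensions: 64x64 grayscale face crops, 32 eigenpictures,
# and a gallery of 100 enrolled faces. All data here is synthetic.
D, K, N = 64 * 64, 32, 100

rng = np.random.default_rng(0)
eigenvectors = rng.standard_normal((K, D))   # rows: eigenpictures
mean_face = rng.standard_normal(D)           # average training face
database = rng.standard_normal((N, K))       # projected gallery faces

def project(face_pixels):
    """Project a flattened face onto the eigenpicture basis.

    The core cost is the (K x D) matrix times (D,) vector product --
    the dense linear algebra that maps well onto a GPU.
    """
    return eigenvectors @ (face_pixels - mean_face)

def nearest_match(weights):
    """Database search: index of the closest gallery face (Euclidean)."""
    distances = np.linalg.norm(database - weights, axis=1)
    return int(np.argmin(distances))

probe = rng.standard_normal(D)               # stand-in for a detected face
match = nearest_match(project(probe))
```

Note that the projection reduces each face from D pixel values to K coefficients, so the database search operates in the much smaller K-dimensional space.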
Chapter Preview

Face Recognition Problem

A facial recognition application is a computer program that identifies faces in a scene and matches them against a database of known people. This task was once considered one of the most difficult goals in computing, though significant strides have been made since the early days of computer vision. Object recognition methods fall into two broad, partially overlapping categories: photometric (appearance-based) methods and geometric (feature-based) methods. Photometric methods use cues such as skin color and a person's contrast with the image background to aid in face recognition. A simple feature-based method is edge detection: finding discontinuities in image brightness or color and comparing them to a database. Many other recognition algorithms are based on the same idea, finding regularities (or irregularities) in an image and comparing them to a database. Many different types of features can be extracted and compared; edges, corners, and 'blobs' are the three main types. The most diverse is blob detection, where a blob is a region of an image with a consistent property (color, intensity, etc.) that differs from the surrounding regions (Lindeberg, 1991).
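The edge-detection idea described above, finding discontinuities in image brightness, can be sketched with finite differences. The toy image and the threshold below are illustrative choices, not values from the chapter.

```python
import numpy as np

# Toy 8x8 image: a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Central finite differences approximate the brightness gradient;
# large gradient magnitude marks a discontinuity, i.e. an edge.
gx = np.zeros_like(image)
gy = np.zeros_like(image)
gx[:, 1:-1] = (image[:, 2:] - image[:, :-2]) / 2.0
gy[1:-1, :] = (image[2:, :] - image[:-2, :]) / 2.0

magnitude = np.hypot(gx, gy)
edges = magnitude > 0.25          # illustrative threshold
```

The `edges` mask lights up along the border of the bright square, which is exactly the regularity a feature-based recognizer would then compare against its database.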

In the mid-1960s, the first successful face recognition system was developed by Woodrow Wilson Bledsoe. It was almost entirely a manual operation: a worker had to record the coordinates of facial features such as the center of each eye, the tip of the nose, and so on. On average, each face's dataset could be recorded in about 90 seconds (FBI, 2011). A computer was then used to determine which face in the database most closely matched a new face. In the 1970s, Goldstein, Harmon, and Lesk created a similar system based on features such as hair color and lip thickness (Goldstein, Harmon, & Lesk, 1987). Truly autonomous face recognition quickly gained steam after that. In 1987, the eigenface method was developed by Lawrence Sirovich and Michael Kirby. Their algorithm, based on Principal Component Analysis, remains one of the primary methods for real-time face recognition (Sirovich & Kirby, 1987). In recent years its accuracy under varied lighting and viewing angles has been surpassed, but it is still one of the best freely available algorithms capable of running in real time.
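The Sirovich-Kirby idea can be sketched directly: center a training set of face images and take the leading principal components as the eigenpictures. This is a minimal sketch using a synthetic training set; in practice the faces would be aligned grayscale crops, and the number of components kept is a tuning choice.

```python
import numpy as np

# Synthetic stand-in for a training set: 20 "faces" of 16x16 pixels each.
rng = np.random.default_rng(1)
faces = rng.standard_normal((20, 256))

# Principal Component Analysis via SVD: center the data, then take the
# top right-singular vectors as the eigenpictures.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)

k = 8                             # illustrative number of components kept
eigenpictures = vt[:k]            # each row is one eigenpicture

# Any face can now be summarized by just k coefficients.
coefficients = eigenpictures @ (faces[0] - mean_face)
```

This compression from 256 pixel values to 8 coefficients per face is what makes the subsequent database search cheap enough for real-time operation.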
