Parallel Computing in Face Image Retrieval: Practical Approach to the Real-World Image Search

Eugene Borovikov, Szilárd Vajda, Girish Lingappa, Michael C. Bonifant
DOI: 10.4018/978-1-5225-0889-2.ch006

Abstract

Modern digital photo collections contain vast numbers of high-resolution color images, many containing faces, which users want to retrieve visually. This poses a problem for effective image browsing and calls for efficient Content Based Image Retrieval (CBIR) capabilities that ensure near-instantaneous visual query turn-around. This in turn necessitates parallelization of many existing image processing and information retrieval algorithms, which can no longer satisfy modern user demands when executed sequentially. Hence, a practical approach to Face Image Retrieval (FIR) is presented. It utilizes multi-core processing architectures to implement its major modules (e.g. face detection and matching) efficiently without sacrificing image retrieval accuracy. The integration of FIR into a web-based family reunification system demonstrates the practicality of the proposed method. Several accuracy and speed evaluations on real-world data are presented, and possible CBIR extensions are discussed.

Introduction

In 2002, the global capacity of digital data storage apparently exceeded that of analog data storage, which can be marked as the official beginning of the digital age (Hilbert & López, 2011). Since then, the volume of visual data being generated, stored and shared over the Internet has been growing steadily, due to the dramatically decreasing prices of high-capacity digital storage and the virtually omnipresent, inexpensive high-resolution digital cameras. A web image collection today can easily contain millions of high-resolution, true-color digital photographs, which calls for very efficient image processing (IP) algorithms involving

  • Variable compression rates,

  • Important visual feature extraction, and

  • Smart visual indexing and feature clustering

that would provide a reasonably instantaneous content based image retrieval (CBIR) user experience in many practical web-based multimedia (MM) applications. Traditional sequential image processing algorithms (accessing one pixel at a time) clearly cannot satisfy modern performance requirements, and some smart IP acceleration becomes a necessity. Fortunately, many IP and MM algorithms can be parallelized using various hardware/software solutions, including multi-core central processing units (CPU) and massively parallel graphics processing units (GPU) currently available in consumer-level computers (Cullinan, Wyant, & Frattesi, 2012; Prinslow, 2011; Robson, 2008) and even in some mobile devices (Lee, Kyung, Park, Kwak, & Koo, 2015). The set of potential MM applications that can benefit from accelerated image processing includes family/organization photo-album visual browsers, multi-dimensional medical image organizers, automatic visual surveillance in public areas, visual search for missing people and pets, etc.
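To make the data-parallel flavor of such acceleration concrete, the short Python sketch below distributes image signature extraction across CPU cores with a worker pool. It is a minimal illustration under stated assumptions (the photos/*.jpg collection, the coarse color-histogram signature and the extract_signature() helper are hypothetical), not the chapter's actual implementation:

    # Data-parallel signature extraction over an image collection on a
    # multi-core CPU; signatures here are coarse color histograms.
    import glob
    from multiprocessing import Pool

    import cv2

    def extract_signature(path):
        """Load one image and reduce it to a compact, fast-to-match signature."""
        img = cv2.imread(path)
        if img is None:
            return path, None
        # 8x8x8-bin BGR histogram as a stand-in for a real image descriptor
        hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        return path, cv2.normalize(hist, hist).flatten()

    if __name__ == "__main__":
        paths = glob.glob("photos/*.jpg")   # assumed photo collection
        with Pool() as pool:                # one worker process per CPU core
            signatures = dict(pool.map(extract_signature, paths))
        print("indexed %d images" % len(signatures))

Each image is processed independently, so the same pattern extends naturally to GPU batching or to a cluster of worker machines.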

In particular, let us consider building a practical face image retrieval (FIR) system for a family reunification application (Thoma, Antani, Gill, Pearson, & Neve, 2012) in disaster scenarios, where information about missing and found people is collected in several modes, including semi-structured text (e.g. name, gender, age, location) and unconstrained images (e.g. digital photos with human faces shot in arbitrary settings), as shown in Figure 1. Such open web collections may contain hundreds of thousands of records, yet multiple simultaneous visual queries need to be answered nearly instantaneously (i.e. within about a second). Such a system should implement an IP module that satisfies all of the mentioned modern CBIR requirements, plus high-level functions such as accurate and efficient face localization and matching, retrieving the most visually similar candidate faces for a given query face (Borovikov, Vajda, Lingappa, Antani, & Thoma, 2013).

Figure 1. Typical unconstrained digital photos in post-disaster family reunification image data-sets
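The face localization step mentioned above can be sketched with an off-the-shelf detector. The Python fragment below uses OpenCV's stock frontal-face Haar cascade; it only illustrates the kind of module the system needs (the query.jpg file name and the detector parameters are assumptions), not the detector actually used in the chapter:

    # Locate frontal faces in an unconstrained photo with OpenCV's stock
    # Haar cascade; returns bounding boxes (x, y, w, h).
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(path):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                        minNeighbors=5, minSize=(40, 40))

    if __name__ == "__main__":
        boxes = detect_faces("query.jpg")   # assumed query photo
        print("found %d face(s)" % len(boxes))

Detection in each photo is independent of the others, so this step parallelizes across images just like the signature extraction sketch above.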

Given the dynamic nature of web image datasets, where the number of images can change at any time and one cannot assume more than one photo per person, we had to take the single image per person (SIPP) approach, which cannot train any person-specific statistical models for face recognition. It rather needs to rely on efficient, yet discriminative image descriptors that capture the essential image features (Jacobs, Finkelstein, & Salesin, 1995; Lowe, 2004; Raoui, Bouyakhf, Devy, & Regragui, 2011; Salembier & Sikora, 2002) in a compact, fast-to-match image signature, providing an instantaneous query turn-around experience. Such an approach needs to formalize the necessary methodology, implement all the mentioned capabilities to work on unconstrained digital images (as in Figure 1) using various parallel computing techniques on multi-core CPUs and GPUs, and confirm its performance with experimental results on several public datasets.
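In the SIPP setting, retrieval reduces to ranking precomputed gallery signatures against the query signature, with no per-person training. The sketch below, which reuses the assumed signatures dictionary from the earlier indexing example, ranks gallery photos by histogram distance; it is an illustrative stand-in for the chapter's descriptor matching, not its actual code:

    # Rank gallery signatures against a query signature (SIPP retrieval);
    # smaller Bhattacharyya distance means more similar histograms.
    import cv2

    def rank_gallery(query_sig, signatures, top_k=10):
        scored = []
        for path, sig in signatures.items():
            if sig is None:
                continue
            d = cv2.compareHist(query_sig, sig, cv2.HISTCMP_BHATTACHARYYA)
            scored.append((d, path))
        return [p for _, p in sorted(scored)[:top_k]]

The distance computations over the gallery are independent, so this ranking loop is another natural candidate for multi-core or GPU parallelization.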
