Face and Object Recognition Using Biological Features and Few Views


J.M.F. Rodrigues (University of the Algarve, Portugal), R. Lam (University of the Algarve, Portugal), K. Terzić (University of the Algarve, Portugal) and J.M.H. du Buf (University of the Algarve, Portugal)
DOI: 10.4018/978-1-4666-6252-0.ch004


In recent years, a large number of impressive face and object recognition algorithms have appeared, both computational and biologically inspired, but only a few of them can detect face and object views. Empirical studies of face and object recognition suggest that faces and objects may be stored in our memory by a few canonical representations. Cortical area V1 contains double-opponent colour blobs, as well as simple, complex, and end-stopped cells, which provide input for a multiscale line and edge representation, keypoints for dynamic feature routing, and saliency maps for Focus-of-Attention. Combined, these allow faces to be segregated from the background. Representations of different facial views are stored in memory and combined in order to identify the view and recognise a face, including its expression. The authors show that with five 2D views and their cortical representations it is possible to determine left-right and frontal-lateral-profile views, achieving view-invariant recognition. They also show that the same principle, with eight views, can be applied to 3D object recognition when objects are rotated mainly about the vertical axis. Although object recognition is explored here as a special case of face recognition, it should be stressed that faces and general objects are processed in different ways in the cortex.
Chapter Preview

Recent 3D Face And Object Recognition Methods

Because of the limitations of 2D approaches and with the advent of 3D scanners, face-recognition research has expanded from 2D to 3D, with a concurrent improvement in performance. There are many face-recognition methods in 2D and 3D, including facial expression recognition; for detailed surveys see Bowyer et al. (2006), Abate et al. (2007), Li & Jain (2011) and Sandbach et al. (2012).

Rashad et al. (2009) presented a face-recognition system that overcomes the problem of changes in facial expressions in 3D range images by using a local variation detection and restoration method based on 2D principal component analysis. Ramirez-Valdez & Hasimoto-Beltran (2009) also considered facial expressions in recognition. A 3D range image is modelled by the finite-element method with three simplified layers representing the skin, fatty tissue and the cranium; muscular structures are superimposed on the 3D model for the synthesis of expressions. Their approach consists of three main steps: a denoising algorithm, which removes long peaks in the 3D face samples; automatic detection of control points, which locates landmarks such as eye and mouth corners and the nose tip; and registration of the 3D face model to each neutral-expression face in the training database.

Berretti et al. (2010) took 3D geometrical information into account and encoded the relevant information in a compact graph representation. The nodes of the graph represent equal-width, iso-geodesic facial stripes. The edges between pairs of nodes are labelled with descriptors, referred to as 3D weighted walkthroughs, which capture the mutual relative spatial displacement between all point pairs in the corresponding stripes.

Fadaifard et al. (2013) presented a 3D curvature scale-space representation for shape matching and applied it to face recognition. The representation is obtained by evolving the surface curvatures according to the heat equation; this process yields a stack of increasingly smoothed surface curvatures that is useful for keypoint extraction and descriptor computation. The scale parameter is used for automatic scale selection, which is applied to 2D scale-invariant shape-matching applications.
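The heat-equation evolution underlying such a curvature scale-space can be illustrated as diffusion of per-vertex curvature values over a mesh's vertex-adjacency graph: each explicit diffusion step moves a vertex's curvature toward the mean of its neighbours, and snapshots at increasing scales form the stack. The Python sketch below is only illustrative; the uniform graph Laplacian, step size and level counts are assumptions for demonstration, not the implementation of Fadaifard et al. (2013).

```python
import numpy as np

def curvature_scale_space(curv, neighbors, dt=0.2, steps_per_level=5, levels=4):
    """Build a stack of increasingly smoothed curvature fields.

    curv      : 1D array of per-vertex curvature values.
    neighbors : dict mapping vertex index -> list of adjacent vertex indices.
    Each explicit heat-equation step applies c <- c + dt * L c, where L is
    a uniform graph Laplacian (mean of neighbours minus the vertex value).
    Returns an array of shape (levels + 1, n_vertices): the original field
    followed by one snapshot per scale level.
    """
    c = np.asarray(curv, dtype=float).copy()
    stack = [c.copy()]
    for _ in range(levels):
        for _ in range(steps_per_level):
            lap = np.array([np.mean([c[j] for j in neighbors[i]]) - c[i]
                            for i in range(len(c))])
            c = c + dt * lap  # one explicit diffusion (smoothing) step
        stack.append(c.copy())
    return np.stack(stack)
```

On an alternating-sign curvature field the stack converges toward a flat field, while a constant field is a fixed point of the diffusion; keypoints would then be extracted from extrema that persist across the levels of the returned stack.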
