Configural Processing Hypothesis and Face-Inversion Effect

Sam S. Rakover
Copyright: © 2011 | Pages: 18
DOI: 10.4018/978-1-61520-991-0.ch017

Abstract

Perception and recognition of faces presented upright are better than perception and recognition of faces presented inverted, and this difference between upright and inverted orientations is greater for face recognition than for non-face object recognition. This Face-Inversion Effect is explained by the “Configural Processing” hypothesis, which holds that inversion disrupts configural information processing while leaving featural information intact. The present chapter discusses two important findings that cast doubt on this hypothesis: inversion impairs recognition of isolated features (hair and forehead, and eyes), and certain facial configural information is not affected by inversion. The chapter focuses mainly on the latter finding, which reveals a new type of facial configural information, the “Eye-Illusion”, based on certain geometrical illusions. The Eye-Illusion tended to resist inversion in experimental tasks of both perception and recognition, and it continued to resist inversion when its magnitude was reduced. Similar results were obtained with the “Headlight-Illusion”, produced on a car’s front, and with the “Form-Illusion”, produced in geometrical forms. However, the Eye-Illusion was greater than the Headlight-Illusion, which in turn was greater than the Form-Illusion. These findings were explained by the “General Visual-Mechanism” hypothesis in terms of levels of visual information learning. The chapter proposes that a face is composed of various kinds of configural information that are differentially impaired by inversion, ranging from no effect (the Eye-Illusion) to a large effect (the Face-Inversion Effect).
Chapter Preview

Introduction

One of the most studied effects in research on face perception and recognition is the Face-Inversion Effect: perception and recognition of a face are better when it is presented upright than when it is presented inverted. This effect is greater for faces than for non-face objects (e.g., buildings, cars) (e.g., Rakover, 2002; Rakover & Cahlon, 2001; Valentine, 1988; Yin, 1969) and is obtained in experimental tasks of perception and recognition alike (e.g., Freire, Lee, & Symons, 2000; Rossion & Gauthier, 2002). It is explained by the “Configural Processing” hypothesis, which proposes that inversion disrupts the processing of configural information (spatial relations among facial features) and/or holistic information (facial information perceived as a whole Gestalt) while leaving the processing of featural information (eyes, nose, and mouth) comparatively intact (e.g., Bartlett, Searcy, & Abdi, 2003; Diamond & Carey, 1986; Leder & Bruce, 2000; Leder & Carbon, 2006; Maurer, Le Grand & Mondloch, 2002; Rakover, 2002; Rhodes, Brake, & Atkinson, 1993; Searcy & Bartlett, 1996; Tanaka & Farah, 1993, 2003; Yovel & Kanwisher, 2008).

Facial configural information concerns the spatial relations among facial features and is usually defined as follows: “In the face perception literature, the term ‘configural’ refers to spatial information. … The span of configural information can be small (e.g., specifying the relationship between two adjacent components) or it may be large (e.g., specifying the relationship between nonadjacent components separated by large distances, or specifying the relationship among all of the components in the face)” (see Peterson & Rhodes, 2003, p. 4). However, reviewing the pertinent literature, Maurer, Le Grand & Mondloch (2002) noted that there is no agreement on the meaning of the term ‘configural processing’. They suggested distinguishing three types of configural processing: (a) detection of first-order information (eyes above nose, which is above mouth), (b) detection of second-order information (distances among facial features), and (c) detection of holistic information (the face is perceived as a gestalt). From their analysis, they concluded that inversion affected all types of configural processing, but particularly the last two. Following these analyses, the present research conceived of configural information as consisting of all distances among facial features.
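To make this working definition concrete, second-order configural information can be represented as the set of pairwise Euclidean distances between facial landmarks. The brief sketch below (in Python; the landmark names and coordinates are hypothetical illustrations, not materials from the chapter) shows one way such a representation might be computed.

from itertools import combinations
from math import dist

# Hypothetical (x, y) positions of facial features, in arbitrary image units
landmarks = {
    "left_eye": (30, 40),
    "right_eye": (70, 40),
    "nose": (50, 60),
    "mouth": (50, 80),
}

# Configural information: the Euclidean distance between every pair of features
configural_info = {
    (a, b): dist(landmarks[a], landmarks[b])
    for a, b in combinations(landmarks, 2)
}

for pair, d in configural_info.items():
    print(pair, round(d, 1))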

Two popular mechanisms for explaining configural processing of faces have been proposed: the “Face-Specific Mechanism” hypothesis, which posits a special cognitive mechanism for processing facial stimuli, and the “Expertise” hypothesis, which suggests that configural-holistic information can be learned similarly for both faces and non-face objects. (For discussions of these hypotheses, see Ashworth, Vuong, Rossion, & Tarr, 2008; Diamond & Carey, 1986; Farah, Tanaka, & Drain, 1995; Gauthier & Tarr, 1997; Liu & Chaudhuri, 2003; Maurer, Le Grand & Mondloch, 2002; McKone, Crookes & Kanwisher, in press; Nachson, 1995; Rakover & Cahlon, 2001; Rossion & Gauthier, 2002; Tanaka & Farah, 2003.) Recently, however, Robbins & McKone (2007) argued, on the basis of an extensive review and their own findings, that the Expertise hypothesis is unfounded (for a debate, see Gauthier & Bukach, 2007, and McKone & Robbins, 2007).

The present chapter has the following major goals. First, I shall briefly discuss whether the Configural Processing hypothesis provides a necessary condition for the Face-Inversion Effect.
