Anatomy and Physiology of the Visual Pathways
The human visual system consists of multiple parallel streams, sometimes referred to as channels, each of which carries out a sequence of processing stages. Light increments (ON) and decrements (OFF), motion, stereoscopic depth, color, and shape are processed separately and simultaneously. There are two major parallel pathways in humans: the parvocellular (P) and magnocellular (M) pathways (Figure 1). The former carries information about the form and color of an object because of its sensitivity to high spatial frequencies and color, while the latter plays an important role in detecting motion because of its sensitivity to high temporal frequencies (Livingstone & Hubel, 1998; Tobimatsu & Celesia, 2006). There is considerable cross talk between the two systems, and much evidence supports the view that they are integrated in a distributed network.
Figure 1. Recent concepts of the parallel pathways. Adapted from Tobimatsu, Goto, Yamasaki, Nakashima, Tomoda, & Mitsudome, 2008.
We have been studying the functions of the P- and M-pathways with evoked potentials by manipulating the characteristics of the visual stimulus (Arakawa, Tobimatsu, Kato, & Kira, 1999; Tobimatsu, 2002; Tobimatsu & Kato, 1998; Tobimatsu, Celesia, Haug, Onofrj, Sartucci, & Porciatti, 2000; Tobimatsu, Shigeto, Arakawa, & Kato, 1999; Tobimatsu, Tomoda, & Kato, 1995; Tobimatsu, Goto, Yamasaki, Tsurusawa, & Taniwaki, 2006). Information on the characteristics of a face is first processed in the fusiform gyrus (V4) and carried by the P-pathway (Vuilleumier, Armony, Driver, & Dolan, 2003). Information on the motion of an object is processed in area MT/V5 and carried by the M-pathway (Rizzolatti & Matelli, 2003).
Face Perception
Event-related potentials (ERPs) elicited by facial stimuli were recorded at multiple scalp sites in normal subjects. As shown in Figure 2, visual stimuli can be decomposed into several spatial frequencies (SFs) (Tobimatsu, Goto, Yamasaki, Nakashima, Tomoda, & Mitsudome, 2008). Low-spatial-frequency (LSF) and high-spatial-frequency (HSF) information is processed by the M- and P-pathways, respectively. A photograph of a face was filtered to alter its SF components and used to investigate how the LSF and HSF components of the face contribute to its identification and recognition (Nakashima, Goto, Abe, Kaneko, Saito, Makinouchi, & Tobimatsu, 2008; Nakashima, Kaneko, Goto, Abe, Mitsudo, Ogata, Makinouchi, & Tobimatsu, 2008; Obayashi, Nakashima, Onitsuka, Maekawa, Hirano, Hirano, Oribe, Kaneko, Kanba, & Tobimatsu, 2009).

The original stimuli were 256-level grayscale photographs of emotional (anger, fear, and happiness) and neutral faces taken from the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and Neutral Faces (JACNeuF) sets, respectively (Matsumoto & Ekman, 1988). The object stimuli (houses) and target stimuli (shoes) were taken from our own 256-level grayscale photographs. The LSF and HSF versions of the faces and houses were created by image-processing techniques based on the two-dimensional fast Fourier transform (a first-order Gaussian window method for LSF; a 35th-order Hamming window method for HSF), using our own program written in C and MATLAB ver. 7 (The MathWorks Inc.). The broadband-spatial-frequency (BSF) stimuli were the original, unfiltered photographs. The cutoff frequencies (< 2.5–4.0 cycles/face for LSF; > 30.0–50.0 cycles/face for HSF) were determined before the ERP recordings by measuring the psychophysical threshold for recognizing facial expressions and houses in 30 separately recruited subjects (10 females and 20 males; age range, 20–34 years; mean age, 25.7 years; unpublished data).
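The SF filtering described above can be illustrated with a minimal sketch. This is not the authors' C/MATLAB program; it is a simplified NumPy example, assuming a Gaussian-shaped low-pass mask in the 2D Fourier domain and its complement as the high-pass mask (the actual study used separate Gaussian- and Hamming-window methods). The function name `filter_sf` and the cutoff parameter are illustrative.

```python
import numpy as np

def filter_sf(image, cutoff_cpf, mode="low"):
    """Keep only low or high spatial frequencies of a grayscale image.

    cutoff_cpf is the cutoff in cycles per image (cycles/face for a face
    stimulus) -- a stand-in for the cutoffs reported in the text
    (< 2.5-4.0 cycles/face for LSF; > 30-50 cycles/face for HSF).
    """
    h, w = image.shape
    # Frequency coordinates in cycles per image along each axis
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)

    spectrum = np.fft.fft2(image)
    if mode == "low":
        # Gaussian low-pass: smooth roll-off around the cutoff frequency
        mask = np.exp(-(radius ** 2) / (2.0 * cutoff_cpf ** 2))
    else:
        # High-pass: complement of the Gaussian low-pass
        mask = 1.0 - np.exp(-(radius ** 2) / (2.0 * cutoff_cpf ** 2))
    return np.fft.ifft2(spectrum * mask).real
```

Because the two masks sum to one, the LSF and HSF versions produced with the same cutoff reconstruct the original image when added together, which is a convenient sanity check for this kind of filtering.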
Mean luminance and contrast were controlled by normalizing the mean and standard deviation (SD) of the gray values of all stimuli using our own program written in C (mean luminance, 48 cd/m²; mean gray value ± SD, 128 ± 40). Representative examples of the stimuli (fearful expression) are shown in Figure 3.
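The luminance/contrast normalization is simply a z-score transform of the gray values rescaled to the target mean and SD. The sketch below assumes an 8-bit grayscale array and uses the values reported in the text (128 ± 40); the function name `normalize_gray` is illustrative, not the authors' actual routine.

```python
import numpy as np

def normalize_gray(image, target_mean=128.0, target_sd=40.0):
    """Rescale gray values so every stimulus has the same mean and SD.

    target_mean and target_sd default to the values reported in the
    text (mean gray value +/- SD of 128 +/- 40).
    """
    # Standardize to zero mean, unit SD, then rescale to the targets
    z = (image - image.mean()) / image.std()
    out = z * target_sd + target_mean
    # Clip to the valid range of a 256-level grayscale image
    return np.clip(out, 0.0, 255.0)
```

Matching the mean and SD across stimuli equates their space-averaged luminance and RMS contrast, so ERP differences between conditions cannot be attributed to low-level luminance or contrast differences.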