Eye Tracking and Spoken Language Comprehension


Elizabeth Kaplan, Tatyana Levari, Jesse Snedeker
DOI: 10.4018/978-1-5225-7507-8.ch031

Abstract

Constructing a deeper and more precise understanding of how listeners, and particularly young children, comprehend spoken language is a primary focus for psycholinguists and educators alike. This chapter highlights how, over the past 20 years, eye tracking has become a crucial and widely used methodology for gaining insight into online spoken language comprehension. We address how various eye-tracking paradigms have informed current theories of language comprehension across the processing stream, focusing on lexical discrimination, syntactic analysis, and pragmatic inference. Additionally, this chapter aims to bridge the gap between psycholinguistic research and educational topics, such as how early linguistic experiences influence later educational outcomes and the ways in which eye-tracking methods can provide additional insight into the language processing of children with developmental disorders.

Background

Cooper (1974) was the first researcher to use an eye-tracking paradigm to explore the relationship between spoken language and eye movements within a constructed visual scene. In his seminal paper, Cooper recorded participants’ eye movements to a 3×3 grid containing pictures of different objects as they listened to pre-recorded stories about the displayed items. He found that, upon hearing the word for a displayed object, participants rapidly moved their eyes to the corresponding picture. In fact, participants directed their gaze to the correct referent even before the whole word had been spoken (Cooper, 1974). This pattern suggested that the direction of gaze is closely time-locked to the unfolding linguistic information.
