Second Language Instruction: Extrapolating From Auditory-Visual Speech Perception Research

Doğu Erdener
Copyright: © 2020 |Pages: 19
DOI: 10.4018/978-1-7998-2588-3.ch005

Abstract

Speech perception has long been taken for granted as an auditory-only process. However, it is now firmly established that speech perception is an auditory-visual process in which visual speech information, in the form of lip and mouth movements, is taken into account during perception. Traditionally, foreign language (L2) instructional methods and materials have been auditory-based. This chapter presents a general framework of evidence that visual speech information can facilitate L2 instruction. The author claims that this knowledge will form a bridge between psycholinguistics and L2 instruction as an applied field. The chapter also describes how orthography can be used in L2 instruction. While learners from a transparent L1 orthographic background can decipher the phonology of orthographically transparent L2s – overriding the visual speech information – this is not the case for those from orthographically opaque L1s.
Chapter Preview

Speech Perception: Its Basics, Nature And Development

Since the 1950s, one significant challenge in the enterprise of speech perception has been to understand how the seemingly continuous and uninterrupted speech signal – as one would observe in a spectrogram – is deciphered by humans. Research over the past half-century has shown that this question cannot be answered in a straightforward fashion. On the contrary, it requires a multi-dimensional, multi-disciplinary response. Given the focus of this chapter, only developmental and L2-relevant aspects will be referred to here.

Key Terms in this Chapter

Orthography: A set of rules dictating the writing conventions in a language. An alphabetically-based orthography uses graphemes to represent phonemes, whereas a logographic system (e.g., Chinese languages and dialects) employs symbols to represent words or syllables.

Auditory-Visual Speech Perception: An area of experimental psychology, more specifically psycholinguistics, which investigates how auditory speech input is integrated with the visual input from the movements of visible articulators such as lips, mouth, teeth and tongue. The term is also used to refer to the process of the integration of auditory and visual speech input.

Heschl’s Gyrus: Also known as transverse temporal gyrus or Heschl’s convolutions, this is a structure in the primary auditory cortex. It is the first structure that processes the incoming auditory information in the cortex.

Teacherese: A term used to refer to the high-pitched and hyper-articulated speech style that may be used, implicitly or explicitly, by teachers in the classroom.

Lexical Tone: A specific pitch variation in what are called tonal languages that determines the meaning of a word. A lexical tone is carried on a syllable. The number of lexical tones varies across tonal languages. For instance, while Thai has five lexical tones, Cantonese has six.

Grapheme: A letter or a group of letters representing a phoneme. For instance, in the word “fat” the letter f corresponds to the phoneme [f], whereas in the word “feet” the letters ee, as a single grapheme, map onto the phoneme [iː].

Phoneme: The smallest unit of sound in a language that distinguishes two lexical items (words), e.g., bat [bæt] vs. pat [pæt].
