Pushing the Envelope of Associative Learning: Internal Representations and Dynamic Competition Transform Association into Development

Bob McMurray, Libo Zhao, Sarah C. Kucker, Larissa K. Samuelson
DOI: 10.4018/978-1-4666-2973-8.ch003

Abstract

Work on learning word meanings has argued that associative learning mechanisms are insufficient because word learning is too fast, confronts too much ambiguity, or is based on social principles. Such critiques target an outdated view of association, focusing on the information being learned rather than the mechanism of learning. The authors present a model that embeds associative learning in a richer system, one that includes both internal representations and real-time competition, enabling it to select the referents of novel and familiar words. A series of simulations validates these theoretical assumptions, showing better learning and novel word inference when both factors are present. The authors then use this model to understand the apparent rapidity of word learning and the value of high- and low-informativity learning situations. Finally, the authors scale the model up to examine interactions between auditory and visual categorization and to account for conflicting results as to whether words help or hinder categorization.
Chapter Preview

Introduction

In development, simple explanations that bridge multiple domains of cognition are attractive (Chater, 1999; Gibson, 1994). This approach favors domain-general processes like association, competition, and categorization: theoretical constructs that can be powerful in combination and lead to emergent complexity. While there is always a danger of over-simplifying (cf. Skinner, 1957), in language we have developed a bad habit. Many problems in language appear too complicated for simple processes to solve, and consequently, domain-general processes have been ruled out altogether. This is true in every area of language. The lack of invariant cues in speech perception suggests that simple auditory and/or categorization processes may not be up to the job (Liberman & Mattingly, 1985); syntax is too complex to be learned without negative evidence (Chomsky, 1980; Gold, 1967); and there is too much ambiguity in naming situations to determine the meaning of new words (Quine, 1960). Such claims eventually lead to innate language-specific knowledge and constrained learning (see Sloutsky, 2010, and the associated special issue of Cognitive Science).

Such critiques presume to know what associative learning or categorization mechanisms can accomplish. However, if computational modeling has taught us anything, it is that simple mechanisms, replicated many times and applied to complex environments, can yield unexpected power (Elman et al., 1996; McClelland et al., 2010; McMurray, 2007; Schlesinger & McMurray, 2012). Indeed, computational models suggest that categorization mechanisms can solve the problem of invariance (McMurray & Jongman, 2011); that neural networks (Elman, 1990) can learn aspects of syntax; and that statistics accumulated across naming events can support word learning (McMurray, Horst, & Samuelson, 2012; Siskind, 1996; Yu & Smith, 2012). When simple mechanisms are combined and scaled to real language, the consequences are often surprising and powerful.

This chapter examines one such mechanism in the context of learning word meanings: associative learning. Many have argued that word learning cannot be associative: it is fundamentally conceptual (Waxman & Gelman, 2009) or social (Golinkoff & Hirsh-Pasek, 2006), it is too fast (Medina, Snedeker, Trueswell, & Gleitman, 2011; Nazzi & Bertoncini, 2003), or it requires complex inference (Xu & Tenenbaum, 2007). However, these arguments target a simplistic view of association in which raw perceptual representations are linked to each other with no intervening processes or representations. They assume that associative learning can only link unprocessed visual and auditory input from the world; that it is not sensitive to similarity between words or between visual categories; and that it cannot form abstract mediating representations (e.g., it cannot link words to categories, but only to raw inputs). Further, these arguments assume that factors like attention or competition cannot change what is associated. In short, they critique a straw-man version of behaviorism, not associative learning.

This is a view of associative learning that no one in learning theory actually holds. Ideas like similarity and generalization have been invoked even in behaviorist accounts (Spence, 1937), and modern theories of associative learning (e.g., Livesey & McLaren, 2011; Shanks, 2007) admit (a) that internal representations can be linked to each other as well as to perceptual representations, and (b) that real-time processes shape what is learned. Associative learning is not a theory of what representations are linked, nor does it limit processing mechanisms. Thus, it may be a mistake to rule out association on the basis of the information involved (see also Smith, 2000). That said, we do not know the implications, for word learning, of embedding associative learning in a system employing both internal representations and real-time behavior.
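To make these two ingredients concrete, the sketch below (our illustration, not the chapter's actual model; all names, parameters, and the Luce-choice rule are our own assumptions) pairs a simple Hebbian association matrix with real-time competition at the moment of referent selection. Because each candidate object's strength is normalized by its total association to all words, an object already strongly linked to another word is a weak competitor for a novel word, so mutual-exclusivity-like inference emerges from competition rather than from a dedicated constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

n_words, n_objects = 5, 5          # word/object 4 are withheld as "novel"
assoc = np.zeros((n_words, n_objects))

def learn(word, objects_present, lr=0.1):
    """Simple associative (Hebbian) update: strengthen the link between
    the heard word and every object present in the scene."""
    for obj in objects_present:
        assoc[word, obj] += lr

def choose_referent(word, objects_present, eps=1e-3):
    """Real-time competition (Luce choice): each candidate's strength is
    normalized by its total association to ALL words, so an object that
    is already strongly linked to another word competes poorly for a
    novel word."""
    scores = [(assoc[word, o] + eps) / (assoc[:, o].sum() + n_words * eps)
              for o in objects_present]
    return objects_present[int(np.argmax(scores))]

# Cross-situational training on words/objects 0-3: the true referent
# (object i for word i) is always present alongside one random distractor.
for _ in range(400):
    word = int(rng.integers(4))
    distractor = int(rng.integers(4))
    learn(word, [word, distractor])

print(choose_referent(0, [0, 1]))   # familiar word -> its trained referent: 0
print(choose_referent(4, [3, 4]))   # novel word -> the never-named object: 4
```

Note that neither function contains a word-learning-specific rule: the Hebbian update is indifferent to which links it strengthens, and the competition operates only at retrieval, yet together they select correct referents for both familiar and novel words.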
