Distributed Representation of Compositional Structure

Simon D. Levy
Copyright © 2009 | Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch078

Abstract

AI models are often categorized in terms of the connectionist vs. symbolic distinction. In addition to being descriptively unhelpful, these terms are also typically conflated with a host of issues that may have nothing to do with the commitments entailed by a particular model. A more useful distinction among cognitive representations asks whether they are local or distributed (van Gelder 1999). Traditional symbol systems (grammar, predicate calculus) use local representations: a given symbol has no internal content and is located at a particular address in memory. Although well understood and successful in a number of domains, traditional representations suffer from brittleness. The number of possible items to be represented is fixed at some arbitrary hard limit, and a single corrupt memory location or broken pointer can wreck an entire structure. In a distributed representation, on the other hand, each entity is represented by a pattern of activity distributed over many computing elements, and each computing element is involved in representing many different entities (Hinton 1984). Such representations have a number of properties that make them attractive for knowledge representation (McClelland, Rumelhart, & Hinton 1986): they are robust to noise, degrade gracefully, and support graded comparison through distance metrics. These properties enable fast associative memory and efficient comparison of entire structures without unpacking the structures into their component parts. This article provides an overview of distributed representations, setting the approach in its historical context. The two essential operations necessary for building distributed representation of structures – binding and bundling – are described. We present example applications of each model, and conclude by discussing the current state of the art.
Chapter Preview

Varieties of Distributed Representation

This article describes the various approaches found in the recent neural network literature to implementing the binding and bundling operations. Although several different models have been developed, they fall into two broad categories, based on how roles are represented and how binding and bundling are performed. The two operations themselves are sketched below.
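
As an illustration of the two operations, here is a minimal sketch in Python with NumPy, assuming the bit-vector approach defined under Binary Spatter Codes below: binding is element-wise XOR and bundling is a majority vote over the summed bits. The dimensionality, the toy sentence, and the helper names (random_hv, bind, bundle, similarity) are illustrative choices, not a fixed API.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors nearly orthogonal

def random_hv():
    # A random bit vector standing for one role or filler symbol.
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    # Binding by element-wise XOR; XOR is its own inverse.
    return a ^ b

def bundle(*vs):
    # Bundling by majority vote over the summed bits
    # (assumes an odd number of items, so no ties arise).
    return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.uint8)

def similarity(a, b):
    # Fraction of matching bits: 1.0 for identical vectors,
    # about 0.5 for unrelated random ones.
    return np.mean(a == b)

# Encode "Mary loves John" as a single vector of role/filler bindings.
AGENT, VERB, PATIENT = random_hv(), random_hv(), random_hv()
mary, loves, john = random_hv(), random_hv(), random_hv()
sentence = bundle(bind(AGENT, mary), bind(VERB, loves), bind(PATIENT, john))

# Binding the sentence with a role recovers a noisy copy of its filler.
probe = bind(sentence, AGENT)
print(similarity(probe, mary))   # well above chance (about 0.75)
print(similarity(probe, john))   # near chance (about 0.5)

Note that the decoded filler is only approximately equal to the original; recovering the exact item is the job of a cleanup memory, defined among the key terms below.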

Key Terms in this Chapter

Vector Symbolic Architecture (VSA): General term for representations that use large vectors of random numbers for roles and fillers, and fast, lossy compression operations to bind fillers to roles.

Bundling: VSA operation for combining several items into a single item, through vector addition.

Tensor Products: Early form of VSA that uses the outer (tensor) product as the binding operation; because each binding multiplies the dimensionalities of its arguments, recursively composed structures grow without bound.

Distributed Representation: A general method of representing and storing information in which the representation of each item is spread across the entire memory, each memory element simultaneously stores the components of more than one item, and items are retrieved by their content rather than their address.

Binary Spatter Codes (BSC): VSA using bit vectors, with element-wise exclusive-or (XOR) for role/filler binding; on the equivalent bipolar (±1) encoding, the same operation is element-wise multiplication. This is the binding used in the sketch above.

Binding: In its most general sense, a term used to describe the association of values with variables. In AI and cognitive science, the variables are usually a closed set of roles (AGENT, PATIENT, INSTRUMENT) and the values an open set of fillers (entities).

Cleanup Memory: Mechanism required to compensate for the noise introduced by VSA's lossy compression, typically by mapping a decoded vector to the nearest known item; see the sketch following these key terms.

Recursive Auto-Associative Memory (RAAM): Neural network architecture that uses vector/matrix multiplication for binding and iterative learning to encode structures.

Holographic Reduced Representation (HRR): The most popular variety of VSA; uses circular convolution to bind fillers to roles and circular correlation to recover the fillers or roles from the bindings (see the sketch following these key terms).
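
To make the HRR and cleanup-memory entries concrete, here is a minimal sketch in Python with NumPy, assuming Gaussian element values with variance 1/D (so that circular correlation approximately inverts circular convolution) and a nearest-neighbor cleanup memory over a small lexicon. The dimensionality, the lexicon, and the helper names are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
D = 1024  # each element drawn i.i.d. from N(0, 1/D)

def random_hv():
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def bind(a, b):
    # Circular convolution, computed in O(D log D) via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Circular correlation: the approximate inverse of binding with a.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Cleanup memory: map a noisy decoded vector back to the nearest
# known item, compensating for the lossy compression.
lexicon = {name: random_hv() for name in ("mary", "loves", "john")}

def cleanup(v):
    return max(lexicon, key=lambda name: cosine(v, lexicon[name]))

# Bundle three role/filler bindings into one fixed-width vector.
AGENT, VERB, PATIENT = random_hv(), random_hv(), random_hv()
sentence = (bind(AGENT, lexicon["mary"]) +
            bind(VERB, lexicon["loves"]) +
            bind(PATIENT, lexicon["john"]))   # bundling is vector addition

noisy = unbind(sentence, AGENT)   # ask: who fills the AGENT role?
print(cleanup(noisy))             # -> 'mary'

Unlike the tensor product, circular convolution keeps the bound vector at the same dimensionality as its inputs, which is what makes the representation "reduced".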
