Image-Word Mapping

Yang Cai (Carnegie Mellon University, USA) and David Kaufer (Carnegie Mellon University, USA)
DOI: 10.4018/978-1-61692-857-5.ch005

No Ambient Intelligence can survive without human-computer interaction. Over ninety percent of the information in our communication is verbal and visual. Mapping between one-dimensional words and two-dimensional images is a challenge for visual information classification and reconstruction. In this chapter, we present a model of the two-way image-word mapping process, applied specifically to facial identification and facial reconstruction. The model combines semantic differential descriptions with analogical and graph-based visual abstractions, allowing humans and computers to categorize objects and to attach verbal annotations to the shapes that comprise faces. An image-word mapping interface is designed for efficient facial recognition in massive visual datasets. We demonstrate that a two-way mapping between words and facial shapes is feasible for facial information retrieval and reconstruction.
Chapter Preview


Although the original goal of Ambient Intelligence is to make computers fade from our persistent awareness, we still need to communicate with computers explicitly, for example, to search for videos, enter a location, or simply communicate with others. Over ninety percent of the information transfer in our communication is verbal and visual.

For many years, cognitive scientists have investigated visual abstraction through psychological experiments, for example, visual search using foveal vision (Wolfe, 1998; Theeuwes, 1992; Treisman and Gelade, 1980; Verghese, 2001; Yarbus, 1967; Larkin and Simon, 1987; Duchowski, et al., 2004; Kortum and Geisler, 1996; Geisler and Perry, 1998; Majaranta and Raiha, 2002) and mental rotation (Wikipedia, “Mental Rotation”). Visual abstraction models have also been developed, for example, Marr’s cylinder model of human structure (Marr, 1982) and the spring-mass graph model of facial structures (Ballard, 1982). More recently, scientists have begun to model the relationship between words and images. CaMeRa (Tabachneck-Schijf et al., 1997), for example, is a computational model of multiple representations, including imagery, numbers, and words. However, the mapping between words and images in this system is linear and singular, lacking flexibility. Solso (1993) proposed an artificial neural network model for understanding oil paintings, remarking that the hidden layers of the neural network enable us to map words to visual features more effectively; with this method, he argued, we need fewer neurons to represent more images. However, the content of the hidden layers of such a network remains a mystery (see Figure 1).

Figure 1.

The two-way mapping neural network model
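The hidden-layer idea can be sketched with a small linear stand-in: factor a toy word-to-feature map through a hidden layer, so the same compact hidden code serves both the forward (word to facial feature) and reverse (feature to word) directions. Everything below — the vocabulary, the feature vectors, and the dimensions — is an invented illustration, not the model discussed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

words = ["round", "oval", "wide", "narrow"]   # toy descriptor vocabulary
F = rng.random((4, 6))                        # toy 6-dim feature vector per word

# Factor the word -> feature map through a hidden layer of size H via SVD.
# H smaller than either side compresses the code ("fewer neurons").
U, s, Vt = np.linalg.svd(F, full_matrices=False)

def factorize(H):
    W1 = U[:, :H] * s[:H]   # word (one-hot) -> hidden
    W2 = Vt[:H]             # hidden -> feature
    return W1 @ W2          # predicted features for each word

full = factorize(4)         # hidden layer large enough: exact mapping
compressed = factorize(3)   # smaller hidden layer: best rank-3 approximation

def nearest_word(feat_vec, pred):
    """Reverse mapping: the word whose predicted features are closest."""
    return words[int(np.argmin(np.linalg.norm(pred - feat_vec, axis=1)))]

print(np.allclose(full, F))            # True: forward mapping is exact
print(nearest_word(F[2], full))        # wide: reverse mapping recovers the word
# Eckart-Young: the rank-3 error equals the discarded singular value.
print(np.isclose(np.linalg.norm(F - compressed), s[3]))   # True
```

The trade-off the compressed factorization exhibits — a smaller hidden code at the cost of some reconstruction error — is the same economy Solso attributes to hidden layers, though here the "hidden units" are fully inspectable, unlike those of a trained network.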

Because of the two- or three-dimensional structure of images and the one-dimensional structure of language, the mapping between words and images is a challenging and still undertheorized task. Arnheim observed that, through abstraction, language categorizes objects; yet language, through its richness, further permits humans to create categorizations and associations that extend beyond shape alone (Arnheim, 1969). As a rich abstractive layer, language permits categorizations of textures, two- and three-dimensional forms, and sub-shapes; indeed, it seems to be the only method we have for satisfactorily describing a human subject. To explore this insight further, Roy developed a computerized system known as Describer that learns to generate contextualized spoken descriptions of objects in visual scenes (Roy, 1999). Describer illustrates how a description database, paired with images, could be useful in constructing a composite image.


Descriptions For Humans

Our framework has focused on the mapping between words and images for human facial features. Why focus on human faces? Humans in general, and human faces in particular, evoke some of the richest vocabularies of visual imagery in any modern language. Imaginative literature is a well-known source of such descriptions, where human features are often described in great detail. In addition, English-language reference collections focused on visual imagery, such as descriptive and pictorial dictionaries, never fail to have major sections devoted to the human face. These sections, however, are typically devoted to anatomical rather than social and cultural descriptions of faces. The mappings between words and faces we have been exploring are instead built upon cultural stereotypes and analogical associations.

In the following sections, we briefly survey a variety of semantic visual description methods, including multiple resolution, semantic differential, symbol-number, and analogical descriptions. We then introduce the computational implementation of these human descriptions in visual and verbal form, along with the interaction between them.
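To make the semantic differential method concrete, a face can be represented as a vector of ratings on bipolar adjective scales, and a verbal description can then retrieve the closest stored face by distance. The scales, rating values, and face names below are invented for illustration and are not the chapter's actual scales.

```python
# Each facial feature is rated on a bipolar adjective scale from -3 to +3,
# e.g. -3 = fully "round", +3 = fully "angular".
SCALES = ["round-angular", "wide-narrow", "smooth-wrinkled"]

# Hypothetical stored faces with their semantic differential ratings.
faces = {
    "face_a": [-2,  1,  3],   # quite round, slightly narrow, very wrinkled
    "face_b": [ 3, -2, -1],   # angular, fairly wide, fairly smooth
    "face_c": [-1,  2,  2],
}

def match(query, db):
    """Return the stored face whose ratings are closest to the query (L1 distance)."""
    return min(db, key=lambda k: sum(abs(q - r) for q, r in zip(query, db[k])))

# A verbal description "round, narrow, wrinkled" translated to scale values:
query = [-3, 2, 3]
print(match(query, faces))    # face_a
```

Because each scale is a single number, a verbal description and an image annotation meet in the same low-dimensional rating space, which is what makes the two-way mapping tractable for retrieval.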
