Representing Non-Rigid Objects with Neural Networks


José García-Rodríguez (University of Alicante, Spain), Francisco Flórez-Revuelta (University of Alicante, Spain) and Juan Manuel García-Chamizo (University of Alicante, Spain)
Copyright: © 2009 | Pages: 7
DOI: 10.4018/978-1-59904-849-9.ch200

Self-organising neural networks try to preserve the topology of an input space by means of their competitive learning. This capacity has been used, among other applications, for the representation of objects and their motion. In this work we use a kind of self-organising network, the Growing Neural Gas, to represent deformations of objects along a sequence of images. As a result of an adaptive process, each object is represented by a topology-representing graph that constitutes an induced Delaunay triangulation of its shape. These maps adapt to changes in the objects' topology without resetting the learning process.
Chapter Preview


Self-organising maps, by means of competitive learning, adapt both the reference vectors of the neurons and the interconnection network among them, obtaining a mapping that tries to preserve the topology of an input space. Moreover, they are capable of a continuous re-adaptation process even when new patterns are presented, with no need to reset the learning.

These capacities have been used for the representation of objects (Flórez, García, García & Hernández, 2001) (Figure 1) and of their motion (Flórez, García, García & Hernández, 2002) by means of the Growing Neural Gas (GNG) (Fritzke, 1995), which has a more flexible learning process than other self-organising models, such as Kohonen maps (Kohonen, 2001).

Figure 1

These two applications, the representation of objects and of their motion, often have temporal constraints, which makes accelerating the learning process worthwhile. In computer vision applications, the termination condition of the GNG algorithm is commonly defined as the insertion of a predefined number of neurons. The choice of this number can affect the quality of the adaptation, measured as the topology preservation of the input space (Martinetz & Schulten, 1994).

In this work GNG has been used to represent the shape deformations of two-dimensional objects in sequences of images, obtaining a topology-representing graph that can serve multiple tasks such as representation, classification or tracking. When the deformations of an object's topology are small and gradual between consecutive frames of a sequence, we can use the previous map's information to place the neurons without resetting the learning process. Using this feature of GNG we achieve a substantial acceleration of the representation process.
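The reuse pattern described above can be illustrated with a deliberately simplified sketch. The `adapt` step below is a toy winner-take-all stand-in, not the chapter's full GNG adaptation (it moves only the nearest prototype and maintains no edges); the point is purely that the map trained on frame t is carried over as the starting state for frame t+1 instead of being re-initialised. All names and parameters (`n_protos`, `lr`, `steps`) are illustrative assumptions.

```python
import random

def adapt(prototypes, x, lr=0.1):
    """Move the prototype nearest to sample x towards it.
    Stand-in for a full GNG adaptation step (no edge updates here)."""
    w = min(prototypes, key=lambda p: (p[0] - x[0])**2 + (p[1] - x[1])**2)
    w[0] += lr * (x[0] - w[0])
    w[1] += lr * (x[1] - w[1])

def track(frames, n_protos=8, steps=200):
    """Adapt one map across a whole sequence without re-initialising:
    frame t's prototypes are the starting state for frame t+1."""
    protos = [[random.random(), random.random()] for _ in range(n_protos)]
    for samples in frames:            # one list of 2-D points per frame
        for _ in range(steps):
            adapt(protos, random.choice(samples))
    return protos
```

When deformations between consecutive frames are small, most prototypes already lie close to the new shape, so far fewer adaptation steps per frame are needed than when learning each frame from scratch.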

One way of selecting points of interest in 2D shapes is to use a topographic mapping in which a low-dimensional map is fitted to the high-dimensional manifold of the shape whilst preserving the topographic structure of the data. A common way to achieve this is with self-organising neural networks, where input patterns are projected onto a network of neural units such that similar patterns are projected onto adjacent units, and vice versa. As a result of this mapping, a representation of the input patterns is obtained that allows post-processing stages to exploit the similarity relations of the input patterns. Such models have been used successfully in applications such as speech processing (Kohonen, 2001), robotics (Ritter & Schulten, 1986; Martinez, Ritter, & Schulten, 1990) and image processing (Nasrabati & Feng, 1988). However, most common approaches cannot provide good neighbourhood and topology preservation if the logical structure of the input pattern is not known a priori. In fact, the most common approaches specify in advance the number of neurons in the network and a graph representing the topological relationships between them, for example a two-dimensional grid, and then seek the best match to the given input pattern manifold. When the assumed structure does not match the input manifold, such networks, Kohonen's algorithm for example, fail to preserve topology well.
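By contrast with fixed-structure maps, GNG grows both its units and its edges from the data. The sketch below is a minimal single-file reading of Fritzke's growth scheme as outlined above; the parameter names and values (`eps_w`, `eps_n`, `max_age`, `insert_every`, `alpha`, `beta`) are conventional assumptions, not taken from the chapter, and removal of isolated units is omitted for brevity.

```python
import math
import random

class GNG:
    """Simplified Growing Neural Gas for 2-D inputs in the unit square."""

    def __init__(self, eps_w=0.2, eps_n=0.006, max_age=50,
                 insert_every=100, alpha=0.5, beta=0.0005):
        self.units = [[random.random(), random.random()] for _ in range(2)]
        self.error = [0.0, 0.0]
        self.edges = {}                      # frozenset({i, j}) -> age
        self.eps_w, self.eps_n = eps_w, eps_n
        self.max_age, self.insert_every = max_age, insert_every
        self.alpha, self.beta = alpha, beta
        self.signals = 0

    def _nearest_two(self, x):
        order = sorted(range(len(self.units)), key=lambda i:
                       (self.units[i][0] - x[0])**2 + (self.units[i][1] - x[1])**2)
        return order[0], order[1]

    def adapt(self, x):
        s1, s2 = self._nearest_two(x)
        self.error[s1] += (self.units[s1][0] - x[0])**2 + (self.units[s1][1] - x[1])**2
        # Move the winner and its topological neighbours towards the input.
        for k in (0, 1):
            self.units[s1][k] += self.eps_w * (x[k] - self.units[s1][k])
        for e in list(self.edges):
            if s1 in e:
                j = next(i for i in e if i != s1)
                for k in (0, 1):
                    self.units[j][k] += self.eps_n * (x[k] - self.units[j][k])
                self.edges[e] += 1           # age the winner's edges
        # Competitive Hebbian learning: connect the two closest units.
        self.edges[frozenset((s1, s2))] = 0
        # Prune edges older than max_age (isolated units kept, for brevity).
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
        self.signals += 1
        if self.signals % self.insert_every == 0:
            self._insert()
        self.error = [e * (1 - self.beta) for e in self.error]

    def _insert(self):
        # Insert a new unit between the worst unit and its worst neighbour.
        q = max(range(len(self.units)), key=lambda i: self.error[i])
        nbrs = [next(i for i in e if i != q) for e in self.edges if q in e]
        if not nbrs:
            return
        f = max(nbrs, key=lambda i: self.error[i])
        r = len(self.units)
        self.units.append([(self.units[q][k] + self.units[f][k]) / 2 for k in (0, 1)])
        self.edges.pop(frozenset((q, f)), None)
        self.edges[frozenset((q, r))] = 0
        self.edges[frozenset((f, r))] = 0
        self.error[q] *= self.alpha
        self.error[f] *= self.alpha
        self.error.append(self.error[q])
```

Feeding samples drawn from an object's shape into `adapt` grows a graph whose edges, created by the competitive Hebbian rule, approximate the induced Delaunay triangulation of the shape; the same instance can keep receiving samples from later frames without being reset.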

Key Terms in this Chapter

Growing Neural Gas: A self-organising neural model in which the number of units is increased during the self-organisation process, using competitive Hebbian learning for topology generation.

Topology Preserving Graph: A graph that represents and preserves the neighbourhood relations of an input space.

Hebbian Learning: A time-dependent, local, highly interactive mechanism that increases synaptic efficacy as a function of pre- and post-synaptic activity.

Object Representation: The construction of a formal description of an object using features based on its shape, contour or a specific region.

Self-Organising Neural Networks: A class of artificial neural networks able to organise themselves to recognise patterns automatically, without supervised training, while preserving neighbourhood relations.

Object Tracking: A computer vision task consisting of extracting the motion of an object from a sequence of images by estimating its trajectory.

Non-Rigid Objects: A class of objects that undergo deformations, changing their appearance over time.
