Intelligent Data Processing Based on Multi-Dimensional Numbered Memory Structures

Krassimir Markov, Koen Vanhoof, Iliya Mitov, Benoit Depaire, Krassimira Ivanova, Vitalii Velychko, Victor Gladun
DOI: 10.4018/978-1-4666-1900-5.ch007

Abstract

Multi-layer Pyramidal Growing Networks (MPGN) are memory structures based on multi-dimensional numbered information spaces (Markov, 2004), which make it possible to create association links (bonds) and to hierarchically systematize and classify information simultaneously with its input into memory. This approach inherits the main ideas of Growing Pyramidal Networks (Gladun, 2003), such as the hierarchical structuring of memory, which naturally reflects the structure of composite instances and genus-species bonds and is convenient for performing different operations of associative search. Recognition is based on a reduced search in the multi-dimensional information space hierarchies. In this chapter, the authors show the advantages of growing numbered memory structuring via MPGN in the field of class association rule mining. The proposed approach was implemented in association rule classifiers and has shown reliable results.

1. Introduction

The formation of an intelligent system's memory structure needs to be done simultaneously with the perception of information and under the impact of the information perceived and already stored. The memory structure reflects the information perceived. Information structuring is an indispensable function of memory (Gladun, 2003).

The main processes of structuring include the formation of associative links by identifying intersections of the attributive representations of objects, hierarchical ordering, classification, and the formation of generalized logical attributive models of classes, i.e., concepts.
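
The snippet below is a minimal sketch, not the authors' implementation, of the first of these processes: forming associative links by intersecting the attributive representations of objects. The object names and attribute values are hypothetical.

```python
# Hypothetical objects described by sets of attribute values.
objects = {
    "obj_A": {"red", "round", "small"},
    "obj_B": {"red", "square", "small"},
    "obj_C": {"green", "round", "large"},
}

# An associative link is recorded whenever two objects share attributes;
# the shared subset is a candidate generalized (concept-level) description.
links = {}
for name_a, attrs_a in objects.items():
    for name_b, attrs_b in objects.items():
        if name_a < name_b:                 # visit each unordered pair once
            shared = attrs_a & attrs_b      # intersection of representations
            if shared:
                links[(name_a, name_b)] = shared

print(links)
# e.g. {('obj_A', 'obj_B'): {'red', 'small'}, ('obj_A', 'obj_C'): {'round'}}
```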

Under real conditions of information perception, it is often impossible to obtain all information about an object at once (for example, because of an unfavourable viewing angle or lighting during the reception of visual information). That is why the processes of memory formation should allow "portioned" construction of object models and class models, part by part.
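
As a minimal sketch of such "portioned" construction, assuming hypothetical partial observations, an object model can simply absorb whatever attribute values each new portion contains:

```python
object_model = {}          # accumulated attribute -> value pairs for one object

def absorb(partial_observation):
    """Merge a partial attribute description into the stored object model."""
    for attribute, value in partial_observation.items():
        object_model.setdefault(attribute, value)

absorb({"colour": "red"})                    # first, incomplete view of the object
absorb({"shape": "round", "size": "small"})  # a later view fills in more attributes
print(object_model)   # {'colour': 'red', 'shape': 'round', 'size': 'small'}
```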

In different information processing tasks, objects are represented in one of two ways: by a name (convergent representation) or by a set of attribute values (displayed representation). The memory structure should provide a convenient transition from one representation to the other.
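
A minimal sketch of a memory supporting both transitions, with hypothetical object names and attribute values, is a pair of indexes: name to attribute set, and attribute value to the names of objects possessing it:

```python
by_name = {
    "obj_A": {"red", "round"},
    "obj_B": {"red", "square"},
}

# Inverted index: attribute value -> names of objects possessing it.
by_attribute = {}
for name, attrs in by_name.items():
    for attr in attrs:
        by_attribute.setdefault(attr, set()).add(name)

# Name -> attributive representation:
print(by_name["obj_A"])                 # {'red', 'round'}

# Attributive representation -> names (objects having all queried attributes):
query = {"red"}
candidates = set.intersection(*(by_attribute[a] for a in query))
print(candidates)                       # {'obj_A', 'obj_B'}
```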

Systems in which the perception of new information is accompanied by simultaneous structuring of the information stored in memory are called self-structured (Gladun et al., 2008). Self-structuring makes it possible to change the structure of the data stored in memory during operation, as a result of the interaction between received and already stored information.

It has been proposed to build self-structured artificial systems on the basis of networks with hierarchical structures, called "growing pyramidal networks" (GPN) (Gladun et al., 2008). The theory and practical applications of GPN have been expounded in a number of publications (Gladun, 1987, 1994, 2000; Gladun and Vashchenko, 2000).

A pyramidal network is a network memory that automatically tunes itself to the structure of the incoming information. Unlike neural networks, the adaptation effect is attained without introducing a priori network redundancy. Pyramidal networks are convenient for performing different operations of associative search. An important property of pyramidal networks is their hierarchical structure, which allows them to naturally reflect the structure of composite objects and genus-species bonds. A GPN concept is a generalized logical attributive model of a class of objects; it represents the membership of objects in the target class according to specific combinations of attributes (check vertices). In its manner of classification, GPN is closest to such well-known data mining methods as decision trees and propositional rule learning.

GPN realization comprises the following stages (a minimal code sketch follows the list):

  • Building the network structure for some initial set of objects given by attributive descriptions;

  • Training the structure in order to select those of its elements that allow all objects of the initial set to be classified;

  • Recognizing the class membership of an object that does not belong to the initial set.
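
The sketch below only illustrates this build / train / recognize flow on a toy training set of attribute-set descriptions; it is an assumption-laden simplification, not the actual GPN algorithms.

```python
from itertools import combinations

# Hypothetical training set: object name -> (attribute values, class label).
training_set = {
    "obj_1": ({"a", "b", "c"}, "class_1"),
    "obj_2": ({"a", "b", "d"}, "class_1"),
    "obj_3": ({"e", "f", "c"}, "class_2"),
}

# 1. Build: create vertices for attribute combinations occurring in the objects.
vertices = {}                                  # frozenset of attrs -> set of classes
for attrs, label in training_set.values():
    for r in (1, 2):
        for combo in combinations(sorted(attrs), r):
            vertices.setdefault(frozenset(combo), set()).add(label)

# 2. Train: keep as "check vertices" the combinations seen in exactly one class.
check_vertices = {v: cls.pop() for v, cls in vertices.items() if len(cls) == 1}

# 3. Recognize: assign a new object to the class whose check vertices it
#    contains most often (ties left unresolved in this toy version).
def recognize(attrs):
    votes = {}
    for vertex, label in check_vertices.items():
        if vertex <= attrs:
            votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get) if votes else None

print(recognize({"a", "b", "x"}))   # -> 'class_1'
```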

Figure 2 shows the corresponding pyramidal network with the concepts formed from the training set presented in Figure 1. Check vertices PP_SYN, Por_3, 239, and 163 characterize class 1; check vertices 158, 308, and $7 characterize class 2 (Gladun et al., 2008).
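
The fragment below is illustrative only: the check vertices are those named in the text for Figure 2, while the test object's attribute values are hypothetical. It shows how check vertices characterize a class during recognition.

```python
check_vertices = {
    "class_1": {"PP_SYN", "Por_3", "239", "163"},
    "class_2": {"158", "308", "$7"},
}

test_object = {"PP_SYN", "239", "101"}        # hypothetical attribute values

# Count, for each class, how many of its check vertices the object activates.
scores = {cls: len(vs & test_object) for cls, vs in check_vertices.items()}
print(scores)          # {'class_1': 2, 'class_2': 0} -> closest to class 1
```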

Figure 1. GPN training set

Figure 2. Pyramidal network
