Methodological Research for Modular Neural Networks Based on “an Expert With Other Capabilities”

Pan Wang, Jiasen Wang, Jian Zhang
Copyright: © 2018 |Pages: 23
DOI: 10.4018/JGIM.2018040105

Abstract

This article presents a new subnet training method for modular neural networks, inspired by the principle of “an expert with other capabilities”. The key point of this method is that each subnet learns its neighbor data sets while fulfilling its main task of learning the objective data set. Additionally, a relative distance measure is proposed to replace the absolute distance measure used in the classical subnet learning method, and its advantage in the general case is discussed theoretically. Both the methodology and an empirical study of the new method are presented. Two types of experiments, concerning the approximation problem and the prediction problem in nonlinear dynamic systems respectively, are designed to verify the effectiveness of the proposed method. Compared with the classical subnet learning method, the average testing error of the proposed method is dramatically decreased and more stable. The superiority of the relative distance measure is also corroborated.
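To make the two ideas in the abstract concrete, the following is a minimal illustrative sketch, not the authors' exact formulation: samples are assigned to subnets by a relative distance (here, distance to each cluster center normalized by the total distance to all centers — the paper's precise definition may differ), and each subnet's training set weights its objective cluster fully while including neighbor clusters at a reduced weight. The function names, the normalization, and the `neighbor_weight` parameter are all assumptions for illustration.

```python
import numpy as np

def relative_distance(x, centers):
    """Distance from sample x to each cluster center, normalized by the
    total distance to all centers (a relative rather than absolute
    measure; a hypothetical stand-in for the paper's definition)."""
    d = np.linalg.norm(centers - x, axis=1)
    return d / d.sum()

def assign_clusters(X, centers):
    """Assign each sample to the cluster with the smallest relative distance."""
    return np.array([np.argmin(relative_distance(x, centers)) for x in X])

def subnet_training_weights(labels, k, neighbor_weight=0.3):
    """'An expert with other capabilities': subnet k sees its objective
    cluster at full weight and neighbor-cluster samples at a smaller weight."""
    return np.where(labels == k, 1.0, neighbor_weight)

# Toy data: two well-separated clusters of 20 samples each.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in centers])

labels = assign_clusters(X, centers)
w0 = subnet_training_weights(labels, 0)  # per-sample weights for subnet 0
```

In this sketch the neighbor samples would enter subnet 0's loss with weight 0.3 rather than being excluded outright, which is the essential difference from the classical "each subnet sees only its own data set" scheme described later in the article.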

Introduction

In both nature and human society, the operation of the physical and virtual worlds is governed by basic rules and patterns. For instance, the macroscopic world exists and evolves in a harmonious, orderly manner through the action of Newton's laws of classical mechanics; the diversity and unity of the biological world are determined by the principles of genetics in the life sciences; and the determinism declared by Laplace has been overturned by the uncertainty of the microscopic world.

Another important principle, “divide and conquer”, has been confirmed by discoveries in the brain sciences. Specifically, neuroscientists have found that the human brain contains sparse connections between neuronal groups whose internal neurons are densely connected, and that these neuronal groups produce different response patterns for different perceptual and cognitive tasks (Edelman, 1987; Fodor, 1983; Kandel, Schwartz, & Jessell, 2000). These two phenomena are called structural and functional modularity, respectively. This evidence of modularity, which suggests that specific tasks require domain-specific modules and that a variety of modules can be coordinated for more complex tasks, reflects the principle of “divide and conquer” and has motivated the development of the modular neural network (MNN) (Farooq, 2000). It can therefore be said that the principle of “divide and conquer” gave birth to the MNN.

With social development and progress in science and technology, an interesting phenomenon has been widely witnessed in various fields, such as talent cultivation and corporate development, and especially in biology. At the microscopic level, the gene Mesp1, the main controlling factor of cardiovascular development, has been found to activate the transcription factors of the heart, guide the formation of the cardiac mesoderm, and prevent stem cells from differentiating into other cell types; it also plays an important role in the growth of blood and skeletal muscle (Chan et al., 2013). At the macroscopic level, Rapamycin, previously a medicine for treating immune rejection after transplantation, not only helps to prolong the life of yeast, worms, fruit flies, and other invertebrates, but can also be used in the targeted therapy of tumors (Harrison et al., 2009; Ma & Meng, 2010); it is even considered a key factor in the evolutionary history of diverse species. Inspired by these discoveries, Wang et al. (2008) first proposed the principle of “an expert with other capabilities”, on the basis of which a subnet training method for MNNs was proposed by Wang and Wang (2012). However, few experiments were carried out there to test its effectiveness, due to space constraints. This paper is therefore devoted to describing the method further and testing its performance with more careful experiments. In addition, the distance measure used in Wang and Wang (2012) is replaced by a new one, which is shown theoretically and empirically to be more effective in general cases.

The paper is arranged as follows. First, the principles that initiated research on MNNs and that motivate the authors' work are briefly introduced. Next, recent advances in MNNs, which reflect the active research in this area, are reviewed. The MNN is then illustrated in detail, and a classical subnet training method for MNNs is introduced. After that, the principle of “an expert with other capabilities” is described in depth, and a new subnet training method, together with a specific algorithm, is proposed. In addition, a relative distance measure is advanced and its superiority is proved theoretically. The authors then present the performance of the proposed subnet learning method in experiments on the approximation problem and the prediction problem in dynamic systems; the superiority of the relative distance measure is also illustrated by the experimental results. Finally, the paper is concluded with some directions for future research.
