Ant Colony Optimization Applied to the Training of a High Order Neural Network with Adaptable Exponential Weights


Ashraf M. Abdelbar, Islam Elnabarawy, Donald C. Wunsch II, Khalid M. Salama
DOI: 10.4018/978-1-5225-0063-6.ch014

Abstract

High order neural networks (HONNs) are neural networks that employ neurons which combine their inputs non-linearly. The HONEST (High Order Network with Exponential SynapTic links) network is a HONN that uses neurons with product units and adaptable exponents. The output of a trained HONEST network can be expressed in terms of the network inputs by a polynomial-like equation, which makes the structure of the network more transparent and easier to interpret. This study adapts ACOR, an Ant Colony Optimization algorithm for continuous domains, to the training of an HONEST network. Using a collection of 10 widely-used benchmark datasets, we compare ACOR to the well-known gradient-based Resilient Propagation (R-Prop) algorithm in the training of HONEST networks. We find that our adaptation of ACOR has better test set generalization than R-Prop, though not to a statistically significant extent.
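For readers unfamiliar with ACOR, the sketch below illustrates the general idea of the continuous-domain ACO algorithm: a ranked archive of good solutions from which new candidate solutions are sampled using Gaussian kernels. This is a minimal, generic sketch, not the authors' implementation; the function name, parameter defaults, and initialization range are illustrative assumptions, and the objective f would be, for example, the training error of an HONEST network as a function of its vector of exponents.

```python
import numpy as np

def aco_r_minimize(f, dim, n_iter=500, archive_size=30, n_ants=10,
                   q=0.1, xi=0.85, seed=0):
    """Generic ACOR sketch: archive-based sampling for continuous minimization.
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)

    # Initialize the archive with random solutions, sorted by objective value.
    archive = rng.uniform(-1.0, 1.0, size=(archive_size, dim))
    fitness = np.array([f(s) for s in archive])
    order = np.argsort(fitness)
    archive, fitness = archive[order], fitness[order]

    # Rank-based weights: better-ranked archive members are chosen more often.
    ranks = np.arange(1, archive_size + 1)
    w = np.exp(-((ranks - 1) ** 2) / (2.0 * (q * archive_size) ** 2))
    w /= w.sum()

    for _ in range(n_iter):
        ants = np.empty((n_ants, dim))
        for a in range(n_ants):
            # Pick one archive solution as the centre of the Gaussian kernel.
            l = rng.choice(archive_size, p=w)
            # Per-dimension spread: mean distance from the chosen solution
            # to the other archive members.
            sigma = xi * np.abs(archive - archive[l]).sum(axis=0) / (archive_size - 1)
            ants[a] = rng.normal(archive[l], sigma + 1e-12)
        ant_fitness = np.array([f(s) for s in ants])

        # Merge the new solutions with the archive and keep the best ones.
        pool = np.vstack([archive, ants])
        pool_fit = np.concatenate([fitness, ant_fitness])
        keep = np.argsort(pool_fit)[:archive_size]
        archive, fitness = pool[keep], pool_fit[keep]

    return archive[0], fitness[0]
```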
Chapter Preview

2. The HONEST Neural Network

The HONEST network can be considered a generalization of the sigma-pi model (Rumelhart et al., 1986), and is also similar in some ways to the ExpoNet (Narayan, 1993) and GMDH (Ivakhnenko, 1971; Puig et al., 2007) networks. An HONEST network is a feedforward network that always contains exactly three layers, although Tsai (2009; 2010) has considered variations of HONEST that use more layers. Let the external inputs to the network be denoted x1, x2, …, xn, let the outputs of the hidden layer neurons be denoted h1, …, hr, and let the external outputs of the network be denoted y1, …, ym. A connection from an input unit xj to a hidden neuron hk does not have an associated weight as in MLP networks, but rather has an associated adaptable exponent pkj. Each hidden unit hk computes the product of its inputs after first raising each input to the power of the exponent associated with its incoming connection:

h_k = \prod_{j=1}^{n} x_j^{p_{kj}}   (1)

as illustrated in Fig. 1. Hidden layer neurons do not have associated biases.
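The following is a minimal NumPy sketch of Equation (1). The function name and array shapes are illustrative, and the inputs are assumed to be scaled to positive values so that non-integer exponents remain real-valued.

```python
import numpy as np

def honest_hidden_layer(x, P):
    """Hidden-layer outputs of an HONEST network per Equation (1):
    h_k = prod_j x_j ** p_kj, with no bias term.

    x : shape (n,)   -- external inputs x_1..x_n (assumed positive)
    P : shape (r, n) -- adaptable exponents p_kj, one row per hidden unit
    """
    # Raise each input to the exponent on its incoming connection,
    # then multiply across inputs for each hidden unit.
    return np.prod(x[np.newaxis, :] ** P, axis=1)

# Example: 3 inputs, 2 hidden units with illustrative exponents.
x = np.array([2.0, 0.5, 3.0])
P = np.array([[1.0, 2.0, 0.0],
              [0.5, 1.0, 1.0]])
print(honest_hidden_layer(x, P))  # [0.5, ~2.121]
```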
