Ant Colony Optimization Applied to the Training of a High Order Neural Network with Adaptable Exponential Weights

Ashraf M. Abdelbar, Islam Elnabarawy, Donald C. Wunsch II, Khalid M. Salama
ISBN13: 9781522500636|ISBN10: 1522500634|EISBN13: 9781522500643
DOI: 10.4018/978-1-5225-0063-6.ch014
MLA

Abdelbar, Ashraf M., et al. "Ant Colony Optimization Applied to the Training of a High Order Neural Network with Adaptable Exponential Weights." Applied Artificial Higher Order Neural Networks for Control and Recognition, edited by Ming Zhang, IGI Global, 2016, pp. 362-374. https://doi.org/10.4018/978-1-5225-0063-6.ch014

Abstract

High order neural networks (HONNs) are neural networks whose neurons combine their inputs non-linearly. The HONEST (High Order Network with Exponential SynapTic links) network is a HONN that uses neurons with product units and adaptable exponents. The output of a trained HONEST network can be expressed in terms of the network inputs by a polynomial-like equation, which makes the structure of the network more transparent and easier to interpret. This study adapts ACOR, an Ant Colony Optimization algorithm for continuous domains, to the training of an HONEST network. Using a collection of 10 widely-used benchmark datasets, we compare ACOR to the well-known gradient-based Resilient Propagation (R-Prop) algorithm in the training of HONEST networks. We find that our adaptation of ACOR has better test-set generalization than R-Prop, though not to a statistically significant extent.
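To make the abstract's "polynomial-like equation" concrete, here is a minimal sketch (not the chapter's actual implementation) of a single HONEST-style unit: a weighted sum of product terms in which each input is raised to a trainable exponent. The function name and parameter layout are illustrative assumptions.

```python
import numpy as np

def honest_output(x, exponents, weights, bias=0.0):
    """Output of one hypothetical HONEST-style unit.

    x:         input vector, shape (n,); assumed strictly positive so
               that real-valued exponents are well-defined
    exponents: trainable exponent matrix, shape (k, n) -- one row per
               high-order (product) term
    weights:   trainable coefficients for the k product terms
    """
    x = np.asarray(x, dtype=float)
    # Term j is prod_i x_i ** p_{j,i}; summing the weighted terms gives
    # the polynomial-like expansion described in the abstract.
    terms = np.prod(x[np.newaxis, :] ** exponents, axis=1)
    return bias + np.dot(weights, terms)
```

With integer exponents this reduces to an ordinary polynomial in the inputs; because the exponents themselves are adaptable parameters, a population-based optimizer such as ACOR can search over exponents and coefficients jointly, which is the setting the chapter studies.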
