Evolutionary Algorithm Training of Higher Order Neural Networks

M. G. Epitropakis, V. P. Plagianakos, Michael N. Vrahatis
ISBN13: 9781615207114 | ISBN10: 1615207112 | ISBN13 Softcover: 9781616922511 | EISBN13: 9781615207121
DOI: 10.4018/978-1-61520-711-4.ch003
Cite Chapter

MLA

Epitropakis, M. G., et al. "Evolutionary Algorithm Training of Higher Order Neural Networks." Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications, edited by Ming Zhang, IGI Global, 2010, pp. 57-85. https://doi.org/10.4018/978-1-61520-711-4.ch003.

APA

Epitropakis, M. G., Plagianakos, V. P., & Vrahatis, M. N. (2010). Evolutionary Algorithm Training of Higher Order Neural Networks. In M. Zhang (Ed.), Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications (pp. 57-85). IGI Global. https://doi.org/10.4018/978-1-61520-711-4.ch003

Chicago

Epitropakis, M. G., V. P. Plagianakos, and Michael N. Vrahatis. "Evolutionary Algorithm Training of Higher Order Neural Networks." In Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications, edited by Ming Zhang, 57-85. Hershey, PA: IGI Global, 2010. https://doi.org/10.4018/978-1-61520-711-4.ch003.

Abstract

This chapter aims to further explore the capabilities of the class of Higher Order Neural Networks, and especially of Pi-Sigma Neural Networks. The performance of Pi-Sigma networks is evaluated on several well-known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms are applied to Pi-Sigma network training; more specifically, distributed versions of the Differential Evolution and Particle Swarm Optimization algorithms are employed. To this end, each processor of a distributed computing environment is assigned a subpopulation of potential solutions. The subpopulations evolve independently in parallel, and occasional migration is allowed to facilitate cooperation between them. The novelty of the proposed approach is that it trains Pi-Sigma networks with threshold activation functions, while the weights and biases are confined to a narrow band of integers (constrained in the range [-32, 32]). Thus, the trained Pi-Sigma networks can be represented using only 6 bits per weight. Such networks are better suited for hardware implementation than their real-weight counterparts, a property that is very important in real-life applications. Experimental results suggest that the proposed training process is fast, stable, and reliable, and that the distributed trained Pi-Sigma networks exhibit good generalization capabilities.
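
The following Python sketch illustrates the main ingredients the abstract describes, not the authors' actual code: a Pi-Sigma network with a hard-threshold output and integer weights confined to [-32, 32], trained by DE/rand/1/bin subpopulations that evolve independently and occasionally exchange their best members over a ring topology. The XOR task, the number and size of subpopulations, and the control parameters (F, CR, migration interval) are illustrative assumptions, not the chapter's settings.

```python
import numpy as np

LOW, HIGH = -32, 32                      # integer weight band from the abstract
rng = np.random.default_rng(0)

def pi_sigma_output(genome, x, K, n):
    """Pi-Sigma forward pass: K summing units feed one product unit,
    whose output is passed through a hard threshold."""
    W = genome[:K * n].reshape(K, n)     # summing-unit weights
    b = genome[K * n:]                   # summing-unit biases
    sums = W @ x + b
    return 1 if np.prod(sums) > 0 else 0

def error(genome, X, y, K, n):
    """Classification error of one genome on the data set."""
    preds = np.array([pi_sigma_output(genome, xi, K, n) for xi in X])
    return float(np.mean(preds != y))

def de_generation(pop, fit, X, y, K, n, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation; trial vectors are rounded and
    clipped so the population stays inside the integer band."""
    NP, D = pop.shape
    for i in range(NP):
        idx = [j for j in range(NP) if j != i]
        r1, r2, r3 = rng.choice(idx, 3, replace=False)
        mutant = pop[r1] + np.rint(F * (pop[r2] - pop[r3])).astype(int)
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True     # guarantee one mutated gene
        trial = np.clip(np.where(mask, mutant, pop[i]), LOW, HIGH)
        f = error(trial, X, y, K, n)
        if f <= fit[i]:                  # greedy one-to-one selection
            pop[i], fit[i] = trial, f
    return pop, fit

# XOR as an illustrative benchmark; a second-order Pi-Sigma network
# (K = 2 summing units) can realize it with integer weights.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
K, n = 2, 2
D = K * n + K                            # weights plus biases

# Four subpopulations evolved independently, mimicking one
# subpopulation per processor (run here in a single process).
subpops = [rng.integers(LOW, HIGH + 1, size=(20, D)) for _ in range(4)]
fits = [np.array([error(g, X, y, K, n) for g in pop]) for pop in subpops]

for gen in range(1, 101):
    for s in range(len(subpops)):
        subpops[s], fits[s] = de_generation(subpops[s], fits[s], X, y, K, n)
    if gen % 10 == 0:                    # occasional ring migration
        bests = [pop[f.argmin()].copy() for pop, f in zip(subpops, fits)]
        for s, best in enumerate(bests):
            t = (s + 1) % len(subpops)   # best moves to the next subpop
            victim = rng.integers(len(subpops[t]))
            subpops[t][victim] = best
            fits[t][victim] = error(best, X, y, K, n)

print("best training error:", min(f.min() for f in fits))
```

In a genuinely distributed run, each subpopulation would live on its own processor and migration would be a message exchange; the serial loop above only mirrors the cooperation pattern.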
